I'll try to answer the questions in order.
import time  # only import the snippet needs; 'api' is assumed to be an already-initialized Hive-Engine API client

start_time = time.time()
# Make a simple query to measure latency - use a lightweight call
api.find("tokens", "tokens", {"symbol": "SWAP.HIVE"}, limit=1)
latency = time.time() - start_time
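To put that in context, here is a rough sketch of how the same probe could be looped over several nodes and ranked by latency. This is only an illustration, not the project's actual code; the Api(url=...) constructor and the node URLs are assumptions on my part:

import time
from nectarengine.api import Api  # assumption: nectarengine mirrors hiveengine's Api(url=...) interface

# Illustrative node URLs, not the real benchmark list
candidate_nodes = ["https://api.hive-engine.com/rpc/", "https://herpc.dtools.dev/"]

results = []
for url in candidate_nodes:
    api = Api(url=url)
    start_time = time.time()
    api.find("tokens", "tokens", {"symbol": "SWAP.HIVE"}, limit=1)
    results.append((url, time.time() - start_time))

# Fastest node first
results.sort(key=lambda pair: pair[1])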
I would love some more input; I kind of went into this blindly. The original intent was to do with @flowerengine what we did with @nectarflower: run a benchmark every hour and store the results in the account metadata.
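For the storage side, a minimal sketch of what "store the results in the account metadata" could look like. I've written it against beem, which the nectar libraries are forked from, and I'm assuming the fork keeps the same Account.update_account_metadata() interface; the key handling and node list are placeholders:

import json
from beem import Hive
from beem.account import Account

# Placeholder key; the real bot signs with the account's active key
hive = Hive(keys=["<active-wif>"])
account = Account("flowerengine", blockchain_instance=hive)

# 'ranked_nodes' would be the output of the hourly benchmark (illustrative values)
ranked_nodes = ["https://api.hive-engine.com/rpc/", "https://herpc.dtools.dev/"]

# json_metadata is stored on-chain as a JSON string, so serialize before updating
account.update_account_metadata(json.dumps({"nodes": ranked_nodes}))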
curl -s --data '{"jsonrpc":"2.0", "method":"database_api.find_accounts", "params": {"accounts":["flowerengine"]}, "id":1}' https://api.hive.blog | jq '.result.accounts[0].json_metadata | fromjson' | jq '.nodes[]'
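The same lookup can also be done in a few lines of Python with requests, in case that is easier to drop into a script; this just mirrors the curl call above, nothing project-specific:

import json
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "database_api.find_accounts",
    "params": {"accounts": ["flowerengine"]},
    "id": 1,
}
response = requests.post("https://api.hive.blog", json=payload, timeout=10)
account = response.json()["result"]["accounts"][0]
# json_metadata is a JSON string, so parse it before reading the node list
nodes = json.loads(account["json_metadata"])["nodes"]
print(nodes)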
@nectarflower runs on the :00 minute mark and @flowerengine runs on the :30 minute mark, to avoid a clash. Both benchmarks take ~10 minutes to get through all the servers.
I would absolutely be thrilled to get a PR if you see something that could be done better. Also, I am aware the head section of this post was printed twice; that should be fixed on the next run. I don't know how I didn't catch that the first 20 times I ran it.
Thanks for the answers. I would love to do more PRs, but lately even time to reply is a luxury. I will keep watching this and provide feedback, and if I find anyone keen on helping out, I will point them in your direction.
I have also noticed you are using Python 3.13, which is very recent. It would be nice to support older versions, but it's not a problem if not, as in a few months Python 3.13 will be available everywhere anyway.
Just a thought.