RE: Hive-Engine Node Benchmark Report - 2025-04-18

Interesting... didn't know about this. Some feedback!

  • Can you please indicate where you are making the calls from (IP or country, perhaps)? All benchmarks are relative to their source.
  • "were processed by the node within the specified time." - How much is this specified time?
  • It says "The following data shows trends over the past 7 days:" but then the number of runs shows 1... shouldn't it instead be the number of successful runs in the 7 days divided by the total number of runs planned over those 7 days? (See the sketch after this list.)
  • Can you add the command being used to get "Latency" results? I am assuming it's just getStatus, but it's nice to have it clear. Same sort of thing for the other results.
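
To illustrate the third bullet, something like this is what I mean (the numbers and names here are made up, not from your report):

RUNS_PER_DAY = 24           # assuming one benchmark run per hour
planned = RUNS_PER_DAY * 7  # total runs planned over the 7-day window
successful = 150            # e.g. successful runs actually recorded

success_rate = successful / planned
print(f"7-day success rate: {success_rate:.1%}")  # -> 89.3%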

Anyhow, that's my diagonal read. Let me know what you think.

0.00355188 BEE
(edited)

I'll try to answer the questions in order.

  • Frankfurt, Germany - a DigitalOcean droplet
  • It should state that; since it's variable, the wording was vague, but I'll clarify. The standard tests use a 30-second timeout / time limit (see the sketch after these answers).
  • I was originally running it on my dev machine; that was the first run from the server, so it should populate over the week. If it's still not right this time next week, I will adjust as needed. But you are probably right: I wasn't sure exactly how I wanted the trends section to be, so if any of them needs it badly, that's the one that will get refactored.
  • Latency is the average time of 5 samples of:
import time

# "api" is the Hive-Engine RPC client the benchmark already holds
start_time = time.time()
# Make a simple query to measure latency - use a lightweight call
api.find("tokens", "tokens", {"symbol": "SWAP.HIVE"}, limit=1)
latency = time.time() - start_time
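
To tie that to the 30-second limit from the second answer, the shape is roughly this (a simplified sketch, not the verbatim repo code):

import time

SAMPLES = 5
TIME_LIMIT = 30  # seconds; the standard per-test time limit

def measure_latency(api):
    # Average round-trip time of SAMPLES lightweight queries,
    # aborting if the overall TIME_LIMIT is exceeded.
    deadline = time.time() + TIME_LIMIT
    timings = []
    for _ in range(SAMPLES):
        if time.time() >= deadline:
            return None  # node too slow; the run counts as a failure
        start = time.time()
        api.find("tokens", "tokens", {"symbol": "SWAP.HIVE"}, limit=1)
        timings.append(time.time() - start)
    return sum(timings) / len(timings)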

I would love some more input; I kind of went into this blindly. The original intent was to do with @flowerengine what we did with @nectarflower: run a benchmark every hour and store the results in the account metadata.

curl -s --data '{"jsonrpc":"2.0", "method":"database_api.find_accounts", "params": {"accounts":["flowerengine"]}, "id":1}' https://api.hive.blog | jq '.result.accounts[0].json_metadata | fromjson' | jq '.nodes[]'
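
The same lookup from Python, if that's easier (a quick sketch using the requests library; the payload matches the curl call above):

import json
import requests

payload = {
    "jsonrpc": "2.0",
    "method": "database_api.find_accounts",
    "params": {"accounts": ["flowerengine"]},
    "id": 1,
}
resp = requests.post("https://api.hive.blog", json=payload, timeout=10)
# json_metadata is stored as a string, so it needs a second parse
metadata = json.loads(resp.json()["result"]["accounts"][0]["json_metadata"])
for node in metadata["nodes"]:
    print(node)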

@nectarflower runs on the :00 minute mark and @flowerengine runs on the :30 minute mark, to avoid a clash. Both benchmarks take ~10 minutes to cover all the servers.
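
On the scheduling side, that staggering is just two cron entries (the script paths here are hypothetical):

# crontab: nectarflower on the hour, flowerengine on the half hour
0 * * * *  /usr/bin/python3 /opt/benchmarks/nectarflower.py
30 * * * * /usr/bin/python3 /opt/benchmarks/flowerengine.py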


Would absolutely be thrilled to get a PR if you see something that could be done better. Also, I am aware the head section of this post was printed twice; that should be fixed on the next run. I don't know how I didn't catch that the first 20 times I ran it.

0.00900080 BEE

Thanks for the answers. I would love to do more PRs, but lately even time to reply is a luxury. I will keep giving this a watch and provide feedback, and if I find anyone keen on helping out, I will point them in your direction.

I have also noticed you guys use Python 3.13 (which is very recent). It would be nice to support older versions, but it's not a problem if not; in a few months, Python 3.13 will be available everywhere anyhow.

Just a thought.

0.00085206 BEE
(edited)

This is fantastic feedback!!!!!

(It's very new still, but it will fill out 7 days... over the next 7 days 😅)

0.00351646 BEE

Going to run it from my server to see the difference... I was just looking at the repo code now... and I think a couple of things will be massively different because the total timeout acts as a limit on calls.

But hey, this is really cool! Love benchmarks; they're part of what I do in real life.

0.00350033 BEE


Your comment is upvoted by @topcomment

Info - Support - Discord

Curated by friendlymoose

0E-8 BEE