ATX Node - Hive 1st/2nd Layer Benchmarks

Authored by @Forky

The new ATX node!

Finally revealing what I bought. For some, my going with Intel might be an instant disappointment, but I had a reason (two, actually) for this strategy.

First, Intel CPUs are still faster for some "front ends" (web stuff). For mixed workloads it's debatable, but Intel can still win in some groups of workloads. Power-wise it's OK: most real-world use is fine and efficient. Yes, AMD mostly wins on games... but for things that need high clock speeds and branch-rich architectures, Intel covers a wider domain.

The second reason was the price! I will explain why below...

For IO and storage, definitely AMD! On IOPS and overall latency versus power, AMD rocks: so many lanes, a very good connection to IO, and on the bigger enterprise models, the extra memory channels.

But I bought this one with a vision already: buying an AMD system later for the IO part, and leaving this one for the front end, since Intel CPUs can easily sustain long periods of support for many codebases without becoming too deprecated.

Anyhow, apart from this and some other tests I can't reveal due to NDAs, I decided to procure an Intel 14th-gen CPU. Also because, in NZ, this CPU was quite a bit cheaper than AMD (due to AMD's high demand). Warranty is another thing I have to balance; it's a pain when I buy from outside the country, because any time I have a problem, I incur extra costs. Hence why I default to buying in-country when the savings aren't worth the pennies...

Either way, the CPU is quite good (I would probably buy this one for games too, but AMD for anything else, at least while AMD demand is high; otherwise AMD first). I have yet to reveal some power usage figures, but compared with my other AMD Zen 2 6-core gear, this one beats it easily. Idle power is very nice if you start out running at low load, and it can scale up quickly too.

Zen 5 is going to be fricking crazy! Intel has some stuff up its sleeve too... things are changing with the introduction of HBM and NVIDIA coming into play with ARM. There is so much activity this year, it's a clusterF... of news!

Either way, here it is! Enjoy the benchmarks and let me know if you have any questions...

Results Disclaimer

Please be aware, especially for the VM runs, that results may vary (due to other workloads, although I attempted to minimize their impact as much as I could) or due to specific conditions I might have missed or failed to detect. Please take these results AS IS.

The reason I run these benchmarks is to know what I can count on, and to establish baselines for future troubleshooting. You don't need to do this yourself if you don't want to. For me, it's also a mix of keeping up to date and enjoyment, given my professional background.

Hive Compile

  • 6-core (VM) Intel 10th Gen, DDR4 2933 MHz, NVMe 2TB PCIe 3.0
    Ubuntu 23.10
time cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_HIVE_TESTNET=OFF -GNinja ../../hive/
# (3.7 seconds)
time ninja
# (real    18m9.183s) 6 threads
  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    Ubuntu 22.04
time cmake -DCMAKE_BUILD_TYPE=Release -DBUILD_HIVE_TESTNET=OFF -GNinja ../../hive/
# (1.3 seconds)
time ninja
# (real    2m53.608s) 32 threads (system consumption of 150-440 Watts)

Less than 3 minutes 😱
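As a quick sanity check, the speedup implied by the two `time ninja` results above can be computed directly:

```shell
# Wall-clock speedup of the 24-core build (2m53.608s) over the
# 6-core VM build (18m9.183s), using the `time ninja` results above.
awk 'BEGIN{printf "%.2fx\n", (18*60+9.183)/(2*60+53.608)}'
# prints: 6.27x
```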

Hive Live Sync

Unless otherwise mentioned, these were the seed nodes I used:
p2p-seed-node = api.hive.blog:2001 rpc.ecency.com:2001 seed.openhive.network:2001 anyx.io:2001 hiveseed-fin.privex.io:2001 hive-seed.arcange.eu:2001 seed.liondani.com:2016 hived.splinterlands.com:2001 node.mahdiyari.info:2001 seed.roelandp.nl:2001 hiveseed-se.privex.io:2001 hive-seed.lukestokes.info:2001

Partial

  • 6-core (VM) Intel 10th Gen, DDR4 2933 MHz, NVMe 2TB PCIe 3.0
    first 2M blocks live sync, Ubuntu 22.04
real    6m3.114s
user    9m59.928s
sys     3m2.908s
  • 6-core (VM) Intel 10th Gen, DDR4 2933 MHz, NVMe 2TB PCIe 3.0
    first 2M blocks live sync, Ubuntu 23.10
real    5m52.383s
user    10m12.387s
sys     3m11.303s
  • 6-core (VM) Intel 10th Gen, DDR4 2933 MHz, NVMe 2TB PCIe 3.0
    first 2M blocks live sync, Ubuntu 23.10

p2p-seed-node = localnode:2001

real    6m23.933s
user    10m37.310s
sys     3m29.787s

p2p-seed-node = localnode:2001 api.hive.blog:2001 rpc.ecency.com:2001 seed.openhive.network:2001 anyx.io:2001 hiveseed-fin.privex.io:2001 hive-seed.arcange.eu:2001 seed.liondani.com:2016 hived.splinterlands.com:2001 node.mahdiyari.info:2001 seed.roelandp.nl:2001 hiveseed-se.privex.io:2001 hive-seed.lukestokes.info:2001

real    6m33.510s
user    11m4.258s
sys     3m47.537s
  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    first 2M blocks live sync, Ubuntu 22.04
real    2m6.406s
user    4m10.067s
sys     0m51.402s

Full (blocks indicated)

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    full live sync, Ubuntu 22.04
2023-12-24T00:04:31.632 application.cpp:174           load_logging_config  ] Logging thread started
...
2023-12-24T19:22:59.666 chain_plugin.cpp:1176         accept_block         ] Syncing Blockchain --- Got block: #81310000 time: 2023-12-24T16:40:12 producer: guiltyparties --- 979.81µs/block, avg. size 10553, avg. tx count 39
2023-12-24T19:23:03.017 chain_plugin.cpp:498          operator()           ] entering live mode
2023-12-24T19:23:03.017 chain_plugin.cpp:519          operator()           ] waiting for work: 0.01%, waiting for locks: 0.00%, processing transactions: 0.00%, processing blocks: 99.95%, unknown: 0.04%
2023-12-24T19:23:03.331 block_log_artifacts.cpp:443   flush_header         ] Attempting to write header (pack_size: 56) containing: git rev: 0acce16829c0702ac09d9e51beb57b9fac157c22, format version: 1.1, head_block_num: 81313230, tail_block_num: 1, generating_interrupted_at_block: 0, dirty_close: 1

19h18m 😱 - 81313230 blocks
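From the summary above (81,313,230 blocks in roughly 19h18m), the average sync throughput works out as:

```shell
# Average live-sync throughput over the whole run.
awk 'BEGIN{printf "%.0f blocks/s\n", 81313230/(19*3600+18*60)}'
# prints: 1170 blocks/s
```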

And I observed the same performance with just a local node (p2p-seed-node = localnode:2001). Using more than one local node could have advantages, but likely little to no improvement here, as most of the time spent is buffered/hidden (assuming enough resources, obviously).

(after firmware update)

No difference. I am guessing here, but possibly because there is already so much hidden overlap between IO access and processing time; combined with the download time, it adds nothing. And since most of the disk writing is cached, the impact is little to none (or hidden, as previously mentioned).

Hive Local Replay

Partial

  • 6-core (VM) Intel 10th Gen, DDR4 2933 MHz, NVMe 2TB PCIe 3.0
    first 10M blocks replay, Ubuntu 22.04
real    19m24.695s
user    23m27.612s
sys     2m21.661s
  • 6-core (VM) Intel 10th Gen, DDR4 2933 MHz, NVMe 2TB PCIe 3.0
    first 10M blocks replay, Ubuntu 23.10
real    19m36.713s
user    24m6.525s
sys     1m52.247s
  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    first 10M blocks replay, Ubuntu 22.04
real    6m35.121s
user    7m25.173s
sys     0m36.348s

I tried overclocking the memory to 5800 and 6000 MHz and got worse timings: 6m41s and 6m40s respectively (voltage was raised to stabilize, but the memory-controller frequency ratios might have needed more frequency to compensate). I didn't spend enough time on this to draw better conclusions. Timings were also auto-calculated by the motherboard, so they might not have been ideal.

Somewhere in the middle of these benchmarks, I hit a problem with the 990 PRO NVMe while it was running the 3B2QJXD7 firmware, so I updated it to the most recent at the time (4B2QJXD7). I shared some notes on how you can do this here.

(after firmware update)
  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    first 10M blocks replay, Ubuntu 22.04
real    5m51.730s
user    6m42.006s
sys     0m35.455s
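For reference, the firmware update's effect on the 10M-block replay can be quantified from the two `real` times above:

```shell
# 6m35.121s before vs 5m51.730s after the 990 PRO firmware update.
awk 'BEGIN{printf "%.1f%% faster\n", (1-(5*60+51.730)/(6*60+35.121))*100}'
# prints: 11.0% faster
```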

Full (blocks indicated)

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    full replay, Ubuntu 22.04
2023-12-24T20:44:35.362 application.cpp:174           load_logging_config  ] Logging thread started
...
2023-12-25T04:34:16.905 block_log.cpp:837             operator()           ] Exiting the queue thread
2023-12-25T04:34:17.386 block_log.cpp:887             for_each_block       ] Attempting to join queue_filler_thread...
2023-12-25T04:34:17.386 block_log.cpp:889             for_each_block       ] queue_filler_thread joined.
2023-12-25T04:34:17.386 database.cpp:460              reindex              ] Done reindexing, elapsed time: 28181.39933899999959976 sec
2023-12-25T04:34:17.386 chain_plugin.cpp:1101         plugin_startup       ] P2P enabling after replaying...
2023-12-25T04:34:17.386 chain_plugin.cpp:768          work                 ] Started on blockchain with 81314881 blocks, LIB: 81314862
2023-12-25T04:34:17.386 chain_plugin.cpp:774          work                 ] Started on blockchain with 81314881 blocks

7h50m 😱 - 81314881 blocks
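The reindex log above reports the elapsed time directly (28181.399 s), which gives the average replay throughput:

```shell
# 81,314,881 blocks replayed in 28181.399 seconds.
awk 'BEGIN{printf "%.0f blocks/s\n", 81314881/28181.399}'
# prints: 2885 blocks/s
```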

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    full replay, Ubuntu 23.10
2024-01-03T04:54:41.502 application.cpp:174           load_logging_config  ]
...
2024-01-03T12:39:03.301 chain_plugin.cpp:498          operator()           ] entering live mode
2024-01-03T12:39:03.305 block_log_artifacts.cpp:443   flush_header         ] Attempting to write header (pack_size: 56) containing: git rev: 0acce16829c0702ac09d9e51beb57b9fac157c22, format version: 1.1, head_block_num: 81592963, tail_block_num: 1, generating_interrupted_at_block: 0, dirty_close: 1

The difference compared with Ubuntu 22.04

2024-01-03T05:00:27.502 10000000 of 81583660 blocks = 12.26%    +5m46s      346     x1      -4s
2024-01-03T05:36:49.577 20000000 of 81583660 blocks = 24.51%    +36m22s     2182    x6.31   -35s
2024-01-03T06:44:02.082 30000000 of 81583660 blocks = 36.77%    +1h07m13s   4033    x11.66  -71s
2024-01-03T07:35:10.006 40000000 of 81583660 blocks = 49.03%    +51m08s     3068    x8.87   -55s
2024-01-03T08:08:33.446 50000000 of 81583660 blocks = 61.29%    +33m23s     2003    x5.79   -9s
2024-01-03T09:07:53.024 60000000 of 81583660 blocks = 73.54%    +59m20s     3560    x10.29  -49s
2024-01-03T10:38:08.463 70000000 of 81583660 blocks = 85.80%    +1h29m16s   5356    x15.48  -189s
2024-01-03T12:25:57.193 80000000 of 81583660 blocks = 98.06%    +1h42m49s   6169    x17.83  -491s
2024-01-03T12:38:13.505 81500000 of 81583660 blocks = 99.90%    +12m16s

7h45m22s 😱 - 81592963 blocks

Same system, filesystem, etc.

(after firmware update)

Not much noticeable difference! As expected, because most of this code depends on rather low-latency operations.

Hive-Engine

Light DB (restore)

  • 6-core (VM) Intel 10th Gen, DDR4 2933 MHz, NVMe 2TB PCIe 3.0
    mongo 7, Ubuntu 23.10, zfs recordsize=128k

time mongorestore --gzip --archive=./hsc_12-27-2023_b81403927.archive --drop

real    40m36.832s
user    9m0.805s
sys     3m9.786s
  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7, Ubuntu 22.04, zfs recordsize=128k

time mongorestore --gzip --archive=./hsc_12-27-2023_b81403927.archive --drop -j 8

real    19m17.922s
user    7m16.488s
sys     0m46.082s

Note: Using 8 parallel collections (-j 8, i.e. numParallelCollections) made no difference versus the default of 4 for this DB dump, on the same system.

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7, Ubuntu 23.10, zfs recordsize=128k

time mongorestore --gzip --archive=./hsc_12-27-2023_b81403927.archive --drop -j 8

real    26m47.608s
user    8m47.959s
sys     0m58.855s

vs -j 4

real    26m42.191s
user    8m44.144s
sys     1m3.569s

All the above runs used ashift=9 (512-byte sectors), which has a huge impact on NVMe wear... and it's not worth the performance, given the amount of writing you will be doing. So if you are using ZFS, make sure you increase ashift to at least 12 (4 KiB sectors).
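As a sketch of the ashift/recordsize settings discussed above (pool, dataset, and device names here are placeholders, not my actual layout):

```shell
# ashift is fixed per vdev, so it must be set at pool creation time:
zpool create -o ashift=12 tank /dev/nvme0n1

# recordsize can be changed at any time, but only affects newly written data:
zfs set recordsize=32k tank/mongo

# Verify both properties:
zpool get ashift tank
zfs get recordsize tank/mongo
```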

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7.0.5, Ubuntu 22.04, zfs recordsize=4k, NVMe with ashift=12

time mongorestore --gzip --archive=./hsc_01-10-2024_b81806836.archive --drop -j 16

real    21m16.251s
user    7m25.942s
sys     0m42.511s
  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7.0.5, Ubuntu 22.04, zfs recordsize=128k, NVMe with ashift=12

time mongorestore --gzip --archive=./hsc_01-10-2024_b81806836.archive --drop -j 16

real    21m10.244s
user    7m37.555s
sys     0m42.051s

And it looks like 32k-aligned IO is still more performant.

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7.0.5, Ubuntu 22.04, zfs recordsize=32k, NVMe with ashift=12

time mongorestore --gzip --archive=./hsc_01-10-2024_b81806836.archive --drop -j 16

real    20m24.926s
user    7m29.396s
sys     0m41.792s

Full DB (restore)

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7.0.4, Ubuntu 23.10, zfs recordsize=128k

time mongorestore --gzip --archive=./hsc_12-27-2023_b81403927.archive

This one was cancelled due to some sort of performance bug between Mongo 7 and Ubuntu 23.10.

2024-01-08T00:50:33.938+1300           hsc.chain  43.6GB
2024-01-08T00:50:33.938+1300    hsc.transactions  42.0GB
2024-01-08T00:50:33.938+1300
2024-01-08T00:50:34.122+1300    signal 'interrupt' received; attempting to shut down
2024-01-08T00:50:34.202+1300    terminating read on hsc.transactions
2024-01-08T00:50:34.239+1300    hsc.transactions  42.0GB
2024-01-08T00:50:34.239+1300    finished restoring hsc.transactions (539924017 documents, 0 failures)
2024-01-08T00:50:34.239+1300    Failed: hsc.transactions: error restoring from archive './hsc_01-05-2024_b81662950.archive': received termination signal
2024-01-08T00:50:34.239+1300    539924017 document(s) restored successfully. 0 document(s) failed to restore.

real    1583m42.338s
user    67m41.998s
sys     8m37.995s

And then, because I saw there was an update to Mongo, I did another run...

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7.0.5-rc0, Ubuntu 23.10, zfs recordsize=128k

time mongorestore --gzip --archive=./hsc_12-27-2023_b81403927.archive -j 16

2024-01-10T00:46:54.603+1300    727706502 document(s) restored successfully. 0 document(s) failed to restore.

real    1480m42.037s
user    124m20.456s
sys     14m11.413s

Slightly more than a full day... I am not sure why it can't do more IO; it likely needs lower latency rather than more parallel IO. The next step is to increase the number of NVMes to see if this scales.

And because this one was done with -j 16, I now wonder whether the first run was actually going to finish, since there is a part of the restore where progress becomes super slow. In this run, I also increased Mongo's cache from 8 to 16 GB, as I saw it being a limit at 8 GB.
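For anyone wanting to reproduce the cache change: the WiredTiger cache can be raised via a mongod flag (or the equivalent `storage.wiredTiger.engineConfig.cacheSizeGB` key in mongod.conf). The dbpath here is a placeholder:

```shell
# Raise the WiredTiger cache to 16 GB (the default is roughly 50% of RAM minus 1 GB).
mongod --dbpath /var/lib/mongodb --wiredTigerCacheSizeGB 16
```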

  • 24-core Intel 14th Gen, DDR5 5600 MHz, NVMe 1TB Samsung 990 PRO PCIe 4.0
    mongo 7.0.5-rc0, Ubuntu 23.10, zfs recordsize=32k

time mongorestore --gzip --archive=./hsc_12-27-2023_b81403927.archive -j 16

2024-01-11T04:27:19.628+1300    727706502 document(s) restored successfully. 0 document(s) failed to restore.

real    905m53.218s
user    113m19.469s
sys     12m29.928s
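Comparing this run against the recordsize=128k run above:

```shell
# 905m53.218s (recordsize=32k) vs 1480m42.037s (recordsize=128k), real time.
awk 'BEGIN{printf "%.1f%% faster\n", (1-(905*60+53.218)/(1480*60+42.037))*100}'
# prints: 38.8% faster
```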


32 comments

@tipu curate 3


😍😘🥰 - still in the 14th Feb mood 😝... and indeed the wife put in a lot of effort (supporting the time) to make me able to do these things. So yeah, hail to her !LADY


Well, after reading your enthusiastic post in all its technical details. I think that even with the overclocking and all that paraphernalia, I suspect that for only playing this game of Hive you didn't save yourself as much money as you could have saved by choosing Intel over AMD.

And for just that, there are much cheaper and more effective options. Like for example, the one I am currently using and which I describe in all its tech splendor right here.

But yeah, I also admit that it may be because I don't have or earn a lot of money and that I can also be a little nostalgic sometimes.


suspect that for only playing this game of Hive you didn't save yourself as much money as you could have saved by choosing Intel over AMD.

Not just playing the #hive game, mate! =) Lots of other games... and they can get very hungry for more than what I just acquired. Even Hive alone, depending on how many requests your node gets.


Not just playing the #hive game mate! =) Lots of other games...

Yeah, I can understand that. Perhaps it's just that I'm not a gamer and that for what I'm doing right now with my super rig horsepower, it still has a lot of muscle to crunch pixels around fast enough. ;o)


Though I might not really understand much of what you stated here technically, I strongly believe you are putting a great lot of work into making it work.


No worries. Either way, I was not anticipating this many replies!


There are very few people in the world who use their brains to do such high-paying things for less money, spend their time well and fulfill their needs the way you have. Spend less money on the computer that you wanted a good quality item and you got it and now you are doing it with your bladder.


High-paying is subjective here... but if you meant emotionally, or in terms of learning skills, yeahhhh... maybe. A little bit at least.

I could easily have gone with AMD now. But I was more interested in Intel, not so much because of the price difference, but because AMD is still "aligning" a few things with the AM5 socket. For these kinds of tests, going with an AM5 platform would be shooting myself in the foot, especially because of that memory re-training thing...

Either way, the next one will most likely be AMD, because I know what's coming (can't talk about it, though). If all goes to plan, it might even be this year... although that's a low-probability situation.

Sorry, this turned into an extended comment!


Of course it is true that the price difference does matter a lot. If a person is earning good money and living a good life in one place, then it is better to live there. Thanks for your detailed comment.


It's nice to see you so happy about your new purchase! :) even if I can't understand the technical part, I guess the results you got are particularly good!

!PGM !LUV !PIZZA


I am happy with them, yep. Even though it's only 1 NVMe for now, once I stuff the other ones in there, things will get more interesting. Waiting for crypto to go up to buy some more. 😜


Your target is a RAID configuration or more NVMes serve a different purpose? :)


Excellent question!

Reality is, I can do both at any time. Leveraging ZFS, I have a lot of flexibility in how I use devices. My target is 5 x 4TB NVMes... and 12 x HDDs for backups and lower-performance things. Everything can be "mixed" within ZFS, so I can create a RAID0 of the 5 NVMes... or a RAID5... or 6...

On the 12 disks, I will most likely create a RAID6 (10 data + 2 parity), and because of how ZFS is built, if I need some cache for the array, I can just slot a fast USB 3.1 pen drive into the system and get some fast IO out of it.

It's quite dynamic, and that's what I usually look for in a system when I don't know exactly where I will land in terms of requirements. But obviously, I need a good baseline of hardware: bugs or incompatibilities (usually part of exploring edge tech) can bring you more setbacks than progress. So balance is key in hardware; the cheapest per core/performance is not always the answer.
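A rough sketch of the layouts described above, with placeholder device names (not a definitive build; ZFS raidz1/raidz2 are the RAID5/RAID6-like options mentioned):

```shell
# 5 x NVMe striped (RAID0-like) for performance:
zpool create fast /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# ...or the same devices as raidz2 (two-disk parity) instead:
# zpool create fast raidz2 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1 /dev/nvme4n1

# 12 x HDD as raidz2 (10 data + 2 parity) for backups:
zpool create bulk raidz2 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf \
                         /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk /dev/sdl

# A fast device can be added later as an L2ARC read cache:
zpool add bulk cache /dev/sdm
```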


This job must have really been quite demanding and involves brain work also. You must have really been putting in a great lot of work


Benchmarking might actually be boring for many. I enjoy doing it because it involves some troubleshooting and "finding out" about things. So for me it's quite a pleasure, hence I don't do it under any pressure. Just an (almost) daily thing in life.

But thanks for the different perspective.


What do you do with all the old stuff? And how often do you feel you need to upgrade? I suppose tech stuff must go out of date pretty fast.
!LUV
!LOL
!PIZZA
!PGM
!COFFEE
!ALIVE


Old stuff can still be used for smaller tasks: proxies, backups, smaller and less demanding front ends... unless the power bill starts creeping up.

Upgrades depend, but blockchain workloads mostly don't creep up over time; in my experience it's the other way around, as the technology often advances faster. So upgrading is usually because you want to take on other challenges.

For example, I am trying to run a FULL Hive node from home (currently I only run a witness node, which is a sort of minimalistic version of a full node), in parallel with the Hive Engine node I already run.

This time it was also because I wanted to move to DDR5, and NVMes are getting interesting for doing cool stuff at home. Hence the move... and because I didn't want to wait any longer; it has already been quite a wait.

CPUs are not the main reason for upgrading; they are actually at the bottom of the list, and are often the part that holds up solidly the longest.

The crucial factors are usually memory speed/bandwidth and then IOPS; those are the motivators nowadays. Then, for those who play games or want to do AI, it's the GPUs, or for AI the specific accelerators (even if GPUs can do the job too, there is cheaper and more powerful stuff out there for AI).

Anyhow, all part of the job... 😇


You didn't mention how much RAM these machines had onboard. I am wondering whether it is feasible for me to run my own Hive node.


Currently 96GB, and if necessary it can go up to 192GB, most likely at the same frequency. On the AMD sockets, not yet (using 4 DIMMs substantially lowers the clock speed), and that was another reason I didn't want to go with AMD now. But I know this will improve on newer CPUs, so I will wait until then to acquire the new AMD CPU/board for the IO side of my infra.
