10th update of 2022 on BlockTrades work on Hive software


Below are a few highlights of the Hive-related programming issues worked on by the BlockTrades team since my last post.

Hived (blockchain node software) work

Mirrornet (testnet that mirrors traffic from mainnet)

We completed some experimental optimizations for faster verification of transactions and blocks, and yesterday we launched a new mirrornet to observe how these optimizations perform in an environment that closely matches the mainnet, to verify that they perform as well as they did in the more isolated testnet environment.

Further optimization of OBI (one-block irreversibility) protocol

The optimizations to the OBI protocol are mostly done, but the dev for this work is currently tied up with the refactoring of the transaction and block handling code (a task discussed below), so they still need to be fully completed and tested. I don’t expect this task to take long once it resumes.

Refactoring of transaction and block handling code

As I mentioned in my last post, one of the last planned changes to hived was to support assets using “network asset identifiers” instead of strings, and this led us to the discovery that we also needed to rewrite a lot of the transaction code as part of this task. Since we were going to have to touch a lot of code anyway, it made more sense to make this change as part of a larger code refactor that we were planning to do after the hardfork to improve the speed of transaction and block processing.

The primary change associated with this code refactoring is the creation of new full_transaction and full_block objects to avoid unnecessarily repeating previous computations. These objects are wrappers around the old transaction and block data, and they also contain metadata about the original data (for example, a full_block can contain: the compressed binary block data, an uncompressed binary version of the block, the unpacked block decoded into a block header and associated transactions, and various metadata such as the block_id). Similarly, a full_transaction can store the binary version of the transaction, the unpacked version of the transaction, and computed metadata such as required signatures and whether the transaction has been previously validated.

Access to this data is encapsulated inside accessor functions of these full objects, allowing us leeway in the future to change when such data is computed and for how long it is cached in memory. For example, some of this data can be precomputed on the p2p side prior to delivery to the write_queue thread (either in the p2p thread itself or in worker threads). Alternatively, such data can be left uncomputed and lazily computed only when it is actually required (i.e. when one of the accessors is called).

This encapsulation and caching scheme even leaves open the ability to dynamically make decisions about when to undertake computation of this data based on the form of the data (for example, depending on whether a block was small or very large or how many signatures are in it).
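
To make the caching idea more concrete, here is a minimal sketch of the lazy-accessor pattern in Python. hived itself is C++, and the class name, compression, and hashing choices below are illustrative stand-ins rather than the actual hived implementation:

```python
import hashlib
import zlib
from typing import Optional


class FullBlock:
    """Illustrative wrapper that caches derived forms of a block.

    Only the compressed bytes are required up front; every other form is
    computed lazily the first time its accessor is called, then cached so
    later callers (p2p, fork database, API layers) don't repeat the work.
    """

    def __init__(self, compressed_bytes: bytes):
        self.compressed_bytes = compressed_bytes
        self._uncompressed: Optional[bytes] = None
        self._block_id: Optional[str] = None

    def uncompressed(self) -> bytes:
        if self._uncompressed is None:
            # zlib stands in for hived's actual block compression scheme.
            self._uncompressed = zlib.decompress(self.compressed_bytes)
        return self._uncompressed

    def block_id(self) -> str:
        if self._block_id is None:
            # sha256 stands in for the real block-id computation.
            self._block_id = hashlib.sha256(self.uncompressed()).hexdigest()
        return self._block_id


# The expensive steps run once, no matter how many layers ask for the result.
block = FullBlock(zlib.compress(b"serialized block bytes"))
print(block.block_id())  # triggers decompression + hashing
print(block.block_id())  # served from the cache
```

Because callers only ever go through the accessors, the same cached fields could instead be filled in eagerly (for example by the p2p thread or worker threads before a block reaches the write_queue) without changing any of the calling code.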

The good news is that we’ve made swift progress on this refactoring task. We have completed the largest part of it, which was the introduction of the full_transaction/full_block objects to all layers of the code (p2p, blockchain processing, fork database, and API layers), and as of today, this branch is passing all automated tests.

In the next phase of this task, we’ll begin benchmarking this new version of the code and experiment with further optimizations.

Hive Application Framework (HAF)

We fixed a bug with the operations-filtering feature that could arise when a HAF server was configured to filter out account creation operations from the account_operations table. HAF assumes that almost all HAF apps will at least want to store data about what Hive accounts exist, because users interact with most Hive apps by signing transactions with their Hive accounts. So even when account creation operations are explicitly filtered out using the operations-filtering feature, some information about them is still collected, and the resulting “partial” filtering was causing a foreign key problem during index creation.

We also created a new standardized HAF SQL call, hive.app_reset_data, to reset a HAF app in preparation for replaying the app from scratch.
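
As a rough sketch of how an app operator might invoke it (the connection settings, role, and the assumption that the function takes the app's context name as its argument are mine; check the HAF documentation for the exact signature):

```python
import psycopg2  # any PostgreSQL client library would do

# Hypothetical connection settings and context name; adjust for your HAF server.
conn = psycopg2.connect("dbname=haf_block_log user=haf_app_admin")
with conn, conn.cursor() as cur:
    # Assumed usage: clear the app's HAF state (context, shadow tables,
    # registered table entries) so the app can be replayed from scratch.
    cur.execute("SELECT hive.app_reset_data(%s);", ("my_app_context",))
```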

But aside from the above bug fix and new API call, HAF development is now mostly focused on establishing best practices for deploying, securing, and managing HAF-based apps in conjunction with a HAF server (including documenting these best practices), primarily via the use of docker containers to deploy both the hived node that feeds the HAF database and the HAF apps that respond to API calls and read data from this database. Much of this work is being done as part of related work to simplify and speed up automated testing (CI testing) of HAF apps, so you can see some of it in the hafah repo, for example.

We’ve also continued to update the documentation for HAF, especially with regard to creating and deploying dockerized HAF servers. I still plan to make a few revisions to the docs and I’m thinking about ways we can organize all the information further, but the documentation changes so far have now been merged into the develop branch: https://gitlab.syncad.com/hive/haf/-/blob/develop/README.md

HAF account history app (aka hafah)

At this point, there are no known issues with hafah functionality or performance, and we’ve been testing its real-world performance serving up data on our production API node (api.hive.blog) for a couple of weeks now without any problems.

We did add one new feature to hafah: a new API method called get_version that allows hafah clients to track the git revision of the server they are relying on. I’m thinking we may next want to look into how to “standardize” this particular API call across all HAF apps (i.e. have a standardized way to request the git revision of any HAF app).
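
For illustration, a client might query it over JSON-RPC roughly like this (the method namespace and parameters here are guesses on my part; the hafah repo is the authoritative reference for the real call):

```python
import json
import requests

# Hypothetical method namespace; the real name is defined in the hafah repo.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "hafah_api.get_version",
    "params": {},
}
response = requests.post("https://api.hive.blog", json=payload, timeout=10)
print(json.dumps(response.json(), indent=2))  # expected to include a git revision
```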

HAF-based hivemind (social media middleware server used by web sites)

We found one further problem during live sync testing of HAF-based hivemind last week when a fork occurred. At first there was a suspicion that the problem was actually a HAF problem, but this turned out to be a red herring, and earlier today we determined that the issue was an error in hivemind itself (now fixed).

We’ve launched a new full sync of hivemind to test the fix. Based on past performance, we expect it will take about 54 hours to reach live sync, and then we’ll leave it running in live sync mode.

Some upcoming tasks

  • Finish dockerization and CI improvements for HAF and HAF apps (nearly done I think).
  • Complete hived full block/transaction benchmarking and optimizations (largest remaining task other than testing).
  • Merge in new RC cost rationalization code (this is blocked by hived optimizations task above because those changes will impact real-world costs of operations).
  • Collect benchmarks for a hafah app operating in “irreversible block mode” and compare to a hafah app operating in “normal” mode.
  • Complete and benchmark HAF-based hivemind, then deploy and test on our API node.
  • Test enhancements to one-block irreversibility (OBI) algorithm.
  • Continue testing using updated blockchain converter for mirrornet.

When hardfork 26?

Assuming we are able to complete sufficient optimizations to hived in the coming week (I’m hopeful on this point, but there are a lot of moving parts, so I can’t be sure yet), we would still be looking at the same projected date as in my last post (end of July). As for the reasoning behind this timeline: essentially, I would like a 30-day window for testing prior to the hardfork, after freezing hived’s feature set.



21 comments

Thanks for the update, BlockTrades. I appreciate and welcome your 10th update, which will surely bring more success to the Hive blockchain platform in the coming days. Keep updating as much as possible!


Fantastic work, it's great to see you continue to drive change and upgrade the project. It's a great community.


Great work, lots of moving parts but progress.
Thanks for the info


Happy to see we are on schedule for the end of July.


Christmas is more common around here than HFs :P


It could be called a hard spoon..
Since their releases are so fluid.
Hive souped-up by @blocktrades.


Great to see the progress.

Is increasing the haircut rule percentage still planned? I have been reading some concerns about that (especially after seeing the collapse of UST), so I was wondering if it would make sense for the blockchain to have a hard upper limit, but for the witnesses to be able to set a lower limit at will. So the witnesses would be able to bring the haircut rule limit as low as they want (if need be), but they cannot move it above the hard limit set by the blockchain itself. In this way, they would be able to quickly react and stop a potential systemic collapse from happening, if such a threat ever arises.


The code change for raising the haircut percentage from 10% to 30% was made a while back in the develop branch (i.e. it is part of the hardfork changes). I don't believe there is any major risk at 30%. Also, I believe the change will increase HBD liquidity and reduce risk for HBD holders, and I believe this will have knock-on positive benefits for Hive holders as well.

With a small value like 30%, I don't think it is a good idea to allow it to be lowered by witnesses. It is more work and I think making it a runtime-changeable variable would dramatically increase the uncertainty involved in investing in HBD, to the detriment of the entire mechanism.


Looking forward to this HF, great work guys!
$WINE


I am very happy with the development of BlockTrades, which is getting more and more amazing. The updates keep coming to make all users comfortable and safe, and I really appreciate it. Please continue developing with great and continuous updates until it reaches an extraordinary level of perfection; thank you very much for this great information.


Awesome that you created the hive.app_reset_data function. I was encountering an issue when I dropped a HAF app's schema before removing its context, which left the shadow tables and other HAF entries (like registered tables) intact, blocking the recreation of the context. So I had to manually remove those entries if I forgot to remove the context first.


Yeah, I had a similar experience that led to this change. Whenever there's a requirement to do a sequence of commands in a specific order or "bad things happen", it's a good idea to try to prevent it from being a sequence that a human has to type :-)
