EOS: State of the Chain — 10/12/18

eosiob
Oct 12, 2018


A brief report on the state of EOS Infrastructure.

WABT Impact

This is a 14-day graph of average CPU execution time for each Block Producer that has been in the top 21. As you can see, as producers rolled out WABT (EOS 1.3.x) over the past two weeks, CPU time has steadily dropped.
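For reference, the WASM runtime is selected in nodeos’s config.ini, so on 1.3.x switching to WABT should be a one-line change (assuming an otherwise stock configuration):

    wasm-runtime = wabt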

Aloha EOS Benchmarking Tool

This is a great public metric because it pushes BPs to get better hardware and run it more efficiently. Standby BPs can also join the testnets and prove that they can produce blocks. However, there is currently no way to verify that a producer runs the same hardware on the testnet as on the mainnet. https://www.alohaeos.com/tools/benchmarks

EOS API Node Consistency

Recently, EOS API nodes have been much more stable and in sync with the network: across the board, API nodes are staying synchronized more of the time. BlockMatrix’s eosnode.tools/monitor tracks these metrics.

This is the head block synchronization time of all of the Block Producer EOS API nodes from September 10–14. As you can see, there are a lot of dots scattered everywhere. The lack of uniformity shows that many nodes are terribly out of sync with one another.

https://eosnode.tools/monitor

The image below is from about one month later, October 5–9. It’s easy to see how much more in sync the nodes have been lately, and this is a strong indicator of the usability of the network.

https://eosnode.tools/monitor
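The underlying measurement is simple: ask every API node for its head block number and compare. Here is a minimal sketch of that check in Python; the endpoint list is a placeholder, not a real set of BP URLs:

    import requests

    # Placeholder list; in practice this would be every BP's public API endpoint.
    ENDPOINTS = [
        "https://api.bp-one.example",
        "https://api.bp-two.example",
    ]

    def head_blocks(endpoints):
        """Query /v1/chain/get_info on each node and return its head block number."""
        results = {}
        for url in endpoints:
            try:
                info = requests.get(url + "/v1/chain/get_info", timeout=5).json()
                results[url] = info["head_block_num"]
            except Exception:
                results[url] = None  # unreachable nodes count as out of sync
        return results

    blocks = head_blocks(ENDPOINTS)
    best = max(b for b in blocks.values() if b is not None)
    for url, num in blocks.items():
        # EOS produces a block every 0.5 seconds, so lag in blocks maps to seconds.
        lag = None if num is None else (best - num) / 2
        print(url, "head:", num, "lag (s):", lag)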

Overall, Block Producers are developing a deeper understanding of the software. This deeper understanding helps them configure their nodes in a way that is optimized for common queries and activities.

One slight downside is that EOS v1.3 brought in a regression with history calls: if you upgrade to 1.3 without a replay, history queries will throw exceptions, and only a full hard-replay will fix this. The issue is tracked on GitHub.
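For reference, a hard replay throws away the existing state database and rebuilds it from the locally stored block log; with a stock install it should be roughly:

    nodeos --hard-replay-blockchain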

EOS API Proxy from BlockMatrix

If you are using an EOS API in your application or scripts, I highly recommend BlockMatrix’s proxy. It’s a single API proxy that balances requests based on latency and performance to get your query to the best location. You can check it out here:

https://eosnode.tools/proxy
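Because the proxy speaks the standard EOSIO chain API, pointing existing code at it is just a base-URL swap. A quick illustration in Python; the proxy URL below is a placeholder, so check eosnode.tools for the real endpoint:

    import requests

    # Placeholder base URL; see https://eosnode.tools/proxy for the actual endpoint.
    PROXY = "https://proxy.example.com"

    # Any standard chain call works; the proxy routes it to the best upstream node.
    info = requests.get(PROXY + "/v1/chain/get_info", timeout=5).json()
    print(info["head_block_num"], info["chain_id"])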

Having healthy, well-designed API proxies is very good for EOS. Pete from BlockMatrix/EOSNode.tools is one of the top infrastructure people working on EOS right now. The 1.3.x issue noted above recently brought down 7 of the 8 valid full-history API nodes, but apps using this proxy would be protected from changes to the BPs’ API nodes.

EOS API Error Code Specification

While I do believe that the error reporting of the EOS API needs to be more robust, here is some very nice work from the OracleChain team: they have documented all of the EOS API error codes in plain English and Chinese.
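For context, nodeos wraps failures in a structured JSON envelope, and the numeric error.code is what a table like OracleChain’s documents. An illustrative (not exhaustive) failure body looks roughly like this:

    {
      "code": 500,
      "message": "Internal Service Error",
      "error": {
        "code": 3050003,
        "name": "eosio_assert_message_exception",
        "what": "eosio_assert_message assertion failure",
        "details": []
      }
    }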

LiquidEOS Heartbeat Plugin for Producers

LiquidEOS released a new version of their nodeos heartbeat plugin. This plugin runs inside of nodeos on the producer nodes and reports system settings and information on-chain every few minutes. This provides a new level of transparency for Block Producer operations.

So far 17 Block Producers have installed it; some are in the top 21 and some are still candidates who are not yet in standby position. You can view them here, but below is an example from sheos21sheos that shows some of the heartbeat information, including the nodeos version, account blacklist hash (verification of ECAF orders), CPU type, and whether the producer node is running on bare metal or in a virtual machine.

Example of LiquidEOS heartbeat public website result:

http://heartbeat.liquideos.com/
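Because heartbeats are written on-chain, anyone can audit them from any synced API node with a standard get_table_rows call. This is a minimal sketch in Python; note that the contract account, scope, and table names here are hypothetical placeholders, so check the LiquidEOS plugin documentation for the real ones:

    import requests

    API = "https://api.example.com"  # any synced EOS API node (placeholder URL)

    payload = {
        "code": "heartbtplugn",   # hypothetical heartbeat contract account
        "scope": "heartbtplugn",  # hypothetical scope
        "table": "heartbeats",    # hypothetical table name
        "json": True,
        "limit": 50,
    }

    rows = requests.post(API + "/v1/chain/get_table_rows", json=payload).json()
    for row in rows.get("rows", []):
        print(row)  # producer, nodeos version, blacklist hash, CPU, VM flag, etc.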

Every standby and Block Producer candidate should be installing this heartbeat plugin, especially if they are committed to transparency and especially if they claim to be running bare metal.

I’m a strong believer that Block Producers should be running their own bare-metal servers and not hosting their nodes on AWS/Azure. I run the nodes for shEOS, the female-owned Block Producer; shEOS is the node shown in the heartbeat photo above. We actually run nodeos in an LXD container on bare metal, with ZFS deduplication on the root file system. I’ve been very happy with this setup so far.
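For anyone curious, a setup along those lines can be reproduced with stock LXD and ZFS tooling. This is a rough sketch; the pool, dataset, and container names are placeholders, not our production values:

    # create a ZFS-backed LXD storage pool (dataset name is a placeholder)
    lxc storage create eospool zfs source=tank/lxd

    # enable deduplication on the backing dataset
    zfs set dedup=on tank/lxd

    # launch the container that will run nodeos, backed by that pool
    lxc launch ubuntu:18.04 nodeos-bp -s eospool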

If anyone has any questions, comments, or just wants to chat about infrastructure — please hit me up on Twitter.

Thanks for reading!
