Corey Pigott

Is It Possible to Quantify Decentralization?

Updated: Apr 1, 2019


In the crypto community, decentralization is one of the most desired properties a cryptoasset can have. If we feel decentralization is integral to the success of a cryptoasset, why have we not come up with a comprehensive and objective way to quantify it? Currently, decentralization is entirely subjective, with different individuals and groups using different metrics to determine what makes a cryptoasset decentralized. If there were a definitive way to quantify decentralization, the community would benefit in a few different ways:


  • Measurement​ — Having the ability to measure decentralization easily and track historical trends would be valuable for the entire community. Not only would current and future cryptoassets be provided a model for decentralization, but regulatory bodies would also have a better framework for determining which cryptoassets are decentralized. Based on recent SEC comments, this appears to be a major factor in determining whether a cryptoasset is a security.

  • Improvement​ — A measurement of decentralization will allow for teams to measure the impact of improvements or updates to a project’s decentralization. This would provide more appropriate updates to blockchains, compared to some of the meaningless hard forks making ancillary tweaks to the code of Bitcoin or other larger projects. We would also be able to measure, for example, whether deploying more nodes or hiring more developers would provide a greater improvement in a project’s decentralization.

  • Optimization​ — If our goal is to optimize the level of decentralization within a cryptoasset, we will need to have an objective and quantifiable metric for determining decentralization that can be applied to all cryptoassets.

A Starting Point


Balaji S. Srinivasan and Leland Lee, both formerly with 21.co (now Earn.com), formulated a possible metric for quantifying decentralization called the Minimum Nakamoto Coefficient. The Minimum Nakamoto Coefficient is related to two more common economic concepts, the Lorenz Curve and the Gini Coefficient. Without going into too much detail, these concepts essentially measure equality/inequality or, in crypto terms, centralization/decentralization. The basic concept of the Lorenz Curve and Gini Coefficient is shown below:


The Lorenz Curve (red line). The more the Lorenz Curve deviates from the 45-degree line of perfect equality, the larger the Gini Coefficient becomes.
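For reference, the Gini Coefficient can be computed directly from a list of holdings. The sketch below uses the standard discrete formula with made-up numbers, purely for illustration:

```python
def gini(values):
    """Gini Coefficient of a distribution: 0 = perfect equality, near 1 = maximal inequality."""
    xs = sorted(values)
    n = len(xs)
    total = sum(xs)
    # Discrete formula: G = (2 * sum(i * x_i)) / (n * total) - (n + 1) / n
    weighted = sum(i * x for i, x in enumerate(xs, start=1))
    return (2 * weighted) / (n * total) - (n + 1) / n

# A perfectly equal distribution scores 0; one holder owning everything scores near 1
gini([25, 25, 25, 25])  # → 0.0
gini([0, 0, 0, 100])    # → 0.75 (the maximum for n = 4 is (n - 1) / n)
```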

Applying these concepts to cryptocurrency, Srinivasan and Lee highlighted six subsystems that play a role in the decentralization of a cryptoasset. Those six subsystems are:


  1. Mining​ — a measure of the distribution of block rewards across mining entities within a 24-hour period

  2. Client​ — a measure of the distribution across the different client codebases

  3. Developers​ — a measure of the number of developers who have contributed to the code (measured through commits on GitHub)

  4. Exchanges​ — a measure of the trading volume across exchanges

  5. Nodes​ — a measure of node distribution across countries

  6. Ownership​ — a measure of the distribution of coin ownership through addresses

The Minimum Nakamoto Coefficient essentially plots the Lorenz Curve for each subsystem listed above. From there, the Minimum Nakamoto Coefficient for each subsystem is calculated by determining how many entities are required to reach 51% of the total capacity, as shown in the example below:


Sample Minimum Nakamoto Coefficient calculation showing the number of entities needed to reach 51% (in red)

This concept takes a similar approach to the idea of a 51% attack: it measures the minimum number of entities that would need to collude to make up a majority and potentially compromise the system. For example, what is the minimum number of Bitcoin nodes needed to make up a 51% majority?
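This calculation can be sketched in a few lines of Python. The function below and the example hashrate shares are illustrative assumptions, not data from the article; the threshold parameter anticipates the later question of whether 51% is the right bar for every subsystem:

```python
def nakamoto_coefficient(shares, threshold=0.51):
    """Minimum number of entities whose combined share meets the threshold."""
    total = sum(shares)
    running = 0.0
    # Greedily accumulate the largest entities first
    for count, share in enumerate(sorted(shares, reverse=True), start=1):
        running += share
        if running / total >= threshold:
            return count
    return len(shares)

# Hypothetical hashrate distribution across seven mining pools (percent)
pools = [30, 20, 15, 10, 10, 8, 7]
nakamoto_coefficient(pools)                  # → 3 (30 + 20 + 15 = 65% ≥ 51%)
nakamoto_coefficient(pools, threshold=0.75)  # → 4 (30 + 20 + 15 + 10 = 75%)
```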

Once the Minimum Nakamoto Coefficient is calculated for each subsystem, the lowest Minimum Nakamoto Coefficient amongst the subsystems becomes the final Minimum Nakamoto Coefficient for the cryptoasset as a whole. The higher the Minimum Nakamoto Coefficient, the more decentralized the cryptoasset is. From their findings, Srinivasan and Lee concluded that based on the reliance on Bitcoin Core and Geth codebases, Bitcoin and Ethereum had a Minimum Nakamoto Coefficient of 1, which is the lowest possible score, meaning the projects are still essentially centralized due to the reliance on these two codebases within that one subsystem. For reference, Bitcoin’s other subsystems were calculated to be 5 (Mining), 5 (Developer), 5 (Exchange), 3 (Node), 456 (Ownership).
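Aggregating the subsystem coefficients is then a one-liner; the sketch below uses the Bitcoin figures quoted above, with the client subsystem as the binding constraint:

```python
# Per-subsystem Minimum Nakamoto Coefficients for Bitcoin, as quoted above
btc = {
    "mining": 5,
    "client": 1,       # reliance on the Bitcoin Core codebase
    "developer": 5,
    "exchange": 5,
    "node": 3,
    "ownership": 456,
}

# The overall coefficient is the minimum across subsystems: the weakest link
overall = min(btc.values())      # → 1
weakest = min(btc, key=btc.get)  # → "client"
```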

Shortcomings of the Minimum Nakamoto Coefficient


It is clear that the Minimum Nakamoto Coefficient is not the greatest metric for determining decentralization. We would bet most people within the space believe Bitcoin and Ethereum are both more decentralized than the bare minimum this metric suggests. While the Minimum Nakamoto Coefficient is a great starting point, we see a few issues with it that may explain the deflated decentralization score:


  • Are these the correct six subsystems for calculating the Minimum Nakamoto Coefficient? Are counts of client codebases or exchange trading volume more critical to determining decentralization than other metrics? Exchanges are known to inflate volume to look more attractive to traders, and hashrates are only measurable because miners choose to put their names into blocks. Would metrics such as founder/team, block difficulty, community, or daily active addresses be more worthy of consideration? In addition, is six the right number of subsystems for an encompassing review of a cryptoasset? Reorganizing the subsystems or adding new ones could produce a more robust Minimum Nakamoto Coefficient.

  • Do all subsystems need a 51% majority to be considered compromised? While a 51% majority of nodes or address ownership would be cause for concern, would a 51% majority of exchange volume raise the same alarm? If an exchange were compromised, volume could easily shift to an uncompromised exchange. It’s possible the exchange subsystem would need north of 75-80% majority to be considered compromised. Having the flexibility to modify the majority threshold could improve the accuracy of the coefficient.


  • Are all subsystems created equal? If they are, would averaging the subsystems’ Minimum Nakamoto Coefficients provide a better measurement? If they are not, should we apply a weighting system to the subsystems? That would let the subsystems that are stronger indicators of decentralization carry more weight in the final measurement. Using an average or a weighted average would also eliminate the reliance on the weakest link among the subsystems. Taking Bitcoin and Ethereum as examples again, using the research conducted by Srinivasan and Lee, an average of the subsystems would give a Minimum Nakamoto Coefficient of 80.66 and 14.5, respectively.
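A weighted-average variant is easy to sketch. The weights below are purely hypothetical, chosen only for illustration; the subsystem values are the Bitcoin figures quoted earlier:

```python
# Bitcoin subsystem coefficients quoted earlier in the article
btc = {"mining": 5, "client": 1, "developer": 5,
       "exchange": 5, "node": 3, "ownership": 456}

# Hypothetical weights (an assumption for illustration only, summing to 1.0)
weights = {"mining": 0.25, "client": 0.20, "developer": 0.20,
           "exchange": 0.10, "node": 0.15, "ownership": 0.10}

weighted_score = sum(btc[k] * weights[k] for k in btc)  # 49.0 with these weights
```

Unlike the plain minimum, a single weak subsystem no longer drags the whole score down to 1; whether that is a feature or a bug depends on how critical one believes each subsystem to be.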

Conclusion

It’s possible that a combination of these upgrades to the Minimum Nakamoto Coefficient, or fixes for other shortcomings, could provide a more accurate scoring metric for determining decentralization. With everyone, especially regulators, focused on the path to decentralization for all of these projects, the logical step would be to find a way to objectively quantify it. While the Minimum Nakamoto Coefficient may not be the be-all and end-all, it’s a great starting point to encourage a discussion and perhaps serve as a catalyst for finding an appropriate solution.
