Recently, @Peter R started a thread addressing epistemological issues involving the true nature of Bitcoin. This thread focuses on a similarly esoteric issue: the limits of decentralization.
The prompt for this thread is a recent story about US power grid vulnerabilities in the Las Vegas Sun. Unsurprisingly, by interconnecting local power grids into a single 'intergrid', engineers and officials have created a system that essentially begs to be attacked. (The headline writer engages in a humorous bit of psy ops by warning about the potential for foreign attacks, as if no one within the USA's borders would do such a shameful thing.)
Whether the system is a power grid, a web host, medical records, state secrets, or anything else that is system critical, centralized control, whether physical or merely behind a single password, becomes a single point of failure. Clearly, like the Internet, an optimal system should be built on a network with a multitude of nodes, such that, if any or even most nodes are corrupted, the remaining network can continue to operate.
The question then becomes: to what degree of granularity? Should, e.g., the power grid be broken into sub-grids that serve 10 million households each? If 10 million is still too big and vulnerable, perhaps 1 million? Maybe each neighborhood? Perhaps, each building? Why stop there, and why not have each commercial or residential unit—each office and apartment—generate its own power? Why not each appliance? Should each appliance have multiple redundant and independent power sources, just to be on the safe side?
However, if everything that uses electricity is walled off from everything else, the generators cannot share their idle-time capacity, and those appliances that experience extreme peak periods cannot buy or borrow capacity. In a completely atomistic world, every appliance must have sufficient capacity for peak load, even if this amount of power is needed only rarely.
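To put a rough, hypothetical number on what pooling buys (the 1,000-load setup and the figures below are invented purely for illustration, not taken from the article): when individual loads peak at different times, a shared grid needs far less generating capacity than the sum of every load's private peak. A quick Python sketch:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 1,000 loads, each with a random hourly demand
# profile over one day.  The numbers are invented purely to show the
# effect of pooling; nothing here comes from the article.
n_loads, n_hours = 1000, 24
profiles = rng.uniform(0.1, 1.0, size=(n_loads, n_hours))   # kW drawn per hour

# Fully atomistic world: every load carries capacity for its own peak.
isolated_capacity = profiles.max(axis=1).sum()
# Shared grid: capacity only has to cover the single worst pooled hour.
pooled_capacity = profiles.sum(axis=0).max()

print(f"islanded capacity needed: {isolated_capacity:,.0f} kW")
print(f"pooled capacity needed:   {pooled_capacity:,.0f} kW")
```

The gap between the two numbers is exactly the sharing that a fully atomistic design forgoes.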
Fortunately, we have a bit of guidance here from the work of James Buchanan and Gordon Tullock in the analysis of the optimal voting rule, which faces a similar issue regarding decentralization of decision making.
Buchanan & Tullock's Public Choice model is a basic Supply and Demand graph, with Costs on the vertical axis, corresponding to Price, and Quorum on the horizontal axis, corresponding to Quantity, and running from 1 (absolute depotism) to the population n (absolute unanimity).
The cost of bad decisions to the population collectively falls as the size of the majority needed to enact a motion increases. For example, if a single absolute despot is able to make all decisions, he or she need not worry about how those decisions will affect others, and the risk of bad decisions is very high. At the other extreme, if no motion can be enacted unless it receives unanimous support, then the likelihood of a bad decision's being enacted is extremely low, as every member of the population has veto power.
The opposite relationship exists with regard to decision-making costs. For example, if a single absolute despot is able to make all decisions, he or she need only issue edicts, the cost of which involves merely uttering the edict and seeing that a scribe records it. At the other extreme, if all new decisions require unanimous support, then one must convince all members of the population to go along, and deal with any strategic holdouts, who try to extract veto rents by threatening to derail the decision.
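Buchanan and Tullock make this argument graphically; for readers who prefer notation, the two relationships can be summarized as follows (the symbols E, D, and q are shorthand introduced here, not theirs):

$$
\frac{\mathrm{d}E}{\mathrm{d}q} < 0, \qquad \frac{\mathrm{d}D}{\mathrm{d}q} > 0, \qquad 1 \le q \le n,
$$

where $E(q)$ is the expected external cost of bad decisions under a required quorum of $q$ votes, and $D(q)$ is the cost of reaching a decision under that rule.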
Combining these two graphs yields a model that is recognizable to any economist and that has known properties.
In the combined graph, c* is the minimum achievable sum of the two costs (the cost of bad decisions plus the cost of making them), and q* is the size of the majority needed to enact new motions that corresponds with c*. This q* is sometimes referred to as the optimal voting rule, although it need not be greater than n/2; it can be any quorum that yields c*.
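A minimal numerical sketch of that combined model, with purely made-up functional forms and a population of n = 1,000 (none of these parameters come from Buchanan & Tullock; they only illustrate the shape of the trade-off):

```python
import numpy as np

# Purely illustrative cost curves; the functional forms and the
# parameters 500, 400, and n = 1000 are assumptions for this sketch,
# not anything from Buchanan & Tullock, whose treatment is graphical.
n = 1000                              # population size
q = np.arange(1, n + 1)               # candidate quorum sizes, 1..n

# External cost of bad decisions: falls as the required quorum grows.
external_cost = 500.0 * (n - q) / (n - 1)
# Decision-making cost: rises as more voters must be convinced.
decision_cost = 400.0 * ((q - 1) / (n - 1)) ** 2

total_cost = external_cost + decision_cost
q_star = q[np.argmin(total_cost)]     # quorum that minimizes total cost
c_star = total_cost.min()             # the irreducible minimum, c* > 0

print(f"q* = {q_star} of {n} voters, c* = {c_star:.1f}")
```

With these particular parameters q* happens to land above n/2, but steeper decision-making costs or flatter external costs would push it below.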
The takeaway is that c* cannot be reduced to zero. On the one hand, minimizing decision-making costs increases the likelihood and costs of bad decisions, as when, hypothetically, a development team aligned with a single corporate agenda hijacks an open source project that exhibits extreme economies of scale. On the other hand, minimizing the incidence of bad decisions increases the costs of making any changes in response to new information, knowledge, or experience.
The optimum is between these two extremes, where consensus—i.e., the least-bad that the majority can go along with, even if begrudgingly—reigns, and all parties agree to disagree, so long as everyone gets his or her way sometimes.
With regard to power grids, this optimum should be the population size per grid that is large enough to enable load balancing yet small enough that a disruption affects as few customers as possible.
With regard to systems more relevant here, the optimum will be that consensus that spreads decision-making as broadly as possible without having the decision-making process turn into the Galactic Senate. In other words, there cannot be anything approaching a Linus Torvalds of Bitcoin, if both the costs of bad decisions and the costs of making decisions concerning changes to the protocol or software running on it are to be minimized as far as possible. To reduce one or the other even further is to increase the other, perhaps dramatically.
Where, precisely, this optimal 'quorum' lies remains to be seen, but one is far more likely to find something if one knows to look for it.