One of the small blockers’ basic arguments goes a little something like this:
1. Bitcoin’s entire value proposition is based on it being decentralized.
2. Larger blocks will reduce Bitcoin’s level of decentralization.
3. Therefore, we shouldn’t raise the block size limit.
It seems to me that there are a number of problems with this argument. Most of the debate seems to focus on the second premise. Large blockers will often claim that a larger block size will actually
increase decentralization, saying things like “sure, the cost of running a full node might increase, but we can expect the total number of full nodes to go up thanks to the increased adoption that larger blocks will promote.” Of course, the real problem with this whole debate is not only the speculative nature of these kinds of predictions, but also the fact that there’s no single, agreed-upon measure of “decentralization.” It’s a complete abstraction. Is it the cost of running a full node? Is it the total number of full nodes? Do we need to create some kind of “Decentralization Index” that’s calculated via an elaborate formula that factors in both of those things along with a whole host of other variables, e.g., metrics for “decentralization of Bitcoin development,” the geographic distribution of full nodes, and the geographic and hash-power distribution of individual miners and mining pools?
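For what it’s worth, the kind of composite metric that paragraph gestures at might look something like the toy sketch below. Every input, weight, and formula here is invented purely for illustration; no such agreed-upon index exists, which is rather the point.

```python
# Toy sketch of a hypothetical "Decentralization Index." All weights,
# thresholds, and inputs are invented for illustration only.

def gini(shares):
    """Gini coefficient of a distribution (0 = perfectly equal)."""
    xs = sorted(shares)
    n = len(xs)
    total = sum(xs)
    weighted = sum((i + 1) * x for i, x in enumerate(xs))
    return (2 * weighted) / (n * total) - (n + 1) / n

def decentralization_index(full_nodes, node_cost_usd, hashpower_shares):
    """Combine several proxies into one arbitrary score in [0, 1]."""
    node_score = min(full_nodes / 10_000, 1.0)        # more nodes = "better"
    cost_score = 1.0 / (1.0 + node_cost_usd / 1_000)  # cheaper nodes = "better"
    mining_score = 1.0 - gini(hashpower_shares)       # flatter hash power = "better"
    return 0.4 * node_score + 0.3 * cost_score + 0.3 * mining_score
```

Even this throwaway version raises the obvious questions: why those weights, why those proxies, and why should anyone agree to be bound by the resulting number?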
There’s also a huge problem with the third claim. Even if we assume that a larger block size limit
would decrease “decentralization,” that wouldn’t automatically mean we shouldn’t implement it. So for example, if we could increase transaction throughput 1000-fold at the cost of a 0.001 percent reduction in “decentralization,” that would presumably be a fantastic tradeoff.
And I actually think there’s also a bit of a problem with the first claim. It’s not that it’s
wrong. In fact, I’ve often summarized Bitcoin’s value proposition as the “first form of money in the world that is both decentralized AND digital.” But a point that shouldn’t be lost is that decentralization is a means to an end. We don’t care about “decentralization”
per se. We care about Bitcoin preserving the properties of sound money. So instead of asking how changes to the block size limit might impact this abstraction of “decentralization,” I think it’s far more useful to ask how they might impact Bitcoin’s monetary properties. I’ve argued
before that good money has three important properties that can be mapped, more or less one to one, onto the three traditional functions of money. Those properties are:
- reliable scarcity (the “store-of-value” aspect of money);
- transactional efficiency (the “medium-of-exchange” aspect of money); and
- network effects/widespread acceptability (the “unit-of-account” aspect of money).
Taking those properties one at a time, would a larger block size limit pose a threat to Bitcoin’s reliable scarcity (i.e., the 21M cap)? I really don’t think so. I guess you might argue that larger blocks could lead to dangerous levels of mining centralization which would, in theory, make it easier for a small cabal of miners to push through a fork raising the limit (perhaps under pressure from a hostile government). But I don’t find this scenario very plausible. Again, a larger limit doesn’t necessarily mean
no limit;
a (safe) larger limit can be set through a BU-type emergent process. And
as I’ve argued before, the Schelling point protecting Bitcoin’s limited supply is much stronger than the Schelling point that tends to follow the highest-PoW chain.
On the other hand, would keeping Bitcoin’s block size limit small pose a threat to its scarcity? Well, I think that’s a little bit more plausible. If you force people to do the vast majority of their transactions off-chain, that at least potentially opens the door for essentially fractional-reserve banking which could be hugely inflationary.
What about “transactional efficiency”? Well, that one seems pretty easy. Larger blocks mean lower transaction fees. If the block size limit is successfully kept at 1 MB (and we heroically assume that adoption nevertheless continues apace), that would translate to
soaring fees. Imagine 7 billion people attempting to use a system that allows for, at most, 250,000 transactions per day or about 90 million per year. (And again,
so-called “off-chain scaling” solutions can’t substitute for actual on-chain scaling.) So this one’s a clear victory for larger blocks, right? Well,
I think so, but the small blockers would probably argue that transaction fees are only one measure of “transactional efficiency.” Transaction fees focus on the sender, but we also need to consider things from the recipient’s perspective. People receiving money need to be able to confirm that they have in fact been paid. And, the small blockers will point out, the only way to do that trustlessly is by running a full node, the cost of which is increased by allowing larger blocks. Well, yeah, but so what? Maybe I’m missing something, but it seems obvious to me that, for almost all users, for almost all use cases, confirming a transaction by consulting one (or several) trusted services (e.g., block explorers) and/or by running your own thin client is going to be good enough. Let’s say I sell you my crappy used car for 3 BTC and give you a payment address. You tell me you’ve paid. I check Blockchain.info. It shows that the payment was made and has received 6 confirmations. I check two other block explorers that tell me the same thing. I guess they
could all be in cahoots with you and willing to throw away their valuable reputations to help you defraud me, but it seems unlikely. I don’t actually know much about SPV nodes but I see a source that says that some of them “put [their] faith in high difficulty as a proxy for validity.” So I’m assuming what this means is that I can run a thin client that would allow me to just download the block that I’m interested in (the one that includes your supposed payment). I don’t have the full blockchain so I can’t personally confirm that the block is valid, but I can verify that someone created a block with a certain difficulty that includes a transaction that purports to send me 3 BTC. And based on that difficulty I can know that, if it is fake, someone spent the equivalent of about $16,000 in resources to produce it. And then I can maybe just download the headers of the 5 blocks that purport to extend that block and now also know that if I haven’t been paid, someone has now spent the equivalent of about $96,000 to produce this bogus chain? (And presumably even with much larger blocks, the resource requirements for running a thin client of this type are going to be essentially negligible?) Yeah, if that’s all correct, that seems good enough. Also, prioritizing keeping the cost of running a full node low over keeping transaction fees low seems incoherent. I mean, ok great, I
could “trustlessly” verify that I’ve received an on-chain payment… but no one can actually afford to send me one. In other words,
who cares if individuals can trustlessly verify the state of an interbank settlement network that they can’t afford to actually use? Why would they bother?
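As a sanity check, the back-of-envelope figures used in this section (the roughly 90 million transactions per year and the $96,000 cost of forging a six-confirmation chain) can be reproduced in a few lines. Note that the 250,000 tx/day capacity and the $16,000-per-block forging cost are the assumptions stated above, not independently derived estimates:

```python
# Sanity-check the figures used above. Both inputs are this essay's own
# assumptions, not independently derived estimates.

TX_PER_DAY = 250_000            # assumed max on-chain capacity at 1 MB
WORLD_POP = 7_000_000_000

tx_per_year = TX_PER_DAY * 365             # -> 91,250,000 ("about 90 million")
years_per_tx = WORLD_POP / tx_per_year     # years per person per on-chain tx

COST_PER_FAKE_BLOCK = 16_000    # USD, the difficulty-based estimate above
CONFIRMATIONS = 6
forging_cost = COST_PER_FAKE_BLOCK * CONFIRMATIONS   # -> 96,000

print(f"{tx_per_year:,} tx/year; ~{years_per_tx:.0f} years/tx per person; "
      f"${forging_cost:,} to forge {CONFIRMATIONS} blocks")
```

Put differently, even at the assumed 250,000 transactions per day, each of 7 billion people would get one on-chain transaction roughly every 77 years, which is the sense in which off-chain layers become mandatory rather than optional under a permanent 1 MB cap.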
That brings us to the final property of good money, network effects. Now to me, it seems obvious that allowing for more transactions via a larger block size limit allows for more users and thus is good for Bitcoin’s network effect. But I suppose a small blocker would argue that a larger block size limit would make Bitcoin “centralized” and insecure, thereby destroying its value proposition and discouraging adoption. I think the more fundamental point is that the importance of network effects means that Bitcoin can’t afford to be arbitrarily “conservative” on the block size issue. Excess “conservatism” in this context is potentially very reckless. If a competitor gets the (supposed) tradeoffs between greater throughput / lower fees and “preserving decentralization” closer to optimal, it will begin to steal market share from Bitcoin and eventually overtake its network effect advantage.