Microeconomics of a dynamic size limit

Gavin Andresen

New Member
Dec 9, 2015
19
126
I'm hoping Peter R or maybe an academic economist might get inspired and model this mathematically (learning enough statistics and micro-economics to do that myself has never popped to the top of my TODO list).

This was advice I had for the lead developer of an altcoin who worries that having no block size limit would mean a low-transaction-fee death spiral. Rather than trying to convince them they're wrong, I said this:

If you're concerned about fee pressure, you should implement a dynamic cap.

At every difficulty change:

Find average sigops/sighash per block over last 2016 blocks
Find average bytes per block

Maximum = some multiple of those averages (or some minimum if blocks are empty). If you want a little fee pressure-- blocks on average half full-- choose 2.

If you want a lot of fee pressure, choose 1.2 (blocks on average 80% full).

See the charts at http://hashingit.com/analysis/34-bitcoin-traffic-bulletin to get an idea of how long the average transaction will have to wait at various percentages.

You should also change the default mining policy to "produce average blocks" (miners who care to influence the size up or down, or want to pick up extra fees can change it).

Then you're done and never have to touch the size limit again.
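A minimal sketch of that rule in Python (the names, block fields, and floor constants below are illustrative assumptions, not taken from any existing client):

```python
# Sketch of the dynamic cap described above. The constants and the block
# fields (`.size`, `.sigops`) are hypothetical, not taken from any client.

RETARGET_INTERVAL = 2016   # blocks between difficulty changes
MULTIPLE = 2.0             # 2.0 ~ blocks half full on average; 1.2 ~ 80% full
MIN_SIZE_CAP = 1_000_000   # floor in bytes so the cap can't collapse when blocks are empty
MIN_SIGOP_CAP = 20_000     # floor on signature operations per block

def new_limits(recent_blocks):
    """Recompute the byte and sigop caps at a difficulty change.

    `recent_blocks` is the last RETARGET_INTERVAL blocks, each assumed to
    expose `.size` (bytes) and `.sigops` (signature-operation count).
    """
    assert len(recent_blocks) == RETARGET_INTERVAL
    avg_bytes = sum(b.size for b in recent_blocks) / RETARGET_INTERVAL
    avg_sigops = sum(b.sigops for b in recent_blocks) / RETARGET_INTERVAL

    size_cap = max(int(avg_bytes * MULTIPLE), MIN_SIZE_CAP)
    sigop_cap = max(int(avg_sigops * MULTIPLE), MIN_SIGOP_CAP)
    return size_cap, sigop_cap
```

With the default mining policy set to "produce average-sized blocks", the cap stays roughly where it is until miners deliberately produce larger or smaller blocks, which then feeds into the next adjustment.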

It would be spiffy to contrast that approach with the "flexcap" proposed by Mark Friedenbach, co-founder of Blockstream and founder of the altcoin Freicoin. Seems to me a simple dynamic limit addresses the same (perhaps unfounded-- let's put that aside for a bit) concerns as flexcap, but in a much simpler way. My objection to flexcap has always been that it is just a complicated way of setting a minimum transaction fee...
 

chriswilmer

Active Member
Sep 21, 2015
146
431
Back when I lived under a rock and was pondering this issue on my own, I thought we should do the above with the "2" multiple (to keep blocks ~half full on average). Seems so simple.

Probably the more rigorous approach to choosing the multiple is to look at empirical payment network behavior during holiday shopping days... (i.e., are the busiest days generating 2 times, 5 times, or 10 times the number of transactions of average days?).
 

Gavin Andresen

New Member
Dec 9, 2015
19
126
RE: busiest shopping days: http://www.businesswire.com/news/home/20060103005412/en/Visa-Payment-Network-Processes-Record-179-Million "On December 23, with last-minute shoppers and merchants relying heavily on Visa, the payment network's data centers securely processed a record 179 million transactions in a 24-hour period, surpassing last year's peak processing day by 22 percent. That's a marked contrast to a normal day, when the VisaNet system processes roughly 100 million debit and credit transactions."

That's from 2006, but I doubt the ratio has changed much in the last ten years.
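Spelling that ratio out (rough numbers, treating the 100 million figure as a typical day):

```python
# Rough peak-to-average ratio from the Visa press release quoted above.
record_day_tx = 179e6    # the December 23 record cited in the release
typical_day_tx = 100e6   # the "normal day" figure from the same release

peak_factor = record_day_tx / typical_day_tx
print(round(peak_factor, 2))   # 1.79 -- the record day was under 2x a normal day
```

So at least for card payments, a cap at 2x the average would have left headroom even on the record day.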
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
From a first-principles standpoint, the market equilibrium (where the supply curve intersects the demand curve) is the correct solution by definition, since it represents the weighted preferences of all market participants.

Defining any particular outcome as "wrong" is perilous, since that usually rests on the implicit assumption that the observer's ability to distinguish desirable from undesirable outcomes is privileged over that of the participants in the market.

We do know there are problems with the supply curve for mining: the creators of transactions and the producers of blocks consume resources for which they do not pay, because our current network protocol provides no mechanism for doing so.

Most notably, the resources for which price discovery does not exist are:
  • Bandwidth
  • UTXO set storage
  • Historical blockchain data
Someone could probably spend some time proving that all of the plausible disaster scenarios for Bitcoin operation with no protocol-mandated block size limit can be traced to one or more of the above problems.

On the other hand, maybe the time would be better spent just fixing them.
 

chriswilmer

Active Member
Sep 21, 2015
146
431
@Justus Ranvier : To borrow Peter R's terminology, I think what Gavin is discussing here is how far to set the limit above the free market equilibrium Q*. It seems pretty straightforward to make the limit twice as high as Q* to deal with sudden surges (but without letting a malicious actor somehow inject a block 100x larger than what everyone else is expecting).
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
But where does that magic 100x number come from?

My point is, if the system is such that you even have a concept of "a block that is too big", then the system itself is broken.

A terabyte-sized block is only a problem if the "economically irrational actor" doesn't correctly compensate the network for the cost such a block incurs.

The ideal network says, "you want to pay us for a useless 1 TB block? Fine, we'll be happy to take your money. Have a nice day."

What we need to do is identify the deficiencies in the P2P network design that prevent Bitcoin from behaving in the ideal manner and correct them.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
I think feeding into the whole idea of hardcoded caps, while it might win a few battles, loses the war. Sooner or later Bitcoin is going to require its supporters to know about the economics of mining incentives, which becomes a hopeless task when the picture is clouded by interventionist measures. Peter R's paper is a great start.

Really, I think the low-fee death spiral is answered by the fact that miners will simply keep including transactions until the estimated cost of including an additional tx in a block (due to orphaning, etc.) exceeds the fee on that tx. The arguments against allowing that economic limit to determine block sizes are that the resulting blocks may be too big for Tor nodes, bad for decentralization, etc. I don't see how an argument can be made against it on low-fee-death-spiral grounds, though.
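A back-of-the-envelope version of that stopping rule, in the spirit of Peter R's fee market paper; the propagation model and constants below are illustrative assumptions, not measurements:

```python
import math

BLOCK_INTERVAL = 600.0    # expected seconds between blocks
SECONDS_PER_BYTE = 1e-5   # assumed extra propagation delay per byte added to a block
BLOCK_VALUE = 25.0        # coinbase reward plus fees already in the block (assumed, BTC)

def marginal_orphan_cost(tx_bytes, block_value=BLOCK_VALUE):
    """Expected revenue lost to orphan risk by adding one more transaction.

    Adding `tx_bytes` delays propagation by roughly `tau` seconds; the chance
    that a competing block appears during that delay is about
    1 - exp(-tau / BLOCK_INTERVAL), and losing that race forfeits the block.
    """
    tau = tx_bytes * SECONDS_PER_BYTE
    return block_value * (1.0 - math.exp(-tau / BLOCK_INTERVAL))

def worth_including(fee, tx_bytes):
    """The miner keeps adding transactions while the fee exceeds the cost."""
    return fee > marginal_orphan_cost(tx_bytes)

# Example: a 500-byte tx adds ~5 ms of delay, costing roughly 0.0002 BTC of
# orphan risk, so any fee above that makes it worth including.
print(marginal_orphan_cost(500))
```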
 

chriswilmer

Active Member
Sep 21, 2015
146
431
@Justus Ranvier: OK, that's an interesting perspective... I just wanted to be clear that this is a categorically different discussion from a top-down decision about what the block size limit should be to the left of Q* (i.e., forcing a smaller block size limit than what the market wants). You seem to be implying that even if the free market settles on an equilibrium block size X (without any top-down intervention), the system would be "broken" if it prevented a suddenly 100x larger block (and I'm not saying you're wrong... I just wanted to make sure we were on the same page).

@Peter R seemed to be OK with a block size limit, using any of a variety of schemes, provided it was always larger than Q*.
(Peter, please jump in if I am misrepresenting you!)
But where does that magic 100x number come from?
A terabyte-sized block is only a problem if the "economically irrational actor" doesn't correctly compensate the network for the cost such a block incurs.
Not to belabor the point... but if the market wants to create terabyte-sized blocks, the market will create terabyte-sized blocks under the scheme Gavin is suggesting. What won't happen is a sudden jump from 1 GB blocks to 1 TB blocks in a single two-week period. So I think your point is still valid, but it's more subtle than not letting block sizes be decided by the free market. It's more about how quickly the Bitcoin network adjusts to market dynamics.

(also, I believe it is relatively uncontroversial to say that we don't really know the timescales over which the market reaches equilibrium... but they're probably longer than 2 weeks).
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
In the long run, consensus on a block size 'cap' itself becomes a market process. It cannot be decided in a 'top down' manner.

What I mean is that in an ecosystem with several competing clients, entities such as miners can choose to observe whatever block size limit they want. Of course there is an overwhelming incentive to stay with the consensus. This means that network consensus to enforce certain limits will likely converge around 'Schelling points' and predictable caps like BIP101. But these limits cannot be wildly out of line with what is in the individual best interest of most network participants.

It makes sense that miners would have some incentive to restrict block sizes somewhat to maximize their overall fees, and might seek consensus mechanisms to do so. But it is impossible to predict exactly what these limits should be, as they are dependent on specific market conditions at any point in time.

It is hard to imagine why any node on the network would want to enforce a block size limit by rejecting a longer proof-of-work chain that miners continue to build on. So for non-mining nodes, what is the incentive to enforce a limit? I can see that they may require compensation for certain services like relaying transactions, as @Justus Ranvier mentioned, and may refuse to relay blocks wildly larger than 'normal'.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
But where does that magic 100x number come from?

My point is, if the system is such that you even have a concept of "a block that is too big", then the system itself is broken.

A terabyte-sized block is only a problem if the "economically irrational actor" doesn't correctly compensate the network for the cost such a block incurs.

The ideal network says, "you want to pay us for a useless 1 TB block? Fine, we'll be happy to take your money. Have a nice day."

What we need to do is identify the deficiencies in the P2P network design that prevent Bitcoin from behaving in the ideal manner and correct them.
That is what Gavin is talking about:

A solo miner could self-construct an exablock containing a single multi-input, non-standard tx that pays fees to itself.

@Gavin Andresen: what happened to the 100kB max tx size limit you were working on? I know you said you'd kick yourself in 10 years if tx sizes got bigger than that, but why isn't that, on balance, a good solution to the exablock attack now?
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Thanks for starting this thread, Gavin!

I think it's an interesting idea, and you're right that we could use a method like http://hashingit.com/analysis/34-bitcoin-traffic-bulletin to get an idea of how long the average transaction will have to wait at various percentages.

On @chriswilmer's point: one subtlety is that there are really two different values for Q*. There is a Q* for each block, and there is a sort of "average" Q*. Like Chris implied, there will be peaks (e.g., on Black Friday) where the per-block Q* will be much higher than the average value of Q*.

IMO, I'd like to see Qmax > Q* even on a per-block basis, so that miners are never forced to restrict the supply of block space (except to prevent the feared "TB spam block attack"). If that were the case, we'd want Qmax >= Q*_peak = Q*_avg x peak_factor.
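To make that concrete with the Visa-style peak factor mentioned earlier in the thread (the Q*_avg figure here is purely hypothetical):

```python
# Purely illustrative numbers: choose Q_max so that Q_max >= Q*_peak.
q_star_avg = 500_000                    # hypothetical average market-clearing block size (bytes)
peak_factor = 1.8                       # roughly Visa's record day vs. a normal day
q_star_peak = q_star_avg * peak_factor  # 900,000 bytes

multiple = 2.0                 # Gavin's "blocks on average half full" setting
q_max = multiple * q_star_avg  # 1,000,000 bytes >= q_star_peak, so even peak
                               # demand clears without hitting the cap
```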

In my "weak blocks" paper (that @Gavin Andresen and @awemany are currently reviewing), the zero-confirmation security comes from the fact that miners are always incentivized to build on top of the last weak block. If miners hit a limit during peak times that incentive could change, and I'd worry that what I believe will be a great benefit of weak blocks might be lost.
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
that is what Gavin is talking about.

a solo miner self constructed single tx multi input non std tx exablock that pays fees to itself.
I'm not sure you understand what I'm saying.

If a miner (or any network participant!) can take an action that consumes the resources of other participants without compensating them for this consumption, the network design is broken regardless of the size of the blocks.

If the consumption of other people's resources only occurs in the context of a price discovery process, then excessive consumption is impossible, also regardless of the size of the blocks.

You can spend time calculating the extent to which the network will still hobble along while being broken, and how far to push things before catastrophic failure occurs, or that time can be spent fixing the underlying flaws such that those calculations are unnecessary because the network isn't broken any more.
 

Gavin Andresen

New Member
Dec 9, 2015
19
126
@Justus Ranvier : well, we have to deal with the real world of broken network designs (like TCP/IP which we're all using to communicate here) where economically irrational attackers mount denial-of-service attacks all the time.

But we're going down a rabbit-hole I was hoping we wouldn't go down; yes, if we had a perfect Bitcoin protocol and everybody was on the same page with respect to economics, you would be correct. I'd still like to see some analysis of imperfect alternatives that might "feel" better to a lot of people.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Sanity check: isn't the chart of the probability distribution for the block arrival time in http://hashingit.com/analysis/34-bitcoin-traffic-bulletin incorrect? Shouldn't it just be a decaying exponential?


@Justus Ranvier

I agree with the spirit of what you're saying, but I think Gavin is being realistic. Because Bitcoin is so new, we haven't yet figured out how to properly deal with various cases. The block size limit acts like training wheels that we'll hopefully be able to remove when Bitcoin is all grown up.
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
@Justus Ranvier : well, we have to deal with the real world of broken network designs (like TCP/IP which we're all using to communicate here) where economically irrational attackers mount denial-of-service attacks all the time.
I hope that wasn't goalpost shifting I just felt. DDoS attacks are short-term problems caused by the lack of a mechanism for ISP customers to tell their upstream providers which packets they do and do not want to receive. A DDoS attack is outside a node's control.

The valid concerns about large blocks and high transaction rates involve things which are under a node's control, such as which packets are forwarded to other peers and which information is stored permanently.

Besides this difference, one is an acute problem and the other is chronic. They are entirely different threat models.

I'd still like to see some analysis of imperfect alternatives that might "feel" better to a lot of people.
Sometimes a short-term kludge is the only way to meet a deadline with the resources that are available. I have no real problem with that strategy as long as it's properly labelled as such and as long as the people who need to feel better are properly identified (because universal good feelings for any course of action are impossible).
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
I'm not sure you understand what I'm saying.

If a miner (or any network participant!) can take an action that consumes the resources of other participants without compensating them for this consumption, the network design is broken regardless of the size of the blocks.

If the consumption of other people's resources only occurs in the context of a price discovery process, then excessive consumption is impossible, also regardless of the size of the blocks.

You can spend time calculating the extent to which the network will still hobble along while being broken, and how far to push things before catastrophic failure occurs, or that time can be spent fixing the underlying flaws such that those calculations are unnecessary because the network isn't broken any more.
I think you're talking about the resource consumption provided for "free" by full nodes, while I believe Gavin is talking about the exablock I described.

Anyway, to your question: I don't think compensating full nodes will be a problem in the future, because with economic growth full nodes are sure to be run by thousands of merchants who will have both the financial incentive and a fiduciary responsibility to do so. Even early adopters like us are likely to keep running full nodes as our contribution to the network, since our appreciating holdings give us the incentive to do so.

In short, I don't think the system is "broken" if we can grow block sizes and, with them, the user base.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@Peter R

The question to me is, in response to broken network design, should the training wheels be inserted through code or should they be allowed to develop organically through various unforeseen market mechanisms? Removing the cap entirely and leaving the market to its own devices seems most likely to result in the cleanest and most thoroughgoing formation of such mechanisms.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Peter R.

Indeed, I just read it and tried to figure it out, but I do not understand this either. It is a Poisson process and therefore memoryless, so the timing between blocks and transactions shouldn't matter-- you should simply get the 600 s exponential PDF for when a transaction is included in a block, assuming no load.
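A quick simulation of the no-load case (illustrative only); it checks that the wait from a transaction's random arrival time to the next block is exponential with a ~600 s mean, not ~300 s:

```python
import bisect
import random

BLOCK_INTERVAL = 600.0     # mean seconds between blocks
SIM_SECONDS = 10_000_000   # length of the simulated window
N_TX = 100_000             # transactions arriving at uniformly random times

# Block arrivals form a Poisson process: independent exponential gaps.
block_times, t = [], 0.0
while t < SIM_SECONDS:
    t += random.expovariate(1.0 / BLOCK_INTERVAL)
    block_times.append(t)

# With no load, a transaction is included in the first block after it arrives.
waits = []
for _ in range(N_TX):
    arrival = random.uniform(0.0, SIM_SECONDS - 10 * BLOCK_INTERVAL)
    next_block = block_times[bisect.bisect_right(block_times, arrival)]
    waits.append(next_block - arrival)

# By memorylessness the wait is itself exponential with mean ~600 s (not
# ~300 s): the time since the last block says nothing about the time to the next.
print(sum(waits) / N_TX)
```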
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Peter R

The question to me is, in response to broken network design, should the training wheels be inserted through code or should they be allowed to develop organically through various unforeseen market mechanisms? Removing the cap entirely and leaving the market to its own devices seems most likely to result in the cleanest and most thoroughgoing formation of such mechanisms.
Good points. My current preference is of course Bitcoin Unlimited and @theZerg's "meta-cognition" stuff for accepting excessive blocks. Because BU puts the node operator in control, it will motivate all sorts of "out of band" communication infrastructure to be developed for agreeing on protocol changes in a decentralized fashion. Furthermore, with BU, we sort of simultaneously get both a block size limit and no block size limit.

Nonetheless, I think it would be quite worthwhile to do the analysis that Gavin is suggesting.