Microeconomics of a dynamic size limit

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
@Peter R.

Indeed, I just read it and tried to figure it out, but I do not understand this either. It is a Poisson process, memoryless, so the timing between blocks and transactions shouldn't matter - you should simply get the exponential PDF with a 600 s mean for when a transaction is included in a block, assuming no load.
@Peter R check out figures 4 and 5 in my paper that I just PMed you. It plots the theoretical probability against actual data collected from the Relay Network. So we are right. I think that the difference is that the graph in http://hashingit.com/analysis/34-bitcoin-traffic-bulletin is plotting "cumulative probability"; basically it's plotting:

y(x) = integral from t = 0 to x of p(t) dt

where we are just plotting the Poisson probability p(t) itself.

In other words, they are summing the probabilities from 0 to x...

EDIT: oops, I see that they are plotting two different y-axes now. But I think that the difference is that this is the probability of a particular transaction confirming, not of any block being found. So this transaction needs to propagate throughout the network and be incorporated into the block that mining pools are working on. This is very different from the Poisson process of block discovery.

EDIT2: The red line actually looks a lot like the block discovery probability if you remove 1-txn blocks. At least in shape if not in magnitude... that makes sense to me.
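
For anyone who wants to see the distinction concretely, here is a minimal sketch (assuming the usual 600 s mean block interval) of the exponential block-arrival PDF versus its cumulative integral; the function names are just mine for illustration:

```python
import math

MEAN_BLOCK_TIME = 600.0  # seconds, assumed average block interval

def block_arrival_pdf(t):
    """Exponential PDF for the time until the next block (memoryless Poisson process)."""
    return math.exp(-t / MEAN_BLOCK_TIME) / MEAN_BLOCK_TIME

def block_arrival_cdf(t):
    """Cumulative probability that a block has been found by time t,
    i.e. the integral of the PDF from 0 to t."""
    return 1.0 - math.exp(-t / MEAN_BLOCK_TIME)

for t in (60, 300, 600, 1200):
    print(f"t = {t:>4} s   pdf = {block_arrival_pdf(t):.5f}/s   cumulative = {block_arrival_cdf(t):.3f}")
```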
 
Last edited:

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
I think you're talking about the resource consumption provided for "free" by full nodes, while I actually believe Gavin is talking about the exablock I described.
I am talking about all the uncompensated costs in the system.

Example:
The only way for me to have 100% certainty regarding my balance is to process 100% of the blockchain, going all the way back to the genesis block. Thus, other people's use of the blockchain imposes uncompensated costs on me.

Freezing access to the blockchain is obviously not a viable solution. The real long-term solution is to find better cryptographically secure ways for me to get the certainty I want without needing to process 100% of the blockchain. Then other people's usage of it does not harm mine.

There are probably about half a dozen examples like this that all need to be addressed to have a sustainable long term scaling plan.

In short, I don't think the system is "broken" if we can grow block sizes and thus accommodate user growth
The fact that many people sincerely feel that we need to have block size limits is prima facie evidence that Bitcoin is broken (and the uncompensated costs are the proof). If it wasn't broken, we'd be talking about strategies for reaching our block size goals instead.
 
  • Like
Reactions: majamalu

Roger_Murdock

Active Member
Dec 17, 2015
223
1,453
The only way for me to have 100% certainty regarding my balance is to process 100% of the blockchain, going all the way back to the genesis block. Thus, other people's use of the blockchain imposes uncompensated costs on me.
But is it possible that those costs are more than offset by uncompensated benefits derived from the all-important network effect? In other words, those other people are using the blockchain because they derive utility from it. Presumably the more they can use the Bitcoin network, the more utility they derive from it. The more utility they derive from the Bitcoin network, the more they value that network (and the coins themselves). And the more they value Bitcoin, the more valuable your Bitcoin holdings become.
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
But is it possible that those costs are more than offset by uncompensated benefits derived from the all-important network effect? In other words, those other people are using the blockchain because they derive utility from it. Presumably the more they can use the Bitcoin network, the more utility they derive from it. The more utility they derive from the Bitcoin network, the more they value that network (and the coins themselves). And the more they value Bitcoin, the more valuable your Bitcoin holdings become.
The effects you're talking about work even better if the consumers of bandwidth and storage pay the providers of those things so that we can be sure there will always be enough of them to keep the network operating optimally.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
What if there was a Merkle tree of subblocks in the blockchain, and you could verify that X BTC had moved "into" a subblock (i.e. all transactions containing these TxOuts occur in the subblock), Y BTC had moved "out" of the subblock, and X >= Y? If none of your transactions occurred in a subblock, you would not need to access it or the transaction history inside it, but you could still see that it is impossible for it to have diluted your holdings.

EDIT: of course the miners would be verifying the complete history of the subblocks that they include in their blocks.
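
EDIT2: As a very rough sketch of the data structure I have in mind (all the names here are hypothetical, and this glosses over how the in/out totals would actually be proven):

```python
import hashlib
from dataclasses import dataclass

def dhash(data: bytes) -> bytes:
    """Double SHA-256, as used elsewhere in Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

@dataclass
class SubblockHeader:
    """Hypothetical per-subblock summary: only the totals are committed,
    not the transaction history inside the subblock."""
    subblock_id: bytes
    btc_in: int    # satoshis moved "into" the subblock (X)
    btc_out: int   # satoshis moved "out" of the subblock (Y)

    def leaf_hash(self) -> bytes:
        return dhash(self.subblock_id +
                     self.btc_in.to_bytes(8, "little") +
                     self.btc_out.to_bytes(8, "little"))

def merkle_root(headers) -> bytes:
    """Merkle root over the subblock summaries, committed in the block."""
    level = [h.leaf_hash() for h in headers]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dhash(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def cannot_dilute(headers) -> bool:
    """The light check: every subblock you don't care about must satisfy
    X >= Y, so it cannot have created coins out of thin air."""
    return all(h.btc_in >= h.btc_out for h in headers)
```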
 

albin

Active Member
Nov 8, 2015
931
4,008
@Justus Ranvier : To borrow Peter R's terminology, I think what Gavin is discussing here is how far to set the limit above the free market equilibrium Q*. It seems pretty straightforward to make the limit twice as high as Q* to deal with sudden surges (but without letting a malicious actor somehow inject a block 100x larger than what everyone else is expecting).
Maybe it would make sense for this coefficient to also be determined dynamically, by making it a function of some measure of central tendency of the blocksize sample used to determine the mean.
 

Roger_Murdock

Active Member
Dec 17, 2015
223
1,453
The effects you're talking about work even better if the consumers of bandwidth and storage pay the providers of those things so that we can be sure there will always be enough of them to keep the network operating optimally.
After thinking about it a little bit more, I think the issue with my suggestion might be the different identities of the affected parties. The uncompensated benefits I'm describing are conferred on holders whereas the uncompensated costs you're worried about are presumably borne by node operators, no?
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
Just going back to Gavin's OP which presents a dynamic block limit solution:
At every difficulty change:

Find average sigops/sighash per block over last 2016 blocks
Find average bytes per block

Maximum = some multiple of those percentages (or some minimum if blocks are empty). If you want a little fee pressure-- blocks on average half full-- choose 2
Is the main problem that whatever multiple is chosen, it might set a limit which is too low to handle a few extra-busy days of activity - especially if the limit is kept tight for higher "fee pressure"?

In which case, a simple flex-cap would be to allow unlimited block sizes but increase the required difficulty in proportion to how far the block size exceeds the dynamic sub-limit. This doesn't have to be linear, but it would make a 1 TB block require a ridiculously high difficulty that is all but impossible to reach.

I know this is not a new idea, but it is worth thinking about again.
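
Something like the following sketch, where both the baseline multiple and the super-linear penalty exponent are just illustrative numbers:

```python
def dynamic_sub_limit(last_2016_sizes, multiple=2, floor=1_000_000):
    """Gavin-style baseline: some multiple of the average block size over
    the last difficulty period, with a minimum in case blocks are empty."""
    average = sum(last_2016_sizes) / len(last_2016_sizes)
    return max(multiple * average, floor)

def required_difficulty(base_difficulty, block_size, sub_limit, exponent=2.0):
    """Flex-cap: blocks at or below the sub-limit need the normal difficulty;
    bigger blocks need super-linearly more work, so a 1 TB block would
    require an absurd amount of it."""
    if block_size <= sub_limit:
        return base_difficulty
    return base_difficulty * (block_size / sub_limit) ** exponent
```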
 
Last edited:
  • Like
Reactions: lunar

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Just thought I'd mention at this stage that the dynamic cap adjustment in e.g. Monero is based on the *median* size of the last N blocks, with a certain minimum size. That is then multiplied by a factor (in Monero's case, x2).

The reason they use median instead of average, I believe, is that the median will be more robust in the presence of extremes which might occur when actors are trying to push the cap hard in some direction.
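
In pseudo-Python, roughly (the constants here are illustrative, not Monero's actual parameters):

```python
import statistics

def median_based_cap(last_n_sizes, minimum_size=300_000, factor=2):
    """Dynamic cap in the Monero style described above: a factor times the
    median of the last N block sizes, never falling below a minimum.
    The median resists manipulation by a minority pushing extreme sizes."""
    return factor * max(statistics.median(last_n_sizes), minimum_size)
```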
 

BitMayor

New Member
Dec 18, 2015
1
0
I am a fan of a dynamic solution. I would propose something along the lines of a "standard deviation" formula for best accuracy at setting a range. Dr. Deming did a lot of work in this field in the 1940s, and I'm a fan of his work. *With the amendment that the max size can't go below 1 MB, this will prevent any unanticipated gaming (even if not practical, at least for the naysayers).
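
One way to read that, as a sketch (the sample window and the number of standard deviations are my own assumptions):

```python
import statistics

def stddev_limit(recent_sizes, k=3, floor=1_000_000):
    """Control-chart style cap: k standard deviations above the recent mean,
    but never below 1 MB so the limit can't be gamed downwards."""
    mean = statistics.mean(recent_sizes)
    spread = statistics.pstdev(recent_sizes)
    return max(mean + k * spread, floor)
```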

I am familiar with ramping up factory production - similar game theory. If you ever want to grab a bubble tea at the Limered downtown, I would love to meet up sometime. I'm nearby in the Berkshires.
 

rocks

Active Member
Sep 24, 2015
586
2,284
@chriswilmer : yes, exactly. There is certainly a fear that with no limit an economically irrational actor might be able to break the system with a terabyte-sized block....
The reason I think this fear is misplaced is that a massive terabyte block, larger than most can handle, would normally be orphaned due to propagation times. The massive block would simply take too long for others to receive and build on.

After IBLT and weak blocks are implemented - solutions which significantly lower the block propagation times of "valid blocks" (i.e. blocks that only contain previously seen transactions paying fees) - the cost of the massive block attack goes up even more. The reason is that to use the attack, the attacker needs to either: 1) pre-mine its transactions but suffer high propagation times, since neither IBLT nor weak blocks can be used here, or 2) pre-announce its self-created transactions to lower propagation times, but risk losing fees to other miners (it essentially becomes a spam attack, not a massive-block attack).

Once real scaling solutions are implemented, the massive block attack simply becomes very uneconomical. I've never understood the worry.
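
To put a rough number on the orphan risk - a back-of-the-envelope sketch, assuming Poisson block arrivals and treating propagation time as the only variable:

```python
import math

def orphan_probability(propagation_seconds, mean_block_time=600.0):
    """Probability that some other miner finds a competing block while the
    oversized block is still propagating across the network."""
    return 1.0 - math.exp(-propagation_seconds / mean_block_time)

# A block that takes 20 minutes to reach the rest of the network is
# orphaned with probability of roughly 86%, so it almost never "sticks".
print(orphan_probability(20 * 60))
```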
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Before we dig into algorithms we need to understand what we are trying to do:

Let us presume that there are enough transactions to fill as big a block as we want.

Let's suppose that you order the txns by fee and stick them in a block so f = fee(pos) is a decreasing function.

Now, let's observe that for network bandwidth or transaction validation reasons, nodes in the network have a validation rate that can be measured in seconds/MB. If you plot the number of nodes that can handle a particular validation rate, you probably get something near a Gaussian distribution.

So, at some block size (i.e. network transaction throughput), some nodes cannot handle the traffic and must drop off the network (in practice they might first stop relaying, and next only get committed blocks, but at some point in theory a node will simply be too slow). For example, I have an Orange Pi (a Raspberry Pi clone, but faster) that can just barely sync up with the testnet large-block branch.

What is this relationship? When do we choose to drop slow nodes in preference to users' demand for transactions?

Note that this question is essentially looking at the block size debate as a continuous spectrum rather than a binary one.

Is it "fair" for us to act as the "central bank" and program this relationship or should we simply leave it to the market?
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
The reason I think this fear is misplaced is that a massive terabyte block, larger than most can handle, would normally be orphaned due to propagation times. The massive block would simply take too long for others to receive and build on.
...
Once real scaling solutions are implemented, the massive block attack simply becomes very uneconomical. I've never understood the worry.
This is exactly right. Depending on what numbers you assume for the propagation impedance (seconds per MB), the cost to produce even a 128 MB spam block is large. The reason is that an attacker would have to attempt the attack so many times--and lose so many block rewards--before the perfect conditions arose where his block "stuck." That's also assuming the other miners and nodes try to accept the ridiculous block in the first place...
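
For a rough sense of scale, a sketch of the expected cost (the propagation impedance and the 25 BTC reward are just assumed inputs):

```python
import math

def expected_attack_cost(block_mb, impedance_s_per_mb,
                         block_reward_btc=25.0, mean_block_time=600.0):
    """Expected block rewards forfeited before one oversized spam block
    finally "sticks", assuming Poisson block arrivals."""
    delay = block_mb * impedance_s_per_mb
    p_stick = math.exp(-delay / mean_block_time)   # no competitor during propagation
    expected_failures = (1.0 - p_stick) / p_stick  # geometric number of failed tries
    return expected_failures * block_reward_btc

# e.g. a 128 MB spam block with a 10 s/MB propagation impedance
print(f"{expected_attack_cost(128, 10):.0f} BTC in expected lost rewards")
```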

 

chriswilmer

Active Member
Sep 21, 2015
146
431
I still maintain that this discussion is putting too much weight on whether this multiple is hard-coded or not. It's a COMPLETELY different situation to have the blocksize limit itself decided by the free market or hard-coded (which is NOT what is being discussed here), vs. HOW QUICKLY the limit adapts to what the free market wants (which IS what is being discussed here). You are all probably correct about the pure free-market approach being better... but the practical difference is likely to be really subtle.

Honestly, I think if for historical reasons we had a dynamic limit with a multiple of 2, these discussions we're having would never, ever come up (can you really imagine saying this --> "argh, the hubris of these central planners, we had 1 GB blocks all week and suddenly there were so many transactions we needed 3 GB blocks, and thus had a whole week of slower processing until the limit doubled!").
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
After thinking about it a little bit more, I think the issue with my suggestion might be the different identities of the affected parties. The uncompensated benefits I'm describing are conferred on holders whereas the uncompensated costs you're worried about are presumably borne by node operators, no?
Welcome to the forum! You've always been one of my favourite posters on Reddit. Glad to have you join us!!
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
After thinking about it a little bit more, I think the issue with my suggestion might be the different identities of the affected parties. The uncompensated benefits I'm describing are conferred on holders whereas the uncompensated costs you're worried about are presumably borne by node operators, no?
I think this is accurate.

There is significant overlap between the two groups, which means we can get away with quite a bit (but not an unlimited amount) of non-optimal design in terms of how the infrastructure is paid for.

As long as the compensation for non-mining infrastructure is implicit rather than explicit, there will always be uncertainty surrounding the question of whether or not it will be adequate to meet future demand.

Once the payments are explicit, then the uncertainty goes away and we can get on with figuring out how to attract as many customers as possible. The debate I'd like to have right now is, "how do we make so many people want to use Bitcoin that we need 100MB blocks to accommodate their transactions?"
 

rocks

Active Member
Sep 24, 2015
586
2,284
The debate I'd like to have right now is, "how do we make so many people want to use Bitcoin that we need 100MB blocks to accommodate their transactions?"
I think the larger ecosystem has been addressing this question very well. This is what the >$800M in VC investment has gone towards, plus all of the other developments at smaller businesses, merchants, larger firms, NASDAQ, etc.

The problem is now that after many entities have invested in and worked on how to get people to use Bitcoin, all of a sudden they are being told "bitcoin can't scale" and "we are going to force a small block fee market". This is a massive problem.

The only priority discussion that should be happening at the protocol/client level is, "Now that large numbers of people have already invested significant time and resources in making solutions that enable people to use Bitcoin, how can we improve Bitcoin so that it can scale larger to meet that demand?"

There are multiple challenges to getting Bitcoin to the 1GB block level, but different challenges need to be addressed at different times. Block propagation times are currently a significant factor limiting miners' flexibility to make larger blocks economical. To me, we should be asking how best to fix this issue.

IBLT and weak blocks together would go a very long way towards motivating miners to produce larger blocks economically. What other solutions are there and how are they going to get implemented?

Edit: @Gavin Andresen welcome to bitco.in and thank you for all of your amazing efforts on getting Bitcoin to where it is today. Many of us appreciate your leadership in the past and you have many supporters for the direction and vision you have for Bitcoin. If it does not always seem so, that is because many of us were censored from the 2 main bitcoin forums and scattered to various corners. Anyway without getting too far OT, thanks again for everything, you helped make the most amazing project I've ever seen.
 
Last edited:

HostFat

Member
Sep 13, 2015
39
48
What do you think about making the "dynamic size limit" proposed by @Gavin Andresen (or something similar) not a limit but a piece of "advice"?

So, every block over this value will get "lower priority", example:
Size advised: 2 MB

There are two competing blocks on the network at the same time:
- 2.5 MB
- 3 MB

Whichever block nodes download first, they will give priority to the smaller one (2.5 MB in this example) when building the next block.
Nodes should also apply this priority while downloading: if they are downloading a 5 MB block and find that there is a new 3 MB one, they'll try to stop downloading the 5 MB block and start downloading the 3 MB one.
There can be different rules on when to do this, which can depend on how much of the bigger block they have already downloaded.
This would fix the huge-block attack: it would be impossible to push a huge block onto the network, since smaller blocks would always have priority.


And it's the same for blocks below the advised size, for example:
Size advised: 2 MB

There are two competing blocks on the network at the same time:
- 1 MB
- 1.5 MB

Whichever block nodes download first, they will give priority to the bigger one (1.5 MB in this example) when building the next block.
This is to avoid empty blocks.


Other examples:
Size advised: 2 MB

4 different blocks from 4 pools:
- 1.5 MB
- 1.8 MB <- winner
- 2.5 MB
- 3.1 MB

4 different blocks from 4 pools:
- 1.8 MB
- 1.7 MB
- 2.1 MB <- winner
- 2.8 MB

The block closer to the advised size wins.
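
In code, the selection rule I have in mind is just this sketch (the sizes are the ones from the examples above):

```python
def preferred_block(advised_size, competing_sizes):
    """Among blocks found at roughly the same time, prefer the one whose
    size is closest to the advised size. This also covers the over-size
    and under-size cases above."""
    return min(competing_sizes, key=lambda size: abs(size - advised_size))

MB = 1_000_000
print(preferred_block(2 * MB, [1.5 * MB, 1.8 * MB, 2.5 * MB, 3.1 * MB]))  # 1.8 MB wins
print(preferred_block(2 * MB, [1.8 * MB, 1.7 * MB, 2.1 * MB, 2.8 * MB]))  # 2.1 MB wins
```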



The "dynamic size advised" should be updated every 2016 blocks as Gavin as proposed.


I see that there could be a problem with miners that, after they see a competitor's block, try to make a new block that is just slightly smaller.
Maybe (or maybe not?) a bigger pool would be able to start from scratch on creating a new, smaller block and still find it before the next one.
Do you think this kind of action (attack) is possible? I mean, economically worthwhile?
 
Last edited:

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@HostFat

Fascinating how a simple "advised priority" rule can change so much. There is probably much to explore in the realm of softer rules rather than heavy-handed hardcoded limits (even though I think unlimited is the way to go and we're talking here about the lesser of various market interventionist evils).
 

HostFat

Member
Sep 13, 2015
39
48
I agree that no limit at all can still be the right solution, relying on economic forces to prevent bad behavior, but it requires a lot of faith from those who don't know or trust these kinds of free-market rules.
I was mainly trying to find something to counter empty blocks and the possible 10 TB block from the big bad guy with unlimited resources :)