BUIP101: (closed) Set the default value of max blocksize cap (hard limit) to 10 terabyte

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
BUIP101: Set the default value of max blocksize cap (hard limit) to 10 terabyte
Date: 27 August 2018
Proposer: Norway

EDIT 1 September 2018: Just to be clear, this is a proposal to set the default value of EB (Excessive Block Size) to 10 terabytes. The user may adjust this value up or down.

The motivation of this proposal is to move the judgement of which max blocksize cap is safe from Bitcoin Unlimited to the individual miners.

How large a block a miner will or can accept should be a matter of competition, not a green light from developers for what's safe for everybody, no matter how little they invest in hardware and network connections.

Competition in this space has the potential to drive development. Specialized software and hardware (GPU, ASIC) for transaction handling will develop faster under competitive conditions.

Finally, I'd like to quote Gavin Andresen on the topic:


Yes, let’s eliminate the limit. Nothing bad will happen if we do. And if I’m wrong, the bad things would be mild annoyances, not existential risks, much less risky than operating a network near 100% capacity.
 
  • Like
Reactions: reina

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Norway, please use #101 for this.
However, BU does not have a block size hard cap per se, apart from the maximum value of the integer used for the excessive block size. I have just tested it in the BUCash client preferences and it accepts 2,147,483,648 but truncates anything larger, so the current maximum in BU is ~2.1GB.

This is massively ahead of the curve and basically doing what Gavin has suggested.
At the moment the constraint will not be the block limit, but the max message size.
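As a quick check of that ceiling (a standalone sketch, not BU code): a 4-byte signed integer tops out at 2,147,483,647, which is roughly 2.1 GB when read as a byte count.

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    // Ceiling of a 4-byte signed integer, read as a block size in bytes.
    const int32_t maxBytes = std::numeric_limits<int32_t>::max();             // 2,147,483,647
    std::cout << maxBytes << " bytes is about " << maxBytes / 1e9 << " GB\n"; // ~2.147 GB
    return 0;
}
```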
 
  • Like
Reactions: Norway

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@solex If the data type needs to be changed, it can be changed.

I'm proposing 10TB as the default value. Not a fixed value.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
I'm not releasing this into general discussion yet. Two comments:

@torusJKL @Norway, to be respectful of BU members' and voters' time, why don't you two duke it out and decide whether you really need two separate BUIPs on this topic?

BU doesn't have an explicit limit (but note that we do have a point beyond which we have not recently tested).

The maximum message size is 32 (IIRC) times the configurable "excessive block size". One prior problem was how large the largest RPC call could be because we needed to send this block to miners via a JSON RPC API. However, this is fixed in the 1.4.0.0 release via new RPCs that don't pass the entire block.
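For illustration only, the relationship theZerg describes could be sketched like this; the 32x multiplier is taken from his "IIRC" above, and the names are hypothetical rather than BU's actual identifiers:

```cpp
#include <cstdint>

// Hypothetical names and values; BU's actual identifiers may differ.
constexpr uint64_t DEFAULT_EXCESSIVE_BLOCK_SIZE = 32'000'000;   // current 32 MB EB default
constexpr uint64_t MESSAGE_SIZE_MULTIPLIER      = 32;           // "32 (IIRC) times" EB

// Largest network message a node would be prepared to handle for a given EB.
constexpr uint64_t MaxMessageSize(uint64_t excessiveBlockSize) {
    return excessiveBlockSize * MESSAGE_SIZE_MULTIPLIER;
}

static_assert(MaxMessageSize(DEFAULT_EXCESSIVE_BLOCK_SIZE) == 1'024'000'000,
              "32 MB EB implies ~1 GB max message size");
```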

So I'm not sure what I'd do with this BUIP if passed. Perhaps you'd want to add/change some wording, request QA python test code up to a certain amount, etc.

Also note that passage of a BUIP means I'll merge code if it appears; it does not force BU devs to run off and implement your BUIP. For example, if you request testing up to 10TB, IDK how to even start that given my resources. So you'd have to do the testing and provide the PR for the test and any necessary code changes.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@theZerg

It should be clear now that we are talking about the default setting when the code is shipped. What do you think the default should be? 10 TB, infinity, something else?

@Peter R suggested it should be the current level miners are using. This number may be different from miner to miner, depending on how big a block each miner accepts.

@Mengerian suggested that it should be a number describing the capacity of the software. But this may be different from hardware to hardware.

The whole point of this BUIP is to take the definition power that comes with default settings away from developers and give miners the responsibility for their own nodes.

A computer running out of memory is not a threat to bitcoin. It's just an annoyance to the person running that node.
 
The limit should be a "developer's recommendation". I trust BU devs with it.

But I also have no problem with voting on it, if that is what you are after.

For me the limit selection is a fair mechanism to let node operators influence the limit (a bit). Defaulting it to terabytes goes against this.

Edit: BitcoinSV is the node to let miners decide, BitcoinABC the developers, and BU the users. I'm not for changing this.
 
  • Like
Reactions: freetrader

Griffith

Active Member
Jun 5, 2017
188
157
As a block of this size would never be mined in the foreseeable future due to multiple factors (orphan risk, opportunity cost, and lack of tx volume, to name a few), if you find yourself at any point making changes to the code base (outside of tests) other than changing the value of the default EB variable, I would give this a hard no.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
The limit should be a "developer's recommendation". I trust BU devs with it.
What do you think the developers should base the default number on?

The current level? (The average, median or another formula, hashpower weighted maybe, of current signalling?)

The capacity? A number based on a minimum of bandwidth/hardware?

The risk/reward? A number that includes the risk of businesses not getting into the real bitcoin because future capacity is held down, indirectly keeping the price down? The opportunity cost of having a node crash and the time it takes to reboot and sync?

Maybe the best recommendation is 10 terabytes or more, when you take everything into consideration. My interpretation of Gavin Andresen says it is.

I think the default value has a lot of power, and I don't think developers should use this power to keep laggards on the network and hold the system back.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Norway, to reduce the debate to practical action, are we basically asking that the C++ data-type for block limit related variables be changed from 4 byte to 8 byte?

4-byte signed: -2,147,483,648 to 2,147,483,647
4-byte unsigned: 0 to 4,294,967,295
8-byte signed: -9,223,372,036,854,775,808 to 9,223,372,036,854,775,807
8-byte unsigned: 0 to 18,446,744,073,709,551,615
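Those ranges can be checked directly, and they show why a 10 TB default only fits once the variable is widened to 8 bytes (an illustrative snippet, not BU code):

```cpp
#include <cstdint>
#include <iostream>
#include <limits>

int main() {
    const uint64_t tenTB = 10'000'000'000'000ULL;  // 10 terabytes, in bytes

    std::cout << "4-byte signed max:   " << std::numeric_limits<int32_t>::max()  << '\n';
    std::cout << "4-byte unsigned max: " << std::numeric_limits<uint32_t>::max() << '\n';
    std::cout << "8-byte signed max:   " << std::numeric_limits<int64_t>::max()  << '\n';
    std::cout << "8-byte unsigned max: " << std::numeric_limits<uint64_t>::max() << '\n';

    // 10 TB overflows any 4-byte type but fits comfortably in either 8-byte type.
    std::cout << std::boolalpha
              << "fits in 4-byte unsigned: " << (tenTB <= std::numeric_limits<uint32_t>::max()) << '\n'
              << "fits in 8-byte signed:   " << (tenTB <= uint64_t(std::numeric_limits<int64_t>::max())) << '\n';
    return 0;
}
```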

Plus, increase the default value for EB, which mirrors the bitcoincash May HF specification of 32MB at present?

Seems to me that the default is a good value, realistic in terms of network capacity. A higher hard limit is also in place currently, and increasing it should not be a problem.
 
  • Like
Reactions: Norway

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@solex
I'm not a bitcoin developer, and the BUIP is not a pull request. But based on what you wrote earlier, the data type probably needs to be changed to 8 bytes. I think it's called long long in C++. And it should probably be unsigned, as negative EB values are not useful.
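A minimal sketch of the kind of change being described here, with hypothetical names (BU's real identifiers, units and defaults may differ):

```cpp
#include <cstdint>

// Hypothetical sketch only; BU's real identifier names and plumbing differ.
// Before: a 4-byte type caps the setting at 2,147,483,647 bytes (~2.1 GB).
// After:  an 8-byte unsigned type ("unsigned long long" / uint64_t) whose default
//         is 10 TB, which the node operator can still override up or down.
using ExcessiveBlockBytes = uint64_t;
constexpr ExcessiveBlockBytes BUIP101_DEFAULT_EXCESSIVE_BLOCK_SIZE = 10'000'000'000'000ULL;  // 10 terabytes
```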

It seems to me that you argue for a 32MB default value, and that this is "realistic in terms of network capacity".

I don't think BU should recommend or suggest low default limits like 32MB. It will only hold competition back and prevent miners from developing more efficient software and hardware. It will also be a signal that prevents large companies from building their services on BCH.

We should stop worrying about nodes crashing as a result of accepting too big a block, and start worrying about becoming the same technocracy Core was.
 
  • Like
Reactions: _bc

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Norway, Lots of points there.
Before the recent stress test, BCH blocks were 1/600 the size of the 32MB limit. So, the potential for growth is enormous already. We saw a maximum of 23MB achieved during the stress test, and this does seem to approximate the current technical limit, based on how long that block took to propagate.

The current BU hard limit is 67x the current 32MB default. This is 2000x greater than what Bitcoin Core allows. If companies can't see the potential this has for scaling then they have serious problems with corporate vision, and probably couldn't plan their way out of a paper bag.

It is also the developers who are leading the miners on scaling and no one has done more for scaling than BU. Just today @theZerg has reported amazing results from the GTI work which he is porting to the main BU dev branch:

thezerg [6:08 AM]
2018-09-14 14:00:24,401.INFO: created 5000 tx in 2.863113655941561 seconds. Speed 1746.350512 tx/sec
I repeat: 1746 transactions per second!

It seems your fear is simply that HFs will soon cease and that 32MB will remain a cap which the miners adhere to forever. The situation is that the one thing BU, ABC, XT, Bitprim and nChain's SV are fully agreed on is that BCH will not be crippled by a developer-determined block limit, and that gigablock capacity is a major priority. It has been actively worked on since BCH was launched.
 
  • Like
Reactions: Bagatell and Norway

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@solex
You want the open source devs to tell miners what's safe, and you expect the developers from the different clients to agree on a number that is "safe". This has already failed because the SV client is going for a 128 MB default value (that can be changed by the miner).

I think this is the wrong approach. I think this is creating a technocracy where VIP devs are playing with the potential capacity like the Fed plays with the interest rate.

My BUIP is not about putting a number on what the capacity is today, given a certain hardware, software and bandwidth configuration.

It's all about taking definition power away from VIP devs and giving the people that are affected (the pools) responsibility for their own fortune.

I know you are a wise man, @solex, and I have a lot of respect for you. I hope you can see my point of view.

32 megabytes is a joke.

@theZerg
In the unlikely event that none of the BU devs make a pull request for BUIP 101 after a majority has voted "YES", I will look outside of the BU space to get the job done. But I think it's better for BU to own their own decisions.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Awesome, @solex!

Bitcoin Cash adopted the Overton window from Core. We just made the window taller.

It's time to let the pools compete. I don't expect the best node software to be open source in the future. I hope the protocol gets frozen as soon as possible.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
The situation is that the one thing BU, ABC, XT, Bitprim and nChain's SV are fully agreed on is that BCH will not be crippled by a developer-determined block limit
All @Norway is asking you to do is deliver on that statement by making the default limit so large that it equates to effectively removing it. He's probably got a point that a default value acts like a barrier in the layperson's mind.
 
  • Like
Reactions: _bc and Norway

painlord2k

Member
Sep 13, 2015
30
47
I support @Norway's BUIP101 and @solex's suggestion to change the data type to 8-byte unsigned.
16-exabyte blocks should be enough for the foreseeable future and some time after.
I also support the idea that the default blocksize should be, in practical terms, unlimited. If any miner has the need and the will to change it, he must be able to do so freely.
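For reference, a quick arithmetic check (not from the post itself): the "16 exabytes" figure is the ceiling of an 8-byte unsigned integer read as a byte count,

$$2^{64}\ \text{bytes} = 16\ \text{EiB} \approx 1.8 \times 10^{19}\ \text{bytes} \approx 1.8\ \text{million} \times 10\ \text{TB}.$$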

Maybe you know about cities without street signals and street lights having better traffic flow and fewer accidents.
http://thecityfix.com/blog/naked-streets-without-traffic-lights-improve-flow-and-safety/

The original example is Drachten, a town in Holland of 50,000 people. It is home to exactly zero traffic lights. Even in areas of the town with a traffic volume of 22,000 cars per day, traffic lights have been replaced by roundabouts, extended cycle paths and improved pedestrian areas. The town saw accidents at one intersection fall from 36 over a four-year period to just two in the last two years since the lights were removed in 2006.

Presets are signals, and developers should avoid giving too many signals (or any signal at all) in an evolving p2p environment. If there is no specific limit, other developers can test and try ways to increase BCH's capacity without worrying about limits and when they will be lifted. It allows for a granular increase in the blocksize as soon as possible, instead of waiting until it is needed and others agree to it.

Conceptually it is no different from BUIP099. Fixed HF dates are like intermittent street lights. Without the street lights, cars move when they need to move and when it is safe to move. With street lights, they move together no matter whether it is safe or not, whether they are ready or not.
 

digitsu

Member
Jan 5, 2016
63
149
Hear, hear.

Time for BU to actually live up to its name. Unlimited.
If you want to keep your node from falling over, set it to something that you think appropriate. Programs running on our computers assume “unlimited” memory and don’t cap themselves. Bitcoin nodes shouldn’t be any different.
 
  • Like
Reactions: _bc and Norway

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
BU has had a hard limit of 2147MB since November 2015. That is immense. That is living up to its name! That is as open for business as Amazon on the day before USA's Thanksgiving. It is ABC and nChain's SV which have the comparatively smaller hard limits of 32MB and 128MB respectively.

@digitsu, @Norway Why are ABC and SV not under pressure to increase their low hard limits to match BU?

There seems to be confusion over the relative importance of the hard limit (max possible accept size) and the soft limit (max generate size), which also has a default value for mining.

The default soft limit is the out-of-the-box setting for the block size limit which miners are fine to use while there is no blockspace demand for a higher level. Remember, the fairy-tale "fee-market" does not exist. What exists is a blockspace market and the empirical evidence is that the blockspace market works.

When blocks became regularly full on BTC, all of the miners increased the developer determined 750KB default to the 1MB base-block max limit. The miners were, and still are, open for business as much as the full-node software allows. Miners are aware of the level of organic volume, the blockspace market, priced by fee level / block-building overhead. So, on BTC, new miners will jump in straight at 1MB by over-riding the ridiculous default. The fact that the hard limit for BTC is also ridiculous, is a different matter, and is what ultimately led to BCH being created.

The hard limit is far more important than the default soft limit when it comes to blockchain capacity. However, the physical throughput limit of the network is the ultimate arbiter, which is why software development on txn generation, wallet efficiency improvements, txn ordering, parallel validation (blocks & txns), thin, thinner and thinnest block propagation, etc, etc, is the ultimate measure of capacity. The default soft limit should not even be the starting point for any company to assess whether blockchain capacity can exist for their business use-case. The hard limit is definitely a consideration; as we have seen with BTC, the hard limit can be fatal to many use-cases.
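To make the hard/soft distinction concrete, here is a simplified sketch of how the two settings act on a node; the names are hypothetical, and BU's actual excessive-block handling also involves acceptance depth (AD), which is omitted here:

```cpp
#include <cstdint>

// Hypothetical, simplified model of the two settings discussed above.
struct NodeLimits {
    uint64_t excessiveBlockSize;     // "hard limit" (EB): largest block the node will accept
    uint64_t maxGeneratedBlockSize;  // "soft limit" (MG): largest block the node will mine
};

// Acceptance is governed by EB...
bool WouldAccept(const NodeLimits& limits, uint64_t blockBytes) {
    return blockBytes <= limits.excessiveBlockSize;
}

// ...while block generation is governed by MG, which should never exceed EB.
bool WouldGenerate(const NodeLimits& limits, uint64_t candidateBytes) {
    return candidateBytes <= limits.maxGeneratedBlockSize &&
           limits.maxGeneratedBlockSize <= limits.excessiveBlockSize;
}
```

The sketch assumes MG ≤ EB, mirroring the convention that a node should not generate a block it would itself consider excessive.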

BUIP101 proposes to increase BU's hard limit of 2147MB to 10 million MB. Personally, I see this as covering the scenario that the BCH protocol will ossify too early, when general upgrades (hard-forks) will become impossible. Well, maybe that is a risk which needs closing off.

Unfortunately, the proposal included with BUIP101 is to raise the default soft limit to the new hard limit. This just removes information which miners find very useful. BCH developers should be given credit for setting a soft limit which is in the ball-park of the current physical limits. Further, soft limits can be changed at any new version of the software by any developer team, within the lowest common hard limit.
 
  • Like
Reactions: Norway and torusJKL

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@solex
There seems to be some confusion here about what "hard limit" is. I guess this is because we are creating new language at a technological frontier. I think of the terms this way:

Hard Limit = Excessive Block Size (EB)
Soft Limit = Maximum Generation Size (MG)

The parameter limit at 2,147,483,647 is just a result of the variable's data type (4-byte signed). This data type will need to change to 8 bytes if BUIP101 gets a majority vote, and it should probably be unsigned, as negative values are useless.

This BUIP is regarding the default value of EB. So when you install the software, EB = 10 terabytes. If the node operator wants to change this value up or down, he/she is free to do so.

Personally, I see this as covering the scenario that the BCH protocol will ossify too early, when general upgrades (hard-forks) will become impossible. Well, maybe that is a risk which needs closing off.
To be honest, I see it more as a transfer of definition power from Bitcoin Unlimited to mining pools. It is also a transfer of responsibility from Bitcoin Unlimited to mining pools. The pools should find out for themselves what their EB setting should be, and a complete removal of this limit that has haunted bitcoin for years is probably best (see the quote from Gavin Andresen in this BUIP).

Unfortunately, the included proposal with BUIP101 is to raise the default soft limit to the new hard-limit. This just removes information which miners find very useful. BCH developers should be given credit for setting a soft limit which is in the ball-park of the current physical limits. Further, soft limits can be changed at any new version of software by any developer team within the lowest common hard-limit.
This doesn't make any sense to me at all. I blame it on the language confusion mentioned earlier.

Bitcoin Unlimited was founded on the idea of letting the miners choose for themselves and letting the market forces and incentives in bitcoin play out. I see BUIP101 as an extension of this idea, where we remove the last shred of technocratic power that comes with default values.
 
  • Like
Reactions: reina