BUIP101: (closed) Set the default value of max blocksize cap (hard limit) to 10 terabyte

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Norway, I admire your enthusiasm and vision. A motivating article for sure.

To clarify once more, the 1746 is a benchmark of the improved wallet performance. It is isolated from downstream requirements such as txn propagation and block building.
 
  • Like
Reactions: Norway

micropresident

New Member
Feb 7, 2018
6
26
So no, Solex. On this specific topic I disagree with you. We should not subsidize the weaker software of ABC. We should not drag ourselves down to their amateur level.
75% of the maintainers for ABC are BU members. BU is both an organization and an implementation.

FWIW, I intend to vote yes on this proposal. It will be nice to give everyone a chance to see what the limit is for, in the wild, so I can quit hearing about it.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
@deadalnix, can you outline why you think BUIP101 is such a good idea that you voted for it?

If it's for the same reason as @micropresident above, then I'm a little disappointed.
Not because I think you shouldn't exercise your vote, but because I think that unless one really believes this is a good change for BU, one should not vote to accept it.

The reason given above "so I can quit hearing about it" is absolutely not a good reason for a BU member to vote for something that they argue puts the client at risk.

I refer to your recent recording in Bangkok where you eloquently argued the DoS risk point. And Shammah must agree since he was present and didn't counter that. So I think given your position on the matter it would have been natural and in good taste to at least abstain, if not reject.

The only way I can otherwise explain your vote is that you must have come to a different view on the technical merit of allowing huge blocks, despite size not yet being covered by POW and propagation improvements still in early phase (e.g. BUIP093).

This BUIP probably won't pass, but if it does can we look forward to ABC folks patching their default value to 10TB tout de suite?
 
Last edited:

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
I thought I had done so already in at least two posts in GCBU, @Norway, allow me to simply refer you to them:

https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-1244#post-81142
https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-1245#post-81208

To summarize again for readers new to this discussion: I voted to reject this BUIP because I don't think simply raising the limit to something a client demonstrably cannot handle sends the right message to its potential users. It would do BU a disservice when many of its members are working hard to overcome the actual scaling issues, and BU is poised to be the first client that can legitimately claim to handle 128MB blocks in the near future (with ParVal, Graphene, Gigablock ATMP, RPC improvements, etc.).
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@freetrader, your reason for voting "no" is based on fear of an unrealistic scenario. A big block attack is not sustainable, so it's not harmful. I wrote about it in the article.
 
  • Like
Reactions: Windowly

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
@Norway, I was completely unconvinced by your article.
Since this attack is unsustainable and will not cause much harm to the honeybadger, it will probably never even happen.
That's just handwaving. Also, this is just flawed logic:
A big block attack is not sustainable, so it's not harmful
Simple example to point out the flawed logic: An attacker would've only needed to exploit the recent inflation bug in ABC / Core *once* to probably cause major market damage to the coins. And profit handsomely in the act. Just because something isn't sustained doesn't mean it can't be very harmful.

But because @cypherdoc already barraged me with questions in GCBU re: such an attack's feasibility, I wrote up a detailed response based on my views, which I will post in GCBU. You'll see it there, so you can continue the speculation in support of your POV there.

Needless to say, I don't even have to oppose this BUIP based on any speculative risk.

If there's anything that irks me more than avoidable, self-imposed hazards, it's deceitful marketing plays and their use to push agendas. Advertising software as more capable than it demonstrably is could have real-world liability ramifications - it really isn't far removed from making fraudulent claims in other professional contexts.

It's plenty enough reason for me to vote against it based on its stated intent of "moving the judgement of what max blocksize cap is safe from Bitcoin Unlimited to the individual miners" -- all without any provision in this BUIP for giving those miners improved guidance on what the BU software is actually capable of! But mind you, BU is closest to delivering decent public information on the capabilities of its client (ref the Gigablock performance evaluations).

---

To save time, I'll post the Reddit thread here which OP opened and in which @deadalnix replied to my "Why" question about his voting rationale.

https://www.reddit.com/r/btc/comments/9m2036/deadalnix_top_abc_dev_voted_yes_to_buip_101_we/
 
Last edited:

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@freetrader
The inflation bug was a very different cup of tea. It is not related to a big block attack. The fact that you use this dishonest argument shows that you don't have a real point.

Anyone being attacked by big blocks can just drop the connection to the attacker or adjust his max blocksize limit. It's not a big deal. Since the attack is not sustainable but very weak, it will probably never happen.

People lining up to play blocksize police and spreading FUD about big blocks are the real problem.
 
  • Like
Reactions: Zarathustra

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Since the attack is not sustainable but very weak, it will probably never happen.
The reason such an attack would fail right now is that systems are running with reasonable upper limits and would reject such blocks. This BUIP proposes to move away from a somewhat sensible default.

And you haven't shown at all that it's not sustainable for long enough to significantly damage the coin.
Your article is, imo, severely underestimating the time (couple of minutes - 2hrs??) required to correct the situation on the network, if an attacker managed to split some nodes off onto a separate chain.
Anyone being attacked by big blocks can just drop the connection to the attacker
Can you demonstrate how to do that using BU? Dropping a connection that's receiving an attack block -- for which you don't know the final size ahead of time?
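For what it's worth, the early-abort idea the question hints at can be sketched. This is a purely hypothetical illustration, not BU's actual networking code: it assumes a peer streams the block in chunks and the receiver tracks a running byte count against its local limit.

```python
# Hypothetical sketch, not BU's actual networking code: abort a block
# download once the running byte count exceeds the local limit, without
# knowing the block's final size ahead of time.

MAX_BLOCK_BYTES = 32 * 1000 * 1000  # example local limit: 32 MB

def receive_block(chunks, limit=MAX_BLOCK_BYTES):
    """Accumulate block data chunk by chunk; bail out early as soon as
    the running total exceeds the configured limit."""
    received = bytearray()
    for chunk in chunks:
        received.extend(chunk)
        if len(received) > limit:
            # A real client would disconnect the peer at this point.
            return None, "oversize: connection dropped"
    return bytes(received), "ok"
```

Whether BU exposes exactly this behaviour is the open question being asked here; the sketch only shows that not knowing the final size up front does not by itself preclude an early abort.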

I've provided some napkin math in the GCBU thread about how much I think an attacker would need to budget to execute such an attack for a month. I put the upper bound, based on the current BTC price, at ~$60M. That's less than a percent of the current BCH market cap.
If they manage to disrupt at least a few times, for several hours each time, I think the impact on BCH market value could give them a decent "return" on the money spent attacking.
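The napkin math above can be decomposed. The ~$60M monthly upper bound is the figure from the post; the block cadence and the market-cap number below are illustrative assumptions only, not claims about actual figures.

```python
# Back-of-the-envelope framing of the cost estimate above. The ~$60M
# monthly upper bound is the figure from the post; the block cadence and
# the market-cap number are illustrative assumptions, not claims.

BLOCKS_PER_MONTH = 30 * 24 * 6          # ~6 blocks/hour for 30 days = 4320
MONTHLY_BUDGET_USD = 60_000_000         # upper bound cited above

cost_per_block = MONTHLY_BUDGET_USD / BLOCKS_PER_MONTH  # ~$13,889 per block

def budget_share(budget_usd, market_cap_usd):
    """Attack budget expressed as a fraction of market cap."""
    return budget_usd / market_cap_usd

# With an assumed (purely illustrative) $8B market cap, the budget is
# 0.75% - under one percent, matching the "less than a percent" framing.
share = budget_share(MONTHLY_BUDGET_USD, 8_000_000_000)
```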
People lining up to play blocksize police and spreading FUD about big blocks are the real problem.
Alright, I think I'm done here.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
As long as BU's miners are in the minority, this will cause accidental orphans. This is a UX problem that could cost miners money.

Additionally, I will require a pull request implementing this change to contain a test like all other feature implementation pull requests. This might be hard at the 10TB level.

I will vote "no" on this because I think such a thing makes sense only once BU gains majority hash. And it should be more along the lines of: "Aggressively test blocks as large as possible and set the EB limit to the largest successfully tested value". Finally, consider that we have a lot of things to do, and perhaps testing blocks 1000s of times bigger than what other parts of the software can handle in 10 minutes is not what we want to spend time on. I'd prefer working to increase real 10-minute sustained block sizes, rather than trying to get from (say) 16GB to 32GB.
 

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
> Finally, consider that we have a lot of things to do, and perhaps testing blocks 1000s of times bigger than other parts of the software can handle in 10 minutes is not what we want to spend time on

I think @Norway knows very well that the software cannot yet handle such blocks. His point is that the miners should be responsible for setting the limit themselves, which is already possible today. How do you prevent a bloat block attack with a default value that is configurable?
 
  • Like
Reactions: Norway

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
How do you prevent a bloat block attack with a default value that is configurable?
A sensible developer-determined default, well above demand and also within current capacity, creates a Schelling point strong enough that a majority of miners use it as a soft limit, and are thereby likely to orphan bloat blocks. At the same time it is a weak enough Schelling point that miners (and users) can co-operate in shifting the limit higher to suit rising demand, i.e. in the scenario that the developers get co-opted by banksters, VCs, reptilians, or otherwise abrogate their responsibility for keeping the default block limit above demand in future releases.
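The soft-limit orphaning behaviour described here is BU's emergent-consensus mechanism. A minimal sketch, assuming simplified semantics for the excessive block size (EB) and acceptance depth (AD) settings; this is an illustration of the idea, not BU's actual acceptance code:

```python
# Minimal sketch of BU's excessive-block (EB) / acceptance-depth (AD)
# idea: blocks over the local size limit are not followed immediately,
# but are accepted once enough blocks have been built on top of them.
# Names mirror BU's excessiveblocksize / excessiveacceptdepth settings;
# the logic is a simplification, not BU's actual acceptance code.

EB = 32 * 1000 * 1000   # local excessive-block size limit, in bytes
AD = 4                  # acceptance depth

def accept_block(block_size, depth_on_top, eb=EB, ad=AD):
    """Decide whether to follow a block under simplified EB/AD rules."""
    if block_size <= eb:
        return True                 # within the soft limit: accept
    return depth_on_top >= ad       # excessive: accept only if buried AD deep

accept_block(1_000_000, 0)       # normal block -> accepted
accept_block(100_000_000, 1)     # oversize, shallow -> orphan candidate
accept_block(100_000_000, 4)     # oversize but buried -> accepted
```

This is what makes the default a *soft* Schelling point: a bloat block gets orphaned while the majority sticks to the default, yet the network can still converge on a bigger limit if enough miners build on larger blocks.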
 
Last edited:
I voted NO, too.
Not because I am afraid of monster blocks, but because deleting the limit - which this effectively does - deletes Bitcoin Unlimited's core value: enabling the ecosystem to find an emerging consensus about the blocksize limit.
Increasing the network load is possible with BU today, if there is a consensus of the miners and they are able to push a big block through the AD threshold. BU works as intended, and this proposal is out to break it.

This said, I can't believe what I hear from Micropresident. Did he just say that he voted "yes" out of malice - because he believes it will cause damage to BU but do good for his own ego? If so, is there a rule in BU's constitution to ban him?
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Christoph Bergmann
In my opinion you are reading a lot more into @micropresident's statement than he intended. The membership is clearly very divided on this issue, and there are many opinions. You and I agree that the default limit should be a sensible value. Considering that BU's effective hard limit is 2147MB (7500 TPS), it is understandable that some members are also sick of the block limit debate and see it as a distraction from the massive amount of work needed to get the software to the point where it can handle such large blocks.
 
  • Like
Reactions: Windowly
@solex
Either I read too much into the statement - a clarification would be helpful, since it seems very unambiguous - or this is something we could call an "infiltration with malicious intent". He did vote for it because he thinks it will do damage to Bitcoin Unlimited. At least, this is what I take from it.

Edit: Anyway, I notice myself caring less and less about Bitcoin Cash. Not your fault (Bitcoin Unlimited is the only reason I'm still hoping / holding), but where I completely agree with @Norway is that Bitcoin Cash got on a "developers babysit users" path, while the most important reason for the creation of Bitcoin Cash was to resist Core's "we must babysit users" approach. If we have "the same in green" (haha, a wordplay in German), but with fewer developers, less market cap, fewer transactions, and fewer investors, there is not much sense left in Bitcoin Cash.

Edit Edit: This also goes for CTOR. I - like some other BU members - think it is foolish to break existing implementations with a potential scalability improvement like CTOR, which is not needed at all, and which might have the effect that BCH never reaches it, because it makes usage more complicated. BU definitely voted against CTOR. That you think you must implement it against this vote, because there are indications that ABC will have miner majority, makes any vote ridiculous. BU has the AD mechanism. Let's make it a standard to reject CTOR until blocks with CTOR reach a certain depth.

Edit Edit Edit: Don't get me wrong. I'm not angry or frustrated. I'm just tired and bored of what's happening in BCH.
 
Last edited:

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
A sensible developer-determined default, well above demand and also within current capacity, creates a Schelling point strong enough that a majority of miners use it as a soft limit, and are thereby likely to orphan bloat blocks. At the same time it is a weak enough Schelling point that miners (and users) can co-operate in shifting the limit higher to suit rising demand, i.e. in the scenario that the developers get co-opted by banksters, VCs, reptilians, or otherwise abrogate their responsibility for keeping the default block limit above demand in future releases.
I think that's the "paternal approach" that @Norway, @shadders, I, and others want to get rid of:


The block size does impact propagation times but the fixes for this and other issues aren't consensus changes. The stress test proved that the network won't collapse under extreme load. Due to the backlog caused by inefficient software it also showed that a miner that had invested in better software would have had the opportunity to mine all the transactions and claim the revenue that others missed as a reward. And the ancillary services that had the capacity to deal with it would have gained a branding advantage over those that didn't.


This is the point of raising the limit even before the software gains the capacity to handle it. It creates an incentive and an economic pressure for miners, and for others that rely on full nodes, to improve this capacity. Until Bitcoin SV, miners have not directly paid for their own node development, and developer groups have had different priorities. That has now changed. Our focus is 100% on performance and 0-conf improvements, and we have the funding and team to build world-best-practice (modelled on aerospace) development capacity, and to do so.


We have kept the November changes set minimal for safety's sake. But in the background we are working on all the identified bottlenecks in parallel as well as building profiling infrastructure to identify more and model the next generation bitcoin node design. The performance changes I refer to are targeted for release before the May 2019 hard fork.


Bitcoin Cash needs to scale fast, and the paternal approach of saving people from their own mistakes removes the urgency of this development imperative. One important thing to remember is that no one can mine a block that they can't mine. And if they do mine it, it means everyone else could have mined it and should be able to validate it. Creating a block is a harder problem than validating one.

 
  • Like
Reactions: Christoph Bergmann

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
I voted NO, too.
Not because I am afraid of monster blocks, but because deleting the limit - which this effectively does - deletes Bitcoin Unlimited's core value: enabling the ecosystem to find an emerging consensus about the blocksize limit.
Isn't that exactly @Norway's goal? To force the ecosystem to find an emerging consensus without the help of the 'Politbüro'?