BUIP040: (passed) Emergent Consensus Parameters and Defaults for Large (>1MB) Blocks

sickpig

@deadalnix

Currently I have no strong opinion on the issue at hand; or rather, I haven't thought about it properly yet.

But if we are going to apply EC to tx size as we have done for the max block size, we are just removing it from the consensus rules as understood in the traditional sense.

Actually, we are removing from the consensus layer a technical constraint related to transporting blocks around the network, a continuation of what we've done for the max block size.

EC guarantees a flexibility that a fixed limit can't have, hence it will avoid a hard fork in the traditional sense.

That said, in my book setting a "rigid" 1MB limit on the transaction size, or hard-wiring it if you will, means repeating the same mistake that was made a few years ago with the max block size.

This time the situation is a little different though: we will probably need to fork again to introduce a new tx format to fix all the issues raised here by @Dusty, so we will have an occasion to modify whatever rule we decide to implement now.

The current situation shares another characteristic with the past: 1MB is probably a lot higher than where the market equilibrium would be, considering the current supply and demand curves; to use @Peter R's notation, Q* << Q_max.
 

Peter R

My vote is to keep it simple:

MAX_TX_SIZE = 1,000,000 bytes (effective rule today)
MAX_TX_RELAY = 100,000 bytes (default today)
MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE / 50 (rule today)

I'm not worried about slow-to-validate blocks within these limits.

All of these "limits" are also soft-limits dealt with by the excessive-block gate.
 

freetrader

I would like to recall why I emphasized the question of rationale for this proposal to make the maximum tx size scale beyond what the system currently provides.

We should try to be very clear about the reasons why we argue for something. Otherwise, how can we understand and defend a proposal if it gains the popular vote? So we must discuss the rationale for any proposal very critically, so that we ourselves can fully understand it and stand behind it when others ask "why".
> So it makes sense to create some "excessive" style limits that will cause nodes and miners to reject extremely-long-validation-time blocks and transactions unless the mining majority is allowing them.
I'd like to briefly discuss this quote, because I think it can guide us to answering this "why" question.

Firstly, the essence of it, as I see it, is "it makes sense to create some limits that will cause nodes and miners to reject extremely-long-validation-time blocks and transactions". I would have thought nobody here would disagree with that, but it is at least interesting that some people believe no such limits are needed at all and that block-level EC + PV would take care of the problem already.

I like this proposal of a new "excessive transaction size" (ETS).
The allure of excessive-style parameterization is that node operators can express their resistance to changing the network consensus while making it possible to change this consensus without bothering developers. It is very appealing to avoid having to address these fixed size limits in future hard forks.
However this comes with some added complexity, as pointed out by others.

Currently BUIP040 does not mention whether this new ETS parameter would be coupled to the existing AD, and if so, what the justification for that coupling would be.
Otherwise, it seems an additional AD-like parameter would be needed. We could tentatively call it transaction acceptance depth (TAD?).

If we base this on what was done for block size, an excessive-style control mechanism would involve a set of parameters, e.g. (ETS, TAD). Further discussion on this seems necessary in the context of this BUIP, especially as BUIPs 38 & 41 show that the stickiness of these parameters is itself a subject that needs to be carefully weighed in relation to possible attacks.
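To make the coupling question concrete, here is a minimal sketch (purely illustrative; ETS and TAD are only the tentative names above, and the chain-tracking details are my assumptions, not anything specified in BUIP040) of a transaction-level excessive gate with its own acceptance depth:

ETS = 1000000   # excessive transaction size in bytes (tentative default)
TAD = 4         # transaction acceptance depth, analogous to the block-size AD

def block_has_excessive_tx(tx_sizes):
    # A block is excessive if it contains any transaction larger than ETS.
    return any(size > ETS for size in tx_sizes)

def accept_excessive_fork(excessive_block_height, fork_tip_height):
    # As with the block-size AD: a fork containing an excessive block is only
    # followed once TAD further blocks have been mined on top of it.
    return (fork_tip_height - excessive_block_height) >= TAD

Whether TAD should simply be the existing AD is exactly the coupling question raised above.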

I don't think anyone has done the math (simulated current network and mining conditions sufficiently) to be certain that unbounded tx sizes pose no substantial risk to the network. On that, I would err on the side of caution. Current bounds are implicitly safe, as they define the current system, are not being exploited, and no one is arguing for them to be lowered.
Scaled limits are probably safe under the PV assumption, especially if using EC, but the data on that still needs to settle, for transactions just like for blocks.

Unless I've missed it, no-one has yet provided a user requirement for increasing max tx size above 1MB at this time.

Starting off with current defaults as excessive limits seems pretty safe, since it is not just miners but the relay nodes as well who exert control, and miners would have to convince the rest of the network to raise these limits.

I find this balance of power to be an important factor in my liking of EC, and don't quite understand the phrasing "unless the mining majority is allowing them" in the quote above.
Should it not be "unless the majority of nodes is allowing them"?
 

solex

It is taking 5 years and counting to overcome the major error Satoshi made with his 1MB change. It is also widely agreed that this transport-level limit has become ossified onto the Bitcoin protocol and is now causing serious damage to the network effect. Just today Circle, a major bitcoin-using company, has thrown in the towel citing development "gridlock".

Surely, the last thing we want is to finally move the 1MB protocol limit into the transport layer and at the same time create a new arbitrary protocol limit, i.e. a 1MB max transaction size.

While we hope to hard-fork to Flexible Transactions or similar, where the max transaction size can be addressed in some way, we have to accept the possibility that a second hard-fork cannot be achieved quickly.

> @Roger_Murdock ...the block size is capped at 1MB...this is the max size of a transaction that a miner could make and then mine himself, which is today, obviously, set at 1MB...if block size increases now then that max transaction size will be whatever the size of a block a miner can create, as you stated above.
Clearly the original design is that the max transaction size scales with block size, the same as the SIGOPS limit. So capping the txn size at 1MB for >1MB blocks is a new decision.

Case in point: Core has 4MB "blocks" and still keeps the 1MB limit for transactions. Same with Classic.
Is there a link to any discussion in Classic development about the decision to make a 1MB limit for txns?

This decision does have a good rationale, which is why BU should default the max txn size to 1MB for blocks >1MB; however, employing EC, as @freetrader describes, is I think a major improvement. The added complexity is reduced by re-using the block-limit AD value.
 

Peter Tschipper

I agree with @solex that we currently don't have a max transaction size in Bitcoin; it is entirely tied to block size, and if we introduce a 1MB cap on transaction size we will introduce something new which may be difficult to get rid of in the future. I also like the idea of an EC-style solution, which is in keeping with Bitcoin Unlimited's approach. My only hesitation is the added complexity, introduced right at a time when we are trying to do the very first Bitcoin hard fork.
 

Tom Zander

I'm OK with the line of thinking that the transaction size is bound by the block size in the original design by Satoshi (I looked it up; he added the limit).
I do agree with Freetrader that we have to take into consideration all the factors that weigh for and against setting the transaction max size free, if only because when this design was initially created, Satoshi clearly didn't know about the quadratic attack. In other words, the original design should not force us to act one way or another once we have found a vulnerability.

Let's look at this conversation in compressed format:

Blyes: we don't want hard limits in the protocol.
Corwin: if we remove the hard limit, we open ourselves up to CPU exhaustion attacks.
Blyes: miners and nodes don't need to change it from the 1MB default!
Corwin: If nobody is going to use it because miners won't let us, then why bother?
Blyes: we could allow this for v4 transactions (flextrans) which fix malleability.
Corwin: good idea, I can agree with that. But would that not be better done in the hard fork that introduces flextrans?
Blyes: But we may not get another hard fork! We have to do it now!
Corwin: If we don't get another hard fork, flextrans will never be accepted and we'll end up with segwit as a way to enable LN.
Blyes: Then why don't we make this hard fork include flextrans?
Corwin: Possible, but I think it would be a disaster to force a new transaction format on short notice. It would be equally bad to wait a long time for the block size change. I think it needs to be two hard forks.
Blyes: But how are people then going to create big transactions?
Corwin: They split them. It really doesn't happen often that anyone needs a >1MB tx, and those that do can split them with very little downside.


As the max transaction size is clearly not the same as the max block size (bigger blocks mean more users; bigger transactions are a cost-saving method for a very small number of use cases), maybe we can live with this variable being set to the same value it's been at for the last so many years. I'm definitely open to removing the limit for v4 transactions after they become FlexTrans-formatted.
 

theZerg

Responding to a few of the points:

1. The intention of this BUIP is to use the same "AD". All these rules designate blocks as "excessive" or not; I couldn't see much reason to separate the excessive concept into multiple sections.

2. This BUIP took a lot of analysis, but the code is actually 5 LOC. It is literally:

BOOST_FOREACH(const CTransaction& tx, block.vtx)
{
    uint64_t nTxSize = ::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION);
    uint64_t nTxinLen = 0;
    BOOST_FOREACH(const CTxIn& txin, tx.vin)                                  // 1
    {
        nTxinLen += ::GetSerializeSize(txin, SER_NETWORK, PROTOCOL_VERSION);  // 2
    }
    blockEffort += (nTxinLen * nTxSize);                                      // 3
}

uint64_t blockMbSize = 1 + ((blockSize - 1) / 1000000);                       // 4
if (blockEffort > blockEffortPerMb.value * blockMbSize) return true;          // 5, block is excessive

Not counting tests, of course. The lines in the code I've included but not marked were already there because they are needed for figuring out other metrics.


3. In BUIP038, there is the concern that in a fork event, a significant amount of hash power (25% of the world's hash rate, I think) may be directed at attacking the large-block fork by trying to jam extremely large blocks onto the BU network.

The current 1MB transaction size and sigops limits allow miners to create "normal" transactions that have extremely long validation times. There are long-time-to-validate transactions on the blockchain today, and complaints on the web about lengthy sync times and the sync "hanging" on certain blocks.

If we think in BUIP038 that 25% of the hash power may be malicious, I do not think that in this BUIP we can turn around and rely upon the goodwill of every miner. So it makes sense to employ the "effort" parameter where we can, and that is for any block that does not need to be compatible with the old rules, since it is ALREADY incompatible by being >1MB.

4. WRT the current lack of limits in BU code, these changes were made before the BU organization existed, which is why no BUIP was posted.

5. I do not think that this place is appropriate for a discussion of the BU process because this BUIP does not change that process. If discussion persists, I will create a separate topic and move comments there.
 

Tom Zander

> The current 1MB transaction size and sigops limits allow miners to create "normal" transactions that have extremely long validation times.
The quadratic hashing issue is described here: https://bitcoinclassic.com/devel/Quadratic Hashing.html

It refers to an old bitcointalk post which constructs the most complex transaction set that you can put in 1MB. Theoretically, you could make it so that the total size hashed is 100*200*1000000 = 20,000,000,000 bytes if you have more block space. The SHA256 hasher we have in Bitcoin takes 4 seconds to hash 1 GB (let me know if you can't test this yourself; I can provide the testing code). So we are talking about 80 seconds for this specially crafted transaction.
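The arithmetic behind those figures, as a quick sketch:

# Back-of-the-envelope reproduction of the numbers above.
total_hashed = 100 * 200 * 1000000        # 20,000,000,000 bytes (20 GB) to hash
bytes_per_second = 1000000000 / 4.0       # 1 GB hashed in 4 seconds, as measured above
print(total_hashed / bytes_per_second)    # -> 80.0 seconds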

If you can create a longer-to-validate transaction that fits in 1MB, please post about it. Preferably with code.


I've tried to keep criticism away from the actual proposal since it still only solves a problem that is local to BU. It's a bit unfortunate that thezerg never replies to my suggestions or questions. Now I read that thezerg may moderate answers he doesn't like, moving them away. So maybe for full disclosure I should mention something about the BUIP itself. It's time.

It is worth pointing out that this specially crafted transaction I mentioned uses lots of OP_CHECKSIGs to make this attack possible. The proposal in this BUIP is based on the assumption that there is always exactly one OP_CHECKSIG for every input. Without the many checksigs, the 20GB to hash would come down to only 100MB for that 1MB transaction, which is validated in 0.4 seconds.

So I went and checked the block you, thezerg, said took almost half an hour to validate. The one your blog said this about:

> One of the highest effort blocks has a projected validation time of 1624 seconds, or about a half hour, due to a transaction with 20k inputs.

That number seemed to come out of thin air, and I thought I'd check. I just did a full validation of the block at height 367877 and, not really surprisingly, it finished a full (checklevel=4) check in about 100ms or so.
 

theZerg

That's interesting. I collected my data using a Python script that uses bitcoin-cli to generate blocks and transactions under regtest mode, and then measured their validation time on another connected node. I wonder if there's some kind of bug in my timing of this process. Since I do get nearly-zero results for blocks full of 1-to-1 and 1-to-many transactions, it would need to be a pretty weird bug.
 

deadalnix

> @deadalnix
>
> That said, in my book setting a "rigid" 1MB limit on the transaction size, or hard-wiring it if you will, means repeating the same mistake that was made a few years ago with the max block size.
>
> This time the situation is a little different though: we will probably need to fork again to introduce a new tx format to fix all the issues raised here by @Dusty, so we will have an occasion to modify whatever rule we decide to implement now.
That is my point. There is no need to add a new limit to the consensus mechanism. The limit can't be raised much for the current tx format because of quadratic hashing, and lifting it is unnecessary. The simplest way forward is to keep the limit for current transactions, and simply never have a limit for the next transaction format.

This isn't introducing a new limit; it's simply removing one, in a way that is not as disruptive as the proposed solution.
 

theZerg

When I collected my data, I instrumented the transaction validation function in bitcoind. This will be pretty time-consuming to do for people who want to replicate my effort (although you can check my branch).

So I created a python script that does rough measurements, located here: https://gist.github.com/gandrewstone/92237f2e44449909964a0af37a1353ca

I think that it will work on any client, if you copy it to the qa/rpc-tests directory.

This is the output I get when running it:

qa/rpc-tests$ python quickPerf.py
2016-12-08 16:20:37,234.INFO: Initializing test directory /ramdisk/t1
synchronizing
generating addresses
address generation complete
Create 1 to many transaction
send txn
[ 'gen', 136019 , 1 , 3999 ],
Generate time: 0.12350487709
synchronizing
Sync time: 1.13732790947
Create many to 1 transaction
[ 'gen', 164046 , 4000 , 1 ],
Sign time: 245.427388906
Send time: 100.071123123
Send&Gen time: 106.802419186
synchronizing
Sync time: 108.262549162
--Return--
> /fast/bitcoin/bugas/qa/rpc-tests/quickPerf.py(202)largeOutput()->None
-> pdb.set_trace()
(Pdb)

So it seems to be taking 245 seconds to sign a 4000-to-1 transaction and 100 seconds to send & sync it on another node (I made the send time, send&gen time, and sync time cumulative in this output because BU's XVal means that transactions are prevalidated). Visually, I do see the second node jumping up to 100% CPU at the appropriate time.

Tom, I would be very interested in your results if you are willing to run this script on your machine. Also, I'd like to see the results for anyone else who wants to run it.

As I commented in my blog post, I plotted validation times of many-to-1 transactions from 1 to 2000 inputs, which seemed to be quadratic, and then extrapolated that result to the largest transaction in the blockchain (20,000 inputs) to project that it would likely take 30 minutes to validate that transaction. Of course this projection may be inaccurate...
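As a sketch of that extrapolation (illustrative only; the 2000-input time below is back-calculated from the projected 1624-second figure assuming a pure quadratic fit, not an independent measurement):

# If validation time scales as t(n) ~ a * n^2, then going from 2000 to 20000
# inputs multiplies the time by (20000 / 2000)^2 = 100.
t_2000 = 16.24                        # seconds at 2000 inputs (assumed for illustration)
t_20000 = t_2000 * (20000 / 2000) ** 2
print(t_20000)                        # -> ~1624 seconds, roughly half an hour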
 

theZerg

OK, good news! In a conversation with TomZ this morning, we narrowed down the possibilities, and it simply came down to the fact that my 4000-input transaction was validating WAY too slowly. He then asked whether I was running a debug build... it looks like I was :-(.

Running the above test in a "release" build results in a ~20-second, rather than a ~100-second, validation time. So it looks like this pulls the worst-case 1MB transaction (the 4000-input tx I referred to above is about half a MB) down to a reasonable value, even if we have (say) 8 of them in a block 4 years from now (and somehow haven't managed to HF the transaction format).

This means that we don't need the additional "effort" parameter, even with very large blocks, and so I'll likely be removing it from this BUIP soon.
 

theZerg

Original BUIP edited. I think the "effort" metric may still be useful to maintain support for ARM SoC full nodes, but this issue is of lesser importance and can be put in a subsequent BUIP.
 

deadalnix

I think the effort metric is definitely useful, for instance to decide whether to relay a tx or not, but baking it into the consensus rules would be a mistake. It can be a policy.
 

deadalnix

On the proposal now: I think everybody, even Core, is of the opinion that a new transaction format needs to be added to Bitcoin to fix quadratic hashing and malleability. As a result, I don't think adding a new consensus mechanism here is that important. We can just keep the 1MB limit on transaction size for current transactions and lift the limit on the next transaction format.

As for the sigops limit, I'm unclear whether it can actually be reached with a 1MB transaction; it's very high. If it cannot be reached, then we'd need a proof, but in that case we can simply remove it. If it can, then I'm not sure what the best course of action is. Maybe making this a relay policy is enough.
 

awemany

I'd vote for keeping the max TXN size at 1MB as well, for the meantime/current txn format. Regarding @Tom Zander's and @theZerg's dispute over whether there is an implicit assumption that maximum transaction size equals maximum block size, I slightly tend toward @Tom Zander's view that this might be coincidence in this case and not necessarily meant to be so.

However, to maybe address @solex's concern, and as I have said elsewhere: should we somehow make explicit that a 1MB transaction limit is in no way meant to be final? Should we succeed in getting BU "activated" (i.e. larger block sizes due to emergent consensus), what we are writing and saying here might start to carry a lot of weight, and though certainly not to the extent that the whitepaper gets misinterpreted, there might be some disagreement about what was "meant by the 1MB txn limit back then". It would be easy to just add a sentence of the form "tentative, not meant to be final" to this limit.

However, I cannot see a bunch of folks attaching to a 1MB txn size limit the same way they did to the 1MB block size limit, because for all I can see, it is a minor parameter and rather a "belt and suspenders" safety feature.

An example of the need for large transactions would be crowdfunding through something like Mike Hearn's Lighthouse. But I wonder whether there could be decentralized workarounds?

In any case, I fail to see how it would be of such fundamental importance as the maxblocksize limit value, or lack thereof.
 

Tom Zander

This morning I learned that this BUIP is still going to a vote in a couple of days (why in the holiday week?).

This vote is open only to BU members. The topic of the vote is about Bitcoin-wide consensus rule changes which affect people far outside of the BU project.

For 3 months at least I've been attempting to actually start a dialogue where multiple parties can come to consensus about changes. They call them 'consensus' rules for a reason, in that any one individual can't change them because the rest of the world would reject his changes.

This BUIP changes the upcoming hard fork to cover not just one change; it makes further changes to the consensus structure of Bitcoin by removing the sigops and transaction limits from the consensus rules.
Both of these are hard-fork changes. It's irrelevant that they may be managed by clients in the EC system; they are still hard-fork changes.

Classic doesn't intend to incorporate those changes because they are unsafe to the network and because nobody has given a single reason why there is a need for these changes.

But in the political climate we have today, it may be that if you vote yes and don't make TheZerg revert these changes, then Classic and all the other software that together run the Bitcoin network will have to follow. And that's not how consensus rules should be changed! We don't have to have 2-year discussions, but none of my concerns have even been acknowledged by TheZerg.

The membership can vote against these changes, and maybe you wanted to do this already, based on technical reasons. That's great!
But I think you should definitely vote NO, because the membership should not support this vote as a means to change consensus rules without even trying to build consensus or having a serious conversation with the wider Bitcoin community.

Thank you, and have a merry Christmas!
 

Peter R

@Tom Zander: I'm not following your concern here. Isn't the proposal basically what you were suggesting anyways?

Excluding the change to the default excessive-block size, the proposal is to add the following EC-style rules:

max_tx_size = 1,000,000 bytes
max_block_sigops = 20,000 * (block size in megabytes rounded up)

How would you propose to change this?
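For concreteness, the second rule with the block size rounded up to whole megabytes (a sketch; it mirrors the blockMbSize computation in the code posted earlier in this thread):

def max_block_sigops(block_size_bytes):
    # 20,000 sigops allowed per started megabyte of block size.
    block_mb = 1 + (block_size_bytes - 1) // 1000000
    return 20000 * block_mb

print(max_block_sigops(1000000))    # -> 20000
print(max_block_sigops(1000001))    # -> 40000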
 

solex

My view about the existing consensus is simple:
  • For blocks <=1MB consensus exists on the sigops limit, implied max txn size, and implied sighash memory overhead limit.
  • For blocks >1MB there is not yet any consensus to break. First XT, then Classic came up with their own consensus rules for large blocks. Now BU has its own proposed default rules. All these are proposed because no block >1MB exists in the blockchain and the majority client will not allow more than 1MB base-block data even if its extension-block solution for witness data (SegWit) gets activated.
A key advancement is making these limits user-configurable settings and making them subject to the EC acceptance-depth logic. This means the network can evolve smoothly.

Hence, I support BUIP040 and reject any suggestion that it changes consensus rules. This BUIP makes new rules for blocks >1MB, and these rules are to be implemented in the most flexible manner possible.
 