BUIP040: (passed) Emergent Consensus Parameters and Defaults for Large (>1MB) Blocks

theZerg

Bitcoin Unlimited currently does not constrain SIGOPs (signature operations) or transaction size when accepting blocks (block generation is constrained to network norms). The Parallel Validation (PV) BUIP (passed) helps resolve this issue (and has many other usability advantages), but for PV to succeed, it requires that another miner first successfully mine a sibling block, and that the original block be orphaned, losing money for that miner.

So it makes sense to create some "excessive" style limits that will cause nodes and miners to reject extremely-long-validation-time blocks and transactions unless the mining majority is allowing them. Please read the following post for details of the analysis: https://medium.com/@g.andrew.stone/proposed-bitcoin-unlimited-excessive-defaults-for-block-validation-326417f944fa#.gmjyiqbjv

Based on this work, I propose that BU:

1. Mark blocks <= 1MB that exceed the current network limits as excessive, to stop "attackers" from pushing BU miner nodes temporarily onto another chain and causing them to orphan some blocks (at low BU hash rates this "attack" costs the attacker much more to execute than it costs the target BU miners, so it is not critical; at high hash rates it simply causes the fork to large blocks). This means that these blocks must contain 20000 or fewer "legacy" sigops (following the algorithm in the code today, a discussion of which is beyond the scope of this document).

2. Create a configurable "excessive transaction size" parameter, and set it to 1MB by default. Blocks with a transaction exceeding this size will be marked as excessive.

3. Create a configurable "excessive sigops per MB" parameter, and set it to 20000 by default. The algorithm will first round the block size UP to the nearest MB, and then apply this rule. For example, a 1.5 MB block and a 2 MB block will each allow 40000 sigops.

If passed, this BUIP will be extended with the exact implementation of the "excessive sigops" metric so that other implementations can copy the exact logic (preserving rounding behavior, for example). However, this is not a "consensus-critical" issue: due to the emergent consensus algorithm, there will be no blockchain fork if other implementations calculate this parameter differently. In the worst case, miners may orphan a block or two while the problem is discovered and fixed.
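For concreteness, here is a minimal sketch of the rounding rule from point 3. The function name and structure are illustrative only, not the actual BU implementation:

#include <cstdint>

static const uint64_t ONE_MEGABYTE = 1000000;

// A block is "excessive" under point 3 if its counted sigops exceed
// maxSigopsPerMb per megabyte, with the block size rounded UP to the nearest MB.
bool IsSigopCountExcessive(uint64_t blockSizeBytes, uint64_t blockSigops,
                           uint64_t maxSigopsPerMb = 20000)
{
    // Integer ceiling division: 1.5 MB and 2 MB both round up to 2 MB,
    // so each allows 2 * 20000 = 40000 sigops.
    uint64_t roundedMb = (blockSizeBytes + ONE_MEGABYTE - 1) / ONE_MEGABYTE;
    return blockSigops > roundedMb * maxSigopsPerMb;
}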

To pre-answer some of the inevitable discussion:

I originally considered limiting transaction size in a > 1MB block to 100KB, since that is the limit of a transaction in the Satoshi client's P2P protocol. However, some mining pools need large 1-to-many transactions to pay their hashers. This style of transaction is very quick to validate, since it typically contains only 1 actual sigop, even though the "legacy" sigops calculation may return 20000.

EDIT:
This was implemented in BU; the CTweak parameters and their defaults are:
"mining.excessiveSigopsPerMb": 20000,
"net.excessiveTx": 1000000,
 

Tom Zander

This topic was first raised a couple of months ago, when I and others noticed that BU had already removed the sigop limits and the maximum transaction size.

Back then the question was asked why we should allow transactions larger than 1MB (the current protocol limit), and to this day I have not received an answer.

This is a relevant issue because the proposal made in this BUIP is built on the unproven assertion that when blocks can grow larger than 1MB, there somehow needs to be protection against slow-to-validate blocks. It turns out that the only way there is a risk is if the transaction size limit is removed. If it is kept as it is today, at the current limit of 100kB, any node can validate a block easily in about 10 seconds or less. Hardly a scaling issue.

I have not found a BUIP where the removal of the maximum transaction size is suggested. I think someone assumed that with the removal of the maximum block size, the maximum transaction size should be removed too. This is something that I invite BU to discuss. Remember that bigger transactions mean that fewer of them will fit in a block, so the natural consequence of allowing larger transactions is a smaller number of customers.

The current situation is this:

* No block is allowed to hold a transaction larger than 1MB.
* Clients do not accept transactions over the network larger than 100kB.

The difference between those two numbers is explained by the proposal itself: miners want to be able to add large transactions that are very low complexity.

I have not seen any reason to change from the current settings. Indeed, the fact that we have this proposal shows that moving away from the current max transaction size would be a mistake as it creates new problems.
 
Reactions: awemany and Peter R

solex

@Tom Zander
I am having trouble parsing whether you agree with BUIP040 as written, or not.

The current max txn size is 1MB because of the 1MB block limit, so this is the existing protocol.
BUIP040 describes behaviour for blocks >1MB, which is new protocol, yet it adheres to the 1MB max txn size as a default for larger blocks, while still allowing a window for txns larger than 1MB if Nakamoto consensus is achieved (IMHO not an easy achievement).

Looks perfect to me.
 
Reactions: sickpig

Tom Zander

Hi Solex,

> BUIP040 describes behaviour for blocks >1MB , which is new protocol

Ok, that's a very interesting reading, indeed. I assume you mean the opening sentence of theZerg's proposal:

> Bitcoin Unlimited currently does not constrain SIGOPs (signature operations) or transaction size when accepting blocks

I didn't read that as a proposal, but I understand from you that it is? I have not seen any BUIP elsewhere that has actually proposed to remove the sigops and transaction size limits from the protocol. So you may very well be correct. Maybe that can be worded a bit more clearly.

I am really curious what the reason is for removing the maximum size for transactions. I can't really find any positives in that suggestion.

I can find various negatives.

  • You need a new rule to protect nodes from a new CPU exhaustion attack.
  • It makes the number of users that can use the network much lower.
  • It is a change that needs to be discussed wider than just BU as this affects all Bitcoin participants.

My advice, don't change the bitcoin consensus rules for transaction maximum size.
 
Reactions: awemany

Roger_Murdock

@Tom Zander I don't see this proposal as removing the "current protocol limit" on transaction size; in fact, if anything, I see it as adding a limit that previously didn't exist. Unless I'm mistaken, there is no defined maximum transaction size in current Bitcoin clients. There is only an effective limit on transaction size as a result of the block size limit -- obviously a single transaction can't be larger than the block it's contained within. (So I'm assuming the largest single transaction one could actually mine right now would be 1 MB minus whatever minimal amount of space is required for block overhead.)

But even if you interpret the status quo to include a 1-MB "consensus rule" for maximum transaction size, this proposal isn't changing that limit -- it's simply providing a tool that would empower users to change that limit if and when they can achieve the requisite support from the network as a whole. That seems entirely in keeping with my understanding of BU's general philosophy.
 

deadalnix

I'm not a super fan. As Tom mentioned, there is already a limit on the size of a transaction. It must be noted that this limit exists for current transactions, and does not need to be kept for transactions with a higher version - which won't be subject to quadratic hashing.

Overall, I don't think we need to trust miners to produce easy-to-validate blocks. It is in their interest to do so, to maximize how fast their blocks propagate. I don't think we should be afraid of a block that takes 30 minutes to validate.
 
Reactions: awemany and Peter R

Roger_Murdock

> Roger;
> > Unless I'm mistaken, there is no defined maximum transaction size in current Bitcoin clients.
>
> Yes, you are mistaken; See
> https://github.com/bitcoin/bitcoin/blob/master/src/validation.cpp#L467
Sorry, I'm not really a programmer so I'm not exactly sure what I'm supposed to be looking at here. I do see the following within a function called "CheckTransaction":

// Size limits (this doesn't take the witness into account, as that hasn't been checked for malleability)
if (::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION | SERIALIZE_TRANSACTION_NO_WITNESS) > MAX_BLOCK_BASE_SIZE)
    return state.DoS(100, false, REJECT_INVALID, "bad-txns-oversize");

Based on the references to "malleability" and "witness," I'm assuming this is code in the latest version of Core that would only be used if and when SegWit actually activates? But in any case, it seems to be checking the size of a transaction against some kind of maximum block size ("MAX_BLOCK_BASE_SIZE") -- which seems consistent with my understanding that transaction size is currently constrained only by the block size?
 

freetrader

@Roger_Murdock :
> assuming this is code in the latest version of Core that would only be used if and when SegWit actually activates
No, this code is always used in that client. The GetSerializeSize function would apply as well in the case where SW is not yet active.

Tom's point is that this function can lead to a rejection of the transaction due to oversize (MAX_BLOCK_BASE_SIZE is defined as 1e6 [bytes]), and is proof that there is a "defined maximum transaction size in current Bitcoin clients". A transaction that does not pass this check is not accepted into the mempool by the majority on the network.
 
Reactions: Roger_Murdock

Peter Tschipper

@Roger_Murdock To be clear, there are two kinds of maximum transaction size in Bitcoin today. One is the maximum size of a transaction that can be propagated by a node, which is set to 100KB. The other is not explicitly defined, but is bounded by the fact (as you mentioned before) that the block size is capped at 1MB: it is the maximum size of a transaction that a miner could create and then mine himself, which today is obviously 1MB. If the block size increases, then that maximum transaction size becomes whatever size of block a miner can create, as you stated above.
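A minimal sketch of that distinction, with illustrative constants and function names (not taken from any particular client):

#include <cstdint>

static const uint64_t MAX_STANDARD_TX_SIZE = 100000;  // ~100KB relay/propagation policy
static const uint64_t MAX_BLOCK_SIZE = 1000000;       // 1MB block cap today

// Nodes only relay transactions up to the policy limit.
bool WillRelay(uint64_t txSizeBytes) { return txSizeBytes <= MAX_STANDARD_TX_SIZE; }

// A miner can still mine a larger transaction he created himself, as long as it
// fits in a block, so the effective cap on transaction size tracks the block size cap.
bool CanBeMined(uint64_t txSizeBytes) { return txSizeBytes <= MAX_BLOCK_SIZE; }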
 

sickpig

> I'm not a super fan. As Tom mentioned, there is already a limit on the size of a transaction. It must be noted that this limit exists for current transactions, and does not need to be kept for transactions with a higher version - which won't be subject to quadratic hashing.
>
> Overall, I don't think we need to trust miners to produce easy-to-validate blocks. It is in their interest to do so, to maximize how fast their blocks propagate. I don't think we should be afraid of a block that takes 30 minutes to validate.

As far as I understand, this will be a temporary band-aid for the quadratic hashing issue. Once a proper fix is deployed, we can reconsider this change.

That said, since the proposed code:

- doesn't change the tx relay policy
- and keeps the same 1MB limit on tx size to declare a block excessive,

this basically means that with a high AD value we are emulating the same situation we are in now.

WRT the need to trust miners to produce "easy" txs, I don't think we are delegating such a role to miners. We are putting every actor of the system in charge of it, like we did for the block size by means of EC. Am I missing something obvious?
> @Roger_Murdock :
>
> No, this code is always used in that client. The GetSerializeSize function would apply as well in the case where SW is not yet active.
>
> Tom's point is that this function can lead to a rejection of the transaction due to oversize (MAX_BLOCK_BASE_SIZE is defined as 1e6 [bytes]), and is proof that there is a "defined maximum transaction size in current Bitcoin clients". A transaction that does not pass this check is not accepted into the mempool by the majority on the network.

But without this change, leaving the code as-is, if we fork using BU's EC for the max block size limit, the effective max size for a transaction will be as big as the new max block size, no?
 
Reactions: awemany

freetrader

> But without this change, leaving the code as-is, if we fork using BU's EC for the max block size limit, the effective max size for a transaction will be as big as the new max block size, no?
@sickpig: Not necessarily. It could remain limited to 1MB max size for a valid transaction, until someone argues a need for bigger transactions. So far, I've not seen anyone make that argument, despite @Tom Zander asking about this precise point on BU slack.
 
Reactions: awemany

theZerg

That code looks to me like the max transaction size is limited to the max block size. This happens to be 1 MB in Bitcoin Core, but is not limited in Bitcoin Unlimited.

And when referencing code in other clients, it's important to clearly indicate that that is what you are doing, so other readers don't think that it is the current code or behavior of Bitcoin Unlimited.

Point #1 in this BUIP requires that Bitcoin Unlimited use the same block limits as in the current network, for all blocks less than or equal to 1MB. So this BUIP does mandate that Bitcoin Unlimited preserves current network behavior.

But this BUIP also seeks to clarify network behavior post hard-fork. This is clearly necessary, as shown by our collective interpretations of the provided code example. Half of us see that logic as resulting in transactions as large as the max block size, and the other half see it as still limiting transactions to 1MB.
@freetrader, given the exact wording of that line of source code, I'd say that the onus is on someone to make the opposite argument. This is what I have attempted to do in this BUIP.
 

sickpig

@freetrader @Tom Zander

So the alternative is to leave the 100KB relay policy as it is and keep the max transaction size at 1MB, right?

What about the sigops per block, keep it at max_block_size/50?

@theZerg one question about the sigops counting. You discovered that the current/legacy algo is broken, quoting the relevant part of your article:

Transaction 1 reports the following statistics:

Vin: 1, Vout: 10000
length: 340126 bytes
calculated sigops: 10000
actual sigops: 1
sighash: 340092
validation time: 125 ms

Transaction 2 reports something completely different:

Vin: 10000, Vout: 1
length: 1474955 bytes
calculated sigops: 1
actual sigops: 10000
sighash: 4100750000
validation time: 394725 ms
Are we going to use a fixed version, but just for blocks bigger than 1MB?
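A rough back-of-envelope, assuming "sighash" above counts bytes hashed and that each input's legacy (pre-SegWit) sighash covers roughly the transaction minus the other inputs' scriptSigs, shows where transaction 2's figure comes from; the constants below are estimates, not numbers from the article:

#include <cstdint>
#include <cstdio>

int main()
{
    // Approximate figures for "transaction 2" above (10000 P2PKH inputs, 1 output).
    const uint64_t txSize = 1474955;      // serialized size in bytes
    const uint64_t nInputs = 10000;
    const uint64_t scriptSigBytes = 107;  // rough size of one P2PKH scriptSig

    // Legacy signing hashes one copy of the transaction per input, with the
    // other inputs' scriptSigs blanked out.
    const uint64_t perInputHashed = txSize - (nInputs - 1) * scriptSigBytes; // ~405 KB
    const uint64_t totalHashed = nInputs * perInputHashed;                   // ~4.05 GB

    std::printf("approx. %llu bytes hashed\n", (unsigned long long)totalHashed);
    return 0;
}

That lands in the same ballpark as the reported 4100750000, and because the per-input hashed size itself grows with the number of inputs, the total hashing work grows quadratically with transaction size.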
 

theZerg

@sickpig, yes. As the BUIP states, for blocks > 1MB the "effort" metric will be used instead of the "legacy" sigops calculator.
 
Reactions: Norway and sickpig

Tom Zander

I'm a bit worried we are talking past each other. The questions that need answering are still not being answered. Why?

If you want to make a change to the Bitcoin protocol, and removing the maximum transaction size is such a change, you have to have a good reason.
So, I ask again: why do you want to change the maximum transaction size as it is defined now?

I've seen some argue that since two rules both happen to have the same maximum value (1e6), they are related. I have to reject the notion that if one changes, the other should automatically change too.
They currently have the same value; that is really the only relation between them. Even if we agree there is a causation, it doesn't excuse us from having to have a good reason before changing the max transaction size.
Case in point: Core has 4MB "blocks" and still keeps the 1MB limit for transactions. Same with Classic.

What the BU membership should ask, should they at some point be asked to vote on this, is: why would you want to change the maximum transaction size as it is defined now?

This is important because there is no problem to solve at all if you don't change those limits.
This BUIP doesn't solve anything except a problem that BU created.
 
Reactions: awemany and Peter R

Mengerian

Seems to me that in this case simplicity is a virtue. Because of the quadratic hashing issue, keeping transaction size limited to 1MB seems like a good idea. Unless there is strong market demand I don't see a need for making that configurable.

In the longer term, a new transaction type that resolves the quadratic hashing problem could have no size limit.

Something like this sounds reasonable for now:

> So the alternative is to leave the 100KB relay policy as it is and keep the max transaction size at 1MB, right?
>
> What about the sigops per block, keep it at max_block_size/50?
 

theZerg

@Tom Zander, the Core code you pointed out references the constant MAX_BLOCK_BASE_SIZE as follows:

if (::GetSerializeSize(tx, SER_NETWORK, PROTOCOL_VERSION | SERIALIZE_TRANSACTION_NO_WITNESS) > MAX_BLOCK_BASE_SIZE)
    return state.DoS(100, false, REJECT_INVALID, "bad-txns-oversize");

So if you increased the max block size, the max transaction size would also automatically increase. The code clearly indicates that the author expects the maximum transaction size to be the same as the maximum block size, otherwise two separate constants would be used.

In point 2, this BUIP recommends that the excessive transaction size instead be 1 MB.

This is the value that I think you are arguing for, so I'm completely confused as to what your issue with this BUIP is...
 

deadalnix

> But without this change, leaving the code as-is, if we fork using BU's EC for the max block size limit, the effective max size for a transaction will be as big as the new max block size, no?
There was no BUIP to put the code as it is, so I don't see why there should be one to roll back the change. Keeping the 1MB limit for transaction size works just fine, and it does not need to apply to later transaction versions.

You don't wire something into the consensus rules as a temporary band-aid, as this is the hardest and riskiest part to change.
 

Tom Zander

@theZerg
> So if you increased the max block size, the max transaction size would also automatically increase.

I think you got confused somewhere. To remove the maximum block size, you would not increase it, you would remove it. If you want to know how to change the maximum block size without increasing the maximum transaction size, I can point you to Core or Classic, which both managed to do so.

But you are talking implementation details and I addressed that exact point a couple of posts ago. Maybe you missed it:
> They currently have the same value; that is really the only relation between them. Even if we agree there is a causation, it doesn't excuse us from having to have a good reason before changing the max transaction size.
> Case in point: Core has 4MB "blocks" and still keeps the 1MB limit for transactions. Same with Classic.
You are not working in a vacuum here. I'm only commenting on this BUIP because it affects a lot more than your implementation, and I think it's important that all clients have the same consensus rules. Would you not agree?

I'll take your reasoning about why the two limits are the same as your answer to the question of why you increased the transaction max size. Feel free to correct me if you have a better answer.

This BUIP is unneeded if you do what Classic has done and avoid removing the maximum transaction size when removing the maximum block size.

This is literally a fix of less than 10 lines.
 
Reactions: awemany