BUIP016 (passed): Consensus with Classic on txn size limit

solex

BUIP016: Consensus with Classic on txn size limit

Proposer: Andrew Clifford

Submitted on: 20 Jan 2016
revision 1: 21 Jan 2016 (no longer a simple 100kb limit for txns)


Summary

This is a "bare bones" BUIP just to obtain BU membership agreement for implementing the final patch used in Classic for transaction size limiting. This patch is necessary for blocks >1MB because there is a theoretical attack possible on Bitcoin from a rogue miner creating very large txns which take minutes to verify, i.e. too many signature validation operations (sigops). BU is safe from this attack, but may eventually reject blocks which Classic considers valid, perhaps not for a few years but the probability of a divergence increases as block sizes steadily increase.

Gavin Andresen has written a sophisticated pull request for Bitcoin Classic:

https://github.com/bitcoinclassic/bitcoinclassic/commit/842dc24b23ad9551c67672660c4cba882c4c840a
Accurate sigop/sighash accounting and limits

Adds a ValidationCostTracker class that is passed to
CheckInputs() / CScriptCheck() to keep track of the exact number
of signature operations required to validate a block, and the
exact number of bytes hashed to compute signature hashes.

Also extends CHashWriter to keep track of number of bytes hashed.

Signature operations per block are limited to MAX_BLOCK_SIGOPS
(unchanged at 20,000)

Bytes hashed to compute signatures is limited to MAX_BLOCK_SIGHASH
(1.3 GB in this commit).
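
As a minimal sketch of the accounting described above (the class name and the two limit constants are taken from the commit message; the member names and everything else are illustrative assumptions, not Gavin's actual code):

    // Sketch only: per-block sigop/sighash accounting, not the Classic patch.
    #include <cstdint>

    static const uint64_t MAX_BLOCK_SIGOPS  = 20000;          // unchanged
    static const uint64_t MAX_BLOCK_SIGHASH = 1300000000ULL;  // 1.3 GB

    class ValidationCostTracker {
    public:
        // Called from script checking as each signature is verified.
        void AddSigOps(uint64_t n)       { nSigOps += n; }
        // Called from the hash writer as signature-hash bytes are hashed.
        void AddSigHashBytes(uint64_t n) { nSigHashBytes += n; }

        // A block fails validation once either running total exceeds its limit.
        bool IsWithinLimits() const {
            return nSigOps <= MAX_BLOCK_SIGOPS &&
                   nSigHashBytes <= MAX_BLOCK_SIGHASH;
        }

    private:
        uint64_t nSigOps = 0;
        uint64_t nSigHashBytes = 0;
    };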


Proposal


That the BU Developer may implement the same patch which Classic uses for txn size limiting in a BU release before or during the first release of Classic.
 

Roy Badami

Presumably it would be more in keeping with the BU philosophy to make this a configuration option, though, so that people could remain compatible with Core's consensus rules if they wish?

I'd favour Classic-compatible consensus rules being the default, though.
 

solex

@Roy Badami
This change will only have an effect on blocks larger than 1MB, so Core remains compatible unless they increase their limit and count sigops a different way, which I can't believe they would do (but you never know!).
 

Roy Badami

Notably, I don't think we can get away without tracking the activation of the Classic fork. The new 1,300,000,000-byte sighash limit only applies after the Classic fork triggers. Applying it as a blanket limit could result in us rejecting a valid block, since there is currently no such limit in force, at which point we would just stop tracking the valid chain (and in all likelihood not see any new blocks at all).
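
To make this concrete, here is a rough sketch of gating the new limit on fork activation. The function and flag names are assumptions for illustration, not code from Classic or BU:

    // Illustrative only: enforce the sighash limit only once the Classic
    // fork has activated, so pre-fork blocks are not wrongly rejected.
    #include <cstdint>

    static const uint64_t MAX_BLOCK_SIGHASH = 1300000000ULL;

    bool CheckSigHashLimit(uint64_t nBytesHashed, bool fClassicForkActive)
    {
        if (!fClassicForkActive)
            return true;                   // no sighash limit before the fork
        return nBytesHashed <= MAX_BLOCK_SIGHASH;
    }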
 

solex

@Roy Badami I agree with you.
While BU has BUIP001 to cope with varying interpretations of excessive block sizes, it does not have the same capability for excessive sigops & sighashes. And TBH it shouldn't, because when these limits are approached we are looking at txns without a decent business case, which could be broken up if they are really needed.
 
I would make this a configurable option just like the block size limit, with both the preferred limit and an acceptable limit after a specified depth.

Eventually, I think it makes sense to replace this and the block size limit with a "block cost limit" as proposed in the SegWit BIP, but with a more complex cost calculation and BU's configurability for the weights and limit.
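
As a rough illustration of that cost-limit idea (the weights, field names and default values are assumptions for illustration, not part of any proposal):

    // Sketch of a single "block cost" limit over weighted resource usage,
    // with BU-style user-configurable weights. All values are illustrative.
    #include <cstdint>

    struct CostWeights {
        uint64_t perByte     = 1;   // serialized block size
        uint64_t perSigOp    = 50;  // signature operations
        uint64_t perHashByte = 1;   // bytes hashed for signatures
    };

    uint64_t BlockCost(uint64_t nBytes, uint64_t nSigOps, uint64_t nSigHashBytes,
                       const CostWeights& w)
    {
        return nBytes * w.perByte +
               nSigOps * w.perSigOp +
               nSigHashBytes * w.perHashByte;
    }

    // A block would be "excessive" when BlockCost(...) exceeds the user's
    // configured limit, analogous to the excessive block size setting.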
 

Aquent

I suppose it could either be the patch or, perhaps better, make the default limit 2MB, rather than the current 1MB.
 

Roy Badami

Well, so AIUI Classic changes three things (all of which will trigger simultaneously):

1. The block size hard limit (increases from 1,000,000 to 2,000,000). We already know how to handle that.

2. The MAX_SIGOPS limit (changes from 20,000 with legacy counting to 20,000 with strict counting). I'm pretty sure that this is strictly an increase (i.e. legacy counting often overestimates the number of sigops, but never underestimates). This can be handled with configuration parameters in much the same way as the block size limit; the only complication is that it's not just a number - it's a number plus a counting style. So sigops is being increased from 20000L to 20000S (where L/S indicates legacy/strict counting); see the sketch at the end of this post. EDIT: Actually, it doesn't really increase the risk. Someone who wanted to waste money mining such a block could DoS BU nodes today anyway with a large block that we would accept and attempt to validate.

3. The MAX_SIGHASH limit is being *reduced* from infinity to 1,300,000,000. This is the one that is a little harder to figure out how to handle. We could potentially keep it at infinity until after the Classic fork triggers, and then change it manually (assuming we're only targeting non-mining nodes right now), but this increases our vulnerability to DoS by means of blocks that take ludicrously long to validate.

EDIT: As I say, this last change (the sighash limit) is being reduced, not increased. Changes 1 and 2 permit certain blocks that are currently invalid due to exceeding the limits, but change 3 is the other way round. So blocks that are currently valid could be invalid under this change. But the change fixes a pathological case (that could be exploited as a DoS) that gets worse with larger blocks - hence it makes sense that Gavin is proposing to roll out this fix as part of the hard fork (even though if we had all the time in the world we could just do this as a separate soft fork first).

EDIT^2: Still, we have 28 days to deal with this if/when the Classic fork triggers.
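
A rough sketch of point 2's "a number plus a counting style" as configuration (the names are illustrative, not actual BU options):

    // Sketch only: a sigops limit paired with the counting rule used.
    #include <cstdint>

    enum class SigOpCounting { LEGACY, STRICT };  // "20000L" vs "20000S"

    struct SigOpLimit {
        uint64_t nMaxSigOps;
        SigOpCounting counting;
    };

    // Pre-fork and post-fork settings a node could track side by side.
    static const SigOpLimit PRE_FORK_LIMIT  = { 20000, SigOpCounting::LEGACY };
    static const SigOpLimit POST_FORK_LIMIT = { 20000, SigOpCounting::STRICT };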
 