BUIP037: Hardfork SegWit

deadalnix

Proposer: Amaury SECHET
Submitted on: 11/12/2016

Summary:

There are various problems with the current transaction format, including malleability and quadratic hashing. SegWit proposes to add a new transaction format which solves these problems, but does so in a way that does not allow existing UTXOs to be spent using the new format. As a result, SegWit doesn't fully deliver on its promise. FlexTrans proposes an alternative way to solve these problems that is compatible with existing UTXOs, which allows us to eventually weed out the old transaction format.

This BUIP proposes to adopt a strategy similar to FlexTrans, but using implementation details closer to SegWit. Doing so should allow actors in the ecosystem who have already implemented SegWit to support this BUIP with minimal effort.

Specification:

A new transaction format is introduced. It is recognized by its use of version 7.

The binary format is similar to the existing transaction format. However, the signature script of each input is replaced by witness data, each output is limited to a version and a hash, and option and metadata fields are added for extensibility.

For compactness, all integers except the version field are represented via a variable-length encoding. The leading byte indicates how large the representation is. The representation is big endian.
Code:
First byte | bits | from               | to
===========================================
0xxxxxxx   | 7    | 0x0000000000000000 | 0x000000000000007f
10xxxxxx   | 14   | 0x0000000000000080 | 0x000000000000407f
110xxxxx   | 21   | 0x0000000000004080 | 0x000000000020407f
1110xxxx   | 28   | 0x0000000000204080 | 0x000000001020407f
11110xxx   | 35   | 0x0000000010204080 | 0x000000081020407f
111110xx   | 42   | 0x0000000810204080 | 0x000004081020407f
1111110x   | 49   | 0x0000040810204080 | 0x000204081020407f
11111110   | 56   | 0x0002040810204080 | 0x010204081020407f
11111111   | 64   | 0x0102040810204080 | 0xffffffffffffffff
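For illustration, here is a minimal, non-normative sketch (in Python) of an encoder/decoder for this varuint. It assumes each form stores the value minus the lower bound of its range in its payload bits, and that the final 0xff form carries the raw 64-bit value.
Code:
FORMS = [  # (prefix value, prefix bits, payload bits, lower bound of the range)
    (0b0,        1,  7, 0x0000000000000000),
    (0b10,       2, 14, 0x0000000000000080),
    (0b110,      3, 21, 0x0000000000004080),
    (0b1110,     4, 28, 0x0000000000204080),
    (0b11110,    5, 35, 0x0000000010204080),
    (0b111110,   6, 42, 0x0000000810204080),
    (0b1111110,  7, 49, 0x0000040810204080),
    (0b11111110, 8, 56, 0x0002040810204080),
]

def encode_varuint(value):
    assert 0 <= value <= 0xffffffffffffffff
    for prefix, pbits, vbits, low in FORMS:
        if value < low + (1 << vbits):
            # Store (value - low) in the payload bits below the prefix.
            word = (prefix << vbits) | (value - low)
            return word.to_bytes((pbits + vbits) // 8, "big")
    # Final form: a 0xff marker followed by the raw 64-bit value (assumed).
    return b"\xff" + value.to_bytes(8, "big")

def decode_varuint(data):
    """Return (value, number of bytes consumed)."""
    first = data[0]
    if first == 0xff:
        return int.from_bytes(data[1:9], "big"), 9
    for prefix, pbits, vbits, low in FORMS:
        if first >> (8 - pbits) == prefix:
            size = (pbits + vbits) // 8
            word = int.from_bytes(data[:size], "big")
            return (word & ((1 << vbits) - 1)) + low, size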
A transaction has the following binary format:
Code:
Field Size | Description  | Data type | Comments
=================================================
1+         | version      | varuint   | Transaction data format version (in this case, 7)
1+         | tx_in count  | varuint   | Number of Transaction inputs
35+        | tx_in        | tx_in[]   | A list of 1 or more transaction inputs or sources for coins
1+         | tx_out count | varuint   | Number of Transaction outputs
22+        | tx_out       | tx_out[]  | A list of 1 or more transaction outputs or destinations for coins
1+         | option size  | varuint   | The size of the option field, in bytes
?          | option       | metadata  | Optional metadata relative to this transaction
The transaction id (aka txid) is computed by hashing the transaction, skipping over each input's witness and metadata. See the Inputs section for more details.
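For illustration, a non-normative sketch of top-level parsing for this format, building on decode_varuint from the sketch above. parse_txin and parse_txout are hypothetical helpers following the Inputs and Outputs tables (a parse_txin sketch is given in the Inputs section). The txid would then be the hash (presumably double_sha256, as used for the sighash below) of the same transaction re-serialized with each input's witness and metadata omitted.
Code:
def parse_transaction(buf, pos=0):
    version, n = decode_varuint(buf[pos:]); pos += n
    assert version == 7                      # this BUIP's format
    in_count, n = decode_varuint(buf[pos:]); pos += n
    inputs = []
    for _ in range(in_count):
        txin, pos = parse_txin(buf, pos)     # hypothetical helper, see Inputs
        inputs.append(txin)
    out_count, n = decode_varuint(buf[pos:]); pos += n
    outputs = []
    for _ in range(out_count):
        txout, pos = parse_txout(buf, pos)   # hypothetical helper, see Outputs
        outputs.append(txout)
    opt_size, n = decode_varuint(buf[pos:]); pos += n
    option = buf[pos:pos + opt_size]; pos += opt_size
    return {"version": version, "inputs": inputs,
            "outputs": outputs, "option": option}, pos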

The option field uses the metadata format and is interpreted as follows:

A metadata field contains a set of entries. Each entry consists of one varuint tag followed by a value whose size and representation depend on the tag. A given tag value cannot appear twice.

Because the size of the metadata field is known to the parser, the parser can skip over the remaining metadata when it encounters a tag it doesn't know how to interpret.
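A minimal sketch of how a parser could handle a metadata or option field, assuming the values of the tags defined below are themselves varuints and that an unknown tag simply causes the rest of the field to be ignored:
Code:
KNOWN_TAGS = {10: "LockByBlock", 11: "LockByTime"}   # varuint-valued tags

def parse_metadata(field):
    entries, pos = {}, 0
    while pos < len(field):
        tag, n = decode_varuint(field[pos:]); pos += n
        if tag not in KNOWN_TAGS:
            break                              # unknown tag: ignore the rest of the field
        if tag in entries:
            raise ValueError("duplicate tag")  # a tag value cannot appear twice
        value, n = decode_varuint(field[pos:]); pos += n
        entries[tag] = value
    return entries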

This BUIP defines the following tags for transaction metadata:
Code:
Tag | Description | Data type | Comments
========================================
10  | LockByBlock | varuint   | lock_time support (block height)
11  | LockByTime  | varuint   | lock_time support (timestamp)
Inputs

TxIn consists of the following fields:
Code:
Field Size | Description     | Data type | Comments
===================================================
33+        | previous_output | outpoint  | The previous output transaction reference, as an OutPoint structure
1+         | witness count   | varuint   | Number of witness data
?          | witness         | witness   | Witness data
1+         | metadata size   | varuint   | The size of the metadata field, in bytes
?          | metadata        | metadata  | Optional metadata relative to this input
The OutPoint structure consists of the following fields:
Code:
Field Size | Description | Data type | Comments
===============================================
32         | hash        | char[32]  | The hash of the referenced transaction
1+         | index       | varuint   | The index of the specific output in the transaction. The first output is 0, etc.
A witness item consists of the following fields:
Code:
Field Size | Description    | Data type | Comments
==================================================
1+         | witness length | varuint   | Witness length
?          | witness        | uchar[]   | Witness data
The witness fields represent the elements that are on the stack before the redeem script starts to run.
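A non-normative sketch of TxIn parsing along the lines of the tables above, again using decode_varuint from the earlier sketch:
Code:
def parse_txin(buf, pos):
    prev_hash = buf[pos:pos + 32]; pos += 32                 # OutPoint.hash
    prev_index, n = decode_varuint(buf[pos:]); pos += n      # OutPoint.index
    witness_count, n = decode_varuint(buf[pos:]); pos += n
    witnesses = []
    for _ in range(witness_count):
        length, n = decode_varuint(buf[pos:]); pos += n
        witnesses.append(buf[pos:pos + length]); pos += length
    md_size, n = decode_varuint(buf[pos:]); pos += n
    metadata = buf[pos:pos + md_size]; pos += md_size        # see parse_metadata above
    return {"outpoint": (prev_hash, prev_index),
            "witnesses": witnesses, "metadata": metadata}, pos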

This BUIP defines the following tags for input metadata:
Code:
Tag | Description | Data type | Comments
========================================
10  | LockByBlock | varuint   | BIP68/112/113 support
11  | LockByTime  | varuint   | BIP68/112/113 support
Metadata tags are a BUIP039 extension point. They can be signaled with the InMD prefix.

The signature process computes the sighash as double_sha256(txid + previous_output + TxOut + metadata + hashType).

The TxOut in this computation uses the new format. When spending UTXOs from older transactions, refer to the conversion procedure in the Outputs section.
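For illustration, a sketch of this sighash computation. The exact serialization of each component and the width of the hashType field (a single byte here) are assumptions of the sketch, not part of this specification.
Code:
import hashlib

def double_sha256(data):
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def compute_sighash(txid, previous_output, spent_txout, input_metadata, hash_type):
    # previous_output: the serialized OutPoint; spent_txout: the referenced
    # output serialized in the new format; a 1-byte hashType is assumed here.
    preimage = txid + previous_output + spent_txout + input_metadata + bytes([hash_type])
    return double_sha256(preimage)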

Outputs

TxOut consists of the following fields:
Code:
Field Size | Description | Data type | Comments
===============================================
1+         | value       | varuint   | Output value
1+         | version     | varuint   | Script versioning capabilities
20+        | hash        | uchar[]   | A hash committing to the conditions required to claim this output (a public key hash or a script hash)
The 3 least significant bits of the version field indicate the size of the output hash as follows:

Code:
Size class | Hash size
=======================
0          | 20B - 160bits
1          | 24B - 192bits
2          | 28B - 224bits
3          | 32B - 256bits
4          | 40B - 320bits
5          | 48B - 384bits
6          | 56B - 448bits
7          | 64B - 512bits
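In code, recovering the hash size from the version field is a one-liner (illustrative only):
Code:
HASH_SIZES = [20, 24, 28, 32, 40, 48, 56, 64]   # indexed by version & 0x07

def output_hash_size(version):
    return HASH_SIZES[version & 0x07]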
This BUIP defines 4 valid versions:

Code:
Version | Semantic
===================
0       | P2KH - OP_HASH160
3       | P2KH - OP_HASH256
8       | P2SH - OP_HASH160
11      | P2SH - OP_HASH256
For both P2KH and P2SH versions, we define OP_HASH as OP_HASH160 if the hash size is 20 bytes and as OP_HASH256 if the hash size is 32 bytes.

For a P2KH version, the following redeem script is executed to verify the signature:
OP_DUP OP_HASH <pk_hash> OP_EQUALVERIFY OP_CHECKSIG

For a P2SH version, the topmost element of the stack is popped and hashed using OP_HASH, and the result is compared to the output's hash. If the comparison fails, the transaction is invalid. If it succeeds, the popped element is executed as a script to validate the spend.
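A non-normative sketch of spend validation for these versions, using output_hash_size from the sketch above. check_sig and run_script are hypothetical helpers, and the P2KH stack layout (signature pushed before the public key) is an assumption.
Code:
import hashlib

def hash160(data):   # OP_HASH160
    return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

def hash256(data):   # OP_HASH256
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def validate_spend(version, out_hash, stack):
    op_hash = hash160 if output_hash_size(version) == 20 else hash256
    if version in (0, 3):                        # P2KH
        pubkey = stack.pop()
        if op_hash(pubkey) != out_hash:
            return False
        return check_sig(stack.pop(), pubkey)    # hypothetical signature check
    if version in (8, 11):                       # P2SH
        redeem_script = stack.pop()
        if op_hash(redeem_script) != out_hash:
            return False
        return run_script(redeem_script, stack)  # hypothetical script interpreter
    return False                                 # unknown version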

Output versions are a BUIP039 extension point. They can be signaled with the OutV prefix.

Legacy UTXOs can be converted to the new format using the following procedure:

Code:
Script pattern                                         | Version | Script
=========================================================================
OP_DUP OP_HASH160 <pk_hash> OP_EQUALVERIFY OP_CHECKSIG | 0       | pk_hash
OP_DUP OP_HASH256 <pk_hash> OP_EQUALVERIFY OP_CHECKSIG | 3       | pk_hash
<pubkey> OP_CHECKSIG                                   | 3       | double_sha256(pubkey)
OP_HASH160 <script_hash> OP_EQUAL                      | 8       | script_hash
anything                                               | 11      | double_sha256(anything)
NB: this may require the output script to be duplicated in the witness to spend legacy UTXOs. This will happen in a very small number of cases and is on purpose. Witness data can be put in cold storage, while UTXO data needs to be kept hot, as a normal node queries it frequently when operating. Because of this, it is desirable to shift as much data as possible from the UTXO set to the witness data.
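A sketch of the conversion table above, covering the standard P2PKH, P2PK and P2SH templates (the OP_HASH256 variants are omitted); hash256 is the double-SHA256 helper from the previous sketch.
Code:
OP_DUP, OP_HASH160, OP_EQUALVERIFY, OP_CHECKSIG, OP_EQUAL = 0x76, 0xa9, 0x88, 0xac, 0x87

def convert_legacy_output(script):
    """Return the (version, hash) pair for a legacy output script."""
    if (len(script) == 25 and script[0] == OP_DUP and script[1] == OP_HASH160
            and script[2] == 20 and script[23] == OP_EQUALVERIFY
            and script[24] == OP_CHECKSIG):
        return 0, script[3:23]              # P2PKH template -> version 0
    if (len(script) >= 2 and script[-1] == OP_CHECKSIG
            and script[0] in (33, 65) and len(script) == script[0] + 2):
        return 3, hash256(script[1:-1])     # <pubkey> OP_CHECKSIG -> version 3
    if (len(script) == 23 and script[0] == OP_HASH160
            and script[1] == 20 and script[22] == OP_EQUAL):
        return 8, script[2:22]              # P2SH template -> version 8
    return 11, hash256(script)              # anything else -> version 11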

Semantics

- Signature malleability is prohibited as per BIP146.
- The OP_CHECKMULTISIG and OP_CHECKMULTISIGVERIFY dummy argument must be a null vector as per BIP147.

Rationale

This BUIP keeps things close to SegWit to minimize sunk costs for actors already supporting it. It also reuses the tag system from FlexTrans to ensure the format is extensible, but limits it to the metadata and option fields in order to enable BUIP039.

In conclusion, this combines the best parts of SegWit (extensibility and privacy, using a version/hash pair as output) and FlexTrans (extensibility, UTXO compatibility) and allows for BUIP039. In addition, it limits the sunk cost for actors who prepared for SegWit.

EDIT: Revision, removing the lock_time field and reintroducing the sequence one. More revisions to come as this converges toward FlexTrans.
EDIT2: Revision.
Output scripts are now just a hash. This is also an idea from SegWit and allows shifting data from the UTXO set to the witness data. Because this is a hard fork, it can be done in a way that is backward compatible.
Inputs now have a metadata field containing optional data identified via a tag. This is a natural extension point for the future and an ideal place to store optional data such as the sequence field.
EDIT3: Use BUIP039 to extend this transaction format.
EDIT4: Add a global option field which reuses the metadata format. Leverage it to implement lock_time.
EDIT5: Rework the Out structure to be more compact. Add a rationale section to compare with FlexTrans and SegWit.
EDIT6: Remove the comparison to FT and SW as post length is limited.
EDIT7: Use variable-size encoding for integers throughout. Define the format.
 

deadalnix

Fixed :)

EDIT: Hijacking this post as the first one is running out of space. Some comparisons with other proposals out there:

Compared to SegWit:

This BUIP provides the benefits of SegWit. In addition:
- It is compatible with existing UTXOs.
- It is more extensible thanks to the metadata and option fields.
- It provides an extension path other than soft and hard forks, as per BUIP039.
- It doesn't require 2 merkle trees.
- It doesn't create a centrally planned discount.
- It is a hard fork instead of a soft fork.
- It is more compact.

Compared to FlexTrans:

Like FlexTrans, this fixes malleability and quadratic hashing. In addition:
- It provides outputs as a (version, hash) pair, which allows unified addresses even for features that do not exist yet, gives better privacy, and enables BUIP039.
- It provides an extension path other than soft and hard forks, as per BUIP039.
- More compact UTXO set.
- Canonical transaction encoding. FlexTrans requires keeping around the original serialization of a transaction, which may turn out to be a problem.
- It is more compact, as the most common fields do not require a tag.
- Extensibility is limited to the option and metadata fields. This is a limitation on one hand, but it also enables BUIP039. I do not believe it will ever be a problem, as extra data can be added per input and/or per transaction, and the output is a hash, so it can commit to anything we wish in new features.
 

deadalnix

@Justus Ranvier I banged my head against that one. I think this is the wrong tradeoff. Let me explain.

One could prove that way that a UTXO existed in the past. That doesn't allow proving that this UTXO isn't already spent. The UTXO can be spent anywhere, and therefore a node wanting to produce a proof that this UTXO is spent would have to know where any past UTXO has been spent.

This effectively prevents pruning and sharding.

The same proof can be produced using a UTXO commitment, with the additional benefit that it helps pruning and sharding rather than hindering them. It also avoids making transactions bigger. I'm working on a BUIP for UTXO commitments.

ALL: There is a problem with this BUIP as it doesn't support the sequence feature. I'm going to update it.
 

Justus Ranvier

a node wanting to produce a proof that this UTXO is spent would have to know where any past UTXO has been spent.
Ease of producing a proof doesn't matter - only the ease of verifying it.

There will always be nodes capable of discovering double spends - you don't need to worry about their resource requirements. The important thing to do is make sure even the lightest nodes can verify the proofs.

All this emphasis on network-level sharding is optimizing the wrong thing.
 

awemany

Not trying to let this drift too far off-topic, but I honestly never understood the idea of sharding the chain anyway.

To really benefit from sharding, you'd need to split up the graph of transaction generators (people) into domains so much that most transactions stay within a single domain.

But that would create obvious risks to the fungibility and censorship resistance of Bitcoin, or wouldn't it?

After all, Nakamoto consensus means exactly that: a common, world-wide, eventual agreement on which transactions are valid and how they are ordered.
 

deadalnix

@Justus Ranvier Producing the proof being expensive is not that big of a deal, but it still needs to be scalable.

For instance, one could publish, alongside a block, a zk-SNARK proving that the block is valid. The problem is that producing such a proof scales with the amount of input data times the number of operations. That means quadratic scaling for block validation: it doesn't fly.

In this case, a prover would have to crunch the whole blockchain between the announced block and the current block. That is O(n*t) scaling, i.e. quadratic scaling if usage grows linearly, and super-exponential scaling if usage grows exponentially.

@awemany sharding is inevitable for scaling. I get from your comment that you don't really understand what it is. But consider: how would BitTorrent scale if all nodes had to download all files? How would the web scale if all web servers had to host all websites? Sharding is just inevitable if Bitcoin is to succeed.

There is no reason for it to hurt fungibility if it is done properly. There is no reason to believe that running the system on many small machines would be less censorship resistant than running it on a few HUGE nodes; in fact, there are many reasons to believe the opposite to be true. I get from your comment that you are not very familiar with large scale systems.
 

Justus Ranvier

@Justus Ranvier Producing the proof being expensive is not that big of a deal, but it still needs to be scalable.
I have no idea what you're talking about, and there's no need at all to involve zk-SNARK moon math.

There is no such thing as block validity apart from transaction validity (trivial exceptions notwithstanding).

Full nodes already have the ability to accept blocks that only contain valid transactions based on their own locally-maintained UTXO set and always will.

Yes, the hardware requirements will go up over time as the transaction rate increases, until Satoshi's prediction of full nodes only running in data centers comes true, but that's not a problem which needs to be solved any time in the next decade or so.

If we assume the majority of mining power will always be unwilling to extend a chain containing invalid transactions, then we don't need fraud proofs. Non-full-node clients can always just follow the chain with the highest proof of work.

I don't think we should make that assumption, because we can easily do a lot better.

The situation in which fraud proofs make sense is if we assume that there will always exist in the world a minimum of one entity running a full node which is willing to notify the rest of the network in the event the majority of hash power is extending a chain containing invalid transactions.

This is a reasonable assumption to make for the foreseeable future. Maybe it won't be in a hundred years, but we can worry about that a hundred years from now.

The biggest change we need for fraud proofs is for transaction inputs to reference the block from which they originated. It's an easy change which only leaves a small attack surface.

The remaining issues can be addressed by implementing the Merklix tree, and then we're done.
 

deadalnix

I explained why I brought SNARKs in. If producing the proof doesn't scale, then it is not an effective mechanism.

I showed that your proposal scales either quadratically or super-exponentially, depending on the adoption pattern. That is the same scaling characteristic as SNARKs, just with smaller constants.
 

Justus Ranvier

There is no reason for it to hurt fungibility if it is done properly. There is no reason to believe that running the system on many small machines would be less censorship resistant than running it on a few HUGE nodes; in fact, there are many reasons to believe the opposite to be true. I get from your comment that you are not very familiar with large scale systems.
Where did you see me mention the words "censorship resistance"?

I am familiar enough with large scale systems to detect when you're deving for the sake of deving instead of solving problems in the correct order.

Some day the optimizations you want to make will not be premature, but today is not that day.
 

deadalnix

Censorship resistance is mentioned in @awemany 's message.

I suggest you pay a bit more attention to what you are answering. You clearly state in your first message "I have no idea what you're talking about", and now you seem to be addressing a message without understanding its context. I don't think we can have a constructive conversation.

I could address the various reasons why UTXO commitments are needed, even now, but that's more or less off-topic here.

My point stands: producing the proof that a UTXO has been spent, without a UTXO commitment, doesn't scale, and therefore shouldn't be introduced.
 

Justus Ranvier

My point stands: producing the proof that a UTXO has been spent, without a UTXO commitment, doesn't scale, and therefore shouldn't be introduced.
I don't know why you're being so disingenuous now.

Many nodes maintain indices for identifying which block contains a particular txid, and they aren't going to lose this capability any time soon.

They could just as easily maintain an outpoint index which identifies which transaction spends a particular outpoint.

The manner in which you're trying to introduce UTXO commitments, besides being nowhere near necessary yet, undermines the value proposition of Bitcoin.
 

deadalnix

I'm not sure why I can't quote your message. Anyway.

"They could just as easily maintain an outpoint index which identifies which transaction spends particular outpoint."

That is true. But I already showed that this set grows quadratically to super-exponentially, depending on the assumptions you make about the system's growth. The fact that this can be done at the current scale is not a good reason to dismiss the concern. If we are going to have bigger blocks, we need to make sure we introduce changes that scale at worst linearly, preferably sublinearly. If not, you have a problem: these data structures are hard to change, and the bigger Bitcoin gets, the harder to change they become. So I don't think starting to rely on something just because we can do it at this scale is a good idea.

I'm also not especially pushing for UTXO commitments at this stage. The only reason I mentioned them is that they solve the problem you are interested in in a superior way. There is nothing disingenuous in any of this. However, I'd like to know why you think it undermines the value of Bitcoin.
 

Justus Ranvier

There were lots of types of digital cash before Bitcoin, but none of them became valuable.

Bitcoin is valuable because no party, regardless of how much hash power they control, can arbitrarily increase their supply of coins.

This is the value proposition which needs to be maintained.

The value proposition is only completely valid for nodes that have observed every transaction in the blockchain (full nodes).

Absent some kind of new crypto-magic which has not yet been invented, Bitcoin's value proposition will always depend on some nodes operating at any given time which have observed every transaction in the blockchain.

This requirement can not be bypassed by committed UTXO sets, because an attacker with sufficient hash power can commit anything they want. There always has to be some entity who can reject invalid blocks regardless of the hash power promoting them.

We have a problem consisting of three parts:

  1. Bitcoin's value proposition requires full nodes (not likely to change any time soon)
  2. The relative, and possibly absolute, number of full nodes will decrease as Bitcoin becomes more widely used. (hard to fix without negatively impacting the value proposition or limiting utility as digital cash)
  3. The validation performed by full nodes can not help other nodes, because they can't produce verifiable proofs which other network users could use to effortlessly reject invalid chains. (relatively easy to fix)
The reason I don't care so much about the cost of producing double spend proofs is that it's equivalent to the cost of running a node that observes all transactions in the blockchain, and I think nodes which do this will always exist as long as Bitcoin has value.

As long as the proofs are compact and easily verifiable, then we can stop worrying about #2 because the criteria for acceptable security is met with a minimum of one honest full node operating at all times.

If all we need from a security standpoint is one node (because of the availability of comprehensive fraud proofs), then having hundreds or thousands running in the world at any given time is probably a greater safety margin than what the network has now.
 

deadalnix

That doesn't explain why you think UTXO commitments undermine the value of Bitcoin. Nobody is suggesting getting rid of full nodes.

The first difference is that with a UTXO commitment you need to query a data set that is O(UTXO set) to produce the proof, whereas without one you need to query a data set that is O(blockchain). Today there are almost 2 orders of magnitude of difference between these 2 data sets, and that gap is only going to grow.

The second difference is that a UTXO commitment requires adding one hash per header, while your solution requires adding one hash per input.

Comparing the 2 seems like a no-brainer, and you haven't presented any fact that contradicts this so far.
 

Justus Ranvier

A UTXO commitment proves the hashing majority's consensus on the state of the ledger. It's useless for producing a proof that the majority of the hash power has chosen to extend an invalid chain, therefore it's not a solution to the problem.

The second difference is that a UTXO commitment requires adding one hash per header, while your solution requires adding one hash per input.
The block's hash is not necessary. Its chain height is sufficient.

If a few bytes per input really is a problem though, those commitments could be in a separate tree that's calculated rather than broadcast with the block.
 

deadalnix

What are you talking about? The UTXO commitment needs to be validated the same way the transactions are, and blocks rejected if it doesn't match.

For the second point, you can put the block hash or its height; it doesn't really matter. It changes nothing about the point I'm making.
 

awemany

@awemany sharding is inevitable for scaling. I get from your comment that you don't really understand what it is. But consider: how would BitTorrent scale if all nodes had to download all files? How would the web scale if all web servers had to host all websites? Sharding is just inevitable if Bitcoin is to succeed.
I disagree. A typical website is 1MB (or 2MB?) now, and the typical Internet user loads tens to hundreds of pages a day. A typical transaction is 300 bytes, and the typical person does maybe 2-3 a day. And that is why I think Bitcoin is viable as it is.

Regarding sharding: again, sharding means that you have to divide up the load. Dividing up the load means dividing up the transaction generator graph, and making the slices of your pie so that not too many edges are sticking out of your slice. If that is not the case, you won't gain anything and will simply lose all your efficiency to lots of cross-shard transactions.

Meaning that your sharding scheme needs to exploit clustering in this graph. The largest clustering effect is likely geographical, so you'd get a US, a Russian and a German shard - and if governments have their way, not just a shard, but a US, a Russian and a German BTC variant, eventually likely mutually incompatible, with 'desired' inflation, 'KYC/AML' and all that.

EDIT: To make this a little clearer, I am specifically talking about sharding here. I don't see the value, so here's where I disagree with you, @deadalnix.

However, I also think I disagree with @Justus Ranvier on the value (of some kind) of UTXO commitments. If it is made part of the consensus (and contains a running sum of value in the merkle nodes), then a) adherence to the inflation schedule is trivial to check, and b) I can't see how a mining node with 51% of hash power could commit just anything. They could only commit what the network accepts.

That's why I said elsewhere that I think running full nodes from a UTXO set plus transactions from a year back or so should be sufficient to start a node - because your assumption then is that the whole network didn't lie about the UTXO set for a year.

And I wonder whether there might be, for extra protection, also a way to implement this so that archival nodes with everything could produce succinct proofs that a certain UTXO set is invalid.
 

deadalnix

@awemany I see value in sharding, but I don't think this is the appropriate place to debate it anyway. This proposal isn't about sharding. I'll just point out that the assumption you make ("Dividing up the load means dividing up the transaction generator graph, and making the slices of your pie so that not too many edges are sticking out of your slice.") is overly restrictive. You can create sqrt(n) shards of sqrt(n) elements each, for a total of sqrt(n)^2 = n elements. Even if 100% of the edges stick out of their shard (the worst-case scenario), you have to make sqrt(n) requests outside your shard and serve sqrt(n) requests from other shards. In such a situation, the workload for one node scales in sqrt(n), and the number of nodes required also scales in sqrt(n) to maintain a given level of security. There are ways to do better, but that should be enough to show there is value there.
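To put rough numbers on the sqrt(n) argument: with n = 1,000,000 elements, that is 1,000 shards of 1,000 elements each, so even in the worst case a single node makes and serves on the order of 1,000 cross-shard requests rather than handling all 1,000,000 elements.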

I'm 100% with you when you say that clustering would undermine security and fungibility. Clustering is undesirable. However, you are wrong when you assume there is no benefit to sharding without clustering.

As for producing succinct proofs that a UTXO set is invalid, it can be done as long as the structure of the UTXO set allows for compact proofs of presence and of absence. I also agree that you can consider something sufficiently far in the past as "established history" and start validating from there. That's obviously a security tradeoff.