BUIP037: Hardfork SegWit

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
And I wonder whether there might be, for extra protection, also a way to implement this so that archival nodes with everything could produce succinct proofs that a certain UTXO set is invalid.
This is exactly what Justus' proposal enables.

It is a way to structure the data so that a full node can prove fraud to others who do not need to run full nodes. This tilts the balance towards honest nodes, and increases the cost of fraud. It means that even if most nodes and miners collude to (for example) inflate the money supply, it only takes one honest full node to be able to prove the fraud to the world.
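As a rough illustration of the idea (not necessarily the exact construction Justus has in mind): the "inclusion" half of such a proof is just a Merkle branch that a light client can check against a block header it already trusts. A minimal sketch, with the fraud-specific part (showing that the included data actually breaks a rule) omitted:
[CODE]
import hashlib

def dsha256(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_inclusion(txid: bytes, branch: list, index: int, merkle_root: bytes) -> bool:
    """Check that `txid` sits at position `index` in a block whose header
    commits to `merkle_root`. `branch` lists the sibling hashes, leaf to root."""
    node = txid
    for sibling in branch:
        if index & 1:                 # our node is the right child
            node = dsha256(sibling + node)
        else:                         # our node is the left child
            node = dsha256(node + sibling)
        index >>= 1
    return node == merkle_root
[/CODE]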
 
  • Like
Reactions: awemany

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
@Mengerian There are basically two ways to do fraud proofs.

If they're based on committed UTXO sets, then they are less expensive to produce while being less secure.

If fraud proofs are implemented without recourse to a committed UTXO set, then there is no security tradeoff, but the cost of producing them is higher.

In both cases the cost of verifying a proof is the same.

I don't think the cost of producing the latter type of fraud proof will ever be a problem, so I don't see any reason to introduce a security tradeoff in order to obtain an unnecessary performance increase.
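To make the two flavours concrete, here is a rough sketch of what each kind of proof might carry; the field names are illustrative, not taken from any actual proposal. In both cases the verifier ends up hashing one transaction and a short Merkle path, which is why the verification cost is the same:
[CODE]
from dataclasses import dataclass
from typing import List

@dataclass
class CommittedUtxoProof:
    """Cheap to build: point into a UTXO-set commitment, but the proof is only
    as trustworthy as the commitment it references."""
    utxo_root: bytes        # committed UTXO-set root (assumed to exist in a header)
    utxo_path: List[bytes]  # Merkle path for the disputed output inside that set

@dataclass
class HistoryProof:
    """More work to build: the prover digs the funding transaction out of the
    chain, but no new trust assumption is introduced."""
    funding_tx: bytes       # raw transaction that created the disputed output
    block_path: List[bytes] # Merkle path tying funding_tx to a known block header
[/CODE]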
 

deadalnix

Active Member
Sep 18, 2016
115
196
"If they're based on committed UTXO sets, then they are less expensive to produce while being less secure."

You'll have to start substantiating that assertion at some point.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Mengerian: Understood. My point was rather that Bitcoin operating in a mode of constantly 'bootstrapping itself' (new nodes joining from the UTXO set and partial history, with even the initial history eventually forgotten) is, as far as I can see, as good as the current scenario of full nodes holding all transactions, or at least full nodes minus some small epsilon.

@awemany I see value in sharding, but I don't think it is appropriate to debate it here anyway. This proposal isn't about sharding. I'll just tell you that the assumption you make, "Dividing up the load means dividing up the transaction generator graph, and making the slices of your pie so that not too many edges are sticking out of your slice.", is overly restrictive. You can create sqrt(n) shards of sqrt(n) elements each, for a total of sqrt(n)^2 = n elements. Even if 100% of the edges are sticking out (the worst-case scenario), you have to make sqrt(n) requests out of your shard and serve sqrt(n) requests from other shards. In such a situation, the workload for one node scales as sqrt(n), and the number of nodes required to maintain a given level of security also scales as sqrt(n). There are ways to do better, but that should be enough to show there is value there.
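Spelled out, the quoted arithmetic (assuming uniform shards and the worst case where every edge crosses a shard boundary) is:
\[
\sqrt{n}\ \text{shards} \times \sqrt{n}\ \text{elements per shard} = n\ \text{elements},
\]
\[
\text{work per node} \le \underbrace{\sqrt{n}}_{\text{requests sent out}} + \underbrace{\sqrt{n}}_{\text{requests served}} = O(\sqrt{n}), \qquad \text{nodes needed for fixed security} = O(\sqrt{n}).
\]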
Point taken.

But I still don't see how you can maintain a given level of security with just your sqrt(n) nodes then. Those sqrt(n) nodes also contain only sqrt(n) of the data, and with the assumption that you want equal security (~ number of full nodes), you'd need sqrt(n) times the number of nodes?

I guess I was arguing somewhat along the wrong lines for the right reason (and I feel like I have a deja vu, as if we argued the same things here before): With sharding, you only get a fraction of the data stored and transmitted per shard. But with random distribution across the shards, all I can assume is that a sharded node stores 1/(no-of-shards) fraction of my data.

With some kind of clustering, I'd know more about the shards I need, so that would change the full node security question.

For example, each person storing their own UTXOs (plus the Merkle branches leading to their coins) could be considered, in a way, an extreme form of sharding that exploits clustering.

But that would still mean that each transaction has to propagate through the full network, so that everyone is on the same page regarding the new UTXO set in the next block. I do, however, definitely see value in this (because it would remove the psychological roadblock to scaling posed by increasing storage requirements). But I can't see how you'd get there without still using the full bandwidth (and even some more, to transmit the Merkle branches from the UTXO set to the other nodes).

But maybe I am still too stupid here to see the scenario where you'd truly gain from random sharding. Can you sketch a realistic scenario?
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@deadalnix Getting back to your BUIP proposal.

What do you see as the main benefit of BUIP037 versus Flexible Transactions? One stated benefit is that it is easier to adopt for parties that have already done work to implement segwit support. Are there any other pros or cons relative to Flexible Transactions? Is BUIP037 more future-friendly than segwit, and how does it compare on introducing technical debt?

I'm also wondering: is there a structure similar to the segwit witness merkle tree?
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
I think the main benefit at this point would be uniting the bitcoin community again.
 

deadalnix

Active Member
Sep 18, 2016
115
196
@deadalnix Getting back to your BUIP proposal.

What do you see as the main benefit of BUIP037 versus Flexible Transactions? One stated benefit is that it is easier to adopt for parties that have already done work to implement segwit support. Are there any other pros or cons relative to Flexible Transactions? Is BUIP037 more future-friendly than segwit, and how does it compare on introducing technical debt?

I'm also wondering: is there a structure similar to the segwit witness merkle tree?
Good questions!

First and foremost, I'm discussing these ideas with Tom on a regular basis, and my hope is that we can get the two proposals to converge. This includes various things that FlexTrans does not support, such as using witness data rather than the input script. This sidesteps a bunch of byzantine rules in the validation process that serve no purpose other than having to be handled because they are possible to trigger. Tom has been working on porting this to FlexTrans since then.

Another improvement is the BIP143 hashing scheme. It improves hardware wallet and SPV security by making the signature invalid if the device was lied to about the UTXO it is spending. It also enables double-spend proofs. Once again, Tom has been working toward porting this to FT, so it is all good.
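For reference, a simplified sketch of the BIP143 digest (SIGHASH_ALL case only, with the reusable pieces precomputed by the caller) shows why this works: the amount of the spent output is part of the signed data, so a device that was lied to produces a signature the network rejects.
[CODE]
import hashlib
import struct

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def bip143_sighash(n_version: int,
                   hash_prevouts: bytes,  # dSHA256 of all input outpoints
                   hash_sequence: bytes,  # dSHA256 of all input nSequence fields
                   outpoint: bytes,       # 36 bytes: funding txid + output index
                   script_code: bytes,    # length-prefixed script being satisfied
                   amount: int,           # value of the spent output, in satoshis
                   n_sequence: int,
                   hash_outputs: bytes,   # dSHA256 of all serialized outputs
                   n_locktime: int,
                   sighash_type: int) -> bytes:
    """Simplified BIP143 digest (SIGHASH_ALL only): the signed preimage
    commits to the amount of the UTXO being spent."""
    preimage = (struct.pack("<i", n_version)
                + hash_prevouts
                + hash_sequence
                + outpoint
                + script_code
                + struct.pack("<Q", amount)
                + struct.pack("<I", n_sequence)
                + hash_outputs
                + struct.pack("<I", n_locktime)
                + struct.pack("<I", sighash_type))
    return dsha256(preimage)
[/CODE]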

It also deals with BIP146, which removes signature malleability. This is honestly not very important, because signature malleability does not matter much in either FT or this proposal, as it does not affect the txid.
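(For context, the signature-malleability part of BIP146 is the LOW_S rule: of the two algebraically valid S values for an ECDSA signature, only the one not greater than half the curve order is accepted. A minimal check:)
[CODE]
# secp256k1 group order
SECP256K1_N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141

def is_low_s(s: int) -> bool:
    """BIP146 LOW_S rule: reject the 'high' of the two equivalent S values,
    removing one source of third-party signature malleability."""
    return 0 < s <= SECP256K1_N // 2
[/CODE]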

Lastly, this proposal leverages encodings already used in various places in the code, which means the existing serialization framework can be reused. It is in this kind of code that Matt discovered the flaws in FT - which are fixed now - but it should indicate that this code is non-trivial, and that using the same framework is both less work and less risk.

Last but not least, I plan to update this proposal myself to include various ideas that fell out of discussions with Tom.
I think the main benefit at this point would be uniting the bitcoin community again.
Yes, I think getting more people on board will be easier if the proposal is closer to SegWit. People hate sunk costs, so adoption will be easier if we can limit them.
 

deadalnix

Active Member
Sep 18, 2016
115
196
I updated the BUIP to add natural extension points and future-proof the format (this was a weakness of this proposal compared to FT), and to use SegWit-style TXOs, as they are more compact. Contrary to SegWit, however, this is kept compatible with existing UTXOs, with a bit of plumbing.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@deadalnix thanks for the answers! Great to hear you and Tom Z are combining efforts to work towards a best-of-breed solution.

So if I understand correctly, this proposal allows existing UTXOs to be spent using the new transaction type, whereas SegWit does not. Is this correct? This is a big advantage, as it would allow phasing out the old transaction type while still keeping all UTXOs spendable.

How would you imagine this new transaction type being introduced? Would it be introduced alongside the existing transaction types, with both coexisting on the network, and then, after a suitable adoption period, the old transaction type would be phased out? Or would the old transaction type be allowed forever, with a size limit to mitigate quadratic hashing?
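(For readers who haven't run into it: the quadratic hashing problem comes from the legacy signature hash re-hashing nearly the whole transaction once per input, so for a transaction of serialized size $s$ with $k$ inputs,)
\[
\text{hashing work} \approx k \cdot s, \qquad k = O(s) \ \Rightarrow\ \text{work} = O(s^2).
\]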
 

deadalnix

Active Member
Sep 18, 2016
115
196
So if I understand correctly, this proposal allows existing UTXOs to be spent using the new transaction type, whereas SegWit does not. Is this correct?
That is correct. For completeness, it has to be noted that this is also the case for FT.

How would you imagine this new transaction type being introduced?
I see this new transaction type being introduced as a hard fork - there is no way around it. It can coexist with the old format for a while without problems. We can then plan a phase-out of the old transaction format over time - it may take years. Phasing out the old format would be a soft fork. Alternatively, we can keep the old format around forever with a limit on transaction size to contain the quadratic hashing problem, but that means we can't get rid of malleability completely, which saddens me :)

So yes, you guessed right.

However, while I haven't written any spec for this, I would like to have a way to introduce new script versions and new metadata that does not require a hard fork, but that the network can oppose - contrary to a soft fork - by using a mechanism similar to EB/AD. The same process could be applied to all the OP_NOP operations. But I think that belongs in another BUIP.
 
Last edited:

drwasho

New Member
Dec 9, 2015
5
15
Brisbane, Australia
keybase.org
SegWit proposes to add a new transaction format whichsolve this problem, but does so in way that do not allow to spend existing UTXO.
SegWit proposes to add a new transaction format, which solves this problem but does so in a way that does not allow to spend existing UTXO.

As result, SegWit doesn't delivers on its promise. FlexTrans proposes an alternative way to solve these problems in a way that is compatible with existing UTXO, which allow to eventually weed out the old transaction format.
As a result, SegWit doesn't deliver on its promise. FlexTrans proposes an alternative way to solve these problems in a way that is compatible with existing UTXO, which allows us to eventually weed out the old transaction format.

This BUIP proposes to adopt a strategy similar to FlexTrans but using implementation details much more similar to SegWit. Doing so should allow actors in the ecosystem which already implemented SegWit to support this BUIP with minimal efforts.
This BUIP proposes to adopt a strategy similar to FlexTrans but using implementation details more similar to SegWit. Doing so should allow actors in the ecosystem, who have already implemented SegWit, to support this BUIP with minimal effort.
 

deadalnix

Active Member
Sep 18, 2016
115
196
I updated the BUIP to describe how to deploy new script versions and/or new metadata tags. This should make the transaction format future-proof.
 
  • Like
Reactions: freetrader

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
@deadalnix, I'm glad that you posted this option, but I think it makes sense to collect all the proposals for new transaction formats and vote for a single one. As the submitter, however, you have the right to force a speedy vote to occur. Are you willing to wait for a "pick the best option" vote, or do you intend to require that voting on this BUIP occur in a timely fashion?
 

deadalnix

Active Member
Sep 18, 2016
115
196
I added a field named option to backport the lock_time field. It can also be leveraged to add future features as a BUIP039 upgrade, which I think is a plus. I'm also working on adding BIP143.

Open questions:
- If we want to enable aggregated signatures, such as Schnorr or BLS signatures, we need a field somewhat like option, but one that is not included in the transaction id (see the sketch below). Do you guys think it is worth it?
- I also plan to do encoding tweaks to reduce transaction size. Worth it, or should I stick with the encoding used by most of the code?
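For illustration only, here is one hypothetical way such a tagged option field could be serialized, with some tags (for example an aggregated signature) deliberately left out of the txid so they stay outside the id commitment. The tag numbers and layout below are invented, not part of the BUIP:
[CODE]
import hashlib
import struct
from typing import List, Tuple

# Tag numbers invented for illustration only.
TAG_LOCK_TIME = 0x01       # backported lock_time carried in `option`
TAG_AGG_SIGNATURE = 0xF0   # example of data deliberately left out of the txid

NON_TXID_TAGS = {TAG_AGG_SIGNATURE}

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def serialize_options(options: List[Tuple[int, bytes]], for_txid: bool) -> bytes:
    """Tag-length-value encoding; when hashing for the txid, skip the tags
    that must remain malleable (such as aggregated signatures)."""
    out = b""
    for tag, payload in options:
        if for_txid and tag in NON_TXID_TAGS:
            continue
        out += struct.pack("<BH", tag, len(payload)) + payload
    return out

def compute_txid(core_tx: bytes, options: List[Tuple[int, bytes]]) -> bytes:
    """Illustrative txid: the core transaction plus only the txid-committed options."""
    return dsha256(core_tx + serialize_options(options, for_txid=True))
[/CODE]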

@Justus Ranvier I think we could reserve a tag number for block height for inputs if you feel strongly about this. It could be made mandatory in the future via soft fork if that ends up being very important. I don't think there is consensus to make it mandatory, and I don't think we can reach it in the kind of time frame we are aiming for here.