What could SegWit look like if it were a hardfork?

Tom Zander

Active Member
Jun 2, 2016
208
455
I've been asked one question quite regularly and recently with more force.
The question is about Segregated Witness and specifically what a hard
fork based version would look like.

Segregated Witness (or SegWit for short) is complex. It tries to solve
quite a lot of completely different and unrelated issues, and it tries to
do this in a backwards-compatible manner. No small feat!

So, what exactly does SegWit try to solve? We can find that information in the
benefits document.

    • Malleability fixes
    • Linear scaling of sighash operations
    • Signing of input values
    • Increased security for multisig via pay-to-script-hash (P2SH)
    • Script versioning
    • Reducing UTXO growth
    • Compact fraud proofs
As mentioned above, SegWit tries to solve these problems in a backwards-compatible
way. This requirement exists only because the authors of SegWit set it for
themselves; they did so because they wished to roll out this protocol upgrade
as a softfork.
This post is going to attempt to answer the question of whether that is indeed
the best way of solving these problems.


Full post at:

http://zander.github.io/posts/Flexible_Transactions/
 

Bagatell

Active Member
Aug 28, 2015
728
1,191
Reducing UTXO growth

They don't want Factom, Stampery, CounterParty et al.'s business?
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Great post, Tom.
I am confused about one thing:
Linear scaling of sighash operations
This has been fixed in the BIP109 2MB hardfork quite some months ago.
I don't understand this claim. It is not mentioned in BIP109. In this post, Andrew Chow wrote:
Segwit introduces a new hash pre-image generation algorithm which will make signature hashing operations scale linearly. The hash pre-image is the data that is to be hashed. This hash will be signed and that is the signature for the transaction. The change enables the use of hash midstates which allows for faster and more efficient hashing. This makes the relationship between the number of signature hashes and the time to generate them linear instead of the former quadratic relationship.
I'm not aware of the BIP109 2MB hardfork changing the signature hashing in any way, except for adjusting some limits outlined in the BIP. Perhaps I missed something - I would appreciate if you could clarify.
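
As a toy illustration of the difference Chow describes (this is not the real Bitcoin serialization; the per-input and per-output sizes below are made-up round numbers): with the legacy scheme every input signs a hash of roughly the whole transaction, so the total bytes hashed grow quadratically with the number of inputs, while a midstate-style scheme hashes the shared data once and only adds a constant amount per input.

# Toy model only: estimated bytes fed into SHA256 while verifying a
# transaction, as a function of its input count. Sizes are illustrative.
INPUT_SIZE = 150    # assumed bytes per input
OUTPUT_SIZE = 34    # assumed bytes per output

def legacy_sighash_bytes(n_inputs, n_outputs=2):
    # Legacy: each input re-hashes (roughly) the whole transaction,
    # so total hashing grows quadratically with the input count.
    tx_size = n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE
    return n_inputs * tx_size

def midstate_sighash_bytes(n_inputs, n_outputs=2):
    # Midstate-style: the shared parts are hashed once and reused,
    # so each extra input only adds a constant amount of hashing.
    shared = n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE
    return shared + n_inputs * INPUT_SIZE

for n in (10, 100, 1000):
    print(n, legacy_sighash_bytes(n), midstate_sighash_bytes(n))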

Thanks again for your work - I'm hoping Bitcoin can adopt your Flexible Transactions, they seem MUCH nicer than the SWSF.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
@satoshis_sockpuppet: imo calling a hardcoded cap "linear scaling" isn't accurate, as long as the scaling is still quadratic on the restricted set of inputs which don't exceed the cap.

If sighashes scaled linearly within the permitted domain, then I'd say it would be entirely valid to call it linear scaling - even if there was a protective cutoff due to technological limits.

However, transactions that exceed the cap are simply rejected; that's technically no longer scaling beyond that point, just as only being able to fill blocks below 1MB is not scaling.

My personal opinion is that the discussion paper should not claim linear scaling until the design truly fulfills that objective and we are closer to removing the protective cap.

This is splitting hairs though. I'm confident that the linear scaling issue can also be solved with Tom's solid design as the basis.
 
  • Like
Reactions: satoshis_sockpuppet

Tom Zander

Active Member
Jun 2, 2016
208
455
Great post, Tom.
I am confused about one thing: [...]
I don't understand this claim. It is not mentioned in BIP109. In this post, Andrew Chow wrote: [...]
I'm not aware of the BIP109 2MB hardfork changing the signature hashing in any way, except for adjusting some limits outlined in the BIP. Perhaps I missed something - I would appreciate if you could clarify.
Thanks again for your work - I'm hoping Bitcoin can adopt your Flexible Transactions, they seem MUCH nicer than the SWSF.

Sorry for the delay. I should indeed clarify.


The concept we are talking about originally started as SigOp protection. See this discussion: https://bitcointalk.org/?topic=140078


It turned out that sigops didn't actually represent the amount of work very well, and as such the approach was deemed sub-optimal. It may even be exploitable in the future.


BIP109 defines a change from counting sigops to counting something called "sighashbytes". This addresses the possible vulnerability and defines an upper limit on the one task that takes by far the longest of all: the hashing of data. BIP109 sets the upper limit to 1.3GB of hashed data for the entire 2MB block, thereby solving the exploit referred to above.
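
A minimal sketch of what that accounting could look like during block validation (the 1.3GB figure is the BIP109 limit; the function name and the per-transaction sighash_bytes() accessor are hypothetical, only here to illustrate the idea):

MAX_SIGHASH_BYTES = 1_300_000_000   # BIP109 cap for a 2MB block

def block_within_sighash_limit(transactions):
    # Reject a block whose total signature-hashing work exceeds the cap.
    # Each transaction is assumed to report how many bytes it would feed
    # into SHA256 while its signatures are checked (hypothetical API).
    total = 0
    for tx in transactions:
        total += tx.sighash_bytes()
        if total > MAX_SIGHASH_BYTES:
            return False
    return True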


For reference (the ISO is 316,669,952 bytes):


$ time sha256sum debian-testing-amd64-netinst.iso
0c94f92745b1e6821facf925a118064b727d30ba057cfd9b8897e3ed03f8d2a9 debian-testing-amd64-netinst.iso

real 0m3.417s
user 0m2.188s
sys 0m0.144s


This is a good balance. Some 650MB of hashing per MB of block will allow even the most extraordinary transactions while protecting machines from issues. The hashing algorithm shipped in Bitcoin is actually faster than the above, but you can try this on your own machine ;)
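
As a rough extrapolation from those numbers (taking the sha256sum timing above at face value, even though the SHA256 code shipped in Bitcoin is faster):

# Back-of-envelope: scale the sha256sum timing above to the BIP109 cap.
ISO_BYTES = 316_669_952
ISO_SECONDS = 3.417             # 'real' time from the run above
CAP_BYTES = 1_300_000_000       # sighash-byte cap for a 2MB block

worst_case = CAP_BYTES / ISO_BYTES * ISO_SECONDS
print(round(worst_case, 1), "seconds of hashing for a worst-case 2MB block")
# roughly 14 seconds with this slow tool; the real figure is lower.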



The Core team then went and invented something they call "linear hashing", presumably some change in the Bitcoin protocol of which we don't know the side-effects.

The solution of migrating to sig-hash-byte counting is a good one and I have not heard any downsides to it. I think we should stick to it and not use something new that Core invented months after the problem had been solved, at least not without them giving very good arguments for why their solution is better.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
I agree that sighashbyte counting is probably a good enough way to constrain the problem.

The only thing I don't like about the current implementation is that it's a hardcoded limit of 1.3GB.

In my opinion, it would be better to make it a configuration parameter that is part of emergent consensus, like BU's block size.

Set the default to 1.3GB, let the mining network configure itself according to what it can handle - problem solved.

P.S. glad to see you got a post through TOR :-D
 
  • Like
Reactions: satoshis_sockpuppet

HelloGuy

Member
Mar 16, 2016
42
20
I agree with @freetrader, the hardcoded sigop limit should be avoided; it is very much like another MaxBlockSize constant that will bring us trouble several years later. We should leave non-mining nodes with no restriction based on such a constant, and miners can figure out for themselves how to defend against such an attack. There is no need to add this constraint to ordinary nodes.
 

HelloGuy

Member
Mar 16, 2016
42
20
It seems that FT has received very little response from the Bitcoin community, even inside bitco.in. To add some heat to the discussion, I would like to add some words from Gregory Maxwell:

https://www.reddit.com/r/Bitcoin/comments/4v9g1t/fit_more_in_a_block_with_flexible_transactions/

This is a confused and misleading post. It gives a detailed (if not entirely correct) accounting for the existing Bitcoin serialization, then presents its alternative but ignores all of its overheads making it look much more compact than it is.

It also seems to think that you need a hardfork to do this-- in fact. you can use a different format to store or transmit transactions without any consensus changes at all. Bitcoin Core first did this with the creation of the UTXO set (which uses a highly compact encoding, even more efficient than described here). People have experimented with using more compact encodings on the wire and on disk, but the savings is small enough that it hasn't so far seemed worth the complexity.

Stored in reverse
Txid is serialized in exactly the same order it comes out of sha256. So maybe it meant displayed in reverse?

It seems confused in general about the existing transaction format, e.g. saying that three bytes of checksequence data is in the version field. This isn't the case.

The complete dismissal of UTXO incentives is bizarre.

Linear scaling of sighash operations
This has been fixed in the BIP109 2MB hardfork quite some months ago.
BIP 109 retains the quadratic hashing, but adds a new block size limit of 1GB of total hashing for a 2MB block. It doesn't make the hashing linear in the size of the block, just bounded (and it was already bounded to begin with by the blocksize).

It also shows a script version tag on the inputs in the spending transaction; if done as described this would likely result in anyone being able to steal anyone's coins, or other troubling contract misinterpretation. The spender shouldn't get to decide the terms of the contract.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
I agree with @freetrader, the hardcoded sigop limit should be avoided; it is very much like another MaxBlockSize constant that will bring us trouble several years later. We should leave non-mining nodes with no restriction based on such a constant, and miners can figure out for themselves how to defend against such an attack. There is no need to add this constraint to ordinary nodes.
The size of 1.3GB is 650MB per megabyte of block. As long as we follow that, it's not a hardcoded constant, because it increases with the size of the block.
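
A one-line way to read that (the constant and function name here are just illustrative):

SIGHASH_BYTES_PER_BLOCK_BYTE = 650   # 650MB of hashing per 1MB of block

def max_sighash_bytes(block_size_bytes):
    # 1.3GB for a 2MB block, 2.6GB for a 4MB block, and so on.
    return block_size_bytes * SIGHASH_BYTES_PER_BLOCK_BYTE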
It seems that FT has received very little response from the Bitcoin community, even inside bitco.in. To add some heat to the discussion, I would like to add some words from Gregory Maxwell:
I'm not sure why the "we did this in Core before" argument needs answering. If any of Greg's words make you think a reply is needed, please copy them here and I'll answer them.
 
  • Like
Reactions: satoshis_sockpuppet

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
i didn't read the whole thing.
but i have a comment nonetheless...

if we use tags to allow for "flexible TX" then that adds a huge amount of bytes to the TX. i guess nodes can interpret the tags and save the TX as a binary blob to disk (so as not to waste space on disk) and only use "flexible TX" to communicate TXs to other nodes.

but still, this will add a huge amount of bytes to TXs, will it not?
 

Tom Zander

Active Member
Jun 2, 2016
208
455
Quoting adamstgbit (sorry, the quote button is missing...):

> if we use tags to allow for "flexible TX" then that adds a huge amount of bytes to the TX.

The scheme is actually quite smart and the resulting transactions are actually smaller than they are now. The blog goes into the reasons why, but the easiest one to understand is that in the current format much of the content has a hard-coded length and every field is mandatory, so the current format writes a lot of data that does not need to be stored.


> but still this will add huge amount of bytes to TX's will it not?

Basic transactions are about 3% less in size than the current format. After we implement pruning we can remove 75% (leaving 25% of the current size).
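
To make the size argument concrete, here is a toy sketch (this is not the actual Flexible Transactions token format, just an illustration of how tagged, optional fields with variable-length integers can undercut a format whose fields are all mandatory and fixed-width):

# Toy comparison: fixed-width mandatory fields vs. tagged optional fields.
# This is NOT the real Flexible Transactions encoding, only an illustration.

def varint(n):
    # Minimal variable-length integer: 7 bits per byte, high bit means 'more'.
    out = bytearray()
    while True:
        byte = n & 0x7F
        n >>= 7
        out.append(byte | (0x80 if n else 0))
        if not n:
            return bytes(out)

def fixed_field_input(prev_index, sequence=0xFFFFFFFF):
    # Legacy-style input: 32-byte txid + 4-byte index + 4-byte sequence,
    # always written even when the values are just defaults.
    return b"\x00" * 32 + prev_index.to_bytes(4, "little") + sequence.to_bytes(4, "little")

def tagged_input(prev_index, sequence=None):
    # Tag-based input: one tag byte per field, varint payloads, and fields
    # that hold their default value (here: sequence) are simply left out.
    out = bytearray(b"\x01" + b"\x00" * 32)   # tag 1: previous txid
    out += b"\x02" + varint(prev_index)       # tag 2: output index
    if sequence is not None:
        out += b"\x03" + varint(sequence)     # tag 3: sequence (optional)
    return bytes(out)

print(len(fixed_field_input(0)))   # 40 bytes, always
print(len(tagged_input(0)))        # 35 bytes for the common case

The saving per field is small, but because every field in the current format is mandatory it adds up across a whole transaction.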
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
>Basic transactions are about 3% less in size than current
LOL i didn't expect that...

makes sense... good stuff.

i have a feeling tho that with luke-jr and maxwell at the helm, if they didn't dream it up they will label it a "BAD idea"