Segregated witness

davecgh

Member
Nov 30, 2015
28
51
It isn't really a new idea. Segregated signatures have been implemented in CryptoNote (which later became Monero) for quite some time.

There is some additional complexity around delivering the necessary out-of-band information (the signatures/scripts themselves) and while it does make the data more "prunable", so to speak, it doesn't actually change the amount of data that needs to be relayed. In fact, it will likely increase it.

This is because a block can be up to 1MB without the scripts, yet the scripts are still needed in order to verify the witness, so the effective block size including them will increase. Since signatures are 71-73 bytes and public keys are 33 or 65 bytes, that means in the typical case you offload ~105 bytes per input. So imagine a block with 5000 inputs: that's roughly 525KB of offloaded data. Now imagine filling up the entire 1MB block. I haven't done the math, but my guess is it would be roughly 3-4MB of offloaded data needed in addition to the 1MB block.
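The back-of-envelope arithmetic above can be sketched like this (the constants are the typical sizes quoted in the post, not exact figures; a real transaction's witness also varies by script type):

```python
# Rough estimate of witness data "offloaded" per input, using the
# typical sizes mentioned above (illustrative assumptions only).
SIG_BYTES = 72      # DER signature, typically 71-73 bytes
PUBKEY_BYTES = 33   # compressed public key (65 if uncompressed)

def offloaded_bytes(num_inputs, sig=SIG_BYTES, pubkey=PUBKEY_BYTES):
    """Approximate witness bytes moved out-of-band for a block's inputs."""
    return num_inputs * (sig + pubkey)

print(offloaded_bytes(5000))  # 525000 bytes, i.e. ~525KB for 5000 inputs
```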

It's interesting that one of the huge debate points about increasing the block size is all about relay and, as nice as segregated witnesses are for many of the other properties they bring, they do nothing to address the amount of data that needs to be relayed.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@lebing I think Pieter has pulled a rabbit from the hat with his SW solution. It could well be enough to keep Core in the game with the 1MB in place much longer than otherwise. It also makes his BIP103 look somewhat more viable if its scaling schedule is effectively x4.

There are a number of things to consider:
  • Complexity. Has SW had the testing and review necessary for it to go live in such a fundamentally critical role, splitting the blockchain into two distinct data-sets with two merkle-trees?
  • Roll-out, which affects SPV nodes (unlike BIP100, 101 etc., which affect just the full nodes).
  • Is the soft-fork before the hard-fork to change the 1MB primarily a strategic move? A lever to push users to the new tx versions which can make full use of the 4MB.
  • Is a goal of SW to make it easier to get sidechain op-codes live, with versioning providing an easier route to update them?
  • Relief! At least this avoids the imminent scenario of very high fees for main-chain tx, the "settlement layer" goal, and there is a scalability roadmap of sorts from Core.
I am still a fan of BU's principle of block-limit by distributed consensus and it makes no difference whether SW is present or absent.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
OK. So it seems that the 4x increase is a best-case scenario, and SW provides effectively a 2x gain in tx space for normal tx.

So from IRC, this doesn't seem quite right -- capacity is constrained as

base_size + witness_size/4 <= 1MB

rather than

base_size <= 1MB and base_size + witness_size <= 4MB

or similar. So if you have a 500B transaction and move 250B into the
witness, you're still using up 250B+250B/4 of the 1MB limit, rather than
just 250B of the 1MB limit.
http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-December/011869.html
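The quoted constraint is easy to sanity-check with a quick sketch (function name and structure here are just for illustration):

```python
# Sketch of the quoted constraint: base_size + witness_size/4 <= 1MB,
# equivalently weight = base_size*4 + witness_size <= 4,000,000.
MAX_VSIZE = 1_000_000

def virtual_size(base_size, witness_size):
    """Bytes consumed against the 1MB limit under the discounted-witness rule."""
    return base_size + witness_size / 4

# The mailing-list example: a 500B tx with 250B moved into the witness
# still consumes 250 + 250/4 = 312.5B of the limit, not just 250B.
print(virtual_size(250, 250))  # 312.5
```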

This does very little to help alleviate the demand for block space. Soon after SW is released, blocks will be full again.
 

albin

Active Member
Nov 8, 2015
931
4,008
Segregated Witness + forced artificial block congestion seems like a disaster waiting to happen.

Imagine the plight of a miner trying to rationally maximize fees. That's fairly straightforward at the moment: I would imagine you simply sort by fee/kB and include the highest-value transactions until the next one you want to include doesn't fit, then iterate at lower fee/kB levels to fill the remaining space.

With segwit, isn't it monumentally more complicated? You have a fee/kB that applies to space in both the base block and the witness (you're maximizing across two variables), and simply adding those together doesn't necessarily maximize your fee revenue. Wouldn't you have to brute-force a massive number of combinations of tx inclusion sets, or at least develop some kind of heuristics to approximate the appropriate tx inclusion rules?
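The two-constraint problem described above can be illustrated with a toy greedy selector. Everything here (the limits, the transaction sizes, the tie-breaking) is hypothetical; the point is only that a greedy pass over two budgets is not guaranteed to be optimal, which is why the post suggests heuristics or brute force:

```python
# Toy sketch of miner tx selection under two separate budgets
# (base space and witness space). Illustrative only; real selection
# would use the combined weight metric and consider tx dependencies.
from typing import NamedTuple

class Tx(NamedTuple):
    fee: int        # satoshis
    base: int       # bytes counted against the base limit
    witness: int    # bytes counted against the witness limit

def greedy_select(txs, base_limit, witness_limit):
    """Greedy by fee per total byte. With two constraints this can
    leave fees on the table; the exact problem is a 2-D knapsack."""
    chosen, base_used, wit_used, total_fee = [], 0, 0, 0
    for tx in sorted(txs, key=lambda t: t.fee / (t.base + t.witness), reverse=True):
        if base_used + tx.base <= base_limit and wit_used + tx.witness <= witness_limit:
            chosen.append(tx)
            base_used += tx.base
            wit_used += tx.witness
            total_fee += tx.fee
    return chosen, total_fee

txs = [Tx(5000, 400, 100), Tx(4000, 100, 600), Tx(3000, 200, 200)]
print(greedy_select(txs, base_limit=500, witness_limit=700)[1])  # 9000
```

Here the greedy pass takes the first and second transactions (9000 sats) and has to skip the third because the base budget is exhausted, even though other inclusion sets might score differently under other limits.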
 

jl777

Active Member
Feb 26, 2016
279
345
https://bitcointalk.org/index.php?topic=1398994.msg14211197#msg14211197

SEGWIT PERMANENTLY WASTES PRECIOUS BLOCKCHAIN SPACE

The wtxids would not be necessary at all with a 2MB hardfork. Segwit breaks the installed base, as old wallets won't even be able to validate incoming payments and can't spend them. So if that is what backward compatibility means, then I missed the memo saying bitcoin just needs to let people see what they have; not being able to validate or spend it, well, that's just fine.

The ONLY rationale that justifies segwit softfork is that it avoids a hardfork, but it will break all existing wallets and will take 6 months for all the vendors to update as it is a LOT of changes. And for this disaster, what do we get? permanent loss of space due to the needless space occupied by wtxids.

Segwit cannot be a softfork. It has some clever tech ideas and as a hardfork, then maybe it is ok at some point. But saying it avoids a hardfork is disingenuous. Breaking the installed base is not what people expect from a softfork, but that is what it will do
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@jl777
The ONLY rationale that justifies segwit softfork is that it avoids a hardfork,
Yes. Because Core are scared of hardforks now: a hardfork resets their node count to zero, and they don't believe they would remain the reference implementation afterwards. They like the old nodes counting as Core nodes to boost their status, even though the old nodes are no longer full but hollowed out.
 

jl777

Active Member
Feb 26, 2016
279
345
That has got to be the stupidest reason ever to bloat the blockchain by 30%.

The partyline appears to be "wallets still function perfectly fine with the old system. They can still receive segwit transactions, they just can't spend from them"

I must be in a different universe, since "function perfectly" and "just can't spend from them" in my universe are not usually used together.
 

jl777

Active Member
Feb 26, 2016
279
345
They are not devs anymore if these are the sort of priorities they have.

knightdk is backtracking pretty quickly away from his segwit-supporting position. Maybe I was a bit too harsh, but it is sad to see otherwise objective technical posters contaminating their output with a political agenda.

https://bitcointalk.org/index.php?topic=1398994.msg14211564#msg14211564

I do agree that it is cool tech, but saying it fixes blocksize issues is like claiming a microwave oven makes your internet connection get faster bandwidth

The theory about the microwave oven speeding up internet bandwidth would be something along the lines of putting it on a shared circuit with your neighbor's wifi router; by overloading the microwave, it will disrupt and take out said router. This then allows your wifi router to get more throughput to the local hotspot. Perfectly logical, and the existing wifi is still working perfectly since you can always microwave yesterday's pizza. Plus, if everybody used hardlines then it wouldn't be an issue anyway.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@jl777
Hahaha. I can almost believe that your microwave quote will appear in a future Maxwell post.
 

jl777

Active Member
Feb 26, 2016
279
345
the microwave also works to increase bandwidth if you just microwave all your neighbor's wifi routers. so it is definitely a fully supportable position
 

Chronos

Member
Mar 6, 2016
56
44
That's a good point. I didn't consider microwaving the routers themselves. That's much better than increasing the maximum microwave size directly. I've never seen a microwave transmitted quickly out of China.

Sorry, a bit off-topic. :D
 