I don't know what makes a fork soft. Is it like an auto update?
A soft fork is a way to enforce new, stricter rules in the system without requiring all nodes to upgrade their software; it needs only a majority of miners to be in agreement.
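To make the "stricter rules" idea concrete, here is a minimal toy sketch (not real Bitcoin code; the size limits are made-up numbers): a soft fork adds restrictions on top of the old rules, so any block valid under the new rules is automatically valid under the old ones, and un-upgraded nodes keep following the chain.

```python
# Hypothetical limits for illustration only.
MAX_SIZE_OLD = 1000  # old rule: blocks up to 1000 bytes
MAX_SIZE_NEW = 600   # new, stricter rule added by the soft fork

def old_node_accepts(block_size: int) -> bool:
    # Un-upgraded nodes only know the old rule.
    return block_size <= MAX_SIZE_OLD

def new_node_accepts(block_size: int) -> bool:
    # New rules = old rules PLUS an extra restriction (never a relaxation).
    return old_node_accepts(block_size) and block_size <= MAX_SIZE_NEW

# Every block the upgraded miner majority produces is accepted by old nodes too:
for size in range(0, 2000, 50):
    if new_node_accepts(size):
        assert old_node_accepts(size)
```

The asymmetry is the whole point: old nodes accept everything the new majority mines, but a block that is valid only under the old rules (say, 800 bytes here) gets orphaned by the upgraded miner majority.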
@Dusty
not at all b/c multisig seemed to be a good thing at the time.
But your knowledge of the system was
waaay less then than now (as was mine, btw).
Also, P2SH was way, way more than only enabling multisig: it was a hack that made it possible to create an address for however complex a script could be, using all the enabled opcodes.
2-of-3 multisig was only the simplest case, and the easiest to explain to non-technical people.
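The "an address for any script" trick can be sketched as follows. This is a simplification: real P2SH commits to HASH160 of the redeem script (RIPEMD160 of SHA-256) and Base58Check-encodes it with a version byte, while here plain SHA-256 stands in for the hash and the scripts are placeholder byte strings, not real Bitcoin Script.

```python
import hashlib

def script_hash(redeem_script: bytes) -> bytes:
    # Stand-in for Bitcoin's HASH160(script); the output commits only
    # to the hash, not to the script itself.
    return hashlib.sha256(redeem_script).digest()

# Placeholder "scripts" of very different complexity:
simple_script = b"<pubkey> OP_CHECKSIG"
complex_script = b"OP_2 <pkA> <pkB> <pkC> OP_3 OP_CHECKMULTISIG"

# The committed hash (and hence the address) has the same fixed size
# no matter how complex the script behind it is.
assert len(script_hash(simple_script)) == len(script_hash(complex_script))
```

That is why the payer does not need to know or care what the script does: they pay to a fixed-size hash, and the full script is only revealed when the coins are spent.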
plus, i know i didn't really understand the ANYONECANSPEND implications back then.
That, and also the fact that entering the path of soft forking, i.e., introducing features without forcing all the nodes of the network to upgrade their client, was just the first step in a certain direction: more and more soft forks would follow (stricter malleability rules, OP_CLTV, OP_CSV, etc.).
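The ANYONECANSPEND point can be made concrete with a toy model (again simplified: SHA-256 stands in for HASH160, and "the redeem script evaluates successfully" is reduced to a boolean flag rather than a real Script interpreter). To a pre-BIP16 node, a P2SH output only requires revealing a script whose hash matches; the upgraded rules additionally require that script to actually succeed.

```python
import hashlib

def hash_matches(output_script_hash: bytes, revealed_script: bytes) -> bool:
    return hashlib.sha256(revealed_script).digest() == output_script_hash

def old_node_valid(output_script_hash: bytes, revealed_script: bytes,
                   script_succeeds: bool) -> bool:
    # Pre-BIP16 rules: the revealed script is just pushed data; it is
    # hashed and compared, but never executed.
    return hash_matches(output_script_hash, revealed_script)

def new_node_valid(output_script_hash: bytes, revealed_script: bytes,
                   script_succeeds: bool) -> bool:
    # BIP16 rules: the revealed script must also evaluate successfully.
    return hash_matches(output_script_hash, revealed_script) and script_succeeds

redeem = b"OP_2 <pkA> <pkB> <pkC> OP_3 OP_CHECKMULTISIG"  # placeholder script
h = hashlib.sha256(redeem).digest()

# A spend whose signatures are invalid (script fails): old nodes accept it,
# so only the miner majority enforcing the new rule keeps it off the chain.
assert old_node_valid(h, redeem, script_succeeds=False) is True
assert new_node_valid(h, redeem, script_succeeds=False) is False
```

In other words, anyone who knows the redeem script could "spend" the output as far as old nodes are concerned; the security of such outputs rests on the upgraded miner majority rejecting those spends.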
I remember that at the time there were all sorts of fights between various camps: those who would have preferred hard forks, those who preferred OP_EVAL, CHECKHASHVERIFY (BIP 17), P2SH (BIP 16), etc.
The only difference was that the bitcoin space was 1/100 or maybe 1/1000 of what it is now, so the debate did not reach the public; it was discussed mainly among experts.
this is a totally different situation.
Is it? (please note I'm talking about segwit now, not block size)
What it has in common with the old debate is that a group of people wants to soft fork the network to enable a whole new scripting capability, while other groups would prefer a cleaner approach: a hard fork, fewer changes, etc.
I find that the most important difference between now and then is not technical but political: while in the old days there were a bunch of people speaking for themselves, now there are big firms coordinating people and ideas. Also, where before the discussion was mostly between very technical people, now a lot of the general public is giving opinions (on one side or the other) without really knowing how the specific things work.
Soft forks should never be used for changing consensus rules, independently from the proposed change.
Soft forks are used
only to change consensus rules. If no consensus rules have to be changed, no hard or soft fork is necessary. There is no use for a soft fork without changing consensus rules.
I think there's a very fundamental difference between softfork segwit and softfork P2SH.
P2SH was not intended to be an immediate fix to an emergency need the community had right then and there, and the community had essentially years to understand what it was all about and design UX and workflow around it. It truly was opt-in because there is essentially no systemic risk to introducing it.
Actually, if you read the old threads, the main criticism was "why all this rush? We need more time to discuss all the implications!"; only at a certain point Gavin got fed up with all the time wasted talking and decided to act. Since he was the undisputed leader at the time, he committed his P2SH patch and released the new software.
sickpig said:
Back then I was naive enough not to be able to grasp the perniciousness of this kind of deployment mechanism.
Yes. Still, in retrospect, we can say the P2SH experience turned out well: it did not break the network and it allowed us to work out complex scripts, multiuser or server-side signatures, automated escrow and a lot of other nice things without needing any other modifications to the consensus rules.
So, I ask myself (I'm really thinking out loud and trying to play the devil's advocate): should we try to learn from the past and be more open to this kind of innovation?