Gold collapsing. Bitcoin UP.

sickpig

Active Member
Aug 28, 2015
926
2,541
here we are:



For what it's worth, I was among the ones who chose the first option.

To be fair, Ripple's rise is so strange that I really don't know what to make of it.

(@lunar I know the market-cap concept is flawed, but I still think looking at the trend can give us some insight into where we are heading.)
 

go1111111

Active Member
If I knew the guy would not move the $1,000,000 for a year, I would take your bet. But I think he'll move them. I understand it's sort of a crappy bet for you if he moves them for reasons other than "being worried about segwit security" and I win on a technicality. But then it's a crappy bet for me if segwit coins aren't stolen and segwit isn't officially rolled back, but people just avoid storing value in it and you win on a technicality. How about:

1. I win (a) immediately if any segwit output is "stolen" without a valid segwit signature, or (b) after 1 year if no more than 4% of the LN money supply is ever stored as SW coins.

2. You win otherwise.
I don't have strong opinions about how much segwit will be used on Litecoin (especially since I think Lightning on Ethereum or Bitcoin will be the main use of the concept in a year), but why not go with one of the 'conditional bet' proposals? For instance:

If no more than 4% of Litecoin is ever stored as segwit outputs within the next year, the bet is void. Otherwise we use my market-cap based end condition above.

I don't like the "if any segwit output is stolen.." phrasing, because it's unclear how the bet would resolve if that leads to two permanent chains.

(I'm being reminded how tough it is to nail down precise bets.)
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@SanchoPanza
I haven't had time to go through your proposal yet; I'm going to read it before deciding whether or not to sponsor it. Still, from a cursory look it seems interesting. Practically speaking, it seems to be BIP9 extended to also manage hard forks, with a configurable activation threshold.
 
Last edited:
  • Like
Reactions: SanchoPanza

SanchoPanza

Member
Mar 29, 2017
47
65
@sickpig : Thanks for having a look at it. If you have any technical questions, please feel free to ask them (preferably in the BUIPxxx thread).

Yes, it is BIP9+: one can configure each bit individually with its own threshold, evaluation window (not limited to 2016 blocks), and grace period.

This configuration data can be read in from a forks.csv file which overrides the built-in client defaults (although this is just an implementation detail of the BU implementation).

BIP9 explicitly specified that it is only for soft-forks (by purposefully not being general to cover hard-forks too). I think that was an ideological decision which is unnecessarily limiting to BIP9.
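To make that concrete, here is a rough Python sketch (illustrative only, not BU code; the names and numbers are made up) of per-bit activation where each bit carries its own threshold, evaluation window, and grace period rather than BIP9's usual 95%-of-2016-blocks rule:
Code:
# Illustrative sketch only -- not Bitcoin Unlimited source code.
from dataclasses import dataclass

@dataclass
class ForkDeployment:
    bit: int         # version bit miners signal with
    threshold: int   # signalling blocks required within one window
    window: int      # evaluation window size in blocks (not fixed at 2016)
    grace: int       # blocks to wait after lock-in before enforcement

def signals(block_version: int, bit: int) -> bool:
    """True if a block's version field has the deployment bit set."""
    return (block_version >> bit) & 1 == 1

def locked_in(window_versions, d: ForkDeployment) -> bool:
    """Did enough blocks in one evaluation window signal the bit?"""
    return sum(signals(v, d.bit) for v in window_versions) >= d.threshold

# A hypothetical deployment: bit 28, 75% of a 1000-block window,
# enforced 100 blocks after lock-in.
example = ForkDeployment(bit=28, threshold=750, window=1000, grace=100)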
 
Last edited:
  • Like
Reactions: Peter R

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
Thanks :)

Also note the connection between Segwit's different security model and the terminology abuse whereby sig-checking wallets are called "nodes," as if Bitcoin were not just a mining network. I don't know if Core planned it this way from the beginning, but the move away from the original Satoshi usage of "node" to mean only miners is the perfect setup for keeping Segwit's new security model from alarming people: if you're not really "using Bitcoin" unless you run a "full validating node," there is no need to worry so much about the possibility of miners failing to validate the witness.

If the blocksize is kept tiny, there is no worry about being able to run a "full node." Even the UASF is useful lore for reinforcing the faulty view that "full nodes" really enforce anything. Core has been angling for "full nodes" to be the enforcers for years, and with Segwit they may have finally gotten their twisted wish (and just as UASF really means Developer-Activated Soft Fork, "nodes" as enforcers really ends up meaning devs as enforcers - just what you'd expect from devs with hubris).

@Peter R do you see what I mean there?
 
Last edited:

Dusty

Active Member
Mar 14, 2016
362
1,172
My understanding was that P2SH outputs look like 'anyone can spend' outputs to nodes that haven't upgraded to the soft fork that implemented P2SH. So my point is, while you need to provide such a script *according to the P2SH rules*, if you don't accept the P2SH rules you can just take the money. This seems analogous to the SegWit situation.
Not at all.
P2SH outputs are perfectly normal Bitcoin outputs that have a script of this type:
Code:
OP_HASH160 [20-byte-hash of {sub-script} ] OP_EQUAL
instead of the more common P2PKH:
Code:
OP_DUP OP_HASH160 <PubkeyHash> OP_EQUALVERIFY OP_CHECKSIG
The only difference is that when a P2SH-enabled node sees the first pattern, after validating that script it performs an additional validation step: it takes the sub-script supplied in the spending input and validates it too.

Thus, since finding a value that hashes to a given 20-byte hash is infeasible by assumption, P2SH outputs are perfectly well protected even on a non-P2SH-enabled node, provided you use a given address only once.
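A rough Python sketch of that difference (illustrative pseudocode, not actual node code; run_script stands in for full script execution):
Code:
# Illustrative sketch only -- shows the extra validation step a
# P2SH-aware node performs compared to a pre-P2SH node.
import hashlib

def hash160(data: bytes) -> bytes:
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

def old_node_validates(redeem_script: bytes, expected_hash: bytes) -> bool:
    # A pre-P2SH node only runs the outer script:
    # OP_HASH160 <20-byte-hash> OP_EQUAL
    return hash160(redeem_script) == expected_hash

def p2sh_node_validates(redeem_script: bytes, expected_hash: bytes,
                        run_script) -> bool:
    # A P2SH-aware node does the same hash check, then additionally
    # deserializes and executes the redeem script itself.
    if hash160(redeem_script) != expected_hash:
        return False
    return run_script(redeem_script)   # the extra validation step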
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
Another key piece of the whole segcoin security differences argument is what happens to SPV wallets. Core has argued that the SPV section of the whitepaper is incomplete and that we need fraud proofs to complete it, but @awemany argued persuasively that SPV already works as planned. Scaling being viable primarily via SPV wallets *in particular* throws a monkey wrench into the "Blocking the Stream" plan. Convenient that Segwit could make SPV wallets too risky to use.

These look like more and more pieces Core has been putting in place for years: false arguments, shifting terminology, and actual protocol changes, all with the aim of molding Bitcoin into Greg's vision.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Zangelbert Bingledack:

Yeah, I've never really understood the supposed "problem" with SPV wallets. Sure, fraud-proofs would be nice, but the proof that my TX has been included in the longest chain is good enough for me. A scenario where miners start including invalid transactions and double-spends in their blocks and other miners extend them is a failure mode for bitcoin. I'm going to lose a significant portion of my crypto wealth anyways.
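For concreteness, the "proof" an SPV wallet checks is just a Merkle branch linking the transaction to a block header it already has; a minimal sketch of that check (illustrative only, byte-order details omitted):
Code:
# Minimal sketch of SPV-style Merkle branch verification (illustrative only).
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_branch(txid: bytes, branch, merkle_root: bytes) -> bool:
    """branch: list of (sibling_hash, sibling_is_on_right) pairs,
    ordered from the leaf up to (but not including) the root."""
    h = txid
    for sibling, sibling_on_right in branch:
        pair = h + sibling if sibling_on_right else sibling + h
        h = dsha256(pair)
    return h == merkle_root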
 
@Zangelbert Bingledack:

Yeah, I've never really understood the supposed "problem" with SPV wallets. Sure, fraud-proofs would be nice, but the proof that my TX has been included in the longest chain is good enough for me. A scenario where miners start including invalid transactions and double-spends in their blocks and other miners extend them is a failure mode for bitcoin. I'm going to lose a significant portion of my crypto wealth anyways.
All this is about how you see the world. Is 99⁹⁹ meters an infinitely long distance - or is it an infinite distance short of infinity?

I never completely understood the attack on SPV wallets. Wasn't it something like "X percent of miners Sybil me and mine an invalid block to double-spend me"? Meaning that you still need a full node when you want to receive payments of several millions or other amounts that are worth such an attack, but that for everything else an SPV wallet is completely safe ... safe as long as the fee market does not break backward compatibility with old software ... or as long as you don't lose your keys, don't have "blockchain providers" who spy on you, and so on. The attack which needs fraud proofs seems to be one of the smallest worries with SPV wallets.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
On Erik Voorhees's call for compromise at Segwit+2MB.
I think @Tomas van der Wansem (BUIP056 proposer, Bitcrust dev, and redditor /u/tomtomtom7 quoted in my post above) has a different view on Segwit than most here. If he's not too busy, I'd be interested in what he thinks of the arguments lodged here against Segwit (both the ones put forth in this thread over the years and the recent CSW-originated ones).
 
Last edited:

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
@Dusty BIP-17 could have dispensed with the OP_CODESEPARATOR complexity by simply making OP_CHECKHASHVERIFY interpret the item on the top of the stack as the number of previous elements to consume.

Then we could have had the functionality of P2SH without breaking the normal flow of script processing.

This was a huge missed opportunity and set a horrible precedent.
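A purely hypothetical Python sketch of the variant described above (this is not how BIP-17 or P2SH were actually specified; expected_hash would come from the output script):
Code:
# Hypothetical sketch: an OP_CHECKHASHVERIFY that pops a count N,
# consumes the N elements below it, hashes their concatenation, and
# compares against the expected hash -- no OP_CODESEPARATOR needed.
import hashlib

def hash160(data: bytes) -> bytes:
    return hashlib.new('ripemd160', hashlib.sha256(data).digest()).digest()

def op_checkhashverify_variant(stack: list, expected_hash: bytes) -> bool:
    n = int.from_bytes(stack.pop(), 'little')   # top item = element count
    consumed = [stack.pop() for _ in range(n)]  # consume N previous elements
    return hash160(b''.join(consumed)) == expected_hash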
 

Dusty

Active Member
Mar 14, 2016
362
1,172
@Justus Ranvier
If you do as you propose, then how can you create a new kind of address for every kind of script you can create?

The beauty of P2SH is the fact that with a standardized set of OPs you can pattern-match the scripts and rapidly detect whether a certain script is a destination address.
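For illustration, the standard P2SH template is rigid enough that detecting it is a trivial byte-pattern check on the serialized output script (a sketch, assuming raw script bytes):
Code:
# Sketch: detecting the standard P2SH output pattern
#   OP_HASH160 <20-byte-hash> OP_EQUAL
# in a raw serialized script (illustrative only).

OP_HASH160 = 0xa9
OP_EQUAL = 0x87

def is_p2sh(script: bytes) -> bool:
    return (len(script) == 23
            and script[0] == OP_HASH160
            and script[1] == 0x14          # push of exactly 20 bytes
            and script[22] == OP_EQUAL)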