Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
ROW=rest of world
MC=mainchain
SC=sidechain
scBTC=sidechain BTC
 

luigi1111

New Member
Nov 9, 2015
13
7
@luigi1111 @rocks - from what I saw, segwit alone gets close to 2x better efficiency if you just look at the tx chain, and you only get to the predicted 4x improvement (e.g. the throughput of a 4 MB cap under the current blockchain model, while still *actually* leaving the cap at 1 MB) if most transactions were multisig.
If non-signature data presently takes up 500 kB/block, then the practical maximum increase in tx rate is 60%.
If we take the hypothetical scenario where signature data becomes far larger relative to the rest, we get higher "chain size" scaling, but *not* higher TPS scaling. Actual TPS scaling still goes down as witness size increases, just at a lower rate than today.

Unless my formula is wrong, the 4x number is pulled out of nowhere, and is practically meaningless. The only way to get a 4MB "block" would be for the data to be entirely signatures.

If you care about scaling WRT TPS, it's a 60% improvement max (assuming a 50/50 split at present).
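To make these percentages easy to check, here is a minimal sketch of the arithmetic (my illustration, not from the thread; throughputGain is a made-up name), written in Go to match the tx-size breakdown later in the thread. It applies the base + witness/4 cost rule to a tx whose witness (scriptsig) data is fraction f of its size:

Code:
package main

import "fmt"

// throughputGain returns the TPS multiplier under the proposed
// segwit cost rule (base_size + witness_size/4 <= 1MB) for a tx
// whose witness (scriptsig) data is fraction f of its total size.
// Non-witness bytes count fully, witness bytes count 1/4, so the
// effective cost is (1 - 0.75*f) of the old cost.
func throughputGain(f float64) float64 {
	return 1 / (1 - 0.75*f)
}

func main() {
	fmt.Printf("50%% witness: %.2fx\n", throughputGain(0.50)) // 1.60x -> the 60% figure
	fmt.Printf("54%% witness: %.2fx\n", throughputGain(0.54)) // ~1.68x
	fmt.Printf("all witness: %.2fx\n", throughputGain(1.00)) // 4.00x -> the 4MB "block"
}

Note that the 4x multiplier only appears at f = 1, i.e. a block that is entirely signatures, which is the point made above.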
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
BU=Bitcoin Unlimited
BS=BlockStream :)
SW=Segregated Witness (segwit) (new data-scaling proposal offering limited bandwidth scaling)
IOW=in other words
GFC=Great Firewall of China (?) :confused:
TPS=transactions per second
WRT=with respect to
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
Unless my formula is wrong, the 4x number is pulled out of nowhere, and is practically meaningless. The only way to get a 4MB "block" would be for the data to be entirely signatures.
No surprise that one of the few applications which might get use out of such large scriptsigs is the Lightning Network.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@luigi1111 @rocks - from what I saw, segwit alone gets close to 2x better efficiency if you just look at the tx chain, and you only get to the predicted 4x improvement (e.g. the throughput of a 4 MB cap under the current blockchain model, while still *actually* leaving the cap at 1 MB) if most transactions were multisig.
spot on.

on average more than 80% of txs are "normal", so we would have a nominal gain in virtual block size equal to:

Code:
(0.8 * 1.75 + 0.2 * 3) = 2
1.75 is the gain for usual txs; 3 is an average of the gains of various less common txs (2-of-2, 2-of-3 multisig).

we also have to take into account that there are 2 constraints that have to hold at the same time:

Code:
(base_size + witness_size/4 <= 1MB) and (base_size < 1MB)
so it's not as straightforward as it seems.
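A quick sketch (my illustration; the 600 kB / 1.6 MB split is a made-up example) that plugs in those weights and checks both constraints at once:

Code:
package main

import "fmt"

func main() {
	// sickpig's weighted estimate: 80% usual txs at 1.75x,
	// 20% less common (2-of-2, 2-of-3) at ~3x.
	fmt.Println(0.8*1.75 + 0.2*3) // 2

	// Both constraints must hold at once. Hypothetical split:
	// 600kB of base data and 1.6MB of witness data.
	base, witness := 600_000.0, 1_600_000.0
	ok := base+witness/4 <= 1_000_000 && base < 1_000_000
	fmt.Println(ok) // true: 600k + 400k = 1MB and 600k < 1MB
}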
 

luigi1111

New Member
Nov 9, 2015
13
7
A standard 1 input, 2 output Tx should be 257 bytes, 139 of which should be scriptsig, or ~54%.

Plugging that into (base_size + witness_size/4 <= 1MB), I get a ~68% increase in "usual tx" rate, not 75% (but it's not that different).

Edit: this is for an uncompressed pubkey; a compressed pubkey drops the scaling to ~56% for "usual" txs.
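For anyone who wants to reproduce these figures, here's a small sketch (mine; gain is a made-up name) using the byte counts quoted above and in davecgh's breakdown below:

Code:
package main

import "fmt"

// gain is the increase in "usual tx" rate when scriptsig bytes
// are discounted 4x under (base_size + witness_size/4 <= 1MB).
func gain(totalBytes, scriptsigBytes float64) float64 {
	base := totalBytes - scriptsigBytes
	return totalBytes/(base+scriptsigBytes/4) - 1
}

func main() {
	fmt.Printf("uncompressed: %.0f%%\n", 100*gain(257, 139)) // ~68%
	fmt.Printf("compressed:   %.0f%%\n", 100*gain(226, 107)) // ~55% (the ~56% above depends on exact byte assumptions)
}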
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@luigi1111 sure, there's no big difference. I borrowed that figure from Pieter Wuille.

 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
@theZerg

what if you have the excessive block setting at 4 blocks deep before accepting it, and along comes a 5-block-deep fork that eventually should get reorg'd, like what happened in BIP66?
Well, first off, I didn't remove any of the other consensus rules, so BU would not have followed the BIP66-invalid block regardless of mining majority. I think this idea -- that for many of the consensus rules the mining majority should perhaps be considered instead of the local node's opinion -- is very interesting. However, I feel that we should choose to move rules out of consensus on an individual basis. Also, I think that in most cases we should choose to put up a red warning on the client -- "Warning, significant fork in progress, please research before issuing transactions" -- rather than blindly follow the mining majority.

But maybe your question is "is BU capable of switching over beyond the accept depth (4 blocks in this case)?" The answer is absolutely. After "accept depth" is passed, the fork with the most work is chosen. So if you had 2 forks with 50% hash power each BU would switch between them. I've tried this back and forth on regtest, and seen coins that were "confirmed" pop back into the "pending" state.

I've also done a monster 1000-block reorg from testnet <1MB to testnet >1MB. This reorg was complicated by the fact that my wallet had tens of thousands of addresses and was responsible for about 50MB of transactions committed during that reorg. It did not happen cleanly, although it might have if I had been patient. I discovered a huge inefficiency in the rewinding of transactions into the wallet (though this inefficiency was probably massively amplified by the unrealistic -- except for companies -- number of addresses). But after a restart it worked.
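A minimal sketch (hypothetical types and names, not BU's actual code) of the excessive-block accept-depth rule described above:

Code:
package main

import "fmt"

type Block struct {
	Size   uint64 // serialized size in bytes
	Height int    // height at which it connects to the chain
}

// acceptable reports whether a node would accept block b under the
// accept-depth rule: a block over the local "excessive" size is
// shunned (not relayed or mined on) until the fork containing it is
// buried acceptDepth blocks deep; after that the most-work chain
// wins as usual.
func acceptable(b Block, tipHeight int, excessiveSize uint64, acceptDepth int) bool {
	if b.Size <= excessiveSize {
		return true // within the local limit: accept normally
	}
	return tipHeight-b.Height >= acceptDepth
}

func main() {
	big := Block{Size: 2_000_000, Height: 100}      // oversized vs a 1MB setting
	fmt.Println(acceptable(big, 103, 1_000_000, 4)) // false: only 3 deep
	fmt.Println(acceptable(big, 104, 1_000_000, 4)) // true: now 4 deep
}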
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@luigi1111 sure, there's no big difference. I borrowed that figure from Pieter Wuille.

why does brg444 seem to be so close to everything Blockstream? remember, i found it quite odd that he popped up in my old gold thread back in Oct 2014, when the BS WP was released, and trolled me every step of the way for months on end, over 300 pages:

https://www.reddit.com/r/Bitcoin/comments/3vlrws/scaling_bitcoin_hong_kong_december_6th7th_live/
@awemany

this is probably as good a reason as any why stakeholder voting won't fly:

edit: Also a signed message reveals the pubkey in much the same way spending does.

 

davecgh

Member
Nov 30, 2015
28
51
A standard 1 input, 2 output Tx should be 257 bytes, 139 of which should be scriptsig, or ~54%.

Plugging that into (base_size + witness_size/4 <= 1MB), I get a ~68% increase in "usual tx" rate, not 75% (but it's not that different).
Although it doesn't make a big difference on the results, the average size for a standard 1 input, 2 output tx (where the input consists of redeeming a previous pay-to-pubkey-hash output with a compressed pubkey, and the 2 outputs are pay-to-pubkey-hash as well, as is typically the case) is 226 bytes. I'm not sure how you came up with 257 bytes.

Breakdown:

Code:
Standard redeeming input for pay-to-pubkey-hash:
OP_DATA_72 <sig> OP_DATA_33 <compressed pubkey>
So that is 1 + 72 + 1 + 33 = 107 bytes
** A signature can be from 71-73 bytes, so using 72 on average

Standard pay-to-pubkey-hash output:
OP_DUP OP_HASH160 OP_DATA_20 <20-byte hash> OP_EQUALVERIFY OP_CHECKSIG
So that is 1 + 1 + 1 + 20 + 1 + 1 = 25 bytes

type TxIn struct {
  PreviousOutPoint OutPoint // 36 bytes (32-byte hash + 4 byte index)
  SignatureScript  []byte   // 1 byte for script len + 107 bytes for the script itself
  Sequence         uint32   // 4 bytes
} // 36 + 108 + 4 = 148 bytes
type TxOut struct {
  Value    int64   // 8 bytes
  PkScript []byte  // 1 byte for script len + 25 bytes for the script itself
} // 8 + 26 = 34 bytes
type MsgTx struct {
  Version int32   // 4 bytes
  TxIn []*TxIn    // 1 byte for the number of inputs + 148 bytes for the input
  TxOut []*TxOut  // 1 byte for the number of outputs + 2 outputs @ 34 bytes each = 68 bytes
  LockTime uint32 // 4 bytes
} // 4 + 149 + 69 + 4 = 226 bytes
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998

luigi1111

New Member
Nov 9, 2015
13
7
davecgh said:
I'm not sure how you came up with 257 bytes.
Easy, I used an uncompressed pubkey! (though still off by 1 byte, oh well; probably because sigs are from 71-73 bytes as you note)

I had intended to note this (but completely forgot about it): using a compressed pubkey actually skews the numbers lower.

Actually quite a bit lower: a ~56% increase in "usual tx" rate, vs ~68% with uncompressed and the 75% quoted.

That changes sickpig's numbers to:
Code:
0.8 * 1.56 + 0.2 * 3 = ~1.85x
though I'm not quite sure I buy those other numbers/ratios (don't know if there's an easy place to get "real data", though one could argue that future projections might be more valuable).
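For reference, substituting the compressed-key gain into the earlier weighted estimate (same caveats about the weights; a sketch, not real data):

Code:
package main

import "fmt"

func main() {
	// sickpig's weights with the compressed-pubkey gain (1.56x)
	// substituted for the 1.75x "usual tx" figure.
	fmt.Printf("%.2fx\n", 0.8*1.56+0.2*3) // 1.85x
}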
 

rocks

Active Member
Sep 24, 2015
586
2,284
A standard 1 input, 2 output Tx should be 257 bytes, 139 of which should be scriptsig, or ~54%.

Plugging that into (base_size + witness_size/4 <= 1MB), I get a ~68% increase in "usual tx" rate, not 75% (but it's not that different).
Considering Bitcoin is still scaling exponentially, any one-time change of x% is meaningless, whether it is 75% or 68% or 200%. It buys the system a matter of months, just that one time. It shows the current focus is entirely in the wrong places; we can't keep rolling out disruptive changes every several months.

And I think that is the goal. SW will be a difficult transition, which will allow Greg and team to claim that scaling Bitcoin is very difficult, with the aim of convincing people it is too hard to do regularly and that we should accept a fee market instead. If people see that SW was hard, they might then become convinced that scaling Bitcoin itself is hard.
 

luigi1111

New Member
Nov 9, 2015
13
7
Considering Bitcoin is still scaling exponentially, any one-time change of x% is meaningless, whether it is 75% or 68% or 200%.
I understand that; my whole point is that the actual results are likely to be (significantly) less than what people on Reddit are imagining, at least from what I've read.

I like SW, but I'd be in favor of including it in a hardfork that also increases the blocksize to 2, 4, or 8 MB. I don't have a firm opinion on future increases being programmed in immediately.

If SW data is less impactful than other data and should be discounted by 75% (because 4 is a good number (?)), that can be included as well. Doing it via soft fork like this, with the 1/4 number thrown in, just makes it look like it's being presented as a panacea for scaling (possibly causing nothing else, or at least less, to be done), rather than being included on its merits (and it has merits).

Edit: why can't I quote the most recent message?
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@luigi1111 I don't remember where, but I read that 86% of all txs are standard pay-to-pubkey-hash. I need to find the source, though. Regardless, as @rocks pointed out, we are quite far from what we'd need for wider adoption.
@luigi1111 re quoting the last message: it's a feature, not a bug; that way it's almost impossible to have multi-level nested quoting. If you need to reply to the last message there's almost no need to quote: just use the forum handle (@username) and a brief reference to the part you're answering.

in the beginning it can be frustrating, but I got used to it after a while, and all in all forum readability is better than on more traditional forums.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
really appreciate the math guys.

i need to work on mine.
 

luigi1111

New Member
Nov 9, 2015
13
7
@luigi1111 re quoting the last message: it's a feature, not a bug; that way it's almost impossible to have multi-level nested quoting. If you need to reply to the last message there's almost no need to quote: just use the forum handle (@username) and a brief reference to the part you're answering.

in the beginning it can be frustrating, but I got used to it after a while, and all in all forum readability is better than on more traditional forums.
When I quote it removes any previous quotes anyway, so I don't really get it. In any case I'd rather the pertinent part I want to respond to show up in a quote box rather than inside "" or some "re" statement. Maybe I'll get used to it.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
Well, you'll certainly track consensus if your "excessive block" limit is higher. But the point is that you'll also track consensus if it's lower. This is the key difference between BU, where the block size is not part of consensus, and something like BIP101, where it IS.

BU will discourage the block by not relaying it (or mining on top of it), but once it is buried a few blocks deep in the blockchain, BU will accept it.

You can see how this behavior is really nice for companies who just don't care; they just want to track the most-difficult chain without having to do an emergency port of their custom modifications to their bitcoin nodes to a new release.
i don't think i've really understood this behavior until now.

if that's how it works, that is great.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@cypherdoc: don't worry--hardly anyone gets it. When I tried to explain it to Gmax he unsubscribed from the dev mailing list. I don't believe any of the Core devs understand how it works and in fact they seem vehemently opposed to even trying to understand. But the reality is that it does work--@theZerg has proved that it works on testnet.