Gold collapsing. Bitcoin UP.

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@solex
Do you have a tentative date for the next BU vote?

I want to create a debate around BUIP 101 ahead of the vote and see the arguments from all sides play out.
Certainly. The tentative date is 5 October 2018.
I will send a PM to each BU member about it soon.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
If you insert a transaction at position 1 (after the coinbase), that will change every node in the tree. The amount of hashing needed to do this is equal to 1 + 0.5 + 0.25 + 0.125 + ... = 2 times the summed length of the txids, multiplied by two because we do two rounds of SHA256. For a 1 GB block with 2.5 million transactions, that's 320 MB of hashing. SHA256 can do around 300 MB/s on a single core, so this will take about 1 second without any parallelization. This is a highly parallelizable problem, so we can easily get that down to 0.1 seconds for a 1 GB block if we want to. Mining hardware generally prefers to switch jobs every 10-60 seconds, so 0.1-1.0 seconds out of a 15-second typical job is insignificant.

This discussion of adding transactions to an existing block template is mostly irrelevant. This is not part of the latency-critical code path and will not affect orphan rates at all. It's also not at all how the current code works, and I'm not aware of anybody having seriously proposed implementing it. A full template reconstruction needs to be fast enough to handle the latency-critical path after a new block is published on the network. If it's fast enough for that situation, it will also be fast enough that you can re-run it a few dozen times during a 10 minute block interval.
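For reference, here's a quick back-of-the-envelope check of those numbers as a minimal Python sketch (assuming 32-byte txids, double SHA256, and roughly 300 MB/s of SHA256 throughput per core, as stated above):

[code]
# Back-of-the-envelope check of the merkle recomputation numbers above.
n_tx       = 2_500_000      # transactions in a ~1 GB block
txid_bytes = 32             # SHA256 digest size

# Each tree level hashes half as much data as the one below it:
# 1 + 1/2 + 1/4 + ... = 2x the summed txid length, per SHA256 round.
geometric_sum = 2
sha256_rounds = 2           # Bitcoin uses double SHA256

total_mb = n_tx * txid_bytes * geometric_sum * sha256_rounds / 1e6
print(f"data hashed: {total_mb:.0f} MB")                        # -> 320 MB

single_core_mb_s = 300      # rough single-core SHA256 throughput
print(f"1 core:   {total_mb / single_core_mb_s:.2f} s")         # ~1.07 s
print(f"16 cores: {total_mb / (16 * single_core_mb_s):.3f} s")  # ~0.067 s
[/code]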
Sounds great. Can you make a prototype and show us some data before changing the BCH consensus rules in an irreversible way? I encourage a responsible approach where we build prototypes to test theoretical assumptions and optimize before making changes to the future global money.

Software being just code and requiring only a commitment of time makes it easy to prototype. Do you have a good reason to commit to an irreversible change before prototyping and testing the practical variables?
@adamstgbit who is CSW and why are you listening to him?
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
So who is CSW, why is that relevant and why are you listening to him?

If I posted myself having lunch and talking shit, would you think I was relevant and worth taking seriously? I hope so; I've got a bridge you may be interested in.

Ideas, on the other hand, don't need ego to make them valid.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
oh come on, that was funny. oh whatever, never mind.



if you haven't seen the video, here it is...


unfortunately they disabled comments and cut out the funny bit at the end, where a well-known Core supporter takes the stage and makes an even worse argument.
 

imaginary_username

Active Member
Aug 19, 2015
101
174
@Mengerian will deploying a Merklix tree require a fork, soft or hard?

From the last series of posts, it seems to me that all relevant benefits of LTOR over AOR would have to come from Merklix tree deployment. If Merklix itself requires at least a soft fork, then it is perhaps prudent to simply go AOR right now to reap any perceived benefits that are supposed to come with LTOR (...canonical ordering benefits on graphene won't be relevant for a long time). When Merklix is ready, then it can be evaluated with LTOR, and if found beneficial, have LTOR be soft-forked in by then.
 

molecular

Active Member
Aug 31, 2015
372
1,391
Sorry, I'm quite out of the loop and only following the drama with very low reading capacity... so maybe I'm missing something, but...

In my opinion transaction order should not be part of the consensus rules at all (just like the blocksize limit doesn't belong there).

If I was the CTO of bitcoin I would have it removed from the consensus rules and put into the block transmission protocol rules. Transaction order is merely an implementation detail specific to block transmission, which miners/devs/nodes are perfectly capable of negotiating.

So why not remove the ordering requirement from the consensus ruleset in a hardfork and let the rest be solved by node implementers and miners (a free market approach with competition among block transmission protocols)?
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
@molecular

to force miners to use the ordering.
it's more difficult to produce a block in order, so a lot of miners might choose to simply use any order.
and it's a lot easier to evaluate the consequences / effects of ordering if we know everyone is going to use it.

a blocksize limit is and always will be useful; something like it needs to exist.
miners have no good way of determining what is "too big" without a blocksize limit.
even if we get to the point where GB blocks are a thing, i still think miners might benefit from collectively agreeing to orphan blocks that are 1TB big...

the more liberal the consensus rules are, the more unpredictable consensus becomes.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
oh come on, that was funny. oh whatever, never mind.



if you haven't seen the video, here it is...


unfortunately they disabled comments and cut out the funny bit at the end, where a well-known Core supporter takes the stage and makes an even worse argument.
Jimmy exhibits the same greed that all core devs before him have exhibited, which is a big problem. it's trite, and does make some sense at a certain level, to say that the Bitcoin protocol runs on greed. but if that were true, then there would be no philosophical justification to try and make money more fair for everyone across the world as a p2p ecash. at some point, for Bitcoin to succeed, you have to want to make the world a better place. that involves some sacrifice and benevolence along with the desire to make some money:

Free transactions aren’t free. Much like the food stamp program, the cost is being paid, just not by the user. If you look at the current incentives in BCH, every node on the network pays for the free transactions by having to transmit, validate and store these transactions. In other words, free transactions are a tax on every other node.
This might be a hard pill to swallow for Jimmy but the entire Bitcoin protocol runs on altruism as excellently highlighted by Emin Gun Sirer years ago:

https://www.yours.org/content/war-is-peace-freedom-is-slavery-ignorance-is-strength-bch-is-fiat-m-54a701f0ddfe

as a footnote, i went to one of his workshops. sure enough, he said at least half a dozen times during it, "if you want to pay me, i'll do that for you". nothing specifically wrong with that, but Jimmy is in it for the money only. he runs tag-team workshops with Tone Vays, whom people inexplicably pay for "technical analysis instruction". big blockists, otoh, want to rid us of government-controlled corrupt fiat and make some money along the way doing it. we have bigger visions.
@adamstgbit

> a blocksize limit is and always will be useful; something like it needs to exist.
> miners have no good way of determining what is "too big" without a blocksize limit.

i disagree with this. the fact that the miners didn't make any blocks >23MB during the stress test is evidence of them knowing not to go higher, imo, despite the limit being 32MB. how'd they figure it out and how'd they stop at 23MB? i have no idea. maybe it had something to do with knowledge of ATMP, maybe not. but another way of looking at this is that it's good that the miners might have "no good way of determining what is 'too big'". this keeps them on their toes and is a fundamental observation backing the BU idea of "creeping increases in blocksizes produced" in the absence of a limit, just like we saw between 2009 and 2015. no miner wants to get orphaned, or identified as an attacker. it's too expensive to risk a bloat block and too expensive to get booted from the small-world relay network.
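to put a number on the orphan-risk point, here's a toy model (the propagation rate is a made-up assumption; block arrivals are modeled as Poisson with a 600 second mean):

[code]
# Toy orphan-risk model behind "too expensive to risk a bloat block".
# If a block takes an extra t seconds to propagate, the chance that a
# competing block is found meanwhile is ~ 1 - exp(-t/600).
from math import exp

block_reward  = 12.5   # BCH subsidy in 2018
mb_per_second = 2.0    # assumed effective propagation rate (made up)

for size_mb in (1, 8, 23, 32, 100):
    t = size_mb / mb_per_second
    orphan_prob = 1 - exp(-t / 600)
    print(f"{size_mb:>4} MB: +{t:5.1f} s, orphan risk {orphan_prob:6.2%}, "
          f"expected loss {orphan_prob * block_reward:6.3f} BCH")
[/code]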
 

molecular

Active Member
Aug 31, 2015
372
1,391
@molecular
to force miners to use the ordering.
why is forcing the miners to use some ordering beneficial?

it's more difficult to produce a block in order, so a lot of miners might choose to simply use any order.
I doubt it. Miners are incentivized to use an ordering that minimizes orphan risk (by choosing an ordering that ensures fast reception/verification by other nodes). Also, it doesn't matter much how "difficult" (more specifically: how costly in terms of time) it is to produce a block. It doesn't matter much whether you assemble your block every 10 seconds or can do it in one second. All you lose is 10 seconds' worth of fees.
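As a rough illustration with made-up numbers (the fee inflow rate and rebuild interval here are hypothetical):

[code]
# Rough model of what template staleness costs a miner (hypothetical numbers).
# Transactions that arrived since the last template rebuild miss the block;
# on average that's half a rebuild interval's worth of fees.

fee_per_second = 0.0001   # BCH/s of new fees entering the mempool (made up)
block_reward   = 12.5     # BCH block subsidy in 2018
rebuild_every  = 10.0     # seconds between template rebuilds

lost_fees     = fee_per_second * rebuild_every / 2
block_revenue = block_reward + fee_per_second * 600   # one ~10 minute block

print(f"expected fees missed per block: {lost_fees:.5f} BCH")
print(f"fraction of revenue lost:       {lost_fees / block_revenue:.2e}")
[/code]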

and it's a lot easier to evaluate the consequences / effects of ordering if we know everyone is going to use it.
Why do we need to evaluate the consequences of ordering? How can you even do that? Those effects are largely implementation-specific.


a blocksize limit is and always will be useful; something like it needs to exist.
miners have no good way of determining what is "too big" without a blocksize limit.
even if we get to the point where GB blocks are a thing, i still think miners might benefit from collectively agreeing to orphan blocks that are 1TB big...
exactly, the MINERS might benefit from collectively agreeing. So THEY should do the agreeing, not US, right? They are free to orphan big blocks as they wish even without a blocksize limit in the consensus rules.

As a user I couldn't care less how miner A transmits his block to miner B and how the transactions inside the block are ordered or how big it is. In fact I don't care about properties of blocks at all, I just care about transactions, confirmations, fees and the monetary properties of the coin.

I think a lot of stuff got carelessly put in the consensus layer (no clean separation in the code) in ancient times, when there was just a single implementation (coded as a monolith no less), and we're paying the price now, needing to sort that out (in highly politicized terrain no less).
 
Feb 27, 2018
30
94
@Mengerian will deploying a Merklix tree require a fork, soft or hard?

From the last series of posts, it seems to me that all relevant benefits of LTOR over AOR would have to come from Merklix tree deployment. If Merklix itself requires at least a soft fork, then it is perhaps prudent to simply go AOR right now to reap any perceived benefits that are supposed to come with LTOR (...canonical ordering benefits on graphene won't be relevant for a long time). When Merklix is ready, then it can be evaluated with LTOR, and if found beneficial, have LTOR be soft-forked in by then.
Merklix requires a hard fork, and it's also the kind of hard fork that adds permanent technical debt (all block processors will forever need to support both the old merkle format and the new merklix format). I can only see minor advantages:

* as Peter points out, merklix allows miners to incrementally build CTOR blocks, which means that if you broadcast a transaction just before a block is mined, there's a better chance of it getting included.
* the new tree format would include a special metadata leaf, which may be a good place to introduce extensions (we could finally stop loading every extension into the coinbase).

Note that being able to provide proof of absence is not actually a special feature of merklix -- rather it's a feature of having an ordered merkle tree. I still don't know what proof of absence is good for, though.
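To illustrate the mechanism anyway, here's a toy Python sketch (merkle paths omitted): with lexically sorted leaves, proving a txid absent just means exhibiting the adjacent pair of leaves that brackets it.

[code]
import hashlib
from bisect import bisect_left

def dsha(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def absence_witness(sorted_txids, target):
    """Toy absence proof for a tree with lexically sorted leaves.

    A real proof would also include the merkle paths of both neighbors,
    showing that they sit at adjacent leaf positions; omitted here.
    """
    i = bisect_left(sorted_txids, target)
    if i < len(sorted_txids) and sorted_txids[i] == target:
        return None                                  # present: nothing to prove
    lo = sorted_txids[i - 1] if i > 0 else None      # greatest txid < target
    hi = sorted_txids[i] if i < len(sorted_txids) else None  # least txid > target
    return (lo, hi)                                  # verifier checks lo < target < hi

leaves = sorted(dsha(bytes([i])) for i in range(8))
print(absence_witness(leaves, dsha(b"not in this block")))
[/code]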

AOR itself provides no benefits from what I can tell. It just annoys block-processor programmers, since they have to abandon the old way of updating the UTXO set. The only reason I can see for doing AOR would be that there is no reason to keep TTOR around in the long term, so we might as well 'rip off the bandaid' now before BCH becomes less agile. Taking the tiny further step to add CTOR, on the other hand, provides an immediate benefit: graphene can be trivially shrunk (no messing around with special coding for detecting and reconstructing special optional orders).
 

8up

Active Member
Mar 14, 2016
120
344
Jimmy exhibits the same greed that all core devs before him have exhibited, which is a big problem. it's trite, and does make some sense at a certain level, to say that the Bitcoin protocol runs on greed. but if that were true, then there would be no philosophical justification to try and make money more fair for everyone across the world as a p2p ecash. at some point, for Bitcoin to succeed, you have to want to make the world a better place. that involves some sacrifice and benevolence along with the desire to make some money:

Free transactions aren’t free. Much like the food stamp program, the cost is being paid, just not by the user. If you look at the current incentives in BCH, every node on the network pays for the free transactions by having to transmit, validate and store these transactions. In other words, free transactions are a tax on every other node.
This might be a hard pill to swallow for Jimmy but the entire Bitcoin protocol runs on altruism as excellently highlighted by Emin Gun Sirer years ago:
Reciprocal altruism is part of the game theoretical equation that makes Bitcoin work:

https://en.wikipedia.org/wiki/Reciprocal_altruism
 

imaginary_username

Active Member
Aug 19, 2015
101
174
exactly, the MINERS might benefit from collectively agreeing. So THEY should do the agreeing, not US, right?
I hate to repeat this ad nauseam, but every single maxblocksize is ultimately adopted by miners. At no point did miners ever lose that power, even less so now that the EB number is just there and can be tweaked at any time. Arguing that having a default number somehow takes away that choice is mind-boggling.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
Arguing that having a default number somehow takes away that choice is mind-boggling.
I still think the best method to remove the block size limit from the consensus layer is BIP101. However, I'm sufficiently convinced that BU has effectively achieved the same thing.

As things stand today, the block size limit is a consensus rule; it is just as susceptible to abuse as when it was configured at 1MB.

The consensus rules revolve around defining valid transactions. The block size limit, while it exists as a consensus rule, can be abused; it should sit outside the consensus-rule layer. I agree with @molecular in principle.

... transaction order should not be part of the consensus rules at all (just like the blocksize limit doesn't belong there).
We should make the consensus rules as tight and straightforward as possible, let the market converge on the things that create universal value, and then build on top of the consensus rules.

When block size approaches the new limit, the playing field and the players will have changed again. We should be aiming to have the protocol self-manage, free of any group's influence. With mass adoption comes a mass of opinions and pseudo-experts.

I believe most miners were not prepared to risk changing the limit from 1MB (the BCH fork is like insurance for that choice). We can expect those miners to move over to BCH if adoption grows. We should plan for success. If they don't move over, we've failed anyway.

I have no reason to think 92% of miners today would react differently to changing a consensus-rule limit, and I suspect it only gets harder as the network diversifies and more varied opinions abound.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
i disagree with this. the fact that the miners didn't make any blocks >23MB during the stress test is evidence of them knowing not to go higher, imo, despite the limit being 32MB. how'd they figure it out and how'd they stop at 23MB? i have no idea. maybe it had something to do with knowledge of ATMP, maybe not.
the way i see it, miners feel that a limit is so important, they go out of their way to come up with a soft limit on top of the hard limit.
but...
hard / soft is the wrong wording.

there are 2 necessary types of limits when it comes to block size:

the max block size the miner is willing to risk producing.
the max block size the miner is willing to risk accepting.

whether these limits are a hard or soft protocol rule makes no difference; each miner MUST define them (a sketch of the two knobs follows below).

IMO it's just a good idea that the blocksize limit is a hard-and-fast rule, because this gives us more predictable behavior, since we ensure that everyone uses the same value.

is it necessary that it's a hard rule? nope. is it a good idea? i think so....
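here's a minimal sketch of those two knobs (the names and numbers are made up for illustration, not any client's actual config):

[code]
# Illustrative per-miner policy with the two limits described above.
# Names and numbers are invented for this example.

MAX_PRODUCE_BYTES = 8_000_000    # biggest block this miner will risk producing
MAX_ACCEPT_BYTES  = 32_000_000   # biggest block this miner will build on top of

def should_build_on(block_size_bytes: int) -> bool:
    """Accept-side limit: refuse to extend a chain tip that is 'too big'."""
    return block_size_bytes <= MAX_ACCEPT_BYTES

def cap_template(template_size_bytes: int) -> int:
    """Produce-side limit: never assemble a template above our own cap."""
    return min(template_size_bytes, MAX_PRODUCE_BYTES)

# Whether these are soft per-miner policy (everyone picks their own numbers)
# or a hard consensus rule (one number for all) is exactly the debate above.
[/code]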
why is forcing the miners to use some ordering beneficial?
because it leads to more predictable behavior, and simpler code.
I doubt it. Miners are incentivized to use an ordering that minimizes orphan risk (by choosing an ordering that ensures fast reception/verification by other nodes). Also, it doesn't matter much how "difficult" (more specifically: how costly in terms of time) it is to produce a block. It doesn't matter much whether you assemble your block every 10 seconds or can do it in one second. All you lose is 10 seconds' worth of fees.
i think you're right. but just because it's not much harder to produce one ordering over another doesn't ensure everyone uses the same ordering.
Why do we need to evaluate the consequences of ordering? How can you even do that? Those effects are largely implementation-specific.
the more predictable the system is, the easier it is to model and think about, and the easier it will be to maintain.

blowing this thing wide open and free sounds fun and exciting, and it's likely you're all correct in saying that nothing horrible is going to happen.

still, i disagree that we should do such a thing.
 

jtoomim

Active Member
Jan 2, 2016
130
253
Last night I finished a draft of a new encoding scheme for blocks. With CTOR, it uses 15.84 bits per transaction on the 21.3 MB block we saw during the stress test. Without CTOR, it would use 32.84 bits per transaction. This is instead of the 64 bits per transaction used by Xthin and Compact Blocks, and reflects a 4.04x improvement over the status quo. Furthermore, it should be far more reliable than Graphene, and can serve as a fallback method for instances in which Graphene fails.

It achieves its superior compression through two methods: (1) it uses a prefix-tree style encoding to eliminate repeated data at the beginning of a TXID, and (2) it adaptively encodes only the minimum number of bytes needed to disambiguate between other transactions in the mempool with similar TXIDs.
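To illustrate the idea (this is a toy sketch of those two methods, not the actual Xthinner algorithm; see the linked repo below for that):

[code]
# Toy sketch of the two compression ideas above (not the real Xthinner).
# With lexically sorted txids, each entry is encoded as: (a) how many prefix
# bytes it shares with the previous txid, plus (b) just enough further bytes
# to match exactly one txid in the receiver's (sorted) mempool.
from bisect import bisect_left, bisect_right

def shared_prefix_len(a: bytes, b: bytes) -> int:
    n = 0
    while n < min(len(a), len(b)) and a[n] == b[n]:
        n += 1
    return n

def prefix_matches(pool, prefix):
    lo = bisect_left(pool, prefix)
    hi = bisect_right(pool, prefix + b"\xff" * 32)   # upper bound for this prefix
    return hi - lo

def encode(block_txids, mempool_txids):
    pool = sorted(mempool_txids)         # assumes block txids are in the mempool
    out, prev = [], b""
    for txid in sorted(block_txids):     # LTOR: block order == lexical order
        skip = shared_prefix_len(txid, prev)
        take = skip + 1
        while take < 32 and prefix_matches(pool, txid[:take]) > 1:
            take += 1                    # extend until the prefix is unambiguous
        out.append((skip, txid[skip:take]))
        prev = txid
    return out                           # a decoder rebuilds full txids from the pool
[/code]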

This encoding was inspired by LTOR, and requires the block's TXIDs to be sorted lexically in order for the algorithm to function. This can be achieved by a separate sorting/desorting step if we don't have LTOR. If we have any non-LTOR CTOR (e.g. Gavin's order), this can be accomplished without any extra data, albeit with extra computation. It can also be used without any CTOR if you encode the transaction order separately. This takes an extra log2(n) bits per transaction, or an extra 16 to 24 bits per tx for the block sizes we anticipate for the future.
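(The log2(n) figure is just the entropy of an arbitrary permutation: log2(n!)/n, which is roughly log2(n) - 1.44 bits per transaction. A quick check:)

[code]
# Bits per transaction needed to encode an arbitrary tx order: log2(n!) / n.
from math import lgamma, log

def order_bits_per_tx(n: int) -> float:
    return lgamma(n + 1) / log(2) / n    # lgamma(n+1) = ln(n!)

for n in (100_000, 1_000_000, 10_000_000):
    print(f"n = {n:>10,}: {order_bits_per_tx(n):4.1f} bits/tx")
# -> roughly 15.2, 18.5, and 21.8 bits/tx, consistent with the estimate above
[/code]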

I propose calling this new encoding scheme "Xthinner" in English, or "Xthin二" in Chinese contexts.

I will write up a more detailed description soon in a Medium article, but for now, interested parties may peruse the source code:

https://github.com/jtoomim/xthinner-python

Edit: I added checksums to the code, which currently adds 1.65 bits per tx.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
lol

“And by the way,” he adds, gaining steam. “The store of value. At 20,000 they were all going, ‘Bitcoin is the greatest store of value.’ Really? Are you still thinking that at 6,000? How’s that store of value hypothesis going, you monkeys?”

https://breakermag.com/the-bitcoin-oracle-who-exited-bitcoin/
here's the best part:

Instead of a central bank’s paper money, which is tied to a commodity like gold
[doublepost=1536722249][/doublepost]"“The point is, the guys on bitcoin are so fuckin’ religious. There’s no rationality behind it. I wouldn’t go publicly saying this, but I think in the next two or three years, bitcoin cash could be bigger than bitcoin.”