[quote]If you insert a transaction at position 1 (after the coinbase), that will change every node in the tree. The amount of hashing needed to do this is equal to 1 + 0.5 + 0.25 + 0.125 + ... = 2 times the summed length of the txids, multiplied by two because we do two rounds of SHA256. For a 1 GB block with 2.5 million transactions, that's 320 MB of hashing. SHA256 can do around 300 MB/s on a single core, so this will take about 1 second without any parallelization. This is a highly parallelizable problem, so we can easily get that down to 0.1 seconds for a 1 GB block if we want to. Mining hardware generally prefers to switch jobs every 10-60 seconds, so 0.1-1.0 seconds out of a 15 second typical job is insignificant.[/quote]
Sounds great, but can you make a prototype and show us some data before changing the BCH consensus rules in an irreversible way? I encourage a responsible approach where we build prototypes to test theoretical assumptions and optimize before making changes to the future global money.
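For anyone who wants to check that arithmetic, here's a quick Python sketch (my own illustration, not code from any node implementation) that rebuilds a Bitcoin-style merkle root and reproduces the ~320 MB figure:

[code]
import hashlib

def sha256d(data: bytes) -> bytes:
    # Bitcoin hashes everything with two rounds of SHA256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    # Rebuild the whole tree. Inserting a transaction at position 1
    # shifts every leaf, so all of this work has to be redone.
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the last hash on odd-width levels
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Tiny demo: root over six fake txids.
print(merkle_root([sha256d(bytes([i])) for i in range(6)]).hex())

# Back-of-the-envelope check of the 320 MB figure: a tree with n leaves has
# about n - 1 internal nodes, each hashing 64 bytes of child hashes, twice.
n_tx = 2_500_000
print(64 * (n_tx - 1) * 2 / 1e6, "MB")  # ~320 MB
[/code]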
This discussion of adding transactions to an existing block template is mostly irrelevant. This is not part of the latency-critical code path and will not affect orphan rates at all. It's also not at all how the current code works, and I'm not aware of anybody having seriously proposed implementing it. A full template reconstruction needs to be fast enough to handle the latency-critical path after a new block is published on the network. If it's fast enough for that situation, it will also be fast enough that you can re-run it a few dozen times during a 10 minute block interval.
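And if someone does want prototype numbers for the full-reconstruction case, the merkle portion at least is trivial to measure. A rough single-core sketch (random stand-in txids, plain CPython, so a real node with parallel SHA256 should do considerably better):

[code]
import hashlib, os, time

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids: list) -> bytes:
    level = list(txids)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

txids = [os.urandom(32) for _ in range(2_500_000)]  # roughly a 1 GB block's worth
start = time.perf_counter()
merkle_root(txids)
elapsed = time.perf_counter() - start
print(f"full merkle rebuild: {elapsed:.1f}s "
      f"(~{600 / elapsed:.0f} runs possible per 10 minute block interval)")
[/code]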
Jimmy exhibits the same greed that all core devs before him have exhibited, which is a big problem. It's trite, and does make some sense at a certain level, to say that the Bitcoin protocol runs on greed. But if that were true, then there would be no philosophical justification to try and make money more fair for everyone across the world as p2p ecash. At some point, for Bitcoin to succeed, you have to want to make the world a better place. That involves some sacrifice and benevolence along with the desire to make some money.

Oh come on, that was funny. Oh wtv, nvm.
If you haven't seen the video, here it is...
Unfortunately they disabled comments and cut out the funny bit at the end, when some well-known Core supporter takes the stage and makes an even worse argument.
@molecular, why is forcing the miners to use some ordering beneficial?
To force miners to use the ordering.
[quote]it's more difficult to produce a block in order, so a lot of miners might choose to simply use any order.[/quote]
I doubt it. Miners are incentivized to use an ordering that minimizes orphan risk (by choosing an ordering that ensures fast reception/verification by other nodes). Also: it doesn't matter much how "difficult" (more specifically: how costly in terms of time) it is to produce a block. It doesn't matter much whether you're only assembling your block every 10 seconds or you can do it in one second. All you lose is 10 seconds' worth of fees.
[quote]and it's a lot easier to evaluate the consequences/effects of ordering if we know everyone is going to use it.[/quote]
Why do we need to evaluate the consequences of ordering? How can you even do that? Those effects are largely implementation-specific.
[quote]a blocksize limit is and always will be useful; something like it needs to exist.[/quote]
Exactly, the MINERS might benefit from collectively agreeing. So THEY should do the agreeing, not US, right? They are free to orphan big blocks as they wish even without a blocksize limit in the consensus rules.
Miners have no good way of determining what is "too big" without a blocksize limit.
Even if we get to the point where GB blocks are a thing, I still think miners might benefit from collectively agreeing to orphan blocks that are 1 TB big...
[quote]@Mengerian will deploying the Merklix tree require a fork, soft or hard?[/quote]
Merklix requires a hard fork, plus it's also the kind of hard fork that adds permanent technical debt (all block processors will forever need to support both the old merkle format and the new merklix format). I can only see minor advantages.
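For context on why anyone wants Merklix despite that cost: in a radix-style tree keyed by txid bits, adding one transaction only rehashes the nodes along its path, roughly log2(n) of them, instead of the entire tree. Here's a toy Python sketch of that property (my own simplification for illustration, not deadalnix's actual Merklix spec; the empty-child placeholder and leaf hashing here are assumptions):

[code]
import hashlib, os

EMPTY = b"\x00" * 32  # stand-in hash for a missing child (toy convention)

def sha256d(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

class Node:
    def __init__(self):
        self.children = [None, None]  # subtrees for the 0-bit and the 1-bit
        self.txid = None              # set if this node is a leaf
        self.hash = EMPTY

def bit(txid: bytes, depth: int) -> int:
    return (txid[depth // 8] >> (7 - depth % 8)) & 1

def insert(node: Node, txid: bytes, depth: int = 0) -> int:
    """Insert a txid keyed by its own bits; return how many hashes were recomputed."""
    if node.txid is None and node.children == [None, None]:
        node.txid, node.hash = txid, sha256d(txid)  # empty slot becomes a leaf
        return 1
    rehashed = 0
    if node.txid is not None:  # push an existing leaf one level down
        old, node.txid = node.txid, None
        node.children[bit(old, depth)] = Node()
        rehashed += insert(node.children[bit(old, depth)], old, depth + 1)
    b = bit(txid, depth)
    if node.children[b] is None:
        node.children[b] = Node()
    rehashed += insert(node.children[b], txid, depth + 1)
    node.hash = sha256d((node.children[0].hash if node.children[0] else EMPTY) +
                        (node.children[1].hash if node.children[1] else EMPTY))
    return rehashed + 1

root = Node()
costs = [insert(root, os.urandom(32)) for _ in range(100_000)]
print("average hashes per insert:", sum(costs) / len(costs))  # roughly log2(100000) ≈ 17
[/code]

Compare that with the full ~2n rehash a plain merkle tree needs for the same insertion.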
From the last series of posts, it seems to me that all relevant benefits of LTOR over AOR would have to come from Merklix tree deployment. If Merklix itself requires at least a soft fork, then it is perhaps prudent to simply go AOR right now to reap any perceived benefits that are supposed to come with LTOR (...canonical ordering benefits on Graphene won't be relevant for a long time). When Merklix is ready, it can be evaluated together with LTOR and, if found beneficial, LTOR can be soft-forked in then.
Reciprocal altruism is part of the game-theoretical equation that makes Bitcoin work:
[quote]Jimmy exhibits the same greed that all core devs before him have exhibited, which is a big problem. It's trite, and does make some sense at a certain level, to say that the Bitcoin protocol runs on greed. But if that were true, then there would be no philosophical justification to try and make money more fair for everyone across the world as p2p ecash. At some point, for Bitcoin to succeed, you have to want to make the world a better place. That involves some sacrifice and benevolence along with the desire to make some money.[/quote]
[quote]Free transactions aren’t free. Much like the food stamp program, the cost is being paid, just not by the user. If you look at the current incentives in BCH, every node on the network pays for the free transactions by having to transmit, validate and store these transactions. In other words, free transactions are a tax on every other node.[/quote]
This might be a hard pill to swallow for Jimmy, but the entire Bitcoin protocol runs on altruism, as excellently highlighted by Emin Gun Sirer years ago.
[quote]Exactly, the MINERS might benefit from collectively agreeing. So THEY should do the agreeing, not US, right?[/quote]
I hate to repeat this ad nauseam, but every single maxblocksize is ultimately adopted by miners. At no point did miners ever lose that power, even less so now that the EB number is just there and can be tweaked at any time. Arguing that having a default number somehow takes away that choice is mind-boggling.
[quote]Arguing that having a default number somehow takes away that choice is mind-boggling.[/quote]
I still think the best method to remove the block size limit from the consensus layer is BIP101. However, I'm sufficiently convinced that BU has effectively achieved the same thing.
We should be making the consensus rules as tight and straightforward as possible, and let the market converge on the things that create universal value and build them on top of the consensus rules. Transaction order should not be part of the consensus rules at all (just like the blocksize limit doesn't belong there).
[quote]the way I see it, miners feel that a limit is so important that they go out of their way to come up with a soft limit on top of the hard limit.[/quote]
I disagree with this. The fact that the miners didn't make any blocks >23 MB during the stress test is evidence of them knowing not to go higher, imo, despite the limit being 32 MB. How'd they figure it out, and how'd they stop at 23 MB? I have no idea. Maybe it had something to do with knowledge of ATMP, maybe not.
[quote]why is forcing the miners to use some ordering beneficial?[/quote]
Because it leads to more predictable behavior and simpler code.
[quote]I doubt it. Miners are incentivized to use an ordering that minimizes orphan risk (by choosing an ordering that ensures fast reception/verification by other nodes). Also: it doesn't matter much how "difficult" (more specifically: how costly in terms of time) it is to produce a block. It doesn't matter much whether you're only assembling your block every 10 seconds or you can do it in one second. All you lose is 10 seconds' worth of fees.[/quote]
I think you're right. But just because it's not much harder to produce one ordering over another doesn't ensure everyone uses the same ordering.
[quote]Why do we need to evaluate the consequences of ordering? How can you even do that? Those effects are largely implementation-specific.[/quote]
The more predictable the system is, the easier it is to model and think about, and the easier it will be to maintain.