ShadowOfHarbringer
@deadalnix, could you at least please provide your stance regarding @awemany's post?
Note that in my suggestion, the default would likely be the natural order, as Tom suggests, simply because that's what arises from doing things the mempool way. So "in the meantime" nothing has changed. All that needs to happen is that the node needs to be able to cope with different orderings of transactions, which is a little extra code, but if everyone continues passing around natural-order blocks, the strategy for processing those blocks is no more computationally expensive.

I disagree. We should actually leave the rules exactly as they are right now. Think about it: basically, Bitcoin acts as a global distributed notary that just irregularly stamps anything that comes in. The transactions coming in are in natural order. If not, that can only be because of a double spend, or because the endpoints have withheld a bunch of dependent transactions, for whatever reason, and are too lazy to bring them into their natural order again.
I see. My answer here would be: "Let's not add code that has little to no benefit." It would only make things more complex and break assumptions, and fewer assumptions might also mean fewer optimizations are possible down the road.

All that needs to happen is that the node needs to be able to cope with different orderings of transactions, which is a little extra code, but if everyone continues passing around natural-order blocks, the strategy for processing those blocks is no more computationally expensive.
I see, understood.

So what I am saying is that if we are going to make a change, it should be to remove the requirement for any particular ordering rather than changing from one ordering to another.
So Bitmain has more than 1/21 of all BCH that will ever be created. Bullish!

Game on!
I'm fine with that, but it seems that things are pushing ahead. I just think some parties are hyper-focused on their answer rather than widening the scope and looking for the "correct" answer, and thus we will end up with "mistakes were made".

I see. My answer here would be: "Let's not add code that has little to no benefit." It would only make things more complex and break assumptions, and fewer assumptions might also mean fewer optimizations are possible down the road.
The natural ordering is the ordering we get from the mempool, i.e. what the miner puts in a block. We can call it natural because no sorting needs to be applied, since in the mempool we are already required to establish the relations between zero-conf transactions that spend each other.

WRT ordering, I think it's unfair to say that dependency order is a natural order that comes for free, because that assumes a particular evaluation algorithm. In particular, a sequential 0-to-N processing order is required to get ordering validation for "free".
But it's not free when 3 to 16 other cores are sitting idle.
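To make concrete what "free" means here: with a dependency-ordered block, a single forward pass can check every input against the UTXO set plus the outputs of earlier in-block transactions, with no separate sorting or dependency analysis. Here is a minimal sketch of that pass; the OutPoint/Tx types and the in-memory set are illustrative stand-ins, not any client's real data structures. It also shows why the pass resists parallelization: each step mutates the set the next step reads.

```cpp
#include <cstdint>
#include <string>
#include <unordered_set>
#include <vector>

// Illustrative types only, not any client's actual structures.
struct OutPoint {
    std::string txid;
    uint32_t n;
    bool operator==(const OutPoint& o) const { return txid == o.txid && n == o.n; }
};
struct OutPointHash {
    size_t operator()(const OutPoint& o) const {
        return std::hash<std::string>()(o.txid) ^ (o.n * 0x9e3779b9u);
    }
};
struct Tx {
    std::string txid;
    std::vector<OutPoint> inputs;
    uint32_t nOutputs;
};

// One forward pass over a dependency-ordered block: a parent is guaranteed
// to appear before its child, so by the time a child's input is checked,
// the parent's outputs are already in `utxo`.
bool ValidateOrderedBlock(const std::vector<Tx>& block,
                          std::unordered_set<OutPoint, OutPointHash>& utxo) {
    for (const Tx& tx : block) {
        for (const OutPoint& in : tx.inputs)
            if (utxo.erase(in) == 0)        // unknown input or double spend
                return false;
        for (uint32_t i = 0; i < tx.nOutputs; ++i)
            utxo.insert({tx.txid, i});
    }
    return true;  // correct, but inherently serial: step i reads what step i-1 wrote
}
```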
This is basically what I suggested to the ABC people;

@theZerg, @Tom Zander: Speaking about more efficient algorithms. I don't know whether you talked about it, Tom, but it seems like an approach that does "partial order first and then secondarily sorts by TXID" would, as far as I can see, allow for a quite beneficial validation algorithm:
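The concrete algorithm description did not survive in this thread, but based on the temporary-table idea quoted further down, a plausible reading is a two-pass scheme: first collect every output the block creates, then check all inputs against that table plus the UTXO set. Since no iteration of the second pass depends on a previous one, the block's transactions could be split across the otherwise idle cores. A sketch under that assumption, reusing the types from the snippet above (ValidateUnorderedBlock is a hypothetical name):

```cpp
// Reuses OutPoint/OutPointHash/Tx from the previous sketch.
// Pass 1: record every output the block creates, in whatever order the
// transactions appear. Pass 2: check each input against that table first,
// then against the confirmed UTXO set.
bool ValidateUnorderedBlock(const std::vector<Tx>& block,
                            const std::unordered_set<OutPoint, OutPointHash>& utxo) {
    std::unordered_set<OutPoint, OutPointHash> blockOutputs;
    for (const Tx& tx : block)
        for (uint32_t i = 0; i < tx.nOutputs; ++i)
            blockOutputs.insert({tx.txid, i});

    // No ordering dependency between iterations: this loop can be
    // partitioned across threads.
    for (const Tx& tx : block)
        for (const OutPoint& in : tx.inputs)
            if (blockOutputs.count(in) == 0 && utxo.count(in) == 0)
                return false;

    // Catching double spends within the block additionally needs a
    // once-only "spent" marker per entry (e.g. an atomic flag); omitted.
    return true;
}
```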
I actually was thinking about a further refinement of that. Let's see whether I can cobble something together tonight to demonstrate what I mean. If I don't, it likely means it doesn't really make sense in detail ;-)

This is basically what I suggested to the ABC people:
https://github.com/bitcoincashorg/bitcoincash.org/pull/94#discussion_r208819983
You would have to sort them so that transactions creating outputs come before the transactions spending those outputs.

Since Tom is in the natural-ordering camp (toward which I am also softening, FWIW), perhaps he can answer these questions:
- How much extra work would it be to add the ability to process non-naturally ordered blocks to your code?
- Would that code significantly affect processing time for a naturally ordered block?
But this could be avoided by creating a temporary table with the outputs of the block and then, when consuming the inputs, consulting this table as a cache first and only going to the UTXO set on disk on a miss.

Yes, in my parallel validation scheme an output consumed in the same block never even hits the UTXO set, so we never flush it to disk and we never have to retrieve it again from disk. And exactly that disk roundtrip, the ultimate cache-locality breakage, is what you will create with the scheme proposed by Vermorel and now again by tomtomtom7.
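A sketch of that short-circuit, again with the types from the first snippet; it is written serially for clarity (a parallel version needs the same two-pass split as above), and DiskUtxo is a hypothetical stand-in for the on-disk UTXO database, not a real interface:

```cpp
#include <unordered_set>
#include <vector>

// Hypothetical interface to the on-disk UTXO database.
struct DiskUtxo {
    virtual bool SpendOrFail(const OutPoint& o) = 0;  // erase if present
    virtual void Add(const OutPoint& o) = 0;
    virtual ~DiskUtxo() = default;
};

// Outputs created and spent inside the same block live and die in
// `blockOutputs`: they are never written to disk, so the flush-then-read
// roundtrip never happens for them.
bool ConnectBlock(const std::vector<Tx>& block, DiskUtxo& disk) {
    std::unordered_set<OutPoint, OutPointHash> blockOutputs;
    for (const Tx& tx : block)
        for (uint32_t i = 0; i < tx.nOutputs; ++i)
            blockOutputs.insert({tx.txid, i});

    for (const Tx& tx : block)
        for (const OutPoint& in : tx.inputs)
            if (blockOutputs.erase(in) == 0   // not created in this block...
                && !disk.SpendOrFail(in))     // ...so it must exist on disk
                return false;

    // Only outputs that survive the block ever reach the database.
    for (const OutPoint& o : blockOutputs)
        disk.Add(o);
    return true;
}
```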