Assuming BCH stays an 'eat through everything' incentivized time-stamping system, I do not see much of a difference between O(blocksize) and O(mempool) in the average case. The work to be done by Graphene is O(mempool), while the work to be done by CB (Compact Blocks) is O(blocksize).
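To put rough numbers on "the average case" (a back-of-the-envelope sketch only; the transaction counts below are made-up assumptions, not measurements of either protocol): in a steady state the mempool holds roughly one block's worth of transactions, so one membership test per mempool transaction and one short-ID lookup per block transaction end up doing comparable work.

```python
# Back-of-the-envelope sketch; counts are illustrative assumptions, not
# taken from any Graphene or Compact Blocks implementation.

def cb_ops(block_txs):
    # Compact Blocks: roughly one short-ID lookup per transaction in the block.
    return block_txs

def graphene_ops(mempool_txs):
    # Graphene: roughly one membership test per transaction in the mempool.
    return mempool_txs

block_txs = 100_000  # hypothetical large block
for mempool_txs in (100_000, 200_000, 500_000):
    print(f"mempool/block = {mempool_txs / block_txs:.1f}x: "
          f"CB ~{cb_ops(block_txs)} ops, Graphene ~{graphene_ops(mempool_txs)} ops")
```

Only when the mempool is much larger than a block does the O(mempool) term start to dominate.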
Well said, seconded. This is exactly my thinking. And breaking the causality would be messing with details of subchains/weakblocks implementations, for example. Right now, it looks like it would be easier with the current ordering. Maybe there will be a future where it becomes clear that it is the other way around. I don't see that yet.

The blockchain today is causal: you can start with the genesis transaction and validate the entire history by moving through the blockchain transaction-by-transaction in only the forward direction. If A sends a coin to B and B sends that coin to C, the transaction from A to B always appears before the transaction from B to C. If causal ordering is removed, then the transaction from B to C could appear before the transaction from A to B. Validating can no longer be done purely in the forward direction.
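A minimal sketch of that forward-only pass (hypothetical dict-based transaction records, not actual node code): with causal ordering, every input a transaction spends must already sit in the UTXO set built from earlier transactions, so a single pass from genesis onward suffices.

```python
# Minimal sketch with hypothetical data structures, not actual node code.
# Each transaction is a dict: inputs are (txid, output_index) pairs it spends
# (None marks a coinbase input), outputs is how many new outputs it creates.

def validate_forward(blocks):
    utxo = set()                           # currently unspent (txid, index) pairs
    for block in blocks:                   # genesis onward, forward only
        for tx in block:                   # transactions in block order
            for spent in tx["inputs"]:
                if spent is None:          # coinbase: creates coins from nothing
                    continue
                if spent not in utxo:      # input not created yet -> ordering violated
                    raise ValueError(f"{tx['txid']} spends {spent} before it exists")
                utxo.remove(spent)
            for i in range(tx["outputs"]):
                utxo.add((tx["txid"], i))
    return utxo

# A->B appears before B->C, so the forward pass succeeds; swap the two within
# the block and the same pass rejects it, even though the block is otherwise
# consistent.
chain = [[
    {"txid": "coinbase", "inputs": [None],            "outputs": 1},
    {"txid": "a_to_b",   "inputs": [("coinbase", 0)], "outputs": 1},
    {"txid": "b_to_c",   "inputs": [("a_to_b", 0)],   "outputs": 1},
]]
validate_forward(chain)
```

Without the causal ordering guarantee, the validator would need a second pass, a lookahead structure, or a within-block sort before it could apply the same check.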
Admittedly, I cannot think of why this is necessarily bad, so maybe I'll come around to support this proposal, but I do see it as a huge change to the very structure of the blockchain, and so I think we should proceed very cautiously.
Don't get me wrong. We might end up doing this (breaking this causality), but for the time being I would really, really like to be cautious here. I don't think it is particularly pressing.
If I had to rank the recently debated BCH proposals by how much each worries me, it would be this, in ascending order of concern:
- 32MB limit now
- increase OP_RETURN limit
- activate OP_XOR/OP_AND/OP_DIV/...
- activate OP_GROUP
- change transaction ordering
With the last two being close to a tie.