And yet validation fails if it's not done properly. But no code checks it. It's probably God who intervenes and causes the test to fail. Whatever it is, it's indeed not code.
Hi deadalnix.
I was hoping my polite sharing of my experience would be useful, which is why I posted here. I don't know where you hang out, if you have any other place to communicate better about this.
Your reply:
If you are being serious and trying to understand, I would ask you to talk to a native English speaker to translate my English for you, as you completely missed the actual point.
Your answer almost sounds sarcastic. I know that this is not what you meant.
I'll try to explain it in a bit more detail here:
The talk by Mengerian claims that the existence of causal ordering in a block is a detriment to parallel validation. It goes on to claim that removing that ordering will help parallel validation.
First, it is important to restate what this actually means for parallel validation.
Mengerian claimed: a previously sorted list is harder to process than an unsorted one (the new sorting is useless for validation).
Any software developer, or any logically minded person, should be curious how on earth that could be. You are talking about removing information.
You are not talking about removing the need to process the transactions in order.
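To make that point concrete, here is a toy sketch (the `Tx` model, names, and single-pass validator are all illustrative, not any real node's code): when a transaction spends an output created earlier in the same block, a single forward pass over the block only works if the parent comes first. Removing the causal ordering does not remove that dependency.

```python
# Toy model: a "transaction" spends named outputs and creates new ones.
# All names here (Tx, inputs, outputs) are illustrative assumptions.
from dataclasses import dataclass

@dataclass(frozen=True)
class Tx:
    txid: str
    inputs: tuple    # output names this tx spends
    outputs: tuple   # output names this tx creates

def validate_in_order(block, utxo):
    """One forward pass: works only if every parent precedes its child."""
    for tx in block:
        for ref in tx.inputs:
            if ref not in utxo:
                raise ValueError(f"{tx.txid} spends unknown output {ref}")
            utxo.discard(ref)
        utxo.update(tx.outputs)
    return utxo

a = Tx("a", ("coinbase:0",), ("a:0",))
b = Tx("b", ("a:0",), ("b:0",))   # b spends an output of a, same block

validate_in_order([a, b], {"coinbase:0"})   # parent first: succeeds
try:
    validate_in_order([b, a], {"coinbase:0"})  # child first: fails
except ValueError as e:
    print(e)
```

The information that was "removed" (parent-before-child placement) has to be reconstructed by the validator one way or another before the dependent transactions can be checked.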
Please, @deadalnix, take this knowledge into account and improve the idea to sort a block by txid as I suggested, to get the best of both worlds.
Edit:
@deadalnix
the idea is to keep the sorting by txid, as you may be onto something there. But make sure that the transactions that actually depend on others in the same block are not sorted that way; they stay in causal order.
This will have only a tiny impact on Graphene transfer speed.
This will actually help parallel validation immensely, as you completely avoid any sorting and know that the unordered transactions can be processed embarrassingly parallel: not in two or more phases, but all at the same time. Which also helps a lot with data locality.
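The hybrid ordering above can be sketched as follows. This is a toy model under the same illustrative assumptions as before (a `Tx` with named inputs and outputs; real validation is far more involved): transactions with an in-block parent keep their causal order, everything else is sorted by txid and checked all at once.

```python
# Sketch of the suggested hybrid ordering; Tx and all names are
# illustrative assumptions, not any real node's code.
from dataclasses import dataclass
from concurrent.futures import ThreadPoolExecutor

@dataclass(frozen=True)
class Tx:
    txid: str
    inputs: tuple    # output names this tx spends
    outputs: tuple   # output names this tx creates

def split_block(block):
    """Txs with an in-block parent keep their causal order;
    the rest get the canonical txid sort."""
    in_block = {o for tx in block for o in tx.outputs}
    dependent = [tx for tx in block
                 if any(r in in_block for r in tx.inputs)]
    independent = sorted((tx for tx in block if tx not in dependent),
                         key=lambda t: t.txid)
    return dependent, independent

def validate(block, utxo):
    dependent, independent = split_block(block)
    # Independent txs touch only the pre-block UTXO set, so they can
    # all be checked at once, in any order (embarrassingly parallel).
    def spendable(tx):
        return all(r in utxo for r in tx.inputs)
    with ThreadPoolExecutor() as pool:
        if not all(pool.map(spendable, independent)):
            raise ValueError("independent tx spends unknown output")
    # Apply effects, then walk the dependent txs in causal order.
    for tx in independent + dependent:
        for r in tx.inputs:
            if r not in utxo:
                raise ValueError(f"{tx.txid} spends unknown output {r}")
            utxo.discard(r)
        utxo.update(tx.outputs)
    return utxo

a = Tx("a", ("coinbase:0",), ("a:0",))
b = Tx("b", ("a:0",), ("b:0",))       # depends on a, keeps its place
c = Tx("c", ("coinbase:1",), ("c:0",))
print(validate([a, b, c], {"coinbase:0", "coinbase:1"}))
```

Here only `b` needs ordering information; `a` and `c` are validated concurrently with no sorting pass and no multi-phase scheduling.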