Gold collapsing. Bitcoin UP.

Tom Zander

Active Member
Jun 2, 2016
208
455
Yes, I think that's an accurate understanding.
The idea that we can leave one step out of an imaginary implementation is only interesting when that implementation is a really bad one.

The one I wrote some time ago actually uses the ordering to allow parallel validation more smoothly. It is much faster than the one you suggested in your talk.

What's more, if the ordering were removed, parallel validation would be much slower.

So please take a look at my implementation here:
https://gitlab.com/FloweeTheHub/thehub/blob/master/utxo/Importer.cpp#L181

It is better than the proposed one (even if it were unordered) and it would be hurt by the new transaction ordering.

This is running code, with some benchmarking built in already.

Do you (or the ABC people who published a PDF) have any running code to compare to?
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@Tom Zander But isn't the idea that the causal order is checked once, and only once, in the mempool? Does that not work?
 

Tom Zander

Active Member
Jun 2, 2016
208
455
@cypherdoc

You are very close to the truth: in reality, "causal order" is not a property you check, it is the natural result of the design. So you are correct that stuff doesn't end up in the mempool unless it's ordered properly. Stuff gets stuck in an orphan cache if the order is not proper.

For blocks this is the same: no software explicitly checks for causal order. The presentation and paper arguing that parallel validation is only possible if we can skip this check are mistaken, because nobody checks for it in the first place. The fact that you can't spend an output before you created it implies the causal order. No need to write code to check it.
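
To make this concrete, here is a minimal sketch of the principle (illustrative C++ with simplified, made-up types; not the actual node code):

Code:
#include <array>
#include <set>
#include <tuple>
#include <vector>

// Simplified, hypothetical types for illustration only.
struct OutPoint {
    std::array<unsigned char, 32> txid; // hash of the creating transaction
    unsigned int index;                 // output index within it
    bool operator<(const OutPoint &o) const {
        return std::tie(txid, index) < std::tie(o.txid, o.index);
    }
};

struct Transaction {
    std::vector<OutPoint> inputs; // the outputs this transaction spends
    // outputs, scripts, etc. omitted
};

struct Mempool {
    std::set<OutPoint> spendable;     // known, unspent outputs
    std::vector<Transaction> orphans; // txs whose parents we haven't seen

    // Note that nothing here "checks causal order". A transaction whose
    // parent is unknown simply fails the input lookup and is parked in
    // the orphan cache until the parent arrives.
    bool accept(const Transaction &tx) {
        for (const OutPoint &in : tx.inputs) {
            if (spendable.count(in) == 0) {
                orphans.push_back(tx); // deferred, not rejected
                return false;
            }
        }
        // all inputs found: mark them spent, add tx's own outputs, etc.
        return true;
    }
};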

For more technical reasoning, look at the link above, where actual working code is shown.

I would love to see the code written by the ABC people where they claim this protocol change is needed to make it fast. I want to compare its speed with my implementation, which doesn't require any change in protocol.
 
  • Like
Reactions: lunar

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@Tom Zander
no software explicitly checks for causal order
Yes, but the validation fails if it's not in causal order. So it seems the consensus rule was probably an accidental consequence of how it was implemented, not a deliberate design choice.
I would love to see the code written by the ABC people where they claim this protocol change is needed to make it fast.
Firstly, I think it's not appropriate to characterize the issue as being specific to "ABC people". There are many people across several projects researching and working on this.

Secondly, I don't know that anyone is claiming it's needed to make it fast now. It's more about laying the groundwork to scale many orders of magnitude in the future.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
> Firstly, I think it's not appropriate to characterize the issue as being specific to "ABC people". There are many people across several projects researching and working on this.

Is it not appropriate for whoever it is to show actual running code?

> Secondly, I don't know that anyone is claiming it's needed to make it fast now. It's more about laying the groundwork to scale many orders of magnitude in the future.

Changing core concepts before actually spending (a lot of) time writing and running code is interesting. I'd say it is the definition of premature optimisation.
Every programmer with a few years' experience or education has heard the phrase "premature optimization is the root of all evil." This famous quote by Sir Tony Hoare (popularized by Donald Knuth) has become a guiding principle among software engineers.

But as you sidestepped my actual question, I'll repeat it here:

Can anyone show the code where the protocol change is shown to be required for parallel processing?


Maybe I'm missing some core innovation. But since I have code, and the protocol change actually hurts parallel validation speed, I think it's worth pausing and reflecting.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
Changing core concepts before actually spending (a lot of) time writing and running code is interesting. I'd say it is the definition of premature optimisation.
Every programmer with a few years' experience or education has heard the phrase "premature optimization is the root of all evil." This famous quote by Sir Tony Hoare (popularized by Donald Knuth) has become a guiding principle among software engineers.
One could also argue that the opposite is the case: That optimizing for validation speed right now is premature optimization, whereas implementing simple and robust data structures with future massive scaling in mind is just good design.
Can anyone show the code where the protocol change is shown to be required for parallel processing?


Maybe I'm missing some core innovation. But since I have code, and the protocol change actually hurts parallel validation speed, I think it's worth pausing and reflecting.
I'm not sure if it can be proved that parallel validation is impossible with the current order rules... In fact you say you have implemented it, which would prove that it is possible!

It does seem that, in general, topological order constraints are more complicated to deal with than the simple "sorted" order proposed for canonical ordering in Vermorel's article. The references in that article go into more detail on the issues around topological vs. sorted ordering schemes.
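
To illustrate the difference with a toy sketch (my own simplification, not taken from any real implementation): verifying lexicographic order is a stateless single pass, whereas verifying topological order means tracking every txid seen so far.

Code:
#include <algorithm>
#include <set>
#include <string>
#include <vector>

// Illustration-only type: a txid plus the in-block txids it spends.
struct Tx {
    std::string id;
    std::vector<std::string> parents; // in-block parents, empty for most txs
};

// Lexicographic (canonical) order: a stateless single pass.
bool isCanonical(const std::vector<Tx> &block) {
    return std::is_sorted(block.begin(), block.end(),
        [](const Tx &a, const Tx &b) { return a.id < b.id; });
}

// Topological order: must remember every txid seen so far.
bool isTopological(const std::vector<Tx> &block) {
    std::set<std::string> seen;
    for (const Tx &tx : block) {
        for (const std::string &p : tx.parents)
            if (!seen.count(p))
                return false; // spends an in-block output not yet created
        seen.insert(tx.id);
    }
    return true;
}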

Anyhow, I will have a look at the code you linked when I have some time to think about it, and try to understand your approach. Thanks for sharing the link.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
optimizing for validation speed right now is premature optimization, whereas implementing simple and robust data structures with future massive scaling in mind is just good design.
I fully agree. This is exactly what I've been trying to say!

Here is the core disagreement:

You seem to claim that removing the requirement to have your inputs ordered allows you more freedom, specifically in parallel validation (PV).

Practice shows this to be false. I claim instead that:

having in your inputs a guarantee that transactions are causally ordered allows you more freedom. This ordering is not in any way a limitation; it is useful information.
Throwing that information away makes validation slower, not faster.

The distinction exists because the idea to remove causal order mistakes the ordering for a requirement that must be explicitly checked. As we agreed above, this is false and no software has code to check this, making the premise incorrect.
Instead, the order is there because not having it makes things much more difficult.

The proposal removes information from the validation process; is it any surprise that this negatively affects processing speed?

It does seem that, in general, topological order constraints are more complicated to deal with than the simple "sorted" order
This is the trick: the validation software does not see this as a constraint. The constraint is purely on the mining side; at validation time it is added information.
Information that helps to quickly separate the (often less than) 1% of causally ordered transactions from the 99% that have no ordering constraints (and can thus be processed in parallel with no problem).
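
As a simplified sketch of that separation (illustrative only, not the Flowee code), one linear scan over the topologically ordered block is enough to split out the causal subset:

Code:
#include <set>
#include <string>
#include <vector>

struct Tx {
    std::string id;
    std::vector<std::string> spends; // txids referenced by this tx's inputs
};

// One pass over a topologically ordered block. Transactions that spend an
// earlier in-block output go to 'causal' (their relative order is kept, so
// they validate sequentially); everything else goes to 'independent' and
// can be handed to worker threads all at once.
void partition(const std::vector<Tx> &block,
               std::vector<Tx> &causal, std::vector<Tx> &independent)
{
    std::set<std::string> inBlock;
    for (const Tx &tx : block) {
        bool dependsOnBlock = false;
        for (const std::string &s : tx.spends) {
            if (inBlock.count(s)) { dependsOnBlock = true; break; }
        }
        (dependsOnBlock ? causal : independent).push_back(tx);
        inBlock.insert(tx.id);
    }
}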


P.S. Maybe a middle ground can be found: causal transactions are kept causal and the rest are sorted by txid. The causal ones are stored first in the block (or last, I don't care), and code like Graphene only needs a small alteration to treat the causal transactions differently.
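
On the construction side, that middle ground could look something like this (continuing the illustrative sketch above):

Code:
#include <algorithm>
// reuses Tx and partition() from the sketch above

// Hybrid block layout: the small causal subset first, in causal order,
// followed by the independent majority sorted by txid, so Graphene-style
// set reconciliation still covers the bulk of the block.
std::vector<Tx> hybridOrder(const std::vector<Tx> &topological)
{
    std::vector<Tx> causal, independent;
    partition(topological, causal, independent);
    std::sort(independent.begin(), independent.end(),
              [](const Tx &a, const Tx &b) { return a.id < b.id; });
    causal.insert(causal.end(), independent.begin(), independent.end());
    return causal;
}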
 
Last edited:
  • Like
Reactions: torusJKL and _bc

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
It is possible to maintain the causal ordering information, where it is deemed useful, by prefixing each canonically ordered txid with a sequence byte. This sequence byte is a generation number, allowing up to 255 generations of dependent txns (however "dependent" is defined). 99% of the time the generation number is zero.

Not sure if the overhead for the generation byte, in checking and incrementing, is problematic.
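
A rough sketch of what that might look like (illustrative C++; "generation" here is taken to be the number of in-block ancestor generations, which is one possible reading of "dependent"):

Code:
#include <algorithm>
#include <map>
#include <string>
#include <vector>

struct Tx {
    std::string txid;
    std::vector<std::string> spends; // in-block parents, if any
};

// Generation 0 = no in-block parents (the ~99% case); otherwise one more
// than the highest generation among in-block parents, capped at 255.
std::map<std::string, unsigned char> generations(const std::vector<Tx> &txs)
{
    std::map<std::string, unsigned char> gen;
    for (const Tx &tx : txs) { // assumes input arrives in causal order
        unsigned g = 0;
        for (const std::string &p : tx.spends) {
            auto it = gen.find(p);
            if (it != gen.end())
                g = std::max(g, unsigned(it->second) + 1u);
        }
        gen[tx.txid] = static_cast<unsigned char>(std::min(g, 255u));
    }
    return gen;
}

// Sort key: the generation byte prefixed to the txid. Dependents always
// sort after their parents; within a generation the order is canonical.
std::string sortKey(const Tx &tx, const std::map<std::string, unsigned char> &gen)
{
    return std::string(1, static_cast<char>(gen.at(tx.txid))) + tx.txid;
}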
 
  • Like
Reactions: torusJKL

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@79b79aa8
Loved the first part of the article, which brings money down to earth from the theoretical concept we have all discussed for years. It made me understand better why a lot of people stay away from Bitcoin, and why you REALLY don't want your money to be volatile.

The second part is also great. "Profit Mode" in wallets could be a great concept for people to manage their bitcoins. (When they just have a small amount of their money in BCH, very much unlike me, lol.)
 

deadalnix

Active Member
Sep 18, 2016
115
196
The distinction exists because the idea to remove causal order mistakes the ordering for a requirement that must be explicitly checked. As we agreed above, this is false and no software has code to check this, making the premise incorrect.
And yet validation fails if it's not done properly. But no code checks it. It's probably God who intervenes and causes the test to fail. Whatever it is, it's indeed not code.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
And yet validation fails if it's not done properly. But no code checks it. It's probably God who intervenes and causes the test to fail. Whatever it is, it's indeed not code.
Hi deadalnix.

I was hoping my polite sharing of my experience would be useful, which is why I posted here. I don't know where you hang out, or whether there is a better place to communicate about this.

Your reply: if you are being serious and trying to understand, I would ask you to talk to a native English speaker to translate my English for you, because you completely missed the actual point.
Your answer almost sounds sarcastic, but I know that this is not what you meant.

I'll try to explain it a bit more elaborately here:

The talk by Mengerian claims that the existence of causal ordering in a block is a detriment to doing parallel validation. It goes on to claim that removing that ordering will help parallel validation.

First, it is important to restate this claim as what it actually means for parallel validation.

Mengerian claimed: a previously sorted list is harder to process than an unsorted one (the new sorting being useless for validation).

Any software developer, or logical person, should be curious how on earth that could be. You are talking about removing information. You are not talking about removing the need to process the transactions in order.

Please @deadalnix, take this knowledge into account and improve the idea of sorting a block by txid as I suggested, to get the best of both worlds.

Edit:

@deadalnix

The idea is to use the sorting by txid, as you may be onto something there. But make sure that the transactions that actually depend on others in the same block are not sorted that way; they stay in causal order.

This will have only a tiny impact on the Graphene transfer speed.
This will actually help parallel validation immensely, as you completely avoid any sorting and know that the unordered transactions can be processed embarrassingly parallel: not in two or more phases, but all at the same time, which also helps a lot with data locality.
 
Last edited:

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
It seems that CNBC is driving short-term price spikes in Bitcoin. They do need to consider that projections should be an aggregation of the BTC and BCH prices.
 
  • Like
Reactions: Norway and majamalu

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
I think the spring is coiled for such a leap, but this will be where the rubber hits the road, and BTC won't have the capacity to get there. This time, instead of being about played out and ready to recede and recover, BTC will hit its 1MB limit at full speed and will see catastrophic congestion for months. The later stages will be people rushing for the exit doors as the price plummets.
 
  • Like
Reactions: majamalu