Gold collapsing. Bitcoin UP.

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Thanks for the answer.

I'm assuming that weakblocks are optional here and are not required for participation.
Yes. With the caveat that if they work well, miners might be incentivized to follow the weak blocks chain.

Which might conversely add a cost - in the form of higher orphan risk - to going against the grain, so to speak, and publishing blocks that ignore the weak chain.

Which is exactly what is desirable to get some of the benefits of weak blocks in a high-transaction-rate environment - better / faster "fractional confirmations" that also mean something.

But it is somewhat of a small gray area that is touched/changed by this. Same with graphene or xthin, by the way.

It does not touch block validity or consensus rules. Code that will 'fully validate the full chain since genesis' will do so regardless of the operation of weak blocks on the network.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Sure. If miners start deliberately orphaning blocks that are created without weakblocks, that might be some cause for concern but I'm not sure even that would actually be a problem, as such.

It's all about encapsulation. Somewhere there's a blockdata->isvalid() functionality (actually CBlock::CheckBlock()). If something is inside of there, it's something to pay attention to. If it's outside, it's much more flexible.

Looking back at the block size limit, it looks like when Satoshi introduced it, he did effectively put it inside that functionality so it does fall under requiring that extra scrutiny to change (and it should have originally but that's water under the bridge). Increasing the limit definitely passed that scrutiny though.
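To make the encapsulation point a bit more concrete, here is a minimal sketch of the distinction - not actual Bitcoin code; the Block type, the constant and the function names are simplified stand-ins for CBlock::CheckBlock() and node-local policy:

```cpp
#include <cstdint>
#include <vector>

// Simplified stand-in for a block; real code would use CBlock.
struct Block {
    std::vector<uint8_t> serialized;   // raw block bytes
    // ... header, transactions, etc.
};

// Consensus layer: everything in here defines what a *valid* block is.
// Changing anything inside is a consensus change and needs the extra
// scrutiny described above (this is effectively where the size check landed).
bool CheckBlock(const Block& block, uint64_t consensusMaxBlockSize)
{
    if (block.serialized.size() > consensusMaxBlockSize)
        return false;   // invalid for every node, forever
    // ... proof of work, merkle root, per-transaction checks ...
    return true;
}

// Policy layer: a node's own preferences about what it relays or builds on.
// These can change freely without splitting the network.
bool WantToRelay(const Block& block, uint64_t localSoftLimit)
{
    return block.serialized.size() <= localSoftLimit;
}
```

The point being made is that rules inside the first function carry consensus weight, while rules in the second are just local choices.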
 

Tom Zander

Active Member
Jun 2, 2016
208
455
We can, however, likely run 8GiB blocks using a protocol that has a specification very close to the implicit one the current implementations use. But the actual implementations are a very different matter.
I think Flowee is very close to those goals already. The blocks and transactions no longer live in memory; they are memory-mapped, making the amount of internal memory no longer a limit on the size of blocks we can process. (There are still various places where the backwards-compatible APIs are used; more work is needed.)
Same with various other core components needed to use much, much bigger blocks. The global architecture of Flowee the Hub will likely not need much change for huge blocks.

I like to remind everyone that there's one final obvious change that needs to happen before full ossification, though - the 32MB limit needs to be replaced with something likely based on miner voting ...
This is debatable. From my reading of the initial BCH spec, the block size was removed from the consensus rules and BU's blocksize limit (EB) was made the way to specify the max size. Which means that changing the max block size is no longer a hard fork.

There are a lot of exciting future changes which could be better with or require a hard fork: UTXO commitments, Schnorr signatures, BLS signatures, Confidential Transactions
UTXO commitments don't require a hard fork. At most they require a soft fork (to orphan blocks that lie about the commitment hash).
Schnorr or BLS signatures seem very low priority. Not a good return on investment because of the high risk and low (possible) reward.
Confidential Transactions aren't really confidential. They are not a good solution if you are interested in privacy. Various options exist that also don't require any protocol changes (and no forks).

Really cool stuff I'm excited about in BCH:

* utxo commitments. No fork required.
* Graphene optimisations (tx ordering). No fork required.
* Double spend proof. No fork required.
* Weak blocks. No fork required.

devs gotta dev
Without an open conversation, or even a good explanation of why the stuff is useful and of how time is spent making sure nothing old breaks, yeah, the impression of "devs gotta dev" sounds familiar.

There is a good old saying that if you create an institution or practice in order to do a thing, then the people involved will make sure that this thing keeps being done well beyond the time it should have been decommissioned. Ask anyone who lived through the socialist era (eastern Europe) and they can give you examples of institutions that had no function other than keeping themselves important and relevant.

Within a short year, the periodic hard-fork practice has reached this point; at minimum, the practice has made it too easy to make invasive changes and has shifted the burden of proof from the people wanting to change Bitcoin to the people not wanting to change Bitcoin.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
Just to share this with the people here.

I responded to mengarian some weeks ago on this thread about the transaction ordering change proposed by the ABC people. I asked for code to show how this was an actual improvement because in my software I can see that the transaction order change would be a massive problem and make PV slower.
I never got any good answers; certainly no software was presented that shows the rationale behind the transaction ordering change.

Now with the ABC news stating they still want to change the transaction ordering, I'm a little upset.

  • The goals they give for doing this change can all be achieved without a fork.
  • The changes actively hurt known implementations and speed
  • There is no actual implementation they can show that demonstrates the protocol changes are indeed useful. Nobody actually benefits.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
The goals they give for doing this change can all be achieved without a fork.
@Tom Zander : How do you avoid transmitting ordering information for things like Graphene?

The changes actively hurt known implementations and speed
Which ones and how?

There is no actual implementation they can show that demonstrates the protocol changes are indeed useful. Nobody actually benefits.
That isn't true for at least Graphene. I also found another use: I could easily and quickly do an O(log N) search for a given TXID in my weakblocks RPC implementation. Of course I could also implement it to be fast in another way, but I would need to take care of some extra data structures for that.

The last part shifted me from a "meh, let's rather not change stuff that is working" towards an "I'm ok with this change", though more in the sense of being neutral.

My main concern now is time to implement this. No one in BU has started to work on this AFAIK, so it will be quite the hurry. I don't like hurries. Maybe wait a bit with it? It is just the biggest change for this HF, as far as I can see, and it simply makes me a little uneasy.

What's your stance on DATASIGVERIFY?
 

Tom Zander

Active Member
Jun 2, 2016
208
455
How do you avoid transmitting ordering information for things like Graphene?
A block typically has only a very small percentage of transactions that are required to be in order: the small number that spend outputs created in that same block. The rest of the transactions are completely free to be reordered, for instance by sorting them by txid. No consensus rule is violated, and you can suddenly avoid sending the ordering info for the majority of the transactions. So only the transactions that need to be in order will carry some ordering info.

So if you just split the block's transaction list into two sets - the transactions involved in in-block dependencies (a transaction together with anything it spends from the same block) and the ones that aren't - then you can take the first set, keep its validation order, and put it at the front. The second set can then be sorted by txid and appended.
The Graphene creation software can detect this and create much shorter data, essentially omitting the ordering info for all the sorted transactions.

The interesting part is that this means a Graphene block message becomes smaller if the miner reorders the transactions smartly. As such it is in the miner's interest to do this.

So you don't need any new validation rules. The reordering is allowed in the current rules and the optimization is economically profitable so we can expect the miner to do it voluntarily.
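A rough sketch of the miner-side reordering described above, as I understand it (simplified stand-in types, not Flowee or BU code): transactions involved in in-block dependency chains keep their validation order at the front, everything else is sorted by txid and appended.

```cpp
#include <algorithm>
#include <string>
#include <unordered_set>
#include <vector>

// Simplified stand-in for a transaction; real code would carry full data.
struct Tx {
    std::string txid;                   // id of this transaction
    std::vector<std::string> spends;    // txids whose outputs it spends
};

// Reorder a block candidate that is already in valid natural order:
// transactions involved in in-block dependency chains keep their relative
// (validation) order at the front; the independent rest is sorted by txid.
// The result is still valid under the current consensus rules.
std::vector<Tx> ReorderForGraphene(const std::vector<Tx>& naturalOrder)
{
    std::unordered_set<std::string> inBlock;
    for (const Tx& tx : naturalOrder)
        inBlock.insert(tx.txid);

    // A transaction is "involved" if it spends an in-block output, or if an
    // in-block transaction spends it; both sides must keep their order.
    std::unordered_set<std::string> involved;
    for (const Tx& tx : naturalOrder)
        for (const std::string& parent : tx.spends)
            if (inBlock.count(parent)) {
                involved.insert(tx.txid);
                involved.insert(parent);
            }

    std::vector<Tx> ordered, sortable;
    for (const Tx& tx : naturalOrder)
        (involved.count(tx.txid) ? ordered : sortable).push_back(tx);

    std::sort(sortable.begin(), sortable.end(),
              [](const Tx& a, const Tx& b) { return a.txid < b.txid; });

    ordered.insert(ordered.end(), sortable.begin(), sortable.end());
    return ordered;     // dependency chains first, the txid-sorted rest appended
}
```

Because every in-block parent of an ordered child stays in the ordered prefix, the block remains valid, and a Graphene encoder only has to transmit ordering information for that (usually short) prefix.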

Which ones and how?
See the post directly before yours which goes into some details.

Essentially I have been researching how to best parallelize transaction validation. The main thing the code needs to know is which of the transactions spend outputs from other txs in this block, because those that don't can be run 100% in parallel, including updating the UTXO.

The change they propose is stated to add a sorting; they don't mention that they also remove a sorting: the order of transactions that spend each other's outputs.
And you want to know this order so you can parallelize the rest.

To get that info back you'd have to compare all inputs with all transactions in the block. Which is vastly more expensive than knowing tx 10 can only spend outputs from tx 1...9.

You can look at the code in Flowee if you want to see the fastest way (that I know of) to do this. (link)
It could be even faster if we can get the sorting done as stated above.
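A sketch of the validator-side idea (again with simplified stand-in types, not the actual Flowee code linked above): walking a naturally ordered block once, every input refers either to the existing UTXO or to an output created earlier in the same block, so a single cheap pass splits the block into an embarrassingly parallel set and a (usually tiny) dependent set.

```cpp
#include <string>
#include <unordered_set>
#include <vector>

struct Tx {
    std::string txid;
    std::vector<std::string> spends;   // txids whose outputs this tx spends
};

struct ValidationPlan {
    std::vector<const Tx*> parallel;   // touch only the on-disk UTXO: check on any core
    std::vector<const Tx*> dependent;  // spend in-block outputs: need the "miniUTXO"
};

// One linear pass over a naturally ordered block. Because tx i can only spend
// outputs of tx 1..i-1, we never have to scan forward or compare all-vs-all.
ValidationPlan PlanParallelValidation(const std::vector<Tx>& block)
{
    ValidationPlan plan;
    std::unordered_set<std::string> miniUtxo;   // txids seen so far in this block

    for (const Tx& tx : block) {
        bool spendsInBlock = false;
        for (const std::string& parent : tx.spends)
            if (miniUtxo.count(parent)) { spendsInBlock = true; break; }

        (spendsInBlock ? plan.dependent : plan.parallel).push_back(&tx);
        miniUtxo.insert(tx.txid);
    }
    return plan;   // plan.parallel can be checked concurrently, plan.dependent sequentially
}
```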

What's your stance on DATASIGVERIFY?
This seems to be code directly copied from the Blockstream codebase on Elements. As such I would appreciate empirical evidence on its use cases, its actual use, and why there are two ops instead of one. In other words, I have not looked into this all that much, and I feel that the people proposing to change Bitcoin should be the ones making the argument. We should not be the ones finding arguments against change.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
... So you don't need any new validation rules. The reordering is allowed in the current rules and the optimization is economically profitable so we can expect the miner to do it voluntarily.
Fair point.

To get that info back you'd have to compare all inputs with all transactions in the block. Which is vastly more expensive than knowing tx 10 can only spend outputs from tx 1...9.
I see a 'more expensive' here, but I don't see a 'vastly more expensive'. In the TXID-ordering case, I run through all transactions beforehand and put their outputs into a hash table - what you call 'miniUTXO' in your code. In your case, I do it on the fly.

In both cases, it stays an O(1) hash-table read plus an O(n) pass to go through all transactions once, respectively twice?

But maybe you actually bring up good points here on why one would want to wait a bit with the ordering change as well. The more I ponder it, the more I think there should be measurements of its effectiveness in terms of parallelization.

It has advantages, however. Lookup by TXID in a block becomes simple without needing to keep extra data.
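For completeness, the lookup advantage mentioned here is just a binary search once the block's txids are in lexicographic order; a minimal sketch, assuming the txids have already been extracted into a vector:

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Given the txids of a lexicographically ordered block, find the position of
// one txid in O(log n) without any extra index structure.
// Returns -1 if the transaction is not in the block.
long FindTxIndex(const std::vector<std::string>& sortedTxids, const std::string& wanted)
{
    auto it = std::lower_bound(sortedTxids.begin(), sortedTxids.end(), wanted);
    if (it == sortedTxids.end() || *it != wanted)
        return -1;
    return it - sortedTxids.begin();
}
```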

And you want to know this order so you can parallelize the rest.
Can you elaborate on this? For any parallelization I can think of, I have to divide the block into several transaction sets that are then processed in parallel. In all cases, I have potential dependencies on outputs that are in the 'mini UTXO' but not the regular UTXO. In any case, I don't see much benefit in an ordered-by-TXID block here, either.

This seems to be code directly copied from the Blockstream codebase on Elements.
I don't think it is - it is rather a coevolution of similar goals resulting in similar code. If you look closely, ABC does a few things differently, things that the Blockstream implementation rather has in common with BU's (BU's being the one that differs more from both of the other two).

As such I would appreciate empirical evidence on its use cases, its actual use, and why there are two ops instead of one.
Empirical evidence of use cases is good to have. I know of two: @theZerg 's betting and insurance schemes, and my (though still to be proven) "Zero Conf Insurance" idea.

As for the latter, I guess the answer is a desire for symmetry: we have CHECKSIG and CHECKSIGVERIFY. But I think there's a case that maybe just a CHECKDATASIG would suffice.
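For readers who haven't followed the symmetry argument: in Bitcoin Script a VERIFY variant behaves essentially like the plain opcode followed by OP_VERIFY (abort the script instead of leaving true/false on the stack). A toy illustration of that relationship - purely hypothetical names and a stubbed check, not any client's actual interpreter:

```cpp
#include <stdexcept>
#include <vector>

// Toy stack of booleans, just to show the VERIFY relationship.
using Stack = std::vector<bool>;

bool CheckSomething(/* sig, msg, pubkey ... */) { return true; }  // placeholder

void OpCheckData(Stack& stack)            // e.g. a plain CHECKDATASIG
{
    stack.push_back(CheckSomething());    // leaves true/false for later opcodes
}

void OpVerify(Stack& stack)
{
    if (stack.empty() || !stack.back())
        throw std::runtime_error("script failed");
    stack.pop_back();
}

void OpCheckDataVerify(Stack& stack)      // e.g. CHECKDATASIGVERIFY
{
    OpCheckData(stack);                   // same check...
    OpVerify(stack);                      // ...then fail immediately if false
}
```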

In other words, I have not looked into this all that much, and I feel that the people proposing to change Bitcoin should be the ones making the argument. We should not be the ones finding arguments against change.
Yes, absolutely. But these arguments have been put forward! People now need to weigh them applying their own metrics. Which brought me towards a somewhat positive position regarding this change.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
This is good enough for a permanent solution, without the leap of faith into full-blown emergent consensus.
I think miners are able to move forward with EC (Emergent Consensus); it's the 1MB blockers that are opposed to it. If anything, I'd make the EB (Excessive Block limit) follow something like BIP101, where the default limit is adjusted on a fixed schedule, ending with unlimited.

I'm not familiar with the tweaks to BIP100, but I'm still of the understanding that anything that involves voting can be manipulated by the majority of voters. Majority hash power could constrain EC in a similar way, with one exception: such a cartel would be fragile.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
....This is good enough for a permanent solution, without the leap of faith into full-blown emergent consensus.
we have "full blow EC" Already.. "EC" is a fundamental property of bitcoin.... its what sets bitcoin apart.
all we need is some code to facilitate that process, but that's probably what you mean by "full Blown EC".

IMO "full Blown EC" is a " leap of faith " we have ALL, already signed up for...

i wish we didn't have to take then tiny baby steps all the time....
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Agreeing with @awemany. If the mood is changing towards a "lockdown" of the protocol, then the 32MB can be replaced with Bitcoin XT's modified version of Jeff Garzik's BIP100
https://bip100.tech/

This is good enough for a permanent solution, without the leap of faith into full-blown emergent consensus.
Why not 128 MB in November 2018, 512 MB in May 2019, 2 GB in November 2019 and no limit in May 2020, as suggested by CSW? Sounds good to me, but a dynamic cap is also possible.

I totally share the opinion that we need a solution to the max blocksize issue ASAP.

I don't think we should have a committee of devs deciding the next 6 months' quota based on results from the gigablock testnet in the years to come. The gigablock testnet should only be a way to improve clients.

The miners and ecosystem simply have to pick up the pace to stay in business.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Ok, @Tom Zander: I have now spent the better part of the night here in Europe pondering hard about the whole transaction ordering thing once more, as I feel the window for bringing forward criticism is closing, yet it feels very important to do the right thing here.

I (as have others, like @Peter R , I believe) have flip-flopped on this issue, but that should only be taken as a sign that there's still uncertainty - at least in my mind - as to what this change would entail.

I have also looked again for good arguments on why this change is needed or why it makes sense, and I have to say that I must confirm your POV: there is indeed quite a lack of good arguments.

With all due respect, for me this now includes the whitepaper by Vermorel, @deadalnix and others. I am referring to this one: https://blog.vermorel.com/pdf/canonical-tx-ordering-2018-06-12.pdf

To state it bluntly, the convincing-sounding arguments therein now seem like a series of red herrings. Not red herrings in the sense of being meant to fool others, mind you, but the kind of line of thought that emerges when one tries to bury a problem that is simply hard, that bothers one's sense of cleanliness and neatness, but that will NOT go away because it is intrinsic to the system. You start to fool yourself, and in turn you start to inadvertently fool others.

Basically, @Peter R. said once, paraphrasing from memory "it is the natural order of things, should we really change it?".

And that is the essence, but it took me a while to grasp.

Yes, the transactions are currently in partial order in blocks and that complicates some things.

But basically, there is no way out of that. You can hide the problem behind a layer of 'sorting and unsorting' but that just adds complexity. The required partial order is the validation order. Not the TXID. And You. Can. Not. Avoid. That.

On a single core, it is also quite obvious that this natural partial order is the fastest way to validate. And on multiple cores, if anything, with the partial order in place, I know that transactions I build upon can only come from the past, and thus I have gained a bit of information compared to the by-txid ordering case. I have yet to see any argument why losing this information and having to rebuild it is advantageous. I suspect one could even attempt a proof that if you split up a partially ordered block (as blocks are now) at arbitrary boundaries into chunks and distribute them among workers, the interprocess communication needed and the amount of stalling you get can only go up with orderings that are different from the required partial order.

One argument from the whitepaper that stood out for me in the sense how it utterly falls apart under a change in perspective is this:

As blocks are on average 10 min apart, it’s important to take advantage of the whole duration in-between two consecutive blocks to process incoming transactions as evenly as possible, making the most of available computing resources.
Yes. Agreed. BUT: Right now, if you identify the transaction order with the partial order you would get from the (inaccessible) transaction creation time (which is necessarily causal!), you get the green line (crosses) in this picture:

[image: plot of transaction processing over time - natural/causal order shown as green crosses, canonical (by-txid) order as red crosses jumping back to natural order at block boundaries]

And if you do the canonical order, you get the red crosses, with a sporadic return to natural order on the block boundaries.

How can this be an advantage for processing, especially processing that is smooth in intensity over time, like the above-stated goal!? The red herring here is looking at blocks instead of at the stream of transactions.

The good parts, like my weakblocks RPC code becoming simpler resp. faster, are true. But they're basically just me reaping the benefits of a sort-and-unsort step that happens around the very validation core of the system. I can more easily dig through blocks by TXID when they are guaranteed to be ordered. But someone or something still has to do that ordering and unordering for me!

And the structure of the system itself should be seen independently of requirements for just querying it! It is as important to look at the requirements for updating it.

And yes, it would be great to have all that kind of functionality in easily accessible APIs and to be able to shape the extra-resource-usage vs. convenience trade-off when using them.
But that is independent of changing the order in the blocks!

And it appears to me that changing the validation order of the system - with folks now starting to complain that it makes things slower (so: evidence against the claimed efficiency!) - is a dangerous path to go down, just to reap some benefits at the borders of the system. (@Tom Zander: have you made plots of the slowdowns you expect in your software with canonical ordering?)

This goes for Graphene as well as weak blocks. @Tom Zander's argument (which I have a déjà vu of having heard before, most likely from him) regarding the partial order basically permitting the freedom to sort for Graphene makes a lot of sense here.

And, again, it is absolutely important to look at the requirements for updating it when talking about validation efficiency.

And all one does when adding the sorting on top is push the issue around and create more complexity in the end. @deadalnix, please convince me of the opposite: what and where does validation efficiency increase due to canonical ordering?

At the very least, let us please wait a bit longer before changing the transaction ordering. We can change it now and it might be worse permanently, or we can leave it as it is for now and it might be worse for a while.

I will try to put together a more detailed criticism of the linked whitepaper tomorrow.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Basically, @Peter R. said once, paraphrasing from memory "it is the natural order of things, should we really change it?".
But they're basically just me reaping the benefits of a sort-and-unsort step that happens around the very validation core of the system. I can more easily dig through blocks by TXID when they are guaranteed to be ordered. But someone or something still has to do that ordering and unordering for me!
As I've mentioned before, I think that a node should be able to accept transactions in any order within the block. It doesn't really bother me if they're sorted this way or that way but if Bitcoin is time-stamping transactions, they should all be regarded as simultaneous. Perhaps, even, it might be possible for a node to request one sorting or another or possibly even an index into the block.

It's somewhat beginning to feel like the old "how many angels can dance on the head of a pin" at this point.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Note that if nodes can accept any transaction ordering within a block, it allows new ordering methods to be adopted at any time as they become beneficial, or if one is seen as providing more benefit than another. A market of ideas is allowed for. If the ordering needed for idea "X" is preferred, then that method will be able to work without re-ordering. Then in 2025, when Graphene-XtrEme!! is invented and benefits from ordering transactions by decreasing number of outputs followed by the third character of the public key address, it can be adopted with no fork required.

Never forget:

Be liberal in what you accept, and conservative in what you send.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
Epic writeup @awemany !

The paper from Vermorel was weird. I agree on the red-herring diagnosis. The bottom line for me is this:
during validation there currently isn't a single cycle spent checking the ordering, because it is a natural ordering.
Their suggestion to remove the natural ordering, start using the canonical one and make that a hard-fork rule implies that when accepting a block we now have to validate the order. This is more complexity and more code, and naturally this can't make things faster unless there is a huge advantage elsewhere (I don't see one).

The core concept that Vermorel et al. got wrong is the claim that ordering in natural order is a net negative. It is not! Their claim ignores the fact that this ordering is a natural consequence of having a validated mempool of transactions. The most important part is that this ordering happens during mining, not during validation as they imply when they talk about edges and ordering methods. None of their graph theory is relevant, as the order is a natural consequence of having a mempool, and thus free.


I think there is one little problem in your thoughts @awemany, but this doesn't invalidate any of your other arguments. I am being a bit pedantic today :)

This is about the advantage you said you get from sorting by txid and how fast it would be to look up a transaction.
I think it stems from the API you currently use, which is a block with a vector of transactions.
What you have to realize is that this is a design that can't scale, because the entire block needs to be in memory and needs to be iterated over before you can access it. Imagine doing that for a gigabyte block: you take the gigabyte, copy all the data into vectors in memory, and parse the entire block before you even start looking at the transactions.

A longer-term solution (the one I already use in Flowee) is that a block is memory-mapped and you use an iterator over the entire block, identifying the individual transactions and their hashes as you go, not storing any of that in memory (but maybe in a hash table like the miniUTXO you found in my linked code).
This means that you go a bit lower level, which is required in order not to malloc a huge amount of memory just to find one transaction in a large block. And then you realize that there isn't really a benefit to sorting by txid in order to find one specific transaction. Only in corner cases, like providing proof that a transaction doesn't exist, is there a benefit - and not a very large one (since the proof only extends to one block, and we have half a million of them today).
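A sketch of the memory-mapped iteration described here, under stated assumptions: POSIX mmap, a caller-supplied parser for the transaction wire format (left out of the sketch), and the tx-count varint skipped for brevity. Flowee's real iterator is of course more involved; this only shows the shape of the approach - visit transactions in place, never copy the block into vectors.

```cpp
#include <cstddef>
#include <cstdint>
#include <fcntl.h>
#include <sys/mman.h>
#include <sys/stat.h>
#include <unistd.h>

// Walk every transaction of a block file without copying it into memory.
// `parseTxSize(ptr, remaining)` must return the serialized size of the
// transaction starting at `ptr` (0 on error); the wire-format parsing itself
// is not part of this sketch. `visit(ptr, size)` sees each transaction
// in place, inside the mapping.
template <typename ParseTxSizeFn, typename VisitorFn>
bool ForEachTransaction(const char* path, ParseTxSizeFn parseTxSize, VisitorFn visit)
{
    int fd = open(path, O_RDONLY);
    if (fd < 0) return false;

    struct stat st;
    if (fstat(fd, &st) != 0) { close(fd); return false; }

    void* raw = mmap(nullptr, st.st_size, PROT_READ, MAP_PRIVATE, fd, 0);
    close(fd);                                   // mapping stays valid after close
    if (raw == MAP_FAILED) return false;
    const uint8_t* base = static_cast<const uint8_t*>(raw);

    // Skip the 80-byte block header; the tx-count varint is not handled here.
    size_t offset = 80;
    const size_t fileSize = static_cast<size_t>(st.st_size);
    while (offset < fileSize) {
        size_t txSize = parseTxSize(base + offset, fileSize - offset);
        if (txSize == 0) break;                  // parse error or end of data
        visit(base + offset, txSize);            // transaction is read in place
        offset += txSize;
    }

    munmap(raw, st.st_size);
    return true;
}
```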

Yes, absolutely. But these arguments have been put forward! People now need to weigh them applying their own metrics. Which brought me towards a somewhat positive position regarding this change.
I'm more of the opinion that we can pick the third option, which is to reject their entire hard fork proposal because they haven't argued for it well enough. Almost exactly like they rejected "GROUP" from @theZerg.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Richy_T :
As I've mentioned before, I think that a node should be able to accept transactions in any order within the block. It doesn't really bother me if they're sorted this way or that way but if Bitcoin is time-stamping transactions, they should all be regarded as simultaneous. Perhaps, even, it might be possible for a node to request one sorting or another or possibly even an index into the block.
I disagree. We should actually leave the rules exactly as they are right now. Think about it: basically, Bitcoin acts as this global distributed notary that just irregularly stamps anything that comes in. The transactions coming in are in natural order. If not, that can only be because of a double spend, or because the endpoints have withheld, for whatever reason, a bunch of dependent ones and are too lazy to bring them into their natural order again.

Without any loss of generality or even functionality, the network can go and accept transactions only when they come in the natural order. As this order is partial (at least the way it is observed), there is some wiggle room left - wiggle room that is actually quite large and can be exploited by miners who also want to sort some of the transactions by some other order, compatible with the natural one, to make them simpler for others to digest.

However, all of this should only need to happen on the edges of the network, so to speak. The core validation logic should stay dumb and simple, so that almost all nodes in the network can simply follow this "flow sorted by transaction signing time". First-seen-safe and similar ideas are actually, in part, motivated by a view stemming from this core observation.

And it is the most dumb and simple if it simply reflects the order in which things are happening.

@Tom Zander said it exactly right: the order is natural and comes for free. The whole paragraph in Vermorel's paper that talks about how it is costly to maintain an online partial ordering is a complete red herring!

And it is physical causality that makes this order come for free: if you somehow had a magical way to look at transaction signing time, from any point of view in spacetime, the stream of transactions would naturally fulfill the partial order that the paper argues is so hard to get and maintain! This is modulo double-spend shenanigans, of course, but rejecting a transaction that does NOT follow the natural order is quite an easy thing to do, at least in terms of computing resources.

@Tom Zander:

I think it stems from the API you currently use, which is a block with a vector of transactions.
What you have to realize is that this is a design that can't scale, because the entire block needs to be in memory and needs to be iterated over before you can access it. Imagine doing that for a gigabyte block: you take the gigabyte, copy all the data into vectors in memory, and parse the entire block before you even start looking at the transactions.
And, equally pedantically, I disagree on this. An O(log n) binary search in a txid-ordered block for a given TXID would not require loading the full block into memory and could also be done quickly on an mmap()ed terabyte block.


I'm more of the opinion that we can pick the third option, which is to reject their entire hard fork proposal because they haven't argued for it well enough. Almost exactly like they rejected "GROUP" from @theZerg.
My personal stance is that we should move to a pick-and-choose approach, like I think Haipo implicitly suggested. If we had a good protocol for this picking and choosing in place, I think one of the data verification opcodes would have a good chance of inclusion by now, or would at least be on track for it.

I can understand, however, that people feel the risk is too high and want to abandon this change set altogether for now. As these things can be done at a later time, as long as we keep our healthy sense of 'the code isn't law' intact, the extra ossification risk of doing that seems very low.

As others have said, none of the changes seem to fix anything that is perceived to be broken by many right now.

From my discussions with the RBF supporters back during the marriage with the Core folks, I maybe take the potential for rogue miners a tad more seriously, and I believe (and yes, you can totally accuse me of 'devs gotta dev' now :D) that a data verification opcode which fulfills certain criteria (the current implementation from @theZerg does not work for this, because it prefixes an extra message marker before signing) would allow coding extra insurance outputs that would make 0-conf scamming extremely unlikely. And I think that's a worthy change to have down the road.
I should also say that I support the CHECKDATASIG opcodes without having a direct "devs gotta dev" incentive to do so - I have not been involved in their specification or development. My support comes solely from wanting to use them. You can question my incentives for wanting to use them and might see a "devs gotta dev", but I honestly think the benefits would outweigh the risks, and it will make the network as a whole more usable and also "easier for salespeople to sell". If it doesn't come in November, I'd certainly argue for keeping it prominently on the table.

You are right, however, that maybe we want to keep the CHECKDATASIGVERIFY variant switched off, as this is solely a trade-off between script code density / verification speed and using opcode space. But then I feel like one starts to bikeshed at this point, so I have to say I'd be fine with either variant (one or two opcodes) being implemented.

I also think that my preliminary ZCI proposal shows interesting limitations of the current approaches. I am sure, for example, that @theZerg can see that the extra message-signing prefix he puts into his opcode would prevent the use case for this opcode that I have foreseen. From a more detached perspective, finding, making and discussing (and testing, not yet done!) these use cases allows us to refine the set of validation rule changes / the new opcode that would be helpful, and given that I formulated this just in the last week or two, maybe waiting another few months would bring up unforeseen problems with the other proposals that should be fixed. But I also think the current ABC implementation is close to a universal, general, 'doing one thing well and right' status.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
In language design, there is a conflict between safety and versatility. This is why we have low- and high-level languages. DATASIGVERIFY was designed to be safe compared to CHECKDATASIG, and to allow the Bitcoin Script programmer not to have to be aware of some esoteric ECDSA limitations.

For example, in your DS protection system, if the user signs any message (not a doublespend tx, just any message) it can be used to take all the coins.

The required msg prefix in DATASIGVERIFY generally means that it's unlikely that signed data intended by the user for some other use can be (mis)used in a DATASIGVERIFY message.

Since your DS protection protocol relies on the adversarial reuse of someone else's signature, you are correct in saying datasigverify would not work for it.

WRT ordering, I think it's unfair to say that dependency order is a natural order that comes for free because that assumes a particular evaluation algorithm. In particular a sequential 0 to N processing order is required to get ordering validation for "free".

But it's not free when 3 to 16 other cores are sitting idle. Perhaps you guys could give us a 1-page summary of a parallel algorithm that more efficiently executes validation with dependency order enforcement than an algorithm with no order enforcement?
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Everyone:

I have just written a pretty blunt criticism (but only because I am very worried now about where we're intending to go) of the whitepaper by Joannes Vermorel, @deadalnix and others.

I made it a PDF quoting sections of the aforementioned whitepaper, and you can read it here:

https://www.docdroid.net/pvFaNUq/critique-canonical-order.pdf

I hope that it is a productive entry in the discussion about transaction order in blocks. In any case, to be happy, I would like to see the points I raised addressed, and I suspect others have a similar view.

Given that the transaction ordering change is also the largest upcoming code change in November, and in that regard the most risky, if there's one thing I'd like to convince everyone else to put the brakes on for the next HF, it is this issue of changing the transaction order.

@theZerg:
Since your DS protection protocol relies on the adversarial reuse of someone else's signature, you are correct in saying datasigverify would not work for it.
Yes, and this is why I think if it is implemented instead of the ABC variant, it should be made more flexible. As I said, I don't worry so much about script safety as I think that's a pretty specialized field already. But I guess we have to agree to disagree on this.

WRT ordering, I think it's unfair to say that dependency order is a natural order that comes for free because that assumes a particular evaluation algorithm. In particular a sequential 0 to N processing order is required to get ordering validation for "free".
Well, but we cannot change the nature of time. Transactions arrive in the order in which they can be processed.

But it's not free when 3 to 16 other cores are sitting idle. Perhaps you guys could give us a 1-page summary of a parallel algorithm that more efficiently executes validation with dependency order enforcement than an algorithm with no order enforcement?
I think we're talking past each other here and I suspect you agree with what I am really trying to say.

Of course, order enforcement is necessary in the sense of keeping track of transaction dependencies! And of course that information is necessary for PV. But how the heck does it help to order by TXID here? The TXID does not confer any information about transaction dependencies.

However, the current order in blocks clearly does. See also the above PDF that I just wrote.

@theZerg, @Tom Zander: Speaking of more efficient algorithms - I don't know whether you have talked about it, Tom, but it seems to me that an approach that does 'partial order first, then secondarily sorted by TXID' would allow, as far as I can see, for a quite beneficial validation algorithm:

Simply mark the points in the block at which transactions follow that depend on transactions earlier in the block. These points will likely coincide with the points where the lexicographic ordering is interrupted so that the partial (validation) order can take precedence.

Any transactions in the first range can now be validated in an embarrassingly parallel way, as they are all independent of each other. Same with those in the 2nd bin. And then the third bin. And so forth!

Basically, what rather seems to me to potentially make sense down the road is conveying further information about which transactions can be validated in parallel.
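A sketch of how such 'bins' could drive a validator, with hypothetical helper names and the per-transaction validation stubbed out as a callback (real code would use a thread pool rather than one thread per transaction): bins run one after another, and the transactions inside each bin are checked concurrently because none of them spends another transaction from the same bin.

```cpp
#include <cstddef>
#include <thread>
#include <vector>

struct Tx { /* simplified stand-in for a transaction */ };

// Caller-supplied: full validation of one transaction against the chain UTXO
// plus the outputs of all bins processed so far.
using ValidateFn = bool (*)(const Tx&);

// `binStarts` holds the indices where a new bin begins (the marked points).
// Within a bin no transaction spends another transaction of the same bin,
// so the inner loop is embarrassingly parallel; the bins run in sequence.
bool ValidateInBins(const std::vector<Tx>& block,
                    const std::vector<size_t>& binStarts,
                    ValidateFn validate)
{
    size_t begin = 0;
    for (size_t b = 0; b <= binStarts.size(); ++b) {
        const size_t end = (b < binStarts.size()) ? binStarts[b] : block.size();

        std::vector<char> ok(end - begin, 1);
        std::vector<std::thread> workers;
        for (size_t i = begin; i < end; ++i)
            workers.emplace_back([&, i] { ok[i - begin] = validate(block[i]) ? 1 : 0; });
        for (std::thread& t : workers)
            t.join();

        for (char result : ok)
            if (!result) return false;   // one bad transaction invalidates the block

        begin = end;                     // this bin's outputs are now usable by the next
    }
    return true;
}
```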