Gold collapsing. Bitcoin UP.

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
The work to be done by Graphene is O(mempool), whereas the work to be done by CB is O(blocksize).
Assuming BCH stays an 'eat through everything' incentivized time-stamping system, I do not see much of a difference between O(blocksize) and O(mempool) in the average case, no?
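
A minimal sketch of where the receiver's work lands in each scheme (toy data structures standing in for real mempool indexes, not the actual wire protocols): CB does one lookup per block transaction, while Graphene has to test every mempool transaction against the sender's Bloom filter. If the mempool is regularly eaten down to roughly the next block's contents, the two are indeed close.

```python
# Illustrative only: `mempool_by_short_id` and `bloom_filter` are
# hypothetical stand-ins (a dict and a set) for the real structures.

def decode_compact_block(short_ids, mempool_by_short_id):
    # O(blocksize): one lookup per transaction in the block
    return [mempool_by_short_id.get(sid) for sid in short_ids]

def decode_graphene(bloom_filter, mempool_txids):
    # O(mempool): every mempool txid must be tested against the filter
    return [txid for txid in mempool_txids if txid in bloom_filter]
```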

The blockchain today is causal: you can start with the genesis transaction and move through the blockchain transaction-by-transaction, validating the entire history by moving through the blockchain in only the forward direction. If A sends a coin to B and B sends that coin to C, the transaction from A to B always appears before the transaction from B to C. If causal ordering is removed, then the transaction from B to C could appear before the transaction from A to B. Validating can no longer be done purely in the forward direction.

Admittedly, I cannot think of why this is necessarily bad, so maybe I'll come around to support this proposal, but I do see it as a huge change to the very structure of the blockchain, and so I think we should proceed very cautiously.
Well said, seconded. This is exactly my thinking. And breaking causality would mean messing with the details of subchains/weakblocks implementations, for example. Right now, it looks like those would be easier with the current ordering. Maybe there will be a future where it becomes clear that it is the other way around; I don't see that yet.

Don't get me wrong: we might end up doing this (breaking causality), but for the time being I would really, really like to be cautious here. I don't think it is particularly pressing.
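
To make concrete what the quoted "forward direction" property buys us, here is a minimal sketch (toy transactions, not real Bitcoin ones) of single-pass validation against a running UTXO set; with causal ordering one pass suffices, and a child appearing before its parent fails immediately:

```python
# Toy model: a tx is (txid, spent_outpoints, n_outputs); an outpoint is
# (txid, output_index); `utxos` is the set of currently spendable outpoints.

def validate_forward(txs, utxos):
    for txid, inputs, n_outputs in txs:
        for outpoint in inputs:
            if outpoint not in utxos:
                return False  # parent output not seen yet
            utxos.remove(outpoint)  # spend it
        utxos.update((txid, i) for i in range(n_outputs))  # create outputs
    return True

coinbase = {("A", 0)}
a_to_b = ("B", [("A", 0)], 1)
b_to_c = ("C", [("B", 0)], 1)
assert validate_forward([a_to_b, b_to_c], set(coinbase))      # causal order: valid
assert not validate_forward([b_to_c, a_to_b], set(coinbase))  # child first: rejected
```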

If I were to rank the recently debated BCH proposals by how worried I am about them, it would be this, in ascending order:

- 32MB limit now
- increase OP_RETURN limit
- activate OP_XOR/OP_AND/OP_DIV/...
- activate OP_GROUP
- change transaction ordering

With the last two being close to a tie.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@hodl: The only reason it is not a complete no-brainer for me is that 32MB is also the network message size limit in the code. That might trigger dormant bugs: you would basically be exercising code right at a limit that has never been reached otherwise. E.g. 31MB instead of 32MB would not do that.

But yeah, I guess enough testing will convince me otherwise on that front.

Note that we also might need a new network message format for even bigger blocks.
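
For illustration, the headroom in question (assuming, as I recall, that the Satoshi codebase caps any network message at MAX_SIZE = 0x02000000 bytes in serialize.h):

```python
# Back-of-envelope check of how close a 32MB block sits to the message cap.
MAX_SIZE = 0x02000000  # 33,554,432 bytes (32 MiB) message size cap
for block_limit in (31_000_000, 32_000_000):
    headroom = MAX_SIZE - block_limit
    print(f"{block_limit:,} byte block -> {headroom:,} bytes of headroom "
          f"({100 * headroom / MAX_SIZE:.1f}% of the cap)")
```

A 32,000,000-byte block still fits, but with under 5% of the cap to spare for serialization overhead, which is exactly the 'close to the limit' territory I worry about.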
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Is this markedly different than 'cash first, other things later'?
Absolutely.

To be clear, I'm not against other uses, I just think that continuing to build on the value proposition that brought Bitcoin to where it is over the past 8 years is the highest priority. The other stuff is not so urgent that we can't take time to do it right.

On the other hand, deadalnix's disposition continues to concern me. I think he got some "political capital" from the fork and spent a bunch of it on a messy DAA and changing the address format. I think he will find it harder to maverick things going forward and probably needs to adjust his expectations.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
So with Calvin Ayre saying he doesn't want OP_GROUP yet, it looks to me like we're indeed inching towards a future where the miners decide what will go on.

I would be fine if all of the above proposals become active at 75% miner support for a difficulty period. I trust the incentives to do the right thing :)

@hodl: As you don't trust signalling: Can you see Calvin Ayre fake-signalling pro OP_GROUP even though he (for now) dislikes it? I really can't.

I suspect the miners want to stay a bit away from being seen as 'responsible for the chain', and a clear signal from a miner, as now, is the exception. But if the devs propose the changes and the miners merely ratify them at the 75% level, I think that would make that responsibility a lot more moderated.
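
For concreteness, a hedged sketch of the activation rule I have in mind, assuming BIP 9 style version-bit signalling and lock-in at 75% of one 2016-block difficulty period (the threshold and mechanism here are my assumptions, not a written spec):

```python
DIFFICULTY_PERIOD = 2016
THRESHOLD = int(DIFFICULTY_PERIOD * 0.75)  # 1512 of 2016 blocks

def feature_locks_in(block_versions, bit):
    # Count blocks in the period whose version field sets the signalling bit.
    signalling = sum(1 for v in block_versions if v & (1 << bit))
    return signalling >= THRESHOLD

# e.g. 1600 of 2016 blocks signalling on bit 1:
versions = [0x20000002] * 1600 + [0x20000000] * 416
print(feature_locks_in(versions, 1))  # True, since 1600 >= 1512
```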
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Admittedly, I cannot think of why this is necessarily bad, so maybe I'll come around to support this proposal, but I do see it as a huge change to the very structure of the blockchain, and so I think we should proceed very cautiously.
I think that should likely be regarded as an accident. Would any miner reject a block that was not structured that way? I.e., is this actually a consensus rule? If it is just an accident and not a consensus rule, it should be regarded as disposable.
Don't get me wrong. We might end up doing this (breaking this causality), but I think for the meanwhile, I really, really like to be cautious here. I don't think it is particularly pressing.
I think if causality should be a thing, it should be a thing: make it a consensus rule. If not, it's not, and it should not be expected. If some software comes to expect it and other software does not, that could lead to accidental hard forks down the road. If subchains/weakblocks require it, then let's make it a rule when subchains/weakblocks are implemented.

Assuming this ordering is like assuming file paths with no spaces in your scripts. It works until it doesn't.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I think that should likely be regarded as an accident. Would any miner reject a block that was not structured that way? I.e. is this actually a consensus rule?. If it is just an accident and not a consensus rule, it should be regarded as disposable.
A miner would reject a block where Carol receives a coin from Bob before Bob received it from Alice, because those are the rules of Bitcoin. Was this an "accident"? I highly doubt it.

I think if causality should be a thing, it should be a thing. Make it a consensus rule. If not, it's not and should not be expected.
It already is a thing. It is a consensus rule. The debate is about whether we should remove that rule.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
A miner would reject a block where Carol receives a coin from Bob before Bob received it from Alice, because those are the rules for Bitcoin. Was this an "accident?" I highly doubt it.
I mean within the same block (which seems to be the issue here). Obviously transactions referencing those in previous blocks would not be affected.

It already is a thing. It is a consensus rule. The debate is about whether we should remove that rule.
OK, if that is the case, fair enough, I guess. I always had the vague idea that blocks were "atomic" and that ordering within them was not critical, but if that's not the case, then it's not the case.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Nothing is breaking causality. Causality is guaranteed by having the SHA256 of the tx you spend from in the input.
When I say "the blockchain is causal" I mean that if you parse the blockchain in the forward direction, C cannot receive a coin from B before B received it from A.

Your proposal would change that. It would allow C to receive a coin from B before B received it from A, provided B does indeed receive it from A in that same "block." In other words, your proposal would weave the concept of "discrete blocks" more tightly into what was historically just a chronological (or causal) list of transactions.
I mean within the same block (which seems to be the issue here). Obviously transactions referencing those in previous blocks would not be affected.
Yes, exactly. Removing what I call the "causality requirement" changes the abstraction boundaries in a big way. Right now, B sending to C before A sends to B would be invalid. With the new rules, it would be valid if both occurred within the same block, and invalid if they occurred over two blocks. The notion of discrete blocks becomes more intertwined with the chain of transactions.

Does that matter? I'm not sure. Is it a significant change? Yes, definitely!
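
A minimal sketch of the rule under discussion, using the same kind of toy transactions as above: with the current consensus rule, position within the block matters; under the proposed change, only set membership would.

```python
def block_respects_causal_order(block_txs):
    """block_txs: ordered list of (txid, [(parent_txid, vout), ...])."""
    block_txids = {txid for txid, _ in block_txs}
    seen = set()
    for txid, inputs in block_txs:
        for parent_txid, _ in inputs:
            # Only same-block parents are position-sensitive; parents in
            # earlier blocks are already in the UTXO set.
            if parent_txid in block_txids and parent_txid not in seen:
                return False  # child appears before its same-block parent
        seen.add(txid)
    return True

a_to_b = ("B", [("A", 0)])  # parent "A" confirmed in an earlier block
b_to_c = ("C", [("B", 0)])
assert block_respects_causal_order([a_to_b, b_to_c])
assert not block_respects_causal_order([b_to_c, a_to_b])  # valid only under the new rules
```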
 

Tom Zander

Active Member
Jun 2, 2016
208
455
Just want to point out this part of the whitepaper:

In this paper, we propose a solution to the double-spending problem using a peer-to-peer distributed timestamp server to generate computational proof of the chronological order of transactions.

I fully agree with @Peter R here about ordering being important. The whitepaper even mentions that this was intended.

Edit: what this says in plain English is that the goal is to create a "chronological order" of transactions, and the means with which to do so is the "distributed timestamp server". Knowing that this was the goal from the beginning makes a suggestion to change it a rather big deal.

And, really, changing something so fundamental without having done any real optimisation and parallelisation work yet is not sane engineering practice.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Tom Zander: In fairness, this can well be interpreted as the mined blocks giving you that chronological ordering.

Note that there isn't even a true chronological ordering of transactions in the global network!

However, "chronological" ordering within blocks in the sense of dependencies is kind of a natural extension of this requirement. Which I think, should be kept, unless we really find it to be a roadblock.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Don't take this as an argument, but I'll just note that the whitepaper doesn't specify the order of transactions within blocks; it refers to the system as a "timestamp server". If transaction A and transaction B both occurred in block X, then on a plain reading they occurred simultaneously. Order within the block is not an issue.

Not that I'm saying that this is how things should be implemented, just providing some perspective. It may turn out that for some weird to-be-determined utilization, ordering within a block would not make logical sense (some kind of circular dependency for example - don't ask me to provide an example :) )

Personally, I'd be happy if a transaction B that depended on a transaction A could not occur in the same block as A (I have said this before and believe it even more now). I think that would simplify things a lot and would give the required causality while also preserving the simple timestamp-server functionality, but I acknowledge that that is simply not how things are.
Note that there isn't even a true chronological ordering of transactions in the global network!
Well, that is why they are put into blocks: that provides the chronological ordering. We apparently also extend that to the ordering within blocks. I would suggest that the way it is explained in the whitepaper makes this a function of block creation, not of the ordering within the blocks. (I don't appeal to the whitepaper as an authority on this, though. Existing practice, consensus rules, and future implementation changes take priority.)
Here is the text for reference.

The solution we propose begins with a timestamp server. A timestamp server works by taking a hash of a block of items to be timestamped and widely publishing the hash, such as in a newspaper or Usenet post [2-5]. The timestamp proves that the data must have existed at the time, obviously, in order to get into the hash. Each timestamp includes the previous timestamp in its hash, forming a chain, with each additional timestamp reinforcing the ones before it.
I think the diagram is quite important too. Let me see if I can replicate it.


Related question: how often does a transaction actually depend on another transaction within the same block? It must surely be vanishingly rare. Is there an overriding use case for it? Is there an argument, perhaps, for removing it as a valid thing, and thus enabling multiple ordering schemes without breaking causality?
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Hmm. Which then leads to the question of what these ordering schemes are for. Perhaps it is worth it for those; is it worth it for chronological ordering? It seems the computational complexity of having to reconcile two dependent unconfirmed transactions is probably not worthwhile. In the simple case the question is "does this transaction depend on outputs in the UTXO set?", but now we have to ask "does this transaction depend on the UTXO set, or also on this pseudo-UTXO set of unconfirmed transactions that I might be including in this block? And what if, for some reason, I decide not to include that first transaction?"

It seems like removing that would simplify computational complexity and open up other functionality.
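
A sketch of the bookkeeping this implies for a block-template builder (my toy model, not any client's actual code): candidates are checked against the confirmed UTXO set plus an overlay of outputs created by transactions already accepted into the template, and dropping a parent would mean evicting its descendants from that overlay.

```python
# Toy model: a candidate is (txid, spent_outpoints, n_outputs).
def build_template(candidates, utxo_set):
    template, overlay = [], set()  # overlay = the "pseudo-UTXO set"
    for txid, inputs, n_outputs in candidates:
        if all(op in utxo_set or op in overlay for op in inputs):
            template.append(txid)
            overlay.update((txid, i) for i in range(n_outputs))
        # else: parent unavailable; a real builder would requeue or skip,
        # and evicting a parent later would invalidate entries like these
    return template
```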

I have to believe that there was some reason it was done this way but I can't think of one and I'm not hearing one either :)

Later, I may run some stats and see how often this (same block dependent transactions) is actually used.
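
One way to run those stats, assuming a local node whose getblock RPC supports verbosity 2 (full transaction objects, as in Bitcoin Core and Bitcoin ABC): count transactions whose inputs reference a txid appearing earlier in the same block.

```python
import json, subprocess

def getblockhash(height):
    out = subprocess.check_output(["bitcoin-cli", "getblockhash", str(height)])
    return out.decode().strip()

def getblock(blockhash):
    return json.loads(subprocess.check_output(["bitcoin-cli", "getblock", blockhash, "2"]))

def same_block_dependents(height):
    block = getblock(getblockhash(height))
    seen, dependents = set(), 0
    for tx in block["tx"]:
        # Coinbase inputs have no "txid" key, so .get() skips them cleanly.
        if any(vin.get("txid") in seen for vin in tx["vin"]):
            dependents += 1
        seen.add(tx["txid"])
    return dependents, len(block["tx"])

print(same_block_dependents(510000))  # (dependent txs, total txs)
```

(Under the current ordering rule a same-block parent always precedes its child, so checking only earlier txids catches every case.)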
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Hmm. Which then leads to the question of what these ordering schemes are for. Perhaps it is worth it for those; is it worth it for chronological ordering? It seems the computational complexity of having to reconcile two dependent unconfirmed transactions is probably not worthwhile.
But we have that functionality right now. Sure, it is more complex than when everything is available and up to date. However, it also only needs to be used when such a transaction comes in! Whereas if you define a total transaction order, you have to order all transactions ...
 

deadalnix

Active Member
Sep 18, 2016
115
196
@Richy_T : But an ordering scheme means you have to do the ordering. All orderings take O(n log n) time at the least.
You assume that you don't have to do that, but you absolutely do. This is exactly why the Graphene block is three times bigger with ordering: you have O(n log n) bits of information to send through, and then you need to do O(n log n) work to reorder the transactions into the order they were in the block. Canonical ordering or not, you have to put the transactions in some given order, and you have to do the work.

But if the ordering is canonical, then it's very easy to keep things ordered during the whole process, which is much less expensive and, more importantly, not on the critical path.
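
The information-theoretic version of that point, as I read it: transmitting an arbitrary order over n transactions costs about log2(n!) ~ n log2(n) bits, while a canonical order (say, sorted by txid) costs nothing, because the receiver can reconstruct it on its own.

```python
import math

def order_bits(n):
    # log2(n!) via the log-gamma function: ln(n!) / ln(2)
    return math.lgamma(n + 1) / math.log(2)

for n in (1_000, 10_000, 100_000):
    print(f"{n:>7} txs: ~{order_bits(n) / 8 / 1024:.0f} KiB just to encode the order")
```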
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
OK, I figure it would be useful for CPFP, where the miner would want the child transaction to be included in the same block as the parent. I always thought CPFP was a bit hacky, though.

I think I'll have to do some looking into the history of this whole thing.
All I'm saying on the alternative ordering schemes is that we should evaluate them on their merits, and that in-block causality may turn out to be an extremely weak argument against them.
I think deadalnix is right, though. If your blocks are sorted by TXID and you have the TXID of a transaction's input, it becomes pretty lightweight to look it up. If you are not doing that very often, it becomes even less of an objection.

Such sorting can also be done fairly easily as the transactions come in, so computation time is not an issue for either scheme.
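
A sketch of both halves of that claim (toy txids; a real node would use a proper ordered index rather than a Python list):

```python
from bisect import bisect_left, insort

def find_tx(sorted_txids, txid):
    i = bisect_left(sorted_txids, txid)  # binary search: O(log n)
    return i if i < len(sorted_txids) and sorted_txids[i] == txid else None

block = []
for txid in ("c3...", "05...", "9a..."):  # placeholder txids
    insort(block, txid)  # keep the block sorted as txs arrive

print(block)                    # ['05...', '9a...', 'c3...']
print(find_tx(block, "9a..."))  # 1
```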