Gold collapsing. Bitcoin UP.

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
That's not what I am angry about, however. I am angry that I wasted my time, even though you had been informed that we were working on this. You just merged your PR. Great.

I wasted a lot of time today writing this. @Mengerian has facilitated communication with you because, apparently, there was no other way to reach out. I thank him for that.

But one little note saying that you would take this over and do your own implementation would have been very easy. If anything here is a dick move, it is this.

Our leaders are beginning to behave like politicians whose party friends are their worst enemies. Their worst enemies are no longer the leaders of the North Koreans. Instead, a destructive 'South Korean' infight: ABC, BU, nChain, Flowee - everyone against everyone. That's how Google, Microsoft, 'The JP Morg' et al. defeat the open source 'community'; Axa/BS/DCG/TPTB's minions against the Bitcoin Cash 'community', with the help of an army of anonymous cyber terrorists and their so-called 'privacy'.

But that's the society (hyper-collectivism). It has never worked since its invention 10'000 years ago.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
@Zarathustra

It makes me very sad to see Flowee in there. Not because of you, but because I made the conscious choice not to do protocol development work. It's a rat race, and it's for those who like the ego trip.

So I've been working on what I believe is actually useful, while not competing with anyone.

The reason I'm so sad is that the ABC people just had to go and break Flowee and make a protocol change that would hurt me. That's just insane.
I told them many months ago that this breaks my work, but deadalnix just called me toxic and has been ignoring me 100% (private emails and all public messages), and they continue pushing a change that has exactly one result: it breaks the client that has the most advanced parallel validation system. (Oh, the irony that this proposal comes from the tera-blocks guy.) There isn't even a single positive reason for this change. Well, unless you think killing big blocks is positive.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@deadalnix, @Mengerian:

OK, Mengerian told me that he didn't see a message from you, and this caused much of the miscommunication here. Fair enough. I'm not blaming anyone for that.

I am still a bit annoyed that there was like a 10+ hour window where other folks from the ABC team (schancel and jasonbcox) could have seen the ongoing duplicate effort (especially since I posted multiple updates).

Otherwise, given that this might just have been an honest miscommunication, I'd like to take my accusations back.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
I read this

https://medium.com/@Bitcoin_ABC/benefits-of-canonical-transaction-order-ec30ae62d955

But I couldn't understand how it successfully rebuts any of @awemany's critiques against CTOR.
Which is what I was hoping to find in such a response by ABC. This just convinced me that the real issues raised are being handwaved away. I hope they do post a proper response to that critique, not just a list of benefits that we are supposed to accept without being really convinced.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
> I read this
>
> https://medium.com/@Bitcoin_ABC/benefits-of-canonical-transaction-order-ec30ae62d955
>
> But I couldn't understand how it successfully rebuts any of @awemany's critiques against CTOR.
> Which is what I was hoping to find in such a response by ABC. This just convinced me that the real issues raised are being handwaved away. I hope they do post a proper response to that critique, not just a list of benefits that we are supposed to accept without being really convinced.
I agree; my concern with this article is that it makes statements without proving them, and then references a medium post by @jtoomim that finished by basically saying "you can get all this without canonical tx ordering, if you change the parallel algorithm slightly"

In this debate, we need to clearly separate the benefits of removing dependency ordering, from having no order, from having a canonical order. And we need to distinguish consensus (enforced) ordering versus optional ordering.

For example, graphene benefits from transaction ordering, but it does not benefit from miner enforcement of it. Full nodes could generally agree on an ordering (or a few), and then it's just a few bits to send "use that ordering". Since blocks with a known order propagate faster, there would be pressure on miners to generate blocks that follow the order. But if a different compelling ordering were conceived, we could switch to that easily.
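
To put rough numbers on "a few bits": transmitting an arbitrary ordering of n transactions costs about log2(n!) ~= n * log2(n) bits, versus a constant-size hint when sender and receiver already agree on the ordering. A back-of-the-envelope sketch (my own illustration, not from any client):

[code]
import math

def order_info_bytes(n_tx: int) -> float:
    """Bytes needed to encode an arbitrary ordering of n_tx transactions:
    log2(n!) ~= n * log2(n) bits by Stirling's approximation."""
    return n_tx * math.log2(n_tx) / 8

# ~5 MB for a block with 2 million transactions, but only a flag's worth
# of data if both sides already agree on a known ordering.
print(f"{order_info_bytes(2_000_000) / 1e6:.1f} MB")
[/code]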
 
Last edited:

jtoomim

Active Member
Jan 2, 2016
130
253
> For example, graphene benefits from transaction ordering, but it does not benefit from miner enforcement of it.
Disagree. It does benefit from miner enforcement, because enforcement eliminates the adversarial condition in which a rogue miner intentionally randomizes the ordering so that they may perform an outcast attack. It also allows the code to be simpler, as *no* order information would need to be transmitted if the block is fully sorted, rather than a condensed section of order information that applies only to the exceptions.

@awemany, Thanks for that, it shows a somewhat larger effect than I expected. I'm going to comment on reddit when I get a chance.
 
Last edited:

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Gavin's post from 2014 about IBLTs is interesting in the context of lexicographic ordering and graphene.

https://gist.github.com/gavinandresen/e20c3b5a1d4b97f79ac2#canonical-ordering-of-transactions

Also, what's wrong with Gavin's canonical ordering scheme (which requires neither a hard nor a soft fork)? I don't see why it wouldn't work, and it would in fact mean that 0 bytes of ordering information need to be transmitted.


@jtoomim: I like the name Outcast Attack best too, for the reasons you mentioned.

> Pool C doesn't know how to encode/decode the seed at all. Rather, A does it for them. All that A needs to do is rent a server in the same datacenter as C's poolservers. You send the block with your own 256 bit encoding to your own rented server, then you use the general (and inefficient, in this case) Graphene or getblock protocol to send the block to them. Since you're on the same LAN (possibly even the same physical server!), you'll have 1 to 40 Gbit/s of bandwidth with <1 ms latency, so it won't matter that the encoding is inefficient. However, if they try to forward that inefficient encoding to other pools in different datacenters or different countries, the inefficient encoding will slow the flow to the speed of black molasses, or possibly even honey.
Thanks for the explanation. That makes sense to me now.

Back to the 1 GB block with 2 x 10^6 transactions in it. Let's imagine the attackers attempted to carry out the Outcast Attack using a mixed-up order to create the delay. For the Other Miners to communicate this block to the Outcasts then requires transmitting ~5 MB of information in addition to the IBLT:

(2 x 10^6) log2(2 x 10^6) / 8 ~= 5 MB

I'm with you up to here. The other miners need to send out an extra 5 MB before the Outcasts can validate the block.

But can't the Attacker obtain the same result by packing into his lexicographically-sorted block five 1 MB transactions that he never bothers to relay? In fact, this might be more effective.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Peter R.: Very interesting. I remember this faintly. I wonder:

a) When did the meaning of "canonical" ordering get changed to what ABC is proposing now?
b) What is the "expected worst-case" complexity of Gavin's algorithm? This might be a not-so-easy question: since you cannot grind a large number of TXIDs, mempool transaction orderings that make this O(n^2) or worse by design are very likely not possible. But this should still be carefully analyzed.

That said, we're showing all the signs of collective blindness here, and we should really take a step back.

And maybe Gavin's algorithm is in the sweet spot. It is a different double ordering (one which respects the topological ordering) from the one @Tom Zander had in mind, but as you say it has the advantage of needing 0 bits of ordering info.
 

jtoomim

Active Member
Jan 2, 2016
130
253
If you want Graphene to help in the adversarial case, then you need to ensure that Graphene will work even when the block creator designed the block to not propagate well with Graphene. So the problem with Gavin's approach is that it assumes that the miner will voluntarily choose to do it, which isn't true of adversaries. That means that you need the canonical sorting to be compulsory.

Gavin's sorting can be a soft fork, whereas the lexicographic order is a hard fork. This may be a worthwhile advantage. I am not attached at all to the lexicographic order, but I think that a compulsory and canonical ordering is required, which means some sort of fork.

Gavin's sorting has a few minor performance disadvantages. Sorting by the minimum prevout hash involves checking every single input, which not only involves parsing the transactions but also requires checking a larger number of values. Requiring a transaction parse will complicate pool software like p2pool, which currently is able to treat all transactions (except the coinbase, of course) as a binary blob tagged with a fee. But p2pool can be changed or discarded. The greater number of comparisons during sorting might be significant, though.

Edit: Another performance disadvantage of (prevout, idx) sorting is that the comparisons will often depend on the last bits of the key, especially in adversarial conditions, instead of just the first 32 to 64 bits. That means about 4x-8x as many 32-bit comparisons for the same sorting. This will make GBT/CNB slower, as CNB still needs to work with a fee-based sorting. In the case of a soft-fork CBO, it will also make block verification slower. /Edit
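
To illustrate that comparison-depth point with a toy function (hypothetical, not from any codebase): a lexicographic compare only scans until the first differing word, so random txids usually resolve within the first 32-bit word, while adversarially similar keys force the compare deep into the key:

[code]
def words_compared(a: bytes, b: bytes, word_size: int = 4) -> int:
    """Number of 32-bit words a lexicographic compare must examine
    before the keys a and b first differ."""
    n = min(len(a), len(b))
    for i in range(0, n, word_size):
        if a[i:i + word_size] != b[i:i + word_size]:
            return i // word_size + 1
    return n // word_size  # keys equal up to the shorter length

# Random 32-byte hashes almost always differ in word 1; keys crafted to
# share a 28-byte prefix force 8 word comparisons -- hence the 4x-8x.
[/code]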

Gavin's sorting can also be used by SPV wallets to verify the absence of a transaction in a block or to determine where in a block the transaction should be, though again the requirement to parse each transaction instead of just inspecting the txid makes this a slower and more data-intensive process than with lex order.

Overall, I'm fine with either fork. I have a mild preference for lex order, but I could be convinced otherwise if someone showed me evidence that the best possible algorithm and implementation of validation with a non-topological sorting will be inferior to that of one with a topological sorting. However, I currently expect that in the long term, all implementations will need to use embarrassingly parallel algorithms, and for that you usually want the correctness of the validation to not depend on the order in which transactions are processed for each stage of processing. If that's the case, lex order seems to me like it could have some validation advantages because it ensures that all UTXO inserts by a worker are perfectly sequential. This may have some big benefits in sharding, and may allow the UTXO cache writes to be much faster than a standard random-access hashtable could give. It should also facilitate synchronization of workers' local UTXO caches in order to generate a global UTXO cache, should a global UTXO cache even be needed.
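
As a toy sketch of that sequential-insert claim (helper names are mine): a lexicographically sorted block can be cut into contiguous txid runs, and each worker's UTXO-cache inserts then stay inside its own sorted key range:

[code]
def shard_ranges(sorted_txids: list, n_shards: int) -> list:
    """Split a lexicographically sorted list of txids into contiguous
    runs, one per worker; within a run, inserts arrive in key order,
    so each worker's UTXO writes are append-like rather than random."""
    chunk = (len(sorted_txids) + n_shards - 1) // n_shards
    return [sorted_txids[i:i + chunk]
            for i in range(0, len(sorted_txids), chunk)]
[/code]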

(By the way, GPU-based embarrassingly parallel algorithms have a useful fence feature for synchronization which would allow the embarrassingly parallel algorithm to do a single pass through the transactions and a single fetch/decode of the transaction, where you first process the outputs (and insert into UTXO cache), then hit a fence to wait for all other threads/warps/wavefronts to complete that section, and then finish by processing the transaction's inputs. On a CPU this strategy would probably be a loss, but GPU thread context switching is very different and far more efficient.)

> But can't the Attacker obtain the same result by packing into his block five 1 MB transactions that he never bothers to relay? In fact, this might be more effective.

Yes, this is true, and it's a more difficult attack to defend against. Note: the size of the individual transactions does not matter in this case; what matters is that they (a) have a large summed size and (b) were unpublished prior to the block. I usually call this attack a secret transaction attack. This is the second type of adversarial case that I had in mind when designing Blocktorrent, which I think is probably the only protocol besides Falcon that handles a secret transaction attack well. The best known counter seems to be brute force and effective use of bandwidth (e.g. UDP+FEC, or upload-before-completing-download, or cut-through routing as Falcon likes to call it).

Secret transaction attacks necessarily involve sacrificing some fees, which is a strong disincentive with BTC. But with BCH and Graphene, the lost fees will be worth less, and possibly worthless. Secret transactions also require validation of the transactions themselves so group C will not start mining on top of the block as quickly as if it had only involved known transactions. For the same amount of extra bandwidth required, the reorder attack is somewhat worse. Given that secret transactions can use arbitrary amounts of bandwidth, the secret tx attack is probably worse overall.

That said, I think there's merit to plugging a known vulnerability even if you know of other vulnerabilities with similar effects. Maybe at some point in the future we'll figure out a good way to deal with secret transactions, like maybe getting pools/miners to enforce a policy of punishing (i.e. voluntarily delaying validation of) any block that was transmitted with more than some arbitrary threshold worth of transactions not in mempool. Or even punish blocks with any transactions not in mempool with a delay proportional to the number of bytes not in mempool, thereby encouraging miners to have a policy of waiting until transactions have been in their mempool for about 10 seconds before including them in block templates.
 
Last edited:

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
I see a path forward for canonical ordering which does not require all relevant issues to be hammered out before the November upgrade. Consider:

Ordering at Uncompressed Block Size:
optional <= 16 MB < mandatory

There are a number of resulting benefits such as:
  • allowing a lot more time for modelling, analysis and debate, which may result in a further threshold change, while the default (no further development action) is eventual activation;
  • accepting the Outcast Attack risk when its effect is small, but addressing it before the effects become large;
  • leveraging Graphene's full potential when it is needed;
  • devolving the final activation decision to the miners via a soft-limit change above 16 MB, as 32 MB is already permitted.
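
Spelled out as a sketch (purely illustrative, names are mine), the proposed rule is simply:

[code]
def ordering_mandatory(uncompressed_size_bytes: int,
                       threshold_bytes: int = 16_000_000) -> bool:
    """Canonical ordering stays optional for blocks at or below the
    threshold and becomes mandatory above it; the threshold itself
    could be revised after further modelling and analysis."""
    return uncompressed_size_bytes > threshold_bytes
[/code]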
 
Last edited:

jtoomim

Active Member
Jan 2, 2016
130
253
I think we should just delay the CBO until May 2019. We can work on getting a sample implementation or two done before then and benchmark them, then give the community a chance to choose. It's not like CBO is going to magically make Bitcoin Cash moon if the fork happens in November.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Further thoughts on ordering

(After writing all this, I wonder whether I should put the stuff below into a PDF. I am posting it as two messages, as there's a 10k character limit.)

In my head, the landscape of things that this change touches is getting more and more detailed - that is, both in the 'more complex and difficult to reason about' sense and in the 'more becomes apparent' sense:

The way I see it, there are three main steps (which can probably be subdivided further) where transaction order potentially comes into play; below is my current view of the theoretical situation. I am very interested in input on where I got stuff wrong. The three steps are:

1. Mempool ingress
2. Block building
3. Block 'transception' (transmission and reception - the network part in one)

Mempool ingress

For memory pool ingress, transactions can come in any order but will, to a large extent - simply by causality - follow a topological (natural) order. At the point of mempool ingress, I can validate the signatures of a transaction, its scripts and so forth. For lack of a better name (or knowledge of an existing one), I'll call this inner validation of transactions herein.
I can do all this inner validation completely in parallel. Each transaction stands for itself, and I don't need to check whether it follows any causal order in the big scheme of things.
I can use UTXO shards and lots of workers and all kinds of fancy trickery. Note that the transactions at this stage will largely follow topological rather than lexical order (not exactly - there will be temporary orphans, which I'll get to below), and any amount of wishing otherwise doesn't make it so.

But transactions also need outer validation in addition to inner validation. By outer validation, I mean the check that transactions actually extend the chain in a causal sense. Now, I can also use the outs-then-ins (OTI) algorithm to validate sets of transactions on mempool ingress. User "tl121" on reddit remarked that I can avoid caring about the DAG order because I don't have to care about cycles, but as far as I can see this isn't so (though, again, this might all be my stupidity; if so, please point it out).
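
For concreteness, here is a minimal sketch of OTI outer validation as I understand it (the transaction layout is my assumption): pass one inserts every output the set creates, pass two consumes every input; both passes can be parallelized, and the answer is a single bit:

[code]
def oti_extends(utxo: set, txs) -> bool:
    """Outs-then-ins: given the UTXO set of an already topologically
    valid state, does the (unordered) set `txs` extend it? Each tx is
    assumed to expose .txid, .inputs (a list of (prev_txid, index)
    outpoints) and .n_outputs."""
    created = set()
    for tx in txs:                          # pass 1: add all new outputs
        for i in range(tx.n_outputs):
            created.add((tx.txid, i))
    spent = set()
    for tx in txs:                          # pass 2: consume all inputs
        for outpoint in tx.inputs:
            if outpoint in spent:
                return False                # double spend within the set
            if outpoint not in utxo and outpoint not in created:
                return False                # missing ("dangling") input
            spent.add(outpoint)
    return True
[/code]

Note that a "no" comes with no diagnosis attached, which is exactly the granularity problem discussed next.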

From here on, I will completely ignore inner validation, as it seems to me a problem that has pretty much been solved in the theoretical sense: it is embarrassingly parallel in the most obvious way. The trouble, and all the discussion, is about outer validation.

One very important remark here is that when I use OTI on mempool ingress, I get results only at a certain granularity.

The OTI algorithm is awesome for quickly answering the question "Given a set of transactions that are in topological order (which is going to be the current chain + outer-and-inner-validated mempool), is this other set of transactions an extension of the topological order in terms of outer validation?". (I emphasized set here, @deadalnix et al., because I am very aware of the idea of looking at blocks as sets instead of lists. You really have driven that point home. But as far as I can see, it doesn't change anything about what I am saying here.)
Now, this is a question that has a simple, single-bit yes-or-no answer. This is fine for block transception (see below), but we're running into trouble with this approach at this stage.

Because, at the mempool stage, transactions might be missing, or there might be too many. That might happen in a simple sense - just a direct double spend of the current UTXO set, which is easy enough to detect with parallelized OTI as well. This case will not happen very often in practice, however, as it is discouraged by Bitcoin's main purpose and does not seem very exploitable as a resource-draining attack. So I am going to ignore it for now.

But there is also another, much more pernicious case of a "no" answer from OTI: a transaction might have missing inputs! These missing inputs might eventually become available. Or they might not!
And OTI will not magically absolve me of the need to care for these cases. If I have run OTI on a set of a certain size, it will only give me the answer "this set is extending the chain (== keeping a topological order for the union set)" or not. If it answers "yes", then I can be happy, just take it in and enjoy my day. But what about a "no"? Now I am sitting in front of a set of transactions and the knowledge that something's wrong with them. The larger the set, the less the single bit of "no" tells me. I am at a loss.

Now I can go and start to investigate, in detail, why I got a "no". I can look at an input that was consumed by OTI even though it is non-existent and say "hey, this transaction is dangling and has missing inputs". When I do that, I have to go up the chain and see whether anything else has the same property. And so forth.

But this investigation does not come for free. I need to keep track of everything in extra data structures. And this investigation amounts to checking the partial order of things. It is the application of an algorithm in itself. I'd really like to emphasize this, as I sense that this point is being papered over by the proponents of lexical ordering.

Again, this was and is my main complaint with Vermorel's paper. You cannot look at a single stage (block transception) and call a problem solved or avoidable everywhere, when it simply pops up at a different stage that you didn't bother to look at but papered over.

Now what if we go and simply say "ok, let's drop all transactions for which I got a no"?
Yes, you can do that, of course. But now you pay a price:

Your mempool becomes out of sync with other nodes' mempools unless all nodes follow the same scheme for analyzing incoming transactions. This means that they would need to apply OTI to the same sets of transactions. You have now changed mempool intake so that it essentially rejects transaction sets that are not the same as your peers'. You have to have some sort of machine consensus with your peers on how this rejection scheme works. And if you now identify these transaction sets with ...drumroll... blocks, which in deadalnix & Vermorel's reading are transaction sets that also follow some machine consensus, you can see that the snake is starting to bite its own tail: you have basically tried to push around and away the very problem that Bitcoin was invented to solve in the first place!

Now, none of this is to say that such bunching won't happen in some sense, or that it can't be beneficial. If you think about it, weak blocks are exactly a variant of this. Maybe there will also be weak blocks that are precalculated by nodes outside the "full node network" and then pushed into it. Who knows. But it all boils down to the same issue: you have to somehow analyze the ordering of transactions. Because even if you bunch them, you have to arrive at such a bunch in the first place!
 
  • Like
Reactions: Peter R

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Block building

And with that result, we smoothly move into the next stage of Bitcoin's operation, which is block building. The bunches, a.k.a. blocks, need to be built! This is where the miner selects transactions. Where the economics happens. Where stuff is rejected or accepted based on fees. A huge and, without question, in itself quite interesting playing field, where I am sure miners like @jtoomim can say much more about the nitty-gritty details than I ever will.

But one observation I'd like to point out here: a miner has to "select a transaction set" - build a block, that is - which will be accepted by other miners as valid.
And valid means that it extends the current state of the chain with a new set of transactions under the invariant of partial ordering.

Given that in the previous part, memory pool ingress, the partial ordering had to be checked anyway, it becomes relatively easy for the miner to select transactions that will form such a set. Most of the work has already been done. He can select from the partially ordered mempool and only needs to make sure that he doesn't create any gaps in the transaction chains.

I'd like to observe that he cannot simply select transactions by lexical order across some UTXO shards. They are interdependent. He has to honor the partial order criterion at this stage.

So he arrives at a block - a "transaction set" for the lexical ordering proponents - which he now wants to communicate to his peers.

Block transception

This is the area where, it seems, the "canonical" ordering arguments originated, and from which they then got unduly applied to the other main areas of Bitcoin as described above. But after more consideration, I think they only really apply to this area. [Minor remark aside: I have been accused of being ignorant of the state of the art in distributed systems, and told that I might please want to talk to a computer science professor who will explain the obviousness of all they propose to me. Apart from noting that this is an appeal to authority, I really would like to have a CS professor point out where I am going wrong in any of this. CS profs, please get to your keyboards! :) I might see the light then. But that's the point, isn't it? I might be just an ignorant BU member, but I think I am not a complete fool. I have enough technical knowledge to understand the big picture, I believe. The saying "If you cannot explain it to a five-year-old, you haven't understood it" applies, as far as I can see. Let me be this five-year-old, repeatedly asking "why is that?" and tickling those with superior knowledge enough that they come down to us and explain it in layman's terms.]

To get my block, my "transaction set", to a peer, I have to somehow encode it into a string of bytes, send it out on my network link and have the peer on the other side disassemble it and arrive at the same data structure as I did.

And with the OTI validation scheme, I can do the validation on the other end in any order that I receive the 'transactions' in, at least as far as the question of transaction validation goes.
I also have to validate the structure of the block, and here's where the trouble starts.
Besides obvious requirements that are not removed or touched by transaction ordering (having a valid merkle tree, for example, and not having too many bytes, for those 1MB-lovers), lexical ("canonical") ordering intends to change the block validity rule from requiring topological order to requiring lexical order.

The proponents argue here: by ordering the block lexicographically, I can throw away a lot of information that I would otherwise have to transmit to my peer, making block transmission more efficient.

But it should be pointed out that this is, as-is, very much a red herring. Gavin Andresen's two-step sorting scheme, as @Peter R. referenced above, would allow the same while still leaving the block topologically ordered. As far as I can see, Gavin's scheme has a bit of a specification bug (it doesn't quite work the way it is described, at least as I read it), but it can be fixed. What you arrive at then is, as far as this non-CS person can see, a simple sort along a certain axis (he proposed the minimum of (input-txid, index)), followed by a topological sort via Kahn's algorithm.

(Let me remark that I'd also like to avoid the term "canonical ordering" for yet another reason: the earliest so-called canonical ordering was, AFAIR, named that by Gavin Andresen for his initial IBLT propagation proposal. The current use is a redefinition of terms, which we should avoid IMO.)

The complexity of this is an O(n log n) lexical sorting step, followed by an O(n) (in terms of block size) topological sorting step, giving you O(n log n) overall complexity.
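
A sketch of that fixed-up two-step sort (my own reconstruction under the assumptions above, not Gavin's exact spec): a Kahn-style topological pass driven by a heap keyed on each transaction's minimum (input-txid, index), so the output is a unique linearization and the whole thing stays O(n log n):

[code]
import heapq

def gavin_canonical_order(block_txs):
    """Order block_txs by minimum (prev_txid, index) while keeping
    parents before children. Each tx is assumed to expose .txid and
    .inputs as a list of (prev_txid, index) tuples."""
    key = {tx.txid: min(tx.inputs) for tx in block_txs}
    by_id = {tx.txid: tx for tx in block_txs}

    # Intra-block dependency graph.
    children = {tx.txid: [] for tx in block_txs}
    indegree = {tx.txid: 0 for tx in block_txs}
    for tx in block_txs:
        for prev_txid, _ in tx.inputs:
            if prev_txid in by_id:          # parent is in the same block
                children[prev_txid].append(tx.txid)
                indegree[tx.txid] += 1

    # Kahn's algorithm with a heap: always emit the smallest-keyed ready
    # transaction, which makes the resulting order unique.
    ready = [(key[t], t) for t, d in indegree.items() if d == 0]
    heapq.heapify(ready)
    out = []
    while ready:
        _, txid = heapq.heappop(ready)
        out.append(by_id[txid])
        for child in children[txid]:
            indegree[child] -= 1
            if indegree[child] == 0:
                heapq.heappush(ready, (key[child], child))
    return out
[/code]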

OK. Now here it becomes murky and, given the lack of data, many of us argue from gut feeling. Let me still light up the following area as best I can, as I believe there is analysis and categorization to be done that helps form a clearer view.

So, first, it should be noted that Gavin's approach will destroy the existing partial order (with the sorting step) and then rebuild it with Kahn's algorithm (see below for more thoughts on that).

On the receiving side, with Gavin's proposal, I can use (pretty much) any scheme to validate. I can use the old, sequential validation (as long as my H/W can keep up). Or I can use OTI. Or other schemes. I cannot use a scheme that would depend on lexical order, but I haven't seen such a scheme. If you have an efficient one, point it out!
I have demonstrated that there can be, in principle, advantages to validation by OTI when the topological order is kept: https://github.com/awemany/valorder

I'd also like to remark that we have an interesting situation here: the receiving node could of course recreate the topological order as well. From any order. So I could also go and use the proposed lexical ordering and then re-sort blocks into topological order.

To view this in a different light, I think we basically have to define a "cut point", which is the amount of ordering (or lack thereof) we apply when we create a block. And there, a lot of scenarios can be imagined; here they are with the trade-offs that I can see, as they have been discussed
(SEQ meaning variants of sequential validation):

sort lexical -> [BLOCK] -> OTI
(Work on the sender, efficient transmission, potentially less efficient validation, only OTI. Needs fork.)

sort lexical -> [BLOCK] -> re-sort topological -> OTI, SEQ
(Work on the sender, efficient transmission, work on the receiver, unlikely to be a competitive contender. Needs fork.)
This is the scenario that I meant when I said we might start building 'adapters' from lexical to topological ordering and vice versa into Bitcoin.

keep topological -> [BLOCK] -> OTI, SEQ
(No work on the sender, any validation possible on the receiver, less efficient transmission. The current situation. Needs no fork.)

sort into two sets (Tom Zander) -> [BLOCK] -> OTI, SEQ
(Work on the sender, any validation scheme possible on the receiver, relatively efficient transmission. Needs no fork but changes in the mining and network code.)

sort Gavin-canonical -> [BLOCK] -> OTI, SEQ
(Work on the sender, any validation scheme on the receiver, efficient transmission. Needs no fork but changes in the mining and network code.)

And for the unknown unknowns (at least to the best of my knowledge):
sort ??? -> [BLOCK] -> OTI, SEQ
(Maybe less work on the sender, any validation scheme on the receiver, efficient transmission, needs no fork but changes in the mining and network code.)

To explain this last part and why I am proposing, apart from status quo arguments, to keep it all as it is for now:

Yes, Gavin's algorithm re-sorts everything. This makes it the same complexity as lexical ordering in big-O notation, which is O(n log n). However, it realistically adds another step (topological sorting), which likely makes it slower on the sending side compared to pure lexical.
It might, on the receiving side, regain these losses by yielding faster validation. It throws away information, only to regain it.

What I mean by 'sort ???' is that I feel there are unknown unknowns that might be explored and which could connect the left and the right side of [BLOCK] in the best manner.

This is an area, however, where I really think input from computer science experts is needed.

One of my questions here would be: is it possible to post-sort, that is, to efficiently update a partially ordered set into a unique representation?

I mean this in the sense of: The set is already partially ordered. Can this be exploited? Does a sorting algorithm exist (or can one be developed) that uses the existing information and throws away exactly that information (the potential reorderings) that would be necessary for block transmission without extra order encoding bits?

I think this is one of the key questions to be answered here.

EDIT: The above ideas on what needs to happen there are actually inaccurate and I think they need to be more detailed - there's going to be a reconstruction sorting that has to happen in any case and then there's also the question of further sorting required (or not) for the merkle tree. A lot of questions remain.
 
Last edited:
  • Like
Reactions: Peter R

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
Anybody know if Electron Cash and Electrum wallets conflict when opened on the same Ubuntu OS? I remember something about Electron Cash trying to pull in the Electrum wallet without permission when Jonald first published it. Or can they both be opened and used independently?
 
Feb 27, 2018
30
94
@cypherdoc There should be no issues at this point. From what I can see, nothing in the Electron Cash source references the ".electrum" config directory, so it doesn't seem possible that there could be any interaction.
 
  • Like
Reactions: AdrianX

Tom Zander

Active Member
Jun 2, 2016
208
455
> If you want Graphene to help in the adversarial case, then you need to ensure that Graphene will work even when the block creator designed the block to not propagate well with Graphene.
You have not understood proof of work, or Bitcoin in general, if you think a slow-to-propagate block is an attack on Bitcoin.

That's like saying a miner having a slow network connection, or a slow computer, is an attack on Bitcoin.
> I emphasized set here because I am very aware of the idea of looking at blocks as sets instead of lists.
In reality they are neither a list nor a set.

Transactions as a whole (across all blocks) form a DAG - a directed acyclic graph.

Transactions, when looking at only one block, are for the most part a set, because within the same block most of them have no parents or children. They can be reordered, split, etc. without any effect whatsoever on validity.
A small subset of transactions in a block (and this will always be small, because we prioritize by days-destroyed) still forms small DAGs. A set of DAGs.

So in a block we have a collection of DAGs, most of those being a DAG of 1 item.
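
A small sketch of that view (helper names are mine): grouping a block's transactions into connected components of the intra-block spend graph, where most components come out as singletons:

[code]
def dependency_components(txs):
    """Union-find over the intra-block spend graph: returns the block's
    transactions grouped into connected components ('a set of DAGs').
    Each tx is assumed to expose .txid and .inputs as (prev_txid, index)
    tuples."""
    parent = {tx.txid: tx.txid for tx in txs}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    ids = set(parent)
    for tx in txs:
        for prev_txid, _ in tx.inputs:
            if prev_txid in ids:            # dependency inside this block
                parent[find(tx.txid)] = find(prev_txid)

    groups = {}
    for tx in txs:
        groups.setdefault(find(tx.txid), []).append(tx)
    return list(groups.values())
[/code]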

And to reiterate this point: Vermorel's paper suggests that it's too expensive to sort while keeping the topological ordering. This is a complete red herring, as any mempool will already have all the info, and all the paths have been established well before any code starts building a block.


> I'd also like to avoid the term "canonical ordering" for yet another reason. The earliest so-called canonical ordering was, AFAIR, named that by Gavin Andresen for his initial IBLT propagation proposal. The current use is a redefinition of terms, which we should avoid IMO.
Definitely agreed; various solutions are out there that apply a canonical ordering without destroying the topological ordering. A new proposal that throws away info we currently have is not the same proposal.
 
  • Like
Reactions: awemany

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
> @cypherdoc There should be no issues at this point. From what I can see, nothing in the Electron Cash source references the ".electrum" config directory, so it doesn't seem possible that there could be any interaction.
Unfortunately there is. I had them both working independently initially, but as soon as both were installed, they both stopped. I had to separate them into two separate VMs.
 
  • Like
Reactions: AdrianX