Gold collapsing. Bitcoin UP.

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
Which claim is true?

@Tom Zander on nChain SV: "one dev having a reputation of coding isn't enough".

.... while the nChain CEO writes:

"Developer Team

The Bitcoin SV team has been constructed with a view to realizing industry best practices, in order to deliver and maintain a full node implementation with an unprecedented commitment to quality assurance and stability.


The Lead Developer will be Daniel Connolly, who joined nChain after 20 years in enterprise systems and IT positions for United Nations agencies. Daniel contributed anonymously to Bitcoin for several years, and has contributed to the Electron Cash project and is a primary contributor to the BitcoinJ-Cash project. nChain’s Steve Shadders will act as Technical Director, providing project oversight and liaison with sponsors. Steve began contributing to Bitcoin in 2011, authoring one of the first open source mining pool engines and was one of the earliest contributors to BitcoinJ. Additionally, the team will begin with a pool of 5 C++ developers with over 95 collective years of development experience, a part time Dev Ops resource, a full time QA engineer and range of business support personnel."
 
Last edited:

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
This is weird. I thought the motivation for canonical order was to optimize the effect of Graphene. Changing motives makes me suspect that the real motive is something else.
Personally I've never thought Graphene was the primary motivation. To quote myself from an earlier post:

The purpose of moving to Canonical Order is to design a system that can handle massive scaling in the future. The fact that it may help with block propagation (eg. Graphene), or block validation, are just surface manifestations of solid fundamental design.
The real reason to move to Canonical Order is that it's a better design for the data structure. The different benefits listed are just symptoms that flow from good design.
 

Dusty

Active Member
Mar 14, 2016
362
1,172
I tried to discuss CTOR with Shammah today, asking why we need to change consensus rules and add new requirements without hard evidence (data, benchmarks, etc.) being provided.
He would just point me to his Medium article, saying that "everything was explained there".
I brought up Tom's article, and he said he won't read it because Tom has always been critical of everything, and that "Given that most of these upgrades have been a great success, it should go without saying there is a particular motive for his articles".

Well, I think I won't upgrade my node to ABC 0.18 this time, and I hope there will be more discussion, because change for the sake of change is not good in my opinion.

Edit: I deleted a paragraph with a controversial point that I had understood differently from the intended meaning. I'm still working through the details to wrap my head around the CTOR pros and cons.
 
Last edited:

deadalnix

Active Member
Sep 18, 2016
115
196
This is weird. I thought the motivation for canonical order was to optimize the effect of Graphene. Changing motives makes me suspect that the real motive is something else.
The motivation is to commit to a set rather than a list. A list differs from a set because it has ordering information. This information needs to be transmitted (hence improvement to block propagation) and validated (a process which currently doesn't parallelize very well).
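To illustrate the point, here is a minimal sketch of my own (not any team's code, and ignoring the coinbase special case): with a canonical rule such as sorting by txid, the block order is a pure function of the transaction set, and checking the ordering reduces to independent pairwise comparisons that parallelize trivially.

Code:

from hashlib import sha256

def txid(raw_tx: bytes) -> bytes:
    # Double SHA-256, as Bitcoin uses for transaction ids.
    return sha256(sha256(raw_tx).digest()).digest()

def canonical_order(raw_txs):
    # Every node derives the same order from the same *set* of transactions.
    return sorted(raw_txs, key=txid)

def order_is_canonical(raw_txs):
    # Each adjacent pair is checked independently, so this parallelizes trivially.
    ids = [txid(t) for t in raw_txs]
    return all(ids[i] < ids[i + 1] for i in range(len(ids) - 1))

txs = [b"tx-a", b"tx-b", b"tx-c"]   # stand-ins for serialized transactions
block = canonical_order(txs)
assert order_is_canonical(block)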
 

Dusty

Active Member
Mar 14, 2016
362
1,172
Ok, it seems like my opinion about CTOR was a bit rushed, mostly due to the way Shammah replied to me initially.
I studied in detail the document https://blog.vermorel.com/pdf/canonical-tx-ordering-2018-06-12.pdf and I finally wrapped my head around CTOR: I'm in favour of implementing it.

It certainly has many advantages; the only notable con is that it needs a hard fork to become mandatory.

Regarding the advantages, like proof of exclusion for example, I've yet to understand how to correctly handle pre- and post-fork scenarios, though.
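To make the proof-of-exclusion point concrete, a minimal sketch (my own illustration with hypothetical txids, not code from any implementation): once the txids in a block are sorted, showing the two adjacent leaves that bracket a missing txid, together with their Merkle branches, proves the transaction is not in the block.

Code:

import bisect

def exclusion_witness(sorted_txids, target):
    # Prover side: return the adjacent pair of leaves bracketing `target`,
    # or None if the transaction is actually in the block.
    i = bisect.bisect_left(sorted_txids, target)
    if i < len(sorted_txids) and sorted_txids[i] == target:
        return None
    left = sorted_txids[i - 1] if i > 0 else None
    right = sorted_txids[i] if i < len(sorted_txids) else None
    return (left, right)   # a real proof also carries Merkle branches for both leaves

def check_exclusion(left, right, target):
    # Verifier side: the target must fall strictly between the two adjacent leaves.
    return (left is None or left < target) and (right is None or target < right)

block_txids = sorted(["0a...", "3f...", "9c...", "e1..."])   # hypothetical txids
w = exclusion_witness(block_txids, "7b...")
print(w, check_exclusion(w[0], w[1], "7b..."))   # ('3f...', '9c...') True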
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
This is weird. I thought the motivation for canonical order was to optimize the effect of Graphene. Changing motives makes me suspect that the real motive is something else.
As you point out, the reason for this change keeps changing. If this were an obvious decision, there would be one good reason for the change that everyone could understand (they might not agree, but they would understand).

I know deadalnix just said the reason is to change blocks to sets rather than lists, but I don't find this convincing. Transactions are not sets: they have causal dependencies. I don't think we should pretend transactions are something that they're not.

Right now, any contiguous chunk of the blockchain forms a topologically-ordered block of transactions. It doesn't matter whether you take that chunk on block boundaries or not: the same property remains at all scales.

If we change to lexical ordering, this property is lost.

My opinion remains that changing the structure of the blockchain via lexical ordering will hurt more than help.
 

Dusty

Active Member
Mar 14, 2016
362
1,172
If we change to lexical ordering, this property is lost.
And what do we lose with it?
We have, and always will have, the blocks, so we can reconstruct any valid topologically-ordered sequence we need at any point in time, should we need it.

My opinion remains that changing the structure of the blockchain via lexical ordering will hurt more than help.
Ok, but just as you request examples of how a change gives an advantage, you should specify exactly in what way it "will hurt".
For now, you have only asserted that "If we change to lexical ordering, this property is lost", without providing evidence that this will hurt anything.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
The motivation is to commit to a set rather than a list. A list differs from a set because it has ordering information. This information needs to be transmitted (hence improvement to block propagation) and validated (a process which currently doesn't parallelize very well).
I think the improvement to block propagation is not certain, and it has not been tested. Regarding parallelization in general, I believe @theZerg did parallelize mempool admission without CTOR during the Gigablock Testnet Initiative.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
I know deadalnix just said the reason is to change blocks to sets rather than lists, but I don't find this convincing. Transactions are not sets: they have causal dependencies. I don't think we should pretend transactions are something that they're not.
The information about the causal dependencies between transactions is contained in the inputs and outputs in the transactions.

It doesn't make any sense to also constrain the structure of blocks (which have a different function) with that information.
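A minimal sketch of that point (my own illustration with a hypothetical, simplified transaction structure): since each input names the txid it spends, a validator handed an unordered set of transactions can rebuild a causal (topological) order itself, so the block does not need to encode one.

Code:

from collections import namedtuple

# Hypothetical, simplified transaction: real inputs reference (txid, vout) outpoints.
Tx = namedtuple("Tx", ["txid", "spent_txids"])

def topological_order(txs):
    # Rebuild a parents-before-children order purely from the inputs.
    by_id = {t.txid: t for t in txs}
    seen, order = set(), []

    def visit(t):
        if t.txid in seen:
            return
        seen.add(t.txid)
        for parent in t.spent_txids:
            if parent in by_id:          # inputs from earlier blocks impose no ordering here
                visit(by_id[parent])
        order.append(t)

    for t in txs:
        visit(t)
    return order

txs = {Tx("c", ("b",)), Tx("a", ()), Tx("b", ("a",))}   # handed over as an unordered set
print([t.txid for t in topological_order(txs)])          # ['a', 'b', 'c']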
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@BldSwtTrs
My best guess at how things will turn out: no agreement in Bangkok, maybe a short miner war on November 15, no persistent split, and nChain will get their will in the end.

But it's very uncertain.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Not sure if I got this right, but I believe I read somewhere that ...

lexical sorting will speed up the processing of Wormhole transactions, and that Bitcoin ABC has a vested interest in Wormhole.
Is this true?

I find it odd that lexical sorting (aka CTOR?) is being pushed while there is seemingly no hard evidence that it will improve layer-1 scaling (BCH). It's hard to get behind a protocol change that is only a theory, and CTOR and its benefits seem to be just that: a theory! Show me some hard numbers about how CTOR will improve X by a factor of Y, let that proof be peer reviewed and confirmed, and only then could I get behind such a change. Without that, all I see are two unproven theories: that CTOR has benefits, and that CTOR's benefits are moot. Both sides seem to base their statements purely on thought experiments, with zero hard evidence either way.

Wormhole + CTOR is starting to look A LOT like Lightning + SegWit.
Only this time the "evil devs" have a ton of hashpower too!
 
Last edited:

_bc

Member
Mar 17, 2017
33
130
Does anyone think the impending Stress Test will help? I'm hopeful.

IF enough continuous transactions are generated to significantly enlarge the mempool, I would hope miners would feel compelled to generate large blocks - to avoid Core's calls of "look at that huge mempool rising!" Maybe we could see blocks larger than 8MB. I hold out hope we'll see some larger than 16MB, but what I'd like most of all is to see sustained traffic, and the appearance of a business-as-usual chain humming along without issue.

IF the mainstream press ran a few stories, that could give us good exposure.

Obviously this will all be "artificial" in that it won't consist of much "normal" economic activity. It could, however, show that BCH can and will support activity at that level.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
nChain's hashpower is a fly speck compared to Bitmain's. How could they get their will?
nChain holds the ace card. Bitmain is doing an IPO, so they are on their best behaviour, making positive-ROI decisions. I find it unlikely they would try to explain negative-ROI mining decisions to their new investors during an IPO and call them strategic. Bitmain also has no motivation to limit the network capacity to 1MB or 32MB.

If Bitmain were to forgo profit in a hash contest, it would give investors a justification for underbidding, devaluing the IPO.

In a situation where the BCH price drops, the responsible thing for Bitmain to do is move hashrate to the BTC chain.

__

This controversial uncertainty is one of the biggest drivers of the investment hype cycle. When the surrounding uncertainty is resolved, it could trigger the next hype cycle.

Changing the 32MB limit illustrates that we can change the limit during a controversy; this is an advantageous feature over the BTC chain.

It proves that miners are prepared to take responsibility for guarding the network while improving it, without depending on externally dictated safety limits that they have little control over and that are hard to change.

Higher on-chain transaction capacity invites businesses to use BCH, addressing the Fidelity Problem.

A limit beyond the network's capacity should be the default. We need miners to plan for peak capacity, and those that predict market demand correctly are rewarded. An artificially constrained limit limits capacity planning, which then justifies keeping the limit (we have empirical evidence of this with the BTC chain). BCH, when it grows, will suck BTC hashrate over to the BCH chain and could suffer the same fate again.

After we have removed the 32MB limit, there will be a new theoretical maximum that no other blockchain can boast. This is excellent marketing material: build it, and they will come.

Network capacity is not dictated by a transaction limit; it is dictated by the hardware deployed on the network, and miners cannot exceed the laws of physics. All they can do is respond to supply and demand. Miners historically reacted appropriately to supply and demand, until they acquired a conflict of interest, as seen when the transaction limit allowed them to extract higher rents by doing nothing.

The only negative result I can predict from removing the limit is the inability to extract rent when that limit is reached; consequently, there is no downside to removing the limit.

So I think the winds are in nChain's favour moving forward. As an investor, I want them, the miners, to remove the limit.
 
Last edited: