Gold collapsing. Bitcoin UP.

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@solex
To be clear, the hard limit is what the software will permit, while default / soft limits are a user setting between zero and the hard limit.
I use the terms "soft limit" and "hard limit" to mean "MG" and "EB" as defined in BU. I think @Peter R uses the terms the same way, as you can see in this tweet.


But these definitions are not important to me, as long as we understand each other.

a) the BTC miners are ignoring the dev default of 750KB right now.
This is the default soft limit/MG and not related to BUIP101. I agree that the miners are ignoring this default and have managed it well in both BTC and BCH.

b) users want software to work out of the box. They expect defaults to be set sensibly so it is one less thing to learn about and change before spinning up a node. BU should be user-friendly and have considered defaults, based on testing, benchmarking and real-world metrics which are updated ad hoc when new versions are being developed.
I believe an EB of 10 TB out of the box is sensible and would work like a charm. My takeaway from the Gavin quote is that it's not dangerous to remove the max blocksize.
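For concreteness, both values are ordinary one-line settings for the node operator. A minimal config sketch is below; the option names are recalled from memory, so treat them as assumptions and check your node's own help output rather than taking them as the actual BU flags.

Code:
# bitcoin.conf sketch; option names assumed, verify against your BU release
excessiveblocksize=32000000   # EB: largest block this node will accept, in bytes
blockmaxsize=8000000          # MG: largest block this node will generate, in bytes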

I think the benefit of BUIP101 is that it closes off the non-zero future risk that the BCH full node software ossifies with the block hard limit too small for global demand, i.e. a 1MB redux in 20 years' time.
Yes, that's one of the reasons behind BUIP101, although I think there is a real (but small) risk that it may ossify much sooner than in 20 years. And I hope @torusJKL pushes for the infinity default in the next vote after this one.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Norway,
OK, I see where these terms are being used casually in conversation. Software development deserves more precise definitions.

I will address your points more later, but for now I will say that in BU, both the MG and EB values are soft limits because they can be easily and quickly updated by the average user. Having them configurable weakens the Schelling point which arises in a decentralised network around hard-coded values when communication and co-ordination are poor. This is why Core has been so strong in maintaining the 1MB limit: they control the communication and co-ordination around a hard-coded value.

Where BCH can win is in reducing the friction the majority faces in shifting Schelling points, and configurable values certainly help with that. Your position is basically pushing for full emergent consensus, which is what BU strove to achieve for a long time. One of the reasons ABC got traction with 8MB is that the miners were nervous of an open limit.

The argument we have is over whether default settings help the miners/users - or not.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
So computing stats as Luke is doing does not make much sense to me.
I've said these stats were bullshit from the first time they were made public. For pretty much the reasons you stated too.

I think it should be possible to work out a ratio of "hidden" nodes to "visible" nodes and then use the number of visible nodes to estimate the total number of nodes. I'll have to give that a go sometime.
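As a rough illustration of the extrapolation I have in mind (my own toy sketch, with made-up numbers, not an established methodology):

Code:
# Toy sketch of the proposed estimate; the ratio and counts are made-up numbers.
def estimate_total_nodes(visible_count, hidden_per_visible):
    """Extrapolate total nodes from crawlable ("visible") nodes and an
    estimated number of non-listening ("hidden") nodes per visible one."""
    return visible_count * (1 + hidden_per_visible)

# e.g. 9,000 crawlable nodes and an estimated 5 hidden nodes per visible node
print(estimate_total_nodes(9000, 5))  # -> 54000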
 

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
To be clear, the hard limit is what the software will permit, while default / soft limits are a user setting between zero and the hard limit. [...] I think the benefit of BUIP101 is that it closes off the non-zero future risk that the BCH full node software ossifies with the block hard limit too small for global demand, i.e. a 1MB redux in 20 years' time.
i don't see how that makes sense. if the hard limit is what the software permits (what's imposed by the real processing bottlenecks, not by the block limit), there is no risk of early ossification, and no need for an algorithmic solution either. for presumably the software will keep being improved so long as demand/usage/value grows, or stop being maintained otherwise. this is especially to be expected in a competitive, multi-implementation environment.

the remaining question is where the soft limit should be set by default. @Norway is of the opinion that it should be set as high as possible (or better, that we do away with a default and make the users decide). i imagine the devs, knowing for a fact that the software will choke after a certain throughput, would rather set the default below that point.

in addition, implementations don't only compete on which one can reliably process more tx/s. there are other improvements to be made, particularly knowing that, even under the most auspicious scenario, it will take time for real-world adoption to reach levels sufficient to consistently fill even 4 MB blocks. this makes placing emphasis on where to set the default soft limit seem misguided, or at least less responsible than committing to successive cycles of capacity increase / testing and letting the default be set reasonably below known capacity.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@79b79aa8
hard limit is what the software permits (what's imposed by the real processing bottlenecks, not by the block limit)
Terminology again. The hard limit in the software is the max block size value supported. The physical capacity of the network is a fuzzy value which changes slightly every day and can only be estimated.

The 1MB is a hard limit below the capacity of both the BTC and BCH networks. The 32MB hard limit in ABC is above the BCH network's sustained capacity. Now, let's say for some reason that value never changes and it ossifies; then multiple implementations do provide a route around that problem. That solution was tried in BTC with XT, Classic and BU, but never quite succeeded, because it meant firing the Core dev team, and the decentralized community could not be cohesive enough to effect such a process.

I don't expect that scenario again in BCH, but 20 years hence the network will (hopefully) be so big it is servicing a major part of the world economy. We may then have a problem elsewhere in the code analogous (but not identical) to the IPv4 -> IPv6 transition, which has been ridiculously glacial due to the inertia of deployed systems and resistance to change.

In terms of defaults, I like them being pre-set by developers so that the software can work "out of the box". Making users tweak values they don't have an opinion on just raises the bar to usage and would handicap BU's growth.
 

imaginary_username

Active Member
Aug 19, 2015
101
174
To be honest, all this alarm about "risk of ossification" is really an attempt to make decisions for future communities, and it assumes a couple of things:

- That we won't be able to hardfork again. If we aren't able to hardfork again we're screwed anyway; or

- That the community will somehow lose the appetite to accommodate commerce when we need it. If we somehow get to that point, the maxblocksize can and will be shrunk down via a soft fork, especially from an absurd number like 10TB. In fact, I'd argue that an absurd and meaningless cap like 10TB makes it more likely for a smaller cap to be imposed by a future community.

Trying to hack future minds via unsound false advertising today doesn't work; even Satoshi got that. If BUIP101 somehow passes and gets implemented, I'll try my best to maintain a fork that changes one thing - the EB default - back to a saner level, so node operators have less hassle with defaults and don't have to muck with them every time they fire up their nodes.

And before anyone tries to burn me for that, I'd rather operators and miners use a change-one-line fork than migrate off BU altogether.
 
  • Like
Reactions: freetrader

Dusty

Active Member
Mar 14, 2016
362
1,172
Speaking of CTOR, there is still something I don't get.

When a node receives a block with its transactions ordered in that way, and there are chains of dependent transactions, how can it know a good way of processing them in the right order?
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
I attended the World Digital Mining Summit 2018, mainly to see developments in hardware and industry trends. I was a little disappointed to see it was rather Bitmain-centric.

My overriding takeaway was that when people like Adam Back and Greg Maxwell say "Miners just do transaction ordering", it rang more true than ever: that's almost all miners want to do. I was originally suspicious of Core usurping power, but I now see it more as them filling a power vacuum, a huge vacuum over who directs the network. ABC developers were invited to this conference and presented as the BCH reference implementation.

While I saw many prominent business models supporting the industry, missing from the equation was any notion of diversification in implementation software. (Consensus rules are assumed to be as unchanging as gravity or the rotation of the earth around the sun, a stable foundation on which an economy is built.) I feel like we are back where we started: developers direct hashrate, while claiming hashrate is following them based on merit.

Bitmain and its business partners offer a turnkey solution for hosting and managing that hashrate. Most miners don't care where their hashrate is pointing; they just want the earnings.

Bitmain, according to their IPO prospectus, earns 3.3% of its revenue from mining, yet hosts and directly controls in excess of 15% of the combined (BTC+BCH) SHA256 hashrate.

Assuming markets tend towards equilibrium, and given that mining is still profitable, 95% of their income ($2.845 billion in 2018) comes from selling mining hardware. So a lot of the hashrate directed by them is under their control while not owned by them.

Their biggest bottleneck, and the focus of this summit, is the logistics of deploying this hardware. Dozens of businesses presented solutions to these problems, all of which just assume the consensus rules are unchanging. Bitcoin.com was one of the few solution providers that mentioned hosting hardware other than Bitmain's.

A highlight was @SeanWalshBTC (on Twitter), who from my perspective was the only miner who understood the full value proposition. He presented a concept for growing earnings as opposed to cutting costs. His presentation is here if you are interested: https://www.slideshare.net/SeanWalsh28/20180923-crypto-mining-roi-revealed-tbilisi-sean-walsh-116109807
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
To be honest, all this alarm about "risk of ossification" is really an attempt to make decisions for future communities, and it assumes a couple of things:

- That we won't be able to hardfork again. If we aren't able to hardfork again we're screwed anyway;
Let's keep this attitude for all proposed consensus rule changes that are not needed at this time (specifically looking at CTOR, among others).
Debate between Justin Bons and Tone Vays in less than three hours.

https://blockchaintalks.io/our-events/upcoming-events/tone/

EDIT: Livestream here:
https://blockchaintalks.io/live/
I only caught the last two questions; thanks for posting.

On the question of brand, they are both variants of Bitcoin.

Rick Falkvinge distills the issue distinctly in this video:

On the other topic, Tone insisted that Core developers are the best. That was a lost opportunity; the reality was brought to attention by an audience member calling out "what about bugs?". In reality the Core developers don't appear as competent as some BU developers, having introduced inflation bugs.

The process that allows such bugs to be copied into BCH should also attract criticism.
 

imaginary_username

Active Member
Aug 19, 2015
101
174
"Miners just do transaction ordering" rang more true than ever, that's almost all miners want to do
It's the logical consequence of the zero-sum nature of mining. Unlike investing, where externalities that spill over to benefit others don't matter as long as the investment benefits you more than it costs, in mining that's not the case.

Consider two miners: Alice, who reinvests all her profits into mining hardware, and Bob, who diverts some profit into software research, the ecosystem, etc., which benefits everyone.

Alice's reinvested profit maximizes her hashrate output at any given time, at no benefit to anyone else. In addition, it also harms Bob: difficulty will go up, and Bob gets fewer coins for his investment.

Bob's investment in the ecosystem, on the other hand, benefits everyone including Alice (via coin value increase, which indirectly increases the value of their capital investments). The benefit Alice receives, though, is used to further increase hashrate, which, through difficulty adjustment, comes back to haunt Bob. Whatever investment Bob makes will have to be justified twofold: it will have to benefit his coin and rig value more than what he puts in, after accounting for the hashrate increase from everyone else's reinvestments after him.

And whether or not the business makes sense to Bob, over time we can expect Alice's share of total hashrate to grow while Bob's shrinks, all else held constant. Bob might very well become an inconsequential miner and mostly just an investor (as an investor, the zero-sum game doesn't apply) over the long term.
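To make the dynamic concrete, here is a toy simulation of the argument above (my own sketch with made-up numbers, not a model of real mining economics): rewards per period are split in proportion to hashrate, which is roughly what difficulty adjustment does to coin issuance; Alice reinvests everything, Bob reinvests only half.

Code:
# Toy model of the Alice/Bob argument; all numbers are made up.
COINS_PER_PERIOD = 100.0   # total coins issued per period (fixed by difficulty adjustment)
HASH_PER_COIN = 1.0        # hashrate units purchasable per coin (assumed constant)

alice, bob = 100.0, 100.0  # both start with equal hashrate
for _ in range(50):
    total = alice + bob
    alice_reward = COINS_PER_PERIOD * alice / total
    bob_reward = COINS_PER_PERIOD * bob / total
    alice += 1.0 * alice_reward * HASH_PER_COIN   # Alice reinvests all profit
    bob += 0.5 * bob_reward * HASH_PER_COIN       # Bob diverts half to the ecosystem

print(f"Alice's hashrate share after 50 periods: {alice / (alice + bob):.1%}")
# Alice's share drifts well above 50%, so Bob earns an ever-smaller slice of the
# fixed per-period reward even if his ecosystem spending raises the coin's value.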
 

imaginary_username

Active Member
Aug 19, 2015
101
174
Speaking of CTOR, there is still something I don't get.

When a node receives a block with its transactions ordered in that way, and there are chains of dependent transactions, how can it know a good way of processing them in the right order?
CTOR (or AOR, for that matter) does not care whether transactions are "processed in the right order": it checks that every input matches either an entry in the UTXO set or an output created in the same block that is not already claimed by another input. If any input matches neither (i.e. it spends coins "out of thin air"), the block is rejected, with the obvious exception of the coinbase transaction.
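A minimal sketch of that order-independent check, using a toy transaction representation of my own (not BU or ABC code):

Code:
# Toy order-independent block validation; block_txs is a list of
# (txid, inputs, outputs) with inputs given as (txid, output_index) pairs.
def validate_block(block_txs, utxo_set):
    # First pass: collect every outpoint created inside this block.
    block_outputs = set()
    for txid, _inputs, outputs in block_txs:
        for index in range(len(outputs)):
            block_outputs.add((txid, index))

    # Second pass: every non-coinbase input must spend an existing UTXO or an
    # in-block output, and nothing may be spent twice.
    spent = set()
    for position, (txid, inputs, _outputs) in enumerate(block_txs):
        if position == 0:
            continue  # the coinbase creates coins "out of thin air" by design
        for outpoint in inputs:
            if outpoint in spent:
                return False  # double spend within the block
            if outpoint not in utxo_set and outpoint not in block_outputs:
                return False  # input does not match any known output
            spent.add(outpoint)
    return True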
 

Dusty

Active Member
Mar 14, 2016
362
1,172
Thanks imaginary_username, I know that.
The problem is that, as a wallet provider, I need to reconstruct the transaction history of an account and hence must process transactions in the right order, not simply know the total balance.

CTOR therefore requires me to reconstruct a valid TTOR (topological ordering) of the whole set.
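For what it's worth, that reconstruction is just a topological sort over the in-block dependency edges; a minimal sketch using the same toy transaction representation as above (my own code, not from any wallet backend):

Code:
# Rebuild a topological (TTOR-style) order from a CTOR-ordered block using
# Kahn's algorithm; block_txs is a list of (txid, inputs, outputs) with inputs
# given as (txid, output_index) pairs.
from collections import defaultdict, deque

def topological_order(block_txs):
    in_block = {txid for txid, _, _ in block_txs}
    children = defaultdict(list)
    indegree = {txid: 0 for txid in in_block}

    for txid, inputs, _ in block_txs:
        parents = {parent for parent, _index in inputs if parent in in_block}
        for parent in parents:
            children[parent].append(txid)
            indegree[txid] += 1

    # Start with transactions that depend on nothing else inside the block.
    queue = deque(txid for txid, degree in indegree.items() if degree == 0)
    ordered = []
    while queue:
        txid = queue.popleft()
        ordered.append(txid)
        for child in children[txid]:
            indegree[child] -= 1
            if indegree[child] == 0:
                queue.append(child)
    return ordered  # parents always come before the transactions that spend them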
 
  • Like
Reactions: imaginary_username

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
What do you think, guys & girls?

1) Should the protocol be constantly upgraded, or should it be frozen as soon as possible?

2) Should the max blocksize limit be a part of the protocol in the future?
 
  • Like
Reactions: Golarz Filip