Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@Tom Zander

>That said, it is the sane thing to do to look at the devs for a good maximum because software needs to be tested against a block size and known to work.

the problem with that is that none of the devs have the server capacity matching that of the miners to run reasonable tests. nor do they run under the same financial stress of making or breaking a business or satisfying investors. and even then, it would be difficult to simulate real economic scenarios, as Bitcoin in general has continued to defy all skeptics due to its rarely appreciated sound money principles.

>Otherwise you end up just stating your car can go 400km/h while the producer put a 250km/h maximum on it. That's not useful, that is plain irresponsible.

so even if some rogue miner self-constructs a bloat block that chokes some of the other miners, they will be forced to adapt/adjust to higher capacity, live or die. what's wrong with that? get rid of the cruft.

Bitcoin is really a socio-economic system that's driven by sound money (the fixed supply). with the prospect of hyperbitcoinization always on the horizon, all participants have infinite motivation to adapt to be one of the survivors getting thru the gauntlet of scaling. this requires no limits at all.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Why the push to go to a huge max-blocksize THIS YEAR???
Because I don't trust the developers in general to manage the max blocksize limit, or to find consensus about what the max blocksize should be in the future. We could get stuck at 32 MB. The threat is real.

And it's just a default setting. Miners may very well adjust it down. Or up. The subtle psychology around the default values should not be underestimated.
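To make that concrete, a miner could override the default in bitcoin.conf with something like this (a sketch using BU's excessiveblocksize / excessiveacceptdepth options as I understand them; the values are purely illustrative):

Code:
# accept blocks up to 10 TB before flagging them as excessive (bytes)
excessiveblocksize=10000000000000
# accept an "excessive" chain anyway once it is this many blocks deep
excessiveacceptdepth=4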

I want to highlight my quote from Gavin. I believe it to be correct. You are all free to call me a member of the Gavin cult. (y)

I will propose a BUIP on setting the default values to 10 TB if some of you support this idea and show it. It will be too late for the upcoming vote, so I aim for the next one.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
the problem with that is that none of the devs have the server capacity matching that of the miners to run reasonable tests.
BU passed a vote that allocates up to $150,000 per year for this kind of testing (we're hardly using any of this at the moment). We can deploy -- and have deployed -- powerful servers on the Gigablock Testnet.

But really that is beside the point because right now all of the bitcoin implementations used by miners are limited by the lack of parallelization in the code. It hardly matters how powerful your server is, if the code can't take advantage of your awesome hardware.
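To illustrate the point (a toy sketch, not BU code): most of a block's input-script checks are independent of each other, so in principle they can be fanned out across cores, but a single-threaded loop leaves the rest of a powerful server idle. verify_input() here is a hypothetical stand-in for real signature validation.

Code:
# Toy sketch: independent script checks fan out across cores.
import hashlib, time
from concurrent.futures import ProcessPoolExecutor

def verify_input(tx_input: bytes) -> bool:
    h = tx_input
    for _ in range(200_000):          # burn CPU like an ECDSA check would
        h = hashlib.sha256(h).digest()
    return True

def validate_serial(inputs):
    return all(verify_input(i) for i in inputs)

def validate_parallel(inputs, workers=8):
    # The checks don't depend on each other, so order doesn't matter.
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(verify_input, inputs))

if __name__ == "__main__":
    inputs = [bytes([i % 256]) * 32 for i in range(64)]
    for fn in (validate_serial, validate_parallel):
        t0 = time.time()
        fn(inputs)
        print(fn.__name__, f"{time.time() - t0:.2f}s")

The parallel version should finish roughly cores-times faster on a big server. The catch in real node code is that transactions can spend outputs created earlier in the same block, so the checks are not all independent, which is exactly why the parallelization work is hard.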
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
My proposal for default value is 10 TB. The miners may adjust it, if they don't like it.
I'd like to be realistic: we have some empirical data that shows where the current capacity range is, and I'd think the default should always be on the upper end of that range. That way it's seen as responsible (an image problem), and it's seen as uncomfortable for miners who could not handle that capacity (thus letting those who are not upgrading know that they could become irrelevant).

I'm guilty of projection, as I thought that 32MB was that upper limit.

I now think if the default were set above that limit, at an unrealistic level, people would talk and learn why we couldn't achieve it, putting attention on the actual bottlenecks.

The limit shouldn't be set in a safe developer zone, as that discourages other industries, who don't have the developer power to set the limit, from competing. If it is set in a safe zone it acts as a regulator on competition to increase transaction capacity, i.e. it removes the problem from the market.
@AdrianX i'm not keeping lists ;)
I found the tweet; it was Haipo Yang. I had no idea "we" were working on implementing this. I'm all for testing and prototyping.
I'm surprised to see BU in support of ABC's plans to do this:
BITCOIN CASH DEVELOPMENT AND TESTING ACCORD Bitcoin Unlimited Statement said:
Increase the network capacity, ideally by decreasing the inter-block time to 1 min, 2 min, or 2.5 min to improve the user experience, focusing on faster and smaller blocks.
As a side note, in the development world prototypes are built and discarded all the time.

In software development, the cost to deploy a prototype is extremely low. The result is that software developers often think that if you build it, it should be deployed.

I'm all in favour of building and testing projects. Deployment, however, should not be taken lightly, and changing the money just because you can should be avoided unless absolutely necessary.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
I'd like to be realistic: we have some empirical data that shows where the current capacity range is, and I'd think the default should always be on the upper end of that range. That way it's seen as responsible (an image problem), and it's seen as uncomfortable for miners who could not handle that capacity (thus letting those who are not upgrading know that they could become irrelevant).
I disagree, @AdrianX. I don't think it's Bitcoin Unlimited's job to tell miners what's safe. We don't have to ship code with an "Approved for X MB" stamp on it.

Can a RasPi handle a 32 MB block with the current code? (I doubt it, but I could be wrong.)

We should not have a system where the guys in white lab coats give the miners a green light on what they should accept.

The miners should compete in this arena, just like they do on mining.

Do you think Gavin is wrong when he says this, @AdrianX?

Yes, let’s eliminate the limit. Nothing bad will happen if we do. And if I’m wrong, the bad things would be mild annoyances, not existential risks, much less risky than operating a network near 100% capacity.
EDIT: Just to be clear: I'm 100% behind the Gigablock Testnet Initiative (GTI). It's like a public service, identifying bottlenecks and finding clever solutions at the same time. I only see good things coming out of it.

I expect the node code to be closed source and financed by competing mining pools in the future, but I love the hydra approach where GTI gives something to everyone, and incentivised development happens at the same time.

Remember: If one 30% pool develops effective closed-source software that the rest can't use, they can't make bigger blocks than the others without getting orphaned.
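To put rough numbers on that orphan discipline (a back-of-the-envelope sketch; the Poisson model and all values are illustrative assumptions, ignoring pool-to-pool relay optimizations):

Code:
# P(orphan) ~= 1 - exp(-T/600): chance someone else finds a block while
# yours is still propagating/validating for T seconds.
import math

def orphan_probability(T: float, interval: float = 600.0) -> float:
    return 1.0 - math.exp(-T / interval)

for T in (2, 10, 30, 120):
    print(f"T={T:>3}s  P(orphan) ~= {orphan_probability(T):.1%}")

A block that takes two extra minutes to reach the rest of the hashpower gives up roughly 18% of its expected reward, so a pool that makes blocks nobody else can process quickly is mostly punishing itself.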
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
The Emergent Consensus concept born in Bitcoin Unlimited is all about the process.

I'm just trying to get more focus on the default values, as I believe they have been underestimated / considered unimportant compared to the process.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
@Mengerian
Indeed. I don't recall ABC development ever advocating for shorter block times.
In November 2017, it was BU development who listed shorter block times as a potential beneficial change, though 2 or 2.5 minutes was proposed. One benefit I see is that this provides "fractional" confirmations, which improves granularity in decision-making for vendors receiving payments. Arguably, the recent initiatives to improve 0-conf make this less important.

The change to 1 minute is probably at the top end of ambitious, but it gained popularity when the viability of the concept was endorsed by Gavin Andresen several years ago. Obviously, LTC has worked well with shorter block times, so this particular change has always had some oxygen to keep the debate alive in BCH.
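For a feel of what that granularity buys, one can plug the whitepaper's attacker catch-up formula into both regimes (a sketch assuming equal total hashrate; it ignores the higher orphan rates shorter intervals bring, which would claw some of this back):

Code:
# Satoshi's attacker-success probability (whitepaper, section 11): the odds
# that an attacker with hashrate fraction q overtakes z confirmations.
import math

def attacker_success(q: float, z: int) -> float:
    p = 1.0 - q
    lam = z * (q / p)
    total = 1.0
    for k in range(z + 1):
        poisson = math.exp(-lam) * lam**k / math.factorial(k)
        total -= poisson * (1.0 - (q / p) ** (z - k))
    return total

q = 0.10  # attacker with 10% of the hashrate
print(f"1 conf  @ 10-min blocks: {attacker_success(q, 1):.4f}")   # ~0.2046
print(f"5 confs @  2-min blocks: {attacker_success(q, 5):.4f}")   # ~0.0009

Same ten minutes of waiting, a far smaller catch-up probability, and the vendor gets a progress signal every couple of minutes instead of one all-or-nothing event.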
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
We could get stuck at 32 MB. The threat is real.
I don't believe this. I believe that if we even had organic demand far exceeding BTC's (let's say full 8 MB blocks), developers would be scrambling all over to break the next barriers.

And BU has long been doing real work in this domain, which I have no doubt will bear real fruit.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
I proposed BUIP099 to give BU users the choice of not having their clients replay-fork away from the wallet ecosystem in November 2018.

I think if BU wants to move away from a 6-monthly HF cadence to miner voting on individual change items, then it needs to consciously drop this "automatic replay" as a requirement.

In good BU tradition, it should provide node operators the option to do so.
So I see this BUIP as related to BUIP098, but I raised it because I didn't see anyone address this requirement specifically.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
Indeed. I don't recall ABC development ever advocating for shorter block times.
Yeah, the only thing I have seen Amaury say about it is that there is no point to reducing block time, since we want transactions to be secure in 3-5 seconds, and you can't reduce block time enough to make that work. So he advocates leaving block time the same, and improving 0-conf. Here's an example of this:

 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Mengerian: I absolutely agree with his 5s assessment. I think it happens on a scale, though. A merchant display becoming "green" because a weak block arrived would help with the process. Even if the mean wait for that is ~30s or so.

So I could see that the kind of setting where fractional confirmations might make sense, also from a UI POV, is something like a restaurant, IMO. Or online payments, potentially.

That all said, I really want to put something together regarding 0-conf wrt. CHECKDATASIG and "zero conf insurance".
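The primitive such an insurance scheme needs is a compact double-spend proof: two valid signatures from the same key over two conflicting transactions spending the same output. A toy sketch of the detection side (all names invented; Ed25519 stands in for BCH's secp256k1, and on-chain the same signature check is what OP_CHECKDATASIG would let a script perform):

Code:
# Toy double-spend proof: two valid signatures from the same key over two
# different transactions that spend the same outpoint.
from dataclasses import dataclass
from cryptography.hazmat.primitives.asymmetric.ed25519 import (
    Ed25519PrivateKey, Ed25519PublicKey)
from cryptography.exceptions import InvalidSignature

@dataclass
class SignedSpend:
    outpoint: str      # "txid:index" being spent
    tx_digest: bytes   # digest of the spending transaction
    signature: bytes

def is_double_spend_proof(pub: Ed25519PublicKey,
                          a: SignedSpend, b: SignedSpend) -> bool:
    if a.outpoint != b.outpoint or a.tx_digest == b.tx_digest:
        return False  # not conflicting spends of the same coin
    try:
        pub.verify(a.signature, a.tx_digest)
        pub.verify(b.signature, b.tx_digest)
    except InvalidSignature:
        return False
    return True

# Demo: a wallet key signs two conflicting spends of the same outpoint.
key = Ed25519PrivateKey.generate()
spend_a = SignedSpend("aa..ff:0", b"pay-merchant", key.sign(b"pay-merchant"))
spend_b = SignedSpend("aa..ff:0", b"pay-self", key.sign(b"pay-self"))
print(is_double_spend_proof(key.public_key(), spend_a, spend_b))  # True

An insurance covenant could then hold a deposit that anyone presenting such a pair of signatures can claim for the wronged merchant. Speculative, but that seems to be the shape of the idea.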
 

wrstuv31

Member
Nov 26, 2017
76
208
If you remove the limit entirely the limit defaults to the block size that makes the node run out of memory and crash in the current implementations. So there has to be one at least until nodes are completely rearchitected.
This means that no limit is needed. Miners can set the block size as large as their memory can handle. They upgrade as usage reaches their limits, and inefficient implementations of the upgrades get weeded out, leaving the more capable miners with more profit.

Large blocks that test miners' limits (say 10x the median) don't happen often; they happen periodically and/or randomly. When these high-traffic periods emerge, the miners who cannot keep up lose a small amount of mining time trying to recover, but then have a clear indication of the state of the network and where they stand, which helps them make competitive decisions.


Miners need stability. Any hard limit won't last, and an automated algorithmic increase will be arbitrary compared to the present economic reality, creating more uncertainty when miners are making competitive decisions. It's easier for miners to always expect to have to upgrade their hardware according to how they observe the network reacting to large blocks. When the 10x spikes come around, not everyone will be ready to receive them; that's a risk miners need to manage, and it should be a simple part of their operations.
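A rough sketch of what that lost mining time costs (all numbers illustrative):

Code:
# While your node stalls for `stall` seconds on an oversized block, you
# mine on a stale tip; expected loss ~= hashrate share * stall/600 blocks.
BLOCK_REWARD = 12.5   # coins per block (illustrative)
INTERVAL = 600.0      # seconds

def expected_loss(hash_share: float, stall: float) -> float:
    return hash_share * (stall / INTERVAL) * BLOCK_REWARD

# A 5% pool stalled for 3 minutes on one 10x block:
print(f"{expected_loss(0.05, 180):.2f} coins expected loss")  # ~0.19

Occasional losses on that scale are a budgeting item, not an existential event: a manageable operational risk rather than something a protocol constant has to prevent.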

How will we know when "nodes are completely rearchitected"? It's a vague, far-off target used to delay the blocksize increase in favor of other technologies.

Edit: I agree with Shadders' comment.
 

NewLiberty

Member
Aug 28, 2015
70
442
Shadders is being very helpful.

I invite all to take notice of his other proposal.

Rather than debating algorithms which attempt to predict what miners want to do about blocksize...

It presents a miner-to-miner communication protocol, entirely pseudonymous, like Bitmessage, signed by per-block or per-miner keys. A message signed by hashpower.
It lets miners make informed choices.
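As a thought experiment, the message itself could be this simple (a toy sketch; every name here is invented, and Ed25519 merely stands in for whatever key a miner would commit to via a coinbase output or block signature):

Code:
# Toy "message signed by hashpower": a miner announces settings under a
# key it has (elsewhere) tied to blocks it won. The binding is not shown.
import json
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

def make_announcement(key, settings: dict, height: int) -> dict:
    body = json.dumps({"height": height, "settings": settings}, sort_keys=True)
    return {"body": body, "sig": key.sign(body.encode()).hex()}

def verify_announcement(pub, msg: dict) -> bool:
    try:
        pub.verify(bytes.fromhex(msg["sig"]), msg["body"].encode())
        return True
    except InvalidSignature:
        return False

miner_key = Ed25519PrivateKey.generate()
msg = make_announcement(miner_key, {"maxblocksize": "128MB"}, 551000)
print(verify_announcement(miner_key.public_key(), msg))  # True

The hard part is the binding of keys to hashpower, which the protocol would have to specify.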

BU is going to get the swing-vote miners by supporting the other FVNIs. Miners cut off from these communications will suffer, and so will not use BU, but a combination of ABC and SV, if those support this communication protocol and BU doesn't.

If BU presents well-organized software to manage these communications based on this protocol, miners may choose to run it instead of SV or ABC even if they are not swing voters.

The potential for BU is promising.

EDIT:
For example, miners can communicate their setting changes even in between blocks, or if they only win two blocks a week. Miners can then change settings as often as they like based on new information from these channels.
 

majamalu

Active Member
Aug 28, 2015
144
775
To be honest, I don't care who wins this battle, as long as the winner imposes itself in a swift and overwhelming fashion. And I don't even care if the winner is right. May the most invested (in the future of BCH) win, for they are the ones who are most incentivized to be right, and to quickly correct course in case they are wrong.
 

NewLiberty

Member
Aug 28, 2015
70
442
Interestingly, a short, swift and overwhelming win would reduce the necessity of a BU user base.

A hash battle is eventually the continuous state of nature for bitcoin: competitors competing at the same game, rather than forking off to new games, each on its own chain. This is boot camp for that continuous miner-vs-miner battle. The many forks of BTC created the confusion that a contentious hard fork necessitates a new chain and coin; it has people scared, but it is just not so, and this will surprise people in November.

The path to the base protocol stability needed for mainstream adoption leads through this sort of competition. Each proving their work to be the best at being bitcoin.

Even during a hash battle, a 0-conf tx is just as secure as it is otherwise. Exchanges may temporarily require more confirmations, and miners and similar businesses may have some disruption, but the average user shouldn't notice unless they want to watch the bloodsport.

And the best console for that spectacle may just be BU. I look forward to seeing what it can do.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Why we should not implement requirements in Bitcoin Cash clients which make it unsafe to continue running the unmodified client in case of a standoff / controversial upgrade:


Those following Reddit will recognize the poster as a persistent anti-BCH troll who usually just comments negatively to make the community look bad, but in this case he is actively advocating for something that he knows will be harmful to less informed BCH node operators - I told him so nearly 20 hours ago in a direct reply to his misinformation.


Yet he repeats his misinformation. Obvious conclusion is obvious.

I hope you can now see the exact reason I proposed BUIP099; I think we should resist such logic bombs in the code. They haven't proven useful, not even in Ethereum's case (imo), and I fear we will have to learn the history lesson (sigh).

There's more to say on this, but I am a little surprised no-one on this forum has commented on the BUIP.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Let's call this guy's "patent" bluff or bluster - whatever it is - already.
I've never been more in favor of DATASIGVERIFY.
Let him face the choice of forfeiting his patent which he uses to threaten the base protocol, or pursuing legal action to enforce it against Bitcoin Cash. Or fork off onto his own chain where he can live out his dream.

 