Gold collapsing. Bitcoin UP.

Tom Zander

Active Member
Jun 2, 2016
208
455
Because I don't trust the developers in general to manage the max blocksize limit, or to find consensus about what the max blocksize should be in the future. We could get stuck at 32 MB. The threat is real.

And it's just a default setting. Miners may very well adjust it down. Or up. The subtle psychology around the default values should not be underestimated.

I read this three times because on first read you are making two incompatible statements: devs manage the max blocksize, and miners can just change the setting. They can't both be true...

But on re-read it looks like you think that the developers deciding on a default setting is enough to control the network, in a subtle psychological manner.


In reality, the software can't currently handle blocks over 32MiB, and the developers need to make many fixes in order to get there. And, yes, the miners can then change the block size to a new max; they just need to set a value in their client.
What you think about developers having the power over the protocol or not is really not relevant.
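To make that concrete: in Bitcoin Unlimited, for instance, the accepted block size is a one-line setting. (Option name quoted from memory, so treat it as an assumption, not a reference.)

```
# bitcoin.conf sketch (Bitcoin Unlimited; option name from memory)
# Accept blocks of up to 64,000,000 bytes:
excessiveblocksize=64000000
```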
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
No Tom, I'm not saying that devs control miners through defaults. I'm talking about the default values and the negative effect low default values can have.

If your client can't handle over 32MiB today on any kind of hardware, your client is bad. It's not an unsolved problem.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
> If your client can't handle over 32MiB today on any kind of hardware, your client is bad.

Then in your opinion, all Bitcoin clients are bad. ;) (I disagree with that, btw)

The issue is that the p2p layer was originally limited to 32MiB by Satoshi. He did this for a reason and that reason is that the p2p layer is just not going to scale to large blocks.
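From memory, that limit is a single constant in the original serialization code; anything claiming to be bigger is rejected outright. Paraphrased, not an exact copy:

```cpp
#include <cstdint>
#include <stdexcept>

// Paraphrase of the limit in Bitcoin's original serialize.h (from
// memory, not an exact copy): 0x02000000 bytes = 32 MiB, the largest
// object the p2p serialization layer will accept.
static const unsigned int MAX_SIZE = 0x02000000;

// Every serialized object announces its size up front; anything
// claiming to be larger than 32 MiB is rejected before the payload
// is even read.
void CheckSize(uint64_t nSize)
{
    if (nSize > MAX_SIZE)
        throw std::runtime_error("size too large");
}
```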

Sure, you can just throw more memory at it and it probably won't fail.

But it won't be very good.

The point is that if you introduce huge blocks, you can't just keep the current design of the p2p layer. The current design is that a peer sends a full block to a node that is catching up, and during that time neither node can do ANYTHING. They are completely blocked, as only one CPU core is ever used for the network.

This makes the system extremely fragile. It means I can make a node effectively isolated from the network for quite a long time just by slowly feeding it a block. And in the meantime nothing else will be able to get in or out of that node, with lots of small annoying side effects.
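To make the problem concrete, here is a deliberately simplified sketch of such a single-threaded message loop (illustrative only, not the actual client code):

```cpp
#include <sys/socket.h>
#include <sys/types.h>
#include <vector>

void ProcessBlock(const std::vector<unsigned char>&) {} // stand-in

// One thread, one peer, one buffer. While recv() dribbles in a 32 MiB
// block, no other peer is serviced by this thread, so a malicious peer
// sending a few bytes per second can stall the node for a long time.
void MessageLoop(int peer_socket)
{
    std::vector<unsigned char> block(32 * 1024 * 1024);
    size_t got = 0;
    while (got < block.size()) {
        ssize_t n = recv(peer_socket, block.data() + got,
                         block.size() - got, 0);
        if (n <= 0)
            return; // peer disconnected or errored
        got += static_cast<size_t>(n);
    }
    ProcessBlock(block); // only now can anything else be handled
}
```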

The long-term solution is to ditch the current p2p stack and code a new one. I started doing this in Flowee some years ago (link), but got sidetracked with more important stuff. Any researchers who want to join in are more than welcome!
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
> If your client can't handle over 32MiB today on any kind of hardware, your client is bad.

Then in your opinion, all Bitcoin clients are bad. ;) (I disagree with that, btw)

The issue is that the p2p layer was originally limited to 32MiB by Satoshi. He did this for a reason and that reason is that the p2p layer is just not going to scale to large blocks.

Sure, you can just throw more memory at it and it probably won't fail.

But it won't be very good.
Tom, you're just contradicting yourself. You say none of the clients can handle blocks over 32 MiB, and then you say more memory solves it, so they can.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
Then in your opinion, all Bitcoin clients are bad. ;) (I disagree with that, btw)
maybe that is the case. this is why i'm beginning to think that clients should be maintained, coded, and developed by the largest vested players: miners. they alone have the motivation and urgency to scale to worldwide adoption moving through the messy gauntlet of competition. it's not like we have unlimited amounts of time to do this. look around us: fiat players have upped their game with mobile financial apps like Venmo, etc.
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
Interestingly, a short, swift and overwhelming victory would reduce the necessity of a BU user base.

A hash battle is ultimately the continuous state of nature for bitcoin: competitors competing at the same game, rather than forking off to new games, each on its own chain. This is boot camp for that continuous miner-vs-miner battle. The many forks of BTC created the confusion that a contentious hard fork necessitates a new chain and coin. It has people scared, but it is just not so, and this will surprise people in November.

The path to the base-protocol stability needed for mainstream adoption leads through this sort of competition, each proving their work to be the best at being bitcoin.

Even during a hash battle, a 0-conf tx is just as secure as it is otherwise. Exchanges may temporarily require more confirmations, and miners and similar businesses may have some disruption, but the average user shouldn't notice unless they want to watch the bloodsport.

And the best console for that spectacle may just be BU. I look forward to seeing what it can do.
I really wish more people understood this. Thanks for eloquently spelling it out.

Hopefully a live demonstration will help the laggards.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
If I'm wrong, I'd be interested to hear where anyone from ABC has ever advocated for changing the block time. That would surprise me.
It seems to me it was mostly a talking point to fog the issue of increasing capacity.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
at least within a miner-based development war, you can be confident certain miners will be losing hard money. either the losing miners get forked off permanently on a declining-value coin from their bad choices, or they just go out of business. or, more likely, they quickly come to their senses and reorg to the majority chain. but in a voluntaryist implementation development war, you can only be confident that the losing implementation teams will be losing opportunity. there's really no comparison in terms of what works best for Bitcoin as Sound Money.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
> If your client can't handle over 32MiB today on any kind of hardware, your client is bad.

Then in your opinion, all Bitcoin clients are bad. ;) (I disagree with that, btw)
the clients are good enough for today's use but clearly are "bad" when we talk about tomorrow's scale problems. and this is why we promote competition and embrace many different client implementations that service the same blockchain (BU, ABC, etc.), so that we can overcome the problems with the current clients in the best ways we could possibly think of.

I have much faith in future implementations to overcome these silly limits (e.g. the silly 32MB limit), limits that were put there because the programmer said to himself, "Hmmm, this is a hard problem, I can't solve it right this second, so I'll put in a hardcoded limit for now and come back to it later."

That's basically the thought process behind pretty much ANY hardcoded limit.

The issue is that the p2p layer was originally limited to 32MiB by Satoshi. He did this for a reason and that reason is that the p2p layer is just not going to scale to large blocks.

Sure, you can just throw more memory at it and it probably won't fail.

But it won't be very good.

The point is that if you introduce huge blocks, you can't just keep the current design of the p2p layer. The current design is that a peer sends a full block to a node that is catching up, and during that time neither node can do ANYTHING. They are completely blocked, as only one CPU core is ever used for the network.

This makes the system extremely fragile. It means I can make a node effectively isolated from the network for quite a long time just by slowly feeding it a block. And in the meantime nothing else will be able to get in or out of that node, with lots of small annoying side effects.
isn't that what parallel validation aims to solve? (is the idea of parallel validation still a thing?)
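For context: parallel validation, as proposed and implemented in Bitcoin Unlimited, means competing blocks are validated concurrently rather than one at a time, so a single huge block can't monopolize the node. A rough sketch of the idea (hypothetical code, not BU's actual implementation):

```cpp
#include <future>
#include <vector>

struct Block { std::vector<unsigned char> data; };

// Stand-in for full block validation.
bool CheckBlock(const Block& b) { return !b.data.empty(); }

// Two competing blocks at the same height are checked concurrently,
// each on its own thread. (In BU's actual design the first block to
// finish validation wins the race; here both simply run in parallel.)
void ValidateCompeting(const Block& a, const Block& b)
{
    auto fa = std::async(std::launch::async, CheckBlock, std::cref(a));
    auto fb = std::async(std::launch::async, CheckBlock, std::cref(b));
    fa.get();
    fb.get();
}
```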


The long-term solution is to ditch the current p2p stack and code a new one. I started doing this in Flowee some years ago (link), but got sidetracked with more important stuff. Any researchers who want to join in are more than welcome!
now you're talking! (y)
 

NewLiberty

Member
Aug 28, 2015
70
442
In regards to the hash war: there will be a war, if for nothing else than to show how Bitcoin really works. nChain and CoinGeek are committed to provoking it, they believe, for the good of BCH.

How does this look?
nChain and allies are adamant; they will be unmoving and will not negotiate on any terms.
ABC has signaled willingness to negotiate with nChain, and been rebuffed.

I submit that it will be a contentious fork even if ABC were to scrap all their plans and submit to SV's version of Bitcoin Cash. In such a case, the hash war would still occur, but just over block size.

Even if all submitted to SV, BMG, CoinGeek and allies would force a hashwar over blocksize by creating blocks too big for anyone else to mine.

The incentive in a blocksize hashwar is for all others to orphan those blocks, divide up the transactions into smaller blocks, and devour the fees (because they can't mine or even swiftly validate such big blocks, they might as well try to orphan them).

The reason for all this is fairly simple. MUCH bigger hashwars are coming, and over more important things. The ones in the game now are all in favor of BCH succeeding. The ones later will not be. Current miners must learn, be prepared, understand how to protect the coin that funds their living... or else BCH dies at a later date not too far in the future.

BU has a unique position here. Not a main party in the war, it can serve those that would switch for profit. It can provide the best telemetrics and controls for miners to wage such wars. It can become the reference client for all that is left of Bitcoin.

Anything that coordinates ALL miners, is anathema, and will be an object of the hashwar waged by nChain.
Anything that allows miners to voluntarily coordinate and also to fight against each other when betrayed is useful.
 

SanchoPanza

Member
Mar 29, 2017
47
65
Anything that coordinates ALL miners, is anathema, and will be an object of the hashwar waged by nChain.
Anything that allows miners to voluntarily coordinate and also to fight against each other when betrayed is useful.
BIP135 allows miners to voluntarily coordinate on deploying individual changes that can stand on their own. It also allows them to fight against each other using their hashrate.
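For those unfamiliar with it: BIP135 generalizes BIP9's version-bits voting, giving each proposed change its own bit in the block version, with its own threshold and window. The signaling check itself is tiny (a sketch, not any particular client's code):

```cpp
#include <cstdint>

// Per BIP9/BIP135, the top three bits of a block's nVersion must be
// 001 for the remaining 29 bits to be read as deployment signals.
constexpr uint32_t TOP_BITS = 0x20000000;
constexpr uint32_t TOP_MASK = 0xE0000000;

// True if this block version signals support for deployment `bit`
// (0..28). Under BIP135 each bit carries its own threshold and window,
// so miners can signal for each change independently.
bool SignalsDeployment(uint32_t nVersion, int bit)
{
    return (nVersion & TOP_MASK) == TOP_BITS
        && (nVersion & (uint32_t{1} << bit)) != 0;
}
```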

This could be useful to nChain's new client too. Once upon a time I submitted a Core PR for this too, but Core wasn't really that interested. To me that's a sign it could be too useful.
 

NewLiberty

Member
Aug 28, 2015
70
442
> isn't that what parallel validation aims to solve? (is the idea of parallel validation still a thing?)

Parallelizing EVERYTHING is needed.
High availability + use all resources of the machine.

Because:

The next level up is a microservices architecture: every process parallelized and tuned for efficiency across many cores within each processor of a many-processor system.
Terab has the lead currently, from starting first. If your implementation doesn't have a plan for microservices, advanced telemetrics, and reliability... then its lifespan will be limited to Terab's release date.

Think about fighter-pilot computer control systems. Fully validating node implementations for miners are this: much information provided to a decision maker, with the most accommodating controls conceivable.
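A trivial sketch of what "use all resources of the machine" means for validation (hypothetical code, not Terab's): input checks for a block fan out across every available core instead of running on one thread:

```cpp
#include <algorithm>
#include <future>
#include <thread>
#include <vector>

struct Tx { std::vector<unsigned char> data; };

// Stand-in for per-transaction script/input validation.
bool CheckInputs(const Tx& tx) { return !tx.data.empty(); }

// Split the block's transactions into chunks and validate each chunk
// on its own thread, using every core the machine has.
bool ValidateAllParallel(const std::vector<Tx>& txs)
{
    const size_t workers = std::max(1u, std::thread::hardware_concurrency());
    const size_t chunk = (txs.size() + workers - 1) / workers;
    std::vector<std::future<bool>> parts;
    for (size_t i = 0; i < txs.size(); i += chunk) {
        const size_t end = std::min(txs.size(), i + chunk);
        parts.push_back(std::async(std::launch::async, [&txs, i, end] {
            return std::all_of(txs.begin() + i, txs.begin() + end,
                               CheckInputs);
        }));
    }
    bool ok = true;
    for (auto& f : parts) ok = f.get() && ok;
    return ok;
}
```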
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
"It can become the reference client for all that is left of Bitcoin."

Anathema detected.

"BU has a unique position here. Not a main party in the war, it can serve those that would switch for profit. It can provide the best telemetrics and controls for miners to wage such wars."

Why would miners not keep their "best telemetrics and controls" to themselves?
It makes no sense to me that they would come to BU as an open source client for that, although it might be good to keep an open ear.
I also think BU should strive for a path of transparency in Bitcoin development.

It really amazes me, the issues nChain is willing to start a hash war over now, all in the name of "current miners must learn". Surely in the best interests of the protocol. What's the road to hell paved with again?
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Bitcoin ABC developer Shammah Chancellor just wrote an article about canonical (aka "lexical") transaction order. May be of interest for those about to cast their ballots for BUIP096.

Sharding Bitcoin Cash

(@Norway you wanted an argument for the change, maybe this will help)
This is weird. I thought the motivation for canonical order was to optimize the effect of Graphene. Changing motives makes me suspect that the real motive is something else.

Quoting Shammah from the article:

Some people have asked that ABC produce performance benchmarks of how this optimization could work. As stated above, no such benchmarks can be produced since the software must first exist. As this will take multiple years, benchmarks cannot be done on it — real engineering must be done in advance to plan for it. A summary of that engineering work is manifested above.
This is just a huge red flag for me. We are supposed to change the protocol in a big way, for reasons that are not possible to prove today.
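For reference, whatever its merits, the ordering rule itself is simple to state: every transaction in a block except the coinbase is sorted lexicographically by txid, replacing today's topological (dependency) order. A minimal sketch (hypothetical helper, not ABC code):

```cpp
#include <algorithm>
#include <string>
#include <vector>

// Rewrite a block's transaction list into canonical ("lexical") order:
// the coinbase stays first, everything else is sorted by txid, so the
// order no longer encodes spend dependencies.
void MakeCanonicalOrder(std::vector<std::string>& txids)
{
    if (txids.size() > 1)
        std::sort(txids.begin() + 1, txids.end());
}
```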