Gold collapsing. Bitcoin UP.

bitsko

Active Member
Aug 31, 2015
730
1,532
The Overton window shifts in the other direction too, and it makes me happy to think BIP101 can be considered far too conservative.

I used to be a big fan of BIP101; now I see it as a potential limiter and a thing whose minutiae can be dickered over endlessly.

And again, without settling the issue by removing the limit: if, by the grace of god or worldwide adoption, it becomes a restriction, then as I said before, there will always be a market for a blockchain that is unrestricted in scale; there may not be a market for multiple restricted blockchains.

Of course this is not an invitation to dicker about the specifics, which can itself be a tool to filibuster, nor can there be a guarantee of consensus at a later date if need be.

When people say 'we (who is "we", really?) CAN change it at a later date', the emptiness of those words should be apparent even to the writer of them.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
oh brother:

 
  • Like
Reactions: AdrianX

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
rain's free
 
  • Like
Reactions: Norway

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
The Overton window shifts in the other direction too, and it makes me happy to think BIP101 can be considered far too conservative.

I used to be a big fan of BIP101; now I see it as a potential limiter and a thing whose minutiae can be dickered over endlessly.

And again, without settling the issue by removing the limit: if, by the grace of god or worldwide adoption, it becomes a restriction, then as I said before, there will always be a market for a blockchain that is unrestricted in scale; there may not be a market for multiple restricted blockchains.

Of course this is not an invitation to dicker about the specifics, which can itself be a tool to filibuster, nor can there be a guarantee of consensus at a later date if need be.

When people say 'we (who is "we", really?) CAN change it at a later date', the emptiness of those words should be apparent even to the writer of them.
I think there's a massive misunderstanding going on about this.

People say "just remove the limit completely."

Then when asked what a miner should do if he receives a 2 GB block the day after he removes the limit, they reply "miners will set a reasonable limit and orphan such a ridiculous block."

But this is just a re-invention of the block size limit. Both ABC and BU already allow miners to very easily adjust their block size limits -- in BU they don't even have to restart their node.

The point is that miners want to have the same block size limit as the other miners, to avoid accidentally being partitioned from the network should some elephant block arrive. It comforts them to have a common policy on how to deal with such blocks.

BCH developers are not controlling the block size limit. On the contrary, they are keen to design tools for miners that make increases to the block size limit as frictionless as possible.

My suggestion: Rather than saying "just remove the limit," pretend that the limit is already removed but for whatever reason miners have decided to orphan blocks larger than 32 MB. Now try to come up with a low-friction way for them to coordinate increases to that limit, as needed. There are at least three good ideas:

1. Stick to a schedule like BIP101 (maybe faster, doubling every 18 months instead of every 24 months).

2. Use miner voting via BIP100.

3. Use Emergent Consensus (EC) along with BUIP055 signalling.

All of these would work as permanent solutions. But just saying "remove the limit and let miners figure it out" won't resolve anything because that is essentially what we have today.
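As a rough illustration of how option 2 could work in principle, here is a simplified Python sketch. This is not the actual BIP100 rules; the function, the 75% threshold, and the doubling cap are made up for the example. Each block's coinbase carries the limit its miner is willing to support, and the limit for the next period becomes the largest value a supermajority of recent blocks endorsed, with the per-period change capped.

```python
# Simplified miner-voting sketch (NOT the actual BIP100 rules): the next
# period's limit is the largest value endorsed by a supermajority of the
# votes found in recent coinbases, with the per-period change capped.

def next_limit(current_limit_mb, votes_mb, supermajority=0.75, max_step=2.0):
    """Return the block size limit (in MB) for the next adjustment period."""
    if not votes_mb:
        return current_limit_mb
    votes = sorted(votes_mb, reverse=True)        # largest votes first
    cutoff = int(len(votes) * supermajority) - 1  # deepest vote a 75% majority shares
    candidate = votes[max(cutoff, 0)]
    # Cap the change so the limit at most doubles (or halves) per period.
    return min(max(candidate, current_limit_mb / max_step),
               current_limit_mb * max_step)

# Example: out of 2016 blocks, ~80% vote for 64 MB and the rest for 32 MB,
# so the limit steps up from 32 MB to 64 MB.
votes = [64] * 1600 + [32] * 416
print(next_limit(32, votes))  # 64
```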
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@micropresident :

I just saw your tweet:



And, quoted again, because tweet embeds are usually broken in this forum for some reason:
Why Canonical Tx Ordering? It allows massive scale in a way that TTO doesn't. CTO allows sharding creation and validation of blocks across multiple threads or machines for massive scale. This isn't about simple parallelization. CTO is part of how we get to VISA throughput.
But what makes sharding impossible or harder with the current order?
The TXID is available to route to a shard in both cases.
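For concreteness, a minimal sketch of that point (the shard count and routing rule are arbitrary, purely for illustration): the TXID alone determines the shard, so the block's ordering rule does not change which shard a transaction lands on.

```python
# Minimal sketch: a transaction can be assigned to a validation shard from its
# TXID alone, so the block's ordering rule (TTO or CTO) does not change which
# shard a given transaction lands on.
import hashlib

NUM_SHARDS = 8  # hypothetical shard count

def txid(raw_tx: bytes) -> bytes:
    """Double-SHA256, as Bitcoin uses for transaction IDs."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

def shard_for(tx_id: bytes) -> int:
    """Route by a prefix of the TXID; any deterministic function would do."""
    return tx_id[0] % NUM_SHARDS

block = [b"tx-one", b"tx-two", b"tx-three"]  # stand-ins for raw transactions
for raw in block:
    print(raw, "-> shard", shard_for(txid(raw)))
```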
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@Peter R Generally agreed about the block size limit.

The only thing I would add is that there are various technical bottlenecks that need to be removed as the limit is raised. Most of them are nothing major, but they are fixes that need to happen along the way. For this reason, it makes sense for developers of different implementations to coordinate, and inform miners what limit they consider safe at any given time.

For example, in the past the "quadratic hashing" issue became a problem around 1 MB blocks, so that needed to be fixed (at first only by a transaction size limit, but it is now permanently fixed with the BIP 143 SigHash). In ABC, there was a Compact Blocks index that needed to be increased to go past 8 MB, and the network message size also needed to be fixed to get up to the 32 MB limit.

From what I understand, the next bottlenecks that need to be removed to go beyond 32 MB are the mining RPC (Get Block Template) and the mempool admittance code. Nothing fancy is needed, but a bunch of development work is required to fix this stuff.
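For readers unfamiliar with the quadratic hashing issue mentioned above, a toy model of why it bites (the 180-byte input size is an assumed average, not a measured figure): under the legacy sighash each input signs a digest of roughly the entire transaction, so the total bytes hashed grow with the square of the input count.

```python
# Toy model of the "quadratic hashing" problem: under the legacy sighash,
# each input hashes a message of (roughly) the whole transaction, so the
# total bytes hashed grow with n_inputs * tx_size ~ n_inputs ** 2.
def legacy_sighash_bytes(n_inputs: int, bytes_per_input: int = 180) -> int:
    tx_size = n_inputs * bytes_per_input  # ignore outputs for simplicity
    return n_inputs * tx_size             # each input re-hashes the whole tx

for n in (100, 1_000, 10_000):
    print(f"{n:>6} inputs -> {legacy_sighash_bytes(n) / 1e6:,.0f} MB hashed")
# 100x more inputs means ~10,000x more hashing, which is why large legacy
# transactions could take minutes to validate before BIP 143-style hashing.
```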
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
My suggestion: Rather than saying "just remove the limit," pretend that the limit is already removed but for whatever reason miners have decided to orphan blocks larger than 32 MB.
Let me edit that quote:
My suggestion: Rather than saying "just remove the limit," pretend that the limit is already removed but for whatever reason miners have decided to orphan blocks larger than 1 MB.
The miners set bitcoin back several years and hurt their own business because of this weird mental idea that they couldn't raise the blocksize.


The point is that miners want to have the same block size limit as the other miners, to avoid accidentally being partitioned from the network should some elephant block arrive. It comforts them to have a common policy on how to deal with such blocks.
Yes, they want a safe and predictable space. But is this the right environment for them? They are competing like crazy on the mining itself. Why shouldn't they have a competitive environment for increased block space?

There should be real incentives for the development of custom node ASICs and optimized software. When 51% of the hashpower can handle 5 GB blocks (and there is a demand for this), the lazy / cheap pools should have a reason to upgrade, not hold the majority back. You snooze, you lose.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Peter R Generally agreed about the block size limit.

The only thing I would add is that there are various technical bottlenecks that need to be removed as the limit is raised. Most of them are nothing major, but they are fixes that need to happen along the way. For this reason, it makes sense for developers of different implementations to coordinate, and inform miners what limit they consider safe at any given time.

For example, in the past the "quadratic hashing" issue became a problem around 1 MB blocks, so that needed to be fixed (at first only by a transaction size limit, but it is now permanently fixed with the BIP 143 SigHash). In ABC, there was a Compact Blocks index that needed to be increased to go past 8 MB, and the network message size also needed to be fixed to get up to the 32 MB limit.

From what I understand, the next bottlenecks that need to be removed to go beyond 32 MB are the mining RPC (Get Block Template) and the mempool admittance code. Nothing fancy is needed, but a bunch of development work is required to fix this stuff.
Thanks for bringing up a role that developers can and do play in giving miners confidence to raise the block size limit. This was a big part of the motivation behind the Gigablock Testnet experiments: to identify the sustained load where "off the shelf" BU software began to hiccup, and to identify the bottlenecks responsible for those hiccups.

What we found is that things started to break down at around 100 TPS, due to the mempool admission bottleneck that you pointed out. Incidentally, 100 TPS corresponds to always-full 32 MB blocks (or very close to that). I believe the Gigablock experiments helped make the miners comfortable lifting the limit from 8 MB to 32 MB.

By the way, @theZerg demonstrated over 1000 TPS once the mempool bottleneck was removed, and then we hit a second bottleneck due to block propagation / validation at around 500 TPS (which corresponds to always-full 150 MB blocks).
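For anyone who wants to check those TPS-to-block-size conversions, the arithmetic is simple if we assume an average transaction of roughly 500 bytes and a 600-second block interval (both round-number assumptions, not Gigablock measurements):

```python
# Back-of-envelope check of the TPS <-> block size figures above, assuming an
# average transaction of ~500 bytes and a 600-second block interval.
AVG_TX_BYTES = 500
BLOCK_INTERVAL_S = 600

def tps_for_block_mb(block_mb: float) -> float:
    return block_mb * 1e6 / AVG_TX_BYTES / BLOCK_INTERVAL_S

print(f"32 MB blocks  ~ {tps_for_block_mb(32):.0f} TPS")   # ~107 TPS
print(f"150 MB blocks ~ {tps_for_block_mb(150):.0f} TPS")  # ~500 TPS
```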
What is nice about BIP101 is that everyone would know where the target capacity is at any point in the future, and would work towards achieving that capacity. Let's imagine we double the block size limit every 18 months:

2018: 32 MB
2019.5: 64 MB
2021: 128 MB
2022.5: 256 MB
2024: 512 MB
...

It might spur a competition between implementations to be the first to demonstrate that their software could handle the next increase. Or perhaps BU makes a breakthrough and proves "our newest release has capacity that's 6 years ahead of schedule" in order to win over more miners.
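The schedule above is trivial to generate programmatically, which is part of its appeal; a quick sketch:

```python
# The doubling schedule above, generated programmatically: start at 32 MB in
# 2018 and double every 18 months.
def schedule(start_year=2018.0, start_mb=32, doublings=6):
    year, limit = start_year, start_mb
    for _ in range(doublings):
        yield year, limit
        year += 1.5   # 18 months
        limit *= 2

for year, limit in schedule():
    print(f"{year:g}: {limit} MB")
# 2018: 32 MB, 2019.5: 64 MB, 2021: 128 MB, 2022.5: 256 MB, 2024: 512 MB, ...
```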
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
2018: 32 MB
2019.5: 64 MB
2021: 128 MB
2022.5: 256 MB
2024: 512 MB
This could be way too slow. It's hard to predict the future for a network like bitcoin. The Russians are famous for their five-year plans, and for how they failed.

I can't understand why we have to bring along all the miners/mining pools, no matter how little they invest in node hardware, network connections and software optimizations.

I get the feeling that we are treating the miners as kids in one of these races where everybody gets a medal.

If the 49% of miners with the slowest nodes are threatened by blocks they can't handle, they are incentivized to get their shit together and upgrade.

The hardware in the Gigablock Testnet was not very expensive AFAIK. And it gets cheaper and faster every year.
 

shadders

Member
Jul 20, 2017
54
344
I think there's a massive misunderstanding going on about this.

People say "just remove the limit completely."

Then when asked what a miner should do if he receives a 2 GB block the day after he removes the limit, they reply "miners will set a reasonable limit and orphan such a ridiculous block."

But this is just a re-invention of the block size limit. Both ABC and BU already allow miners to very easily adjust their block size limits -- in BU they don't even have to restart their node.

The point is that miners want to have the same block size limit as the other miners, to avoid accidentally being partitioned from the network should some elephant block arrive. It comforts them to have a common policy on how to deal with such blocks.

BCH developers are not controlling the block size limit. On the contrary, they are keen to design tools for miners that make increases to the block size limit as frictionless as possible.
If you remove the limit entirely, the limit defaults to whatever block size makes the node run out of memory and crash in the current implementations. So there has to be a limit, at least until nodes are completely rearchitected.

As you say, the max block size is already configurable in BU and ABC. In ABC the setting is buried among debug settings that require a -help-debug switch to even see. I doubt most miners even know it's there. The practical effect is that the default IS the max block size. The changes we propose for SV (detailed in a CoinGeek post today) are not about changing existing functionality (we are just making the setting much more prominent) but about changing the miner mindset: making miners understand that this setting is there, that they CAN use it, and that it is THEIR decision to do so in concert with other miners.

The point is that miners want to have the same block size limit as the other miners, to avoid accidentally being partitioned from the network should some elephant block arrive.

I refer to the max produced block size as the soft limit and the max accepted as the hard limit.

What matters to miners is that their hard limit is above the majority of other miners' soft limits. The fact that these two limits exist, and that they are usually quite a distance apart, allows plenty of room for miners to gradually creep these limits up in response to network activity and other miners' actions, with a healthy safety margin.
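To restate that soft/hard limit relationship in code (the class and the numbers are purely illustrative, not actual node settings):

```python
# Sketch of the soft/hard limit relationship: a miner produces blocks up to
# its soft limit and accepts blocks from others up to its hard limit, so the
# safety condition is that your hard limit sits above other miners' soft limits.
from dataclasses import dataclass

@dataclass
class MinerPolicy:
    soft_limit_mb: float  # largest block this miner will produce
    hard_limit_mb: float  # largest block this miner will accept

    def accepts(self, block_mb: float) -> bool:
        return block_mb <= self.hard_limit_mb

miners = [MinerPolicy(8, 32), MinerPolicy(16, 32), MinerPolicy(8, 128)]

# A 16 MB block produced under the second miner's soft limit is accepted by
# everyone, because every hard limit (32, 32, 128) exceeds that soft limit.
print(all(m.accepts(16) for m in miners))  # True

# A 64 MB block would partition the network: only the third miner accepts it.
print([m.accepts(64) for m in miners])     # [False, False, True]
```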
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
This could be way too slow. It's hard to predict the future for a network like bitcoin. The Russians are famous for their five-year plans, and for how they failed.
@Norway imagine BU's EB setting being updated as per this schedule: it gives us an unlimited block size relatively quickly and dictates a minimum performance requirement. The thing with the EB cap is that it's not hard "hard"; it's also not set in stone and can be made bigger without issue. (By contrast, BU has adjusted the recommended EB down from 16 to 8 to be compatible with ABC.) BIP101 sets the target EB in stone and ends with 2GB for safety.

Should demand approach the BIP101 cap it can just be moved by changing the actual limit.

Why I like it: if we ever come to a stalemate over the block size, it's just a temporary 18-month barrier (77,700 actual blocks).
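For reference, that block count follows from assuming roughly 144 blocks per day (one every ten minutes) over 18 thirty-day months:

```python
# ~144 blocks/day (one per 10 minutes) over 18 months of 30 days each
print(18 * 30 * 144)  # 77760, i.e. roughly the 77,700 quoted above
```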

they are incentivized to get their shit together and upgrade.
Maybe they have their shit together. If I've invested to handle 100 TPS and I'm seeing 0.25 TPS, I'd think it's reasonable to see at least 3 TPS before I upgrade to manage 500 TPS.
 

bitsko

Active Member
Aug 31, 2015
730
1,532
Perhaps a poll is in order:

What is your acceptable required cost of node operation? Preexisting hardware? $500 system? $5000 system? $20000 system? modem? Fibre?

If limits are being set for _safety_, for the safety of _whom_? Where is the bar?
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Should demand approach the BIP101 cap it can just be moved by changing the actual limit.

Why I like it: if we ever come to a stalemate over the block size, it's just a temporary 18-month barrier (77,700 actual blocks).
256 MB until 2024 (512 MB after 2024) is crazy tiny. At that pace, bitcoin has probably failed. This is just setting us up for the same failure as Core.

Why should the miners not be pushed to upgrade their node hardware when the demand picks up?

In theory, the miners should be incentivized to increase the block space to receive more fees. But, as we saw in December, the miners made a lot more in fees when the blocks were full.