Gold collapsing. Bitcoin UP.

shadders

Member
Jul 20, 2017
54
344
IBS is probably the most significant scaling tech for bitcoin ever invented. It would be nice if Thomas and I got some recognition for our paper in the future, but the most important thing is that the Bitcoin SV node is most likely going to implement it.
(Just to be clear: I don't know if @shadders & Co came up with the same idea independently or if they got inspiration from us.)
It was independent, but I'm not surprised others came up with it. I am surprised it took ten years for anyone to come up with it. Once you move to an append-only block template building model it seems obvious. But it's not the first obvious idea I've seen stay undiscovered for a long time until trigger conditions arose...

P.S. Don't know when you first floated IBS but I first discussed this with Craig early to mid last year as I was looking for a response to graphene... As far as I recall I wasn't aware of the IBS paper at the time. But I will of course give credit where due for some innovative thinking.

Agree it is a game changer. Graphene is still block size dependent in terms of scalability. IBS and its siblings are not.
 
I don't know if BCH will manage to raise the block size limit before it's needed or whether they'll fall into the BTC trap. I think the problem BSV faces (other than the obvious) is the same as with BCH vs BTC: it doesn't become an issue until it becomes an issue. While some can see a crisis coming ahead of time, it seems that a crucial majority are happy to keep on keeping on as long as the price is on the up, even as the inevitable gets ever closer.
When you only lift the limit when it's needed, you have a developer committee centrally planning capacity.

You could argue this is needed to prevent centralization, like anti-monopoly authorities, though.
 

trinoxol

Active Member
Jun 13, 2019
147
422
Germany
What I don't like about IBS is that all miners have to stream data to all other miners. That is O(N^2) and can lead to excessive costs.

1 TB blocks mean about 1.7 GB/s per outgoing stream (1 TB over a 10-minute block interval). Times 100 receiving miners, that is on the order of 1 Tbit/s of network usage!
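Spelling that arithmetic out (using the post's own assumptions of a 10-minute block interval and 100 receiving miners; nothing here is a measurement):

```python
# Back-of-the-envelope bandwidth for streaming a 1 TB candidate block to 100 peers.
block_size_bytes = 1e12       # 1 TB candidate block
block_interval_s = 600        # average 10-minute block interval (assumption)
peers = 100                   # assumed number of receiving miners

per_stream = block_size_bytes / block_interval_s        # ~1.67 GB/s per peer
total_upload = per_stream * peers                       # ~167 GB/s upload
total_upload_tbit_s = total_upload * 8 / 1e12           # ~1.3 Tbit/s

print(f"per stream:   {per_stream / 1e9:.2f} GB/s")
print(f"total upload: {total_upload_tbit_s:.2f} Tbit/s")
```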

In the current model, this is not the case. When a new block is discovered it is sent to *some* peers. Then those peers spread it further in a gossiping fashion.

@shadders Can you comment?
 
  • Like
Reactions: Dubby the Goldfish

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
the argument that folks like @Erdogan @solex @jessquit continuously bring to the table about how BCH is unlimited since miners are free to lift the limit if they want to irks the hell outta me.

it's disingenuous as hell because it tries to have it both ways when it's really a binary philosophy, game theory wise. either you believe in a limit or you don't. by leaving a limit in place, not only do they have to actually go about saying the software has propagation and validation limitations to justify it, but even when they claim they don't have a limit, they send a not so subtle message to miners that they better not produce blocks over the limit because they're likely to get rejected due to technical difficulties and the reality that there is widespread miner coordination in place to use ABC code with the limit left untouched out of a fear that the devs might know best. this part of it really is best described as a Stockholm Syndrome. they also know that it's not merely changing the number in the code. several lines need changing. no miner wants to risk a mistake trying this. and I argue the devs are exploiting the limit and q6mo updates every time they demand to get paid in one way or the other. this is an open source public good project furchrissakes.

This limit also sends a huge message to the market that ABC code is run by a technical community that wants to maintain control. reference the comments made by @jtoomim when he flat out says the limit "will not be lifted until we say so". do you really want childlike technical apparatchiks/anarchists like this running your monetary system vs what we currently have with the fiat system, which has at least brought us a certain level of prosperity over the last hundred years and actually doesn't limit the number of people who can use the dollar system tx wise if you're legitimate? note the difference; expecting adoption on a coin with a limit can result in preventing even legitimate actors from participating due to limited tx throughput.
 
Last edited:

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
P.S. Don't know when you first floated IBS but I first discussed this with Craig early to mid last year as I was looking for a response to graphene... As far as I recall I wasn't aware of the IBS paper at the time. But I will of course give credit where due for some innovative thinking.

Agree it is a game changer. Graphene is still block size dependent in terms of scalability. IBS and its siblings are not.
We published it in November last year in this thread. Keep unfucking bitcoin!

Incremental Block Synchronization

Thomas Bakketun, thomas@bitcoin.no
Stein Håvard Ludvigsen, sh@bitcoin.no


Incremental Block Synchronization (IBS) is a method for nodes of the Bitcoin network to reach blockchain consensus faster. No changes in the consensus rules are required.


The mining nodes of Bitcoin will seek to form a small world network, where each node is directly connected to almost all other nodes of the network. Each node is working on extending the blockchain with their own block. Let’s call that their candidate block.


In IBS, candidate blocks are built append only. Updates are continuously shared with the network.


IBS is not a block relay method, where a block is transmitted via several hops. Block relay will still be needed occasionally.


https://www.bitcoin.no/IBS.pdf
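A toy sketch of the append-only candidate idea (not code from the paper; class and method names are invented for illustration):

```python
# Toy sketch of an append-only candidate block whose updates are shared
# incrementally. Invented names; not the reference implementation from IBS.pdf.
from dataclasses import dataclass, field

@dataclass
class CandidateBlock:
    txids: list = field(default_factory=list)  # append-only list of txids
    sent_upto: int = 0                         # how much has already been broadcast

    def append(self, txid: str) -> None:
        self.txids.append(txid)

    def next_increment(self) -> list:
        """Return only the txids added since the last broadcast."""
        delta = self.txids[self.sent_upto:]
        self.sent_upto = len(self.txids)
        return delta

# The miner streams small increments as transactions arrive, so when a block
# is finally found, peers already hold its contents and only the winning
# header needs to be announced.
cand = CandidateBlock()
cand.append("tx1"); cand.append("tx2")
print(cand.next_increment())   # ['tx1', 'tx2'] -> broadcast to peers
cand.append("tx3")
print(cand.next_increment())   # ['tx3'] -> only the new data is sent
```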
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
In the current model, this is not the case. When a new block is discovered it is sent to *some* peers. Then those peers spread it further in a gossiping fashion.
I assume you mean (mining) nodes when you write peers. Nodes are incentivized to be directly connected to all the other nodes. That's why they form a near complete graph. As a miner, you will normally receive a fresh block directly from the miner who produced it.
 
  • Like
Reactions: sgbett and torusJKL

trinoxol

Active Member
Jun 13, 2019
147
422
Germany
OK, I guess that is true. Is that actually the case today? Do miners do this?

But also, without IBS there is only one block propagated. With IBS there are 100 candidates, of which 99 are discarded. No matter how you look at it, IBS goes from linear to quadratic bandwidth usage.
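Counting full block-body transfers per block interval makes the point (a deliberate simplification that ignores thin-block tricks and overlap between candidates):

```python
# Rough count of full block-body transfers per block interval.
n = 100  # number of mining nodes (assumption)

gossip_transfers = n            # one winning block reaches each node roughly once: O(N)
ibs_transfers = n * (n - 1)     # every candidate streamed to every other node: O(N^2)

print(gossip_transfers, ibs_transfers)   # 100 vs 9900
```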
 
  • Like
Reactions: Norway

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
even our beloved @Mengerian, ABC dev galore, understands the intimidation of a limit:



https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-571#post-20567
Similarly, I think Bitcoin nodes and miners are engaged in commitment strategies to defend their chosen Schelling points. Miners sacrifice resources to produce the proof of work, making it a strong Schelling point to form consensus on. The 1MB blocksize limit is another Schelling point that most nodes have implicitly committed to defend. As things currently stand, they will ignore the longest proof-of-work chain if it contains >1MB blocks. If most nodes make this threat, it will deter miners from producing such blocks. Where things could get interesting is if these Schelling points come into conflict with a significant proportion of nodes abandoning the 1MB Schelling point. How committed would the 1MB Schelling point defenders be? The cost they are committing to could be quite high, it is the risk of falling out of consensus with the economic majority.

Looking at things this way also reinforces the utility of nodes advertising the Schelling points they are committed to defend, as Unlimited does with the block size settings in the user agent string. It is an implicit threat to the miners: cross this line and we will orphan your blocks.
@SPAZTEEQ

they'll deny everything I've said above: while maintaining a limit, adopting ABC's checkpoints, and becoming incompatible with BSV.
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
then i'm confused by this:

I think the problem BSV faces (other than the obvious) is the same as with BCH vs BTC: it doesn't become an issue until it becomes an issue.

BSV does not have that problem. or at least it won't once the limit is removed in Feb.
 
  • Like
Reactions: Norway

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
When you only lift the limit when it's needed, you have a developer committee centrally planning capacity.
My point is that it's not an issue for the larger community until it actually starts causing problems. Whatever the position of most of us in this thread, we are used to paying fairly close attention and extrapolating conditions. When we see Bitcoin BTC is going to hit the 1MB limit in two years' time, we're like, "Guys, isn't it time we start doing something?" But there's a large part of the community which is all about moon lambos and trusting "the world's greatest developers" as long as the price is still wandering upwards. This was, and continues to be, BCH's problem with displacing BTC, and will be BSV's problem with displacing BCH if they fail to address the limit (which may be several years away as an issue yet).

Of course, if you buy into BSV as a filesystem, you'll probably take a different position, but that's a different discussion.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
the argument that folks like @Erdogan @solex @jessquit continuously bring to the table about how BCH is unlimited since miners are free to lift the limit if they want to irks the hell outta me.

it's disingenuous as hell because it tries to have it both ways when it's really a binary philosophy, game theory wise. either you believe in a limit or you don't. by leaving a limit in place, not only do they have to actually go about saying the software has propagation and validation limitations to justify it, but even when they claim they don't have a limit, they send a not so subtle message to miners that they better not produce blocks over the limit because they're likely to get rejected due to technical difficulties and the reality that there is widespread miner coordination in place to use ABC code with the limit left untouched out of a fear that the devs might know best. this part of it really is best described as a Stockholm Syndrome. they also know that it's not merely changing the number in the code. several lines need changing. no miner wants to risk a mistake trying this. and I argue the devs are exploiting the limit and q6mo updates every time they demand to get paid in one way or the other. this is an open source public good project furchrissakes.

This limit also sends a huge message to the market that ABC code is run by a technical community that wants to maintain control. reference the comments made by @jtoomim when he flat out says the limit "will not be lifted until we say so". do you really want childlike technical apparatchiks/anarchists like this running your monetary system vs what we currently have with the fiat system, which has at least brought us a certain level of prosperity over the last hundred years and actually doesn't limit the number of people who can use the dollar system tx wise if you're legitimate? note the difference; expecting adoption on a coin with a limit can result in preventing even legitimate actors from participating due to limited tx throughput.

of course, the next argument brought to bear will be that ABC will obviously increase the limit when it's necessary. really? @solex, what happened to the well accepted understanding that we're running out of time (in fact, we are running out of time given all the improvements in the fiat system stimulated by the appearance of BTC 10y ago), esp in light of our other well accepted understanding that protocol ossification is a real thing? BCH supporters are spending so much time telling us why we need the 32MB limit in place that soon it will become cemented in their brains, much like the 1MB became gospel once Core started beating it into their heads. as well, we now have apparatchiks like @jtoomim postponing any blocksize increase for ABC far off into the unforeseen future until he says it's allowed. i say it will never come, given what we know happened to Core and his propensity to continuously come up with FUD as to why BCH can't scale. esp when shitlords like Amaury are successful in receiving a windfall of BCH after whining about not getting paid. why else does the q6mo update schedule exist except to give protocol devs something to point to as "necessary" work to justify donations/pay? you may try to say this is necessary work, but then you'd have to explain why the base original protocol on both BTC and BSV (no longer BCH, as it has rolling checkpoints) has never successfully been attacked, and how in the heck is BSV producing these monster blocks onchain without killing off its smaller miners via verification or propagation delays. @Otaci's data shows that even non-mining BSV nodes were able to digest these blocks w/o delays.
 
Last edited:

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
@cypherdoc, my point is that "BCH will hit the limit in 5 years" is not an issue for most of the crowd but only becomes an issue at the end of that 5 years.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
OK, I guess that is true. Is that actually the case today? Do miners do this?

But also, without IBS there is only one block propagated. With IBS there are 100 candidates, of which 99 are discarded. No matter how you look at it, IBS goes from linear to quadratic bandwidth usage.
All block candidates are streamed to all miners, but only 1 will win. You send/receive more data, but the bandwidth and processing power needed is a lot less because the huge "block found" spike is removed.

Imagine you find a 1 TB block. You want to send this as fast as possible to the other 99 miners. We are talking a few seconds here. This requires you to have a lot of bandwidth available. With IBS, you stream the 1 TB to the 99 other miners over 10 minutes on average. More data, less bandwidth. Block propagation happens in milliseconds, regardless of the size of the block.

And no, it's not going to quadratic bandwidth. The bandwidth each node needs is linear/proportional to the number of mining nodes. And the number of nodes is limited naturally by economics. If your mining operation finds 1 block per month on average, you want to join a pool to average out the economic risk related to statistical variance.
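To put numbers on it (the 5-second burst window is my own assumption; the rest follows the 1 TB / 10 minute / 99 peer figures above):

```python
# Peak bandwidth: shipping a 1 TB block to 99 peers in a burst vs streaming it.
block_bytes = 1e12
peers = 99

spike_window_s = 5                                                 # assumed burst window
spike_tbit_s = block_bytes * peers / spike_window_s * 8 / 1e12     # ~158 Tbit/s burst

stream_window_s = 600                                              # 10-minute block interval
stream_tbit_s = block_bytes * peers / stream_window_s * 8 / 1e12   # ~1.3 Tbit/s sustained

print(f"burst:     {spike_tbit_s:.0f} Tbit/s for {spike_window_s}s")
print(f"streaming: {stream_tbit_s:.1f} Tbit/s sustained")
```

Same total data either way; the difference is the peak you have to provision for.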
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
why is IBS an improvement over xthins, ignoring that xthins was the root of the bug attacks x3, and that patches have been applied?
@cypherdoc, my point is that "BCH will hit the limit in 5 years" is not an issue for most of the crowd but only becomes an issue at the end of that 5 years.
then good luck to BCH in removing a limit that allows for the shifting sands of protocol manipulation q6mo that justifies getting paid. IOW, can you imagine protocol devs whining/justifying for pay if the protocol is locked down?
 
  • Like
Reactions: Norway

trinoxol

Active Member
Jun 13, 2019
147
422
Germany
That's a good point, @Norway! It's more volume but less bandwidth.

I wonder if this streaming to other miners can be made more efficient with use of the cloud. Instead of sending it across the internet, the data can be placed in AWS S3 blob storage. Reading a blob from inside the same region does not cause network charges. It causes IO charges, but those are much lower.

Or, you connect VM to VM over TCP inside of the same region. No need to pay for blobs that way. VM networking is free but you have to buy a VM that comes with appropriate bandwidth.

I think the best solution would be if someone made a software package that you can use to instantiate a scalable node in an AWS region of your choice, inside your own AWS account. That software could manage the node, all the networking, the block building and control of mining hardware. The mining hardware itself can live anywhere in the world. It's just a dumb box.

Essentially, everything a miner needs, as a service, with the power of the cloud. No need to manage any hardware. Very little need to manage software. No long-term contracts. Essentially, one guy can control everything from his laptop with a few clicks.
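A very rough sketch of the S3 variant (bucket name, key scheme and polling are my own assumptions; a real deployment would need auth, error handling and a proper notification mechanism):

```python
# Hypothetical: share append-only candidate-block increments via same-region S3.
import boto3

s3 = boto3.client("s3")
BUCKET = "example-miner-candidate-stream"  # hypothetical bucket in the same region as the peers

def publish_increment(seq: int, data: bytes) -> None:
    """Miner side: upload the next append-only chunk of the candidate block."""
    s3.put_object(Bucket=BUCKET, Key=f"candidate/{seq:012d}", Body=data)

def fetch_increment(seq: int) -> bytes:
    """Peer side: read chunk `seq` from within the same region (no egress charge)."""
    obj = s3.get_object(Bucket=BUCKET, Key=f"candidate/{seq:012d}")
    return obj["Body"].read()
```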
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
then good luck to BCH in removing a limit that allows for the shifting sands of protocol manipulation q6mo that justifies getting paid. IOW, can you imagine protocol devs whining/justifying for pay if the protocol is locked down?
Yes. It's a curse with two edges. It both removes the urgency for fixing things and fails to incentivise moving to other solutions which *have* addressed the issue in a timely manner. I would say that BCH has a while to go before it becomes troubling to me, but they need to be moving on the issue in the next 12-18 months.
I will add that, to me, the fact that BCH is focusing on other technological developments when the main cause for the fork was the block size limit is not very comforting.
 
  • Like
Reactions: bsdtar