Gold collapsing. Bitcoin UP.

albin

Active Member
Nov 8, 2015
931
4,008
@cypherdoc

That is a really good question. I would've assumed that the use of sequence numbers would allow subsequent tx's to suoercede prior ones, but thinking a little deeper about it it's not like there's a tx id or anything that can be easily matched. There's no BIP or documentation I can really find anywhere so maybe the only option is to look at the code.
 

Taek

New Member
Dec 26, 2015
2
9
Hey, Taek42 from Reddit dropping in. A lot of misunderstandings in this thread. I am with family this weekend, so I don't have a ton of time. I'm going to try to hit the big ones, and I'll probably skip some. Keep asking questions, and I'll keep walking through the way I see things.

My own information is incomplete. My own experiences are imperfect. If there's something you think I am not understanding, or an important aspect that I am missing, let me know and we'll walk through it until everyone is on the same page.

The biggest idea here that I am in disagreement with is that no block size limit is safe.

A very significant discovery was made: At any fee level, there is a limit to how much data the network can process. Miners will not produce blocks larger than this limit because those blocks will necessarily be orphaned.

This is true! But it ignores a super important idea: different parts of the network have different throughput. For the rest of this example, I'm going to assume an infinite amount of transactions with an unlimited fee, because that's the assumption under which a fundamental block size limit was proven.

If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.

If by 'network', you mean all nodes... today we already have nodes that can't keep up. So by necessity you are picking a subset of nodes that can keep up, and a subset that cannot. So, now you are deciding who is safe to prune. Raspi's? Probably safe. Single merchants that run their own nodes on desktop hardware? Probably safe. All desktop hardware, but none of the exchanges? Maybe not safe today. But if you've been near desktop levels for a while, and slowly driving off the slower desktops, at some point you might only be driving away 10 nodes to jump up to 'small datacenter' levels.

And so it continues anyway. You get perpetual centralization pressure because there will always be that temptation to drive off that slowest subset of the network since by doing so you can claim more transaction fees.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Taek

Welcome to the forum!

I think most people here agree with (most of) what you just wrote (I know I do). Like you said, it is related to the idea that eventually people running nodes on raspberry-pi's and slow Internet connections won't be able to keep up.
 
Last edited:

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@Roger_Murdock
So in that world, can't we assume that miners would attempt to come up with some kind of dynamic algorithm for figuring out which chain is most likely to win based on both length and likely propagation speed based on the size of the blocks it contains.
That's a striking paradigm shift: as blocksizes grow, the miner's job becomes to identify which chain is most likely to win. It's like "Bitcoin the DAC" is paying miners for this service. Miners will have to be experts at that or lose money. (Yet more responsibilities taken away from the devs and left to the market.)

Thus consensus is this weird self-referential Keynesian beauty contest type thing that perhaps cannot be pinned down in any pat term like "longest valid chain by PoW difficulty." It's a true market process involving the subjective judgments and profit/loss calculations of each miner (as well as others) about what other miners and stakeholders will do, with PoW providing the framework for Schelling consensus, reminiscent of a trellis with vines growing on it.

Miners, not devs, become the experts in determining which chain is most likely to win. It's only natural that the division of labor specializes as Bitcoin grows. They will likely develop algorithms and all sorts of other tools for doing this. The blocksize is thus self-controlling by the dynamics governing the whole system, and would almost certainly be raised conservatively with careful consideration of network conditions with a sophistication neither we nor the Core devs could match.

This is part of a larger paradigm shift: how does Bitcoin grow organically on the clunky parameter "matrix" Satoshi laid down for it? How does the market smooth out the rough edges and optimize things? I think this is one of the answers.
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
Hey, Taek42 from Reddit dropping in. A lot of misunderstandings in this thread. I am with family this weekend, so I don't have a ton of time. I'm going to try to hit the big ones, and I'll probably skip some. Keep asking questions, and I'll keep walking through the way I see things.

My own information is incomplete. My own experiences are imperfect. If there's something you think I am not understanding, or an important aspect that I am missing, let me know and we'll walk through it until everyone is on the same page.

The biggest idea here that I am in disagreement with is that no block size limit is safe.

A very significant discovery was made: At any fee level, there is a limit to how much data the network can process. Miners will not produce blocks larger than this limit because those blocks will necessarily be orphaned.

This is true! But it ignores a super important idea: different parts of the network have different throughput. For the rest of this example, I'm going to assume an infinite amount of transactions with an unlimited fee, because that's the assumption under which a fundamental block size limit was proven.

If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.

If by 'network', you mean all nodes... today we already have nodes that can't keep up. So by necessity you are picking a subset of nodes that can keep up, and a subset that cannot. So, now you are deciding who is safe to prune. Raspi's? Probably safe. Single merchants that run their own nodes on desktop hardware? Probably safe. All desktop hardware, but none of the exchanges? Maybe not safe today. But if you've been near desktop levels for a while, and slowly driving off the slower desktops, at some point you might only be driving away 10 nodes to jump up to 'small datacenter' levels.

And so it continues anyway. You get perpetual centralization pressure because there will always be that temptation to drive off that slowest subset of the network since by doing so you can claim more transaction fees.
i really disagree with this outlook. not b/c it is necessarily wrong, but b/c it ignores an ever increasingly valuable network that your scenario of ever increasingly big blocks entails.

i submit that if all these pools can indeed produce ever increasingly large blocks, by definition, that means they are processing ever increasing numbers of tx's and fees. in that world, the exchange price should go up in tandem as @Peter R has shown with his graph. if the price continues to follow the square of the number of tx's upwards, all current full node operators will be able to continue/afford operating those nodes if they so choose. plus, an ever increasing number of merchants will be coming onboard to service all these new users and their tx's who will easily be able to afford running full nodes. the same dynamic will apply to miners; all these bigger blocks with fees will encourage an expansion of mining in the West leveraging it's BW advantages thus increasing decentralization. keeping mining centralized at 60% in China makes no sense at all.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
Welcome to the forum, @Taek!
If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.
Right, but I think you may be assuming that Bitcoin Unlimited forces the miner/node to accept blocks of all sizes and mine on them. It's actually a setting that the user can adjust. This results in an effective emergent blocksize cap, which limits how many times you can do that pruning (probably to very few times, if any).

So I think you may be seeing BU as having a bigger scope than it does. It merely gives miners/nodes a user-friendly menu to adjust some settings so that the network cap emerges dynamically. It is not unlimited in the sense of "everyone must accept giga-blocks." It's unlimited in the sense of unfettered by a development committee's decisions and also able to be adjusted on the fly.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
A very significant discovery was made: At any fee level, there is a limit to how much data the network can process. Miners will not produce blocks larger than this limit because those blocks will necessarily be orphaned.
There are a few subtleties regarding what exactly the discovery was. In my opinion, it was always pretty clear that the average block size was limited by the rate at which the network could propagate and verify transactions and blocks. However, I previously thought that an individual block could be almost any size (without a block size limit)--just that such "terablocks" were astronomically unlikely and very costly.

I was able to calculate the cost of these "spam" blocks in my fee market paper for various values of the network propagation impedance:



The graph implies that a spam attacker will eventually succeed at some finite cost (the cost is just very high).

What Andrew has shown is that the network-capacity limitations actually apply on a per block basis when game theoretic considerations are made. This is a stronger claim than just the average block size being limited.

With his new framework, I think the contour lines on the above chart will become "asymptotes" at a certain point instead. My fee market paper underestimated the cost of block space for very large blocks, when game-theory considerations are introduced.

 

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
Hey, Taek42 from Reddit dropping in. A lot of misunderstandings in this thread. I am with family this weekend, so I don't have a ton of time. I'm going to try to hit the big ones, and I'll probably skip some. Keep asking questions, and I'll keep walking through the way I see things.

My own information is incomplete. My own experiences are imperfect. If there's something you think I am not understanding, or an important aspect that I am missing, let me know and we'll walk through it until everyone is on the same page.

The biggest idea here that I am in disagreement with is that no block size limit is safe.

A very significant discovery was made: At any fee level, there is a limit to how much data the network can process. Miners will not produce blocks larger than this limit because those blocks will necessarily be orphaned.

This is true! But it ignores a super important idea: different parts of the network have different throughput. For the rest of this example, I'm going to assume an infinite amount of transactions with an unlimited fee, because that's the assumption under which a fundamental block size limit was proven.

If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.

If by 'network', you mean all nodes... today we already have nodes that can't keep up. So by necessity you are picking a subset of nodes that can keep up, and a subset that cannot. So, now you are deciding who is safe to prune. Raspi's? Probably safe. Single merchants that run their own nodes on desktop hardware? Probably safe. All desktop hardware, but none of the exchanges? Maybe not safe today. But if you've been near desktop levels for a while, and slowly driving off the slower desktops, at some point you might only be driving away 10 nodes to jump up to 'small datacenter' levels.

And so it continues anyway. You get perpetual centralization pressure because there will always be that temptation to drive off that slowest subset of the network since by doing so you can claim more transaction fees.
Welcome Taek, nice of you to come join us here. Take your time responding and enjoy the weekend with your family. :)

I do think you might be missing one important aspect, I have explained it further earlier today in this thread, so take a look at what I say there. In conclusion however I think that you are mistakenly equating pool centralization with mining centralization. These are separate but related phenomena, importantly the centralization pressures are not always the same. Miners direct their hash power towards 10-20 pools in a fashion which is comparable to a representative democracy. This fundamentally changes the dynamic and understanding of what mining centralization means. Mining decentralization is the measure of how the hashing power is distributed, not the distribution of pools.

The incentive mechanism acts upon the miners, if the underlying game theory is correct, miners would not consciously undermine the value proposition of Bitcoin and therefore their own investment, they would not allow a pool to act maliciously in such a way for a sustained period of time, or even allow a pool to grow to large enough to be able to do this in the first place. Unless mining itself becomes to centralized which would pervert the entire incentive structure. Increasing the blocksize would not effect mining centralization however. You are correct in pointing out this hypothetical attack vector, however this is only true for pool centralization and ultimately it is the people that control the hashing power that determine the size of the pools.

In regards to your second point, there are valid concerns regarding node centralization and externalizing the cost of an increased blocksize upon the full node operators. I do not think that this would be due to the hypothetical attack you mention since I do not think that such an attack would be feasible for the reasons I already mentioned above, unless of course mining became to centralized at which point however we would have even greater problems. So this does lessen the extend of the problem as you describe it. However I think that under Bitcoin Unlimited there effectively would still be a blocksize limit, it just changes how this limit is decided upon. I am still going through the process of trying to understand what this means and how this would actually look, also from a governance perspective. I think that Peter R summed it up well earlier in this thread.
Peter R said:
There exists a natural game-theoretic block size limit.
 
Last edited:

Members online