Gold collapsing. Bitcoin UP.

jtoomim

Active Member
Jan 2, 2016
130
253
It's not about miners. It's about pools. Miners will switch pools based on whichever is offering the best revenue and UI at the time. Pools have very tight margins (running on a 1% to 3% fee) and can double their income if they can lower their orphan rates by 2% relative to everyone else (or raise everyone else's). If large pools have an inherent advantage, then the large pools will get larger.

Individual miners usually choose which pool to mine with based on how it affects their own revenue, not how it affects Bitcoin as a whole. It's a tragedy of the commons scenario.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
I know. I've been arguing against the tragedy-of-the-commons logic since day one of my entry into Bitcoin. There's no such thing in a sound-money-incentivized system when your new money has the potential to go to parabolic levels.

Plus, pool operators, being the least powerful link/broker in the mining system, can't afford to cheat like this or else be ostracized/punished by fleeing hashers. We've seen this before with GHash.IO.
The same psychology that permeates the mind of a hodler permeates the mind of a miner: don't sell and don't cheat. One could say these are the two pillars of Bitcoin. Or that they are the same thing, as miners certainly can be hodlers, as @Jihan has demonstrated. Hodlers and miners have a never-ending, unflappable dedication to the system, even in the face of tremendous odds or temptations.
Look at yourself, @jtoomim. As a pool operator, why aren't you consistently pumping out 32 MB blocks right now, trying to shake out all the smaller miners to your own advantage?
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
preliminary observations:

- so far BMG, Coingeek, and BTC.com have mined > 10MB blocks. (Coingeek finally dropped their 2MB soft cap.)

- meanwhile antpool and BTC.TOP are mining < 100kB blocks. presumably they want to monitor mempool growth. (the difference in fees b/w a 67kB and a 13MB block is only USD$80 anyway).

- even with mempool at 32 MB, 0-conf chugging right along (e.g. posts go live on memo right away).

- explorers, APIs under duress.

- (*) green candles showing up (post hoc ergo propter hoc?)

- (**) memo.cash crashed 3 hours into the test. steel getting tempered.
 

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
a new narrative (in good measure, the BU narrative) has been imposed: it's not about the blocksize limit, it's about scaling the software.

this is not a detail in technical emphasis. this is how we've moved beyond the core wars.

game on.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
> The issue is that orphan rates cause mining unfairness.
We need to put a number on this; you are halfway there.

> A pool will never orphan their own blocks,
That's a bad assumption: miners orphan their own blocks every day; they have a financial incentive to follow the longest chain. What is truer is that a pool will never build on a block that takes longer than 9.5 minutes to verify.

> That would give an overall orphan rate of 1 - e^(-18.75/600) = 3.07%, which in turn gives a 30% pool a 0.92% advantage.
Now put that in perspective: what percentage of blocks are actually orphaned? Going by the few pools that report this data, it's about 0.01%.

So 0.01% of the time, bigger pools have a 0.92% advantage over smaller pools. This has been discussed a lot in this thread; @Peter R and Gavin Andresen also did back-of-the-envelope calculations, and I consider the effect insignificant. Effectively, a 10-degree difference in temperature on any given day has more impact on mining profitability. It is not a justification for a limit.

Reducing block orphaning is the responsibility of the miner/pool; it is not part of the protocol. The protocol adjusts the difficulty to account for the orphan rate. It is a feature, not a bug.
> The 1.6 MB/s of block size throughput figure was for crossing the entire network, not one hop.
WTF? Miners validated a 15 MB block on mainnet and are all building on top of it.
> I recommend rewinding and watching both talks in their entirety, as they both have a lot of useful information in them.
@jtoomim I recommend an upgrade: please look into using Xthin, implement parallel block validation, and do your tests again. Your test results and conclusions are outdated.
> I don't believe that it takes 18.75 seconds to send 1.25 MB data between two parties that are highly incentivized to have a good connection.
Not bullshit, but it could be Blockstream. Xthin is just a better way to do it. @jtoomim please have a look at these articles by @Peter R:

https://medium.com/@peter_r/towards-massive-on-chain-scaling-presenting-our-block-propagation-results-with-xthin-da54e55dc0e4
It's the Fidelity effect all over again.

Conservative Devs (or those with hidden agendas)... If they come, we'll build it.
Everyone who wants adoption and real global use... If we build it, they'll come.
An amazing consequence of the Bitcoin design is that the building part is incentivized by a for-profit motive. Historically, it is developers who have gotten in the way, insisting on transaction-capacity quotas to protect the for-profit builders from building too fast.
 

BldSwtTrs

Active Member
Sep 10, 2015
196
583

At 7min38, "Wormhole appears to be a bit of an attack on Bitcoin Cash because you need to destroy bch in order to receive this wormhole tokens".
The level of understanding of the interviewer is embarrassingly low.
 

wrstuv31

Member
Nov 26, 2017
76
208
> This ignores the fact that in an uncapped system that promises unlimited growth, there is infinite motivation for legions of new miners to enter the space to compete which by itself makes it impossible for a large miner to even run away.
Great point; overall hashrate grows even during a short-term governance event (orphaning).
> It's a tragedy of the commons scenario.
The dynamics are nothing like that.
 
> a new narrative (in good measure, the BU narrative) has been imposed: it's not about the blocksize limit, it's about scaling the software.
>
> this is not a detail in technical emphasis. this is how we've moved beyond the core wars.
>
> game on.
Good point. Today the discussion shifted from limiting the blocksize to technically enabling bigger blocks. I hope this vibe keeps running.

We have the absolute nightmare of the whole small-block narrative: large block limits and a network DoS'ing itself with spam. And still, it doesn't break. It's wonderful to see that my node, on my personal system, has absolutely no problem keeping up with 50 MB of block production in one hour and a constant flow of 20-30 tps.

It's also wonderful to see miners limiting the traffic by mining small blocks from time to time. Under heavy load, miners need to steward the rate of blockspace production, and it is good to see they do it responsibly. All miners.

I don't know how ABC and XT are doing, but BU runs very stably and consumes few system resources. Maybe it's because of ParVal, Xthin/Graphene and other improvements. Thank you, BU developers.

One important insight seems to be that Graphene is a bit more stable under heavy load than Xthin. But both are good enough to handle this volume. Another insight (for me) is that blocksize is actually restricted by the miner's ability to build a block template in time. That's good, because it is a technical limit, not a political, developer-planned one.

Some services broke or were temporarily frozen/slow, but most seem to be holding up.
 

jtoomim

Active Member
Jan 2, 2016
130
253
> @jtoomim I recommend an upgrade: please look into using Xthin, implement parallel block validation, and do your tests again. Your test results and conclusions are outdated.
We did exactly that. That was the gigablock testnet initiative spearheaded by BU. That test is where the 1.6 MB/s figure came from. Without Xthin, the block propagation speed equals the network throughput speed divided by the number of hops (plus validation time between hops), which ends up around 30 kB/s to 60 kB/s. Xthin multiplies that 60 kB/s number by around 26, giving us the 1.6 MB/s number that we've come to enjoy today.
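For concreteness, here is that arithmetic as a small sketch. The ~600 kB/s link throughput and 10-hop path are purely illustrative assumptions of mine; the ~60 kB/s per-hop result and the ~26x Xthin compression factor are the figures quoted above.

```python
# Sketch of the propagation-rate estimate above; link speed and hop count are illustrative assumptions.
link_throughput_kb_s = 600    # assumed effective per-link throughput, kB/s (illustrative)
hops = 10                     # assumed number of hops to cross the network (illustrative)
xthin_compression = 26        # approximate Xthin compression factor quoted above

rate_without_xthin = link_throughput_kb_s / hops          # ~60 kB/s, matching the figure above
rate_with_xthin = rate_without_xthin * xthin_compression  # ~1560 kB/s, i.e. roughly 1.6 MB/s

print(f"without Xthin: {rate_without_xthin:.0f} kB/s, with Xthin: {rate_with_xthin / 1000:.1f} MB/s")
```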

> WTF? Miners validated a 15 MB block on mainnet and are all building on top of it.
Yes, a 15 MB block is fine. The expected full-network propagation time for a 15 MB block is (15 MB / (1.6 MB/s)) = 9.375 seconds. That block took about 1.2 seconds for my node to validate, and probably about 2.4 seconds for the getblocktemplate call, giving a total propagation time (including network and CPU-bound steps) of 13 seconds. The orphan rate from a 13 second delay would be 1 - e^(-13/600) = 2.14%. A pool with 30% of the network hashrate would only get 70% of that, or 1.50%. This gives them an advantage of 0.64% over smaller pools. In my opinion, this magnitude of advantage is acceptable. 15 MB blocks are fine given current technology.
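The same arithmetic as a short script; the 1.6 MB/s rate, 1.2 s validation time, and 2.4 s getblocktemplate estimate are the figures above, and the orphan-rate expression is the 1 - e^(-t/600) formula used throughout this thread:

```python
import math

# Worked example for a 15 MB block, using the figures quoted above (a sketch, not pool software).
block_mb = 15.0
xthin_rate_mb_s = 1.6        # full-network propagation rate with Xthin
validation_s = 1.2           # observed validation time for that block
getblocktemplate_s = 2.4     # estimated getblocktemplate time

delay_s = block_mb / xthin_rate_mb_s + validation_s + getblocktemplate_s   # ~13 s total
orphan_rate = 1 - math.exp(-delay_s / 600)                                 # ~2.1%

pool_share = 0.30
pool_orphan_rate = (1 - pool_share) * orphan_rate   # a 30% pool never orphans its own blocks
advantage = orphan_rate - pool_orphan_rate          # ~0.6 percentage points

print(f"delay ≈ {delay_s:.1f} s, orphan rate ≈ {orphan_rate:.2%}, "
      f"30% pool sees ≈ {pool_orphan_rate:.2%}, advantage ≈ {advantage:.2%}")
```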

> Not bullshit, but it could be Blockstream. Xthin is just a better way to do it. @jtoomim please have a look at these articles by @Peter R
How ironic. The 1.6 MB/s figure comes from @Peter R himself. He described it in his talk as 0.6 seconds per MB, but it's the same number; I just prefer the reciprocal formulation. So you're questioning my numbers, which I cited from Peter R, and as counter-evidence you're citing ... Peter R?

> Now put that in perspective: what percentage of blocks are actually orphaned? Going by the few pools that report this data, it's about 0.01%.
Yes, if average blocksizes are about 70 kB, as they have been over the last year, then you will see an orphan rate close to 0.01%. If you increase the average blocksize by a factor of 1000 to 70 MB, then you'll see an orphan rate close to 10%. I claim that 10% is too much, and that we should limit it to about 4% in the worst-case scenario. With current technology, that would be about 30 MB.

Your 0.01% number is pretty close to correct, by the way. There have been 4 orphaned blocks since BCH started at block #478559, up to #546047. That gives an average orphan rate of about 0.0059%. With only 4 observed orphans, though, the error margins on that estimate are pretty large.
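For reference, the observed-orphan arithmetic, using the block heights and orphan count stated above:

```python
# Observed orphan rate since the BCH fork, using the figures above.
orphans = 4
blocks = 546047 - 478559           # number of BCH blocks in the quoted range
rate = orphans / blocks
print(f"{blocks} blocks, {orphans} orphans -> observed orphan rate ≈ {rate:.4%}")   # ≈ 0.0059%
```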
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
> I recommend rewinding and watching both talks in their entirety, as they both have a lot of useful information in them.
@jtoomim I was referring to the information presented in your second video; it's out of date.

Re your approval of the 15 MB block size: it's not the limit that dictates the size, it's the miners' capacity. Glad to know you approve of a blocksize higher than the 4 MB blocks you purportedly struggle with; we're making progress ;-).

> I personally will not be mining any blocks larger than 8 MB. I already know that the pool software I use (p2pool) can't handle anything beyond 8 MB, and struggles with 4 MB. I also know that Bitcoin ABC was strained a bit in getblocktemplate by 5 MB blocks during the Aug 1st test run, so I would not be surprised if all other pools choose to limit their blocks to 8 MB as well.
Still, the network would have the same capacity limits if the consensus limit were moved from 32 MB to 26,000 MB. It seems others would have the same problems as you in composing, propagating, and validating blocks. Those who invest in scale would obviously be able to compose bigger blocks with minimal validation impact on those who still mine small blocks.

> giving a total propagation time (including network and CPU-bound steps) of 13 seconds.
good thing we have 10 minutes.

> The orphan rate from a 13 second delay would be 1 - e^(-13/600) = 2.14%.
Why, if you are validating in parallel and doing headers-first mining for the first 13 seconds?

> Yes, if average blocksizes are about 70 kB, as they have been over the last year, then you will see an orphan rate close to 0.01%. If you increase the average blocksize by a factor of 1000 to 70 MB, then you'll see an orphan rate close to 10%.
So the orphan rate increasing with block size is essential to the Bitcoin design; it encourages small blocks.

Anyway, that estimate seems high to me; it would imply a validation time closing in on 2-5 minutes on average. But it is good news: as a miner, I can say I would rather charge a higher fee and make a smaller block than risk a 10% orphan. If I understood correctly, I don't think we saw that high an orphan rate with the Gigablock testnet and Xthin; things started breaking down over 100 MB.
 

jtoomim

Active Member
Jan 2, 2016
130
253
> @jtoomim I was referring to the information presented in your second video; it's out of date.

Yes, I know it's out of date. That's why I posted it second, and why I use numbers from the first video whenever possible. However, the second video covers a lot of material that the first video does not address, like China. The Gigablock initiative explicitly avoided having any nodes in China, which is a big departure from the actual Bitcoin mining network. The 9 MB blocks in my tests often took around 200 seconds to cross the China border, whereas the non-China nodes got the same block in about 20 seconds, or 10x faster. Xthin will reduce all of those numbers, but will not change that ratio.

> it's not the limit that dictates the size, it's the miners' capacity. Glad to know you approve of a blocksize higher than the 4 MB blocks you purportedly struggle with; we're making progress ;-).

The software stack I currently use for mining has only 2-4 MB capacity for block generation with acceptable performance, but up to 30 MB for block receiving and validation with acceptable performance. That 30 MB receive limit comes from Bitcoin ABC's block propagation algorithm, and applies to basically everyone. The 2-4 MB generation limit comes from p2pool's inefficient code, and only applies to me as long as I choose to use p2pool. I use a receive limit of 32 MB and a generation limit of 4 MB. I'm arguing for keeping the consensus limit of about 32 MB until Graphene is mature. That's all.

> Why, if you are validating in parallel and doing headers-first mining for the first 13 seconds?

Headers-first mining is not implemented in any full-node software, nor is it implemented in any open-source pool software that I know of except p2pool (which has its own scaling issues). I do not think that it is a good idea to require miners to develop their own software in order to be able to mine competitively. If they're told they have to either hire a developer for $20,000 to write them good pool software, or suffer a 4% orphan rate disadvantage, or pay a large pool a 2% fee, they're going to choose to join the large pool. We don't want them to do that. If we had HFM in BU or ABC or XT, then I agree, that would change things. But we currently don't have that feature. The only pools that have HFM are the large ones with >10% of the network hashrate, and those are exactly the pools that we want to avoid encouraging people to use.

> Anyway, that estimate seems high to me; it would imply a validation time closing in on 2-5 minutes on average

No. 5 minutes would be a 39% orphan rate. 2 minutes is an 18% orphan rate. The formula is

1 - e^(-t / T)

where t is the delay and T is the 600 sec block interval. A 10% orphan rate would be a delay of about 63.25 seconds. At 1.6 MB/s for Xthin, that would imply a block size of about 101 MB.
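Inverting that formula gives the delay, and hence the block size at 1.6 MB/s, corresponding to a given orphan rate. A small sketch reproducing the 39%, 18%, and 10% figures above:

```python
import math

T = 600.0            # average block interval, seconds
xthin_rate = 1.6     # MB/s, Xthin full-network propagation rate quoted above

def delay_for_orphan_rate(p):
    """Delay t (seconds) such that 1 - exp(-t / T) = p."""
    return -T * math.log(1 - p)

for p in (0.39, 0.18, 0.10):
    print(f"{p:.0%} orphan rate -> delay ≈ {delay_for_orphan_rate(p):.1f} s")
# 39% -> ~297 s (≈ 5 min), 18% -> ~119 s (≈ 2 min), 10% -> ~63.2 s

block_mb = delay_for_orphan_rate(0.10) * xthin_rate
print(f"10% orphan rate at {xthin_rate} MB/s -> block size ≈ {block_mb:.0f} MB")   # ≈ 101 MB
```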

> But it is good news: as a miner, I can say I would rather charge a higher fee and make a smaller block than risk a 10% orphan

That's only true if you are a small miner. If you have enough hashrate, then you won't see as much of an orphan rate increase as your competition will, because the block propagation time from your node to your node is 0. When your competition's blocks get orphaned, that lowers the difficulty and increases your revenue. So if your increased orphan rate risk is smaller than your competition's increased orphan rate risk, you can actually benefit from making your blocks bigger. In the extreme case, if you have 51% of the network hashrate, you'll see a near-0% orphan rate no matter what, so you should make your blocks as big as possible.

You can still benefit from making large blocks if you only have 30% of the network hashrate. If you can get your block quickly to at least 40% of the rest of the network, then it will end up being at least 70% mining in your support and less than 30% mining against you. This becomes mathematically equivalent to the Outcast attack. Due to the heavy packet loss when crossing the China border, the Outcast attack tends to naturally happen. My estimate is that Coingeek would actually benefit slightly if they consistently mined 101 MB blocks compared to mining 1 MB blocks, even if they did not collect any transaction fees.

The presence of transaction fees to offset the orphan rate risk makes this worse. A transaction fee that offsets 90% of the marginal orphan rate risk for a 1% pool will offset 129% of the marginal orphan rate risk for a large pool. The large pool will then mine 101 MB blocks and see a 0.1 * (1 - 0.3) = 7% loss in revenue from orphan rates, but will gain 9% from transaction fees, for a net of +2%. Meanwhile, everyone else on the network will see a 0.1 * 0.3 = 3% loss in revenue from orphan rates, causing the difficulty to fall by 4.2%. This increases the revenue of the large-block mining pool by 6.2%, while everyone else nets only about 1.2% (the 4.2% difficulty drop minus their 3% orphan loss). That attracts more miners to the large pool, and the large pool gets even larger, strengthening their advantage. Furthermore, users learn that they can get away with lower fees, so fewer transactions are issued with enough fees for small pools to take advantage of.
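A sketch of that bookkeeping as I read it. All the percentages are the figures in the paragraph above; the interpretation that "everyone else" nets the difficulty drop minus their own orphan loss is mine.

```python
# Sketch of the revenue arithmetic above (my reading of the figures; not anyone's pool model).
orphan_rate = 0.10       # worst-case orphan rate for a ~101 MB block
big_share = 0.30         # large pool's share of network hashrate

big_orphan_loss = orphan_rate * (1 - big_share)     # 7%: the large pool only races the other 70%
fee_gain = 0.09                                     # fees sized to offset ~90% of a tiny pool's ~10% risk
small_orphan_loss = orphan_rate * big_share         # 3%: everyone else, per the paragraph above

# Network-wide orphan losses lower the difficulty, raising per-hash revenue for surviving blocks.
difficulty_drop = big_share * big_orphan_loss + (1 - big_share) * small_orphan_loss   # 4.2%

big_net = fee_gain - big_orphan_loss + difficulty_drop       # ≈ +6.2%
small_net = difficulty_drop - small_orphan_loss              # ≈ +1.2%
print(f"large pool net ≈ {big_net:+.1%}, everyone else ≈ {small_net:+.1%}")
```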
 

wrstuv31

Member
Nov 26, 2017
76
208
> If they're told they have to either hire a developer for $20,000 to write them good pool software, or suffer a 4% orphan rate disadvantage, or pay a large pool a 2% fee, they're going to choose to join the large pool.

Or they create software that is better than the large pool's and take its market share, making more money overall.

> I'm arguing for keeping the consensus limit of about 32 MB until Graphene is mature. That's all.

aka we need to subsidize you until you're ready

> The only pools that have HFM are the large ones with >10% of the network hashrate, and those are exactly the pools that we want to avoid encouraging people to use.

It's the Raspberry Pi argument, except for pools now.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
I think @jtoomim makes some good points.

> I'm arguing for keeping the consensus limit of about 32 MB until Graphene is mature. That's all.

that's reasonable.


I think a hard limit is good; miners need to be able to confidently reject blocks that are "too big". The problem is that as the tech improves, nodes will be able to handle bigger and bigger blocks, so this hard limit is going to need to be bumped up every so often, and that means a HF every so often. That's why it would be nice if all miners implemented EC.
 