Gold collapsing. Bitcoin UP.

jtoomim

Active Member
Jan 2, 2016
130
253
It's the raspberry pi argument except for pools now.
There is a small amount of truth to that, but it's very small.

The Raspberry Pi argument was that the minimum acceptable hardware spec should be no higher than an RPi. That's a very low bar.

The argument I'm making is that the minimum acceptable hardware+software spec should be a high-end server running the best open-source software available that isn't in a beta or alpha state. I am noting that currently, no released software can handle more than 100 tx/sec (22 MB-ish) in AcceptToMemoryPool, and that no released software can propagate blocks over long-distance internet trunk lines faster than 1.6 MB/s except for the buggy beta Graphene release. So I think 32 MB is a pretty reasonable and fairly high bar.
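Back-of-the-envelope on that figure (a sketch; the ~370-byte average transaction size is my assumption, not a measured constant):

```python
# Rough arithmetic behind "100 tx/sec (22 MB-ish)".
TX_RATE = 100          # tx/sec, the ATMP ceiling in released software
BLOCK_INTERVAL = 600   # seconds, average time between blocks
AVG_TX_SIZE = 370      # bytes, assumed average transaction size

block_bytes = TX_RATE * BLOCK_INTERVAL * AVG_TX_SIZE
print(f"{block_bytes / 1e6:.1f} MB per block")  # ~22.2 MB
```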

But yeah, I acknowledge that it's a qualitatively equivalent class of argument. It's just a matter of degree.

Or, they create software that is better than the large pool and take their market share, making more money overall.
It's unlikely that a small miner will have the resources to rival the software development ability of a large pool. Insofar as the capacity for gaining an advantage through proprietary full-node software exists, it makes it likely that large pools and miners will get larger.
 
  • Like
Reactions: Peter R

jtoomim

Active Member
Jan 2, 2016
130
253
1 hour 17 minute interval between #546103 and #546104. That mostly explains the size. Nice to know that at least someone has enabled > 16 MB block mining, though.

At this point, I think it's very likely that we're running into the AcceptToMemoryPool bottleneck. Several of us have noticed some strangeness in the way that scale.cash behaves, and it seems to be happening at about the throughput rate that we would expect the ATMP issue to show up, given the data we got from the Gigablock Testnet Initiative, and the difference in performance and peer counts between the GTI and typical mainnet nodes.

https://www.reddit.com/r/btc/comments/9c8tv2/either_atmp_or_scalecash_is_bottlenecking_the/
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Isn't it a bit of a red herring anyway? Mining doesn't need to be where the pool is located. If you're located in an area with poor connectivity but low electricity cost, locate your pool where the connectivity is good and just pass back the relatively small data that needs to be mined.
 

jtoomim

Active Member
Jan 2, 2016
130
253
All serious pools are located in major datacenters with 100 Mbps connectivity. Datacenters in China are well connected to other datacenters in China. Datacenters outside of China are well connected to datacenters outside of China. Datacenters in China have terrible connectivity to datacenters outside of China. So if you want to have good connectivity to the rest of the Bitcoin network, then either all of the Bitcoin network needs to be inside China, or all of it needs to be outside of China. Since we will never be able to agree on which of those is the right option, we have to deal with the fact that many pools will have bad connectivity to other pools.

Even if you have good connectivity, the nature of TCP gives you far less throughput than you would expect. TCP uses a congestion control algorithm that limits the number of packets in flight to the TCP congestion window (cwnd). When a packet makes the trip successfully, cwnd gets increased by one. When a packet is dropped or times out, cwnd gets decreased by e.g. 50%. This is known as additive increase/multiplicative decrease (AIMD) feedback control. With this feedback, cwnd can double each round trip time (RTT) during the initial ramp-up. Thus, if your RTT is 1 ms, you'll send 1 packet at t=0 ms, 2 packets at t=1 ms, 4 packets at t=2 ms, 1024 packets at t=10 ms, etc., until you reach the capacity of your network.

That works pretty well in low-latency networks, but in high-latency networks, things start to suck. If your RTT is 200 ms, then it can take 2 seconds before you're able to scale your bandwidth to 1024 packets per 200 ms, or 7.6 MB/s. During those first two seconds, you will have sent a total of only 2047 packets, or 3 MB (1.5 MB/s). So long-distance, high-latency links are, even in ideal circumstances, only able to provide high bandwidth after they've been transmitting for a few seconds.
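To make that concrete, here's a toy model of the ramp-up (a sketch, not real TCP: it assumes cwnd doubles every RTT with zero loss, which roughly reproduces the numbers above):

```python
# Toy slow-start ramp on a 200 ms RTT link with 1500-byte packets.
RTT, PACKET = 0.200, 1500   # seconds, bytes
cwnd, sent, t = 1, 0, 0.0
while cwnd <= 1024:         # grow until 1024 packets per RTT (~7.7 MB/s)
    sent += cwnd * PACKET   # send a full window...
    t += RTT                # ...and wait one round trip for the ACKs
    cwnd *= 2
print(f"{t:.1f} s to ramp up, {sent / 1e6:.2f} MB sent, "
      f"{sent / t / 1e6:.2f} MB/s average during the ramp")
# -> ~2.2 s, ~3.07 MB (2047 packets), ~1.4 MB/s average vs a 7.7 MB/s peak
```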

But that's only for ideal situations. Things get really bad when you start adding packet loss to the mix. Let's say you have a 50% decrease in cwnd for each lost packet, and a packet loss rate of 5% (fairly good for communication across the Chinese border). On average, one packet in 20 is dropped, so between drops cwnd grows linearly by about 20, and you reach equilibrium when the halving after each drop cancels that growth: (20 + x)*0.5 = x, so x = 20. With 5% packet loss, cwnd will oscillate between 20 and 40. At 1500 bytes per packet, that's an average of 45 kB per round trip time, or 225 kB/s for a 200 ms RTT. This is completely independent of your local pipe bandwidth, so even if you have a 40 Gbps connection, you're only going to get 225 kB/s through it per TCP connection.

And that's with a 5% packet loss rate. 5% is a *good day* in China for cross-border communication. On an average day, it's about 15%. On a bad day, packet loss is around 50%. With 50% packet loss, your average cwnd will be 2, and you'll get about 15 kB/s.

Yes, 15 kB/s. Even if you have a 1 Gbps pipe. I've seen it happen dozens of times.
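The equilibrium math generalizes: with loss rate p, about 1/p packets arrive between drops, each adding 1 to cwnd, and each drop halves cwnd, so (1/p + x)*0.5 = x gives a floor of x = 1/p and an oscillation between 1/p and 2/p. A quick sketch of that simplified model (real TCP does even worse at high loss because of timeouts and back-to-back drops, which is why 50% loss lands nearer 15 kB/s than the model's 22 kB/s):

```python
# Loss-limited TCP throughput under the simplified AIMD model above.
PACKET, RTT = 1500, 0.200   # bytes per packet, seconds per round trip

def throughput(p):
    floor = 1.0 / p                  # equilibrium cwnd floor, in packets
    avg_cwnd = 1.5 * floor           # midpoint of the [1/p, 2/p] sawtooth
    return avg_cwnd * PACKET / RTT   # bytes per second

for p in (0.05, 0.15, 0.50):
    print(f"{p:.0%} loss -> ~{throughput(p) / 1e3:.0f} kB/s per connection")
# 5% -> ~225 kB/s, 15% -> ~75 kB/s, 50% -> ~22 kB/s (worse in practice)
```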

Why does this happen in China? It has nothing to do with technology, actually. China could easily get packet loss to 0% if they wanted to. They just don't want to, because it does not align with their strategic goals.

China has three major telecommunications companies: China Unicom, China Telecom, and China Mobile. Of the three, China Mobile mostly just does cell phones and is of only tangential relevance. CT and CU are the big players. Both CT and CU have a policy of keeping their international peering links horribly underprovisioned. Why? Because there's no money to be made off of peering. By making peering slow and lossy, they can drive their international customers to pay a premium for bandwidth that doesn't suck.

And boy do they charge a premium. Getting a 1 Mbps connection from China Telecom in Shenzhen to Hong Kong (20 km away! but it crosses the China border) can cost $100 per month. Getting a 1 Mbps connection from Shenzhen to Los Angeles (11,632 km), on the other hand, will only cost about $5.

Yes, the longer the route, the cheaper the bandwidth is. That is not a typo.

China Unicom and China Telecom both charge more for shorter connections because they can. They have a government-enforced duopoly, so in the absence of competition or net neutrality laws, they charge whatever they think they can get away with, regardless of how much the service actually costs them to provide.

Because the China-USA and China-Europe connections are cheaper than the China-Asia ones, most routers in Asia are configured to send data to the USA or Europe first if the final destination or origin is China. This is known in network engineer circles as the infamous Asia Boomerang. Bulk traffic from Shenzhen to Hong Kong will often pass through Los Angeles because that's the most economical option. This adds an extra 250 ms of unnecessary latency, and wreaks all sorts of havoc on TCP congestion control.

China Mobile, on the other hand, is usually willing to engage in fair peering practices abroad and does not charge predatory rates. Unfortunately, they mostly only serve mobile phones and rarely have fixed line offerings, so they aren't in direct competition with CT and CU for most of the market. But if you ever find yourself in China having trouble accessing websites abroad, setting up your phone as a mobile hotspot will likely give you better bandwidth than using the 200 Mbps fiber optic connection in your office.

So... do you put all your pools inside China, where most of the hashrate is? Or do you put the pools outside China, where friendlier governments and better telecommunications are? Or do you write a new protocol like Graphene that compresses data so much that it doesn't matter if you only get 15 kB/s? Or -- and this is my favorite option -- do you stop using TCP altogether and switch to UDP with forward error correction?
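To illustrate the UDP+FEC idea: instead of paying a round trip (and a cwnd collapse) to retransmit every lost packet, you send redundant packets so the receiver can reconstruct losses locally. Here's a toy sketch using single-loss XOR parity; real schemes use Reed-Solomon or fountain codes and tolerate multiple losses per group, so this is the concept only, not any shipping protocol:

```python
# Toy forward error correction: k data packets plus 1 XOR parity packet.
# The receiver can rebuild any single lost packet without a retransmit.
from functools import reduce

def xor(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode(packets):                 # k equal-sized packets in, k+1 out
    return list(packets) + [reduce(xor, packets)]

def recover(received):               # exactly one entry is None (lost)
    hole = received.index(None)
    received[hole] = reduce(xor, [p for p in received if p is not None])
    return received[:-1]             # drop the parity, return the data

group = [b"aaaa", b"bbbb", b"cccc", b"dddd"]
wire = encode(group)
wire[2] = None                       # simulate a drop in transit
assert recover(wire) == group        # rebuilt with zero round trips
```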

One thing is certain: you don't blame miners for being in remote regions with poor connectivity. That just isn't what's going on at all.

@Norway @Richy_T @awemany
 
Last edited:

reina

Member
Mar 10, 2018
33
92
It really amazes me, the issues nChain is willing to start a hash war over now, all in the name of "current miners must learn". Surely in the best interests of the protocol. What's the road to hell paved with again?
Neither nChain nor Craig said they are opposing the ABC changes just to start a war. That is NewLiberty's own claim about the intent.

This might be NewLiberty's amalgam of the general philosophy on the nature of Bitcoin that Craig has described before (that Bitcoin will need to be resilient, that it grows stronger with each attack, that this is how Bitcoin works, martial-arts philosophy, etc.), but in case it hasn't been as clear as day from Craig's Twitter feed:

Craig absolutely hates PoS.

The very idea makes him vomit. Consider the combination: ABC seeking to change the base protocol outside the scope of its original 0.1 features (changes which, in his opinion, happen to be pro-Omni/Wormhole); Jihan being very vocal about wanting a PoS system on BCH where coins, not hashpower, are votes (this appeared in a chat screenshot that Craig or someone translated) and PoW becomes less important; COMBINED with Craig's claim that before the fork he tried to gather hashpower to scale the blocksize on BTC and keep one Bitcoin, and asked Jihan if he could help, but Jihan "stabbed him in the back" and created ABC instead. When you consider all those things together, it's very obvious this isn't some random lesson for miners. It is defending Bitcoin from being sabotaged once again.

I think none of the lessons in Bitcoin need to be arbitrary or random: Bitcoin will continue to be attacked from many angles both opaquely and more insidiously, so that's more than enough to chew on, as is.

Other accusations/thoughts from Craig that you are free to consider if you want:
  • Craig accuses Jihan of supporting Segwit (there may actually be some backing on this)
  • Craig says Jihan supports Vitalik and Poon to build Plasma (https://plasma.io/plasma.pdf), which is kinda the end game for both BTC and BCH to have all the momentum taken off the main chain and sucked into the PoS system on top
  • Craig supports building everything through the Bitcoin script system, hence even L2 apps basically are all *native* to Bitcoin:
The road to Bitcoin hell is paved by letting PoS creep into or usurp Bitcoin, imho. Stay vigilant.

Here is the screenshot where Jihan says that letting miners choose is "flawed", that it is after all the "coinholders" who should have a say and it would be "easy to implement technically", that "Pos important", and that "PoW can't be deserted, ofcourse".

So if you want to have good connectivity to the rest of the Bitcoin network, then either all of the Bitcoin network needs to be inside China, or all of it needs to be outside of China. Since we will never be able to agree on which of those is the right option, we have to deal with the fact that many pools will have bad connectivity to other pools.
I think miners are learning, so they will deal with those facts. I think the ones that can are already moving out of China. China is so uncertain policy-wise: one day the local government clamps down on mining and tells you to wrap up your operations; within a few months it's "hey, we're not in the news anymore because the price has gone down, we unofficially don't care if you mine anymore, so you can stay".

We shouldn't care if pools are economically viable or not; they work this out themselves. The ones that aren't profitable eventually stop: bad connectivity compared to others ultimately leads to being outcompeted. If Chinese telecommunications providers don't care about miners, that's their own business. It's just one country; there are many more telecommunications providers around the world that would love to welcome more miners and industry if they have the infrastructure for it or are building better infrastructure for it.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@jtoomim

>So... do you put all your pools inside China, where most of the hashrate is? Or do you put the pools outside China, where friendlier governments and better telecommunications are? Or do you write a new protocol like Graphene that compresses data so much that it doesn't matter if you only get 15 kB/s? Or -- and this is my favorite option -- do you stop using TCP altogether and switch to UDP with forward error correction?

that's easy for most of the ppl in this thread; uncap the limit and let the market figure it out. that may sound trite at this point, but hey, we're the ones who predicted and pushed all the big block progress to date from a political standpoint but based on real technical and economic analysis. i have my bias which i've stated ever since this debate began: don't cripple the blocksize b/c you want to coddle the Chinese miners. there's an entire ROW out there with vastly superior internet just waiting to get in on the mining game. but they can't and won't as long as the incumbent players insist on crippling the protocol to levels they feel most "comfortable" with or can afford. strangely enough, that sounds like you TBH, no offense intended.

as well, your ATMP will act like a protocol-enforced limit in the absence of a dev-imposed default limit. miners are likely setting hard limits around 22MB with soft caps near there too, probably largely due to your work and analysis. until, of course, they figure out a solution to ATMP and 22MB, which they have every financial incentive to do now that big blocks have been shown to be safe and Bcore a bunch of dishonest geeks.

what we've witnessed the last 24h is suspiciously exactly like what me and a few others in this very thread predicted when we conceived BU way back in 2015: competitive miners "creeping" up their blocksizes (up to 21MB now) in an attempt to either harvest the fees or claim bragging rights for having produced the biggest block to date. yes, the progressive increase is not as granular as our original theory b/c everyone knows this is just a temporary test. but there's no doubt in my mind this is what we're witnessing, as miners predictably and progressively probe the upper limits of blocksize during this stress test while trying not to get orphaned by peer miners OR technically orphaned by something like ATMP. what i based this "creeping" effect on was the obvious linear but progressive blocksize increases we easily graphed out btwn 2009-2015.

imo, if you remove the limit permanently (which is essentially what 1MB was in relation to the <<<1MB blocks the network was producing at the time), the BCH protocol will just go back to producing incrementally larger and larger block sizes according to economic demand, as we saw btwn 2009-15. there is really nothing to be had (while everything to lose) by large pool miners attacking their smaller brethren while risking orphans. otherwise, we should have seen a large miner producing a stream of 1MB blocks when the avg was only around 100-200kB. and if they do, so be it. compete.

plus, as i said before, w/o a limit, legions of new miners will now be encouraged to come onboard from ROW b/c of their newly realized free and uncrippled ability to compete in the pursuit of mining sound money coins that have the potential to hyperbitcoinize the entire fiat system. just to be crystal clear about what is at stake (which is a fundamental argument for my game theory), BCH has the potential to become the next world reserve currency and gut the entire Forex market of its $5-7T traded per day. it's high time we stop coddling the Chinese miners and those miners/devs who don't believe Bitcoin can handle it. coddling China mining is pretty much over with anyways, as their gvt has forced many of them out.

and as for a gvt-sponsored "poison" block? i seriously doubt any agency, esp the NSA at this stage of the game, is going to risk their entire budget on a mining facility just to try and produce a 1GB bloat block when they 1) don't have the money (believe it or not, they do have budget constraints and keep losing top hackers to private industry), 2) will be found out/ratted out to be constructing such a facility due to mere acreage & building requirements and electricity demands, and 3) even if they were to get this facility set up successfully, why produce attack blocks that most likely will fail and get orphaned while simultaneously risking discovery? what a political fiasco and setup for multiple lawsuits from hedge funds, venture capital firms, large investors, banks, and probably other gvts who now have Bitcoin investments to protect. what a waste of money.
they'd be more likely to just mine honestly according to the WP, lol.
>The road to Bitcoin hell is paved by letting PoS creep into or usurp Bitcoin, imho. Stay vigilant.

Amen
 
Last edited:

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
I do not think that it is a good idea to require miners to develop their own software in order to be able to mine competitively.
FTFY: "I do not think that it is a good idea to require businesses to improve in order to be able to produce competitively."
This is not how incentives work in a free market; it's like saying Communism is better because we all use a system that targets the lowest common denominator.

If they're told they have to either hire a developer for $20,000 to write them good pool software, or suffer a 4% orphan rate disadvantage, or pay a large pool a 2% fee, they're going to choose to join the large pool.
Developers should be the ones running pools, and those miners who don't want to build efficient software themselves should move their hash to a better developer's team. ABC having 51% of the hashrate is possibly more risky than a miner having 51% of the hashrate, especially if miners agree to upgrade every 6 months regardless of the changes.

We don't want them to do that.
We? Who's in control again? Is my opinion part of this "we", or does @deadalnix get to push his agenda over mine?
If you have enough hashrate, then you won't see as much of an orphan rate increase as your competition will, because the block propagation time from your node to your node is 0.
assuming you don't do headers-first mining while you validate a big block. I value the work the developers do; I'm just saying the theoretical threats are just theoretical. Theoretical scaling problems are theoretical: I know I will theoretically need a bigger hard drive to store the blockchain, but I only need to get one when it is needed. I don't think we will see confirmation times of more than 5 minutes in the absence of a transaction limit, and if we do, I'm sufficiently convinced they won't produce a 39% orphan rate, regardless of the math being correct.



The presence of transaction fees to offset the orphan rate risk makes this worse
I can't help thinking you are using numbers from the old paradigm, before Xthin and Graphene; large blocks have already propagated with those technologies.
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
Headers-first mining is not implemented in any full-node software, nor is it implemented in any open-source pool software that I know of except p2pool (which has its own scaling issues). I do not think that it is a good idea to require miners to develop their own software in order to be able to mine competitively. If they're told they have to either hire a developer for $20,000 to write them good pool software, or suffer a 4% orphan rate disadvantage, or pay a large pool a 2% fee, they're going to choose to join the large pool. We don't want them to do that. If we had HFM in BU or ABC or XT, then I agree, that would change things. But we currently don't have that feature. The only pools that have HFM are the large ones with >10% of the network hashrate, and those are exactly the pools that we want to avoid encouraging people to use.
wow, i had missed this post.

i'm sorry, but this is just not the right attitude.
 

jtoomim

Active Member
Jan 2, 2016
130
253
The way I see it, we can either have the dedicated protocol devs like BU write block propagation and validation code once, and have everybody use that open source implementation, or we can have each miner write their own special version, and let the ability to come up with a block propagation implementation be a gatekeeper for whether someone gets to be a mining pool.

My argument is that the former approach is a better one, as it results in less effort duplication and is fairest to all miners, regardless of whether they have 0.1% of the hashrate or 30%.
 

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
@jtoomim what is questioned is whether any sense can be attached to the phrase "fairest to all miners". it assumes there is a (moral?) perspective from which to adjudicate what is or is not fair, whereas no such perspective is necessary to adjudicate what is profitable or not. now sometimes in regulated markets those two criteria may blur, but as we all know in the mind-blowing game theory of bitcoin they don't: merely to have the ability to act unfairly -- i.e. to steal, which in this context is to double spend -- is predicted to be unprofitable.

on a different note, it seems WA power utilities are not on board, sry :/
https://news.bitcoin.com/energy-rate-hike-crypto-miners-washington-state/
 

reina

Member
Mar 10, 2018
33
92
The way I see it, we can either have the dedicated protocol devs like BU write block propagation and validation code once, and have everybody use that open source implementation, or we can have each miner write their own special version, and let the ability to come up with a block propagation implementation be a gatekeeper for whether someone gets to be a mining pool.

My argument is that the former approach is a better one, as it results in less effort duplication and is fairest to all miners, regardless of whether they have 0.1% of the hashrate or 30%.
Whether a mining pool uses some other team's software or their own is another economic question miners sort out themselves easily and rationally:
  • If there is an advantage or special interest in having my own implementation (i.e. I want certain features or want to protect certain features), I make my own team, who I know will try to make or keep this feature regardless of what other teams do.
  • If I have no special interests how it goes, and I am happy to trust the judgment or the direction of other teams, I choose the implementation I think is best and just use that.
If you have 30% of the hashrate, I think you should be leaning towards option one: this helps to protect your own large and long-term vested interest in Bitcoin's success.

If you have 0.1% of the hashrate, then firstly, maintaining your own dev team would be rather expensive compared to how invested you are in mining. Secondly, if you ever did have something in the protocol you wanted to fight for or win, it's not very likely users would adopt software made by such a small pool. Being small basically signals to others that you're not as successful as the larger pools (unless you start growing). When you have 0.1%, you can step out more easily because you have less skin in the game, which means that your plays can be of a more short-term nature. Other players see you as someone that can flux in and out.

Independent developers who are experienced in developing Bitcoin will be more and more sought after by large mining pool companies to help solve specific challenges in their code.

And the code continues to be open source, so maybe effort duplication is not as much effort as it seems, as long as another developer understands the code they're seeing and how to add it to the implementation they are working on. Even if it is extra effort, it could be worth it, as long as it helps protect your long-term stake in mining.
 
  • Like
Reactions: lunar and 79b79aa8

jtoomim

Active Member
Jan 2, 2016
130
253
"Fairness" as I see it can be quantified as the average deviation between a miner's or pool's hashrate and their share of the total revenue. If a pool has 30% of the hashrate, they should get 30% of the revenue. If 1%, then 1%. Unfair would be if the 30% hashrate pool gets 31% of the revenue and the 1% pool gets 0.9% of the revenue. An Antminer S9 should have approximately the same marginal revenue regardless of whether it's attached to a small, independent operation or the biggest of the big. If that is not true, then decentralization will suffer.

Note: I don't think that fairness is an inherently valuable goal in mining. We don't have a moral obligation to make sure that things are fair for small miners. My argument is that it's important for the security of Bitcoin that mining is kept fair, because otherwise we end up with superpools like Coingeek who will throw their weight around and bully others into submission, both when it comes to protocol changes and in double spend/51% attacks.

@reina: Any code that a miner or pool develops on their own will likely not be open source. Pools will want to maintain their competitive advantage, so they will keep their code private. This is what results in code duplication.

> Independent developers who are experienced in developing Bitcoin will be more and more sought after by large mining pool companies to help solve specific challenges in their code.

Yes, if mining pools are hiring people like Andrew Stone to work on their own private projects, then there will be fewer people working on the public projects like Bitcoin Unlimited. I think it is desirable to keep as much development as possible in the public, open-source realm. I would much rather have miners contribute to Bitcoin ABC and BU and XT's general funds, and let them build software that works for all miners, rather than have miners hire protocol developers directly to work on private, closed-source solutions to full node performance issues.

@79b79aa8 As for WA utilities turning unfriendly: you have no idea. My utility, the Grant PUD, voted on Tuesday to institute a new Schedule 17 for "Emerging Industry" customers (i.e. cryptocurrency miners). This schedule will increase our electricity rates by 207%, phased in three steps over three years. On April 1, 2020, I expect that we will not be able to justify continued operations and will need to shut down. There's a total of 30 MW of customers in Grant County that will be affected by this rate, and I expect most of them to shut down on that date as well. We are currently looking for a buyer for our facility from the traditional datacenter industry. As Schedule 17 only applies to cryptocurrency miners, and as the Grant PUD Commissioners made it clear that they like traditional datacenters and only want to exclude crypto folks due to their perception of regulatory risk (lol, the irony) in Bitcoin mining, our facility can continue to enjoy our current 2.65 cents/kWh rates if we switch from mining to regular servers (e.g. render farms or AI/ML).
 
Last edited:

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
let the ability to come up with a block propagation implementation be a gatekeeper for whether someone gets to be a mining pool.
Seems like an elegant way of ensuring the proof-of-work incentives are directed toward solving any transmission issues.
Headers-first mining is not implemented in any full-node software
Seems like an elegant way of ensuring the PoW incentives also encompass node software optimization.

Security against 51% attacks comes from the incentives for miners to compete on hash speed (the work aspect of PoW), but the same PoW competition also drives block transmission improvements and node code optimizations (the proof aspect of PoW). That is, if miners are left to compete on these things instead of waiting for handouts in the name of fairness.

Fairness as equal opportunity for miners (left side of image) is desirable, but fairness as enforced equality of outcome (right side) would only ensure that the relentless forces of capitalist competition are never brought to bear on network and software enhancement.



I am pretty convinced that Bitcoin goes nowhere unless the entire mining infrastructure is subjected to the same competitive pressures that drove the staggering innovation in hash speed we've witnessed over the years.

At some point the training wheels must come off. Either because the independent dev teams remove them, or because some miners - whether driven by ideals or just seeking to make an extra buck off any idlers - start kicking the training wheels off by prodding their fellow miners with bigger blocks to see if someone in the minority wobbles.
And if the concern is that bigger miners have an advantage in what they can spend on coding and networking, note that this dynamic is inherent in all industries under free-market capitalism. If you think this tends toward runaway monopolies in mining, you should think it does in all industries, which sounds like a difference in economic beliefs that would take us afield. Indeed, taking this view to its logical endpoint would dictate that we give a helping hand on hashing techniques to make hashing itself fairer, in order to compensate for the inherent advantage a big hasher has (they can R&D their own ASICs, negotiate sweet deals with power companies, etc.).
 

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
"Fairness" as I see it can be quantified as the average deviation between a miner's or pool's hashrate and their share of the total revenue. If a pool has 30% of the hashrate, they should get 30% of the revenue. If 1%, then 1%. Unfair would be if the 30% hashrate pool gets 31% of the revenue and the 1% pool gets 0.9% of the revenue. An Antminer S9 should have approximately the same marginal revenue regardless of whether it's attached to a small, independent operation or the biggest of the big. If that is not true, then decentralization will suffer.
interesting thought, but is it correct? you are arguing that safeguards ought to be placed on the protocol to curtail economies of scale, on pain of mining centralization. a reply would be that (1) either mining centralization in bitcoin takes care of itself (via miner self-interest), or the system is doomed anyway; (2) to deploy those safeguards requires a coordination effort that is itself bound to be considered unfair by at least some actors; (3) possibly (this may or may not be empirically true, idk) the safeguards have the net effect of making the system less safe or more expensive to use, therefore less competitive.
 

jtoomim

Active Member
Jan 2, 2016
130
253
It's very common for people to use the term "miner" as if it is the same as "pool." This distinction is important, and I think we ought to make an effort to not mix them up.

Mining economies of scale are generally fine. We should not worry about those.

Pool economies of scale are not fine. Pooled mining was not part of Satoshi's design. He didn't foresee it, and it compromises some of the assumptions he made about Bitcoin's security. We need to be vigilant to make sure that pools do not get out of control.

There's a natural mechanism that prevents a single miner from getting too big. As a miner adds hashrate, the revenue of their existing hashrate decreases. A miner who already has 30% of the hashrate will only see 70% as much marginal revenue for adding an S9 as a miner who has 0% of the hashrate. This stabilizes the BTC+BCH mining ecosystem, and ensures that we don't have a single miner with an overwhelming hashrate majority. (Unfortunately, if you look at the BCH mining system alone, this mechanism fails, and that's why we have Calvin Ayre with 25-30% of the hashrate. But if BCH overtakes BTC, that will no longer be a problem.)
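The self-dilution effect is easy to work out (a sketch; the network size and reward are arbitrary units, with one S9 as one unit of hashrate):

```python
# Adding hashrate dilutes your existing hashrate, so the marginal revenue
# of one more S9 scales with (1 - your current share) of the network.
TOTAL = 1000.0    # network hashrate, arbitrary units
REWARD = 1.0      # total mining revenue per period, normalized
S9 = 1.0          # hashrate of one added unit

def marginal_revenue(share):
    mine = share * TOTAL
    before = share * REWARD
    after = (mine + S9) / (TOTAL + S9) * REWARD
    return after - before

print(marginal_revenue(0.30) / marginal_revenue(0.0))   # ~0.70
# A 30% miner nets only ~70% as much per added S9 as a new entrant.
```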

However, this mechanism does not exist for pools. A pool has near-zero marginal cost for adding more hashrate. Indeed, they get to amortize their fixed costs over a larger revenue pool, so they can increase their profit margins, not just their gross profit. Pools purely benefit financially from having more hashpower. This is a big problem for the Bitcoin system.
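To see why, a toy margin calculation (all numbers assumed for illustration, not real pool data):

```python
# Pool fee revenue scales with hashrate share; operating costs stay fixed.
REWARD = 1800.0    # coins mined per day network-wide, assumed
FEE = 0.02         # pool fee, 2%
FIXED_COST = 3.0   # coins/day equivalent of servers and staff, assumed

def profit_margin(share):
    revenue = share * REWARD * FEE
    return (revenue - FIXED_COST) / revenue

print(f"{profit_margin(0.10):.0%} margin at 10% of the network")  # ~17%
print(f"{profit_margin(0.30):.0%} margin at 30% of the network")  # ~72%
```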

So far, we've been able to prevent any pools from maintaining >35% of the network hashrate by encouraging miners to leave pools if they get too big. When ghash.io exceeded 51% of the hashrate for a day in 2014, the community responded by convincing miners to abandon ghash.io in favor of smaller pools. Ghash.io didn't like this, but the community spoke. When BTCGuild got over 40%, the same response happened, although this time BTCGuild's operator spoke in favor of people leaving for smaller pools.

These kinds of voluntary miner redistributions only work as long as there is no revenue advantage for staying with the big pool. If Ghash.io had a revenue advantage, each miner would have posted in forums asking other miners to leave ghash.io, but nobody would have wanted to leave themselves. Voluntarily taking a hit in revenue so that others can stay behind and continue to enjoy the profits is not something miners are keen on.
 

reina

Member
Mar 10, 2018
33
92
"Fairness" as I see it can be quantified as the average deviation between a miner's or pool's hashrate and their share of the total revenue. If a pool has 30% of the hashrate, they should get 30% of the revenue. If 1%, then 1%. Unfair would be if the 30% hashrate pool gets 31% of the revenue and the 1% pool gets 0.9% of the revenue. An Antminer S9 should have approximately the same marginal revenue regardless of whether it's attached to a small, independent operation or the biggest of the big. If that is not true, then decentralization will suffer.
I think this should generally be true, but the more profitable pools will grow, because they have more money to reinvest back into mining if they wish. The more efficient pools that can do a bit better than the average pool, I would expect to grow in share while pushing some weaker pools out. The best will be neck and neck, which puts pressure on average pools. Eventually you will end up with competitors that are almost neck and neck in efficiency, with a strong incentive to reinvest in the R&D that will gain them the next advantage.

Note: I don't think that fairness is an inherently valuable goal in mining. We don't have a moral obligation to make sure that things are fair for small miners. My argument is that it's important for the security of Bitcoin that mining is kept fair, because otherwise we end up with superpools like Coingeek who will throw their weight around and bully others into submission, both when it comes to protocol changes and in double spend/51% attacks.
I don't see a situation where they remain a "super" pool consistently or eclipse others, because that would mean that somehow none of the other players can mine on par with, or close to, their efficiency, which is hard to believe and almost impossible over the long term:

There are always big companies who are the stalwarts of the industry, like the Apples and Samsungs of the phone business. But then there are always shifts and new entrants: Apple wasn't big until they innovated over Nokia. I walked into a store today and found a big selection of Huawei and Honor phones, which I had never tried before. They had better screens and cameras, better camera functionality and features, and better price points than Apple, and some of their interfaces on newer OSes are now quite close to Apple's. So there are constantly angles from which to compete. A new chip inventor with an awesome miner can jump in and start mining too.

Competition is not static, it is dynamic over a longer time frame.


@reina: Any code that a miner or pool develops on their own will likely not be open source. Pools will want to maintain their competitive advantage, so they will keep their code private. This is what results in code duplication.

> Independent developers who are experienced in developing Bitcoin will be more and more sought after by large mining pool companies to help solve specific challenges in their code.

Yes, if mining pools are hiring people like Andrew Stone to work on their own private projects, then there will be fewer people working on the public projects like Bitcoin Unlimited. I think it is desirable to keep as much development as possible in the public, open-source realm. I would much rather have miners contribute to Bitcoin ABC and BU and XT's general funds, and let them build software that works for all miners, rather than have miners hire protocol developers directly to work on private, closed-source solutions to full node performance issues.
Open-sourcing helps to get bugs and vulnerabilities spotted. I think the key is that *code* is not the innovation you want to hide. Hiding it is not advantageous to you, because it would make your implementation buggier, and the network in general could be worse because of it. Sharing is advantageous, because you're helping the network become more stable and more viable to use: solving bottlenecks, and helping get the OP_codes to a place where they're flexible enough for applications and services to script on. There is a win-win situation here for the pools, and I am sure they will take it if they are smart.

For the innovation you can keep to yourself: There's always mining chip technology and innovation, better setups, better cooling, etc.
 

jtoomim

Active Member
Jan 2, 2016
130
253
@79b79aa8 as for your numbered points:

2. The coordinated safeguards are essentially just (a) limiting the blocksize to a level proportional to the performance of publicly available full-node software (i.e. 32 MB for our current code), and (b) investing heavily in public open-source software development to make sure it stays ahead of any private implementations.

3. The safeguards should make the system safer to use, so that part is empirically untrue. Having a blocksize limit in place reduces the likelihood of network segmentation from DoS or overload events, and will make everything (except 0-conf) work more reliably. However, it can increase transaction fees slightly if we're not able to scale our open source full node implementations' performance fast enough to stay ahead of demand. Personally, I think it's very unlikely that transaction demand will grow as fast as our code performance, so I'm not worried about the costs. But that's a judgment call.