Gold collapsing. Bitcoin UP.

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
There's a natural mechanism that prevents a single miner from getting too big. As a miner adds hashrate, the revenue of their existing hashrate decreases.[...] However, this mechanism does not exist for pools. A pool has near-zero marginal cost for adding more hashrate. Indeed, they get to amortize their fixed costs over a larger revenue pool, so they can increase their profit margins, not just their gross profit. Pools purely benefit financially from having more hashpower. This is a big problem for the Bitcoin system.
but the law of diminishing marginal returns is not bitcoin's primary anti-mining-centralization mechanism. the case remains that if a pool's combined hashrate grows too large, and miners pointing to that pool refuse to self-regulate in an effort to preserve their present status, the system becomes vulnerable to a 51% attack, which puts severe downward economic pressure on those very miners, along with everyone else.

2. The coordinated safeguards are essentially just limiting the blocksize to a level proportional to the performance of publicly available full node software (i.e. 32 MB for our current code), and investing heavily in public open source software development to make sure it stays ahead of any private implementations.
who is the agent doing the limiting and the investing?

3. The safeguards should make the system safer to use, so that part is empirically untrue.
or they may have the net effect of discouraging mining investment, affecting hashrate growth and thus long-term security. (I have no clue whether this is true; just pointing out that it is not obvious or easy to assess.)
 
Last edited:
  • Like
Reactions: majamalu

jtoomim

Active Member
Jan 2, 2016
130
253
@reina

> but the more profitable pools will grow, because they have more money to reinvest if they wish, back into mining

You're confusing pools and miners. Pools typically do not own their own hashrate. See also my comment right above yours on the mechanism that reduces marginal revenue for large miners, and how that does not apply to pools (or currently to large BCH-only miners).
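
To make that asymmetry concrete, here is a toy sketch (my numbers are purely illustrative): a miner who adds hashrate also grows the network total that difficulty tracks, diluting every unit they already own, while a pool that attracts existing hashrate leaves the network total unchanged and simply collects fees on a larger share.

[code]
# Toy model, illustrative numbers only: per-PH/s revenue for a miner
# who ADDS hashrate to the network, vs fee income for a pool that
# merely ATTRACTS existing hashrate (network total unchanged).
DAILY_REWARD = 12.5 * 144   # BTC/day paid to the whole network
OTHER_HASH = 100.0          # everyone else's hashrate, in PH/s

def miner_btc_per_phs(own_phs):
    # Every PH/s the miner adds also grows the denominator,
    # so each unit they own earns less than before.
    return DAILY_REWARD / (OTHER_HASH + own_phs)

def pool_fee_btc(share, fee=0.02):
    # Linear in share: no diminishing-returns term for the pool.
    return DAILY_REWARD * share * fee

for phs in (1.0, 10.0, 50.0):
    print(f"{phs:4.0f} PH/s: {miner_btc_per_phs(phs):.2f} BTC/day per PH/s")
for share in (0.05, 0.20, 0.40):
    print(f"{share:.0%} pool: {pool_fee_btc(share):.1f} BTC/day in fees")
[/code]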

> I don't see a situation where they will remain consistently a "super" pool or eclipse others, because it means that somehow none of the other players can mine on par with or close to their efficiency, which is a bit hard to believe, or almost impossible, for the long term.

As I have said elsewhere in this thread, the greater the hashrate share a pool has, the lower their orphan risk becomes. A pool will never directly orphan their own block, so if they have 30% of the network hashrate, their orphan rate will only be 70% as high as the orphan rate for a 1% miner with the same block propagation and full node performance. When orphan rates get high on average, pool operation incentives change from survival of the fittest to survival of the biggest.
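
The same point as a two-line calculation (the 2% baseline is an example figure, not a measurement):

[code]
# Self-orphan advantage: a pool never orphan-races its own blocks,
# so only the (1 - share) of hashrate it doesn't control can orphan it.
def orphan_rate(base_rate, share):
    return base_rate * (1.0 - share)

BASE = 0.02  # example orphan rate for a negligible-share miner
for share in (0.01, 0.10, 0.30, 0.51):
    print(f"{share:4.0%} pool -> {orphan_rate(BASE, share):.2%} orphan rate")
# 30% share gives 1.40% vs 1.98% for a 1% miner: ~70% as high.
[/code]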

If average blocksizes get too high, then other players will not be able to mine with the same efficiency as the bigger players, simply because they don't have the hashrate to mine with that efficiency. A new entrant into the competitive pool market would need to be more efficient in order to attract hashrate, and would need to already have hashrate in order to be efficient.

> Opensourcing helps to get bugs and vulnerabilities spotted. Hiding is not advantageous to you because it would make your implementation more buggy, and also the network in general could be worse because of it.

Yes, the network in general will be worse because of it. No, it is not a net advantage overall to a miner to open source their code. We can see that by looking at the BIP66 chainsplit and SPV mining issue.

When BIP66 was activated, it turned out that some miners were not actually enforcing the new rules. This caused an invalid block to be produced by BTC Nuggets. Normally this wouldn't be an issue, as the rest of the hashrate would simply ignore the block. However, the miners in China were making heavy use of SPV mining at the time, and did not have checks in their code to time out on headers that were not followed by valid blocks. So after the invalid block was mined, F2pool mined two blocks on top of the unverified header of BTC Nuggets's block, followed by Antpool mining on top of F2pool, etc. All told, some transactions reached 6 confirmations in SPV wallets before being reorged out of the blockchain. F2pool mined 4 invalid blocks (a loss of 100 BTC), and Antpool mined 1 invalid block.

Surely F2pool must have sorely regretted SPV mining after a loss like that, right? Actually, no. F2pool went on the record saying that even with that bug, SPV mining was a huge win for them overall. It makes sense if we do the numbers. F2pool had around 20% of the network hashrate at the time. Average orphan rates for Bitcoin were about 2%, and F2pool had a 1% orphan rate, which they attributed to their SPV mining. This means that SPV mining was earning them 25 BTC/block * 144 blocks/day * 20% * 1% = 7.2 BTC per day, so this SPV mining mishap only cost them 13.9 days' worth of SPV mining benefits. As F2pool had earned a lot more than that from SPV mining over the preceding months, it was a no-brainer.
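
Checking that arithmetic with the figures as quoted:

[code]
# Back-of-the-envelope check of the F2pool numbers quoted above.
subsidy = 25.0          # BTC per block at the time
blocks_per_day = 144
share = 0.20            # F2pool's approximate hashrate share
orphan_edge = 0.01      # 2% average orphan rate minus their 1%

daily_gain = subsidy * blocks_per_day * share * orphan_edge
print(f"SPV mining gain:  {daily_gain:.1f} BTC/day")       # 7.2
print(f"100 BTC payback:  {100.0 / daily_gain:.1f} days")  # ~13.9
[/code]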

If F2pool had released their SPV mining code publicly, then their 1% orphan rate advantage would have disappeared. That would have been good for Bitcoin as a whole but bad for F2pool, so it didn't happen.
> won't 2 happen by itself? (otherwise, who is the agent doing the limiting and the investing?)

No, pools won't safely self-regulate. That's the problem. I've done the math on this a few times earlier in this thread (e.g. this post). Large pools have an incentive to build blocks that are too large for everyone else, specifically because of the orphan rate advantage that pools with large hashrates have. This is a destabilizing force, and can result in a single pool growing to the point that 51% attack chances are high and customer confidence in Bitcoin is harmed.
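
One way to see the destabilizing force, using a standard Poisson race model (an assumption for illustration, not data from this thread): the chance of being orphaned during a propagation delay of T seconds is roughly 1 - e^(-T/600), and a pool with share s only races against the other (1 - s) of the hashrate.

[code]
# Toy Poisson model of orphan risk vs propagation delay. The model
# and the example delays are assumptions, not measurements.
from math import exp

def orphan_prob(prop_seconds, share):
    # Probability someone else finds a block while yours propagates.
    return 1.0 - exp(-(1.0 - share) * prop_seconds / 600.0)

for prop in (5, 30, 120):   # seconds for a block to reach other pools
    p_small = orphan_prob(prop, 0.01)
    p_large = orphan_prob(prop, 0.30)
    print(f"{prop:4d}s: 1% pool {p_small:.2%}, 30% pool {p_large:.2%}")
# The gap widens as blocks (and propagation times) grow, so a size
# that is break-even for the 30% pool is a steady loss for the 1% pool.
[/code]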

> the system becomes 51% vulnerable, which puts severe downward economic pressure on those very miners, along with everyone else.

Yeah, that's something we want to avoid having actually happen. It would be good to have some mechanism for regulating pool size that doesn't cost the whole ecosystem a fraction of their life savings.

Developers putting a default soft limit into the code at e.g. 32 MB and modifying that default value on occasion as new benchmark data comes in might seem like an arbitrary way of dealing with the issue, but it is effective, and I have been unable to come up with a better option. All the market-based approaches to dealing with the problem boil down to deciding that there is no problem or that the problem is not worth dealing with, and I do not agree with that perspective at all.
 
Last edited:
  • Like
Reactions: freetrader

jtoomim

Active Member
Jan 2, 2016
130
253
I would like to see that. However, it needs to be configurable/optional in the client, as some miners and pools (e.g. Kano) are philosophically opposed to HFM (headers-first mining).

If we had HFM in at least one (and ideally more than one) open source client, that would make the math on the safe limits of the system much more permissive.
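
Rough intuition in the same toy Poisson model as the sketch above (the timing figures are assumptions): with HFM, competitors race your header propagation rather than the full block transfer, so block size nearly drops out of the orphan math.

[code]
# Toy comparison of orphan-race windows with and without HFM;
# the timing figures are assumptions for illustration.
from math import exp

def orphan_prob(race_window_seconds):
    return 1.0 - exp(-race_window_seconds / 600.0)

full_block_seconds = 60.0   # time for a large block to fully propagate
header_seconds = 2.0        # time for just the header to propagate

print(f"without HFM: {orphan_prob(full_block_seconds):.2%}")  # ~9.5%
print(f"with HFM:    {orphan_prob(header_seconds):.2%}")      # ~0.3%
[/code]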
 
  • Like
Reactions: adamstgbit

wrstuv31

Member
Nov 26, 2017
76
208
or we can have each miner write their own special version, and let the ability to come up with a block propagation implementation be a gatekeeper for whether someone gets to be a mining pool.
Yes please! Everyone is free to innovate, so innovation is maximized under the competitive PoW scheme.

The way I see it, we can either have the dedicated protocol devs like BU write block propagation and validation code once, and have everybody use that open source implementation,
This is not a stable situation. It can only be enforced through top-down central planning and coercive power. An example of this is having a protocol with a blocksize limit. No thank you!

You're stuck in the old ways.
 
  • Like
Reactions: AdrianX

sickpig

Active Member
Aug 28, 2015
926
2,541
maybe that is the case. this is why i'm beginning to think that clients should be maintained, coded, and developed by the largest vested players: miners
This is a recent realization I had: suppose you are a miner and a contentious fork is approaching. Suppose also that the mining algo will remain the same across all chains that will stem from the fork.

As long as you think that the fork will not completely destroy the values of all the possible chains resulting from the approaching chain split, the rational thing to do is stay calm and mine on the winning chain.

This is the logical thing to do in the near-term scenario, IMHO. On the other hand, a miner could see a particular group of developers/company as a danger to the future of a particular chain and then act accordingly, on a longer time scale.

I think that's the reason why miners are not directly developing bitcoin protocol clients. On the other hand, they do have their own developers who work on pool software, custom propagation networks, HFM, etc.
 
Last edited:

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
Wow. Please tell me others here are noticing how quickly ABC has turned totalitarian. (Blockstream 2.0.)

@deadalnix do you no longer believe PoW is the governing mechanism in Bitcoin?

You tweeted the https://www.trustnodes.com/2018/09/02/coinex-list-bsv article. (coindesk 2.0)

"He (Craig) seems to be under the mistaken belief that hashpower has any say over the rules nodes follow. It doesn’t. The worst it can do is double spend its own coins, something which would be a 51% attack and would attract considerable measures, including changing Proof of Work (PoW) if circumstances warrant it."

(UASF 2.0) proof of hats.


Core mindset detected.

If there's any truth to the economics of Wormhole, we have Lightning 2.0, and with the recent (clearly manipulated) change of vibe on r/btc, we have all the same components of the Blockstream attack.

The best consolation is how quickly people are waking up this time around.

At the time, my last post on this was highly speculative, but hands are being shown. If Bitmain are in too deep on this one, or simply don't understand the economics, how can further discussion be productive? It's looking much more likely that we'll see a hash war, as it explains all the recent drama.

It will be glorious.

Nakamoto consensus ftw.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
these are the objectives I believe in and support. well written @shadders:

Bitcoin SV’s remit is to restore the Bitcoin protocol and lock it down (with the exception of security fixes and those changes absolutely necessary to meet scaling goals). What constitutes the ‘protocol’ in our view is a common sense combination of

  1. The whitepaper
  2. The original code
  3. The obvious example Satoshi set by fixing bugs
  4. His subsequent musings on protocol matters.
https://nchain.com/en/blog/raising-op-code-limits-holding-sky/
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
This is a recent realization I had: suppose you are a miner and a contentious fork is approaching. Suppose also that the mining algo will remain the same across all chains that will stem from the fork.

As long as you think that the fork will not completely destroy the values of all the possible chains resulting from the approaching fork, the rational thing to do is stay calm and mine on the winning chain.

This is the logical thing to do in the near-term scenario, IMHO. On the other hand, a miner could see a particular group of developers/company as a danger to the future of a particular chain and then act accordingly, on a longer time frame.

I think that's the reason why miners are not directly developing bitcoin protocol clients. On the other hand, they do have their own developers who work on pool software, custom propagation networks, HFM, etc.
Yes, I agree with this.

It's ultimately markets that decide, not just miners. Miners should want to extend the chain that will have the most value. For this they have to be somewhat predictive, to know what the market will want in the future before they mine it. Those that are more successful at this will make more money.

This is why miner voting on proposals doesn't really make sense. It transmits information in the wrong direction. The miners don't need to tell everyone else what they want, they need to listen to information of what everyone else (speculators, users, etc.) wants so that they can make a good choice about what to mine.

Prediction markets, and pre-fork markets like what CoinEx is doing, can help a lot with this.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
We disagree on this. As the transactions naturally form a DAG and not a sequence, I think that enforcing a linear sequential data structure is arbitrary and inelegant. I think it is more elegant to describe the chain as a sequence of sets, with no arbitrary data included beyond what is inherent in the DAG and the block-level timestamp.
did you ever address what effect CTOR has on FSFA?
@cypherdoc honest question: what about removing P2SH transactions then?
what's this in reference to?

Edit: nvm, I get it. I'll answer once off my phone
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@cypherdoc honest question: what about removing P2SH transactions then?
i've been very vocal in protest of how p2sh has been exploited a number of times in the past (SW) and will be in the future (MAST, etc) to insert major changes into the BTC protocol. you can hide anything behind the redeem script, it appears. so yeah, i think it has been a vector for attack on the original protocol. can it be removed? no. at this point, too many outputs exist with too much value to do anything about it. apparently ANYONECANSPEND hasn't been the danger we thought it would be (yet). BCH is knee deep in p2sh as well, so let bygones be bygones. that doesn't mean i can't still hold the ideal, from here on forward, that our emphasis should be removing the limit and onchain scaling according to SV's vision. theirs is the same vision i've consistently held since my day 1. so i've been consistent. am i worried about CSW patents? damn straight. and i've articulated my stance against them. but literally, when i think about it, there's no way he can hold them against BCH, since he's stated numerous times he wouldn't do that, which establishes a public record against him doing so. if he or nChain tries to insert something diabolical into the protocol that violates the same principles i hold dear above, we can all fork off once again. i don't think he's gonna try that though.
This is a recent realization I had: suppose you are a miner and a contentious fork is approaching. Suppose also that the mining algo will remain the same across all chains that will stem from the fork.

As long as you think that the fork will not completely destroy the values of all the possible chains resulting from the approaching chain split, the rational thing to do is stay calm and mine on the winning chain.

This is the logical thing to do in the near-term scenario, IMHO. On the other hand, a miner could see a particular group of developers/company as a danger to the future of a particular chain and then act accordingly, on a longer time scale.

I think that's the reason why miners are not directly developing bitcoin protocol clients. On the other hand, they do have their own developers who work on pool software, custom propagation networks, HFM, etc.
I think that's the reason why miners are not directly developing bitcoin protocol clients.
true.

but my sense is that we've moved past the age of passive miners following the whims of a particular voluntary dev group. the problem i've noticed for a while now with the latter is that a core dev team could have absolutely zero investment in the coin it's developing yet exert tremendous sway over the direction of the protocol if they have a modicum of decent ideas and lotsa marketing prowess associated with censorship. Joseph Poon comes to mind. i'm quite sure, based on his posting history, that he didn't throw one dime behind LN and probably still won't until it takes off. i still think LN is a failure, and even if successful it becomes a highly centralized hub fiasco.

what i envision when i say i'm liking the idea of miner-based dev teams is one where ONE implementation gets devved by all miners who have representation/devs on the team, supplemented of course by either voluntary or paid devs from the public sphere. maybe i'd be better off describing it as a merger of the dev talent from miners and voluntary teams. it's high time miners determine their own future b/c i don't see it optimally coming from independent voluntary teams, for the most part. they just don't have the same vested interests as miners themselves. as an example, and i don't mean to diss Andrew, but i don't think we've seen the last of what initially was OP_GROUP, then GROUP, and later who knows what. it's human nature not to want to waste code or the time spent on it. and to be clear, i don't like ABC either at this point. Amaury has taken the take-it-or-leave-it attitude too far b/c he violated the one qualifier for making that strategy successful, the thing that worked with the initial ABC: listening to the community before acting on a key issue. admittedly, he was drawn in/duped by the ridiculous q6mo hard fork mandate that's made voluntary dev team hubris go off the charts, with teams crawling all over each other to implement their favored code du jour. thus, i want to support some new thinking on this matter and support something like SV until they screw it up.
 
Last edited:

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
so i'm quickly reading https://nchain.com/en/blog/bitcoin-sv-big-blocks-safe-path-scaling/

Contrary to what appears to be a widely held but incorrect belief, Bitcoin SV has no intention to force its users to accept a particular setting. We are simply moving the configuration settings to a much more prominent place.
...
The default value for the hard cap in the upcoming beta release will be 128,000,000 bytes (128 MB).
Miners however will be free to change these numbers higher or lower as they see fit.
so... ABC could accept the 128 MB default change and then just change the setting back to 32 MB. why split the network over a default value that you can change as you see fit?
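
for example (assuming the ABC-style option names carry over to the SV client, which i haven't verified), a miner who took the 128 MB default could dial it back in bitcoin.conf:

[code]
# bitcoin.conf sketch. Option names are an assumption: ABC-style
# excessiveblocksize (largest block accepted) and blockmaxsize
# (largest block produced), both in bytes.
excessiveblocksize=32000000
blockmaxsize=32000000
[/code]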
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
No, pools won't safely self-regulate. That's the problem. I've done the math on this a few times earlier in this thread (e.g. this post). Large pools have an incentive to build blocks that are too large for everyone else, specifically because of the orphan rate advantage that pools with large hashrates have. This is a destabilizing force, and can result in a single pool growing to the point that 51% attack chances are high and customer confidence in Bitcoin is harmed.
can you explain how pool operators do this without being detected?
 
  • Like
Reactions: AdrianX

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Since we will never be able to agree on which of those is the right option, we have to deal with the fact that many pools will have bad connectivity to other pools.
We don't have to agree. That's for the market to decide. It might be problematic if the majority of the pools are in China, since I suspect it would be harder for external pools to operate within China than for Chinese pools to operate outside.

As for using UDP over TCP, absolutely. Let's do that rather than place artificial limits on our system. BitTorrent added it long ago. If we need it, let's do it, not sit here wringing our hands over it like Core does.
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
My argument is that it's important for the security of Bitcoin that mining is kept fair, because otherwise we end up with superpools like Coingeek who will throw their weight around and bully others into submission, both when it comes to protocol changes and in double spend/51% attacks.
here is the problem. you believe that Coingeek would be the end-all of superpools, one that forces you and all others out. the latter will probably be true; as a small fry, you'll be out. but at 128MB blocks and higher, many more Coingeek-like mining pools will enter the space. once the limit is totally removed, i even envision gvts mining. that'll be the end-all and a very good thing b/c they will be competing against private-sector behemoths in mining. all-out competition encouraged by Sound Money. that's the Beautiful Bitcoin you're talking about.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
@jtoomim

I think the scenario you describe only makes sense if there is 1 large pool and many smaller pools

if there are 5 pools with 19% hash-rate and 1 pool with 5% hash-rate, the advantage the large pools have is probably marginal?

is there some math that shows the bottom line of how much of an advantage a 30-40% pool would actually have?
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@adamstgbit

his scenario makes no sense. in an unlimited situation, there will never be one superpool that dominates the entire mining space. there are too many billionaires, corporations, sovereign investment funds, and gvts who will get involved to compete in mining.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
question, on a similar topic..

my understanding is that HFM produces empty blocks, because the miner does not know which TXs were included in the previous block until they have actually validated it, so they can't safely include any transactions themselves.

but with xthin or Graphene, miners know the contents of a block without downloading the whole block. so is it possible to do "GFM", Graphene-first mining, in which miners could build another full block on top of a block they haven't necessarily fully validated yet?
 
Last edited: