Gold collapsing. Bitcoin UP.

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@chriswilmer: Yes, that's correct. Sending twice the information takes twice as long over a given channel. So the only way you can remove the propagation time's dependence on block size is if the amount of information sent when the proof of work is solved is constant (i.e., it doesn't depend on the number of transactions in the block). For example, you could send just the block header with the solved PoW along with a reference to the pre-propagated contents of the block.

With such a scheme, the miners need to pre-propagate the entire block contents (including the ordering) ahead of time and come to approximate consensus on that, prior to the proof of work being solved. But they can only come to (pre-)consensus on the block contents at the rate at which they can transmit information to each other, which places a limit on how quickly new transactions can be added to the "pre-consensus." So even if you assume such a scheme exists and assume that miners follow it [1], the "supply curve" will turn upwards at some point for blocks larger than the amount of information that can be transmitted between the hash power in 10 minutes.

[1] We know rational miners won't follow such a scheme because it means they cannot include recent transactions in their block candidates--even if those transactions pay a very high fee--because the miners haven't come to pre-consensus on those new transactions yet. In reality, rational miners will include these new transactions if the fee is greater than the expected loss due to the increased orphaning risk.
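To put rough numbers on that expected loss, here is a minimal back-of-envelope sketch (my own illustration, not a precise model): assume exponential block arrivals with a 600-second mean, an effective channel capacity C, and an orphan probability of roughly 1 - exp(-tau/T) for added propagation time tau.

```python
import math

# Back-of-envelope sketch of the marginal orphan cost of including one
# more transaction in a block candidate. All parameters are illustrative.
BLOCK_INTERVAL = 600.0  # seconds, mean time between blocks

def marginal_orphan_cost(tx_size_bytes, capacity_bps, block_reward_btc):
    """Expected BTC lost to orphaning by making the block one tx larger."""
    extra_prop_time = tx_size_bytes * 8 / capacity_bps          # seconds
    extra_orphan_prob = 1 - math.exp(-extra_prop_time / BLOCK_INTERVAL)
    return block_reward_btc * extra_orphan_prob

# A 250-byte transaction over a 1 Mbps effective channel, 12.5 BTC at stake:
cost = marginal_orphan_cost(250, 1e6, 12.5)
print(f"break-even fee ~ {cost * 1e8:.0f} satoshis")
# A rational miner includes the transaction iff its fee exceeds this cost.
```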
 

chriswilmer

Active Member
Sep 21, 2015
146
431
@Peter R: "miners need to pre-propagate the entire block contents (including the ordering) ahead of time and come to approximate consensus on that"

As long as it's approximate, I would assume that the cost to resolve the error** scales (at least) linearly with the number of transactions... no? Because then you can just (simply) say, in the absence of an exact block pre-propagation scheme, propagation of a block always scales (at least) linearly with the number of transactions, period.

**if "resolve the error" seems ambiguous, I mean that if other miners only know about the block contents approximately, that implies there is some information about the block they don't know... and transmitting that information to them "resolves" the "error" in the approximation, which is necessary for the block to be built on top of.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
BS/Core: "Bitcoin is secure even if 51% of the miners are dishonest. This is why full nodes are so important and why we need to keep the block size small so full nodes are affordable to run."
Weird equivocation on "dishonest."

Dishonest-1: Willing to perform doublespends if they get 51%.

Dishonest-2: Willing to vandalize the network with megablocks if they get 51% (or less?).

Why on earth would miners be willing to be Dishonest-2 without being Dishonest-1? If 51% of the miners were Dishonest-1, Bitcoin would already be broken.

Though I think this Core/BS argument just mistakes BU as having an unlimited block size. Adjustable blocksize settings are exactly what (economically significant) nodes would need to rein in miners, so there should be no objection.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Peter R: "miners need to pre-propagate the entire block contents (including the ordering) ahead of time and come to approximate consensus on that"

As long as it's approximate, I would assume that the cost to resolve the error** scales (at least) linearly with the number of transactions... no? Because then you can just (simply) say, in the absence of an exact block pre-propagation scheme, propagation of a block always scales (at least) linearly with the number of transactions, period.

**if "resolve the error" seems ambiguous, I mean that if other miners only know about the block contents approximately, that implies there is some information about the block they don't know... and transmitting that information to them "resolves" the "error" in the approximation, which is necessary for the block to be built on top of.
I'm not sure how the math works here. My intuition tells me that the minimum amount of extra information (H) needed to be sent with the PoW to "resolve the error" would depend (at least) linearly on the transaction rate (R) [like you said] and inversely on how old the newest transactions in the pre-consensus block were (T). So doubling the transaction rate (R) would double H, but the miners agreeing to double the delay until new transactions could be included in a block (T) would reduce H. It would be interesting to try to work this out properly....
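For what it's worth, the intuition can be written down as a toy function (the constant k is made up; this merely encodes H ~ k*R/T as stated above, it is not a derivation):

```python
# Illustrative only: H grows linearly with the transaction rate R and
# shrinks as the agreed pre-consensus delay T grows.

def extra_info_bits(R_tps, T_delay_s, k_bits_per_tx=32.0):
    """Extra information H sent with the PoW, per the intuition H ~ k*R/T."""
    return k_bits_per_tx * R_tps / T_delay_s

base = extra_info_bits(R_tps=10, T_delay_s=60)
print(extra_info_bits(20, 60) / base)   # doubling R doubles H -> 2.0
print(extra_info_bits(10, 120) / base)  # doubling T halves H  -> 0.5
```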
 

albin

Active Member
Nov 8, 2015
931
4,008
@Roger_Murdock

My absolute favorite gem was Maxwell in spring 2015 with his omnibus anti-blocksize increase gish gallop wall-of-text post on the mailing list, where he argued that tx fees cannot pay for the PoW portion of the cost of making a block.

The same way that my hot dog sales obviously cannot pay for the cart rental portion of my running x number of hot dog carts!
 

chriswilmer

Active Member
Sep 21, 2015
146
431
I'm not sure how the math works here. My intuition tells me that the minimum amount of extra information (H) needed to be sent with the PoW to "resolve the error" would depend (at least) linearly on the transaction rate (R) [like you said] and inversely on how old the newest transactions in the pre-consensus block were (T). So doubling the transaction rate (R) would double H, but the miners agreeing to double the delay until new transactions could be included in a block (T) would reduce H. It would be interesting to try to work this out properly....
We really should focus on turning this argument on its head... rather than arguing that block space has a cost, we should point out how staggeringly unlikely it would be for block space to have 0 cost. There is a long list of resource/time constraints that basic intuition suggests should each (individually!) place a theoretical minimum cost on block space. Maybe the odd item or two on the list could turn out to have no cost floor... but I think the burden of proof should really fall on the person who claims that block space can theoretically have 0 cost (for which there is no empirical evidence).
Or more simply put, if someone says "cars can be manufactured at zero cost"... you could waste a lot of time trying to prove that:

- there's a minimum cost on bolts, which are required to build cars [but your assumption that cars require bolts is fundamentally flawed, one could use glue/fabric/nanotechnology!]
- there's a minimum cost on paint [idiot! paint is not *fundamental* to cars, we could make them without]
- and on, and on...

whereas basic intuition suggests that it must cost *something* to produce a car, and if not, that would be a staggering breakthrough requiring significant proof!
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
Further to my previous post, if the marginal cost were so low that for all practical purposes we could consider it "zero", then that would imply that the cost is so infinitesimally small that even for the worst case (think LukeJr kind of bandwidth problems), the cost is irrelevant. If that were the case, then the whole discussion about node resources and "centralization" would be moot anyway.
@Impulse If it were zero, it would give credence to the notion of the "tragedy of the commons", where miners abuse block space in a race to the bottom to collect the smallest fee. It's not proven that block space is produced at zero cost; in fact there is evidence to suggest there is a cost - there has historically been a high cost - and as @Peter R has shown, the cost can never go to zero.

@Peter R, @Roger_Murdock, @AdrianX, all

Matt Corallo thinks that block space has 0 cost of production, hence block space is not a commodity, hence the marginal cost of adding a tx to a block is 0.

He seems to think that producing a 1 MB block and a 100 MB block has the same cost, as if the amount of information contained in a block weren't correlated with the block propagation time.

Any thoughts?

Matt Corallo invented the centralized relay network that reduced the cost of propagating large blocks by sending just block headers. This in and of itself is an attack on bitcoin, because it destroys the orphan cost that keeps block space limited. As I understand it, it's also one of the technologies that allowed GHASH.io to grow so quickly (they reduced their orphan rate, which was disproportionately high, and it was the tech that allowed them to stop orphaning their own blocks).

Miners adopted it because it gave an instant increase in profit. However, it only gives extra profit in reduced orphan rate if you have a disproportionate advantage; if everyone has the same advantage, competition is equalized (so in a state where everyone has a 1% orphan cost that increases with block size, or everyone has a 0.01% orphan cost that increases with block size, no one is unjustly disadvantaged), and so GHASH.io lost their competitive advantage.

Xthin creates the same phenomenon in that it reduces the orphan cost due to block propagation. It is superior in that it uses the bitcoin network rather than a centralized one, and it is more efficient than the previous sending of full blocks, allowing for more scaling with existing network infrastructure. (I know it competes with centralized relay networks, but I don't know if it's faster - it would be great if it was.)

There is still a cost to block space as I understand it; it's just much smaller, and consists of the following:

Bloom filter - I have no idea how they work in detail, but I understand the more information they contain the bigger they are. @Peter R can confirm whether or not bloom filters are always the same size or adjust in size to the number of transactions described. I understand they increase in size as transaction volume increases, so there is a non-zero cost to relaying a bloom filter - and they are limited by the speed of light, so there will always be a time cost. (See the sketch after this list.)

Thin block - the transactions that still need to be sent. NB: this grows as the network slows due to congestion, so it acts as a safety brake - increasing orphan risk related to block size.

Block header - as far as I know this is a constant size. BS/Core fundamentalists believe the rest of the block is irrelevant and that it's just the order in which the block header is received that determines the block chain - the fact that the centralized relay network is on the Core Road Map leads me to believe they have a limited understanding of how bitcoin incentives work. It's this fundamental belief that leads them to discount any relationship of block size to orphan risk.

Validating transactions - the bigger a transaction, the more CPU time is needed to include it in a block. Miners who want blocks to propagate fast should avoid hashing complicated scripts (or charge a premium to do so to offset the risk); scripts and large transactions increase orphan risk due to the limit on CPU speed. Segwit gives large script hashing an advantage via a discount, distorting the orphan risk associated with sending risky transactions.
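On the Bloom filter question above: standard textbook sizing (not necessarily Xthin's exact implementation) says that for a fixed false-positive rate the optimal filter size grows linearly with the number of elements, so the filter is not constant-size. A quick sketch:

```python
import math

def bloom_filter_bits(n_elements, false_positive_rate):
    """Optimal filter size in bits: m = -n * ln(p) / (ln 2)^2."""
    return -n_elements * math.log(false_positive_rate) / (math.log(2) ** 2)

for n in (1_000, 10_000, 100_000):        # hypothetical mempool sizes
    bits = bloom_filter_bits(n, 0.001)    # 0.1% false-positive rate
    print(f"{n:>7} txns -> {bits / 8 / 1024:6.1f} KiB filter")
```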

So I think Matt is just a kid enjoying the limelight with a limited understanding, and his income depends on that limited understanding and on the fundamentalist view that there is no cost to propagating large blocks, when clearly there is.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
There is semantic space for them to play in by saying that miners can produce a gigantic block and just relay it among themselves within their servers that all sit in a single room or area, so to them it is essentially zero or negligible cost, in a tragedy-of-the-commons way, because it is the rest of the network that will be burdened. The resolution of such a back-and-forth eventually requires mention of nodes of economic importance, or else there's that weird constant sense on their side that nodes are both all-powerful and powerless at the same time.
 

albin

Active Member
Nov 8, 2015
931
4,008
Block header - as far as I know this is a constant size. BS/Core fundamentalists believe the rest of the block is irrelevant and that it's just the order in which the block header is received that determines the block chain - the fact that the centralized relay network is on the Core Road Map leads me to believe they have a limited understanding of how bitcoin incentives work. It's this fundamental belief that leads them to discount any relationship of block size to orphan risk.
If they were actually serious about this argument, then they would be moving to soft-fork in a commitment to a hash of the canonically-ordered UTXO set before it's too late and their inevitable mining-centralization apocalypse makes it impossible to push through. Yet this has been and will continue to be the subject of endless bikeshedding for years to come, which by revealed preference indicates how seriously they care about the issue in reality. Although admittedly there's also a side issue where being able to sync backwards would take away one of their last-ditch arguments against a capacity increase.
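For illustration, a minimal sketch of what such a commitment could look like (a hypothetical toy scheme, not any actual proposal's format): sort the UTXO set canonically and hash it, so every node derives the same digest from the same set.

```python
import hashlib

def utxo_commitment(utxos):
    """utxos: iterable of (txid_hex, vout, amount_sats, script_hex) tuples."""
    h = hashlib.sha256()
    # Canonical ordering: sort by (txid, vout) so all nodes agree.
    for txid, vout, amount, script in sorted(utxos):
        h.update(bytes.fromhex(txid))
        h.update(vout.to_bytes(4, "little"))
        h.update(amount.to_bytes(8, "little"))
        h.update(bytes.fromhex(script))
    return h.hexdigest()

utxos = [("aa" * 32, 0, 50_000, "76a914"), ("bb" * 32, 1, 1_000, "a914")]
print(utxo_commitment(utxos))
```

With such a commitment buried under proof of work, a new node could fetch the UTXO set from untrusted peers and verify it directly, instead of replaying the whole chain.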
 

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
@Impulse It doesn't make sense, and it hasn't made sense since 2012.

Every single argument they make was refuted years ago, and they are fully aware of this, and they continue to make them.

Ergo, they are pathological liars.

You cannot successfully interact with pathological liars as if they are sincere people with a difference of opinion. The only solution that will work is ostracism.

People don't like enacting ostracism, so they try to pretend there are other available options.

There are no viable alternatives though.
Yes. What is true for all churches and religions is also true for the church of the streamblockers:

Article II. — Any participation in church services is an attack on public decency. One should be harsher with Protestants than with Catholics, harsher with liberal Protestants than with orthodox ones. The criminality of being Christian increases with your proximity to science. The criminal of criminals is consequently the philosopher. (Nietzsche)
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Peter R, @chriswilmer, @albin, @Zangelbert Bingledack, @AdrianX:

Excellent analysis and nice read above! A short rehash of what we have all been saying all the time anyways, but it appears that with the repetition comes some more clarity.

One point: on the 'it takes information proportional to the # of txns that piled up in the meantime to propagate those':

I think @Peter R and I discussed this a couple of months ago. What I remember is that this rather appears to be some kind of uncertainty relation, also involving 'transaction pre-commitment assembly cost'. I think Greg even alluded to the idea that - in principle - you could send just a block header and then let the node figure out which transactions (that have been pre-broadcast to all nodes as well, an assumption that fails with a more complex relay policy, see below) have to be assembled in which way to yield the correct Merkle root. He has a point, however:

Of course, you buy that bandwidth reduction to O(1) with a corresponding increase in CPU time of O(2^n), which will be - for any realistic non-zero blocksize and utilization of the Bitcoin network - just completely prohibitive, making this 'block headers and figure it out yourself' approach a theoretical curiosity, but an interesting corner case nonetheless.
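To make that corner case concrete, here is a toy brute-force reconstruction (a simplified Merkle tree over a five-transaction mempool; all names are made up). The number of candidate assemblies grows combinatorially with the mempool size, which is exactly the prohibitive CPU cost described above:

```python
import hashlib
from itertools import combinations, permutations

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txs):
    level = [sha256d(t) for t in txs]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])  # duplicate the odd leaf, as bitcoin does
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def reconstruct(header_root, mempool):
    """Brute force: try every subset size, subset, and ordering."""
    for k in range(1, len(mempool) + 1):
        for subset in combinations(mempool, k):
            for ordering in permutations(subset):  # k! orderings per subset
                if merkle_root(ordering) == header_root:
                    return ordering
    return None

# Feasible for 5 transactions, hopeless for a real mempool: the search
# space grows faster than 2^n.
mempool = [f"tx{i}".encode() for i in range(5)]
target = merkle_root((mempool[3], mempool[0], mempool[4]))
print(reconstruct(target, mempool))
```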

The scheme will also break down when the n in O(2^n) approaches the number of bits in the hashes of the Merkle tree - at some block size, there are likely several orderings of transactions that produce the same hash - which we'll never know, because we'll never be able to enumerate them.

The question then becomes what the fundamental trade-off here is in terms of information theory, and I am not well-versed enough to know whether an analysis exists.

I think it won't be a trade-off just in CPU time and bandwidth, it will also be a trade-off in error rate. Because if you think about it, the above example of a hash collision would be an - admittedly extremely unlikely but theoretically possible - failure of the hash as an error detection scheme.

Does anyone know anything else about this?

Furthermore: I am pondering now whether the relay policy Core is intending to implement (forward just transactions that are likely to also be mined) is in principle a bad one, like some folks on rBtc seem to insinuate. On one hand, I think there's no need to change it (a simple constant that can be changed on the command line) - if not for the artificially created problem of full blocks now.

On the other, I think with an open-ended blocksize, this relay policy will likely drift towards a sane value anyways. So I don't think such a policy is harmful as such. It is also something a node operator can configure like s/he wants to anyways.

Yet, I can see that in the future, in a successfully on-chain-scaled Bitcoin, this parameter might actually become completely unnecessary: the bandwidth, CPU and BTC required on the attacking side to produce valid transactions, plus the rejection of conflicting ones in the mempool, might be enough of a discouragement against flooding attacks anyways.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Interesting estimates and points from Ryan X. Charles:

[Bitcoin-development] No Bitcoin For You
Ryan X. Charles ryanxcharles at gmail.com
Sun May 17 02:31:10 UTC 2015


I agree with this analysis. I'm not sure if we will increase the 1 MB
block size or not, but with a block size that small, it is all but
impossible for most people on the planet to ever own even a single utxo.

At 7tps, how long would it take to give 1 utxo to all of the 7 billion
people currently alive? It would take 1 billion seconds, or about 32
years.[1] So for all practical purposes, at 1 MB block size, far less
than 1% of people will ever be able to own even a single satoshi.
Unless those people are willing to wait around 30 years for their
lightning network to settle, they will either not use bitcoin, or they
will use a substitute (such as a parallel decentralized network, or a
centralized service) that lacks the full trust-minimized security
guarantees of the main bitcoin blockchain.

I can't speak for most people, but for me personally, the thing I care
most about as an individual (besides being able to send bitcoin to and
from anyone on the planet) is being able to validate the blockchain.
With a pruning node, this means I need to download the blockchain one
time (not store it), and maintain the utxo set. The utxo set is,
roughly speaking, 30 bytes per utxo, and therefore, at one utxo per
person, about 7*30 billion bytes, or 210 GB. That's very achievable on
the hardware of today. Of course, some individuals or companies will
have far more than one utxo. Estimating an average of ten utxos per
person, that will be 2.1 TB. Also very achievable on the hardware of
today.

I don't think every transaction in the world should be on the
blockchain, but I think it should be able to handle (long-term) enough
transactions that everyone can have their transactions settled on a
timescale suitable for human life. 30 years is unsuitable, but 1 day
would be pretty good. It would be great if I could send trillions of
transactions per day on networks built on top of bitcoin, and have my
transactions settle on the actual blockchain once per day. This means
we would need to support about 1 utxo per person per day, or 7 billion
transactions per day. That translates to about 81 thousand
transactions per second [2], or approximately 10,000 times the current
rate. That would be 10 GB per ten minutes, which is achievable on
current hardware (albeit not yet inexpensively).

Using SPV security rather than pruning security makes the cost even
lower. A person relying on SPV would not have to download every 10 GB
block, but only their transactions (or a small superset of them),
which is already being done - scaling to 7 billion people would not
require that SPV nodes perform any more computation than they already
do. Nonetheless, I think pruning should be considered the default
minimum, since that it what is required to get the full
trust-minimized security guarantees of the blockchain. And that
requires 10 GB blocks (or thereabouts).

The number of people on the planet will also grow, perhaps to 14
billion people in the next few decades. However, the estimates here
would still be roughly correct. 10 GB blocks, or approximately so,
allows everyone in the world to have their transactions settled on the
blockchain in a timely manner, whereas 1 MB blocks do not. And this is
already achievable on current hardware. The most significant cost is
bandwidth, but that will probably become substantially less expensive
in the coming years, making it possible for everyone to inexpensively
and securely send and receive bitcoin to anyone else, without having
to use a parallel network with reduced security or rely on trusted
third parties.
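A quick sanity check of the arithmetic in the quoted post (assuming ~30 bytes per UTXO as stated, and a typical ~250 bytes per transaction):

```python
SECONDS_PER_YEAR = 365 * 24 * 3600
people = 7e9

print(people / 7 / SECONDS_PER_YEAR)  # 1 utxo each at 7 tps: ~31.7 years

print(people * 30 / 1e9)              # pruned-node utxo set: ~210 GB
print(people * 10 * 30 / 1e12)        # at ten utxos per person: ~2.1 TB

tps = people / 86_400                 # 1 tx per person per day
print(tps)                            # ~81,000 tps
print(tps * 600 * 250 / 1e9)          # ~12 GB per 10-minute block,
                                      # in the ballpark of the 10 GB figure
```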
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
@Norway +1 Ryan Charles. Good to see these numbers being run.

@sickpig

Bitcoin Transaction Fees Are Up More Than 1200% in Past Two Years

Hardly worth reading the article, as the headline says it all. I feel like I'm powerless, watching a completely avoidable slow-motion train wreck. There are lunatics in the engine room. Shame the cargo is vitally needed aid and protection for the billions of financially persecuted people on the planet.

It's worth noting that with each incremental increase in transaction fees, we are pricing out use cases for Bitcoin, each with its own hugely important network effects. Bitsquare and Open Bazaar become basically useless when we have backlogs like this. So much for "keeping bitcoin decentralised".

I was thinking an infographic might be nice to show how many potential users/services we exclude with each incremental increase in fees: spam, dust, microtransactions, tipping, (we are here), P2P cash, betting, personal trading, currency remittance, stocks, digital gold. I'd argue that once you've priced out P2P cash, the most useful market, a cascade effect will occur and users will go to a more useful coin.

Current optimal fee is 0.00135735 BTC from Block Trail. THAT'S $1.50 :mad:

The question remains: why have so few miners acted? I mean, if 2 years is not enough time to understand what's going on here, will they ever? I'm concerned they have become blinded by the temporarily increased fees they are pocketing. It's super short-term thinking, but given the life cycle of the average miner that's hardly surprising. Are we seriously going to have to wait for a whole new generation of pools and miners before this situation gets resolved?
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
@awemany & @sickpig I consider myself an observer in bitcoin; it took about 2 years of reading and observing before I started asking questions and commenting.

I have to admit I have in the past been ideologically opposed to the Relay Network and ideas like Xthin, and I've held the belief that the cost to send every transaction twice, by re-sending it in a block, was the minimum cost to maintain the integrity of the incentives that kept the orphan risk functioning as a deterrent to increasing block size.

I have come to accept that bitcoin is not preserved by ideology but by incentives, and the Relay Network provided an incentive that is dangerous and, once adopted, couldn't be undone. Xthin, on the other hand, challenged my beliefs in that it provided a theoretical 100% efficiency upgrade on the existing infrastructure and made the Relay Network almost obsolete. In addition, it empowered bitcoin nodes to relay transactions and blocks in place of a centralized server, strengthening the decentralized nature of the network.

So I've been wrestling with Xthin as a good thing: but how do we maintain the orphan risk? Often I fall victim to the notion that there is no cost to relay a block and therefore no orphan risk, and that this would result in block space abuse and a tragedy of the commons.

But as others have pointed out in the past, there are costs to making bigger blocks, as well as costs to incorporating nonstandard transactions, large scripts or big transactions that increase orphan risk. So while the costs are smaller than the orphan risk of the past, they are significant and not practically 0.

One point that was missed: @theZerg also pointed out another interesting aspect of the bitcoin network. In his paper on Single Transaction Blocks he shows that mining empty blocks as a result of a large block creates a market-driven transaction limit for the network, which in turn increases fee pressure that curbs demand for transactions, reducing transaction volume and block size. It works like a rev limiter on a high-performance sports car. When a block is mined that is bigger than the network can handle, and other miners can't verify it before finding a new block, they build an empty block; that empty block propagates very fast on top of the previous large block, reducing network transaction capacity and securing the network.
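Here is a toy sketch of that rev-limiter effect (my own simplification with made-up parameters, not the model from @theZerg's paper): while a miner is still validating a big parent block, any block it solves in the meantime is empty.

```python
import random

VALIDATION_RATE_MB_S = 8.0   # MB of block data validated per second (made up)
MEAN_BLOCK_TIME_S = 600.0

def next_block_size(prev_block_mb):
    """Size of the block a miner produces after receiving the previous one."""
    time_to_solve = random.expovariate(1 / MEAN_BLOCK_TIME_S)
    validation_time = prev_block_mb / VALIDATION_RATE_MB_S
    if time_to_solve < validation_time:
        return 0.0                  # solved before validating: empty block
    return random.uniform(1, 1000)  # otherwise, whatever demand supplies

random.seed(7)
sizes = [10.0]
for _ in range(10_000):
    sizes.append(next_block_size(sizes[-1]))
print(f"empty blocks: {sizes.count(0.0) / len(sizes):.1%}")
# Bigger parents make empty children more likely, throttling throughput.
```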

Small block fundamentalists say miners never orphan their own blocks and that therefore the network will fork and fail. We have empirical evidence that this is not true: on 4 July 2015, header-only miners were forced to orphan their own blocks, and did so willingly in order to stay connected to the majority network. In fact there are forks in excess of 500 blocks that have been orphaned.

Small block fundamentalists also say that if 51% of miners collude to mine huge blocks that the other 49% can't validate, then bitcoin will centralize. This is a very weak version of the 51% attack; it's not cost-effective, as there is no incentive to do it, and the risk of loss of income for a miner and the degradation of the network would result in a lose-lose for all involved.

Small block fundamentalists also say that this won't work when the block subsidy reaches zero - I don't think that's the case. If I was mining and found a block that would signal a transaction limit to the network, and I could propagate it at no cost, I would not turn off my mining equipment and give up mining; I'd send it out and try to get the next block. Even if their argument were true, a transaction limit could be agreed 100 years in the future, 10-40 years in advance of the diminishing block reward, and programmed to activate when the block reward has depreciated.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Currently, 4.3% of the total money supply is stuck in the mempool.

EDIT: And growing fast. 720k bitcoin in mempool now.
EDIT2: 750k now, less than a minute later. This is crazy!
EDIT3: 780k, just after writing EDIT2!!!
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Whatever Blockstream promised miners to keep them onboard in previous years, I cannot see it holding up much longer.

Fees through the roof, users getting screwed by poor performance - the name of Bitcoin is being dragged through the mud in cause-effect fashion due to Blockstream (and by proxy Core) policies.

How long this can go on is anyone's guess, but business adoption of Bitcoin must be approaching glacial pace if the glaciers are not already melting. It'll be interesting to see if large investors will pull this carriage out of the mud - but I don't believe it. Would they be sold on Bitcoin's inability to grow or what?
 