Gold collapsing. Bitcoin UP.

albin

Active Member
Nov 8, 2015
931
4,008
As a corollary, we're told that variance drives centralization of mining, yet at the same time miners are going to chase variance through theoretical attacks. All the while completely ignoring that miners might achieve their business goals by financializing risk, like every other commodity production market in existence!
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
these types of mining attacks also completely ignore the fact that they can't be executed repeatedly w/o everyone else reacting to defend against them.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
@Peter R I just had a thought which I want to run by our resident theorist.

Let's first run through the mining algorithm:

Basically miners first receive a block header and start mining a 0-tx block on top of it. They can't add any txns in because they don't know what txns were "used" by the block they have not yet received.

Next they start receiving and validating the block. When that is complete they start mining a non-0 tx block on top of it to get as many txn fees as possible.

Therefore the average tx/second is essentially defined by the miner's network and validation capacity.

No limits required. If a miner produces a huge block, there will be a few 0-tx blocks after it if that huge block exceeds the average capacity of the network to process it. As miners upgrade their network or sig validation infrastructure, the bitcoin network as a whole will "naturally" produce fewer 0-tx blocks, resulting in a higher throughput!!!

This seems like a much more powerful mechanism curtailing the average bandwidth as compared to the fork mechanism. But the network as a whole (the users) do not care (much) whether we had 1 block with 10k tx and then 2 with 0, or 3 with 3.3k tx. In fact, the former is "better" because more txns get more confirmations sooner and the mempool clears out.

It seems like we should be able to put some math behind that and also look at past 0-txn block history to see the effect happening "live".
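
As a starting point for the math, something like this back-of-the-envelope sketch (the 0.8 s/MB handling time and 2,500 tx/MB density are placeholder assumptions, not measurements):

Code:
# Toy model: a block of S MB ties the network up for S * SECS_PER_MB seconds
# (receive + validate); any blocks found during that window are 0-tx.
# Assumes Poisson block arrivals with a 600 s mean interval.
BLOCK_INTERVAL = 600.0
SECS_PER_MB = 0.8        # placeholder handling time per MB
TXS_PER_MB = 2500.0      # placeholder transaction density

def cycle(block_size_mb):
    handling = block_size_mb * SECS_PER_MB
    expected_empty = handling / BLOCK_INTERVAL                      # expected 0-tx followers
    throughput = (block_size_mb * TXS_PER_MB) / (BLOCK_INTERVAL + handling)
    return expected_empty, throughput

for size_mb in (1, 8, 100, 1000):
    empty, tps = cycle(size_mb)
    print(f"{size_mb:>5} MB: {empty:.3f} expected 0-tx followers, {tps:7.1f} tx/s")

# As block size grows, tx/s approaches TXS_PER_MB / SECS_PER_MB -- i.e. average
# throughput is capped by the network's own handling capacity, with no explicit
# limit needed.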
 

albin

Active Member
Nov 8, 2015
931
4,008
in his latest Epicenter Bitcoin on NG, he says that a patch has been added to Core code to prevent SM altho i'm not sure what exactly that is.
From what I understand, the idea is to have all transactions specify an nLockTime to prevent them being included further back in the chain in the event of a big re-org, although this really only addresses the currently fashionable concern that, way into the future when tx fees are more significant, there is supposedly a strong incentive to keep re-orging to take fees instead of moving forward in height. I'm finding it extraordinarily difficult to find any discussion about this whatsoever, though.
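
If I'm reading it right, the mechanism is roughly this (an illustrative sketch of the idea only, not the actual Core patch):

Code:
# Give every newly created transaction an nLockTime equal to the current
# tip height, so after a re-org it cannot be mined into any block at that
# height or below -- it can only move "forward".
def build_tx(inputs, outputs, tip_height):
    return {
        "vin": inputs,
        "vout": outputs,
        "locktime": tip_height,   # tx is only valid in blocks above this height
    }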
 
  • Like
Reactions: majamalu

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
NG is not worth the risk and disruption.
Thin blocks are going to capture a very significant fraction of the gains that NG produces and IBLT would pretty much wrap up the rest of it.

Both of those changes are easier to deploy than NG because the changes they require to existing software are less extensive.


Sure, NG looks better than Bitcoin as it's currently implemented, but it's going to be difficult to make the case that NG is superior to other available scaling solutions on a cost/benefit basis.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
albin said:
I feel like there's a whole world of real economic considerations relating to time preference and risk aversion that might actually turn out to be way more involved than any of these theoretical attack scenarios being presented.
albin said:
As a corollary, we're told that variance drives centralization of mining, yet at the same time miners are going to chase variance through theoretical attacks. All the while completely ignoring that miners might achieve their business goals by financializing risk, like every other commodity production market in existence!
Great posts, @albin! I'm glad you've joined the forum.

There is probably some way to quantify the trade-off between higher variance + higher expected return versus lower variance + lower expected return by considering the time value of money.

For example, the present value of a bitcoin that I will receive a month from now is less than that of a bitcoin I receive today. If the effective annual interest rate is 12%, then it's worth about 1% less.

From the mathematical perspective, rather than calculating expectation values over some static probability distribution, we would integrate over time as well by introducing the concept of an interest rate. We'll probably be able to show that higher-variance plays are only profitable in low-interest-rate environments. Similarly, if interest rates in the bitcoin economy are high, then investors will rationally favour lower-variance plays.

I haven't seen much work done on trying to quantify an "effective interest rate" for the Bitcoin economy, but it would be an interesting research paper. I suspect it is probably well over 10%.
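
To make the discounting concrete, here is a tiny numerical sketch (the 12% rate and the payout streams are purely illustrative):

Code:
# Present value of 1 BTC received t months from now at a 12% effective
# annual rate: about 1% less after one month, as noted above.
ANNUAL_RATE = 0.12

def present_value(amount_btc, months_from_now):
    return amount_btc / (1.0 + ANNUAL_RATE) ** (months_from_now / 12.0)

print(round(present_value(1.0, 1), 4))     # ~0.9906

# A steady stream vs. a back-loaded lump: at 12%, a single payout of
# 12.6 BTC at month 12 has roughly the same present value as 1 BTC per
# month for a year -- the lumpier play needs ~5% more coin to break even.
steady = sum(present_value(1.0, m) for m in range(1, 13))
lumpy = present_value(12.6, 12)
print(round(steady, 2), round(lumpy, 2))   # ~11.29 vs ~11.25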
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
theZerg said:
@Peter R I just had a thought which I want to run by our resident theorist.

Let's first run through the mining algorithm:

Basically miners first receive a block header and start mining a 0-tx block on top of it. They can't add any txns in because they don't know what txns were "used" by the block they have not yet received.

Next they start receiving and validating the block. When that is complete they start mining a non-0 tx block on top of it to get as many txn fees as possible.

Therefore the average tx/second is essentially defined by the miner's network and validation capacity.

No limits required. If a miner produces a huge block, there will be a few 0-tx blocks after it if that huge block exceeds the average capacity of the network to process it. As miners upgrade their network or sig validation infrastructure, the bitcoin network as a whole will "naturally" produce fewer 0-tx blocks, resulting in a higher throughput!!!

This seems like a much more powerful mechanism curtailing the average bandwidth as compared to the fork mechanism. But the network as a whole (the users) do not care (much) whether we had 1 block with 10k tx and then 2 with 0, or 3 with 3.3k tx. In fact, the former is "better" because more txns get more confirmations sooner and the mempool clears out.

It seems like we should be able to put some math behind that and also look at past 0-txn block history to see the effect happening "live".
but the miniblockers will scream that in the short term, full nodes are having to store a bunch of exablocks. your miner hardware-upgrade check is one that plays out gradually, over the longer term.

i understand this is the #1 major concern against no limit. even Gavin used it yesterday in one of the reddit BU threads. i guess the question is how likely it is that an irrational attacker would construct such a self-made bloat block and then mine it. a gvt/bank would have to buy significant hardware to get such an attack block into the chain within a reasonable amount of time. i don't believe any rational miners would do this as it would destroy the very system they've invested in.
 
Last edited:
  • Like
Reactions: Norway and majamalu

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@cypherdoc

That makes me think the user-selectable blocksize limit aspect of BU may be indispensable after all.

I was thinking that it was duplicative: the network incentives (orphan risk) regulate blocksize but then also nodes do, too. It seemed inelegant. Why not instead just have no cap and let the market work? However, non-mining nodes need to be given a voice since they don't mine; the inelegance in the duplicative market process seems like it's really just a reflection of the basic suboptimal situation of having non-mining nodes at all.

Until we fix that, we may need such an inelegant, duplicative market process where both miner cost/benefit analysis and node permissiveness function as market parameters simultaneously.

In other words, it may be that user-selectable limits are required in order to give non-mining nodes a voice in lieu of the original voice they were meant to have through mining, and that any perceived clunkiness of that is just the clunkiness of the situation of nodes being disconnected from mining showing through.

Then if, for instance, a government maliciously mines a bloat block, the nodes don't just have to rely on the probability that miners will refuse to build on top of it, but they themselves can refuse to contribute to its propagation (and the propagation of any other blocks over the limit they specify), which in turn increases the probability that miners will refuse to build on it because miners will notice what size of blocks the nodes do and don't propagate.
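
A minimal sketch of that relay policy (hypothetical field and function names, not BU's actual code):

Code:
# Each node operator picks their own size limit; the node simply declines
# to relay blocks above it. Miners can observe what sizes actually propagate.
from dataclasses import dataclass

@dataclass
class NodePolicy:
    excessive_block_size: int = 8_000_000   # bytes, user-selectable

def should_relay(block_size_bytes: int, policy: NodePolicy) -> bool:
    return block_size_bytes <= policy.excessive_block_size

policy = NodePolicy(excessive_block_size=8_000_000)
print(should_relay(2_000_000, policy))     # True  -> propagate it
print(should_relay(900_000_000, policy))   # False -> a bloat block stops here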
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
theZerg said:
@Peter R I just had a thought which I want to run by our resident theorist.

Let's first run through the mining algorithm:

Basically miners first receive a block header and start mining a 0-tx block on top of it. They can't add any txns in because they don't know what txns were "used" by the block they have not yet received.

Next they start receiving and validating the block. When that is complete they start mining a non-0 tx block on top of it to get as many txn fees as possible.

Therefore the average tx/second is essentially defined by the miner's network and validation capacity.

No limits required. If a miner produces a huge block, there will be a few 0-tx blocks after it if that huge block exceeds the average capacity of the network to process it. As miners upgrade their network or sig validation infrastructure, the bitcoin network as a whole will "naturally" produce fewer 0-tx blocks, resulting in a higher throughput!!!

This seems like a much more powerful mechanism curtailing the average bandwidth as compared to the fork mechanism. But the network as a whole (the users) do not care (much) whether we had 1 block with 10k tx and then 2 with 0, or 3 with 3.3k tx. In fact, the former is "better" because more txns get more confirmations sooner and the mempool clears out.

It seems like we should be able to put some math behind that and also look at past 0-txn block history to see the effect happening "live".
I think it makes a lot of sense. I spent one Sunday morning working on this idea and then documenting my findings on Cypher's old thread. Here is the post:

https://bitcointalk.org/index.php?topic=68655.msg11791889#msg11791889

In hindsight, the wording I used in places is awkward; for example, I refer to the time to "process a block" when I should have written "receive and process" to make it clear that this time includes propagation delay.

This was actually the post that marked the beginning of my battle with Gmax. Noosterdam submitted it to /r/bitcoin where it was up-voted to the top post:

https://www.reddit.com/r/Bitcoin/comments/3c579i/yesterdays_fork_suggests_we_dont_need_a_blocksize/

Gmax then comes in and says "I think what your post (and this reddit thread) have shown is that someone can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing."

I was actually upset for a few days, thinking that I was in fact missing something or that maybe I shouldn't post my ideas publicly until they'd gone through peer review--despite receiving a few PMs and emails saying Gmax was out of line. Anyways, I eventually became more motivated than ever and started working on the idea in more detail over the following weeks. Somehow the work morphed into what became my fee market paper (which Gmax fought against even harder) and I sort of dropped the original idea, hoping to come back to it later.

JorgeStolfi (despite what some may think of him) has actually written some intelligent comments on this idea too.
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
Zangelbert Bingledack said:
@cypherdoc

That makes me think the user-selectable blocksize limit aspect of BU may be indispensable after all.

I was thinking that it was duplicative: the network incentives (orphan risk) regulate blocksize but then also nodes do, too. It seemed inelegant. Why not instead just have no cap and let the market work? However, non-mining nodes need to be given a voice since they don't mine; the inelegance in the duplicative market process seems like it's really just a reflection of the basic suboptimal situation of having non-mining nodes at all.

Until we fix that, we may need such an inelegant, duplicative market process where both miner cost/benefit analysis and node permissiveness function as market parameters simultaneously.

In other words, it may be that user-selectable limits are required in order to give non-mining nodes a voice in lieu of the original voice they were meant to have through mining, and that any perceived clunkiness of that is just the clunkiness of the situation of nodes being disconnected from mining showing through.

Then if, for instance, a government maliciously mines a bloat block, the nodes don't just have to rely on the probability that miners will refuse to build on top of it, but they themselves can refuse to contribute to its propagation (and the propagation of any other blocks over the limit they specify), which in turn increases the probability that miners will refuse to build on it because miners will notice what size of blocks the nodes do and don't propagate.
but it only takes one bloat block to screw things up.

in effect, we'd have to ship BU with a predefined limit of our choosing (central planning) in order to prevent relay of such a block.

i think it comes down to the probability of getting the bloat block relayed before another miner finds a small avg-size block. the bloat block can't be too big, ie, take close to 10 min or more to validate, since then it would very likely be orphaned. it's the bloat blocks smaller than that, taking somewhere btwn the current avg of ~5 sec and some time <10 min, that would be problematic, since on avg only one block is produced every 10 min that could come along and orphan it.

and then there's that relay network, which gums up the theory even more since it relays txs and blocks w/o full validation.
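
Rough numbers for that race, assuming Poisson block arrivals (one every ~10 min on average); the handling times are illustrative:

Code:
import math

def p_orphan_race(handling_secs, mean_interval=600.0):
    # Probability that at least one competing block is found while the
    # bloat block is still being received/validated.
    return 1.0 - math.exp(-handling_secs / mean_interval)

for secs in (5, 30, 120, 300, 600):
    print(secs, "s:", round(p_orphan_race(secs), 3))
# ~0.008 at 5 s, ~0.049 at 30 s, ~0.18 at 2 min, ~0.39 at 5 min, ~0.63 at 10 min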
 
Last edited:

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
EDIT: @Peter R Yes exactly! I think that you should formalize that posting and print it as chapter 2 of your paper. I think that we will find that the public is a lot more receptive to it now...

Especially if we look back through the blockchain history for 0-tx blocks; I will write a quick script.
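
Something along these lines should do it (untested sketch; assumes a local bitcoind with JSON-RPC enabled, and the URL/credentials below are placeholders):

Code:
# Count coinbase-only (0-tx) blocks and note the size of the block each
# one follows, over the last few thousand blocks.
import requests

RPC_URL = "http://127.0.0.1:8332"
AUTH = ("rpcuser", "rpcpassword")          # placeholders

def rpc(method, *params):
    resp = requests.post(RPC_URL, auth=AUTH,
                         json={"jsonrpc": "1.0", "id": "scan",
                               "method": method, "params": list(params)})
    resp.raise_for_status()
    return resp.json()["result"]

tip = rpc("getblockcount")
prev_size = None
for height in range(tip - 5000, tip + 1):
    block = rpc("getblock", rpc("getblockhash", height))
    if len(block["tx"]) == 1 and prev_size is not None:    # coinbase only
        print(f"height {height}: 0-tx block; previous block was {prev_size} bytes")
    prev_size = block["size"]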

cypherdoc said:
but the miniblockers will scream that in the short term, full nodes are having to store a bunch of exablocks. your miner hardware-upgrade check is one that plays out gradually, over the longer term.

i understand this is the #1 major concern against no limit. even Gavin used it yesterday in one of the reddit BU threads. i guess the question is how likely it is that an irrational attacker would construct such a self-made bloat block and then mine it. a gvt/bank would have to buy significant hardware to get such an attack block into the chain within a reasonable amount of time. i don't believe any rational miners would do this as it would destroy the very system they've invested in.
In theory a monstrous "excessive block" would be followed by zero-tx ones (assuming that the excessive block really is too much for most miners to handle). So very quickly the average would descend back down. Your network throughput would be no worse than if that data was broken into N blocks, because that's your natural or artificial (traffic-shaping) limit.

It is true that, in theory, a rogue miner could DoS the network for free by producing a huge block of pay-to-self transactions. But in this case other issues will likely apply -- for example, miners should only be willing to mine on top of an unverified block for a certain amount of time. Beyond that it becomes dangerous, because the provided hash could be bogus. However, from this perspective a "sanity check" limit (like BIP101) does make sense... but the key point is that it is a sanity check. The network will use the 0-tx mechanism to naturally limit bandwidth to below what the mining majority can handle, so if BIP101 "misses" on the high side it's not really a problem.

But the reasonable case is more important than the extreme case: the likelihood of the network producing a 0-tx block after a slightly excessive block rises only a little, because it takes only a little more time to receive and process it.
 
Last edited:
  • Like
Reactions: majamalu

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Zangelbert Bingledack said:
@cypherdoc

That makes me think the user-selectable blocksize limit aspect of BU may be indispensable after all.

I was thinking that it was duplicative: the network incentives (orphan risk) regulate blocksize but then also nodes do, too. It seemed inelegant. Why not instead just have no cap and let the market work? However, non-mining nodes need to be given a voice since they don't mine; the inelegance in the duplicative market process seems like it's really just a reflection of the basic suboptimal situation of having non-mining nodes at all.

Until we fix that, we may need such an inelegant, duplicative market process where both miner cost/benefit analysis and node permissiveness function as market parameters simultaneously.

In other words, it may be that user-selectable limits are required in order to give non-mining nodes a voice in lieu of the original voice they were meant to have through mining, and that any perceived clunkiness of that is just the clunkiness of the situation of nodes being disconnected from mining showing through.

Then if, for instance, a government maliciously mines a bloat block, the nodes don't just have to rely on the probability that miners will refuse to build on top of it, but they themselves can refuse to contribute to its propagation (and the propagation of any other blocks over the limit they specify), which in turn increases the probability that miners will refuse to build on it because miners will notice what size of blocks the nodes do and don't propagate.
Completely agree; however, I see this as elegant. You said "the network incentives (orphan risk) regulate blocksize but then also nodes do, too" and you referred to this as "duplicative." I see the actions of nodes as a cohesive part of what defines what the orphan risk actually is. The curve that describes the probability of orphaning versus block size will thus be partly a result of technical limitations (bandwidth, latency, etc.) and partly an emergent phenomenon based on the transport rules (for large blocks) that each node implements individually.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@Peter R

Agreed. The apparently duplicative user-selected limit proposal only has the appearance of inelegance because of the underlying messed-up-ness of validation becoming unlinked from mining, but it is actually an elegant way of addressing that problem.

Peter R said:
This was actually the post that marked the beginning of my battle with Gmax. Noosterdam submitted it to /r/bitcoin where it was up-voted to the top post:

https://www.reddit.com/r/Bitcoin/comments/3c579i/yesterdays_fork_suggests_we_dont_need_a_blocksize/

Gmax then comes in and says "I think what your post (and this reddit thread) have shown is that someone can throw a bunch of symbolic markup and mix in a lack of understanding and measurement and make a pseudo-scientific argument that will mislead a lot of people, and that you're willing to do so or too ignorant to even realize what you're doing."
I remember reading with interest how flashy and righteously indignant Gmax's response was. Your post was just a single post in a giant thread, not even its own separate post, so you clearly were just floating it as a tentative idea and not as any kind of authoritative propaganda that would be in a position to "mislead a lot of people." Then the reddit OP title only said "Yesterday's hard fork suggests we don't need a blocksize limit" (not "proves" or "demonstrates" or even "shows"), which again indicates the idea was only being floated.

Yet Gmax chose to interpret it as a threat and go into full damage-control mode, complete with his very best effort at a nuclear-grade authoritative tone, perhaps because it was heavily upvoted. My impression is that it was being upvoted as an interesting possibility, so I was surprised he took so much offense and wondered if perhaps you had struck a nerve. He did say he would be correcting people misled by this for years to come, which shows he at least thinks the argument has a dangerous plausibility to it. Dangerously plausible notions are often wrong, but perhaps as often they are so plausible because they are very close to the truth.

It's not an unexpected reaction from someone whose job as chief code optimizer, and whose business and its projects, would be made much less crucial if the argument caught on. Also recall that Gmax has a habit of using any small technical error he can latch onto to dismiss everything someone has written (charitably, this is perhaps due to years of dealing with bad arguments from newbs), and that he exhibited a similar pattern in his initial dismissal of Bitcoin for technical reasons that turned out to be misconceived.

Here he again may have dismissed something very close to the truth just because of a technicality (real or imagined). Someone in his position doing that, whether consciously aware they were being petty or not, would be understandably panicked and want to include the most forbidding-sounding language as support.

The argument in general points toward cutting the central planners out of the loop, and his reaction there reminded me of how a vampire's response to daylight is depicted in movies. Not that this necessarily means anything :D
 
Last edited:

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
Justus Ranvier said:
Thin blocks are going to capture a very significant fraction of the gains that NG produces and IBLT would pretty much wrap up the rest of it.

Both of those changes are easier to deploy than NG because the changes they require to existing software are less extensive.


Sure, NG looks better than Bitcoin as it's currently implemented, but it's going to be difficult to make the case that NG is superior to other available scaling solutions on a cost/benefit basis.
This is absolutely right.

When I worked with some New Yorkers, they had a favorite saying about over-engineered solutions: "It's building a rocket-ship to go to Staten Island."

NG looks like a rocket-ship compared to the speed-boat of thin blocks (or the teleportation device of IBLT).
 
  • Like
Reactions: majamalu

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Peter R, @Zangelbert Bingledack: I have heard people saying 'Block generation is basically just a time-stamping mechanism'.

Although this doesn't really explain a lot about the detailed economics of blocksize, it does put the whole discussion into quite a different frame of reference. I like to think that framing the operation of Bitcoin as 'just a big, decentralized time stamping service with incentives' also clears the mind to see which optimizations could be possible and how.

Greg's imagined attacks on efficient block propagation feel a lot like chasing ghosts - but very weirdly, we haven't yet found the definite reason why that is so. Yet, every concern in this direction eventually falls apart when inspected further. To an extent, I believe this is Satoshi's genius: designing an incentive system and protocol against which various attacks can be imagined, but which always seem to end up being proven impossible. Almost as if seeing the whole system is beyond our collective cognitive limits to grasp - and to fully see why such attack schemes all (seem to?) fail. I wonder whether Satoshi understood his incentive scheme on some level - or whether he got it right just by excellent intuition.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
awemany said:
Greg's imagined attacks on efficient block propagation feel a lot like chasing ghosts - but very weirdly, we haven't yet found the definite reason why that is so.
There are a lot of imagined problems with Bitcoin that are just semantics.

Might there be a clue in the fact that, at this very moment, because of the possibility that a block being built on will actually be orphaned, what is actually the longest chain - in other words, what is actually to be called the "Bitcoin" blockchain - is undefined or at least subject to change? And in the fact that what constitutes the community of miners and nodes changes depending on how block propagation goes? (Or would in the case of Bitcoin Unlimited.)

What if all the attacks only gain their plausibility from this kind of semantic blur obfuscating the reason they wouldn't work? That could be the systematic mechanism you're looking for.
 
  • Like
Reactions: majamalu

Melbustus

Active Member
Aug 28, 2015
237
884
awemany said:
...
I wonder whether Satoshi understood his incentive scheme on some level - or whether he got it right just by excellent intuition.
I think the valuable intuition was really that Satoshi relied on free markets to work out the details wherever possible.

I've said this before, but look at emission halvings. Just bluntly cutting new supply in half every 4 years looks absolutely insane to a macro-economist ("think of the shocks it'll create!"), and yet the market handles it without the system falling apart. Plenty of people argued essentially the same thing when the price went under $300 in January: "the shock will cause a diff-drop death spiral!" Um, no, free markets actually work.
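
For reference, the emission rule itself is just this simple (simplified sketch; Core does it in integer satoshis):

Code:
# Block subsidy: 50 BTC, halved every 210,000 blocks (~4 years).
HALVING_INTERVAL = 210_000

def subsidy(height):
    halvings = height // HALVING_INTERVAL
    return 50.0 / (2 ** halvings) if halvings < 64 else 0.0

for h in (0, 210_000, 420_000, 630_000):
    print(h, subsidy(h))   # 50.0, 25.0, 12.5, 6.25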
 
