Gold collapsing. Bitcoin UP.

rocks

Active Member
Sep 24, 2015
586
2,284
Honestly, if I think about it, even for peace of mind, I'd rather let my node follow the valid chain with the most cumulative hashpower than kick out blocks that would otherwise be valid.

Thoughts?
That is my view. If my node can't keep up, I'd rather it just automatically fall off the P2P network. Then my options would be either 1) invest more resources to run it or 2) decide to stop running a full node altogether. I would rather take one of those two options than hold back the network from fully functioning.
@rocks

I haven't studied thin blocks yet. Can you summarize them quickly?

Offhand it doesn't sound like IBLT plus fees would stop an f2pool-style multi-input single-tx exablock attack, would it? I can't remember if that transaction had much in the way of fees because of its construction. I'll have to look.
Thin blocks are transmitted blocks where only the transaction hashes are sent. Since the transactions should already be in the mempools of other nodes, a receiving node can look up each transaction in its own mempool by hash and reconstruct the full block from that. Transactions that were not pre-announced, or which a node has not received yet (unlikely), need to be requested. It's really simple.
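That lookup-and-reconstruct step can be sketched in a few lines of Python. This is a hypothetical illustration, not code from any actual client; `request_tx` stands in for whatever fetch mechanism a node would use:

```python
def reconstruct_block(tx_hashes, mempool, request_tx):
    """Rebuild the full block, in announced order, from its txids."""
    txs = []
    for h in tx_hashes:
        tx = mempool.get(h)
        if tx is None:
            tx = request_tx(h)  # rare: tx not pre-announced / not yet received
        txs.append(tx)          # preserve the announced ordering
    return txs

# demo: two txs already in the mempool, one must be fetched
mempool = {"a1": "txA", "b2": "txB"}
block = reconstruct_block(["a1", "c3", "b2"], mempool, lambda h: "tx:" + h)
```

The key property is that only the missing transactions cost any extra round trips; everything already seen travels as a hash.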

You could also layer IBLT on top of that and send a compressed IBLT packet of hashes. The receiving node would first reconstruct the hash set from the IBLT packet and then the full block from the hash set. That would take the 14x reduction thin blocks provide and layer the full benefits of IBLT on top of it. I forget the IBLT compression ratio Gavin was seeing, but it's likely we'd see a full 50x or 100x reduction in transmission requirements at least (probably more). This means we could go to blocks of hundreds of MB and still only transmit single-digit-MB packets to communicate new blocks.
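For readers unfamiliar with IBLT, the data structure fits in a few dozen lines. The toy version below is not Gavin's implementation; the table size, hash count, and checksum scheme are arbitrary choices for illustration. It encodes a set of txids into XOR cells and "peels" them back out:

```python
import hashlib

K = 3            # hash functions (cells per key)
SUB = 50         # cells per subtable
M = K * SUB      # total cells

def _cell(i, key):
    # i-th cell for a key; each hash maps into its own subtable,
    # so a key always occupies K distinct cells
    d = hashlib.sha256(bytes([i]) + key).digest()
    return i * SUB + int.from_bytes(d[:4], "big") % SUB

def _chk(key):
    # per-key checksum, used to confirm a cell holds exactly one key
    return int.from_bytes(hashlib.sha256(b"chk" + key).digest()[:4], "big")

class IBLT:
    def __init__(self):
        self.cells = [[0, 0, 0] for _ in range(M)]  # [count, key_xor, chk_xor]

    def insert(self, key):
        k = int.from_bytes(key, "big")
        for i in range(K):
            c = self.cells[_cell(i, key)]
            c[0] += 1
            c[1] ^= k
            c[2] ^= _chk(key)

    def list_keys(self, key_len):
        # "peel": a cell with count 1 holds exactly one key; recover it,
        # subtract it from all its cells, repeat until nothing is pure
        out, progress = [], True
        while progress:
            progress = False
            for c in self.cells:
                if c[0] == 1 and c[2] == _chk(c[1].to_bytes(key_len, "big")):
                    key = c[1].to_bytes(key_len, "big")
                    out.append(key)
                    k = int.from_bytes(key, "big")
                    for i in range(K):
                        cc = self.cells[_cell(i, key)]
                        cc[0] -= 1
                        cc[1] ^= k
                        cc[2] ^= _chk(key)
                    progress = True
        return out

# demo: encode five fake 32-byte txids, then recover them all
keys = [hashlib.sha256(str(i).encode()).digest() for i in range(5)]
table = IBLT()
for key in keys:
    table.insert(key)
recovered = table.list_keys(32)
```

The point of the structure is that the table size scales with the *difference* between sender and receiver sets, not with the block, which is where the extra compression over plain thin blocks comes from.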

Of course, transmitting historical blocks still requires the full block to be sent, which means nodes catching up would need significantly more network bandwidth than nodes that stayed online. It probably means that over time catch-up nodes would need to subscribe to some service to sync properly.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
That's what the meta-cognition stuff was for that we were discussing several weeks ago. Nodes would still have their own block size limit, but they would accept an "excessive" block only once it was buried at a certain depth.
Am I missing something, or wouldn't nodes having their own limits set (with a fairly conservative default, like 8MB) avoid the exablock problem @cypherdoc is concerned about altogether?
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
You could also layer IBLT on top of that and send a compressed IBLT packet of hashes. The receiving node would first reconstruct the hash set from the IBLT packet and then the full block from the hash set. That would take the 14x reduction thin blocks provide and layer the full benefits of IBLT on top of it. I forget the IBLT compression ratio Gavin was seeing, but it's likely we'd see a full 50x or 100x reduction in transmission requirements at least (probably more). This means we could go to blocks of hundreds of MB and still only transmit single-digit-MB packets to communicate new blocks.
...which means that spending a lot of time agonizing about block size policy is a waste in the long term.

One way or another, the process of transmitting blocks through the network is going to get temporally smoothed. Instead of a block being a monolithic thing that appears out of nowhere every 10 minutes, miners are going to continually broadcast compressed information about the contents of their upcoming block so that when they find a valid hash all they need to broadcast is a constant-size header.

The decisions that nodes need to make regarding their bandwidth management will center on which transactions to relay and which miners' pre-announcements to relay. Actual block announcements will be a trivially small fraction of their bandwidth.

This is why I suggested working on market allocation of real time bandwidth flows - because, unlike work done on block size policy itself, that's a solution that won't be made obsolete by upcoming P2P network protocol upgrades.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
@rocks @theZerg @Peter R

seems to me that even with thin blocks and IBLT, exablocks would still be a problem in terms of blockchain storage for blocks that take anywhere from the current normal validation time of ~2s up to somewhere near (but below) 10 min, where the risk of orphaning goes parabolic. in other words, won't nodes still have to accept them within those validation time parameters? unless of course the thin block patch excludes transmitting blocks in the standard way we have now?
 

rocks

Active Member
Sep 24, 2015
586
2,284
@cypherdoc
There are a couple of factors working against the pre-mined exablock attacker: 1) they have to transmit the full block, which as Peter has shown carries significant costs, especially above 100MB; 2) receiving nodes have to validate the entire block from scratch, which adds time to the propagation path.

This exablock is competing against normal blocks that can be transmitted quickly and do not require as much validation (since nodes have already processed most of the transactions in their mempools). This means it is easier for a normal block to come a little after an exablock and still be picked up by the network.

On top of all of this, miners themselves have an economic incentive to keep other miners' blocks small. Other miners' blocks do not add revenue to a miner but do add costs in terms of storage space. This means miners are incentivized to ignore super-large exablocks and not build on them.

Even with no limit, the normal economic behavior we should see is miners skipping abnormal blocks that are > 10x larger than the recent norm, unless others build on them.
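That skip-the-outlier behavior could be sketched as a one-line miner policy. This is a hypothetical sketch using the 10x-the-recent-norm threshold from the post, with the median of recent sizes standing in for the "norm":

```python
from statistics import median

def should_build_on(candidate_size, recent_sizes, multiple=10):
    """Ignore blocks more than `multiple` times the recent norm
    (median of recent block sizes); otherwise build on them."""
    return candidate_size <= multiple * median(recent_sizes)

# demo against a recent norm of ~1 MB blocks
ok = should_build_on(8_000_000, [900_000, 1_000_000, 1_100_000])
huge = should_build_on(50_000_000, [900_000, 1_000_000, 1_100_000])
```

Because the threshold floats with recent history, it tolerates organic growth while still shunning an abrupt exablock.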

The only argument I've seen from Greg and Peter on this is "but no one has written that code yet". Well, that may be true for Core, but most pools are running modified versions already and this is an easy thing for them to do.

This is essentially the configurable block size limit we're discussing. I think it makes sense for miners to use an adjustable, configurable limit based on recent block sizes, but I personally prefer non-mining nodes to follow the longest mined chain and trust that miners are making the right choices about what to build on. This puts votes where they should be (with hashes) while still giving network behavior that limits the exablock risk.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
@rocks

This exablock is competing against normal blocks that can be transmitted quickly and do not require as much validation (since nodes have already processed most of the transactions in their mempools). This means it is easier for a normal block to come a little after an exablock and still be picked up by the network.
you're missing my point. there is only ever one block solved on the entire network every 10 min on average. it's not like for every exablock that gets released there is a small block also waiting in the wings to orphan it.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@rocks

Re: votes being with hashes

If non-mining nodes can only facilitate propagation, not slow it, then it makes sense that they have a "vote" of a sort. The vote they have is only whether they will actively help propagate a block, nothing more, so I don't see how this could result in a Sybil attack. But perhaps they can also slow propagation through some sort of fakery? In that case I would agree that non-mining nodes should probably not have a vote.
 

albin

Active Member
Nov 8, 2015
931
4,008
To a degree what is tripping up small block adherents is the "magic" (i.e., the "invisible hand") of the market, the way people react dynamically to incentives.
If anything, I think Bitcoin itself as a system is reasonably hardened at this point, such that the most threatening vectors for attack will be outside the system entirely: social attacks to create strife in online communities, marketing/PR attacks to undermine public perception of Bitcoin and permissionless cryptocurrency in general, and probably most importantly, state actors.

I don't mean state actors doing something ridiculous like buying a billion dollars worth of hash power and 51% attacking (that mere thought feels like a complete fetishization of the internal rules of the system akin to small-block thinking). I mean relatively free western governments publicly stating things like "Bitcoin is fine, no worries, it's even probably going to create jobs and improve the economy, just pay your taxes" while privately undermining the ability of startups to get bank accounts and choking business through regulation akin to beating startups to death with a stick while telling them that they're allowed to do Bitcoin.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
$DJI slowly rolling:


gold slowly moving towards 3 digits:

 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
@Peter R

i wonder if there is a way to mathematically model the chances (as a function of size) of an exablock being orphaned as its validation time gets closer to 10 min, whereby a normal smaller-sized next block coming along has an increasing chance of orphaning it? assuming of course that this exablock is being propagated on the p2p network and not the relay network (which is always a problem).
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
...
One way or another, the process of transmitting blocks through the network is going to get temporally smoothed. Instead of a block being a monolithic thing that appears out of nowhere every 10 minutes, miners are going to continually broadcast compressed information about the contents of their upcoming block so that when they find a valid hash all they need to broadcast is a constant-size header.
...
I completely agree that the block transmission process will be smoothed (using coding gain techniques such as weak blocks, thin blocks, IBLT, etc.); however, @awemany and I were working on a proof to show that it's actually impossible to achieve true O(1) block transmission if

(a) the network has a finite size, d,

(b) miners build blocks according to their own volition and act rationally to maximize their profit.

Instead, block transmission remains O(1 + kN) where k becomes very small but never zero.

We haven't really pinned down the proof yet, but I think it's pretty clearly true: imagine a miner at a distant corner of the network receives a new transaction with a big juicy fee. There exists some fee that will entice him to add that new TX to the block he is working on before he is confident that the rest of the network is aware of that TX (since it takes a finite amount of time for the TX to propagate across the network due to the speed of light constraint). If he solves the block quickly, then his block solution announcement will necessarily contain information about this TX in addition to the constant sized header.
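The O(1 + kN) claim above can be made concrete with a toy model (all parameter values here are illustrative round numbers, not measurements): the announcement is a constant header plus the small fraction k of transactions the rest of the network hasn't seen yet.

```python
def announcement_bytes(n_tx, k, header=80, avg_tx_bytes=500):
    """Toy model of O(1 + k*N) block relay: a constant-size header plus
    the fraction k of txs the network hasn't seen. Header and average
    tx sizes are illustrative assumptions."""
    return header + int(k * n_tx * avg_tx_bytes)

# even a tiny nonzero k keeps the cost growing with N
pure_header = announcement_bytes(10_000, 0.0)       # the O(1) ideal
with_stragglers = announcement_bytes(10_000, 0.001) # 10 unseen txs ride along
```

The point is that k can be driven very small by pre-announcement, but as long as some fee makes it rational to include a not-yet-propagated transaction, k never reaches zero.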
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
Explanation attempt #2:

Exablock concerns are a red herring, because they assume that block propagation will continue to exist.

Block propagation will eventually cease completely, since it's the worst possible solution to the problem.

The network will move away from block propagation toward block solution propagation.

Block solution propagation means that transactions must be broadcast before the block solution.

This means there's no such thing as "exablocks" any more - only abnormally high transaction volumes.

Handling sudden bursts of abnormally high transaction volumes is the only long term problem that needs to be solved, which is a general case of needing to intelligently handle transaction-relaying decisions.

The originator of a transaction needs to both own bitcoins and convince other nodes to relay.

The former puts an inherent limit on the number of fake transactions they can broadcast, especially if most nodes limit the dependency chains of unconfirmed transactions they will accept.

If nodes start treating transaction relaying as a service for which they should be compensated, then the cost of producing the transactions needed to produce the "exablock" goes up even further.
[doublepost=1447273259][/doublepost]This could end up with a market where users are paying nodes to relay their transactions, and miners are buying un-mined transactions from nodes to produce blocks with, and then paying nodes to relay their block solutions to the rest of the network.

Price discovery will occur at several different levels.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Peter R

i wonder if there is a way to mathematically model the chances (as a function of size) of an exablock being orphaned as its validation time gets closer to 10 min, whereby a normal smaller-sized next block coming along has an increasing chance of orphaning it? assuming of course that this exablock is being propagated on the p2p network and not the relay network (which is always a problem).
Yes, this is essentially what this chart shows:



Let's consider a 128 MB "spam block" full of TXs the rest of the network is unaware of. Using my estimate for the propagation impedance (7.5 sec / MB), it would take

~128 x 7.5 = 960 seconds

to propagate (assuming the network is actually willing to try). The chance of another miner finding a block in the meantime is

1 - e^(-t / T) = 1 - e^(-960 / 600) = 80%

So his block will most likely be orphaned; in fact it will be orphaned about 4 out of every 5 times. On average he will lose the 25 BTC block reward four times before succeeding. This is why my chart shows that the cost of this spam block is 100 BTC (25 BTC per attempt x 4 attempts = 100 BTC).
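The arithmetic above checks out numerically. A small script, using the same 7.5 s/MB impedance and 600 s mean block interval from the post (the retry-until-success cost model is the post's assumption, not a general fact):

```python
import math

def orphan_probability(block_mb, impedance=7.5, interval=600.0):
    """P(another miner finds a block while ours is still propagating),
    with propagation time = impedance (s/MB) * size and Poisson
    block arrivals of mean `interval` seconds."""
    t = block_mb * impedance
    return 1.0 - math.exp(-t / interval)

def expected_attack_cost(block_mb, reward=25.0):
    """Expected BTC burned in orphaned attempts before one spam block
    sticks: a geometric number of failures, p/(1-p), each costing
    the block reward."""
    p = orphan_probability(block_mb)
    return reward * p / (1.0 - p)

p_128 = orphan_probability(128)        # ~0.80 for a 128 MB spam block
cost_128 = expected_attack_cost(128)   # ~100 BTC, matching the chart
```

Note the cost curve is strongly nonlinear in size: doubling the block pushes the exponent up and the expected number of wasted attempts up with it.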
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
i wonder if there is a way to mathematically model the chances (and the size) of an exablock being orphaned the closer it gets to 10m to validate whereby the chances of a normal smaller sized next block coming along increasingly has a chance to orphan it?
I think the cost landscape for exablocks gets very weird at some point. Imagine a truck full of decent HDDs. The chance of that propagating is 0 in practical terms, and you get close to scenarios like 'Mr. Evil built this exablock in his basement and invalidated the whole chain. All transactions in the last couple of years are thus invalid.'

@Zangelbert Bingledack: Very well said @ narrow thinking & market forces!

@Peter R.:

Remember that we found that in principle, one could actually imagine a constant-data propagation scheme: Here's the merkle root hash for the next block, figure out yourself what went in there, solve the transaction puzzle.

Of course, that is not at all a workable way to transmit transactions, due to the exponential CPU time it would take to figure out the correct transaction set. I believe, though I have no proof at all, that there is something like a 'transmission-cpu-time x transmission-bandwidth > h/2' lower physical limit too.

That said, I don't even know whether it is in the end important to show that block synchronization is not in O(1). Miners are timestampers, and they can only timestamp what's floating around in the network anyways - everything else quickly gets expensive.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
world's largest mine continuing to drop:

 

rocks

Active Member
Sep 24, 2015
586
2,284
Explanation attempt #2:

Exablock concerns are a red herring, because they assume that block propagation will continue to exist.

Block propagation will eventually cease completely, since it's the worst possible solution to the problem.

The network will move away from block propagation toward block solution propagation.

Block solution propagation means that transactions must be broadcast before the block solution.

This means there's no such thing as "exablocks" any more - only abnormally high transaction volumes.

Handling sudden bursts of abnormally high transaction volumes is the only long term problem that needs to be solved, which is a general case of needing to intelligently handle transaction-relaying decisions.

The originator of a transaction needs to both own bitcoins and convince other nodes to relay.

The former puts an inherent limit on the number of fake transactions they can broadcast, especially if most nodes limit the dependency chains of unconfirmed transactions they will accept.

If nodes start treating transaction relaying as a service for which they should be compensated, then the cost of producing the transactions needed to produce the "exablock" goes up even further.
[doublepost=1447273259][/doublepost]This could end up with a market where users are paying nodes to relay their transactions, and miners are buying un-mined transactions from nodes to produce blocks with, and then paying nodes to relay their block solutions to the rest of the network.

Price discovery will occur at several different levels.
This is a great explanation / walkthrough of the concern and why it's not a long term issue.

One way or another we are going to move towards mechanisms that simply confirm pre-announced and known transactions, as this eliminates redundant traffic and greatly speeds block transmission. This turns the pre-mined exablock attack into a standard spam attack.

If the market needs some sort of "limit" to make people comfortable, my preference is for automatically adjusting limits that grow with normal transaction volume. This effectively removes the limit but also prevents the abnormal pre-mined attack.

A simple version of this: the block limit re-adjusts every 2000 blocks to 4x the average of the prior 2000 blocks, which makes the limit self-adjusting just as difficulty is self-adjusting. Hard limits concern me because you run into them again later, and by then it might be even harder to change the rules.
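That re-adjustment rule is a one-liner. A sketch (the 4x multiple and 2000-block window are from the post; the floor parameter is my own hypothetical safeguard so the limit can't collapse toward zero during an empty stretch):

```python
def next_block_limit(prior_sizes, multiple=4, floor=1_000_000):
    """Self-adjusting block size limit: `multiple` times the average
    size over the prior adjustment window (2000 blocks in the post),
    never below `floor` bytes (a hypothetical safeguard)."""
    avg = sum(prior_sizes) / len(prior_sizes)
    return max(int(multiple * avg), floor)

# demo: a window averaging 500 KB blocks yields a 2 MB limit
limit = next_block_limit([500_000] * 2000)
```

Like the difficulty adjustment it mirrors, the rule needs no committee to pick a number; the limit tracks observed demand with a fixed headroom factor.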
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
all,

fwiw, still short on time to contribute in a meaningful way. just want to thank everybody for the tons of wisdom distributed in the last few pages. really, really impressed.

that said, I just want to share with you the video of Daniel Krawisz's presentation at the Las Vegas Bitcoin Investor Conference:


Thanks, that was/is a great talk. Daniel Krawisz hits it out of the park.

Bitcoin is useful because people invest in it, not because of the technology. The technology is infinitely reproducible. (and again @ 14:19-14:45 nails it for me)


It is the lack of respect for this fact that is my red flag marker. While many Core developers give lip service to this, they chose to work on Bitcoin precisely because it is useful, not because of its technology.

If it’s technology that matters, why was Bitcoin worthless for its first year?

The technological features of Bitcoin are infinitely reproducible, and people have tried this a lot! If you remember back in 2013-14, we saw the altcoin mania for a while, and people thought you could just create an infinite number of currencies and make money from that.
This talk explains why Bitcoin is a Peer-to-Peer Electronic Cash System and why we need a Cash system in our economy. Krawisz has outlined exactly why I’m investing in Bitcoin.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@awemany "This disconnect - the seemingly rational first-order attacks that eventually all crash and break at the rock-hard shore of the Bitcoin incentive system - does indeed weird me out a bit about Bitcoin's design."

I think it all has to do with incentives and feedback loops. If it were just a static system, then yes, technical vulnerabilities could be exploited to attack the system. But if the system has self-corrective feedback then that could explain how it could respond to these attacks and overcome them.

Determining whether or not Bitcoin falls into this category is of utmost importance when trying to understand it.

Looking at Bitcoin as a complex adaptive system helps to understand what's going on. Things like free market economics and biological evolution are other examples of complex adaptive systems. Basically, these systems are very difficult to model in detail since they are complicated, but we can look at high-level "emergent behavior" to determine how the dynamics of the system will play out. Sometimes complex systems evolve into very precarious, fragile states called self-organized criticality. When this happens, they can suddenly collapse, or change characteristics abruptly.

In other cases, complex systems become very stable and resilient. The concept of anti-fragility falls into this category. In these cases, the systems in question tend to have certain characteristics. They respond to small disturbances by adapting and becoming stronger. They develop their structure into many loosely coupled modules or sub-components. These modules become complex internally, and increasingly specialized in their external interfaces. The system sub-components develop redundant functions, while at same time having random variations in how they work. They evolve over time, and become increasingly complex.

So looking at Bitcoin and seeing how it develops over time, seeing things like multiple node implementations develop is a good sign. Also, seeing the whole ecosystem respond to adversity by adapting and developing redundancy is positive. Things like the block-relay network are also good, and in future I would hope to see many more and better parallel communication channels adding redundancy.

And we are also part of this system! So nudging things in the direction of anti-fragility, diversity, redundancy, and free markets will only help increase the chances that Bitcoin succeeds.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
I completely agree that the block transmission process will be smoothed (using coding gain techniques such as weak blocks, thin blocks, IBLT, etc.); however, @awemany and I were working on a proof to show that it's actually impossible to achieve true O(1) block transmission if

(a) the network has a finite size, d,

(b) miners build blocks according to their own volition and act rationally to maximize their profit.

Instead, block transmission remains O(1 + kN) where k becomes very small but never zero.

We haven't really pinned down the proof yet, but I think it's pretty clearly true: imagine a miner at a distant corner of the network receives a new transaction with a big juicy fee. There exists some fee that will entice him to add that new TX to the block he is working on before he is confident that the rest of the network is aware of that TX (since it takes a finite amount of time for the TX to propagate across the network due to the speed of light constraint). If he solves the block quickly, then his block solution announcement will necessarily contain information about this TX in addition to the constant sized header.
So a healthy market will develop for transaction fees for priority service. There will still be space for very low fees, and even free transactions, probably more so while the block subsidy is still active (even if just to clear the mempool and guarantee you're including propagated transactions).

It still makes lots of sense to maintain the 0-confirmation network, as transactions sitting in a mempool are more or less universally agreed upon before being added to a block.

Could one possible attack on medium to large miners be that an attacker broadcasts a big transaction with a big fee to that miner only, in the hope that the miner includes it in a block that is discovered quickly, which then propagates slowly, increasing its orphan risk?