Gold collapsing. Bitcoin UP.

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I thought one version said the block was invalid (because it was too big) while the other version said the block was valid? What "consensus rule" did the two versions disagree on for that block then?
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
@theZerg :

OK, I added a new section and adjusted the other sections to reflect this change:

What makes a valid block?

From the Bitcoin white paper, "nodes accept the block only if all transactions in it are valid and not already spent." A block cannot be invalid because of its size. Instead, excessively large blocks that would pose technical challenges to a node are dealt with in the transport layer, increasing the block's orphaning risk. Bitcoin Unlimited nodes can accept a chain with an excessive block, when needed, in order to track consensus.

Link to complete post: https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-68#post-2503
Yes, I like your rewording.
 

yrral86

Active Member
Sep 4, 2015
148
271
Sorry, I got it backwards. We actually went back to the chain compatible with the older codebase to allow more time for people to upgrade.

My point was, however, that manual intervention occurred in order to choose the shorter chain and preserve consensus.

https://bitcointalk.org/index.php?topic=152030.msg1613200#msg1613200

There was no explicit "consensus rule" that the two versions disagreed on. There was an unknown limitation in the pre-0.8 software that was triggered by the larger blocks. This was not an explicit limitation, but a consequence of the default configuration of an underlying library. Such things happen in software engineering, and they can never be fully accounted for ahead of time.

The intentions of the developers are one thing. The actual behavior should match those intentions, but mistakes will be made and the actual behavior will sometimes differ. The 2013 fork happened not because an explicit limit was triggered, but because a bug prevented the validation of a block well below the explicit limit. Even if there had been no explicit limit, the behavior would have been the same.
 
  • Like
Reactions: ladoga and AdrianX

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I'm still confused and @theZerg seems to disagree with you. There must have been something about that block that 0.7 said NO to while 0.8 said YES to. What was it?
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
I'm still confused and @theZerg seems to disagree with you. There must have been something about that block that 0.7 said NO to while 0.8 said YES to. What was it?
It wasn't nearly that deterministic.

If any version of the reference client ran out of BDB locks for any reason while trying to validate a block, it would return a false result for block validity.

Whether or not any particular node would run out of locks on a particular block was highly dependent on the exact database state of that node - how long it had been online, how many orphaned chains it had observed (which were stored in the database), etc.

It was entirely possible for two pre-0.8 nodes running identical versions of the reference client on identical OS images and on identical hardware to have divergent behavior in terms of validating a block if they had a different set of network peers and/or uptime.

You could say that the conditions under which a 0.7 node would reject a block were deterministic, but on the other hand that determinism was based on state unique to every node, so block validity criteria were, for all practical purposes, non-deterministic prior to 0.8.
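
To illustrate the failure mode (all names and numbers below are made up for illustration, this is not the actual pre-0.8 source): because a lock-exhaustion error was folded into the "invalid" result, two nodes running identical code could disagree on the same block purely because of their local database layout.
Code:
#include <iostream>

// Hypothetical stand-in for per-node database state; not actual Bitcoin code.
struct NodeState {
    int locksAvailable; // from the BDB defaults / DB_CONFIG
    int locksNeeded;    // depends on how this node's blkindex.dat pages are laid out
};

// Pre-0.8-style error handling: a lock exhaustion failure is reported as "block invalid".
bool ConnectBlock_pre08(const NodeState& node) {
    if (node.locksNeeded > node.locksAvailable)
        return false; // local DB error silently becomes a consensus verdict
    return true;      // (transactions assumed valid for this sketch)
}

int main() {
    NodeState a{10000, 8000};  // node A: fewer locks needed, accepts the block
    NodeState b{10000, 12000}; // node B: same software, different DB layout, rejects it
    std::cout << "node A accepts: " << ConnectBlock_pre08(a) << "\n"
              << "node B accepts: " << ConnectBlock_pre08(b) << "\n";
}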
 
  • Like
Reactions: ladoga and AdrianX

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
In a diverse, multi-implementation future, any bugs should only hit a smaller subset of miners, so the longest chain can win out without intervention. But if a bug hits the majority and limits the block size accidentally, sticking with the longest chain will effectively force the block size back into the consensus layer until a fix can be widely deployed.
This is interesting because it suggests a connection between having a centralized "reference" implementation and ending up with the blocksize in the consensus layer.
@solex

For completeness of the analysis, we should probably note that there isn't always a clean line or principled distinction between "Care" and "Don't care" transactions. As blockspace becomes tighter, in fact, I'd expect the various gradations along the spectrum between C and D to come to the fore. Hence the idea of a fee market. However, this doesn't take away much from the point that an artificially created fee market is missing the forest for the trees, or in Nick Szabo's words, creating and playing a small game while oblivious to the larger game.
 
  • Like
Reactions: AdrianX and Peter R

sickpig

Active Member
Aug 28, 2015
926
2,541
I just thought of something: wasn't the accidental hard fork in early 2013 (the BDB lock bug) a direct result of the block size limit being part of the consensus layer? If Bitcoin Unlimited was running with @theZerg's/@awemany's idea to accept excessive blocks once they're buried at a certain depth, then wouldn't that incident have been automatically resolved without any intervention?
Just to add a few factual points:

gavin andresen https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki#root-cause said:
Bitcoin versions prior to 0.8 configure an insufficient number of Berkeley DB locks to process large but technically valid blocks. Berkeley DB locks have to be manually configured by API users depending on anticipated load. The manual says this:

The recommended algorithm for selecting the maximum number of locks, lockers, and lock objects is to run the application under stressful conditions and then review the lock system's statistics to determine the maximum number of locks, lockers, and lock objects that were used. Then, double these values for safety.
Because max-sized blocks had been successfully processed on the testnet, it did not occur to anyone that there could be blocks that were smaller but require more locks than were available. Prior to 0.7 unmodified mining nodes self-imposed a maximum block size of 500,000 bytes, which further prevented this case from being triggered. 0.7 made the target size configurable and miners had been encouraged to increase this target in the week prior to the incident.
Bitcoin 0.8 does not use Berkeley DB. It uses LevelDB instead, which does not require this kind of pre-configuration. Therefore it was able to process the forking block successfully.

Note that BDB locks are also required during processing of re-organizations. Versions prior to 0.8 may be unable to process some valid re-orgs.

This would be an issue even if the entire network was running version 0.7.2. It is theoretically possible for one 0.7.2 node to create a block that others are unable to validate, or for 0.7.2 nodes to create block re-orgs that peers cannot validate, because the contents of each node's blkindex.dat database is not identical, and the number of locks required depends on the exact arrangement of the blkindex.dat on disk (locks are acquired per-page).
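
To make the "manually configured by API users" part concrete, this is roughly what raising those limits looks like through the Berkeley DB C++ API (just a sketch on my part, not the actual db.cpp code from the reference client; the directory and values are illustrative):
Code:
// Sketch only: raising the BDB lock-table limits before opening the environment.
// Build with: g++ bdb_env.cpp -ldb_cxx   (assumes libdb_cxx is installed)
#include <db_cxx.h>
#include <iostream>

int main() {
    try {
        DbEnv env(0);
        // The defaults are small; the DB_CONFIG workaround quoted below raises
        // the same lock limit to 40000 without touching any code.
        env.set_lk_max_locks(40000);
        env.set_lk_max_lockers(40000);
        env.set_lk_max_objects(40000);
        env.open("./bdb-env",  // illustrative environment directory
                 DB_CREATE | DB_INIT_MPOOL | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_TXN,
                 0);
        env.close(0);
    } catch (DbException& e) {
        std::cerr << "BDB error: " << e.what() << std::endl;
        return 1;
    }
    return 0;
}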
Another critique that I've always found insightful is this one from @2112 on bitcointalk:

2112 https://bitcointalk.org/index.php?topic=152208.0 said:
Just create the file named "DB_CONFIG" in the ".bitcoin" or "AppData/Roaming/Bitcoin" directory that contains the following:
Code:
set_lg_dir database
set_lk_max_locks 40000
The default was 10000. You can monitor the lock usage using "db_stat -e". I've tested it on an old laptop with 0.3.24 and 0.5.0. Obviously it will work with alt-coins based on the Bitcoin & BDB code. The same file should go into the respective "testnet" subdirectories.

Please don't omit the "set_lg_dir database" line. Without it you may corrupt the "_db.00?" and/or "log.*" files if you are unlucky and run the Berkeley DB utilities like "db_stat". With this line you will at worst get the error message from those utilities.

Good luck.

Edit: Oh, I forgot about the most obvious or non-obvious: restart the bitcoind/bitcoin-qt.
 
  • Like
Reactions: AdrianX

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
OK thanks @sickpig, @Justus Ranvier, @yrral86 and @theZerg; I think I'm starting to understand.

Would it be correct to say that V0.7 didn't strictly say that the block was invalid, but rather it triggered some branch of the code (that was expected never to trigger) that defaulted to labelling the block as invalid simply because it couldn't determine the correct answer (it had run out of BDB locks)? In other words, the program wanted to say "something went wrong and I don't know if this block is valid or invalid"?
 
  • Like
Reactions: AdrianX

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Sorry, I started writing a post, tried to delete it when I read deeper, but it got sent anyway. While it would be wonderful to imagine that BU deployed in both versions would have avoided the 2013 issue, really the only reason it would have is that we would have tested very large blocks from the get-go and therefore found the bug. The issue, AFAICT without analyzing the code and patch directly, is that the code was written "poorly" -- it basically said "if anything goes wrong (with this portion of the code) then reject the block". This allowed a program error to affect consensus.

I think "we" have evolved to a philosophy more like "Determine block validity following these clearly defined rules. if anything goes wrong with handling a valid block then abort the program." That is, the "consensus" rules are more important than any particular program instance. Ofc this is also dangerous. In a single-source environment a block that triggers a bug could in theory at least cause most of the bitcoin nodes to go down. But presumably the bad block would not be relayed, so an attacker would probably have to deliberately craft both a bad block and an aggressive propagation node to actually accomplish this.
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
Would it be correct to say that V0.7 didn't strictly say that the block was invalid, but rather it triggered some branch of the code (that was expected never to trigger) that defaulted to labelling the block as invalid simply because it knew it didn't know the correct answer (b/c it ran out of BDB locks)? In other words, the program wanted to say "something went wrong and I don't know if this block is valid or invalid"?
If software had metacognition that might be a valid approach.

You can teach the code to recognize that particular failure mode and respond appropriately (although what the appropriate response to "I can't determine if this block is valid or not" should be isn't clear), but there's always a non-zero possibility of some new failure mode which will not be covered by any existing error handling path.
 
  • Like
Reactions: AdrianX

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
"If software had metacognition that might be a valid approach."

I think that with multiple protocol implementations, it sort of can have metacognition :)

Bitcoin Unlimited's view of consensus is that a block is valid if all transactions in that block are valid and not already spent (and the PoW meets the difficulty target). The output of any software test on a block must either be:

1. Valid
2. Invalid
3. I don't know

The BDB fork was a result of one version saying "Valid" and the other one saying (or should have been saying) "I don't know." Although V0.7 was correct not to relay that block, it should not have ruled the block invalid. Instead, when it realized that consensus was following the chain with the questionable block, it should have tracked consensus in the same fashion as @theZerg is proposing for dealing with excessive blocks.

I should point out that @Mengerian has been advocating this idea of "robustness to tracking consensus" and might have comments to make here.

Anyways, I'm really starting to think that the "multiple implementations won't work because you can't guarantee bug-for-bug compatibility" is just more FUD from Core Dev.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@rocks
Agree nodes contribute to the security model, but their contribution largely is validation to keep miners honest and to ignore invalid chains and thus incentivize miners to build honest chains.

We are introducing a new question here of what counts as validation. If a node accepts a chain of blocks for its own internal purposes, but refuses to pass the chain on to others, is the node still validating the chain for the community or is it rejecting the chain? It is behaving almost as if it invalidated the chain by refusing to propagate it. I'm not sure what this means yet.



They don't get forked off. They continuously download the chain for their own use and pass transactions on, but they no longer validate the chain for others since they don't communicate it. From a block propagation view, they function as leaves of the network, not as nodes.

What I am worried about is if 5% of the nodes (mostly miners) continue to upload the blockchain to others, but 95% of the nodes simply download the full chain for their own use. This would magnify the upload requirements for real full nodes by 20x since they have to upload a block 20 times each to pass it on to the 95% who don't. Bittorrent networks break down when this happens.

A full node is one that validates a chain both for itself and for the community at large. This includes validating blocks, passing transactions on and passing the longest valid chain on. A node which does not do all of these things is not a full node. In terms of network participation it is functioning as something less than a full node, but still draws the same resources from the network (while not providing them back). There are adverse effects to this. We can say "but most people will leave the unlimited default" and that may be true today, but it won't be when blocks are 100MB.

We are in complete agreement on BU and the motivations for it. However, I think introducing new user parameters that limit the node's usefulness to the network is a mistake. That is my only concern here.
i think you're right that we are introducing a new question here: should we allow users/full nodes to set their own definition of a valid blocksize? i agree that the significance of this is not fully understood.

but what do you mean that a full node would continue to DL a big block larger than it accepts "for its own use"? why would they do this? won't it be able to tell that it's too large from the header and thus refuse to DL the rest of the block? in that sense it wouldn't be draining network bandwidth.

furthermore, why would they allow themselves to be leaves of the network when what they should want is to be active participants, ie, fully functioning nodes? they don't help themselves by not being able to transact on the main, longer chain that accepts bigger blocks. they should simply be convinced to up their settings to accept bigger blocks (note that i am not yet convinced of the adding blocks to a full node's chain by the "excessive block" method currently being proposed).
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
Bitcoin Unlimited's view of consensus is that a block is valid if all transactions in that block are valid and not already spent (and the PoW meets the difficulty target). The output of any software test on a block must either be:

1. Valid
2. Invalid
3. I don't know
My suggestion would be to have a separate code path for handling the "invalid" state that produces a fraud proof.

Then you end up with a flow that looks like:

  1. Try to prove the block is valid.
  2. If the previous step fails, try to prove the block is invalid.
  3. If the previous step fails, it's not possible to determine the best chain.
Options for handling case 3 include: shutting down the application or waiting (potentially forever) until it's possible to determine the best chain again.
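
A rough sketch of that flow (all types and helper functions here are hypothetical, just to show the structure):
Code:
// Compile with -std=c++17
#include <iostream>
#include <optional>
#include <string>

enum class Verdict { Valid, Invalid, Unknown };

struct FraudProof { std::string reason; };

// Stand-ins: real implementations would run script checks, UTXO lookups, etc.
bool TryProveValid(/* const Block& */)                { return false; }
std::optional<FraudProof> TryProveInvalid(/* ... */)  { return std::nullopt; }

Verdict ClassifyBlock() {
    if (TryProveValid())
        return Verdict::Valid;                      // step 1
    if (auto proof = TryProveInvalid()) {           // step 2
        std::cout << "fraud proof: " << proof->reason << "\n";
        return Verdict::Invalid;
    }
    return Verdict::Unknown;                        // step 3
}

int main() {
    switch (ClassifyBlock()) {
    case Verdict::Valid:   std::cout << "extend this chain\n"; break;
    case Verdict::Invalid: std::cout << "reject and relay the fraud proof\n"; break;
    case Verdict::Unknown:
        // operator policy: shut down, or wait until the best chain is decidable
        std::cout << "cannot determine best chain; halting\n"; break;
    }
}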
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
Bitcoin Unlimited: A Peer-to-Peer Electronic Cash System for Planet Earth

A scalable Bitcoin

The vision for Bitcoin Unlimited is a system that could scale up to a worldwide payment network and a decentralized monetary system. Transactions are grouped into blocks and recorded on an unforgeable global ledger known as the Bitcoin blockchain. The blockchain is accessible to anyone in the world, secured by cryptography, and maintained by the most powerful single-purpose computing network ever created.

Governed by the code we run

The guiding principle for Bitcoin Unlimited is that the evolution of the network should be decided by the code people freely choose to run. Consensus is then an emergent property, objectively represented by the longest proof-of-work chain.

What makes a valid block?

From the Bitcoin white paper, "nodes accept the block only if all transactions in it are valid and not already spent." A block cannot be invalid because of its size. Instead, excessively large blocks that would pose technical challenges to a node are dealt with in the transport layer, increasing the block's orphaning risk. Bitcoin Unlimited nodes can accept a chain with an excessive block, when needed, in order to track consensus.

Values and beliefs: adoption is paramount

- Bitcoin should freely scale with demand through a market-based process

- The user’s experience is important

- Low fees are desirable

- Instant (0-conf) transactions are useful

- Resistance to censorship and security against double spending improves with adoption

Technical: put the user in control

- Software fork of Bitcoin Core

- Bitcoin Unlimited can simultaneously flag support for multiple block size limit proposals (BIP100, BIP101, etc.)

- The block size limit is considered to be part of the transport layer rather than part of the consensus layer. The user can adjust his node's block size limit based on the technical limitations of his hardware, while still ensuring that his node follows the longest proof-of-work chain.

Politics: Bitcoin is interdisciplinary

The voices of scientists, developers, entrepreneurs, investors and users should all be heard and respected.

****************************************************

Critiques? I'm trying to come up with a simple "1 pager" that communicates the most important points.
@Peter R nice work Peter.

Just read this again and this stood out.

"Low fees are desirable"

While I agree with the statement, it is still subjective. It should say: free-market fees, free of central control or manipulation, or something along those lines. (Fee-market-optimized fees.)

We have a potential 7 billion people who would like to exchange value with no fees, and miners who want to charge as much as they can in fees.

We also have a very critical minority in the Core developers who are saying they want a free market fee system with higher fees as a result of manipulating the Bitcoin code (Block Size).

I understand Bitcoin is optimized to converge on the minimal fee that provides the necessary security to protect the value exchanged on the network, should transaction volume not be artificially limited. I also think there may be space for free transactions with old coins if miners and nodes want to encourage macroeconomic manipulation, much like the Fed tries to maintain money velocity with interest rates.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
The output of any software test on a block must either be:

1. Valid
2. Invalid
3. I don't know
This is called "tri-state" in CS lingo, BTW.

Anyways, I'm really starting to think that the "multiple implementations won't work because you can't guarantee bug-for-bug compatibility" is just more FUD from Core Dev.
Yes, it's FUD. What will happen is the bad implementations will fork or crash out of the network and either be fixed or abandoned. The result will be a much stronger and more robust network, with clients written with the sort of meta-cognition you are talking about -- they are essentially aware that they may not be right. This type of programming is pretty common in scalable or high-availability applications today...

However, there is a nasty "transitional" period where the unexpected failure of a particular implementation has large ramifications in the network because it is still a large percentage of the network.

@AdrianX I like the explicit idea that low fees are important. But you are correct, we don't want some kind of CB subsidizing fees in the future. How about something like "Free market fee dynamics should settle on low fees"?
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@Peter R Regarding "robustness to tracking consensus": The way I look at it is to consider the individual incentives of participants and think about how that will play out globally. The primary function of running a full node is to track the network consensus. Think of any business transacting in Bitcoin, or even individuals. They will want to know, with a high degree of confidence, which transactions are considered valid or invalid by the rest of the network. So yes, I think it makes sense for the software to try to detect cases where there is some uncertainty in the network consensus.

In addition to your 3 cases (valid/invalid/don't know), another important situation is the case where two parallel proof-of-work chains are being built by the miners, i.e., a blockchain fork. This is definitely a place where the software should employ "meta-cognition", at a minimum alerting the user (I like the term meta-cognition, btw :) ). It should do this even if the two chains are different lengths or the software considers one of them invalid.

The fact that these individual incentives line up to make Bitcoin a robust decentralized consensus network is cool, and part of why Bitcoin is so interesting. But we should keep in mind that the primary "purpose" of a full node is to serve the individual interests of the node operator.
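
As a sketch of what I mean (the data model here is made up; real chain-tip tracking is more involved):
Code:
// Compile with -std=c++17
#include <iostream>
#include <map>
#include <string>

// Simplified stand-in: for each branch point, how many blocks miners have
// built on each side since the split.
struct Branch { int blocksOnSideA; int blocksOnSideB; };

// Alert the operator when both sides of a fork keep growing past a depth
// threshold, regardless of which side this node considers valid or longer.
void CheckForPersistentFork(const std::map<std::string, Branch>& branches,
                            int alertDepth = 3) {
    for (const auto& [forkPoint, b] : branches) {
        if (b.blocksOnSideA >= alertDepth && b.blocksOnSideB >= alertDepth)
            std::cerr << "WARNING: persistent fork at " << forkPoint
                      << " (" << b.blocksOnSideA << " vs " << b.blocksOnSideB
                      << " blocks); network consensus is uncertain\n";
    }
}

int main() {
    std::map<std::string, Branch> branches;
    branches["0000000000000000a1b2"] = {4, 3};  // fictional fork point
    CheckForPersistentFork(branches);            // would print a warning
}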
 
  • Like
Reactions: AdrianX and Peter R

yrral86

Active Member
Sep 4, 2015
148
271
In a multi-implementation network, the "I don't know" case can ask for a second and third opinion (or more) from other implementations. If those agree, you can probably trust the result.
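
Something like this (the interface is hypothetical; in practice you would query the other implementations over RPC or similar):
Code:
// Compile with -std=c++17
#include <iostream>
#include <optional>
#include <vector>

enum class Verdict { Valid, Invalid, Unknown };

// Each entry is the verdict reported by a different implementation.
// Trust the answer only if all of them agree and none is "Unknown".
std::optional<Verdict> SecondOpinion(const std::vector<Verdict>& others) {
    if (others.empty()) return std::nullopt;
    Verdict first = others.front();
    for (Verdict v : others)
        if (v != first || v == Verdict::Unknown) return std::nullopt;
    return first;
}

int main() {
    auto v = SecondOpinion({Verdict::Valid, Verdict::Valid, Verdict::Valid});
    std::cout << (v && *v == Verdict::Valid ? "trust: valid" : "still unknown") << "\n";
}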
 
  • Like
Reactions: Peter R

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
It seems we're in agreement that the output of a validator's check on a given block can either be:

1. Valid
2. Invalid
3. Don't know

However, it seems there's disagreement about the correct course of action in the case of "don't know." I don't think we need agreement here, though. Like @Mengerian said, the "primary 'purpose' of a full node is to serve the individual interests of the node operator." If it is true that
Governed by the code we run

The guiding principle for Bitcoin Unlimited is that the evolution of the network should be decided by the code people freely choose to run. Consensus is then an emergent property, objectively represented by the longest proof-of-work chain.
then dealing with the "don't know" case should really be left up to the node operator. I personally would want my node to fork back to the longest PoW chain (and hobble along in some error mode if necessary) even if it meant being uncertain about the validity of a given block.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Gavin gave an interesting talk at DevCore regarding validation costs [pdf, reddit].

Here is a slide particularly relevant to our discussion regarding what is and isn't strictly part of the consensus layer:

[slide image not reproduced]

Like the block size limit, the number of bytes to hash and the number of sigops should not strictly be viewed as part of the consensus layer, but rather as part of the transport/validation layer. That is, these rules serve a different purpose than the consensus rules against double spending or creating coins out of thin air.
 