Yes, I like your rewording, @theZerg.
OK, I added a new section and adjusted the other sections to reflect this change:
What makes a valid block?
From the Bitcoin white paper, "nodes accept the block only if all transactions in it are valid and not already spent." A block cannot be invalid because of its size. Instead, excessively large blocks that would pose technical challenges to a node are dealt with in the transport layer, increasing the block's orphaning risk. Bitcoin Unlimited nodes can accept a chain with an excessive block, when needed, in order to track consensus.
Link to complete post: https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-68#post-2503
"It wasn't nearly that deterministic."

I'm still confused, and @theZerg seems to disagree with you. There must have been something about that block that 0.7 said NO to while 0.8 said YES to. What was it?
This is interesting because it suggests a connection between having a centralized "reference" implementation and ending up with the block size in the consensus layer. In a diverse, multi-implementation future, any bugs should only hit a smaller subset of miners, so the longest chain can win out without intervention. But if a bug hits the majority and accidentally limits the block size, sticking with the longest chain will effectively force the block size back into the consensus layer until a fix can be widely deployed.
Just to add a few actual points.

I just thought of something: wasn't the accidental hard fork in early 2013 (the LevelDB bug) a direct result of the block size limit being part of the consensus layer? If Bitcoin Unlimited were running with @theZerg's/@awemany's idea to accept excessive blocks once they're buried at a certain depth, then wouldn't that incident have been automatically resolved without any intervention?
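A minimal sketch of that acceptance-depth idea (the names and the depth of four blocks are my own assumptions, not actual BU code):

Code:
// Sketch: treat an oversized block as "excessive" rather than invalid, and
// accept it once enough blocks are built on top (i.e., once the network has
// clearly decided). Hypothetical names/values, not actual BU code.
#include <cstdint>

static const uint64_t nExcessiveBlockSize = 1000000; // this node's own limit (bytes)
static const int nAcceptDepth = 4;                   // burial depth before we give in

bool ConsiderForActiveChain(uint64_t nBlockSize, int nBlocksBuiltOnTop)
{
    if (nBlockSize <= nExcessiveBlockSize)
        return true;                              // within our transport-layer limit
    return nBlocksBuiltOnTop >= nAcceptDepth;     // excessive, but buried: track consensus
}

Under a rule like that, nodes holding out against the bigger chain in March 2013 would have rejoined it automatically once it pulled a few blocks ahead (assuming they could physically process the block, which the BDB bug prevented).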
Another critique that I've always found insightful is this one from @2112 at btctalk.

Gavin Andresen (https://github.com/bitcoin/bips/blob/master/bip-0050.mediawiki#root-cause) said: Bitcoin versions prior to 0.8 configure an insufficient number of Berkeley DB locks to process large but technically valid blocks. Berkeley DB locks have to be manually configured by API users depending on anticipated load. The manual says this:
The recommended algorithm for selecting the maximum number of locks, lockers, and lock objects is to run the application under stressful conditions and then review the lock system's statistics to determine the maximum number of locks, lockers, and lock objects that were used. Then, double these values for safety.
Because max-sized blocks had been successfully processed on the testnet, it did not occur to anyone that there could be blocks that were smaller but require more locks than were available. Prior to 0.7 unmodified mining nodes self-imposed a maximum block size of 500,000 bytes, which further prevented this case from being triggered. 0.7 made the target size configurable and miners had been encouraged to increase this target in the week prior to the incident.
Bitcoin 0.8 does not use Berkeley DB. It uses LevelDB instead, which does not require this kind of pre-configuration. Therefore it was able to process the forking block successfully.
Note that BDB locks are also required during processing of re-organizations. Versions prior to 0.8 may be unable to process some valid re-orgs.
This would be an issue even if the entire network was running version 0.7.2. It is theoretically possible for one 0.7.2 node to create a block that others are unable to validate, or for 0.7.2 nodes to create block re-orgs that peers cannot validate, because the contents of each node's blkindex.dat database is not identical, and the number of locks required depends on the exact arrangement of the blkindex.dat on disk (locks are acquired per-page).
2112 (https://bitcointalk.org/index.php?topic=152208.0) said: Just create the file named "DB_CONFIG" in the ".bitcoin" or "AppData/Roaming/Bitcoin" directory that contains the following:

Code:
set_lg_dir database
set_lk_max_locks 40000

The default was 10000. You can monitor the lock usage using "db_stat -e". I've tested it on an old laptop with 0.3.24 and 0.5.0. Obviously it will work with alt-coins based on the Bitcoin & BDB code. The same file should go into the respective "testnet" subdirectories.
Please don't omit the "set_lg_dir database" line. Without it you may corrupt the "_db.00?" and/or "log.*" files if you are unlucky and run the Berkeley DB utilities like "db_stat". With this line you will at worst get the error message from those utilities.
Good luck.
Edit: Oh, I forgot about the most obvious or non-obvious: restart the bitcoind/bitcoin-qt.
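For what it's worth, the same knobs are available programmatically, so a client could set them at startup instead of relying on every operator to drop in a DB_CONFIG file. A sketch assuming the Berkeley DB C++ bindings (db_cxx.h); the data directory path is illustrative:

Code:
// Sketch: raise the BDB lock limits in code, mirroring 2112's DB_CONFIG file.
// These must be set before DbEnv::open() is called.
#include <db_cxx.h>

int main()
{
    DbEnv env(0);
    env.set_lg_dir("database");      // same as "set_lg_dir database"
    env.set_lk_max_locks(40000);     // same as "set_lk_max_locks 40000" (default 10000)
    env.set_lk_max_objects(40000);   // lock objects tend to scale with locks
    env.open("/home/user/.bitcoin",  // illustrative path
             DB_CREATE | DB_INIT_LOCK | DB_INIT_LOG | DB_INIT_MPOOL | DB_INIT_TXN,
             0);
    // ... normal database work here ...
    env.close(0);
    return 0;
}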
If software had metacognition, that might be a valid approach.

Would it be correct to say that v0.7 didn't strictly say that the block was invalid, but rather triggered some branch of the code (that was expected never to trigger) which defaulted to labelling the block as invalid simply because it knew it didn't know the correct answer (because it ran out of BDB locks)? In other words, the program wanted to say "something went wrong and I don't know if this block is valid or invalid"?
I think you're right that we are introducing a new question here: should we allow users/full nodes to set their own definition of a valid block size? I agree that the significance of this is not fully understood.

Agreed: nodes contribute to the security model, but their contribution is largely validation, keeping miners honest and ignoring invalid chains, thus incentivizing miners to build honest chains.
We are introducing a new question of what validation is here. If a node accepts a chain of blocks for its own internal purposes, but refuses to pass the chain on to others, is the node still validating the chain for the community, or is it rejecting the chain? It is behaving almost as if it invalidated the chain by refusing to propagate it. I'm not sure what this means yet.
They don't get forked off. They continuously download the chain for their own use and pass transactions on, but they no longer validate the chain for others since they don't communicate it. From a block propagation view, they function as leaves of the network, not as nodes.
What I am worried about is if 5% of the nodes (mostly miners) continue to upload the blockchain to others, but 95% of the nodes simply download the full chain for their own use. This would magnify the upload requirements for the real full nodes by 20x, since each of them has to upload every block 20 times to pass it on to the 95% who don't. BitTorrent networks break down when this happens.
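The arithmetic behind the 20x, as a trivial sketch ("seeders" is just my shorthand for the nodes that still upload):

Code:
// Upload amplification when only a fraction of nodes re-upload blocks:
// every node still needs each block once, but only the seeders serve them.
#include <cstdio>

int main()
{
    const double nodes   = 100.0; // all nodes wanting the chain
    const double seeders = 5.0;   // the 5% that still upload
    // ~100 uploads needed per block, shared by 5 seeders = 20 uploads each,
    // versus ~1 each if everybody shared.
    std::printf("amplification: %.0fx\n", nodes / seeders);
    return 0;
}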
A full node is one that validates a chain both for itself and for the community at large. This includes validating blocks, passing transactions on and passing the longest valid chain on. A node which does not do all of these things is not a full node. In terms of network participation it is functioning as something less than a full node, but still draws the same resources from the network (while not providing them back). There are adverse effects to this. We can say "but most people will leave the unlimited default" and that may be true today, but it won't be when blocks are 100MB.
We are in complete agreement on BU and the motivations for it. However, I think introducing new user parameters that limit the node's usefulness to the network is a mistake. That is my only concern here.
My suggestion would be to have a separate code path for handling the "invalid" state that produces a fraud proof.

Bitcoin Unlimited's view of consensus is that a block is valid if all transactions in that block are valid and not already spent (and the PoW meets the difficulty target). The output of any software test on a block must be one of:
1. Valid
2. Invalid
3. I don't know
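In code, the distinction might look something like this. A minimal sketch of the idea (the type names are hypothetical, not an actual BU interface; the PoW check is omitted for brevity):

Code:
// Sketch: tri-state block validation. The key point is that resource
// failures (e.g., running out of BDB locks) map to Unknown, never Invalid.
#include <stdexcept>

struct CBlock { /* block data */ };
struct ResourceError : std::runtime_error {           // hypothetical
    using std::runtime_error::runtime_error;
};
bool AllTxValidAndUnspent(const CBlock&);             // the white paper's rule (stub)

enum class BlockVerdict { Valid, Invalid, Unknown };

BlockVerdict CheckBlock(const CBlock& block)
{
    try {
        if (!AllTxValidAndUnspent(block))
            return BlockVerdict::Invalid;   // provably bad: produce a fraud proof here
        return BlockVerdict::Valid;
    } catch (const ResourceError&) {
        // "Something went wrong and I don't know if this block is valid":
        // report the uncertainty instead of defaulting to Invalid like pre-0.8 did.
        return BlockVerdict::Unknown;
    }
}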
@Peter R, nice work Peter.

Bitcoin Unlimited: A Peer-to-Peer Electronic Cash System for Planet Earth
A scalable Bitcoin
The vision for Bitcoin Unlimited is a system that could scale up to a worldwide payment network and a decentralized monetary system. Transactions are grouped into blocks and recorded on an unforgeable global ledger known as the Bitcoin blockchain. The blockchain is accessible to anyone in the world, secured by cryptography, and maintained by the most powerful single-purpose computing network ever created.
Governed by the code we run
The guiding principle for Bitcoin Unlimited is that the evolution of the network should be decided by the code people freely choose to run. Consensus is then an emergent property, objectively represented by the longest proof-of-work chain.
What makes a valid block?
From the Bitcoin white paper, "nodes accept the block only if all transactions in it are valid and not already spent." A block cannot be invalid because of its size. Instead, excessively large blocks that would pose technical challenges to a node are dealt with in the transport layer, increasing the block's orphaning risk. Bitcoin Unlimited nodes can accept a chain with an excessive block, when needed, in order to track consensus.
Values and beliefs: adoption is paramount
- Bitcoin should freely scale with demand through a market-based process
- The user’s experience is important
- Low fees are desirable
- Instant (0-conf) transactions are useful
- Resistance to censorship and security against double spending improves with adoption
Technical: put the user in control
- Software fork of Bitcoin Core
- Bitcoin Unlimited can simultaneously flag support for multiple block size limit proposals (BIP100, BIP101, etc.)
- The block size limit is considered to be part of the transport layer rather than part of the consensus layer. The user can adjust his node's block size limit based on the technical limitations of his hardware, while still ensuring that his node follows the longest proof-of-work chain.
Politics: Bitcoin is interdisciplinary
The voices of scientists, developers, entrepreneurs, investors and users should all be heard and respected.
****************************************************
Critiques? I'm trying to come up with a simple "1 pager" that communicates the most important points.
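One thought on the "orphaning risk" phrase in the draft: the risk is quantifiable. If block arrivals are Poisson with a 600-second mean, a block that takes tau seconds to reach the other miners is orphaned with probability roughly 1 - exp(-tau/600). A quick sketch:

Code:
// Sketch: orphan risk as a function of propagation delay, assuming Poisson
// block arrivals with a 600 s mean interval. Delays are illustrative.
#include <cmath>
#include <cstdio>

int main()
{
    const double T = 600.0;                      // mean block interval (seconds)
    const double taus[] = {1.0, 30.0, 120.0};    // propagation delays to test
    for (double tau : taus)
        std::printf("tau = %5.1f s -> P(orphan) ~ %.3f\n",
                    tau, 1.0 - std::exp(-tau / T));
    return 0;
}

So a block that needs two minutes to propagate already carries roughly an 18% orphan risk, which is exactly the transport-layer disincentive the draft points at.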
This is called "tri-state" in CS lingo, BTW.

The output of any software test on a block must be one of:
1. Valid
2. Invalid
3. I don't know
Yes, it's FUD. What will happen is the bad implementations will fork or crash out of the network and either be fixed or abandoned. The result will be a much stronger and more robust network, with clients written with the sort of meta-cognition you are talking about -- they are essentially aware that they may not be right. This type of programming is pretty common in scalable or high-availability applications today...

Anyways, I'm really starting to think that the "multiple implementations won't work because you can't guarantee bug-for-bug compatibility" line is just more FUD from Core Dev.
Then dealing with the "don't know" case should really be left up to the node operator. I personally would want my node to fork back to the longest PoW chain (and hobble along in some error mode if necessary), even if it meant being uncertain about the validity of a given block.

Governed by the code we run
The guiding principle for Bitcoin Unlimited is that the evolution of the network should be decided by the code people freely choose to run. Consensus is then an emergent property, objectively represented by the longest proof-of-work chain.
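A sketch of what leaving that choice to the operator could look like (hypothetical option name, building on the tri-state sketch above):

Code:
// Sketch: operator-selectable policy for the "don't know" verdict.
// "-unknownblockpolicy=follow|halt" is an illustrative name, not a real flag.
enum class BlockVerdict { Valid, Invalid, Unknown };
enum class UnknownPolicy { FollowLongestPoW, HaltAndAlert };

bool ExtendChainWith(BlockVerdict verdict, UnknownPolicy policy)
{
    switch (verdict) {
        case BlockVerdict::Valid:   return true;
        case BlockVerdict::Invalid: return false; // never follow a provably bad chain
        case BlockVerdict::Unknown:
            // Operator's call: hobble along on the most-PoW chain in a
            // degraded mode, or stop and wait for human intervention.
            return policy == UnknownPolicy::FollowLongestPoW;
    }
    return false; // unreachable; silences compiler warnings
}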