Outline for a “Bitcoin Unlimited” White Paper

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I’m wondering if enough knowledge about BU and emergent consensus exists to write a white paper. The purpose of this thread is to (a) brainstorm the main points that such a paper should touch upon, and (b) collect links to the relevant information that should be cited.

To get us started...

1. Introduction

2. The Most Extended Chain

- Miners mine upon the most-likely-to-win chain rather than the longest chain. We'll refer to the most-likely-to-win chain as the “most extended” chain (as per @Roger_Murdock).

- The "most extended" chain is normally the longest chain (it contains the most PoW) but it gets "retarded" by a factor that accounts for strange things that might make miners and nodes more likely to reject or orphan it.

- For example, a chain with a block at its tip that takes longer than 10 min to validate/propagate is less extended than the chain without that block (as per @theZerg's recent paper) even though it contains more work.

- Formalize this idea further...

- The "retardation factor" cannot be precisely measured or defined--only approximated. Each node and miner is free to make its own estimates.

- It is through this process of miners picking the most-likely-to-win chain by using their discretion for edge cases--rather than blindly mining upon the chain with the most PoW--that Bitcoin's consensus is kept objective.

- The reason for this is that the alternative is to define "the longest valid chain as the valid chain" which--as @digitsu just pointed out in his recent article--relies on weak subjectivity (Core defines what is "valid" rather than validity being an emergent property of the network). In BU, the subjectivity would appear to enter into the system in real-time in a decentralized fashion as each miner applies his own "retardation factor" (based also on what he thinks will be accepted by the nodes).
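
- The "retardation" idea above could be sketched numerically. The following is a hypothetical toy model (the penalty function and numbers are illustrative assumptions, not anything specified in this outline): a chain's effective "extension" is its total work, discounted by the work the rest of the network could do while a slow tip block is still validating/propagating.

```python
# Toy sketch of the "most extended chain" idea. The linear penalty is an
# assumption for illustration only; real miners would estimate this
# "retardation factor" in their own way.

BLOCK_INTERVAL_MIN = 10.0  # average minutes per block

def extension(total_work, tip_delay_min):
    """Effective 'extension' of a chain: total PoW (in block-units of
    work) minus the work an average miner could do while the tip block
    is still validating/propagating."""
    work_per_min = 1.0 / BLOCK_INTERVAL_MIN  # one block's work per 10 min
    return total_work - tip_delay_min * work_per_min

# Chain A: 101 blocks of work, but its tip takes 15 min to validate.
# Chain B: 100 blocks of work with a normal tip.
print(extension(101, 15.0) < extension(100, 0.0))  # True: B is "more extended"
```

This mirrors the example above: a block that takes longer than one block interval to validate/propagate leaves its chain *less* extended than the chain without it, despite the extra work.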

3. There is No Block Size Limit

- Rational arguments against the existence of a strict block size limit:
+ node operators and miners have always been free to roll their own code
+ The 1 MB limit in Core's reference implementation was just an "inconvenience barrier"
+ Core cannot prevent nodes/miners from changing their limits if miners and nodes are free to join the network

- Empirical evidence against the existence of a strict block size limit:
+ my charts here.

4. Large Blocks

- Formalize with math when a node will fork from consensus and when a block will be accepted into the longest PoW chain. Prove that:
+ A node with a block size limit greater than the hash-power weighted median will always follow the longest chain.
+ A large block will be accepted into the longest chain if it is smaller than the hash-power weighted median block size limit.
+ Analyze the conditions for network split events.
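
These claims could be illustrated with a toy calculation like the following (a hypothetical Python sketch: the miner limits and hash-power shares are made-up numbers, and `weighted_median` / `block_wins` are illustrative helpers, not proofs):

```python
# Illustrative sketch of the Section 4 conjectures; all numbers are
# assumptions for the example.

def weighted_median(values, weights):
    """Return the weight-weighted median of `values`."""
    pairs = sorted(zip(values, weights))
    total = sum(weights)
    running = 0.0
    for value, weight in pairs:
        running += weight
        if running >= total / 2:
            return value

# Each miner: (block size limit in MB, fraction of total hash power).
miners = [(2, 0.30), (4, 0.25), (8, 0.25), (16, 0.20)]
limits = [m[0] for m in miners]
hashpower = [m[1] for m in miners]

median_limit = weighted_median(limits, hashpower)

def block_wins(block_size_mb):
    """Conjecture: a block no larger than the hash-power weighted median
    limit is built upon by a majority of hash power, so it stays in the
    longest chain."""
    return block_size_mb <= median_limit

print(median_limit)    # 4
print(block_wins(3))   # True: a hash-power majority accepts it
print(block_wins(8))   # False: a hash-power majority orphans it
```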

5. Forking Pressure

- Introduce the concept of "forking pressure" in the context of the block size limit as a node's or miner's desire to allow the network to process more transactions per second (looks like @Roger_Murdock just spoke about this here)

- Here is perhaps a partial quantitative description (the deadweight loss due to the existence of a production quota with Qmax < Q* is a sort of "pressure.")
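
One way to make that quantitative is a toy linear supply/demand model (all parameters below are illustrative assumptions; the deadweight-loss triangle stands in for the "pressure"):

```python
# Hypothetical numbers only: a linear demand/supply toy model showing the
# deadweight loss created by a production quota Qmax below the
# market-clearing quantity Q*.

a, b = 100.0, 1.0   # demand:  P = a - b*Q
c, d = 10.0, 0.5    # supply:  P = c + d*Q

q_star = (a - c) / (b + d)   # market-clearing quantity Q* (60 here)

def deadweight_loss(q_max):
    """Area of the triangle between demand and supply from q_max to Q*."""
    if q_max >= q_star:
        return 0.0
    gap = (a - b * q_max) - (c + d * q_max)  # demand price minus supply price
    return 0.5 * (q_star - q_max) * gap

print(deadweight_loss(60))   # 0.0 -> no "pressure" at or above Q*
print(deadweight_loss(40))   # 300.0 -> pressure grows as the quota tightens
```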

6. Consensus Pressure

- Introduce the concept of "consensus pressure," which is particularly important for miners, and which is related to the great advantage that ensues if miners agree together on the same (or compatible) limits.

- It is best if the majority of the hash power has the exact same limit, in order to prevent a network split under the conditions defined in Section 4 (in the very unlikely cases where it could happen).

- Perhaps "consensus pressure" could be modelled as something as simple as:

P_consensus = K x (median limit - node's limit).

7. A Simple Model of Emergent Consensus

- Show using a simple program (a la Wolfram NKS) that entities that can "feel" forking pressure and that can "feel" consensus pressure, will spontaneously agree on new block size limits as pressure builds, so long as they have some means to signal their settings to the other nodes.
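
- As a sketch of what such a program might look like (a hypothetical Python toy: the pressure constants, starting limits, and target capacity are all made-up assumptions; the consensus term uses the Section 6 formula):

```python
import statistics

def simulate(limits, target, k_forking=0.1, k_consensus=0.5, steps=200):
    """Each step, every node nudges its limit by 'forking pressure'
    (toward the capacity it wants) and 'consensus pressure'
    K * (median limit - own limit)."""
    limits = list(limits)
    for _ in range(steps):
        median = statistics.median(limits)
        limits = [
            lim
            + k_forking * (target - lim)      # desire to process more tx
            + k_consensus * (median - lim)    # desire to match the majority
            for lim in limits
        ]
    return limits

# Nodes start with scattered limits but all feel pressure toward 8 MB.
final = simulate([1, 2, 4, 8, 32], target=8.0)
print([round(x, 2) for x in final])   # every limit converges near 8.0
```

Even this crude model shows the qualitative behaviour claimed above: agents that feel both pressures spontaneously agree on a new limit, provided each can observe the others' settings (here, via the median).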

8. Conclusion
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Yes, this is a good idea. It could be a higher-level, no-math summary of the other papers, delve into the philosophy a bit, and it would also be very useful to go over BUIP001 carefully, since newcomers seem to miss it today.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
Agree this is a great idea @Peter R

Especially point 6 seems generally underappreciated in many anti-BU arguments. A good example is the 'iterative pruning' argument here: https://bitco.in/forum/goto/post?id=7389#post-7389

To us it seems obvious that every participant on the network has huge incentives to maintain consensus. But many of the counter arguments imagine scenarios where miners, or non-mining nodes, split from consensus for various trivial reasons. So it might be good to go through these objections one-by-one, and describe how economic and game-theoretic incentives at the individual level provide strong corrective forces against blockchain forks, and toward consensus.
 
  • Like
Reactions: Peter R

Aquent

Active Member
Aug 19, 2015
252
667
Yeh we probably need some sort of white paper and thanks for taking the initiative and for the effort.

I'd suggest that your point 2, just after the introduction, be moved to point 7. The ordering of the rest, bumped up by 1 after taking out point 2, seems fine. The reason, in my view, is that the depth concept is somewhat complex and very minor (just a fail-safe), and it introduces novel terms such as "most likely to win", "most extended", etc. It confuses when it is the first thing, when we want the primary idea, which is quite novel in itself, to take the main brain share and to be as simple and intuitive as possible for readers.

Once the primary idea is explained then the fail safe concept can be explained, perhaps somewhere near the conclusion.

Moreover I'd make it clear that the fail safe mechanism is distinct from the primary idea and the two are not related or dependent as well as highlight that it can be turned off.

In regards to emergent consensus, it's easy for readers to dismiss it as abstract stuff which we don't know works. That's why I'd want to focus quite a bit on the practice. That is, I would zoom in on our experience over the past 7 years with lifting the soft limit.

People don't really like uncertainty. They want to hear concrete things. Obviously we want to lay out the theory, but focus on the practice. Your graph there is perfect. I'd perhaps try to research those events when we lifted the soft limit (although I understand time is a constraint). I don't remember the 250k one, but for the 500k and 750k ones there was usually some reddit post, the miners just lifted it, and it was pretty much a non-event, with everything going just fine and no one even noticing.

That can then be used as evidence that we are just keeping to our concrete traditions and simply doing what we always have and know very well how to do. So there is nothing abstract about it, and there is no need to prove it works because we know it works from past experience.

So my suggestion is: consider moving the fail-safe discussion to just above the conclusion, and add some meat to the theory by providing the evidence of the past 7 years in regards to the soft limit. Everything else is great. Can't wait for it.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@theZerg: Agree on most points, but I'm wondering if we want to make the paper "implementation agnostic" so that it won't be censored from the Blockstream Core forums and mailing lists. This would mean that we wouldn't talk about BUIP001 specifically, except for perhaps in an Appendix that we could remove depending on the audience.

@Mengerian: Agreed on all points.

@Aquent: Agreed that we should not focus on the "acceptance depth" idea. What I wanted to do in Section 2 was explain that miners already mine the most-likely-to-win chain. We already have evidence of this:

1. the 2010-08-15 integer overflow event
2. the 2013-03-11 bdb event
3. the summer 2015 SPV mining event

#1 shows that miners will not always follow the longest chain even if it is *valid* as defined by the code.
#2 shows that miners will switch to a minority chain if it is deemed necessary for the health of Bitcoin.

I then wanted to formalize this by calling the best chain the "most extended chain" and use @theZerg's result of a large block that takes > 10 min to validate/propagate as actually retarding the chain rather than extending it. I think it will all make sense once it's written up with some diagrams...
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Hmm... I wouldn't try to write around their censorship. It will get to those who want to read it anyway, and I think it will be censored for its subject matter regardless. But if you don't want it to be instantly censored, you'll have to keep the words "Bitcoin Unlimited" out of the name.

Anyway, we do seem to need a white paper specific to Bitcoin Unlimited which would basically be what you are suggesting plus some details...
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
I'm thinking a careful and precise definition of longest chain could be the linchpin of Section 2. "Most extended chain" may be the final form, or there may be room for even further precision. I've already seen several people elsewhere try to refine the definition for greater accuracy, which is a sign to me that it's a crucial term on which many things hinge. In such cases, we get many results "for free" just by having a completely precise definition, and the more refined the definition, the more elucidating the analysis becomes.
 
  • Like
Reactions: Peter R

plasticAiredale

New Member
Aug 28, 2015
2
3
On a less important point, I wonder if you should change the wording of "retard" in the most extended chain section to something less open to PC controversy. Even though you are using the word in a technical sense, it's possible people (trolls) may use it against the paper, even if they aren't actually "triggered". :rolleyes:

Maybe change it to the "encumbered factor" or some other synonym without the extra baggage.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@plasticAiredale

When @Roger_Murdock mentioned the idea of the "most extended chain" where a big block at the tip would be treated as though it was a few minutes behind, I immediately thought of the parallel to retarded time used in electrodynamics.



At first I thought I should pick another word, but now I think it might help make things more clear. For example, if a small-block proponent FUDs about a 1 GB block attack, we can reply that such a block would "be significantly retarded" and literally mean it.

My hunch is that the terminology will help get the message across that miners and nodes aren't stupid and will not blindly build upon ridiculously large blocks.
I also find it amusing that the physicist Oleg D. Jefimenko used the concept of "retarded time" to find a closed-form solution to Maxwell's equations.
 
  • Like
Reactions: Bagatell

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
- The reason for this is that the alternative is to define "the longest valid chain as the valid chain" which--as @digitsu just pointed out in his recent article--relies on weak subjectivity (Core defines what is "valid" rather than validity being an emergent property of the network). In BU, the subjectivity would appear to enter into the system in real-time in a decentralized fashion as each miner applies his own "retardation factor" (based also on what he thinks will be accepted by the nodes).
In line with the second sentence, it seems to me it's not that Core's approach relies on weak subjectivity, but that it relies on centralized subjectivity. If each miner is free to choose which blocks to accept and reject (for any reason at all), then each miner is making a subjective choice, taking into account their business calculations as well as the Keynesian beauty contest over Schelling points - i.e., will this be an acceptable blocksize to others? The trouble with Keynesian beauty contests is that they can't really be calculated, as the potential complexity is boundless; but the contest can at least be characterized, and the alternative anyway is central planning.

Some loose thoughts on that:

 
  • Like
Reactions: Peter R

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
So if someone says "Large blocks are retarded", we can agree with that ;)

Thinking about the order, how about moving 5 and 6 (forking pressure and consensus pressure) up to the beginning? Seems to me these could be explained generally, and apply to many things other than block size. The case of miners trying to keep mining the 50 BTC block reward after the last halving is a good illustrative example. This could lead into a discussion of how consensus parameters are ultimately enforced by the market. Block size could then be introduced as a different case where perhaps the market would favour a fork, if that is deemed by most to be in their best interest.
 

bitcartel

Member
Nov 19, 2015
95
93
@Peter R Unfortunately a phrase like "retarded" is going to distract from the arguments being made. I think words like "Brake", "Drag", "Resistance" could work better.
 

Erdogan

Active Member
Aug 30, 2015
476
855
You could name the branch that is most likely to succeed (be the longest in the future) the speculative chain or the forward chain or the premium chain or the future chain, words borrowed from the futures markets. (Digression: we could even have a market for betting on each new block's success!)
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
In line with the second sentence, it seems to me it's not that Core's approach relies on weak subjectivity, but that it relies on centralized subjectivity. If each miner is free to choose which blocks to accept and reject (for any reason at all), then each miner is making a subjective choice, taking into account their business calculations as well as the Keynesian beauty contest over Schelling points - i.e., will this be an acceptable blocksize to others? The trouble with Keynesian beauty contests is that they can't really be calculated, as the potential complexity is boundless; but the contest can at least be characterized, and the alternative anyway is central planning.
That's a good point!

So both BU and Core rely on subjectivity--the difference is how that subjectivity enters the system. In the case of Core, it is when the "developers" proclaim they have "konsensus" (with a "k" to differentiate from consensus as defined by the longest chain).

I think we could say that with Core, the purpose of proof-of-work is only to come to consensus on the ordering of transactions. The rules encoded by the protocol are absolute and cannot be changed without "konsensus." An example of "konsensus" is the change in the protocol rules used to erase the 184 billion bitcoins created on 15 August 2010 (block #74,638). This was known as the "Value Overflow Incident." Although the protocol permitted the creation of these coins, the developers came to "konsensus" that this was not the intended behaviour of the system, and organized miners to mine a new chain off block #74,637, thereby orphaning the offending block. This is an example of Core making a subjective (but very reasonable) decision to change the rules as defined by the code, based only on their intuitive belief of what Bitcoin is.

In Bitcoin Unlimited, the subjectivity enters continuously as miners and nodes deal with edge cases--each miner choosing to build upon the chain that he deems most likely to win. There is only one type of "consensus" and it is defined objectively by the most extended chain. Proof-of-work is used not only to determine the ordering of transactions, but also to subtly express the views and desires of the participants who compose the system. BU is organic and can evolve through a bottom-up process much like nature itself. Core is rigid and can only evolve through a top-down process.
 

chainstor

New Member
Aug 28, 2015
16
25
Prevailing chain? (technically correct, but common use of word is more like 'common')

Prevalent chain? ( in the context of the more powerful, dominant)
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
If we take seriously the idea that each miner and node shapes what Bitcoin is (i.e., what constitutes a valid block or the *valid* most-extended chain) through their individual subjective choices about which blocks to generate, relay, and build on, the logical endpoint is something like a "block controller" (or team) working at each mining pool/firm and business node. In other words, when each block is worth hundreds of thousands or millions of dollars, we can expect real-time human monitoring of the network by each participant to optimize block production and relay choices for maximum mining profit and business currentness (or direct node profit, if node-incentivization ideas like @Justus Ranvier's are implemented).

In such a future, these professionals, much like daytraders, would want software that helps them work. BU's oversize-block acceptance depth would be helpful as something like a stop loss for a daytrader, for example. BU takes the first steps toward such a future, where Bitcoin is grown up, leaving the Core nest to fend for itself. Bottom-up control will have to be asserted for maximum efficiency and adaptivity, and to guard against the attack vector of consensus handed down from Core central command.

Bitcoin is in that Forkology 101 sense a creature of the market: the decisions of each infrastructural stakeholder to maximize their own profits are what channel everyone into following the consensus rules, not Core dev shepherding or Satoshi's setting things in stone at the start - though Satoshi's precedents / social contract and Core's recommendations do serve as Schelling points around which consensus will or may form.

--

Now despite the utility of hiring someone to monitor the tip of the chain, it may be theorized that for business-certainty reasons the market won't change the block size cap very often. It would likely be in stasis for longish periods, like a year, then move higher in a series of discrete jumps, as we discussed in another thread.

The result is a series of punctuated equilibria. Whereas in evolution koinophilia (preference for "the usual" in mate selection) may explain why there are discrete species as there is a disincentive to step out of line with a minor mutation, in Bitcoin the incentive is economic ("coinophilia," if you will :D). In the koinophilia article a criticism of koinophilia is given: it cannot explain why beneficial mutations would ever be adopted. This is like how people wonder if Bitcoin might never change because people like staying in consensus too much.

The answer to both, I think, is that koinophilia and the inertia of established consensus (and trust of "known factor" Core) only serve as a hurdle. Once a beneficial mutation that is sufficiently survival-enhancing appears, or once a change to the Bitcoin consensus rules becomes deemed sufficiently value-enhancing, the threshold will be crossed and a quantum jump to the next stable genetic variation or the next market-favored Schelling point will happen.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Interesting and potentially relevant paper:

http://www.sciencedirect.com/science/article/pii/S0096300315010619

The peloton superorganism and protocooperative behavior

Abstract. A theoretical framework for protocooperative behavior in pelotons (groups of cyclists) is proposed. A threshold between cooperative and free-riding behaviors in pelotons is modeled, together comprising protocooperative behavior (different from protocooperation), hypothesized to emerge in biological systems involving energy savings mechanisms. Further, the tension between intra-group cooperation and inter-group competition is consistent with superorganism properties. Protocooperative behavior parameters: 1. two or more cyclists coupled by drafting benefit; 2. current power output or speed; and 3. maximal sustainable outputs (MSO). Main characteristics: 1. relatively low speed phase in which cyclists naturally pass each other and share highest-cost front position; and 2. free-riding phase in which cyclists maintain speeds of those ahead, but cannot pass. Threshold for protocooperative behavior is equivalent to coefficient of drafting (d), below which cooperative behavior occurs; above which free-riding occurs up to a second threshold when coupled cyclists diverge. Range of cyclists’ MSOs in free-riding phase is equivalent to the energy savings benefit of drafting (1-d). When driven to maximal speeds, groups tend to sort such that their MSO ranges equal the free-riding range (1-d).