Gold collapsing. Bitcoin UP.

Zangelbert Bingledack
@rocks
The simple fact is that nodes which cannot contribute to the valid chain built by miners should switch to SPV clients. Otherwise they leech bandwidth from the network without contributing; such users would be better served by SPV clients.
Although I'm not sure how far enjoying the benefits of running a full node can be considered leeching, given that people who care to run a full node presumably have a strong interest in Bitcoin scaling as well, it's an interesting point in light of today's reddit thread about the historical decline in full node count:

https://www.reddit.com/r/Bitcoin/comments/3p5n9c/number_of_bitcoin_nodes_is_at_a_6_year_low_if_you/

(@Bloomie Linking in text is still broken on my Android, and I just updated the version.)
 

yrral86
@cypherdoc @Peter R

We are in full agreement on enabling market-based decisions and moving decisions to miners/users and away from developers. That said, I think full nodes (which download the blockchain) should operate in a manner that supports and follows miner preferences (miners are the source of security).

I think there are disadvantages to having nodes on the network that operate with preferences below the miners'. Such nodes are leechers, and in P2P networks leechers cause significant negative effects.

For example, let's say miners start to build 100MB blocks. Most home users find their upload pipe fully saturated, so they set their preference to not transmit blocks larger than 10MB so they can continue to run a full node. Such nodes become leechers: they download the blockchain but do not contribute by uploading to others. If, let's say, 50% (or more) of nodes start to do this, it puts significant upload pressure on the remaining nodes. BitTorrent networks do not function with too many leechers, and I believe neither will Bitcoin's.

The simple fact is that nodes which cannot contribute to the valid chain built by miners should switch to SPV clients. Otherwise they leech bandwidth from the network without contributing; such users would be better served by SPV clients.

My view is that it is best for BU to either:
1) accept unlimited blocks, or
2) follow the BIP101 schedule, or
3) follow BIP100 voting,
and not set their own limit below all of these; users who would do so should move from full nodes to SPV clients.

This means the preference should really be set by the miners, with the blocks they mine being the sole source of votes. Nodes should follow whatever the miners vote for (provided they validate and agree with the transactions included). That is the security model of Bitcoin, IMHO.
If you separate the concerns of upload bandwidth limitation and block size, there is less of an issue. When a block is too large for the given bandwidth, nodes can still contribute transaction broadcasting. Each node needs a block header and the transactions. I find it unlikely that if we raise the block size we will stick with transmitting full blocks around. Block headers will be synced, and each node will download the transactions it has not already seen (or a subset if it wants to act as an SPV node/client). The difference between a node and a client is whether they also broadcast transactions and block headers: SPV clients would be leechers; SPV nodes would contribute transaction and block header upload.
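To make this concrete, here is a minimal sketch (not any client's actual code; the mempool map and the fetch_tx callback are hypothetical stand-ins) of rebuilding a block from its header announcement, downloading only what normal transaction relay hasn't already delivered:

```python
from typing import Callable, Dict, List

def reconstruct_block(txids: List[str],
                      mempool: Dict[str, bytes],
                      fetch_tx: Callable[[str], bytes]) -> List[bytes]:
    """Rebuild a block's transaction list from the txids announced
    with its header. Transactions already seen via normal relay need
    no re-download; only the unseen remainder is fetched from peers."""
    txs = []
    for txid in txids:
        if txid in mempool:
            txs.append(mempool[txid])   # already received during tx relay
        else:
            txs.append(fetch_tx(txid))  # download only what we missed
    return txs
```

Under a scheme like this, a node's block-related upload is mostly headers plus the few transactions its peers missed, which is what would make the bandwidth-limited "SPV node" role viable.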
 

AdrianX
But why add an extra dimension of risk in the first place? If miners know that all nodes will transmit blocks up to size X or are able to poll the network for node preferences, then there is one less dimension of risk towards building larger blocks.

We're already seeing conservative behavior from the pools; why encourage more of that?
That's a good point. If I'm understanding BU correctly, miners will poll the network, and when demand for greater transaction volume is there, they take a risk and include more transactions for profit, stretching the average. But (and this was my original concern) what if the max block size set by 51% of nodes can't be exceeded by miners even if they wanted to make bigger blocks?
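For illustration, a sketch of the miner-side calculation being described; the polling mechanism and the node_prefs data are assumptions, since nothing like this is specified yet:

```python
def safe_block_size(node_prefs, coverage=0.75):
    """Largest block size still relayed by `coverage` of polled nodes.

    `node_prefs` holds per-node max-size preferences (in bytes) from a
    hypothetical polling mechanism. A miner wanting wide propagation
    stays under the size that this fraction of nodes will forward."""
    prefs = sorted(node_prefs, reverse=True)
    k = max(int(len(prefs) * coverage) - 1, 0)  # coverage-quantile node
    return prefs[k]

# e.g. three of four polled nodes accept >= 8 MB, one caps at 1 MB:
print(safe_block_size([32e6, 8e6, 8e6, 1e6]))  # -> 8000000.0
```

Note that if 51% of nodes set a low cap, a calculation like this would indeed pin miners to it unless they accept the propagation handicap of exceeding it, which is exactly the concern raised above.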
 

AdrianX
Bitcoin Unlimited: A Peer-to-Peer Electronic Cash System for Planet Earth

Satoshi's original vision--a scalable Bitcoin

Bitcoin Unlimited adheres to Satoshi Nakamoto’s vision for a system that could scale up to a worldwide payment network and a decentralized monetary system. Transactions are recorded on an unforgeable global ledger known as the Blockchain. The Blockchain is accessible to anyone in the world, secured by cryptography, and maintained by the most powerful single-purpose computing network ever created.

Governed by the code we run

The guiding principle for Bitcoin Unlimited is that the evolution of the network should be decided by the code people freely choose to run. Consensus is then an emergent property, objectively represented by the longest proof-of-work chain.

Values and beliefs: adoption is paramount

- Bitcoin should freely scale with demand through a market-based process

- The user’s experience is important

- Low fees are desirable

- Instant (0-conf) transactions are useful

- Resistance to censorship and security against double spending improves with adoption

Technical: put the user in control

- Software fork of Bitcoin Core

- User-adjustable max block size limit (unlimited by default)

- Bitcoin Unlimited can simultaneously flag support for multiple block size limit proposals (BIP100, BIP101, etc.)

- The block size limit is considered to be part of the transport layer rather than part of the consensus layer; Bitcoin Unlimited can be configured to accept a chain with an excessive block, when needed, in order to track consensus.

Politics: Bitcoin is interdisciplinary

The voices of scientists, developers, entrepreneurs, investors and users should all be heard and respected.

****************************************************

Critiques? I'm trying to come up with a simple "1 pager" that communicates the most important points.
To avoid being called an alt, I think you should explicitly say: "Transactions are recorded on an unforgeable global ledger known as the Bitcoin Blockchain"

Just some thinking on the name too: Core is such a powerful name, it sounds like it's the heart of Bitcoin when in fact it's just an implementation.

Unlimited has a connotation of being the antagonist in this block size debate; I'd be concerned it may meet some resistance on that fact alone. (If we think of Bitcoin Unlimited as the working name, does anyone feel this project may have a better-branded name?)

Name ideas, anyone? Here's my first take:
Bitcoin Secure
Bitcoin Vision
Bitcoin Central Core
Bitcoin Consensus
Bitcoin Evolution
Bitcoin System
Rusty responding to @solex :


To me it seems that Blockstream's vision for Bitcoin is for the block size limit to be a policy tool used to balance "fees" with "decentralization."
Decentralization being a measure of the number of nodes running a centrally controlled code base.
 

theZerg
@rocks, you have to look at the full system: every one of those "leecher" full nodes could be supporting 1,000 SPV clients.

@AdrianX: I really like Unlimited. It really says that Bitcoin is for everyone.
 

_mr_e
I like the idea that BU could run completely parallel to the current Bitcoin network. It would just be that the function that determines the chance of getting a block accepted would be 0% for blocks > 1MB until it has determined that there are enough nodes that will accept something bigger. It would be a pretty damn smooth fork.
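A toy version of that function, just to show the shape of the idea; the 75% threshold and the inputs are invented here, not anything specified:

```python
def acceptance_chance(block_size_mb, frac_accepting_bigger):
    """0% chance of acceptance above 1 MB until 'enough' of the network
    signals it will accept larger blocks; the threshold is made up."""
    if block_size_mb <= 1.0:
        return 1.0  # today's rules: always relayable
    return frac_accepting_bigger if frac_accepting_bigger >= 0.75 else 0.0
```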
 

theZerg
@peter_r: I think that these 2 points need to be combined, because the first one gives people who are currently part of this debate the incorrect impression that Bitcoin Unlimited is just Core/XT but with user config. But the truth is that it is a very different configuration parameter in BU, because its consensus algorithm is very different (WRT this setting).

- User-adjustable max block size limit (unlimited by default)

- The block size limit is considered to be part of the transport layer rather than part of the consensus layer; Bitcoin Unlimited can be configured to accept a chain with an excessive block, when needed, in order to track consensus.
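To picture how this differs from a consensus rule, here is a minimal sketch; the parameter names and the depth value are illustrative assumptions, not BU's actual design:

```python
def follow_chain_tip(tip_height, oversize_block_height, acceptance_depth=4):
    """Transport-layer treatment of an 'excessive' block.

    The block is not invalid, as it would be under a consensus rule;
    the node merely withholds acceptance until enough proof-of-work is
    piled on top of it, then follows the chain to stay in consensus."""
    if oversize_block_height is None:
        return True                          # no oversize block in chain
    buried = tip_height - oversize_block_height
    return buried >= acceptance_depth        # accept once sufficiently buried
```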
 

cypherdoc
Still not sure about the default block size limit being "unlimited." With an "8 MB" default, BU might be palatable to a greater number of people (e.g., the Chinese miners already gave their blessing for 8 MB) and deflect a lot of FUD (e.g., "zomg unlimited is totally reckless").
I think if we're going to cut these alternatives loose, we cut them loose entirely and make the distinction clear. Default Unlimited.
 

solex
Rusty responding to @solex :


To me it seems that Blockstream's vision for Bitcoin is for the block size limit to be a policy tool used to balance "fees" with "decentralization."
I have been thinking of a different way of presenting the argument about the risk of perpetually full blocks, i.e. blocks >75% full over a period of days.

Consider that there are 2 types of Bitcoin transaction:
  • C or "Care tx": the user pays a fee, expects a money-moving service, and actually cares about prompt confirmation.
  • D or "Don't Care tx": the user pays, but is spamming, advertising, or stress-testing, and doesn't care whether confirmation occurs.
This means that there are 3 types of blocks: C, D, and a CD mixture. In practice we only see CD blocks; however, the ratio C/D in each block is variable, and also immeasurable.

Core Dev and many 1MBers think that there is an unquantifiable amount of D that can be squeezed out (raising the C/D ratio) as real-world ecosystem growth continues, by using a hard block limit (L). They also think that this reduces centralisation pressures by keeping blocks smaller than otherwise, even though centralisation pressure has historically been due to users choosing SPV/web wallets and pool mining, not due to the average block size.

So, although there have been periods of blocks averaging >900KB, the likelihood is that these have not yet had a very high percentage of C-type tx. The network can hum along with potential C+D > L where potential C < L (as seen in stress-tests), but as soon as potential C > L, damage results:
  • Angry users complaining on the forums about non-confirming tx
  • Media attention, bad press, PR disaster
  • Price collapse
  • Users moving to alternative cryptocurrencies, increasing their market share
Fortunately, Gavin and Mike can see this and XT provides a lifeboat, but only after a lot of damage has occurred.
 

cypherdoc
@rocks

I don't think it's quite fair to say that non-mining nodes don't contribute security to the network, i.e. that miners provide all the security. They provide the valuable property of decentralized verification and relaying to the network, and so serve an important role. The greater their numbers, the better. And they do it for free under today's model. The user-configurable setting allows them to choose their own destiny.

If they decide to relay only smaller blocks than what the longest chain is propagating, instead of becoming leechers, don't they just get forked off and become non-participants? Maybe they even continue to provide the blockchain to new nodes, but only up to the height where they got forked off? Maybe they continue to relay and verify txs? I don't think they just become leeches.
@solex :
I have been thinking of a different way of presenting the argument about the risk of perpetually full blocks, i.e. blocks >75% full over a period of days.

Consider that there are 2 types of Bitcoin transaction:
  • C or "Care tx": the user pays a fee, expects a money-moving service, and actually cares about prompt confirmation.
  • D or "Don't Care tx": the user pays, but is spamming, advertising, or stress-testing, and doesn't care whether confirmation occurs.
This means that there are 3 types of blocks: C, D, and a CD mixture. In practice we only see CD blocks; however, the ratio (C/D) in each block is variable, and also immeasurable.

Core Dev (and many 1MBers) think that there is an unquantifiable amount of D that can be squeezed out (raising the C/D ratio) as real-world ecosystem growth continues, by using a hard block limit (L). They also think that this reduces centralisation pressures by keeping blocks smaller than otherwise, even though centralisation pressure has historically been due to SPV/web wallets and pool mining, not the block size.

So, although there have been periods of blocks averaging >900KB, the likelihood is that these have not yet had a very high percentage of C-type tx. The network can hum along with potential C+D > L where potential C < L (as seen in stress-tests), but as soon as potential C > L, damage results:
  • Angry users complaining on the forums about non-confirming tx
  • Media attention, bad press, PR disaster
  • Price collapse
  • Users moving to alternative cryptocurrencies, increasing their market share
Fortunately, Gavin and Mike can see this and XT provides a lifeboat, but only after a lot of damage has occurred.
Damage occurs well before C>L, namely when C+D>L and the number of C's is less than it would be otherwise.
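A quick numeric illustration of this point, assuming (purely for simplicity) that scarce block space is shared pro rata between C and D demand; real ordering is by fee, so this is only a sketch:

```python
L = 1.0   # hard block limit, MB per block (illustrative)
C = 0.8   # potential "care" tx demand, MB per block
D = 0.4   # potential "don't care" tx demand, MB per block

confirmed_C = min(C, L * C / (C + D))  # 0.667 MB of C gets in
displaced_C = C - confirmed_C          # 0.133 MB of C left waiting

# Damage despite C < L: fee-paying users are already being crowded out.
print(f"{displaced_C:.3f} MB of 'care' tx displaced (C={C} < L={L})")
```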
 

Peter R
@peter_r: I think that these 2 points need to be combined, because the first one gives people who are currently part of this debate the incorrect impression that Bitcoin Unlimited is just Core/XT but with user config. But the truth is that it is a very different configuration parameter in BU, because its consensus algorithm is very different (WRT this setting).

- User-adjustable max block size limit (unlimited by default)

- The block size limit is considered to be part of the transport layer rather than part of the consensus layer; Bitcoin Unlimited can be configured to accept a chain with an excessive block, when needed, in order to track consensus.
This is a really good point. The idea that the block size limit is not part of the consensus layer is actually novel (although it was probably the original intention of Satoshi). Perhaps one could write a research paper advancing the idea, with Bitcoin Unlimited as a reference implementation.

We need a good diagram to illustrate how BU's view of consensus differs from Core's.
 

Bloomie
(@Bloomie Linking in text is still broken on my Android, and I just updated the version.)
Was this issue discussed somewhere else before? If not, can you please start a thread?

Guys, if you need to draw my attention to something forum-related, please send a PM instead of tagging me in a thread. I don't read every post on the forum, and it's easy to lose track of notifications since I get quite a few of them.
 

rocks
@rocks
I don't think it's quite fair to say that non-mining nodes don't contribute security to the network, i.e. that miners provide all the security. They provide the valuable property of decentralized verification and relaying to the network, and so serve an important role. The greater their numbers, the better. And they do it for free under today's model. The user-configurable setting allows them to choose their own destiny.
Agreed, nodes contribute to the security model, but their contribution is largely validation: keeping miners honest, ignoring invalid chains, and thus incentivizing miners to build honest chains.

We are introducing a new question here of what validation is. If a node accepts a chain of blocks for its own internal purposes, but refuses to pass the chain on to others, is the node still validating the chain for the community, or is it rejecting the chain? It is behaving almost as if it had invalidated the chain by refusing to propagate it. I'm not sure what this means yet.

If they decide to relay only smaller blocks than what the longest chain is propagating, instead of becoming leechers, don't they just get forked off and become non-participants? Maybe they even continue to provide the blockchain to new nodes, but only up to the height where they got forked off? Maybe they continue to relay and verify txs? I don't think they just become leeches.
They don't get forked off. They continuously download the chain for their own use and pass transactions on, but they no longer validate the chain for others, since they don't communicate it. From a block propagation view, they function as leaves of the network, not as nodes.

What I am worried about is if 5% of the nodes (mostly miners) continue to upload the blockchain to others, but 95% of the nodes simply download the full chain for their own use. This would magnify the upload requirements for real full nodes by roughly 20x, since each has to upload a block ~20 times to pass it on to the 95% who don't. BitTorrent networks break down when this happens.
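The ~20x figure falls out of simple arithmetic; a sketch:

```python
def upload_amplification(total_nodes, uploading_fraction):
    """Copies of each block every uploading node must serve on average.

    Every node downloads each block once; if only a fraction upload,
    those few must cover all the leechers' downloads between them."""
    uploaders = total_nodes * uploading_fraction
    leecher_downloads = total_nodes - uploaders
    return leecher_downloads / uploaders

print(upload_amplification(10_000, 0.05))  # -> 19.0, i.e. roughly 20x
```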

A full node is one that validates a chain both for itself and for the community at large. This includes validating blocks, passing transactions on, and passing the longest valid chain on. A node which does not do all of these things is not a full node. In terms of network participation it functions as something less than a full node, but it still draws the same resources from the network (while not providing them back). There are adverse effects to this. We can say "but most people will leave the unlimited default," and that may be true today, but it won't be when blocks are 100MB.

We are in complete agreement on BU and the motivations for it. However, I think introducing new user parameters that limit the node's usefulness to the network is a mistake. That is my only concern here.
 

Peter R
@theZerg :

OK, I added a new section and adjusted the other sections to reflect this change:

What makes a valid block?

From the Bitcoin white paper, "nodes accept the block only if all transactions in it are valid and not already spent." A block cannot be invalid because of its size. Instead, excessively large blocks that would pose technical challenges to a node are dealt with in the transport layer, increasing the block's orphaning risk. Bitcoin Unlimited nodes can accept a chain with an excessive block, when needed, in order to track consensus.

Link to complete post: https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-68#post-2503
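A minimal sketch of the rule in that section (the data model here is invented and signature checks are elided): size never enters validity; it only changes how the transport layer treats the block.

```python
def block_is_valid(block, utxo_set):
    """White-paper validity: all transactions valid and not already
    spent. Note there is no size check anywhere in this function; an
    oversize block is a transport-layer problem (slower propagation,
    higher orphan risk), never an invalidity."""
    for tx in block.transactions:          # hypothetical data model
        for outpoint in tx.inputs:
            if outpoint not in utxo_set:   # spent or nonexistent input
                return False
        # (script/signature verification elided for brevity)
    return True
```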
 

Peter R
I just thought of something: wasn't the accidental hard fork in early 2013 (the LevelDB bug) a direct result of the block size limit being part of the consensus layer? If Bitcoin Unlimited had been running with @theZerg's/@awemany's idea to accept excess blocks once they're buried at a certain depth, then wouldn't that incident have been automatically resolved without any intervention?
 

yrral86
No, the bug was a misconfiguration of BerkeleyDB (the DB used before LevelDB). The older software could not handle the larger blocks, but this was unknown at the time. Raising the limit triggered the bug in the older software, while the newer software had no issues handling the larger blocks. We will have more of these incidents as we push the limits of the software outward.
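For reference, the workaround circulated at the time (see BIP50) was to raise Berkeley DB's lock-table limit with a DB_CONFIG file in the data directory, along these lines (value as given in BIP50; verify before relying on it):

```
set_lk_max_locks 537000
```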

At the time, the older version actually had more hashpower, but to maintain continuity certain pools gave up their rewards and quickly patched their software. They took a short-term financial hit in order to preserve consensus in the face of an unplanned divergence in implementation behavior.

In a diverse, multi-implementation future, any given bug should hit only a small subset of miners, so the longest chain can win out without intervention. But if a bug hits the majority and limits block size accidentally, sticking with the longest chain will effectively force the block size back into the consensus layer until a fix can be widely deployed.
 