Gold collapsing. Bitcoin UP.

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
If everyone is running BU, the slow nodes are going to be pruned. And then a month later, the *new* slow nodes are going to be pruned (though they used to be faster than the slowest 10%, the slowest 10% is now gone so the node suddenly finds itself in the slowest 10%). And the miners have incentive to do this so long as there is more money to be made by getting more fees.
The general problem with this type of reasoning is that you are treating the behaviour of entities on the network in a static stimulus/response manner. In reality people will change their behaviour as circumstances change. If some problem arises in future, people will adjust their actions, and can update the software they run.

Also, there are countervailing incentives to keep block sizes limited. For example, miners have an incentive to coordinate to limit block size somewhat to try to maximize total transaction fees.

In a decentralized community, people can find ways to coordinate their actions voluntarily, without central controls, if there are benefits to doing so. One likely outcome would be the emergence of "Schelling Points" of agreement. For blocksize, for example, nodes might converge around limits of 8 MB, 20 MB, the BIP101 schedule, or some other limit, depending on the circumstances at the time.
 

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
My first real attempt at explaining Bitcoin Unlimited; let me know if I am missing something or if there are any mistakes in my understanding here:
VeritasSapere said:
Under Bitcoin Unlimited everyone can essentially decide the blocksize for themselves; it takes this power away from centralized development teams. In effect, this would have to be a negotiation between the miners and the economic majority: if more people started running BU nodes, the miners could simply start mining bigger blocks as they see fit and appropriate. Having multiple implementations already expands this choice, as opposed to the kind of central economic planning that placing this power with a single development team would necessitate, but BU goes further, since the blocksize limit would become a product of the emergent consensus of the Bitcoin network as a whole. Ultimately this would reflect the true supply and demand for block space, as opposed to the arbitrary limits set by Core.
 
  • Like
Reactions: majamalu and Inca

jonny1000

Active Member
Nov 11, 2015
380
101
@Zangelbert Bingledack You are correct; today I'd set my max mining size to 1MB and my excessive size to 1MB. So you would not mine on a big-block fork until other people have done so.

I think we may need to tweak the "accept depth" semantics a bit. Let's say that the miner sets it to 4 blocks, and a fork is created where EVERY block is excessive. Today, IIRC, the miner will always be 4 blocks behind a hash-rate majority, and the user will always be 4 blocks behind the chain tip.

But the more I think about it, the more I think that the algorithm should be to look for the FIRST INSTANCE of an excessive block on that chain. After all, the point of accept depth is to prove that there is significant hash power on a chain, as opposed to someone who just got lucky. And if the depth of that first instance is greater than the accept depth, the chain is accepted.

So the difference between this proposal and today is that after 4 "excessive blocks" the miner pops to the chain tip rather than always trailing it.
Yes, this is a great improvement. I think this removes almost 50% of the attack vectors associated with the N-depth idea. Now can we please change the default value of N to 1 or infinity?

Just to clarify: is the rule now that a chain needs 4 confirmations on the first larger block, plus the overall lead, to be considered the most valid chain?
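A rough sketch of how I read the proposed rule (names like Block, excessive_size and accept_depth are just illustrative, not BU's actual code): walk the competing chain, find the first excessive block, and accept the chain once that block is buried at least accept-depth blocks deep.

```python
# Illustrative sketch of the "first excessive block" accept-depth rule
# described above. The names (Block, excessive_size, accept_depth) are
# hypothetical; this is not Bitcoin Unlimited's actual implementation.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Block:
    height: int
    size: int  # bytes

def first_excessive_height(chain: List[Block], excessive_size: int) -> Optional[int]:
    """Height of the first block on this chain larger than the node's
    excessive-size setting, or None if there is no such block."""
    for block in chain:
        if block.size > excessive_size:
            return block.height
    return None

def chain_is_acceptable(chain: List[Block], excessive_size: int, accept_depth: int) -> bool:
    """Accept the chain if it has no excessive block, or if the FIRST
    excessive block is already buried accept_depth blocks deep, i.e.
    significant hash power has built on top of it. (Whether the
    comparison is strict or not is a detail I'm glossing over.)"""
    first = first_excessive_height(chain, excessive_size)
    if first is None:
        return True
    return (chain[-1].height - first) >= accept_depth

# With accept_depth = 4, a chain whose first oversize block has four
# blocks mined on top of it is accepted, so the node "pops" to the tip
# instead of permanently trailing it.
chain = (
    [Block(h, 1_000_000) for h in range(100)]
    + [Block(100, 2_000_000)]
    + [Block(h, 1_000_000) for h in range(101, 105)]
)
print(chain_is_acceptable(chain, excessive_size=1_000_000, accept_depth=4))  # True
```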
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
The idea that some people want to remove Coinbase from the bitcoin.org website is mind-boggling to me.

I think this will make it more obvious to many reasonable people how anti-social these tactics are. Acts like this are so obviously unreasonable that even people who aren't paying much attention to the debate will notice. It is a tactical mistake by those pushing the "small-block" party line.

This is one of the major consequences of censorship. Censorship may harm the target, but I believe it does even more fundamental harm to the side doing the censoring. When they are not exposed to differing viewpoints, they lose all perspective on what constitutes a reasonable difference of opinion. They sink further and further into their entrenched positions and rigid groupthink. Their arguments become weak and unconvincing.

Those of us who are willing to expose our ideas to debate and scrutiny can learn from our challengers. We can learn how other people interpret or misinterpret our words, and adjust them to attempt to achieve clear communication. We can learn the weak points in our own arguments, and shore them up as needed. We might even learn that we are wrong in some ways, and update our position.

In a battle of ideas, the best strategy is to seek truth.
 

Inca

Moderator
Staff member
Aug 28, 2015
517
1,679
VeritasSapere said:
My first real attempt at explaining Bitcoin Unlimited; let me know if I am missing something or if there are any mistakes in my understanding here:
Great effort. Here is another attempt:


Bitcoin Unlimited (BU) returns the power to decide what constitutes the bitcoin protocol to the individual. We believe that parameters important to network scaling, such as the maximum blocksize, should be decided by the network through emergent consensus rather than arbitrarily chosen by a group of developers. We strongly believe that as bitcoin grows, the maximum block size should be based upon a network-driven consensus between miners and nodes. The bitcoin network should not rely on scaling parameters centrally planned by trusted developers where the market and the network can decide instead.
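One way to picture that network-driven consensus, as a toy model rather than anything from the BU codebase: each node operator picks their own largest acceptable block, and the size a miner can safely produce emerges from that distribution rather than from one hard-coded constant.

```python
# Toy model (my own construction, not BU code) of an "emergent" block
# size limit. The per-node limits below are hypothetical.
import math

# Hypothetical per-node limits, in MB, chosen independently by operators.
node_limits_mb = [2, 4, 8, 8, 8, 16, 16, 32, 32, 1000]

def safe_block_size(limits, fraction_needed=0.75):
    """Largest block size still accepted by at least `fraction_needed`
    of the listed nodes: sort the limits from largest to smallest and
    read off the value at the cutoff position."""
    ordered = sorted(limits, reverse=True)
    cutoff = math.ceil(len(ordered) * fraction_needed)
    return ordered[cutoff - 1]

print(safe_block_size(node_limits_mb))  # 8 -> 80% of these nodes accept 8 MB blocks
```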
 

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
Having spent a significant part of my life trying to understand the powers that be, I have come to realize that it is more useful to focus on the things we can know instead of speculating on the unknowable; what is known is often condemnation enough, and often more effective because of the evidence supporting it. Wisdom can be defined as recognizing our own lack of understanding and knowledge; we are often simply not in a position to know certain things.
 

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
You might have noticed that I did like what you said there. I suspect that many of the small-blockists think the same about us. I am confident we are on the right side of history, though; the dream is alive, and we will prevail. :)
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@theZerg
I upvoted it earlier today. Now it's being downvoted! From 8 to 6 to 5... and then to 7 upvotes. (I've refreshed it a few times now.)

Repost if not getting attention. It's a war ;)

EDIT: Thanks for BU, btw ;)
 
  • Like
Reactions: AdrianX and bitsko

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@theZerg

There was just so much action today that a lot of news got lost. I would just re-post your paper in a couple of days. Today, CoinDesk also published this article that they asked me to write. Although it received 256 tweets, there wasn't really a peep about it on Reddit, here, or at BitcoinTalk.
 

Epilido

Member
Sep 2, 2015
59
185
This is true! But it ignores a super important idea: different parts of the network have different throughput. For the rest of this example, I'm going to assume an infinite amount of transactions with an unlimited fee, because that's the assumption under which a fundamental block size limit was proven.

If you are a miner, and you know a block of size X can be processed by 85% of the network, but not 100%, do you mine it? If by 'network', we mean hashrate, then definitely! 85% is high enough that you'll be able to build the longest chain. The miners that can't keep up will be pruned, and then the target for '85% fastest' moves - now a smaller set of miners represents 85% and you can move the block size up, pruning another set of miners.

If by 'network', you mean all nodes... today we already have nodes that can't keep up. So by necessity you are picking a subset of nodes that can keep up, and a subset that cannot. So, now you are deciding who is safe to prune. Raspi's? Probably safe. Single merchants that run their own nodes on desktop hardware? Probably safe. All desktop hardware, but none of the exchanges? Maybe not safe today. But if you've been near desktop levels for a while, and slowly driving off the slower desktops, at some point you might only be driving away 10 nodes to jump up to 'small datacenter' levels.
To make a large block you need transactions to fill the block (unless you are intentionally spamming the network).

The slow node will be pruned only for the time it takes to download and verify the block (similar to a node doing its initial sync). As long as blocks come back down to a size the slow node can handle, it will catch up and return to the network.

If there are so many transactions that the slow node cannot keep up, then either the network needs to be constrained or the slow node will have to upgrade in order to continue supporting the network.
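Back-of-envelope, with made-up numbers: a node stays in sync as long as it can download and verify a block faster, on average, than the next block arrives; once block sizes come back down, it clears the backlog and rejoins.

```python
# Back-of-envelope sketch with purely illustrative numbers for when a
# slow node keeps up with the chain and when it falls behind.
BLOCK_INTERVAL_S = 600  # average time between blocks, in seconds

def processing_time_s(block_size_mb: float, node_throughput_mb_per_s: float) -> float:
    """Seconds this node needs to download and verify one block."""
    return block_size_mb / node_throughput_mb_per_s

def keeps_up(block_size_mb: float, node_throughput_mb_per_s: float) -> bool:
    """The node stays synced if it processes a block faster than the
    next one arrives, on average."""
    return processing_time_s(block_size_mb, node_throughput_mb_per_s) < BLOCK_INTERVAL_S

# A node handling 0.05 MB/s copes with 8 MB blocks (160 s < 600 s) but
# falls steadily behind a sustained run of 50 MB blocks (1000 s > 600 s).
# Once block sizes come back down, the same node works through its
# backlog and returns to the network, as described above.
print(keeps_up(8, 0.05))   # True
print(keeps_up(50, 0.05))  # False
```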

The choice here is: What is the right hardware for the node? How fast should this minimum hardware improve?
I think the answer is that the market, i.e. the network, should decide.

We can only hope for a future with larger blocks and more transactions (that are not spam), because this would mean much wider adoption, and I expect more censorship resistance due to a larger, more varied group of users.

Intentionally manufacturing artificially large blocks has a cost: they may be orphaned or forked away from. If you are using BU, then you track the longest chain and continue to validate it. When the attack is over, you are still on the longest correct chain. It seems that this method allows a large miner to create a large block that temporarily prunes some nodes, but it still allows the pruned nodes to verify the transactions at the fastest rate they are capable of. This is useful in times of stress and attack.
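To put a rough number on that cost (purely illustrative figures, not data from the thread): the expected revenue a miner gives up by padding out a block is the block reward times the extra orphan risk from slower propagation.

```python
# Rough expected-cost arithmetic (illustrative figures only) for why an
# artificially large block is not free: the extra orphan risk from
# slower propagation is paid in expected block reward.
BLOCK_REWARD_BTC = 12.5  # assumed subsidy plus fees, purely illustrative

def expected_loss_btc(orphan_probability: float) -> float:
    """Expected revenue given up by mining a block with this orphan risk."""
    return BLOCK_REWARD_BTC * orphan_probability

# If padding the block out raises orphan risk from, say, 1% to 4%, the
# miner gives up roughly 0.375 BTC in expectation.
print(expected_loss_btc(0.04) - expected_loss_btc(0.01))  # ~0.375
```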