Unlimited really should be Unlimited...

Gavin Andresen

New Member
Dec 9, 2015
19
126
It seems to me Bitcoin Unlimited really should be the "Unlimited" choice.

I've been thinking about the consensus code related to block limits, sigop limits, etc. for quite a while, and for everybody except miners, the rule can be pretty darn simple:

Don't enforce any limits on blocks. As long as blocks have valid proof-of-work, accept them, no matter how big they are or how long they take to validate.

Yes, theoretically a 'rogue miner' could produce a block that takes a long time to broadcast or validate. So what? Worst case is your node spends half an hour validating it, and then re-orgs onto the longer chain with less expensive-to-validate blocks.

For everybody except miners, that looks no different from there being a gap of half an hour between blocks, which happens every once in a while. I wouldn't count it as a denial-of-service attack.

So my suggestion for Unlimited: just remove the block size and sigop counting code from CheckBlock().
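
For reference, the change would amount to deleting checks along these lines from CheckBlock(). This is a sketch based on the Core code of that era; BU's exact lines may differ:

// In CheckBlock() -- the two limit checks suggested for removal.
// (Sketch based on era-appropriate Core code; identifiers may differ in BU.)

// Size limit: reject blocks whose serialized size exceeds MAX_BLOCK_SIZE.
if (block.vtx.empty() || block.vtx.size() > MAX_BLOCK_SIZE ||
    ::GetSerializeSize(block, SER_NETWORK, PROTOCOL_VERSION) > MAX_BLOCK_SIZE)
    return state.DoS(100, error("CheckBlock(): size limits failed"),
                     REJECT_INVALID, "bad-blk-length");

// Sigop limit: reject blocks with too many signature operations.
unsigned int nSigOps = 0;
BOOST_FOREACH (const CTransaction& tx, block.vtx)
    nSigOps += GetLegacySigOpCount(tx);
if (nSigOps > MAX_BLOCK_SIGOPS)
    return state.DoS(100, error("CheckBlock(): out-of-bounds SigOpCount"),
                     REJECT_INVALID, "bad-blk-sigops");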

For transactions, the existing IsStandard tests are sufficient to prevent denial-of-service attacks.

While you're changing code (disclaimer: I haven't looked at current Unlimited code, you might have already done this) change the MAX_PROTOCOL_MESSAGE constant in net.h back to 32 megabytes. Again, worst case is somebody gets you to allocate 32MB of memory-- "meh". If somebody actually wants to DoS attack you, they will just flood your ISP with packets until your ISP decides you're not worth the trouble and null-routes your connection. That's a lot easier and more effective than trying to send you valid 32MB bitcoin protocol messages.
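
For the curious, a one-line sketch of that change in net.h (Core spells the constant MAX_PROTOCOL_MESSAGE_LENGTH; treat the exact name in BU's tree as an assumption):

// net.h: raise the per-message cap back to the historical 32 MiB
static const unsigned int MAX_PROTOCOL_MESSAGE_LENGTH = 32 * 1024 * 1024;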

-----

As for solo miners or pools using Unlimited with these suggested changes... that is fine, as long as the miner sets the block size to something the other miners won't reject, and as long as there are no economically irrational rogue miners producing expensive-to-validate blocks.

I've seen zero evidence that there are any economically irrational rogue miners. However, to sidestep all of the "how many miners can dance on the head of a pin" arguments, if I were King of Unlimited I'd also rip out all of the mining-related code (CreateNewBlock / getblocktemplate / getmininginfo / etc) and explicitly say it is not for miners-- it is for everybody else who doesn't care about limits on blocks.
 

Erdogan

Active Member
Aug 30, 2015
476
855
Yes, unlimited. Let the market sort it out, including bugs and the limits of nature that have to be in there somewhere.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I (think I) like this a lot! When we were planning Bitcoin Unlimited, there were several people who wanted to simply make it "unlimited"; however, we decided on the current scheme (a) to reduce FUD vectors (e.g., "ZOMG no limit!"), and (b) because at that time we were also designing for mining nodes, so we made compromises. If we strip out the mining code (and I think this is definitely something to consider), then I believe this proposal makes a lot of sense.

One question, @Gavin Andresen:

Let's say we do go unlimited. Should we still give node operators the option to set a lower limit?

An idea we had was that if node operators became worried about the size of blocks, then they could tell miners "hey, enough is enough!" by signalling their block size limits in (e.g.) their user-agent strings (which isn't Sybil-attack proof but may still be useful to help gauge the "feel" of the network nodes).
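
As a rough sketch of how that signalling could ride on the existing BIP 14 user-agent comments (the "EB"/"AD" labels and variable names here are illustrative, not a spec):

// Advertise block size limits as user-agent comments (BIP 14 style).
// nExcessiveBlockSizeMB / nAcceptanceDepth are illustrative names.
std::vector<std::string> comments;
comments.push_back(strprintf("EB%d", nExcessiveBlockSizeMB)); // largest block accepted
comments.push_back(strprintf("AD%d", nAcceptanceDepth));      // depth before giving in
// Yields something like "/BitcoinUnlimited:0.12.0(EB16; AD4)/"
std::string subversion =
    FormatSubVersion("BitcoinUnlimited", CLIENT_VERSION, comments);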
 

Peter Tschipper

Active Member
Jan 8, 2016
254
357
MAX_PROTOCOL_MESSAGE is commented out in BU, and this comment sits next to it:

// BU: currently allowing 10*excessiveBlockSize as the max message
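
Presumably the receive-path check then becomes something of this shape (a guess at the form, not BU's verbatim code):

// Scale the message cap with the excessive block size instead of a fixed constant.
if (hdr.nMessageSize > 10 * excessiveBlockSize)
    return error("oversized message %s (%u bytes)",
                 hdr.GetCommand(), hdr.nMessageSize);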
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
Haha. While we take rods to the fishpond, Gavin likes to bring his dynamite.

Block validation is best moved to a separate thread before this limit check is relaxed. In the short term, BUIP016 (synchronising with Classic) is a simple temporary solution.

One idea for sigops/sighash limiting is that emergent consensus can also be applied there (for both mining and non-mining nodes) via settings which are not entered by a human. They are determined per node by the software running an internal benchmark on the CPU/cores it uses for block validation. An incoming block which exceeds the benchmark-determined limits is treated like an excessive block; if too many of these are seen, then the node owner is using under-spec hardware.
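
A minimal sketch of that benchmark idea, assuming a hypothetical helper that times signature verification on the local hardware (none of these names are existing BU code):

// Derive a per-block sigop/sighash budget from what this machine can verify
// within a target validation time. BenchmarkSigVerifiesPerSec() is assumed.
static const int64_t TARGET_VALIDATION_SECS = 30;           // acceptable worst case
uint64_t nSigVerifiesPerSec = BenchmarkSigVerifiesPerSec(); // measured at startup
uint64_t nNodeSigOpLimit = nSigVerifiesPerSec * TARGET_VALIDATION_SECS;
// A block exceeding nNodeSigOpLimit is then treated like an excessive block.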

With respect to removing the mining code: again, this is probably a long-term goal for when all the other major implementations have done away with the antiquated universal block limit, i.e. if Classic were to adopt a BUIP001-like change when the 2MB needs changing. I suspect that many node owners "feel good" that their node could theoretically mine, even if they don't intend to do it.

It would be really nice to see the effect that the public user-agent info has on the wider community before getting too excited about changing the focus of BU.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
there were several people who wanted to simply make it "unlimited"
i was one of them. that's b/c i feel that miners should know that if they produce a valid block, esp if it's a bigger block than has ever been seen on the network before, it should be validated and relayed by nodes, no questions asked. this keeps the blocksize decision making at the transport layer, as we advertise in the Articles. orphaning simply depends on latency from too big a block being created. allowing individual node operators to bring it back up into a consensus layer creates doubt in the minds of miners, esp if they can't get accurate stats on what the avg size block is being accepted on the network.

I've seen zero evidence that there are any economically irrational rogue miners.
i haven't either. f2pool doesn't count.

I'd also rip out all of the mining-related code
very interesting idea. implementation diversity. i like it.

and core devs from other implementations would then be forced to work with miners to supply them code, instead of working against them.
 

Peter Tschipper

Active Member
Jan 8, 2016
254
357
I suspect that many node owners "feel good" that their node could theoretically mine, even if they don't intend to do it.
I agree, there is no need to make the effort to take the mining code out. There is much to do in other areas, and I think it's good to have the mining code in there while the long-term block size issue is still unsettled, as it will be even after the hard fork.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Peter Tschipper, @solex

One advantage of ripping out the mining code would be to figuratively state to the community: "Bitcoin Unlimited is for non-mining nodes! You guys are our customers and we want to listen to your wants and needs and work to keep you happy" (i.e., without having to worry about what miners want).

Also, from a purely marketing perspective, imagine if we kinda-sorta rebranded more specifically as the "implementation for users." We default to 'no limit,' get a nice endorsement from @Gavin Andresen ;), work more closely with the media (e.g., CoinDesk), and do a sort of re-launch when the next version of BU is ready.

I think Classic has done good things, especially with respect to communicating with miners; however, what we DON'T want is for everyone to blindly migrate as a herd from Core to Classic. The ideal situation is for the community to branch out to multiple implementations.

What is the best way--over the short term--for Unlimited to win node share during the migration away from Core?
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@Gavin Andresen

what's changed your thinking on sigop-heavy blocks? up to just now, you *have* been worried about them, along with everyone else.
 

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
i was one of them. that's b/c i feel that miners should know that if they produce a valid block, esp if it's a bigger block than has ever been seen on the network before, it should be validated and relayed by nodes, no questions asked. this keeps the blocksize decision making at the transport layer, as we advertise in the Articles. orphaning simply depends on latency from too big a block being created. allowing individual node operators to bring it back up into a consensus layer creates doubt in the minds of miners, esp if they can't get accurate stats on what the avg size block is being accepted on the network.
The orphaning risk can be made negligible by fast relay technologies (such as extreme thinblocks) that ensure most of the validation work has already been done by nodes by the time the block is actually broadcast. Because of this we cannot trust that orphaning risk alone is enough to drive a fee market; instead we'll have a tragedy of the commons. In other words, we need limits; the question is just who sets them. Will that be the privilege of a select group like devs, or miners? Or will it emerge through negotiation in which the entire ecosystem takes part?

I think the work that we have already done points the way forward. The advertising of individual settings in the user-agent string and the xthinblock protocol both empower full nodes. In particular, with further optimization of the relay protocol, the full nodes distributed around the globe can form a relay network that serves as a replacement for Corallo's network, and so full nodes can become relevant to miners in a way that they currently are not. Beyond this, we would need to find some more "magic ingredients" to give miners an incentive to stay with the full nodes (and not form a separate relay system).

The idea in the opening post does not empower the nodes in any way AFAICS. It effectively negates everything we have done so far, and then anything goes. I agree with @solex on every point; the next logical step is to make the excessiveblock settings reflect the node's actual computing resources. Then we should be thinking about how to make the publishing of settings stronger and more Sybil-resistant.

As for the removal of mining code: I actually haven't checked the GUI lately, but I think the "generate Bitcoins" button has long been gone? The setgenerate function, however, is eminently useful when doing regression testing, where we have to create a test blockchain from scratch. So when we switch to using a continuous integration server for builds, we absolutely need to keep the mining capability in the client. And I kind of like the idea of users being able to use the testnet, mine test coins, and learn how things work in general.
 

Erdogan

Active Member
Aug 30, 2015
476
855
Is there really a tragedy-of-the-commons scenario in block space? Every single actor wants more, but the commons is better off using less?
 

sgbett

Active Member
Aug 25, 2015
216
786
UK
I absolutely agree with Gavin on this.

Non-mining nodes aren't miners; they just help miners by relaying blocks! Miners decide which blocks to build on, and consensus emerges from this.

Anyone running a non-mining node should just do everything they can to propagate blocks and let the miners decide, just like the system is supposed to work.

The 75% / 750KB block thing is purely a courtesy to others and is just being civilised. It in no way replaces consensus as originally defined.

I'm running BU for this reason. Tough call whether to switch to Classic to signal, but I think this still counts as 'support for 2MB'.
 
Reactions: majamalu

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Yes, I do like this idea.

Unlimited would then still be a/the implementation that verifies the hash-power-wise longest chain of valid transactions, just as @Peter R. noted.

The advantage of this approach would be that BU full nodes are simply out of the way regarding blocksize.

Even in the case that full nodes do want some constraints on block size, I think signaling the accepted block size through full-node limits, across a very confusing cloud of full nodes with varying versions, varying clients, and incomplete information, is an especially crude way to do so.
 
Reactions: sgbett

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Most, if not all, of these secondary limits have already been commented out, since what the OP is saying has been our guiding philosophy. That is, we believe that the only **real** consensus rules are those that protect Bitcoin's money function. Compromises for the sake of FUD management have occurred, such as setting the max message size (a transport-layer decision) to 10x the max block size. I am inclined towards no limits at all and prefer solutions that push potentially long-running jobs into separate threads, but we have to weigh the practical realities of the effort to do that against the value added.

WRT block size, remember that it IS unlimited in BU. Nodes simply refuse to relay excessive blocks, and refuse to activate chains containing them, for a period of time.
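
In rough pseudo-C++, that behavior is (names illustrative, not BU's actual identifiers):

// An excessive block is shunned until enough work has been built on top of it.
bool AcceptChainWithBlock(uint64_t nBlockSize, int nBlocksBuiltOnTop)
{
    if (nBlockSize <= nExcessiveBlockSize)
        return true;                          // within this node's limit
    return nBlocksBuiltOnTop >= nAcceptDepth; // excessive: wait for acceptance depth
}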

Without engaging in the question of BU's relationship to mining, I'd like to observe that you can extend the OP's train of thought to miners.

All rules that do not protect Bitcoin's money function should really be conceived of as Schelling points. If/when the economic majority of full nodes run BU, all of these rules become "soft forks" -- that is, temporary agreements that are enforced by the mining majority but can be changed.

Miners can pick those points through the painful development-centric process we have been engaged in this past year, or they can advertise what soft limits they have set in their blocks, and Schelling points can coalesce around what the majority is advertising.

However, ultimately miners should act in their own economic self-interest. In the case of blocks, if the expected validation time will exceed the expected time to mine and validate a sibling block, the miner should do the latter. I showed this in my paper for block size, but my argument is trivially extended to any property of a block that affects validation time.
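
A toy restatement of that decision rule (my formulation here, not code from the paper):

// Skip an expensive block when validating it is expected to cost more time
// than mining and validating a competing sibling block.
bool ShouldMineSiblingInstead(double expectedValidationSecs,
                              double expectedSiblingMineSecs,
                              double expectedSiblingValidateSecs)
{
    return expectedValidationSecs >
           expectedSiblingMineSecs + expectedSiblingValidateSecs;
}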


WRT @YarkoL's comment that fast relay technologies will destroy the fee market: you are incorrect. As transaction issuance rates increase to meet or exceed actual line and RAM capacities, a significant percentage of the transactions in a block will not be available to a validating node. So the time to acquire these transactions will become part of the block validation time, and miners will be forced to mine sibling blocks (forcing an orphan) or 1-txn (empty) blocks during this period, or risk producing an invalid block. And the production of empty blocks will naturally limit network transaction validation throughput, as I showed in my paper.
 

Peter Tschipper

Active Member
Jan 8, 2016
254
357
The setgenerate function, however, is eminently useful when doing regression testing, where we have to create a test blockchain from scratch. So when we switch to using a continuous integration server for builds, we absolutely need to keep the mining capability in the client. And I kind of like the idea of users being able to use the testnet, mine test coins, and learn how things work in general.
I absolutely agree with @YarkoL. Without mining capability, we would not be able to run the essential regression test suite.
 
Reactions: YarkoL

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
@theZerg
nitty nit: I did not really claim that the relay tech would destroy the fee market, just that the orphaning risk might not be large enough to enable it. If I understand your scenario correctly (I confess I have only skimmed your paper; I have to dig in again sometime and see if I'm able to decipher the maths), it is more likely that the nodes will just start dropping low-fee transactions according to their capacity, which is another way to have a fee market (and was in fact mentioned in the OP's paper on fast block propagation).

But in order for this to follow, nodes first have to be relevant to miners. The mempool policies of individual nodes do not matter if miners simply relay transactions to each other for validation through a centralized relay network, and if users begin to submit their transactions directly to miners. Same thing with publishing excessiveblock settings: under what conditions do they matter? jstolfi, among others, has remarked that full nodes are currently dead weight to the miners, so if Unlimited is for nodes, how can we change that situation? I don't believe getting rid of the excessive blocksize settings and just accepting whatever blocks miners choose to create will advance us toward that goal.
 

sgbett

Active Member
Aug 25, 2015
216
786
UK
WRT the dead weight comment: is it fair to say nodes only provide value to their owner in terms of trust, and to the network in a 'cache' sense?

If mining nodes peer with each other, they don't need non-mining nodes at all?

What is the point of non-mining nodes? Decentralised consensus of the mempool!?

Sorry if my ignorance is showing; just lately I've been resisting some of my long-held assumptions to make doubly sure that I'm not just a victim of popular opinion!
 
