Unlimited really should be Unlimited...

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Roy Badami Yes, I was thinking about that afterwards, such as getting the list from bitnodes. It won't include the quiet ones without port-forwarding, but it would still be an effective temporary disruption. It is fair to say that your premise is weakened if block validation is done in a separate thread, which, as mentioned earlier, is a prerequisite for removing sigops limits.
In which case maybe @Gavin Andresen wants to defend his suggestion in the OP some more.
 

Roy Badami

Active Member
Dec 27, 2015
140
203
Well, no need to remove SIGOPS limits right now, just switch them to strict counting as Classic is doing.

But I think we still want to apply a SIGHASH limit to big blocks. At least, Gavin seems to think it's needed for Classic, even with SIGOPS limits.

As I say, if we only apply the SIGHASH limit to big blocks then we remain compatible with both Core and Classic and don't have to track Classic activation.

EDIT: But I certainly would like to get Gavin's input here.
Re SIGOPS limits - we could allow these to scale linearly with block size. So set the sigops limit (strict counted) to the larger of 20,000 and blocksize/100. That's fine.

But it's not just sigops - I think we need to worry about sighash, too. We could set the limit to the larger of 1,300,000,000 and 650*blocksize - but only for blocks larger than 1,000,000 bytes. Leave it unenforced for blocks 1,000,000 bytes or less for now. (We can, and probably should, eventually remove the special case for small blocks once the Classic fork is well established as the new consensus - but no need to worry about this now.)
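For concreteness, a minimal sketch of those two checks (illustrative C++ only; the function names are made up and this is not actual Classic or BU code):

#include <algorithm>
#include <cstdint>

// Sigops limit (strict counted): the larger of 20,000 and blocksize/100.
uint64_t MaxStrictSigops(uint64_t blockSizeBytes) {
    return std::max<uint64_t>(20000, blockSizeBytes / 100);
}

// Sighash limit: the larger of 1,300,000,000 bytes hashed and 650*blocksize,
// enforced only for blocks larger than 1,000,000 bytes.
bool ExceedsSighashLimit(uint64_t blockSizeBytes, uint64_t sighashBytes) {
    if (blockSizeBytes <= 1000000) return false;   // unenforced for small blocks
    uint64_t limit = std::max<uint64_t>(1300000000ULL, 650 * blockSizeBytes);
    return sighashBytes > limit;
}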

How's that?

EDIT: The point is that we want to ensure worst case validation time scales linearly with block size, not quadratically. (But we want to be sure we avoid rejecting valid blocks both now and post Classic fork.)
And in case the rationale for the above isn't clear: the idea is that we achieve all of the following (all without tracking fork activation):

Pre classic fork (status quo), things are as follows:

sigops, we're allowing 20,000 strict counted, which is somewhat more generous than the Core consensus rules. (Legacy sigops counting tends to overcount, so the current consensus limit is stricter.)

sighash, unenforced (since no blocks >1MB) - so again compatible with Core.

Post classic fork


sigops still 20,000 strict counted, so we're now exactly in agreement with Classic

sighash, 1.3GB limit (so in agreement with Classic) except for small blocks (where it's in practice impossible to get much worse than 1.3GB anyway). So we're broadly in agreement with Classic - except potentially slightly more permissive for small blocks. (And after the Classic fork triggers, when we eventually remove the special case for small blocks, we become in exact agreement with Classic.)

Long term post classic (>2MB blocks)

Both sigops and sighash limits scale up linearly as blocksize increases beyond 2MB, helping to ensure validation cost is linear in blocksize post-Classic.
 

go1111111

Active Member
I'm strongly against the OP's suggestion to remove the ability of nodes to set their own block limit.

From the BU main site: "As a foundational principle, we assert that Bitcoin is and should be whatever its users define by the code they run, and the rules they vote for with their hash power." My understanding is that "Unlimited" is supposed to refer to unlimited choice.

If me and a bunch of other Bitcoin users decide that right now, in early 2016, we don't want our full node software to automatically follow blocks above 32 MB (or automatically waste time trying to validate them), we should have some easy way to do this.

Erdogan said:
Is there really a tragedy of the commons scenario in the block space? Every single actor wants more, but the commons is better off with using less?
The tragedy of the commons occurs among miners. Every individual miner wants more block space available for blocks that they mine (because they get all the fees of those extra txns), but not necessarily for blocks that other miners mine (because a higher supply of block space can lower fees for all miners). This is similar to how a cartel can make themselves better off by restricting supply.
 

Aquent

Active Member
Aug 19, 2015
252
667
Personally I think bitcoin unlimited has a strong chance to be used by miners because it gives them the ability to co-ordinate in a decentralised manner without needing any developer to provide orders on what size the blocks should be.

If we all were running bitcoin unlimited then the blocksize would be raised in the same way as the soft limit has been raised for the past 7 years. Namely, the miners increase their acceptance/generation limit, they communicate it to all, and once more than whatever threshold they are comfortable with agrees, they can create the first block at the new size.

In my view this is the only solution which decentralises decision making in regards to the blocksize and avoids in the future what we are facing today. The alternative is a centralised limit which unfortunately biases towards inaction with just a little bit of scaremongering perhaps being sufficient to cripple the whole network.
 

sgbett

Active Member
Aug 25, 2015
216
786
UK
Amen to that. Block size should be in the hands of the people concerned with making blocks.
 

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
The way I see it, miners will first of all benefit from BU by checking out data websites like coindance, bitnodes, and the upcoming replacement for xtnodes, NodeCounter.com. I'm also thinking of rolling out bunodes.com or similar.

Second, they are going to benefit from running full BU nodes when we get the thinblock relay network going. There is plenty of room for innovation here, and we're only getting started.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Roy Badami
I like your thinking on this problem. Certainly the priority is that for blocks >1MB we don't want to risk a theoretical situation where BU rejects a block which Classic accepts. No vote on BUIP016 is imminent yet, but I am willing to modify it to reflect your input beforehand. I am a bit puzzled why Gavin thinks that his change for Classic is not automatically good for BU. These cpu/RAM-heavy transactions are outliers, far from normal business.
The point is that we want to ensure worst case validation time scales linearly with block size, not quadratically.
Absolutely.
I do like the idea though of seeing whether sigops/sighash limiting can become a network-wide emergent property based upon the physical cpu/RAM capacity of all individual nodes. The reason is that even a validation time which scales linearly with block size might exceed technological growth rates.

@go1111111
this isn't about the block size in bytes, it's about the computing resources needed to verify blocks.
 

Erdogan

Active Member
Aug 30, 2015
476
855
Erdogan said:

Is there really a tragedy of the commons scenario in the block space? Every single actor wants more, but the commons is better off with using less?


go1111111 :

I'm strongly against the OP's suggestion to remove the ability of nodes to set their own block limit.

From the BU main site: "As a foundational principle, we assert that Bitcoin is and should be whatever its users define by the code they run, and the rules they vote for with their hash power." My understanding is that "Unlimited" is supposed to refer to unlimited choice.

If me and a bunch of other Bitcoin users decide that right now, in early 2016, we don't want our full node software to automatically follow blocks above 32 MB (or automatically waste time trying to validate them), we should have some easy way to do this.



The tragedy of the commons occurs among miners. Every individual miner wants more block space available for blocks that they mine (because they get all the fees of those extra txns), but not necessarily for blocks that other miners mine (because a higher supply of block space can lower fees for all miners). This is similar to how a cartel can make themselves better off by restricting supply.

Re the red text (the cartel comparison):

But a cartel can't really restrict supply in a free market - ask Henry Ford. They can only do so if they are protected by the power of the state, which miners are not.

Re the blue text (the point about miners wanting more block space):

In our case of mining, the miners act independently according to their self-interest to use as much space as possible, but they are restricted by the difficulty, an essential property of the system that protects the sound-money aspect. Later, when there are only fees, the difficulty is still essential to assure great hashing power. The difficulty protects the commons.

To compare with the standard example of the tragedy of the commons, grazing rights: you can come to the commons with as many cattle as you like, but in the satoshi-grazing-system, each cow-owner can only feed a percentage of his cows. If the commons can support 100 cows, and three farmers come with 30, 70 and 100 cows, difficulty becomes 50% and they can lead only 15, 35 and 50 cows into the field, respectively.
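A quick check of that arithmetic (illustrative only):

#include <cstdio>

int main() {
    const double capacity = 100.0;               // cows the commons can support
    const double herds[] = {30.0, 70.0, 100.0};  // cows each farmer brings
    double total = 0.0;
    for (double h : herds) total += h;           // 200 cows in total
    const double share = capacity / total;       // 0.5, i.e. a "difficulty" of 50%
    for (double h : herds)
        std::printf("brings %.0f cows, grazes %.0f\n", h, h * share);  // 15, 35, 50
    return 0;
}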

I am sure the conditions of transmission time, latency and verification time (the main constraints) at the very limit of capacity can be discussed. But orphaning is not really a problem for the system as a whole; it just makes the average block a bit more costly, and therefore reduces the difficulty. The question is, will the transaction capacity (transactions per second) be lower near the limit, and if so, is it a problem?
Yes, I hate the quoting system of this site, and try to find a better way, like a real entrepreneur.
When I press Quote, I would like the fucking text to be copied.
 

Roy Badami

Active Member
Dec 27, 2015
140
203
solex said:
Absolutely.
I do like the idea though of seeing whether sigops/sighash limiting can become a network-wide emergent property based upon the physical cpu/RAM capacity of all individual nodes. The reason is that even a validation time which scales linearly with block size might exceed technological growth rates.
Agreed, but it's a much harder job than blocksize as an emergent property. I think it will require significant research and testing to come up with a good solution here.

Just thinking aloud, but it may require something like temporarily halting validation of blocks that appear to be too expensive to validate, while still tracking (but not validating) blocks with valid PoW built on top of them, so we can resume validation of that block and subsequent ones if a longer (by some margin) chain appears to have been built on top of it.
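Purely as a sketch of that idea (hypothetical structure and names; nothing like this exists in BU today):

#include <string>

// Track a block whose validation we deferred because it looked too expensive,
// plus how many valid-PoW headers have since been built on top of it.
struct DeferredBlock {
    std::string hash;       // block we have not validated yet
    int headersOnTop = 0;   // headers with valid PoW extending it
};

// Resume full validation only once the unvalidated branch leads our active
// chain by some safety margin, as described above.
bool ShouldResumeValidation(const DeferredBlock& deferred,
                            int blocksOnOurTipSince, int margin) {
    return deferred.headersOnTop >= blocksOnOurTipSince + margin;
}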
And as for this bit, specifically:
solex said:
The reason is that even a validation time which scales linearly with block size might exceed technological growth rates.
True, if/when we go fully unlimited. As long as we completely reject anything ten times larger than our limit, then with linear scaling the impact of this attack is probably manageable.
solex said:
I am a bit puzzled why Gavin thinks that his change for Classic is not automatically good for BU. These cpu/RAM-heavy transactions are outliers, far from normal business.
I wonder if he discounted the possibility that an attacker would waste hash power deliberately generating an excessively large, expensive-to-validate block in order to attack BU nodes. Given the relatively low theoretical cost of this attack and the severe effect it would have on the P2P network if BU ever became the dominant non-mining node implementation, I think this attack is worthy of consideration.
 

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
Roy Badami said:
Agreed, but it's a much harder job than blocksize as an emergent property. I think it will require significant research and testing to come up with a good solution here.
A straightforward way is to simply test how your machine handles sets of transactions with different numbers of sigops, which can be simulated on a private testnet. The same goes for blocks. The user can then have various options for how to handle cpu/memory/bandwidth tradeoffs. All this benchmarking can be done programmatically and can be integrated with the wallet, or shipped as separate software that aids in choosing intelligent settings. I think the data gained by testing can be condensed into a blocksize setting that indicates the ballpark we are occupying...
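A minimal sketch of that kind of local benchmarking (hypothetical; a real version would replay actual transactions on a private testnet rather than the dummy workload used here):

#include <chrono>
#include <cstdio>
#include <initializer_list>

// Dummy stand-in for validating one signature operation.
static void FakeSigopCheck() {
    volatile double x = 1.0;
    for (int i = 0; i < 20000; ++i) x = x * 1.0000001;
}

int main() {
    using clock = std::chrono::steady_clock;
    for (int sigops : {20000, 40000, 80000}) {
        const auto start = clock::now();
        for (int i = 0; i < sigops; ++i) FakeSigopCheck();
        const auto ms = std::chrono::duration_cast<std::chrono::milliseconds>(
                            clock::now() - start).count();
        std::printf("%d sigops: %lld ms\n", sigops, static_cast<long long>(ms));
    }
    // A wallet could map these timings onto suggested block size settings.
    return 0;
}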
 

Roy Badami

Active Member
Dec 27, 2015
140
203
The complexity comes from the fact that neither strict sigops counting nor sighash counting can be done without actually executing the scripts. It's not a problem for Classic, because the worst case is bounded, since these are hard limits in Classic. The worst case of how much work it takes you to reject an invalid block is no worse than the worst case of how much work it takes you to process a valid block - and since the attack is expensive (it requires valid PoW, so it requires the miner to be willing to forgo the block reward), that's good enough.

I maintain that designing a system without hard limits, in order to allow the limits to arise as an emergent property, is hard.
Although if there were to be a consensus around making the 100kB transaction limit a consensus rule rather than just a standardness rule, this problem would largely, if not entirely, go away. This would be a fairly trivial soft fork, so long term it might very well happen.
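The check itself would be tiny; something along these lines (illustrative only, not actual Core or Classic code):

#include <cstddef>
#include <vector>

// Hypothetical consensus version of the existing 100 kB standardness limit.
static const std::size_t MAX_CONSENSUS_TX_SIZE = 100000;

bool TxSizeWithinConsensus(const std::vector<unsigned char>& serializedTx) {
    return serializedTx.size() <= MAX_CONSENSUS_TX_SIZE;
}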
 

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
Roy Badami said:
I maintain that designing a system without hard limits, in order to allow the limits to arise as an emergent property, is hard.
Agreed. That's why I think sigop limits cannot be communicated in the same way as block sizes; instead the user could have an implicit limit associated with block size, i.e. a worst-case measure of script complexity.
 

Erdogan

Active Member
Aug 30, 2015
476
855
The point of hard money is that the market will expand the quantity of money through credit expansion in times of need, and the market will regulate the ratios between consumption, investment and saving (holding money) through the money's value and the interest rate, based on the individual preferences of all actors.

No central planner can better decide what the aggregate investment should be, and what the aggregate saving (in money) should be. Whatever they do, it leads to systemic instability when the manipulations and their effects are revealed to the market actors - typically when they have consumed their capital and their savings are lost, when misallocation of capital has reduced the overall productivity, which is the source of all prosperity.

So we don't need softness in the money. The softness of fiat money is exactly what we do not need; we need to get rid of the softness, and we will.
(And they don't really try to optimize for the general good of humanity, they just want your resources for nothing.)
 

Gavin Andresen

New Member
Dec 9, 2015
19
126
go1111111 said:
If me and a bunch of other Bitcoin users decide that right now, in early 2016, we don't want our full node software to automatically follow blocks above 32 MB (or automatically waste time trying to validate them), we should have some easy way to do this.
If you want users to have a choice, then don't specify the limits as knobs that they won't understand. Nobody but geeks will have any idea what sigops or sighash mean. Most won't even know whether or not eleven megabytes is a lot or a little.

Give them a single knob that uses a unit they WILL understand. Something like: "Reject blocks that take more than eleven seconds to validate."

It should be pretty easy to change the ValidationCostTracker I wrote for Classic/Xt to keep track of real-time or CPU time instead of sigops/sighashes, and to reject transactions or blocks that take too long.
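A sketch of what that single time-based knob could look like (hypothetical names; this is not the actual ValidationCostTracker interface):

#include <chrono>

// Wall-clock validation budget: the user sets one knob, "give up on a block
// after N seconds of validation", and the validator polls this periodically.
class ValidationTimeBudget {
public:
    explicit ValidationTimeBudget(std::chrono::seconds maxTime)
        : deadline_(std::chrono::steady_clock::now() + maxTime) {}

    // Call from the validation loop, e.g. once per input checked.
    bool Exceeded() const {
        return std::chrono::steady_clock::now() > deadline_;
    }

private:
    std::chrono::steady_clock::time_point deadline_;
};

// Usage sketch:
//   ValidationTimeBudget budget(std::chrono::seconds(11));
//   ... inside validation: if (budget.Exceeded()) { /* treat block as excessive */ }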
 

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
Gavin Andresen said:
Give them a single knob that uses a unit they WILL understand. Something like: "Reject blocks that take more than eleven seconds to validate."

It should be pretty easy to change the ValidationCostTracker I wrote for Classic/Xt to keep track of real-time or CPU time instead of sigops/sighashes, and to reject transactions or blocks that take too long.
The excessive block settings are to be communicated to miners, so time is not a good measure.

However, when the user is deciding on the block size that she is willing to accept, it is good if she can get a clear idea of how long a block of a given size and structure (sigops) takes to validate. ValidationCostTracker is very useful for this purpose, so thank you for that. I also very much liked the clean and clear regtest code in Classic, and of course the "guided tour" blog post.
 

go1111111

Active Member
Gavin Andresen said:
If you want users to have a choice, then don't specify the limits as knobs that they won't understand. Nobody but geeks will have any idea what sigops or sighash mean. Most won't even know whether or not eleven megabytes is a lot or a little.
I assume most people running full nodes will be pretty geeky, but I agree sigops and sighash are too complex. I do think block size is understandable. The user can be given info like "the largest block in the last month was X MB; the average block size in the last month was Y MB; picking this block size would require Z GB of hard drive space per year if all blocks were full", etc.
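The "Z GB per year" figure is simple arithmetic; roughly (illustrative only):

#include <cstdio>
#include <initializer_list>

int main() {
    const double blocksPerYear = 365.25 * 24 * 6;   // one block every 10 minutes
    for (double mb : {1.0, 2.0, 4.0, 8.0}) {
        std::printf("%.0f MB blocks, always full: ~%.0f GB per year\n",
                    mb, mb * blocksPerYear / 1000.0);
    }
    // e.g. 2 MB blocks come to roughly 105 GB per year if every block is full
    return 0;
}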

Give them a single knob that uses a unit they WILL understand. Something like: "Reject blocks that take more than eleven seconds to validate."
I think it's important to help users identify Schelling points. The time blocks take to validate on a particular user's machine can't be a Schelling point. I imagine that if Unlimited becomes the default client in the future, we will still have groups similar to Classic/XT/Core which advocate for particular Schelling points rather than particular software implementations. In that case, Unlimited could list popular Schelling points as determined by debates in the community; for instance, right now it might be 1 MB, 2 MB, 4 MB, 8 MB, or No Limit. 99% of users would probably be happy picking one of these. As described above, the UI could provide the user with info on how costly each of these would be on their machine, along with info on their popularity (maybe one is endorsed by all the leading exchanges and a large group of merchants). There would be advanced options for the remaining 1%.
 

Erdogan

Active Member
Aug 30, 2015
476
855
Erdogan said NOT:
If me and a bunch of other Bitcoin users decide that right now, in early 2016, we don't want our full node software to automatically follow blocks above 32 MB (or automatically waste time trying to validate them), we should have some easy way to do this.

If you want users to have a choice, then don't specify the limits as knobs that they won't understand. Nobody but geeks will have any idea what sigops or sighash mean. Most won't even know whether or not eleven megabytes is a lot or a little.

Give them a single knob that uses a unit they WILL understand. Something like: "Reject blocks that take more than eleven seconds to validate."

It should be pretty easy to change the ValidationCostTracker I wrote for Classic/Xt to keep track of real-time or CPU time instead of sigops/sighashes, and to reject transactions or blocks that take too long.
All right, but I didn't say it. It is the fucked up quoting mechanism of this site.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
I like Gavin's idea here. We spawn this in a thread and abort it after N sec. It naturally grows as the underlying hardware/software tech improves.
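A rough sketch of that spawn-and-abort pattern (hypothetical; since a thread can't be killed safely mid-validation, the worker polls an abort flag instead):

#include <atomic>
#include <chrono>
#include <future>

std::atomic<bool> abortValidation{false};

// Stand-in for real block validation; checks the abort flag as it goes.
bool ValidateBlockWork() {
    for (int tx = 0; tx < 1000000; ++tx) {
        if (abortValidation.load()) return false;
        // ... validate the next transaction ...
    }
    return true;
}

// Run validation in a separate thread and give up after the given budget.
bool ValidateWithTimeout(std::chrono::seconds budget) {
    abortValidation.store(false);
    auto result = std::async(std::launch::async, ValidateBlockWork);
    if (result.wait_for(budget) == std::future_status::timeout) {
        abortValidation.store(true);   // ask the worker to stop
        result.wait();                 // join the worker
        return false;                  // treat the block as excessive
    }
    return result.get();
}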
 

Roy Badami

Active Member
Dec 27, 2015
140
203
I'm not sure it's really that useful. Certainly it doesn't help the problem that the user has no idea what to set it to.

If you set it lower than the miners do, sooner or later a block will make it into the blockchain that you won't see, and you'll stop processing the blockchain at that point.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
hmm... I may be wrong but I assumed he meant "reject" in the Bitcoin Unlimited "sense" -- ignore until the chain > acceptDepth...
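For reference, a sketch of that "ignore until the chain is deep enough" behaviour (hypothetical code, not BU's actual excessive-block implementation):

// An "excessive" block is not rejected outright; it is merely ignored until
// enough blocks have been mined on top of it (the accept depth).
struct BlockInfo {
    unsigned long long sizeBytes;   // serialized block size
    int depthOnTop;                 // blocks built on top of this one so far
};

bool AcceptBlock(const BlockInfo& block,
                 unsigned long long excessiveBlockSize,  // user's size setting
                 int acceptDepth) {                      // user's depth setting
    if (block.sizeBytes <= excessiveBlockSize) return true;  // not excessive
    return block.depthOnTop > acceptDepth;                   // chain > acceptDepth
}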