Gold collapsing. Bitcoin UP.

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
This is a very interesting new angle, as far as I know. I wonder whether it figures in, over and above the orphan risk factor. Let's see...
I believe this "limit based on max fee revenue" is going to keep blocks small for years to come. only when TX demand is clearly above the size which leads to orphan risk will orphan risk become the limiting factor. because a small % of the hashing power is needed to block, block size increases

... [maybe with a] cartel agreement [it would work?] ...
there would be temptation to build bigger blocks to collect those last 10-20% of paying fees, if we go with your rule of thumb.
... tragedy of the commons... other miners can come in and sweep the fees that they worked hard to keep high.
You're forgetting an important detail. There will always be a limit, and that limit must be agreed upon by a very high % of the network; that's how BU works, it's not unlimited... If BU were to take over the network today, block size would NOT jump to 20MB, for various reasons; the network will simply not agree to let the block size jump that fast (I think this is a fair assumption).

So my theory probably holds up so long as some SMALL % of the network agrees to limit block size such that they maximize profit.

That's not to say there will be a small % of the network blocking block size at 2MB forever; in theory it's in their best interest to make blocks "fit just right" based on demand.
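To illustrate the kind of calculation that implies, here is a toy sketch in Python (the demand curve and every number in it are made up by me, purely for illustration): if the fee rate users will pay falls as more block space is on offer, total fee revenue peaks at an interior block size, and that is the size a profit-minded miner or cartel would want the effective limit to sit at.

```python
# Toy model (made-up demand curve, not real mempool data): total fee revenue
# F(size) = size * fee_rate(size) has an interior maximum when fee_rate falls
# as more block space is offered, so stopping the limit there maximizes fees.

def fee_rate(size_mb):
    """Hypothetical demand curve: sat/byte users pay when size_mb MB of space is offered."""
    return max(0.0, 100.0 - 40.0 * size_mb)

def fee_revenue(size_mb):
    """Total fees (in satoshis) collected in a block of size_mb megabytes."""
    return size_mb * 1_000_000 * fee_rate(size_mb)

sizes = [s / 10 for s in range(1, 31)]               # 0.1 MB .. 3.0 MB
best = max(sizes, key=fee_revenue)
print(f"revenue-maximizing size ~ {best:.1f} MB, "
      f"fees ~ {fee_revenue(best) / 1e8:.2f} BTC (toy numbers)")
```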

[your idea might not work out for real, but that's OK because orphan risk certainly will, and it is a good limiting factor]
(Sorry, I totally left out your reasoning.)
I agree with you that orphan risk is a good enough limiting factor, but I'm fairly confident that it will NOT be the real limiting factor for a long, long time.
Besides the idea that some miners will see value in blocking unnecessary block size increases in order to make sure there is always some fee pressure... I think everyone (except Core fanboys?) can agree that the idea that:
"miners (and mining pools) are mindless robots who will literally kill the ecosystem which feeds them if you let them"
is very wrong...
 
Last edited:

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@adamstgbit

I see your point now. In this way, a shifting but definite BU-set rightsized blocksize cap may be even better than an ultra-high cap or no cap. It would at least help shut down the "big miners can terrorize small miners" attack under uncapped blocksize, which is still being mentioned even though the big miner's advantage grows quite slowly with orphan risk (argument as I recall it from @Peter R). Perhaps BU with a rightsized cap is actually necessary to keep that advantage from getting high enough for the "big terrorize small" attack.

And as you say, maybe rightsizing will be miners' first priority because it could be what maximizes fee revenue - otherwise from now until we start to approach the actual current network capacity (suppose 67MB just for kicks), fees will be essentially zero. That's a long time to wait without fees being tied much to network usage, a lot of extra UTXO to store, etc. This could be a big deal if you're right, because it would allay many small blocker fears as well, including many points that Greg raised. Need to consider it more before I'm certain.

One thing I wonder: fees spiked up as we hit 1MB. Will it be necessary to actually hit the cap (Gmax-style full blocks) before fees go up at all? Then we have the backlog problem due to blocktime variance, etc. I'm feeling led toward the Greg/Mark flexcap idea, which is disconcerting. I might be missing something due to sleep deprivation.
 
Last edited:
  • Like
Reactions: xhiggy

albin

Active Member
Nov 8, 2015
931
4,008
@Zangelbert Bingledack

I'm not convinced that a Flexcap-type proposal would actually redress any of these concerns anyway. The Maxwell formulation specifically (which does not require a tail emission) is exactly equivalent to introducing an artificial second source of orphan risk into the system, and I don't see how that would have any different dynamics at all (other than MC=MR being set by the arbitrary choice of a function marrying difficulty penalty to block size, as opposed to the natural market condition where the MC=MR equilibrium can shift over time as technological constraints change). I don't see how whatever intended effects it might have on UTXO set growth and other concerns are in any way unique versus simply deciding to force blocks to be small.
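Roughly, in symbols (my own shorthand, just to sketch the point): a miner choosing block size $Q$ maximizes

$$\pi(Q) = F(Q) - C_{\text{orphan}}(Q) - C_{\text{penalty}}(Q),$$

where $F$ is fee revenue, $C_{\text{orphan}}$ the expected loss from natural orphaning, and $C_{\text{penalty}}$ the expected subsidy given up to the flexcap difficulty penalty. The optimum sits at

$$F'(Q) = C'_{\text{orphan}}(Q) + C'_{\text{penalty}}(Q),$$

i.e. MR = MC, with the penalty term behaving exactly like a second, designer-chosen orphan cost.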

Also, I would argue that it's just as plausible that there's an opposing effect, whereby in attempting to arbitrarily get miners to pay for the perceived negative externality costs they impose on nodes by creating UTXO set entries, an incentive incompatibility is created whereby miners are simultaneously penalized for creating the positive externality of system utility (which also, incidentally, happens to come from mining transactions that create UTXO set entries!). I don't think a human being can actually make these economic calculations for an entire system when literally the same economic activity produces both positive and negative externalities simultaneously for the same entity; the obvious market mechanism for weighing this cost vs. utility is simply participation vs. non-participation at the level of the individual actor.
 
Last edited:

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
I think the BU solution to "how do we make block size dynamic / free-market driven" is very well thought out.

It's not overly simplistic, like just saying "the block size limit is whatever >50% of the network wants it to be."
Ultimately it does (sorta) boil down to that... but the way miners can set / signal:
1) the block size they will produce
2) the max block size they will accept
3) the idea that they will ultimately accept larger blocks should the network keep mining on top of a block which they deem too big.
this really is the right way to do it...

Because doing it this way makes another aspect of scaling block size a market-driven phenomenon:
what % of the network needs to signal for 1.5MB blocks before miners feel it's safe to start producing 1.5MB blocks? 51%? 75%? 95%?
With BU this too is left to the free market. In theory it's possible for the network to attempt to force 1.5MB blocks once (50% + 1) of miners are signaling they will accept 1.5MB blocks. But this is not automatic; miners must CHOOSE to start producing larger blocks. What % of the network must signal 1.5MB acceptance before miners actually start to increase the block size they will produce?
Who knows!
If I were a miner, I wouldn't attempt a 1.5MB block unless >73.58689% was signaling acceptance of such a block size.

BU is simply brilliant, because it leverages the free market (an awesome force) properly.
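A rough sketch of that produce/accept logic in Python (my own simplification, not BU's actual code; the names and numbers are made up):

```python
# The three knobs described above:
#  1) max_generate_size - the largest block this miner will produce itself
#  2) excessive_size    - the largest block it will accept right away
#  3) accept_depth      - how many blocks must be mined on top of an
#                         "excessive" block before it gets accepted anyway

class BUPolicy:
    def __init__(self, max_generate_size, excessive_size, accept_depth):
        self.max_generate_size = max_generate_size
        self.excessive_size = excessive_size
        self.accept_depth = accept_depth

    def will_produce(self, block_size):
        """Blocks we mine ourselves never exceed our own generate limit."""
        return block_size <= self.max_generate_size

    def will_accept(self, block_size, blocks_built_on_top):
        """Accept a non-excessive block immediately; accept an excessive one
        only once the rest of the network has buried it deeply enough that
        holding out just means mining on a losing chain."""
        if block_size <= self.excessive_size:
            return True
        return blocks_built_on_top >= self.accept_depth


# Example: produce up to 1 MB, tolerate up to 1.5 MB, give in after 4 blocks on top.
policy = BUPolicy(max_generate_size=1_000_000,
                  excessive_size=1_500_000,
                  accept_depth=4)
print(policy.will_accept(2_000_000, blocks_built_on_top=1))  # False: still resisting
print(policy.will_accept(2_000_000, blocks_built_on_top=4))  # True: the network moved on
```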
 
Last edited:

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@albin

"It's a net-negative externality on nodes...because off-chain scaling is better."

It's fascinating that almost all Core's arguments, including the security and externality concerns, seem to boil down to a presumption that off-chain scaling is better. It's a giant loop of circular reasoning.

It seems to me that miners are the only party that is incentivized to do the market research to figure out what the best blocksize is to please the most nodes, the most economically important nodes, the holders, the investors, the exchanges, etc. Miners' job is to please the nodes, and that includes not shifting too many negative externalities onto them vs. positive ones. I thus find it really odd when Greg and his followers bring up the "nodes will be burdened with all these externalities" issue. Why would miners want to piss nodes off? They tread on thin ice doing that.

The only possible objection I see here is tragedy of the commons, where one miner thinks short-term. But even if that happens, the other miners can band together to orphan such blocks.

Core doesn't seem to agree with the BU approach, but I think the long view is clarifying here: how can it be possible for Bitcoin to operate at a trillion-dollar market cap when everyone is still being spoonfed their consensus settings by a dev team? Miners and business nodes need to grow up into their role as stewards of the network, and they are indeed highly incentivized to do so - especially in this kind of controversy.
 
Last edited:

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Greg's a communist.
He believes the free market needs his forceful hand to make arbitrary rules (1MB blocks, discounted signature data, etc.) in order to create stability.

He places exactly zero faith in miners' ability to recognize the value of keeping blocks small, even while miners are expressing their will to keep blocks as small as can be by voting for SegWit.

His comments are so inexplicably off base that it's very hard to believe he is being genuine. He knows full well miners listen to his ideas and try to carefully weigh the pros and cons themselves, but he resorts to saying BS like "10GB" and "imaginary forces" on forums because he believes people will buy this BS.

Do you honestly believe he uses the same arguments with the miners themselves? "You guys would ramp up block size to 10GB by centralizing to one pool and/or agreeing to stop validating if we let you! Admit it!" LOL, I don't think so...
 

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
I think Flexcap is going to be the biggest danger here, because they're basically going to try to sell it to us as the permanent solution that allows the market to drive blocksize with no more human intervention, in order to obscure the fact that they will just be indirectly setting prices at the margin.
Yes, there already is a flexcap implementation on the market. It's called Bitcoin Unlimited.
Users/miners are free to flexibly define the size of the cap, which will sit far above the actual sizes of the blocks. The flexcap 'solution' of the Politbüro is a full-block solution to force the txs onto the hubs, or the hub of all hubs.
 

molecular

Active Member
Aug 31, 2015
372
1,391
guys, is the "orphan risk limits blocksize" argument really valid? With xthinblocks and other tech, it seems to me orphan risk is largely mitigated.
[doublepost=1480539892][/doublepost]mempool is crowded again with what I would call "legitimate transactions" (ones paying a reasonable or high fee) and I thought of a car analogy (lol):

It's like the engine is revving at it's maximum of 8000 rpm and it's time to switch to 2nd gear. Core is just telling us to push on the accelerator even more, which naturally wont solve the problem of unsatisfied urge to accelerate.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@molecular: Yes, fair point. More efficient block transmission reduces the orphaning risk and makes bigger blocks possible (but doesn't reduce it to zero).

In fact, when the transactions flying around don't reach the nodes in time, the transmission efficiency drops down to the old case.

So in any case miners have to send something that the rest of the network can digest.
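A back-of-the-envelope illustration of that fallback (all sizes below are my own assumptions, purely for illustration): transactions the peer already has can be referenced by a short hash, while unknown ones still have to be shipped in full, so a block stuffed with unrelayed transactions transmits at roughly the old full-block cost.

```python
# Rough illustration of the thin-block fallback: known txs cost only a short
# reference, unknown txs must still be sent in full. (Sizes are assumptions.)

SHORT_HASH_BYTES = 8      # compact reference for a tx already in the peer's mempool
AVG_TX_BYTES     = 500    # assumed average full transaction size

def bytes_to_send(n_txs, fraction_already_relayed):
    known = int(n_txs * fraction_already_relayed)
    unknown = n_txs - known
    return known * SHORT_HASH_BYTES + unknown * AVG_TX_BYTES

n = 2000  # roughly 1 MB worth of transactions
for frac in (1.0, 0.9, 0.0):
    kb = bytes_to_send(n, frac) / 1000
    print(f"{frac:.0%} of txs already relayed -> ~{kb:,.0f} kB on the wire")
```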
 
  • Like
Reactions: majamalu

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
guys, is the "orphan risk limits blocksize" argument really valid? With xthinblocks and other tech, it seems to me orphan risk is largely mitigated.
[doublepost=1480539892][/doublepost]mempool is crowded again with what I would call "legitimate transactions" (ones paying a reasonable or high fee) and I thought of a car analogy (lol):

It's like the engine is revving at it's maximum of 8000 rpm and it's time to switch to 2nd gear. Core is just telling us to push on the accelerator even more, which naturally wont solve the problem of unsatisfied urge to accelerate.
is the "orphan risk limits blocksize" argument really valid?
I think so.
Besides downloading, miners must verify the block and its TXs. This takes time in CPU cycles; even at 1MB we saw a block with signatures so complex it took minutes to validate.

Of course, once the quadratic sig hashing problem is made linear, it will become much faster to validate, but not instant...

Downloading + validating time will always increase the bigger the block gets, no matter what new optimizations reduce that time and thereby push orphan risk as a limiting factor higher up; there will always be orphan risk.

I think Bitcoin node implementations need to be written in such a way as to increase orphan risk for slow-to-validate blocks. I think parallel validation would do just that, and that's a good thing.
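For a sense of why the risk never goes away, here is a quick sketch (the per-MB costs are my own assumptions; the 17 s/MB validation figure is the one quoted later in the thread from theZerg's paper, and blocks are assumed to arrive as a Poisson process with a 600 s mean):

```python
# If a block takes t seconds to reach and be validated by the other miners,
# the chance that a competing block is found in that window is roughly
# 1 - exp(-t / 600) under Poisson block arrival. Per-MB costs are assumptions.

import math

DOWNLOAD_SEC_PER_MB = 1.0    # assumed effective transfer time per MB
VALIDATE_SEC_PER_MB = 17.0   # validation rate quoted elsewhere in this thread

def orphan_probability(block_mb):
    delay = block_mb * (DOWNLOAD_SEC_PER_MB + VALIDATE_SEC_PER_MB)
    return 1.0 - math.exp(-delay / 600.0)

for mb in (1, 8, 32):
    print(f"{mb:>3} MB block -> ~{orphan_probability(mb):.1%} orphan risk")
```

Optimizations shrink the per-MB constants and push the curve out, but any nonzero delay keeps the risk above zero.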
 
  • Like
Reactions: majamalu

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
@molecular

Yeah, I now think that it may be more the Keynesian beauty contest that would cause the orphaning if a miner produced blocks that were too big (assuming truly no hard-coded cap, which isn't how BU is anyway). That is, other miners would orphan an overlarge or abusive block even if they could relay it in time, because they know the ecosystem doesn't like such blocks. The small blockers are always telling us that miners mining "large" blocks will just be ignored by the nodes and ecosystem as "not Bitcoin," so why can't this also work in general to prevent every kind of miner abuse?

The Keynesian beauty contest aspect really seems to amplify this as well, because as long as there are miners who are prudent they will resist such blocks even if other miners are tempted to push it.

Maybe my brain is fried from arguing with Greg, but it seems to me that if miners are employees working on a conditional basis for the ecosystem, where the ecosystem determines the rules and the miners simply order the transactions, they have very little power to pull off any of the miner attacks the small blockers fear - including burdening full nodes so much that it would affect decentralization in any material way.
@adamstgbit

Greg says they will just not validate.
 
Last edited:

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
is the "orphan risk limits blocksize" argument really valid?
I think so.
besides downloading, miners must verify the block and its TXs. this take time in CPU cycles, even at 1MB we saw a block with a signature so complex it tooks minutes to validate.

ofc once the quadratic sig problem is made linnear, it will become much faster to validate, but not instent..

downloading + validating time will always incress the bigger the block gets no matter what new optimization reduce that time and there by push orphen risk as a limiting factor higher up, but there will always be orphen risk.

i think bitcoin node implementations need to be written in such a way to incress orphen risk. i think parallel validation would do just that. and thats a good thing.
"spy mining" (or the mining of empty blocks right after big ones) also limits the average block size. And its really the average that matters...

You can literally see it happening on the network in the graphs that I put in my paper: https://www.bitcoinunlimited.info/resources/1txn.pdf

Also, note that Xthin blocks dramatically reduce the propagation time for blocks whose transactions have already been propagated. So all kinds of attacks where a miner fills a block with unrelayed "spam" transactions are massively discouraged.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
@theZerg

"validation rates of 17 seconds/MB" wow that slow...

There is a way of actually cutting the validation time of a block down to less than a second:
don't validate the block! Sounds crazy, but it's actually fine...
Basically the idea would be to validate TXs as they come in, and once a new block is found, just make sure the TXs in the block are TXs you've already validated, and only validate the ones you haven't seen before. Somehow I think this would tie in nicely with thin blocks... not sure if this is actually viable or not? Just a thought...
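Something along these lines, maybe (a sketch of the idea only, not an actual node implementation; the helper names are made up):

```python
# Sketch of the idea above: verify each transaction once, when it arrives over
# the network, and remember its txid. When a block shows up, only the txids we
# have never seen need full verification; the rest is a set-membership check.

validated_txids = set()          # txids whose scripts/signatures we already checked

def verify_signatures(tx):
    """Placeholder for real script and signature verification."""
    return True

def on_transaction_received(txid, tx):
    if txid not in validated_txids and verify_signatures(tx):
        validated_txids.add(txid)

def on_block_received(block_txs):
    """block_txs: list of (txid, tx) pairs from the new block."""
    for txid, tx in block_txs:
        if txid in validated_txids:
            continue                      # already checked when it was relayed
        if not verify_signatures(tx):
            return False                  # invalid tx -> reject the block
        validated_txids.add(txid)
    return True                           # everything checked, block accepted
```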

Question:
So the most costly part of block propagation is not download time, it's validation time.
The most costly part of block validation is checking the signatures.
And Core wants to discount the signature data by 75%?!?!

The burden on miners will grow and the fees will not compensate.
The longer it takes to validate a block, the more probable it is that the next block will be empty.
SegWit will lead to more empty blocks.


Didn't read all of your paper, maybe another time, but I liked this:
32MB blocks would require approximately 10 minutes of validation (1-txn block generation) time. Since it subsequently takes (on average) 10 minutes to find a new block, we end up with a network with a 50% duty cycle (i.e. half the blocks are 1-txn half are carrying useful data), and ~30KB/sec of transaction throughput.
It's going to be interesting to see what a revised analysis will look like once signature validation is made linear instead of quadratic... I would expect to see only a small benefit, since probably most TXs included in blocks are already simple and don't really suffer from the quadratic-ness.
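The quoted figures check out with simple back-of-envelope arithmetic (just redoing the numbers from the 17 s/MB rate; nothing here beyond the paper's own assumptions):

```python
# Back-of-envelope reproduction of the quoted figures from the ~17 s/MB rate:
# a 32 MB block takes ~9 minutes to validate, about one full block interval,
# so roughly every other block ends up empty (50% duty cycle) and useful
# throughput is ~32 MB per two block intervals.

VALIDATION_SEC_PER_MB = 17
BLOCK_INTERVAL_SEC = 600
block_mb = 32

validation_sec = block_mb * VALIDATION_SEC_PER_MB
throughput_kb_s = block_mb * 1000 / (2 * BLOCK_INTERVAL_SEC)  # one full + one empty block

print(f"validation time: ~{validation_sec / 60:.0f} minutes")   # ~9 min
print(f"useful throughput: ~{throughput_kb_s:.0f} kB/s")        # ~27 kB/s, close to the quoted ~30
```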

Also, your data is getting a bit old; I wonder if this 17 sec/MB validation time is with or without libsecp256k1.


Also,
I think there are ways for a miner to create full blocks even though he hasn't fully validated the previous one; my feeling is they got lazy and wanted to keep it simple, so in that special case they mine an empty block.
 
Last edited: