Signaling fork activation policy in block headers

priestc

Member
Nov 19, 2015
94
191
BU has created a system for mining nodes to signal their settings for "excessive blocksize" and "acceptance depth" to the rest of the network, but why not also an activation threshold?

For instance, if a miner feels that 750 out of the last 1000 blocks is a safe threshold for activation, their block header would say:

EB:8AD:6:AT:750/1000

The "AT" stands for "activation threshold", and it governs how the maximum blocksize is determined. In this example, whatever blocksize 750 of the last 1000 blocks signal as acceptable via their "excessive blocksize" will be the maximum blocksize that node uses. Right now 1MB is probably the size that would be valid on 750 out of the last 1000 blocks, so this is basically a way of saying "I think 8MB should be the max blocksize, but my node is actually going to enforce 1MB for now (or whatever it ends up being), because that's what 75% of blocks will accept." Each miner can set their own preference, whether it be 10% or 95% or anything else.
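As a minimal sketch of how a node might apply this AT rule, assuming the EB values signaled in recent block headers have already been parsed into a list (the function name and defaults here are illustrative, not actual BU code):

```python
def effective_limit(eb_signals, at_num=750, at_den=1000):
    """Sketch of the proposed AT rule: the effective max blocksize is the
    largest size that at least at_num of the last at_den blocks' EB signals
    would accept -- i.e., the at_num-th largest EB value in the window."""
    window = eb_signals[-at_den:]
    if len(window) < at_den:
        raise ValueError("need a full window of EB signals")
    # Sort descending: the value at index at_num-1 is the biggest size
    # that at_num or more blocks in the window would still accept.
    return sorted(window, reverse=True)[at_num - 1]
```

With 800 of the last 1000 blocks signaling EB 8 and the rest EB 1, the node would enforce 8; with only 700 signaling EB 8, it would fall back to 1, matching the "I think 8, but I'll enforce what 75% accept" behavior described above.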

This solves the "median EB fork" issue, or at least makes it more predictable. There could even be a policy of orphaning blocks from any miner that does not signal an AT higher than, say, 75% (configurable, of course).

Overall, I also think this should make the hard fork more predictable. If we know everyone's strategy for accepting bigger blocks, we can better predict when the fork will actually happen.

Basically this idea is to use emergent consensus to crowdsource the proper activation threshold.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
I like the idea.
But I also dislike the idea.

I don't think AD should be advertised,
and by the same logic AT shouldn't be advertised either.
This proposal also completely changes the current dynamics of EC.
IMO, nodes and mining nodes need only advertise one value: their EB.
Generation size is implied when a block is created.
AD is a private variable that others should not be made aware of.

AT is not very useful, because in the end, if a new 8MB block is created and more than 51% of the network accepts such a block, it will be valid regardless of AT settings - unless we change EB to mean something different, like "my future blocksize limit" instead of "my blocksize limit".
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
Advertising AD tells miners how much support is needed to activate a fork.

If AD is above 4, it would be impossible to fork with 51% and bring nodes along; 53% gives miners only about a 4-block advantage in a 24-hour period.

An AD of 12 (the BU default) would require miners to have above 60% of the hash rate, as a minimum, to find 12 extra blocks in a 24-hour period, and a much higher percentage if they wanted to find 12 in a row.

So AD serves as a user-side proxy for a preferred activation threshold.

Users wanting to require a minimum of 95% support for the fork would signal with an AD of about 64.

Someone better at math could quantify the exact numbers statistically.
 

painlord2k

Member
Sep 13, 2015
30
47
I agree with the need to advertise miners' present and future preferences.
I suggest a different approach.

1) EB X (Excessive Blocksize - accept blocks up to size X)
2) AD Y (Acceptance Depth - if a block with size greater than X is mined, I will converge to the branch with more PoW after Y blocks)
3) FG Z (Future Generation - I want to build blocks up to size Z)
4) FT W/V (Future Threshold - I will build blocks up to the size signaled via EB by W of the last V blocks)

In this case, if FT is 1500/2000, the miner will look at the last 2000 blocks, and if 1500 or more of them signal EB X or greater, it will start generating blocks up to X MB.

5) FD T (I will start building such blocks not immediately after FT is achieved, but T blocks later)
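The FT rule above could be checked with something like the following hypothetical helper, assuming the EB values signaled in recent blocks are available as a list (names and shapes are my own, not part of the proposal):

```python
def ft_met(eb_signals, target_eb, w, v):
    """FT W/V rule, sketched: start building blocks up to `target_eb` once
    at least `w` of the last `v` blocks signal an EB of `target_eb` or more."""
    window = eb_signals[-v:]
    if len(window) < v:
        return False  # not enough history yet
    return sum(1 for eb in window if eb >= target_eb) >= w
```

With FT set to 1500/2000, a window where exactly 1500 of the last 2000 blocks signal EB 8 satisfies the rule; 1499 does not. FD would then simply delay acting on a True result by T further blocks.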
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
@painlord2k, can you explain the intent of FG (Future Generation) a little more?

BU already has a setting for MG or MGS - Max Generation Size (the largest block this node/miner will generate).

I am not convinced giving miners more control over block size is a good idea. Miners are already incentivized to make small blocks. My concern is that they will limit block size, and the way to break a mining cartel that is enforcing a limit is to have no imposed limit at all, or one the cartel has no control over whatsoever.

There are inherent market mechanisms that limit growth to the technical limits of the network before nodes drop off. There is no cost to pay and no centralization-of-control risk in increasing the block size limit. The ultimate limit is not block size; it is the size of the UTXO set (the sum of all unspent transaction outputs) - the cost and time it takes to validate transactions against it is what determines the cost of running a node. I still believe we will need LN to scale above this limit. The UTXO set is a result of growth and the number of users, and LN does not shrink it. Here are 6 inherent market mechanisms I can think of that restrain block size.

These reasons need to be debunked or understood. BU allows the network to come to terms with and understand them over time.

  1. Compact Blocks and Xthin need to relay an increasing number of missing transactions as blocks grow - the closer the block size gets to network capacity, the more transactions will not have fully propagated the network by the time a block is mined. Missing transactions need to propagate before the block can be validated, so as the number of missing transactions grows, validation time increases and with it orphan risk. - *there is a very real cost that deters including more transactions than the network can handle.*

  2. There is headers-first mining, and it can be abused to mine at higher capacity than the network can handle - but it comes with significant orphan risk. The optimization here is that all headers are the same size; even so, longer validation times increase orphan risk, making this practice very risky. - *there is a very real cost that deters this practice; over time such miners become less profitable.*

  3. Compact Blocks and Xthin have to communicate information in a bloom filter that grows with the number of transactions in the block - bloom filters describing more transactions are larger, and the more information there is, the longer it takes to propagate and validate. Larger blocks increase orphan risk. - *there is a very real cost that deters including more transactions than the network can handle.*

  4. Another optimization is parallel validation. Here too, smaller blocks are advantaged: when two blocks are found at a similar time, the smaller one is validated, propagated, and built on first, increasing the risk that the larger block is orphaned. - *there is a very real cost that deters including more transactions than the network can handle.*

  5. There is an inherent mechanism for handling very large blocks that may take over 10 minutes to validate - firstly, they can be orphaned by any previously mentioned method, or they can be built on and secured by mining empty blocks. In the latter case fewer transactions are processed, creating a backlog, a technical limit, and demand for block space. - *there is a very real demand for block space that builds to encourage miners to include transactions, and an incentive to optimize use of available network capacity*

  6. As on the majority network today, miners will not deviate from 1MB because the majority of users would reject such a block; the same is true of user-adjustable limits - BS/Core and BU both depend on relay nodes to enforce a network limit. With a user-set limit, when transaction volume rises above capacity, miners could be fooled by a Sybil attack into making bigger blocks; however, miners have a very big incentive to avoid orphaned blocks, and if the network can handle the capacity, the network will grow rather than reject those blocks.

Point 6 being the BU proposal and the other 5 being inherent to bitcoin, why do you think we need more complicated limits?
 

painlord2k

Member
Sep 13, 2015
30
47
EB, AD & MG are signals of present preferences.
EB is public; AD could be public; and MG could be public, but is usually private (not publicized).

Future Generation (FG) signals the desire to build blocks up to some size. (Bitcoin.com puts it in their coinbase transactions - /pool.bitcoin.com/FG2/ /EB1/AD6/".bG| )

I just advocate for FT & FD to allow miners to coordinate the timing and size of the increase using the blockchain and its PoW to prevent Sybil attacks.

If you don't trust miners to be rational, selfish economic actors, that is your problem. But the security of the blockchain is in their hands. They have a vested interest in maximizing the value of the network and the value of the coins they mine and receive from fees.

Standard economic theory shows that in a competitive setting miners cannot form any kind of monopoly. If their profit margins become too large, someone will start mining in competition with them. And mining is open to anyone. Nodes don't care where a block comes from or who mined it.

1) A transaction travels the network in less than 1 second, and what matters is that it gets to the miners.
xThin and Compact Blocks reduce the data that needs to be transmitted during block distribution by about 95% (so with xThin, 20MB blocks are like 1MB blocks without xThin). Moreover, if the network were able to clean up the mempool every block, there would be less need to re-relay transactions dropped by nodes' mempools. We don't know how many transactions the network can handle today, and we surely don't know how many it can handle tomorrow. Let's see.

2) Validation times are not very important if the transactions are already known and verified. The time to check just the Merkle tree is negligible.

3) The size of the bloom filter increases more slowly than the size of the block (from my understanding).

4) Miners will build blocks of the size they prefer. They will probably end up publicizing the templates of the blocks they are working on, so other miners can check them faster, accept them, and work on the next.

5) Relay networks and such are a problem for miners, because they MUST build on past blocks and have a vested interest in getting and confirming them in the least amount of time.

Users' nodes can only check that the blocks they receive follow the rules; they cannot build blocks. If they are not able to download the blockchain, they had better move to SPV nodes.
 

yrral86

Active Member
Sep 4, 2015
148
271
While it would be useful to signal to the community that we do not intend to hard fork with only a small mining advantage (which somehow many have come to believe), we need to remember that at the end of the day this signalling means nothing. Trying to put it on autopilot just makes it easier to game. If you have 25% of the mining power and there is an activation threshold of 750/1000, you can begin signalling once the count reaches 500/1000. However, you are an evil bastard who hates scaling, so your signalling is a lie: you are still running Core consensus. So when the network forks at "750", it actually forks at 500 and really fucks up everyone's year.
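The gaming scenario can be made concrete with a toy tally: each block in the window carries a (miner, signals_support) pair, but miners in a "liar" set signal support while still running the old consensus rules. All names here are illustrative:

```python
def support_counts(window, liars):
    """Return (apparent, real) support in a window of block signals.
    `window` is a list of (miner, signals_support) pairs; miners in
    `liars` signal support but would still reject a bigger block."""
    apparent = sum(1 for _, s in window if s)
    real = sum(1 for m, s in window if s and m not in liars)
    return apparent, real

# 1000-block window: 500 honestly supporting blocks, 250 blocks from a
# dishonest 25% miner signalling support, 250 non-supporting blocks.
window = ([("honest", True)] * 500
          + [("evil", True)] * 250
          + [("other", False)] * 250)
apparent, real = support_counts(window, liars={"evil"})
# apparent reaches the 750/1000 threshold, but real support is only 500
```

The apparent count crosses 750/1000 and triggers activation, while only 500 blocks' worth of hash power will actually follow the fork - exactly the failure mode described above.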

Also, this does not solve the median EB fork issue, since (assuming honest signalling) it only provides more degrees of freedom for finding ways to carve up the miners into disjoint subsets.