Gold collapsing. Bitcoin UP.

AdrianX

Well-Known Member
Aug 28, 2015
@AdrianX I like the explicit idea that low fees are important. But you are correct, we don't want some kind of CB subsidizing fees in the future. How about something like "Free market fee dynamics should settle on low fees"?
@theZerg
Better than what I had. I like it, but thinking about it some more, we want economies of scale to reduce tx fees.

How about something like: "Free market fee dynamics, governed* by economies of scale"

*optimized for.
*suited for

The small-block proponents think they have a free-market solution when they suggest limiting or restricting transaction volume to increase fee pressure, compensating miners for security costs as block subsidies diminish. On the old forum they used the "free $#!+ army" meme to attack proponents of lower fees. I'd like to avoid that criticism if we can, and still have low fees.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
Thinking of @rocks' concern about nodes "leeching" by downloading large blocks but not forwarding them along: Looking at the situation under @Peter R's framework above, the large blocks would fall into the "don't know" category, and not forwarding is the software's meta-cognitive response for this case.

But would not forwarding the blocks hurt the network? Maybe. But again, I think we need to consider the individual node incentives. So why would a node want to forward any blocks at all? Miners definitely have incentive to propagate their blocks out to the network, so one could expect them to run nodes that try to upload their new blocks. Also, businesses like merchant processors would want to upload blocks and transactions so that "their" transactions are included in the network consensus.

Even just regular peer-to-peer network design should be able to handle the incentives though, similar to how bittorrent works. Nodes can choose to upload data to other nodes that share blocks with them. They can be less generous with nodes that just leech off them. This tit-for-tat approach aligns individual incentives in a way that encourages widespread sharing of data.
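A minimal sketch of what such tit-for-tat accounting could look like, assuming a node simply tracks bytes served to and received from each peer (the class, function name, and thresholds below are made up for illustration, not BU code):

[CODE]
class PeerStats:
    def __init__(self):
        self.bytes_sent = 0      # data we have uploaded to this peer
        self.bytes_received = 0  # data this peer has uploaded to us

def should_serve_block(stats: PeerStats, min_ratio: float = 0.25) -> bool:
    """Serve new peers freely, but deprioritize peers that only ever download."""
    if stats.bytes_sent < 1_000_000:  # be generous until we have some history
        return True
    ratio = stats.bytes_received / stats.bytes_sent
    return ratio >= min_ratio
[/CODE]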

Further in the future, I would expect that bitcoin transactions and blocks will be shared in many ways in addition to the peer-to-peer network. There is already the block relay network for miners. With the payment protocol, wallets can send transactions directly to merchants, bypassing the peer-to-peer network. Large businesses could send transactions directly to miners, SPV wallets may pay for full node services, and I'm sure other "transport layer" protocols will emerge in future.
 

rocks

Active Member
Sep 24, 2015
@rocks
but what do you mean that a full node would continue to DL a big block larger than it accepts "for its own use"? why would they do this? won't it be able to tell that it's too large from the header and thus refuse to DL the rest of the block? in that sense it wouldn't be draining network bandwidth.
This could be my misunderstanding of what is being proposed. My understanding is that BU would add a parameter to not upload blocks over size X, in order to preserve upload bandwidth; however, the node would still download all blocks and consider the longest chain the valid chain.

If instead the parameter specifies to ignore blocks over size X by instructing the node to not even download them, then the node is no longer leeching, and is instead in effect fully rejecting blocks over size X. This behavior would cause the node to fork from the larger-block chain, and the node would remain forked until its size parameter was increased. The key here is to make sure its download preference is the same as its upload contribution.

A question is how to do this. Block headers do not contain a size field. You'd have to add new functionality enabling a node to 1) download headers only first, 2) query another node for how large the block is, and 3) only download the block if its size is below the node's parameter. As it stands, the only way I know of to determine a block's size is to download the full block, but that brings us back to the problem above and creates leechers.
https://en.bitcoin.it/wiki/Protocol_documentation#Block_Headers
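For reference, the 80-byte header documented at that link contains only the version, previous-block hash, merkle root, timestamp, bits, and nonce. A small parsing sketch (the function name is illustrative) makes the point that there is no size field to read:

[CODE]
import struct

def parse_block_header(header: bytes):
    """Parse the 80-byte header; note there is no field giving the block's size."""
    assert len(header) == 80
    version, = struct.unpack_from("<i", header, 0)
    prev_block  = header[4:36][::-1].hex()   # hashes are stored little-endian
    merkle_root = header[36:68][::-1].hex()
    timestamp, bits, nonce = struct.unpack_from("<III", header, 68)
    return version, prev_block, merkle_root, timestamp, bits, nonce
[/CODE]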

@rocks furthermore, why would they allow themselves to be leaves of the network when what they should want is to be active participants, i.e., fully functioning nodes? they don't help themselves by not being able to transact on the main, longer chain that accepts bigger blocks.
I meant leaves in the sense of block propagation. Yes they would be connected to multiple nodes in a graph configuration, but if you traced block propagation from node to node, they would be endpoints (i.e. leaves) that download but don't propagate.

@rocks
they should simply be convinced to up their settings to accept bigger blocks (note that i am not yet convinced of the adding blocks to a full node's chain by the "excessive block" method currently being proposed).
If nodes with a smaller size X parameter are in fact forked off the network, then they obviously have to increase their size X parameter to continue running. Which brings us to the question of why enable this at all? It really just puts a cap on a node, above which the node simply deactivates itself until the operator turns it back on by raising the cap.
 

theZerg

Moderator
Staff member
Aug 28, 2015
Using pejorative terms like leeching likely inhibits analysis here. If the block is invalid, are we "leeching" if we do not forward it? If the unrelayed block does not make it into the longest chain (for whatever reason, invalid or simply excessive), then the node is benefiting the network by not wasting its peers' bandwidth, not obstructing it.

We are proposing to introduce a non-boolean block evaluation in BU. There is now a grey area between valid and non-valid which is perhaps best described as "valid but discouraged", and a node that chooses not to forward this block is exercising its right as a participant to influence the evolution of the network.
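A minimal sketch of that non-boolean evaluation, with invented names; "EXCESSIVE" stands in for the "valid but discouraged" grey area:

[CODE]
from enum import Enum

class BlockStatus(Enum):
    VALID = 1      # passes consensus rules and is within this node's size preference
    EXCESSIVE = 2  # passes consensus rules but is larger than this node wants to relay
    INVALID = 3    # breaks consensus rules; never relayed

def evaluate_block(consensus_valid: bool, block_size: int, excessive_size: int) -> BlockStatus:
    if not consensus_valid:
        return BlockStatus.INVALID
    if block_size > excessive_size:
        return BlockStatus.EXCESSIVE  # keep it, but do not forward it (for now)
    return BlockStatus.VALID
[/CODE]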

People complain that the miners have all the power and nodes have none. That's not true, nodes have a little bit of power, and this is it.


@rocks I've been very careful to say that the node will not "relay" the block, not that it won't download it. I think that we'll have to download it even if it is "excessive"... at least in the near future.
 

rocks

Active Member
Sep 24, 2015
@theZerg

The difference is that pools have a revenue model built on fees that enables them to invest in infrastructure capable of handling larger blocks. However, standard nodes do not, and many running today are probably managed by users who do not want to pay $100/month to run the node.

Nodes are also a vector for sybil attacks. It would be easy for a 1MBer to launch 6000 nodes for a few days that disrupt and slow large block propagation and encourage miners to only mine small blocks. This is why nodes should not get a say in what is or isn't a valid blocksize.

Not relaying is leeching, it is the very definition of leeching in bittorrent world. We could easily end up with 50 nodes relaying large blocks, and 6000 nodes running with a cap stopping them from relaying. This puts tremendous upload pressure on the 50 real full nodes. It is not a healthy network.

IMHO full nodes should fully support the longest valid chain as determined by the miners producing real work. If they are not able to keep up with the longest valid chain it would be better for all that they either upgrade and spend more resources to keep up, or drop off as a full node and become a light client. Introducing a new full node client that does not fully support the longest chain as determined by miners is dangerous long term.
 

cypherdoc

Well-Known Member
Aug 26, 2015
@rocks

in your scenario, what would be your solution to defending against repetitive exa-blocks by an attacker, a la f2pool's single-tx, multi-input, self-mined block?

 

theZerg

Moderator
Staff member
Aug 28, 2015
@rocks the network really doesn't care what you or anyone else thinks should happen. The point is that they do have this power whether you like it or not.

@theZerg
Nodes are also a vector for sybil attacks. It would be easy for a 1MBer to launch 6000 nodes for a few days that disrupt and slow large block propagation and encourage miners to only mine small blocks. This is why nodes should not get a say in what is or isn't a valid blocksize.
Somebody could launch a Sybil attack on the network today and do an isolation attack, a non-relay attack, a bandwidth-exhaustion attack, a garbage-block attack, and so on. BU functionality does not change this.

@theZerg
Not relaying is leeching, it is the very definition of leeching in bittorrent world. We could easily end up with 50 nodes relaying large blocks, and 6000 nodes running with a cap stopping them from relaying. This puts tremendous upload pressure on the 50 real full nodes. It is not a healthy network.
Exactly. It is clear from your example that most of the network wants smaller blocks. So those 50 full nodes can either carry the weight of the entire network, produce smaller blocks, or stop relaying larger blocks (thereby putting pressure on miners to produce smaller blocks).

Also, this is not a bittorrent network. Your analysis using bittorrent is flawed because those 6000 nodes do not ever ask for the excessive block. The 50 nodes relay the block to whomever they want (whomever they choose to connect to). Those nodes ignore the block, and the situation stops there.

What if those 6000 nodes don't represent the real network average? What if they are artificial nodes (a Sybil attack)? In that case the situation is no worse than what we could have today if a Sybil network of nodes connected to external nodes, requested block downloads, but refused to relay any blocks to incoming connections.

@theZerg
IMHO full nodes should fully support the longest valid chain as determined by the miners producing real work. If they are not able to keep up with the longest valid chain it would be better for all that they either upgrade and spend more resources to keep up, or drop off as a full node and become a light client. Introducing a new full node client that does not fully support the longest chain as determined by miners is dangerous long term.
This idea basically means that a small group of miners and nodes can drive the entire network to larger blocks. But the fundamental idea behind BU is that network (and other) pressures restrain block size (on average) to whatever the average participant is willing to accept. This is why BU does not need explicit block size limits. Your idea removes this negative pressure.
 

cypherdoc

Well-Known Member
Aug 26, 2015
@theZerg

Also, this is not a bittorrent network. Your analysis using bittorrent is flawed because those 6000 nodes do not ever ask for the excessive block.
ok, you got me confused. i thought in your excessive block scenario, you were proposing that the nodes who had set their size below the avg block size would still *have* to DL the excessive block if the chain got longer than a certain #blocks and was deemed the longest chain?

i also agree with you that today a bank attacker could set up 6000 nodes as a Sybil attack.
 

theZerg

Moderator
Staff member
Aug 28, 2015
@cypherdoc yes, and at that point the node would also relay the excessive block. We are talking about before that time.
 

cypherdoc

Well-Known Member
Aug 26, 2015
@theZerg

well, in that case, doesn't rocks have a point that these nodes who choose not to participate in larger blocks will still be draining resources from the network in terms of the DLs of all these "excessive" blocks? this is actually why i'm not convinced that these nodes should have to do this b/c they've chosen, after all, not to participate in these bigger blocks by changing their block size setting downwards. why make them do this while also making the network have to feed them blocks they don't want?

imo, they should get forked off the network and be stuck not being able to process tx's if the avg block size is growing on a longer chain that they've chosen not to participate in. eventually, i think the pain would become unbearable for these nodes and they'd have to up their block size setting to at least match that of the network if they want to transact.
 

theZerg

Moderator
Staff member
Aug 28, 2015
@cypherdoc it would be ideal if the BU node could determine that it didn't want the block before actually downloading it. But @rocks suggested that that can't happen without changing the protocol (I haven't investigated it yet).

Eventually we could extend the protocol so that 2 BU nodes could do this for efficiency.
 

cypherdoc

Well-Known Member
Aug 26, 2015
@theZerg

yeah, @rocks is right about that. unfortunately the header doesn't provide any blocksize info so the node would have to DL even excessive blocks. but relaying doesn't have to happen, right?
 

theZerg

Moderator
Staff member
Aug 28, 2015
Right. The node does not relay the excessive block until it is part of the "fork" chain that the node recognizes as authoritative (N blocks longer than the next best choice). At that point it will relay it.
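A sketch of that relay rule, with illustrative names; the depth N is assumed here to be a per-node "acceptance depth" setting:

[CODE]
def should_relay(block_is_excessive: bool,
                 containing_chain_height: int,
                 best_other_chain_height: int,
                 acceptance_depth: int) -> bool:
    """Relay normal blocks immediately; hold excessive ones until their chain clearly wins."""
    if not block_is_excessive:
        return True
    lead = containing_chain_height - best_other_chain_height
    return lead >= acceptance_depth
[/CODE]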
 

cypherdoc

Well-Known Member
Aug 26, 2015
@theZerg

either i'm missing something or i'm not explaining myself right.

if the node has chosen to not relay blocks over a certain size via its setting, why should it relay an excessive (bigger) block at any point? its owner has made a conscious choice not to do this for either ideological reasons or b/c perhaps he knows something everyone else doesn't.
"So why aren't central banks embracing the Swiss example?"

i can tell you why:

http://www.wsj.com/articles/switzerland-offers-counterpoint-on-deflations-ills-1445189695
 

theZerg

Moderator
Staff member
Aug 28, 2015
@cypherdoc If the excessive block is never relayed by BU nodes, then you would break the network under certain topologies. Your node would basically be telling other nodes "this is the longest chain", but then when those other nodes say "ok, give me the blocks in the chain", you respond with no.

If those other nodes are only connected to non-relaying BU nodes then they would be stuck forever in the synchronizing phase.

When the BU node accepts a chain as authoritative, it must start relaying the blocks in it.

Look, think about it from the user's perspective. Let's say he's selling something, so he's receiving money. The txn appears in an excessive block, so his BU wallet shows it in orange with a little caution symbol near it. It is basically not much more secure than a "zero-conf" txn at this point, because this BU wallet is encouraging a different chain to replace this one, and that other chain may not contain the txn.

Now let's say that the chain's depth grows N blocks greater than the next best. So his BU wallet "accepts" the chain and switches the orange coloring to green. The txn is "accepted", so the user sends out the package. At that point, the BU wallet should "push" this chain to all askers, because it wants to ensure that the money his user received remains part of the longest chain.
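A hypothetical illustration of that wallet behavior, reusing the same acceptance-depth idea (invented names, not actual BU wallet code):

[CODE]
def display_state(in_excessive_block: bool, chain_lead: int, acceptance_depth: int) -> str:
    """Color the payment in the wallet UI based on how settled its chain is."""
    if in_excessive_block and chain_lead < acceptance_depth:
        return "orange"  # little better than zero-conf; the node still prefers another chain
    return "green"       # chain accepted as authoritative; safe to ship the goods
[/CODE]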
 

rocks

Active Member
Sep 24, 2015
I'm thinking of this in very simple terms. If the goal of BU is to encourage large blocks, why introduce a new parameter (one that we don't have in bitcoind today) that encourages smaller blocks?

My understanding is that BU would implement, for miners, a "vote for all larger block methods" in order to grow consensus around that single idea. A new block size relay limit goes against this. Part of the "vote for all larger block methods" should also include nodes willing to accept any sized block; relaying a block is the node's vote, and BU should vote for larger blocks.

If the concern is network load, it seems a better way would be to simply let nodes set upload/download bandwidth limits (the same as BitTorrent clients). Nodes behind slower networks could then reduce their upload contribution to match their ability, but they would stay on the longest chain and continue to upload it.
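A minimal sketch of that alternative, assuming a simple token-bucket upload throttle (names and rates are illustrative, not an actual BU or BitTorrent implementation):

[CODE]
import time

class UploadThrottle:
    """Token bucket: refill at 'rate' bytes/second, never exceeding 'burst' bytes."""
    def __init__(self, rate: float, burst: float):
        self.rate = rate
        self.burst = burst
        self.tokens = burst
        self.last = time.monotonic()

    def try_send(self, nbytes: int) -> bool:
        now = time.monotonic()
        self.tokens = min(self.burst, self.tokens + (now - self.last) * self.rate)
        self.last = now
        if nbytes <= self.tokens:
            self.tokens -= nbytes
            return True
        return False  # caller defers the upload; the node stays on the longest chain
[/CODE]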
 

theZerg

Moderator
Staff member
Aug 28, 2015
We don't want infinitely sized blocks right away. We want the transport layer pressures to "right-size" the blocks so we don't have to invent 20 year inflation schedules today.

Traffic-shaping clients like the one I wrote for XT will work to some degree. The unthrottled network itself will work as described formally in Peter R's paper.

But traffic shaping already has its detractors (in Core, and Gavin) who claim it would be better to "throttle" by NACKing certain classes of requests or by limiting the number of connections.

In a similar manner, not relaying excessive blocks discourages exactly the behavior in question without impacting the rest of the network.

I personally think that BU should have both traffic shaping and excessive-block discouragement.
 

solex

Moderator
Staff member
Aug 22, 2015
@theZerg
> We don't want infinitely sized blocks right away. We want the transport layer pressures to "right-size" the blocks so we don't have to invent 20 year inflation schedules today.

Exactly.

The point of each node having its own personal block limit for relaying is fundamentally to provide a braking mechanism on block-size growth, a further form of feedback into the orphaning risk for large-block miners. There is also a political benefit: it avoids the meme that BU means "no-limit" blocks. If it can be made to work, then surely it is worth having.

The cases for block validation:

1. Valid
2. Invalid
3. Don't know

Assuming the "don't knows" are received blocks that are simply too big to relay, then just because one block > X is received does not have to mean subsequent ones are permanently non-relayed. A big block followed by six blocks < X could be the signal for the node to fully participate in the network again. This creates a short period of elevated orphaning risk for large-block miners that disappears after a while.
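A small sketch of that re-engagement rule, with invented names and the six-block figure taken from the paragraph above:

[CODE]
def update_relay_state(relaying: bool, recent_sizes, limit: int) -> bool:
    """recent_sizes: block sizes seen so far, newest last."""
    if recent_sizes and recent_sizes[-1] > limit:
        return False                      # pause relaying after an oversized block
    last_six = recent_sizes[-6:]
    if not relaying and len(last_six) == 6 and all(s <= limit for s in last_six):
        return True                       # six blocks under the limit: rejoin fully
    return relaying
[/CODE]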

Users who want to select a block limit might also want to select a start date and an allowed ramping percentage per year, e.g. 2 MB, 2016/01/01, 20%. The block limit tolerated by the node would then slowly increase, as it does in BIP101 and Pieter Wuille's proposal.
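And a sketch of the ramping idea, assuming simple yearly compounding from the start date (the figures match the 2 MB / 20% example above; this is not BIP101's exact formula):

[CODE]
from datetime import date

def tolerated_limit(base_bytes: int, start: date, yearly_pct: float, today: date) -> int:
    """Block size this node tolerates 'today', compounding yearly_pct from 'start'."""
    years = max(0.0, (today - start).days / 365.25)
    return int(base_bytes * (1 + yearly_pct / 100) ** years)

# Example: about 2.4 MB one year after a 2 MB / 20% start on 2016/01/01.
print(tolerated_limit(2_000_000, date(2016, 1, 1), 20, date(2017, 1, 1)))
[/CODE]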
 

rocks

Active Member
Sep 24, 2015
@theZerg @solex
I guess I am less concerned about seeing excessively large blocks, and more concerned about non-contributing nodes, or even Sybil attacks intended to keep block sizes small.

It is important to remember that when Satoshi first put the 1MB limit in place, anyone could solo-mine blocks at home at limited cost. The fear was individuals with limited resources creating massive blocks just to disrupt the network.

Today it is different: mining is a business, and finding just one block requires a significant expenditure. Any entity that finds a block is more motivated to create a block the network can handle, in order to receive the payout, than to create a massive block to disrupt the network (which would probably be orphaned anyway due to propagation delays).

It hasn't been a problem in the past, and with the economics of today it just seems unlikely to me.

But nodes taking bandwidth and not contributing it back seems much more likely and problematic. I think the BU max relay approach enables a likely problem in order to prevent an unlikely one.

@rocks
in your scenario, what would be your solution to defending against repetitive exa-blocks by an attacker, a la f2pool's single-tx, multi-input, self-mined block?
The defense is the other miners. If they build on f2pool's blocks, then bitcoin accepts those blocks. If other miners choose to ignore and orphan those f2pool blocks, then the blocks are rejected.

I also wouldn't characterize f2pool's single-tx, self-mined block as an attack; it was a cleanup of dust transactions. That is valuable.
 

Peter R

Well-Known Member
Aug 28, 2015
Great ideas @solex, @theZerg and others regarding enhancements to allow nodes to affect network orphaning risk!

I just wanted to interject that there's probably tons we could do to give node operators all sorts of controls and adjustments (@awemany's block size governor comes to mind). What's cool is that this can happen at a different (and more customizable) level outside of the true consensus layer. I think the immediate hurdle is to win mind share that our basic idea is sound (that we don't need perfectly rigid rules for things like the BSL, # SIGOPs, # bytes hashed, etc). To do this, it might be best to implement something simple to begin with, but then talk about all the cool things that could be done in the future.
@rocks

I think I'm imagining this playing out very differently than you are. I envision that miners will be very conservative in raising their block sizes (they will use the tippy-toe method). Most nodes will never actually encounter excessive blocks in practice, as most people's settings will be less restrictive than what miners dare to produce on a regular basis. I imagine that network capacity will slowly grow in this interesting dance between nodes, miners, and demand for block space.

Are you imagining that the average node will be dealing with excessive blocks on a daily basis?
 