Block space as a commodity (a transaction fee market exists without a block size limit)

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
I might be wrong here, it's been a while since I read into IBLTs, but when a miner announces a mined block he sends, more or less, a clever hash table of the transactions included. Now I receive it, unpack it, and see that I'm missing transactions (because they didn't propagate all the way to me yet, I didn't see them, or something along those lines), and I have to fetch those transactions before I can verify the block and pass it along. So it's not just 4 bytes; it could be many. The IBLT itself isn't just 4 bytes either and has to be sent as well. If the block is bigger, the IBLT is proportionally bigger.
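For readers who haven't seen the mechanism, here is a toy sketch of the set-reconciliation idea behind IBLTs. This is a simplified model, not the actual proposed implementation: the cell count, the hashing scheme, and the 8-byte transaction IDs are all illustrative. It shows why the table must be sized for the expected set difference, i.e. the thing that grows when mempools diverge:

```python
import hashlib

NUM_CELLS = 40    # must scale with the expected set difference
NUM_HASHES = 3    # each item is mapped into several cells

def _cells(item: bytes):
    """Deterministic cell indices for an item, one per hash function."""
    return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:4], "big") % NUM_CELLS
            for i in range(NUM_HASHES)]

def _check(item: bytes) -> int:
    """Checksum used to recognize 'pure' cells while peeling."""
    return int.from_bytes(hashlib.sha256(b"chk" + item).digest()[:4], "big")

class IBLT:
    def __init__(self):
        self.count = [0] * NUM_CELLS
        self.key = [0] * NUM_CELLS   # XOR of item values per cell
        self.chk = [0] * NUM_CELLS   # XOR of item checksums per cell

    def insert(self, item: bytes, sign: int = 1):
        v = int.from_bytes(item, "big")
        for c in _cells(item):
            self.count[c] += sign
            self.key[c] ^= v
            self.chk[c] ^= _check(item)

    def subtract(self, other: "IBLT"):
        """Cell-wise difference; the remainder encodes only the set difference."""
        for c in range(NUM_CELLS):
            self.count[c] -= other.count[c]
            self.key[c] ^= other.key[c]
            self.chk[c] ^= other.chk[c]

    def peel(self):
        """Recover the set difference: (only in self, only in other)."""
        ours, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(NUM_CELLS):
                if self.count[c] in (1, -1):
                    item = self.key[c].to_bytes(8, "big")
                    if self.chk[c] != _check(item):
                        continue  # collision, not a pure cell
                    (ours if self.count[c] == 1 else theirs).add(item)
                    self.insert(item, sign=-self.count[c])
                    progress = True
        return ours, theirs

# Demo: mempools differ by one transaction each way (8-byte fake tx IDs).
txs = [i.to_bytes(8, "big") for i in range(1, 7)]
sender, receiver = IBLT(), IBLT()
for t in txs[:5]:
    sender.insert(t)             # sender's block: tx1..tx5
for t in txs[:4] + [txs[5]]:
    receiver.insert(t)           # receiver's mempool: tx1..tx4, tx6
sender.subtract(receiver)
missing_at_receiver, extra_at_receiver = sender.peel()
```

Peeling fails once the difference outgrows the table, which is exactly why the IBLT (plus the follow-up round trips for missing transactions) can't stay at a constant size as blocks grow.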

Look at the discrepancies in mempool between TradeBlock and Blockchain.info for a good example of this.
Geezuz crimony. Reading this thread from over a year ago, which includes many small blockists, you'd think a miner would never, ever dare launch a large bloat-block attack.

https://m.reddit.com/r/Bitcoin/comments/2hchs0/scaling_bitcoin_gavin_begins_work_on_invertible/

What happened?
 
  • Like
Reactions: dgenr8

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@awemany

So because entropy (transactions) can enter the network through any node, and because the nodes are physically separated, it is physically impossible for the network to agree on the present state of the ledger due only to the speed-of-light constraint (we don't even need the Shannon-Hartley theorem for this point). This had never occurred to me until you mentioned it! I think it further strengthens the idea that O(1) block propagation is not possible (for the reasons you point out in your visualization). It would be great to see this diagram you're proposing!
Yes! But I have to say I am still pondering the details. I agree it would be a good idea to actually sketch it as a diagram rather than a textual description of one.

Also, what I wrote above about a left and right border distinction was a bit convoluted, and that distinction isn't even necessary as far as I can now see. The nice thing about writing out thoughts is that it helps process them better :D

Note also the following: it is not only impossible to agree on the present state of the ledger, it is also impossible to agree on a past state of the ledger - collapsing the transaction streaks into points - without further transmission of information!

If you think about it, all a single node knows about a transaction is its content and the local time it arrived at(*). The size of the network creates a lower bound on the uncertainty of that time stamp for other people's nodes.

And the collapse of the wave function - very aptly named IMO - happens when blocks get transmitted and the exact information about what is in the block is being exchanged between nodes.

Further, you can divide the transaction space, call it C, into subsets, such as transactions with varying fees and so forth. In any case, you can only make a finite number of subdivisions (as we are talking about finite computers) and pattern the C x T(ime) space into at most a finite number of regions. That subdivision subsumes the fee, relay policy and so forth, as well as any (pre)communication about block propagation schemes. In any such scheme, these patterns will have a time-wise boundary (on the left and right side in the diagram), and one has to select a cutoff value for each pattern touching the 'past or present' limit. Barring very weird worlds where some entity discretizes the arrival time of these transactions, there is real uncertainty about which transactions are included or excluded at these cutoff times. The total rate of arrival of transactions that are uncertain at the cutoff is proportional to u(t), and this uncertainty has to be resolved with a proportional amount of information transfer.

Of course, one can create just a very small surface area on the diagram, which also constrains the transactions selected for a block; thus there is another proportionality factor in front of u(t), namely the fraction of accepted transactions to total transactions. An empty block can obviously be transmitted in O(1), but that is something you already pointed out as well.

Yes, let me go and actually sketch that.

---
(*) Ok, you could say also the link it arrived on but I hope we can reasonably drop that dimension as being not of interest.

I want to release a new revision of my fee market paper, and one of the things I want to do is show with a very simple argument that true O(1) block propagation is impossible (assuming there's more than one miner or mining pool). Then I think everyone will have to concede that "yes, a fee market exists." Of course the small-block proponents will then suggest that the equilibrium block size Q* could become too large for decentralization unless…uh…a centralized group of developers intervenes…but I'll call that progress. It will mean that the idea of the propagation impedance is sound.
Absolutely. However, bogus arguments are often repeated. This also goes to @Yoghurt114 and the bogus O(n^2) problem. If we have full nodes proportional to users, we won't have any centralization problem, and economic factors will limit the transaction rate to what is physically and economically feasible.

Regarding relationships with quantum mechanics, I'm quite certain you're right. I believe there will be some "uncertainty principle" and I wouldn't be surprised if we can even recycle some of the formalisms from physics. Bitcoin is an exciting new field of research; it mixes economics, physics, computer science, etc., all together!
Indeed, it looks like it.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@cypherdoc

I think most people now agree that IBLTs are not truly O(1) once you factor everything in (this isn't to suggest they aren't helpful--quite the contrary). IIRC Rusty Russell discussed an IBLT implementation at ScalingBitcoin that communicated blocks using 97% fewer bytes than the target block size. Even @Yoghurt114 "somewhat agreed" with this point yesterday:

It sounds like we are in agreement then. IBLTs could significantly reduce the propagation impedance, but they won't in general make the impedance zero.
Somewhat agreed with current proposals, to a point. I must stress I am thoroughly unconvinced O(1) propagation is outright impossible - which would eliminate block-size dependent orphan risks, and I think it is unsafe to assume that it is.
 
Last edited:
  • Like
Reactions: awemany

humanitee

Member
Sep 7, 2015
80
141
If the network gets bigger, there will be more transactions in the far corners of the network. I doubt O(1) is ever possible; the network is in a constant state of flux. It's probably possible to get very, very close to it, asymptotically speaking.

Is there a peer today that has every transaction in its mempool? It seems impossible to prove, but I highly doubt it; mempool contents are limited by propagation speed and the maximum number of connections a node allows.
 
  • Like
Reactions: awemany

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@humanitee

Mostly agree. However, instead of saying that we can get very close to O(1) propagation, I'd suggest that propagation is O(z*N) and we can work to make z very small. This z parameter is the propagation impedance and affects the cost to produce block space.
 
  • Like
Reactions: awemany

humanitee

Member
Sep 7, 2015
80
141
I agree with that, Peter. We more or less said the same thing; yours is more refined. z approaches 1/N asymptotically!
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
The purpose of having a universally agreed-upon ledger is that the ledger is only useful for forming a currency if all observers agree on transaction ordering (so that conflicting transactions can be resolved in a deterministic manner).

At the point we decided to do that, we were already trying to create a system that emulates a property which does not exist in nature - in nature, different observers can disagree about the ordering of events without either of them being wrong.
Yes. The question is of course whether and which further properties that artificial system entails. And just saying 'shared ledger' actually implies so much about the structure of this thing that many of the above assumptions would hold. In a way, Bitcoin is quite 'pure'.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Ok, so here's what I imagined above, with a picture and a terse but hopefully correct argument. Feedback appreciated:

[attached diagram: transactions on a content-vs-time plane, each with a time-uncertainty bar, and a miner's block selection boundaries]
By the nature of having to make a decision, a miner has to select a time boundary on transactions, regardless of what other parameters he selects for including/rejecting a transaction. The multitude of possible parameters, all based on the transaction content (*), is compressed into a schematic view on the y-axis. He might, of course, select a more complicated scheme than the one shown above, but that doesn't bring propagation any closer to O(1); it can only go in the opposite direction.

From his local POV, he only knows each transaction up to a time uncertainty. He has a timestamp for each transaction, but he doesn't know what other nodes' timestamps are. The size of that uncertainty, the width dt of each little bar in the picture above, is the size s of the network (as transactions can arrive from anywhere) divided by the propagation speed c, so dt = s/c.

Any selected time boundary will coincide with a subset of transactions. The size k of that subset per block is proportional to the transaction rate n, the miner's acceptance rate a, and the width dt of each transaction's uncertainty:

k = n a dt = n a s/c

This is, notably, proportional to the transaction rate n.

Assuming that there is no mempool loss and all other nodes have the same set of transactions (breaking this assumption only makes propagation worse), this amounts to at least k bits of entropy that have to be exchanged between nodes.

It follows that the theoretical lower bound on more efficient block propagation schemes is strictly in Omega(n).

(*) Adding parameters (such as a funny per-incoming-link policy or similar) does not change the fact that time is one of those parameters, which is the important part of this argument.
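The proportionality k = n a dt can be checked with a quick Monte Carlo toy model. The Poisson arrival process, the fixed cutoff, and the independent acceptance probability here are modeling assumptions, not part of the argument above:

```python
import random

def mean_boundary_txs(n_rate, accept_rate, dt, trials=1000, horizon=10.0):
    """Mean number of accepted transactions whose arrival falls within the
    uncertainty window dt of a cutoff placed at `horizon` (toy model)."""
    random.seed(42)
    total = 0
    for _ in range(trials):
        t = 0.0
        hits = 0
        while True:
            t += random.expovariate(n_rate)  # Poisson arrivals at rate n
            if t > horizon:
                break
            # an accepted tx with timestamp within dt of the cutoff is ambiguous
            if horizon - t < dt and random.random() < accept_rate:
                hits += 1
        total += hits
    return total / trials
```

With n = 100 tx/s, a = 0.5 and dt = 0.1 s, the simulated mean comes out near n·a·dt = 5, and doubling n doubles it: the Omega(n) floor in miniature.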
 
  • Like
Reactions: Peter R

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Oh, I forgot to mention: the nature of time and decisions to eventually collapse uncertainty should (in principle!) also work on a higher level in certain developer circles. :D
 

humanitee

Member
Sep 7, 2015
80
141
I like most of that, but wouldn't the left side of that chart be all clear inclusion? We know what was included in the last block.
 
  • Like
Reactions: awemany

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@humanitee: Good point! In the case of all transactions being included, there would be none left after a block to create an unclear boundary. In other words, the right boundary of a block becomes the left boundary of the next block. I tried to picture the more general case, though: there might be transactions that are thrown away, and thus the left and right boundaries might differ. Basically, blocks are non-overlapping sets with potentially (partially) touching boundaries (as in a Venn diagram) on the semi-infinite transaction band.

My view in the picture is that of a 'timeless being', basically looking at time like any other variable and seeing that you'd need to draw lines left and right. They might be jagged and complicated, but to contain a 'measurable set of transactions' you have to draw an area greater than zero, meaning you have to select boundaries in time.

But you are right: a more refined drawing should show that the right boundary of a block coincides to a large extent (possibly exactly) with the left boundary of the next block - and optimization might mean that you only need to transfer the information about one such divider line per block.

My argument doesn't rest on that though, it rests on needing to draw such lines at all.


EDIT: So here's an addendum about a potential 'solution to the problem' that really isn't one, but which I also pondered briefly: what about selecting a jagged line that cuts carefully between those transactions? Wouldn't that allow O(1) propagation? But every jag would have to be agreed upon, too, which also means information transmission (a bit per jag). (Or you agree on the scheme in advance, but then you lose the advantage of going around unclear transactions.) However, this makes me think that we maybe haven't arrived at a clear enough picture of the whole process yet, and some further thought would help.

EDIT#2: Small mistake in my picture above: the lines crossing the light blue areas should of course be black (the miner is not interested in those transactions at all and is assumed able to communicate his scheme efficiently), not red.
 
Last edited:

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Great diagram!

I think what it illustrates, however, is that information needs to be communicated between miners in order for miners to come to consensus about the contents of blocks--and that the amount of information is fundamentally proportional to the transaction rate. I'm not sure that this (at least by itself) also shows that block propagation cannot be O(1).

Let me play Devil's advocate: imagine that miners forfeit their ability to build blocks according to their own volition. They agree to only build blocks that contain transactions from between 70 min and 60 min ago. This gives all the miners lots of time to come to consensus on what transactions occurred "60 - 70 min ago" (using a non-PoW method, for example). Now, when a miner finds a block solution, he only has to communicate the PoW since the network already knows what he was working on and already agrees that it would be valid so long as it includes the correct nonce.

So I think we still need the reductio ad absurdum. The only case where O(1) block propagation is possible is if the network has already come to consensus (e.g., like the scenario I described above). But then, what was the point of the PoW?
 
Last edited:

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
What I meant to express is that distributed consensus is hard because it tries to objectively answer questions that do not always have an objective answer.
Agreed! In fact, your comment helped me think up a fun problem for a special relativity class:

Alice is in Hong Kong and Bob is in Vancouver when they each learn the private key to the same address holding 1,000 BTC. According to an observer in Earth's reference frame, Alice spends the 1,000 BTC to her private address (by posting a signed TX to a node in Hong Kong) 10 ms before Bob spends the 1,000 BTC to his address (by posting a signed TX to a node in Vancouver).

(a) Does a reference frame exist where an observer would witness Bob spending the 1,000 BTC first?

(b) How much earlier (in Earth's reference frame) must Alice post her TX to the Hong Kong node in order for observers in all possible reference frames to agree that Alice indeed spent first?

EDIT: I'm such a geek :)
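For part (b), the threshold is just the light-travel time between the two nodes. A back-of-the-envelope sketch, where the ~10,250 km great-circle distance between Hong Kong and Vancouver is a rough assumption:

```python
C = 299_792_458.0           # speed of light, m/s
DISTANCE_M = 10_250_000.0   # Hong Kong -> Vancouver, ~10,250 km (assumption)

def min_headstart_ms(distance_m=DISTANCE_M):
    """Minimum head start (Earth frame) for Alice's spend to precede Bob's in
    every reference frame: the two events must be timelike-separated, so the
    gap must exceed the light-travel time d/c between the two nodes."""
    return distance_m / C * 1000.0
```

This comes out to roughly 34 ms. Since the stated 10 ms separation is smaller, the two spends are space-like separated, so the answer to (a) is yes: there is a frame in which Bob spends first.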
 
Last edited:

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Some preliminary thinking about "weak blocks"

"The costs are not zero but they are the SAME irrelevant of how big the block is made." -- /r/walletceo

Not really. Let's imagine an "ideal implementation" of weak blocks and let's imagine that all of the Network's hash power plays along...

OK, so after 1 minute, Miner A mines a weak block that's 0.5 MB in size. Miner B can now reformat his block to include all of the transactions in Miner A's weak block for free. But if he wants to include more transactions, for example to earn more fees, then the additional transactions he includes will still take additional time to propagate. Miner B can't all of a sudden make his block infinite in size, for instance.

Let's assume Miner B decides to add another 1 MB to Miner A's weak block and then he finds a second weak block a minute later. Well now Miner C can include the contents of Miner A's block and Miner B's block for essentially no cost. But again, if Miner C wants to include additional transactions, then he opens himself up to a greater risk of orphaning.

Miners will still build upon the weak blocks as described in my paper, but (in a perfect implementation) they get to tack on the contents of the previous weak blocks without accruing additional orphaning risk. In a way, we can imagine that the "cost" of these TXs was already paid by the previous weak blocks.
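This marginal-cost picture can be sketched with a simple exponential orphan-race model in the spirit of the fee market paper. The impedance value z and the byte counts here are made-up numbers for illustration:

```python
import math

T_BLOCK = 600.0   # mean block interval, seconds
Z = 1.0e-5        # assumed propagation impedance: seconds per marginal byte

def orphan_risk(fresh_bytes, z=Z, t_block=T_BLOCK):
    """Orphan probability from the marginal propagation delay of bytes NOT
    already covered by earlier weak blocks (exponential-race model)."""
    tau = z * fresh_bytes          # extra propagation time for fresh bytes
    return 1.0 - math.exp(-tau / t_block)

# Miner C tacks on 1.5 MB of weak-block-covered TXs at zero marginal risk,
# but a fresh 1 MB of new TXs still carries orphan risk:
risk_fresh = orphan_risk(1_000_000)
risk_covered = orphan_risk(0)      # covered bytes propagate "for free"
```

The point is that the cost is not flat: bytes already propagated via weak blocks add nothing, while every fresh byte still adds orphan risk, so block space beyond the weak-block contents is not free.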
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
Regardless of how special relativity may be interpreted, for each validator either Alice is first or Bob is first. Observers in different frames cannot see mutually contradictory blockchains forming, of course, or else, as they all converged upon Vancouver, some of them would see the blockchain history magically change somewhere along the way.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Zangelbert Bingledack

Of course you are correct that the transaction sequence as recorded by the Blockchain will be the same for all observers. The point Justus was making is that in certain cases (i.e., events separated by space-like intervals) there is no universal answer to who spent first (it depends on the reference frame of the observer). But (like you said) the Blockchain needs to give a universal answer so the Bitcoin network must come to consensus on which transaction to include even though some observers will think the decision was "wrong."

Anyways, this is totally academic until we're flying around in spaceships at speeds approaching the speed of light.
 
Last edited:

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Let me play Devil's advocate: imagine that miners forfeit their ability to build blocks according to their own volition. They agree to only build blocks that contain transactions from between 70 min and 60 min ago. This gives all the miners lots of time to come to consensus on what transactions occurred "60 - 70 min ago" (using a non-PoW method, for example). Now, when a miner finds a block solution, he only has to communicate the PoW since the network already knows what he was working on and already agrees that it would be valid so long as it includes the correct nonce.
Devil's advocate playing is great!

But if you think about it, the time uncertainty for past transactions, even those from 60-70 min ago, does not evaporate without further transmission of information. Even 60 min back, you'd have to draw your line somewhere, and unless you know your territory (the exact layout and timing of the transactions), you are going to have a couple that sit on the boundary and thus need synchronization on whether to include them or not.

You could, of course, try to communicate a scheme with your peers that threads the line through where there is space, or makes it a jagged line going around the transaction uncertainty. The question then becomes, as far as I can see, whether you can communicate, given the random timing of incoming transactions, enough information to avoid them in fewer bits than the number of transactions an arbitrary line would pierce through. And that is a good point; that is something that still needs to be figured out.

Naively, you might think you could just number all the transactions and transmit a cutoff boundary in O(log n) bits by sending the transaction index at which you want to cut off. But that doesn't work, because for every transaction-index boundary you'd again draw a line piercing through transactions with uncertain timing (and your exact index is uncertain as well, since the ordering of the transactions differs between nodes). Maybe there is a weird scheme that could work?

So I think we still need the reductio ad absurdum. The only case where O(1) block propagation is possible is if the network has already come to consensus (e.g., like the scenario I described above). But then, what was the point of the PoW?
Yes, but I hope that a certain size of the network and random incoming transactions are enough to show a floor on the information necessary to synchronize. A certain size of the network would also imply that there's more than a single miner (combined ownership aside).
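The problem with the naive O(log n) index idea can be made concrete with a toy simulation: two nodes that timestamp the same transactions with independent network jitter end up with different orderings, so the same cutoff index selects different sets. All parameters here (200 transactions, the jitter magnitude) are illustrative:

```python
import random

def local_ordering(true_times, jitter, seed):
    """A node sees each transaction at its true time plus network jitter,
    and orders its mempool by those local timestamps."""
    rng = random.Random(seed)
    stamped = [(t + rng.uniform(0.0, jitter), tx) for tx, t in true_times.items()]
    return [tx for _, tx in sorted(stamped)]

# 200 transactions arriving over 10 seconds (illustrative numbers)
random.seed(0)
true_times = {f"tx{i}": random.uniform(0.0, 10.0) for i in range(200)}

node_a = local_ordering(true_times, jitter=0.5, seed=1)
node_b = local_ordering(true_times, jitter=0.5, seed=2)

cutoff = 100  # an O(log n)-bit index into "the" ordering
disagreement = set(node_a[:cutoff]) ^ set(node_b[:cutoff])
```

The index itself costs only ~log2(200) ≈ 8 bits to send, but reconciling the resulting set disagreement near the boundary still costs information proportional to the number of boundary transactions.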
 
  • Like
Reactions: Peter R