Block space as a commodity (a transaction fee market exists without a block size limit)

humanitee

Member
Sep 7, 2015
80
141
Thanks for your response.

And as an aside, I would argue against Bitcoin being an efficient *payment* network (now or ever), for reasons unrelated to this debate. We're *using* it as a payment network, but it has become quite clear this has some problems. Bitcoin is a settlement network, clear and simple. The analogy with Bitcoin being a digital form of cash payments sounds nice, but it's false and breaks down right quick: When I give you a dollar bill, I don't need to wait ~10 minutes for you to receive it, nor does the entire world see that transaction.

Waiting for payment is waiting for settlement. Bitcoin is a settlement network. And not even an efficient one at that. But it's secure, independent of anything - decentralized, and *relatively* fast (which isn't hard considering the competition). Payment networks may be built on top, as the Lightning Network proposal demonstrates, but Bitcoin is not one.
I liked your post except for this bit; it doesn't really make sense. Bitcoin can be both a payment network and a settlement network. You can spend at 0 confirmations, so it's instant. If you want security, you must wait. Sending digital USD through some service or bank is also not "instant." Driving to an ATM to get money to give to my friend isn't "instant" either.

I also have qualms with your IBLT explanation. An IBLT still requires bandwidth, which could still be significant if blocks were large enough. It definitely scales with block size, albeit with greatly reduced bandwidth costs.

But what if propagation impedance drops to, effectively, 0 - as an IBLT or some such scheme could pull off? What then? Do we have a boundless block size? Would transaction fees not fall to 0 satoshis? And wouldn't the throughput grow beyond what any single entity in the world could validate?
Don't forget miners choose what to include. Why include a transaction that isn't paying at least some tiny fee? I wouldn't, even if propagation was as you describe.

For the record, I think there should be a limit somewhere, simply because infinite block sizes could wreak havoc on the network: transactions still need to be verified, synchronized across nodes (even with IBLT), and stored indefinitely.

BIP101 can be soft-forked to provide a cap, so it doesn't worry me, and the fact that its bounds are so high is encouraging compared to the other proposals, which limit the cap to a small fraction of BIP101's. I much prefer it, simply because we won't have to revisit this issue for a much longer time.
 
Last edited:

dgenr8

Member
Sep 18, 2015
62
114
@Yoghurt114

Your outlook seems driven by fear, and an irrational demand for certainty. These are not the motivations that drove Satoshi to introduce bitcoin, and they are not the motivations that will drive whatever cryptocurrency is successful in the next 20 years.

That cryptocurrency will be one that relies on productivity growth, spurred in part by sound money. It will be one that solves the propagation challenges without imposing a restrictive blocksize limit. And it will be one that accepts reasonable resource requirements for full nodes.
 

humanitee

Member
Sep 7, 2015
80
141
So far it hasn't come from a rising exchange rate; it's come from inflation due to the generation of new coins.

(Price) deflation of a money happens when the economy employing it increases in productivity and efficiency; it is not an inherent property of a scarce money. You therefore cannot base the incentive structure that maintains the scarce properties of the money on price deflation (except by somehow guaranteeing continually increasing productivity in said economy).
Deflation can occur simply by lessening the available supply of money (users buying available supply, for example).
 
Last edited:

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
Two points:

1) History is not a guarantee of the future, but it is an equally bad error to say that history is not an indicator of the future. If this were the case, our very lives as we know them would not be possible. We would never have any basis for thinking that any of our actions would produce the same effect as they always had; we really could not effectively do anything. Today are objects going to fall down, or up...or maybe stay suspended in mid-air? They have historically almost always fallen downward, but if that is not an indicator of what they will do today, I'd better chain myself to this telephone pole for fear of floating into outer space.

2) The ability of the market to switch protocols doesn't matter IF the current popular protocol actually does use something near the optimum blocksize cap (including no cap, if that is optimal). What the ability to switch protocols* does mean is that the blocksize cannot be set with arbitrary conservativeness. Some acceptance of risk about the future must be accepted, or else another protocol will take a rational level of risk with bigger blocks and enjoy market popularity. This ties back to point 1, as others have underscored.

*not something like Litecoin, which is not just a different protocol but a whole new ledger; the idea that we should switch ledgers whenever we switch protocols cuts the ground from underneath the whole idea of cryptocurrencies as stores of value. In such a world only very active and extremely talented investors could ever trust the system.
 
Last edited:
  • Like
Reactions: awemany and Peter R

Yoghurt114

New Member
Sep 17, 2015
21
8
Bitcoin can be both a payment network and a settlement network. You can spend 0 confirmations, so it's instant. If you want security, you must wait. Sending digital USD through some service or bank is also not "instant."
That Bitcoin has been used as a payment network since its inception demonstrates as much. But it is only useful as a payment network because the first-seen policy is prevalent throughout the network. That this holds is not a guarantee - first-seen is an unenforceable policy - and it is a faulty assumption on which to base secure and instant payments, as is often (wrongly) perpetuated. It 'works' for this reason, and no other. Various other schemes allow for on-the-blockchain payments to occur instantly, such as green-address payments - but those introduce a trust relation. Further, to be somewhat pedantic, I said Bitcoin is not an *efficient* payment network, which it isn't: an efficient payment network doesn't require a broadcast to every participant.

I also have qualms with your IBLT explanation. An IBLT still requires bandwidth, which could still be significant if blocks were large enough. It definitely scales with block size, albeit with greatly reduced bandwidth costs.
Yes. But the bandwidth required would be independent of the amount of data needed to transmit a full block, and would therefore not affect the orphan rate. The paper holds so long as propagation impedance is not 0; an O(1) block propagation scheme such as IBLT could potentially make it 0 (see the nuance with this line of thought in previous posts).
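To make the "non-zero impedance" point concrete, here is a toy model in Python (all numbers are made up for illustration): with Poisson block arrivals at a mean interval T, a block that takes tau seconds to propagate is orphaned with probability roughly 1 - exp(-tau/T). If tau grows with block size, so does orphan risk; an O(1) scheme that makes tau constant flattens this curve entirely.

```python
import math

T = 600.0            # expected block interval in seconds
RATE = 1_000_000.0   # assumed effective propagation rate in bytes/s (hypothetical)

def orphan_risk(block_bytes: float) -> float:
    """P(a competing block is found while ours propagates), Poisson arrivals."""
    tau = block_bytes / RATE          # propagation delay grows with block size
    return 1.0 - math.exp(-tau / T)

for mb in (0.1, 1.0, 8.0, 32.0):
    print(f"{mb:>5.1f} MB block -> orphan risk ~ {orphan_risk(mb * 1e6):.3%}")
```

With size-dependent tau the marginal cost of block space is non-zero, which is exactly what lets a fee market form; replace `block_bytes / RATE` with a constant and every block size carries the same risk.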

Don't forget miners choose what to include. Why include a transaction that isn't paying at least some tiny fee? I wouldn't, even if propagation was as you describe.
I agree. But the maths described in the paper would give such a result. Personally, I feel a fee market based on the preferred time at which a transaction is included is possible, though I have little evidence to back this up.

BIP101 can be soft-forked to provide a cap, so it doesn't worry me, and the fact that its bounds are so high is encouraging compared to the other proposals, which limit the cap to a small fraction of BIP101's. I much prefer it, simply because we won't have to revisit this issue for a much longer time.
Bear in mind that a soft fork without miner consensus can only be extremely messy. If miners have an incentive to create larger blocks, it will be a problem to soft-fork a cap on top of BIP 101 or any other proposal. I'd rather we come up with a solution that avoids the need for a soft fork later down the line.

I do agree not having to have this excruciating debate all over again in a few years is an important consideration.
 
  • Like
Reactions: humanitee

Yoghurt114

New Member
Sep 17, 2015
21
8
Your outlook seems driven by fear, and an irrational demand for certainty. These are not the motivations that drove Satoshi to introduce bitcoin.
I would argue to the contrary. I perceive the very reason Satoshi invented Bitcoin was to be certain of a money absolutely free of manipulation by any corruptible entity. I also wouldn't call this fear, but rather an inevitable reaction to the existence of monies that adhere to the opposite.

That cryptocurrency will be one that relies on productivity growth, spurred in part by sound money. It will be one that solves the propagation challenges without imposing a restrictive blocksize limit. And it will be one that accepts reasonable resource requirements for full nodes.
Agreed fully.

But many of these issues have not been solved in Bitcoin. It would be excellent if they were, but we are not there yet. I see no reason to set these issues aside while no solution exists.
 

Yoghurt114

New Member
Sep 17, 2015
21
8
Deflation can occur simply by lessening the available supply of money (users buying available supply, for example).
I would think this only happens when confidence in the money appreciates, which is true when productivity of the associated economy increases (or is perceived to be so).

Two points:
1) History is not a guarantee of the future, but it is an equally bad error to say that history is not an indicator of the future. If this were the case, our very lives as we know them would not be possible.
It is good practice to assume the worst case when security is part of the equation. If past results indicate a 'good' (or in this case 'suitable') case to be likely, then I consider discarding that indication immediately to be the correct way forward, and instead working with any conceivable 'bad' or 'worst' case. It's a pessimistic way of thinking, sure, but it'll help you get out of a fire or prevent the fire altogether.

2) The ability of the market to switch protocols doesn't matter IF the current popular protocol actually does use something near the optimum blocksize cap (including no cap, if that is optimal). What the ability to switch protocols* does mean is that the blocksize cannot be set with arbitrary conservativeness. Some acceptance of risk about the future must be accepted, or else another protocol will take a rational level of risk with bigger blocks and enjoy market popularity. This ties back to point 1, as others have underscored.
I actually do agree. The current block size limit of 1 MB appears to be completely arbitrary, and I agree it is neither suitable for today's level of adoption nor appropriate given the recent strides that have been made in improving the software that deals with the contents constrained by the limit.

As for proposals that include arbitrary values (nearly all of them) bearing zero information about, among other things, the consequences for the network's validation capacity: there was a discussion a few months back (was gonna say half a year, but it happened in July - time really does move quicker for Bitcoin) concerning a 'min spec' for Bitcoin auditors, which I found very interesting:

http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-July/009303.html

I think such a proposed min spec would greatly help everyone get a much clearer view on the 'arbitrariness' of values.

*not something like Litecoin, which is not just a different protocol but a whole new ledger; the idea that we should switch ledgers whenever we switch protocols cuts the ground from underneath the whole idea of cryptocurrencies as stores of value. In such a world only very active and extremely talented investors could ever trust the system.
Agreed.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I just thought of a reductio ad absurdum that explains in a simple way why true O(1) block propagation is not possible:

Let the amount of information required to communicate an empty block be S_e and let the amount of information required to communicate just the transactions in a (non-empty) block (assuming an arbitrary level of coding gain) be S_tx. The total amount of information required to communicate a non-empty block is then S_e + S_tx. Thus, communicating a non-empty block necessarily requires transmitting more bits than an empty block unless S_tx is identically zero.

Therefore, true O(1) block propagation is only possible if exactly zero information about the transactions included in the block is transmitted. But if no information about the transactions in a block is transmitted, how can the other miners figure out what was in it? The only way would be if they already knew ahead of time with 100% certainty. But if they already knew exactly what the block contents were going to be, then what purpose did the block serve?
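Informally, the same reductio can be phrased in Shannon's terms (my sketch, not taken from the paper itself):

```latex
\[
  S_{\text{total}} = S_e + S_{tx}, \qquad S_{tx} \;\ge\; H(B \mid P),
\]
where $B$ is the block's transaction content and $P$ is everything the
receiver already knows. True $O(1)$ propagation requires $S_{tx}$ to stay
bounded as the block grows, but $H(B \mid P) = 0$ holds only when
$\Pr[B = b \mid P] = 1$ for some fixed $b$; that is, the receiver already
knew the block's contents with certainty, in which case the block
conveyed nothing new.
```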
 
Last edited:
  • Like
Reactions: awemany

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Yoghurt114:

In this whole discussion, I am pondering about where you are actually coming from. On one hand, you worry about full nodes being able to audit the blockchain because resources needed might be too high to actually validate them.

On the other hand, you worry about O(1) block propagation, which necessarily includes O(1) block validation.

A miner needs to validate the block chain. How can validation at the same time become more and less expensive?
 
  • Like
Reactions: Peter R

Yoghurt114

New Member
Sep 17, 2015
21
8
Thus, communicating a non-empty block necessarily requires transmitting more bits than an empty block unless S_tx is identically zero.
I can communicate to you a complete block with all contents (size is irrelevant) except the nonce. I send the nonce to you in 2 minutes. At that time, I will have magically communicated a valid full block to you at the expense of 4 bytes - a constant amount irrespective of the size of the block.

awemany said:
On the other hand, you worry about O(1) block propagation, which necessarily includes O(1) block validation.
Validation effort is still O(n) per node in any case. It takes O(1) to validate an IBLT / do set reconciliation, and the IBLT has constant size, allowing it to propagate irrespective of block size - but it doesn't allow you to forego validation of its contents.

awemany said:
A miner needs to validate the block chain. How can validation at the same time become more and less expensive?
The paper hinges on the propagation impedance being non-zero; it is non-zero when orphan rates vary with block size. This is intuitively simple: large blocks are at greater risk of being orphaned than smaller blocks are.

However, in a system with O(1) constant-size block propagation techniques, the propagation impedance would sink to 0; the orphan risk becomes independent of block size, and the result of the paper no longer holds because the block space supply curve moves toward infinity.

Full validation has little to do with all that; it is O(n) in any case (unless we have SNARKs or some such - they would allow O(1) validation, but they have many problems) because you still need to see and inspect each and every transaction that goes into the blockchain. Whether you validate before or after it exists in a block is not relevant - it remains O(n) - though the former would further push the impedance toward zero.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
@Yoghurt114

So we can finally dispense with the extremely popular FUD argument, made by the small blockists for the last 6 months, of a large-miner, large-block attack against small miners dependent on propagation latency - correct?
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
I can communicate to you a complete block with all contents (size is irrelevant) except the nonce. I send the nonce to you in 2 minutes. At that time, I will have magically communicated a valid full block to you at the expense of 4 bytes - a constant amount irrespective of the size of the block.



Validation effort is still O(n) per node in any case. It takes O(1) to validate an IBLT / do set reconciliation, and it has constant size allowing it to propagate irrespective of block size, but it doesn't allow you to forego validation of its contents.
This is just shifting stuff around: any algorithm dealing with n transactions is going to have to touch each transaction at least once, meaning n times something is the lowest achievable complexity. Of course, you can (and will - it is being done already) 'prevalidate' transactions before a block arrives. But again, you'll eventually run into fundamental limits.

Consider this:

Assume the network's size is s and the propagation speed is c. A transaction is then in flight for a time uncertainty delta_t = s/c.

Let the average rate of transactions be our used and abused n. The number of uncertain transactions u at a single point in time (from a single node's vantage, if you want to further account for SRT) is

u(n) = n * delta_t = (n*s)/c

Note that u(n) is in Omega(n)!

Any set reconciliation scheme has to deal with that amount of uncertainty - and thus entropy - to communicate.

And I fail to see how that is fundamentally not bounded below by at least one bit of information per transaction: 'include this transaction, or leave it out'.
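Plugging hypothetical numbers into the formula above (an Earth-spanning path and fibre-speed propagation are my assumptions, not anything measured), the count of in-flight transactions grows linearly with the transaction rate:

```python
s = 2.0e7        # assumed network "size": longest propagation path, in metres
c = 2.0e8        # assumed signal speed in fibre, m/s (about 2/3 of light speed)
delta_t = s / c  # time window during which a transaction is "in flight"

for n in (10, 1_000, 100_000):                     # transactions per second
    u = n * delta_t                                # u(n) = n*s/c
    print(f"n = {n:>7} tx/s -> ~{u:,.0f} in-flight (uncertain) transactions")
```

With these numbers delta_t is 0.1 s, so at 100,000 tx/s roughly 10,000 transactions are uncertain at any instant - the Omega(n) floor on what any reconciliation scheme must communicate about.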
 
  • Like
Reactions: Peter R

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I can communicate to you a complete block with all contents (size is irrelevant) except the nonce. I send the nonce to you in 2 minutes. At that time, I will have magically communicated a valid full block to you at the expense of 4 bytes - a constant amount irrespective of the size of the block.
I'm not disagreeing with this. That's how pools work today. Worker #7 finds the magic nonce, communicates it, and the pool knows that Block Template #7 + nonce is now a valid block: O(1) intrapool block propagation. That's possible because the pool already knows with certainty what each of its workers is working on. That's also why the fee market breaks down if all the hash power joins the same pool, as I said in my talk.

But I think you're missing the absurdum part of my post if you try to apply this to the entire network at once. The only way true O(1) block propagation is possible across the entire network is if the entire network knows with 100% certainty what all the hash power is working on at every point in time. That means the network has already come to consensus on the next block (it's either block A, B, C, D, …, Z but nothing else) and the PoW only serves to pick, for example, Block C as opposed to A, B, or D. But if the network had already come to consensus, then what was the point of the block in the first place? Why not use an algorithm like Nxt's to pick which of blocks A-Z is next?
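The intrapool case is easy to sketch (toy difficulty and a made-up template string; real Bitcoin headers differ): because pool and worker share the template in advance, the only thing the worker ever has to transmit is the 4-byte nonce.

```python
import hashlib

def block_hash(template: bytes, nonce: int) -> int:
    """Double-SHA256 of template + 4-byte nonce, as an integer (toy header)."""
    h = hashlib.sha256(
        hashlib.sha256(template + nonce.to_bytes(4, "little")).digest()
    ).digest()
    return int.from_bytes(h, "big")

TARGET = 2**256 // 5_000                            # toy difficulty: ~1 nonce in 5,000
template = b"ver|prev_hash|merkle_root|time|bits"   # shared with every worker in advance

# Worker: grind nonces locally; only the winning 4 bytes go back to the pool.
nonce = 0
while block_hash(template, nonce) >= TARGET:
    nonce += 1

# Pool: reconstructs the full solved block from (known template) + 4 bytes.
assert block_hash(template, nonce) < TARGET
print(f"share found; communicated in 4 bytes: nonce = {nonce}")
```

The trick only works because there is exactly one template everyone agreed on beforehand, which is precisely the degenerate "consensus already reached" case described above.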
 
Last edited:

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Consider this:

Assume the network's size is s and the propagation speed is c. A transaction is then in flight for a time uncertainty delta_t = s/c.

Let the average rate of transactions be our used and abused n. The number of uncertain transactions u at a single point in time (from a single node's time if you want to further account for SRT) is

u(n)= n * delta_t = (n*s)/c

Note that u(n) is in Omega(n)!
Great point!
 
  • Like
Reactions: awemany

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Peter R.: Thanks!

I was pondering this today and I don't yet have the full mathematical machinery firmly in place to prove it formally. But I think I can do the usual physics thing of making a graphical argument :) See below.

In any case, I am pretty sure now that there is no way out of the above u(n) factor in the set reconciliation.

The way I think about it is as follows: Imagine a diagram of all transactions beginning at the Genesis block and until eternity (assuming Bitcoin survives), for a single chosen node. [Imagine you are the timeless super-god, being able to observe the Bitcoin network in its full time-wise totality, but only from a single node].

Imagine the diagram to have time on the X axis and the transaction itself on the Y axis (in the sense that transactions are some finite number of q bits, so they are equivalent to a finite natural number < 2^q, which is strictly smaller than the set of real numbers R).

Ok. Now (un)fortunately, each transaction on that diagram would not be a single dot, but a small streak timewise, with the streak having a length of s/c - a time uncertainty. Your node can draw dots for the transactions as they arrive in local time, but it can only draw uncertainty streaks for when those transactions will arrive at other nodes' local times. [This is even limited fundamentally by Einstein's special relativity. There is no absolute time.]

So... after one imagines this pattern of streaks now - what does 'creating a block' actually do?

Creating a block is drawing boundaries, not unlike a Venn diagram, on this infinite set of transactions.

There are limits on how you can draw these boundaries and have a functioning Bitcoin system:
One can certainly imagine drawing these boundaries according to complicated rules about the transactions themselves (fees etc.), which would mean a very detailed structure on the Y axis. On the X axis, however, you are limited. Due to the 10-minute block rule, you are drawing boundaries on this graph with a mean frequency on the X axis. And the right side of these boundaries (block Venn diagrams!) is strictly bounded by the future, which is globally unknown to us mortal Earthlings. The left side of each boundary is - to some extent(*) - movable. A certain miner (Venn diagram artist) might certainly go and include some very old transactions in his block (assuming they are still valid).

But look at the right side. Somehow, you have to semiregularly pierce through the set of transactions with a vertical line - or, if you prefer, an arbitrarily complex, jagged, and moving line - but that only makes the line longer.

And here is the problem: you will always have uncertainty here. Again, with your full node's local timestamps you could draw all transactions as *points* - however, and again, given the uncertainty across the rest of the network, those transactions are not points; your limited information means they are actually streaks.

However, the Venn diagrams drawn on all nodes, with all their local times, need to be such that they all agree on which streaks to include and which to exclude.

And herein lies the problem for any O(1) propagation scheme: you have to draw that Venn line somewhere. And that always means you will cut through a number of transactions proportional to u(n), and thus proportional to the total transaction rate, for which you need to settle the boundaries with your peers - meaning you need to transmit information about all of these transactions.

Again: the more transactions enter the network, the denser and more overlapping the streaks on the X axis become - at the rate described by u(n) above - meaning this is a self-limiting process!

---
(*) - Note that a closer look gets complex as the subset of valid transactions at each point in time is a very complex subset of the Y axis. This is not at all of importance here.
---

Last but not least: pondering (and going a little crazy) about this, and about @Peter R's 'wave function collapse', I really start to think there might be an interesting correspondence between the Bitcoin network and the time-energy uncertainty from quantum mechanics: information (entropy) can be expressed in terms of energy, and there is the above-described time uncertainty, so this naturally leads to something uncannily resembling the dE*dt > h Heisenberg uncertainty...
 
  • Like
Reactions: Peter R

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@awemany

So because entropy (transactions) can enter the network through any node, and because the nodes are physically separated, it is physically impossible for the network to agree on the present state of the ledger due only to the speed-of-light constraint (we don't even need the Shannon-Hartley theorem for this point). This had never occurred to me until you mentioned it! I think it further strengthens the idea that O(1) block propagation is not possible (for the reasons you point out in your visualization). It would be great to see this diagram you're proposing!

I want to release a new revision of my fee market paper, and one of the things I want to do is show with a very simple argument that true O(1) block propagation is impossible (assuming there's more than one miner or mining pool). Then I think everyone will have to concede that "yes, a fee market exists." Of course the small-block proponents will then suggest that the equilibrium block size Q* could become too large for decentralization unless…uh…a centralized group of developers intervenes…but I'll call that progress. It will mean that the idea of the propagation impedance is sound.

Regarding relationships with quantum mechanics, I'm quite certain you're right. I believe there will be some "uncertainty principle" and I wouldn't be surprised if we can even recycle some of the formalisms from physics. Bitcoin is an exciting new field of research; it mixes economics, physics, computer science, etc., all together!
 
Last edited:
  • Like
Reactions: awemany

humanitee

Member
Sep 7, 2015
80
141
I can communicate to you a complete block with all contents (size is irrelevant) except the nonce. I send the nonce to you in 2 minutes. At that time, I will have magically communicated a valid full block to you at the expense of 4 bytes - a constant amount irrespective of the size of the block.
I might be wrong here - it's been a while since I read into IBLTs - but when a miner sends the information for a mined block, he sends more or less a clever hash table of the transactions included. Now I receive it, unpack it, see that I'm missing transactions (because they haven't propagated all the way to me yet, I didn't see them, or something along those lines) and have to fetch those transactions before I can verify the block and pass it along. So it's not just 4 bytes; it could be many. The IBLT itself isn't just 4 bytes either, and has to be sent as well. If the block is bigger, the IBLT is proportionally bigger.

Look at the discrepancies in mempool between TradeBlock and Blockchain.info for a good example of this.
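The mechanics being described can be sketched with a minimal (and much simplified) IBLT for txid set reconciliation. The key point: the table must be provisioned for the expected set *difference*, and the receiver still has to fetch whatever transactions it turns out to be missing. All names, sizes, and txids here are invented for illustration.

```python
import hashlib

K = 3        # subtables / hash functions
CELLS = 60   # total cells; must be provisioned for the expected set difference

def _h(x: int, salt: int) -> int:
    d = hashlib.sha256(f"{salt}:{x}".encode()).digest()
    return int.from_bytes(d[:8], "big")

def _chk(x: int) -> int:
    return _h(x, 12345)   # per-key checksum, validates that a cell is "pure"

class IBLT:
    def __init__(self, m: int = CELLS):
        self.sub = m // K
        n = self.sub * K
        self.count = [0] * n
        self.keysum = [0] * n
        self.chksum = [0] * n

    def _cells(self, key: int):
        # one cell per subtable, so a key always maps to K distinct cells
        return [i * self.sub + _h(key, i) % self.sub for i in range(K)]

    def _apply(self, key: int, sign: int):
        for c in self._cells(key):
            self.count[c] += sign
            self.keysum[c] ^= key
            self.chksum[c] ^= _chk(key)

    def insert(self, key: int): self._apply(key, +1)
    def delete(self, key: int): self._apply(key, -1)

    def peel(self):
        """Recover (inserted-only, deleted-only) keys by draining pure cells."""
        added, removed = set(), set()
        progress = True
        while progress:
            progress = False
            for c in range(len(self.count)):
                if self.count[c] in (1, -1) and self.chksum[c] == _chk(self.keysum[c]):
                    key, sign = self.keysum[c], self.count[c]
                    (added if sign == 1 else removed).add(key)
                    self._apply(key, -sign)   # remove the recovered key everywhere
                    progress = True
        return added, removed

# Miner encodes the txids of its block; the receiver deletes what it already
# has in its mempool, then peels the remainder to learn exactly what to fetch.
block_txids = {101, 202, 303, 404, 505}
my_mempool  = {101, 202, 404, 606}      # 303 and 505 never reached us; 606 is extra

t = IBLT()
for x in block_txids:
    t.insert(x)
for x in my_mempool:
    t.delete(x)
missing, extra = t.peel()
print("must fetch:", sorted(missing), "| not in block:", sorted(extra))
```

Shared transactions cancel out of the table entirely, which is why the transmitted size tracks the mempool discrepancy rather than the block size - and why large discrepancies (as between TradeBlock and Blockchain.info) force a larger table or extra round trips.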
 
Last edited:

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
So because entropy (transactions) can enter the network through any node, and because the nodes are physically separated, it is physically impossible for the network to agree on the present state of the ledger due only to the speed-of-light constraint (we don't even need the Shannon-Hartley theorem for this point).
The point of having a universally agreed-upon ledger is that a ledger is only useful for forming a currency if all observers agree on transaction ordering (so that conflicting transactions can be resolved deterministically).

At the point we decided to do that, we were already trying to create a system that emulates a property which does not exist in nature: in nature, different observers can disagree about the ordering of events without either of them being wrong.
 
  • Like
Reactions: awemany

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Justus Ranvier

I can't tell if you're agreeing or disagreeing with what @awemany and I are postulating.

Regarding the ordering of events: two observers can only disagree (and both be correct) about events separated by space-like intervals. This is actually related to the point @awemany is making when he talks about s/c and how coming to a consensus requires the flow of information...
 
Last edited: