Gold collapsing. Bitcoin UP.

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
You can't make economic sense to someone who is being paid not to understand economics (to paraphrase something someone else once said).

The "service level" concept that you are talking about is what I mostly associate with monopolies which can afford to offer that. It's disappearing in the face of strong competition, where service becomes more egalitarian.

If you're competing on transaction confirmation time, you're competing against the rest of the network and cold, hard hashing-power probability. My answer: let the miners sort that out; they are free to mine whatever transactions they want. Some company could build a network of nodes linking to "express miners", or to "charity miners", or whatever.

Let's fix Bitcoin scaling first before tackling that business idea.
 

jl777

Active Member
Feb 26, 2016
279
345
My point is that miners are not Bitcoin Core devs, so how exactly will they "sort things out"?
Also, several scaling ideas directly support, or are even enabled by, different service levels; see my post above about submitting to all 10 interleaves, which gives an expected 1-minute confirm time. Subchains also let miners make more money. If BU becomes the best version for miners, it wins. Simple as that.
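To see why the 1-minute figure holds, here's a throwaway simulation (my own toy model; the 10-minute per-subchain interval and independence are assumptions, not any concrete spec). A tx submitted to all 10 subchains only waits for the first one to find a block, and the minimum of 10 exponentials with mean 10 is exponential with mean 1:

```cpp
// Toy Monte Carlo: N independent subchains, each finding blocks as a
// Poisson process with a 10-minute mean. A tx submitted to all N is
// confirmed by whichever subchain finds a block first.
#include <algorithm>
#include <iostream>
#include <random>

int main() {
    const int N = 10;                     // interleaved subchains
    const double mean_minutes = 10.0;     // per-subchain block interval
    const int trials = 1000000;

    std::mt19937_64 rng(1);
    std::exponential_distribution<double> wait(1.0 / mean_minutes);

    double total = 0.0;
    for (int t = 0; t < trials; ++t) {
        double first = wait(rng);         // wait on subchain 0
        for (int i = 1; i < N; ++i)
            first = std::min(first, wait(rng));
        total += first;
    }
    std::cout << "mean confirm: " << total / trials << " min\n";  // ~1.0
}
```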

Without the tools that allow miners to implement service levels, they won't be able to implement service levels. This is self-evident.

Once miners get tools that allow them to boost their revenues, they will use them, unless doing so has some negative aspect, like obsoleting all their equipment.

Work backwards from the required solution. Anything that isn't supported by miners won't have the hashrate to secure it; no business in its right mind will support it, and if no businesses support it, users won't be able to use it.

Like it or not, the miners control the blocks.
If it is made push-button easy for miners to create service levels that get them more revenue, that has a much better chance of working than appealing to their altruism, or just making an altcoin.

James

P.S. Alternatively, someone can crack SHA256 and BU can fund new hashrate using QC
 
  • Like
Reactions: AdrianX

steffen

Active Member
Nov 22, 2015
118
163
In my opinion, making the weak blocks append-only (i.e. just like real blocks) is both the simplest and the most useful option. What is the benefit of allowing the "diff blocks" to remove transactions?
Miners don't necessarily know the same transactions because they haven't crossed the network yet. I believe that is the most obvious reason for one miner not to include a transaction another miner tries to include. It could also be that the second miner considers the transaction spam, or its fee too low. Anyway, I think removal of transactions would seldom occur in practice, so "As a side effect, transactions get a better confirmation feedback" (compared to our present situation), as the bitcoin9000 paper states.
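To make concrete what append-only buys, here is a minimal sketch (my own toy types, not the bitcoin9000 wire format): each diff block only ever adds transactions, so reconstructing the candidate block is a simple walk up the parents, with no removal case to handle.

```cpp
// Toy append-only "diff block": references a parent and carries only
// the transactions it adds.
#include <string>
#include <vector>

struct DiffBlock {
    const DiffBlock* parent = nullptr;    // previous weak block, if any
    std::vector<std::string> added;       // txids appended by this diff
};

// Full candidate block = concatenation of every ancestor's additions.
std::vector<std::string> assemble(const DiffBlock& tip) {
    std::vector<std::string> txs;
    if (tip.parent) txs = assemble(*tip.parent);
    txs.insert(txs.end(), tip.added.begin(), tip.added.end());
    return txs;
}
```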
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Miners don't necessarily know the same transactions because they haven't crossed the network yet. I believe that is the most obvious reason for one miner not to include a transaction another miner tries to include. It could also be that the second miner considers the transaction spam, or its fee too low.
Yes, I agree completely with this. Miners could learn about transactions one at a time, or as new transactions included in a weak block, as you mentioned. However, I see this as orthogonal to the need to remove a transaction from a previous weak block.
Anyway, I think removal of transactions would seldom occur in practice, so "As a side effect, transactions get a better confirmation feedback" (compared to our present situation), as the bitcoin9000 paper states.
Fair enough, but then what is the purpose of allowing transaction removal in the first place? It makes the protocol more complex (append-only is simple, but if the Δ-blocks can also encode instructions to remove transactions from previous Δ-blocks, the protocol is more complex) and it reduces the security of a fractional confirmation.
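To illustrate with a toy sketch (made-up types mirroring the append-only one above, not any proposed format): once a Δ-block can remove, assembly needs set semantics, and a transaction "confirmed" by an earlier Δ-block can later vanish.

```cpp
// Same toy model, but with removal allowed. Assembly now needs set
// semantics, and erase() can undo a transaction's earlier fractional
// confirmation: the extra complexity and the weaker guarantee, in code.
#include <set>
#include <string>
#include <vector>

struct DeltaBlock {
    const DeltaBlock* parent = nullptr;
    std::vector<std::string> added;
    std::vector<std::string> removed;     // the complication lives here
};

std::set<std::string> assemble(const DeltaBlock& tip) {
    std::set<std::string> txs;
    if (tip.parent) txs = assemble(*tip.parent);
    for (const auto& t : tip.added)   txs.insert(t);
    for (const auto& t : tip.removed) txs.erase(t);  // un-confirms t
    return txs;
}
```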
 

Roger_Murdock

Active Member
Dec 17, 2015
223
1,453

Turning back to the extended "blocked stream" analogy, the amount of "forking pressure" that's attempting to sweep away Blockstream's 1-MB "dam" can be thought of as roughly corresponding to the area marked "deadweight loss" in the above graph. So about how big might that area be right now? Well, according to blockchain.info, total transaction fees are currently about $20,000 per day and the total number of daily transactions is about 200,000, which gives an average fee of about $0.10 per transaction. Obviously it's impossible to know exactly what the free-market equilibrium would be in terms of fees and number of transactions in the absence of an artificially-constrained block size limit. But let's just assume that, without that limit in place, we'd be seeing about 400,000 transactions per day with an average fee of $0.05. (Those seem like at least reasonable guesses; I don't think they're orders of magnitude off, and if anything they're conservative in the sense that they'll overstate the size of the current deadweight loss.) Using those numbers, we'd get a triangle with an area on the order of (400,000 - 200,000) x ($0.10 - $0.05) = $10,000 per day. For a network with a $6 billion market cap that's generating about $1.5 million total in new coins and transaction fees daily, that $10,000 figure seems ... fairly insignificant, no?
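In code, with the same assumed numbers (and noting that a strict triangle is half base times height, so the $10,000 rectangle is an upper bound of the same order of magnitude):

```cpp
// Back-of-envelope deadweight loss from the figures above.
#include <iostream>

int main() {
    double q_now = 200000,  fee_now = 0.10;   // observed (blockchain.info)
    double q_free = 400000, fee_free = 0.05;  // guessed free-market point
    double rect = (q_free - q_now) * (fee_now - fee_free);
    std::cout << "rectangle: $" << rect       // $10,000 / day
              << ", triangle: $" << rect / 2  // $5,000 / day
              << " per day\n";
}
```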

I've seen a lot of pessimism and even despair from supporters of Satoshi's original vision, with lots of us wondering: "how can it possibly be taking this long to fork?" But I wonder if, in light of the above, there isn't a case for the exact opposite conclusion. In other words, the real question might be: "how is Blockstream Core already having to work this hard to keep the 1-MB limit in place when the real pressure hasn't even started?"
 

jl777

Active Member
Feb 26, 2016
279
345
https://bitcoin.org/bitcoin.pdf

Is the above some sort of doctored version of the whitepaper? I keep trying to find where any 1MB limit is part of the protocol requirements. Even if we take the Satoshi white paper as omniscient gospel (it isn't), there is nothing in the original paper that says "thou shalt not make blocks bigger than 1MB".

In fact, the opposite can be inferred from this section:

"A block header with no transactions would be about 80 bytes. If we suppose blocks are generated every 10 minutes, 80 bytes * 6 * 24 * 365 = 4.2MB per year. With computer systems typically selling with 2GB of RAM as of 2008, and Moore's Law predicting current growth of 1.2GB per year, storage should not be a problem even if the block headers must be kept in memory"

He didn't specify that the 1MB hard limit on block size should be expanded in the future as network capacity increases, because a) it isn't part of the protocol and b) it is obvious and shouldn't even need to be stated.
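(As a quick check, the arithmetic in that quote holds up:)

```cpp
// Sanity check of the whitepaper's header-growth arithmetic.
#include <iostream>

int main() {
    long long bytes = 80LL * 6 * 24 * 365;   // 80-byte headers, 6 per hour
    std::cout << bytes << " bytes ~= " << bytes / 1e6 << " MB/year\n";
    // prints: 4204800 bytes ~= 4.2048 MB/year
}
```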
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@jl777

you're right, there is no block size limit in the white paper. From day one it was meant to be a temporary measure.

The cap was introduced by Satoshi and promoted by Hal Finney.

If memory serves, for the first year or so Bitcoin clients ran without the cap; the only limit was an implementation constraint, namely that a network message can't be larger than 32 MB.
 

steffen

Active Member
Nov 22, 2015
118
163
append-only is simple
Is it? Doesn't it require agreement on which weak block to append to? And how will it be negotiated which block is the one to append to? How can you append to a block if you (still) don't even know all the details of one or more of its transactions? The bitcoin9000 paper solves this problem by allowing miners to independently choose where to append their diff blocks. I guess they will most often choose to append to the longest diff-block chain.

I hope I am not just creating unnecessary confusion. I am still trying to learn about the different weak block proposals.
 

jl777

Active Member
Feb 26, 2016
279
345
In what year was it introduced?
Does anybody believe that, if it were being introduced now, a 1MB limit would be promoted by either of them?

1MB used to be a lot bigger when 1 Mbps was the typical home connection. Now that things are 10x faster, even in much of the world not usually considered advanced, the tradeoff decisions made for practical reasons in the PAST are no longer valid NOW.

This adherence to ancient history would be like all the chip and OS manufacturers sticking to the 640KB RAM limit, since that was what it was originally; plus, it implements a Turing machine, so it is equivalent to any other RAM limit. Better not to change it; maybe something will break.

Is it? Doesn't it require agreement on which weak block to append to? And how will it be negotiated which block is the one to append to? How can you append to a block if you (still) don't even know all the details of one or more of its transactions? The bitcoin9000 paper solves this problem by allowing miners to independently choose where to append their diff blocks. I guess they will most often choose to append to the longest diff-block chain.

I hope I am not just creating unnecessary confusion. I am still trying to learn about the different weak block proposals.
Some sort of deterministic affinity metric can be used to determine which stronger block a weaker block should append to. Maybe Hamming distance of the hashes, but anything deterministic would resolve such things. And if all the tx within a subblock are valid, why allow any to be invalidated?
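A sketch of what I mean (Hamming distance is just one candidate metric, and the types are made up): given the same candidate set, every node independently picks the same parent, with no negotiation.

```cpp
// Toy deterministic affinity: pick the parent block whose hash has the
// smallest Hamming distance to the weak block's hash. Ties would need
// a further deterministic rule, e.g. lowest hash wins.
#include <array>
#include <bit>        // std::popcount (C++20)
#include <cstdint>
#include <vector>

using Hash = std::array<std::uint64_t, 4>;   // 256-bit hash as 4 words

int hamming(const Hash& a, const Hash& b) {
    int d = 0;
    for (int i = 0; i < 4; ++i)
        d += std::popcount(a[i] ^ b[i]);     // count differing bits
    return d;
}

const Hash* pick_parent(const Hash& weak, const std::vector<Hash>& cands) {
    const Hash* best = nullptr;
    int best_d = 257;                        // more than the 256-bit max
    for (const auto& c : cands) {
        int d = hamming(weak, c);
        if (d < best_d) { best_d = d; best = &c; }
    }
    return best;                             // nullptr if cands is empty
}
```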

The indeterminacy of a tx's validity based on totally external factors is not a good thing.
A valid tx should be valid, and not be overridable by an external factor. Isn't that the philosophical argument against RBF?

If memory serves, for the first year or so Bitcoin clients ran without the cap; the only limit was an implementation constraint, namely that a network message can't be larger than 32 MB.
Even with a network message limit (which is a good thing), it is possible to do a tx-level sync. So if this was the determining factor, that strongly indicates the cap was an implementation shortcut to get a stable system, not any deeper fundamental thing. Using some magical historical implementation choice to justify not changing things is the Luddite position, isn't it?

Remove the limits.

Subchains, interleaves, thin blocks, encoded blocks, compressed blocks: these take time and effort, as opposed to:
#define ARBITRARY_BLOCKSIZE 1000000
->
#define ARBITRARY_BLOCKSIZE 10000000
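(The real constant isn't called that, of course. In Bitcoin Core of this vintage it's roughly the following, going from memory, so the exact names and file may be slightly off:)

```cpp
// Approximately what ships in Bitcoin Core around this time
// (consensus/consensus.h, quoted from memory):
static const unsigned int MAX_BLOCK_SIZE = 1000000;              // 1 MB
static const unsigned int MAX_BLOCK_SIGOPS = MAX_BLOCK_SIZE / 50;
```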

So if the constraint is minimizing the time spent increasing tx capacity, so that the Core devs can take long vacations, have island retreats, and spend all their time on politics and business issues, OK, do the above.

But if improving Bitcoin is the issue, then we need to find the devs who are able to implement the scalable solutions (gee, where can we find such devs?) and get a working version on testnet.
 
  • Like
Reactions: majamalu

jl777

Active Member
Feb 26, 2016
279
345
The problem with anything "temporary" that benefits the decision makers, like income tax, is that the decision makers would have to go against their economic interests to remove the temporary measure.

In the PoW world, the miners get to choose the hardfork that is used by the network. Trying to convince them to do anything against their economic interests is like asking politicians to please eliminate the "temporary" taxes.

good luck with that
 

rocks

Active Member
Sep 24, 2015
586
2,284
Like it or not, the miners control the blocks.
This is absolutely wrong and something I've addressed in this thread before.

Users control the blocks and the rules by selecting which blocks to accept. Users are free to accept or invalidate any set of blocks they wish.

If enough users agree on a new set of rules they are in absolute control of that and you have a new chain.

Miners only control transaction ordering within a given set of rules. Users decide everything else.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
In defence of @jl777: @rocks is right so long as there is consensus. If all retailers and exchanges upgraded to a client that would accept bigger blocks, the control miners have would be diminished; any one of them could break from the pack without risking being alienated from the network, and all the others would follow. However, when there is confusion, it's the miners who choose. I don't think Blockstream is confused; they are actively advising miners while their shills confuse the rest of the network.
 

jl777

Active Member
Feb 26, 2016
279
345
Consider the probability of a majority of users reaching consensus vs. the probability of a majority of mining hashpower deciding that more money is better than less money.

Gee, let me get my probability calculator and see which has the higher chance.

Of course, if magically the vast majority of users change to a hardfork, then that makes things simple. So let's just do a forced field update? Oh, that button doesn't work? Saying X is true if <nearly impossible thing happens> does not make X the more likely thing to happen.

This reminds me of the "send me all your bitcoins" attack. If you assume it works, then Bitcoin is not secure, since clearly if all your bitcoins are sent to the attacker on demand, it isn't secure. What is missing is the real-world likelihood that everyone will send all their bitcoins to the attacker. Just assuming this happens by magic (or maybe it is a Nigerian prince who needs a little help with his 1 million BTC), and concluding that the miners therefore have no power, is ignoring reality.

Again, I speak of realistic real-world outcomes. Or is this a theological debate where what actually happens is not relevant?
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
If you had a probability calculator, you'd see that miners remain independent when the majority of nodes dictate the block size, and that in the long run they will earn more money and the bitcoin they save will increase in value.

But yes, we have a perversion of incentives: in the short term they can earn more by succumbing to centralised control, but they ultimately earn less money. At the moment they are working against their self-interest and in support of the interests of the developers who have control.

If it weren't for that fact, "Satoshi's Vision" wouldn't be a viable proposal.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595

Turning back to the extended "blocked stream" analogy, the amount of "forking pressure" that's attempting to sweep away Blockstream's 1-MB "dam" can be thought of as roughly corresponding to the area marked "deadweight loss" in the above graph. So about how big might that area be right now? Well, according to blockchain.info, total transaction fees are currently about $20,000 per day and the total number of daily transactions is about 200,000, which gives an average fee of about $0.10 per transaction. Obviously it's impossible to know exactly what the free-market equilibrium would be in terms of fees and number of transactions in the absence of an artificially-constrained block size limit. But let's just assume that, without that limit in place, we'd be seeing about 400,000 transactions per day with an average fee of $0.05. (Those seem like at least reasonable guesses; I don't think they're orders of magnitude off, and if anything they're conservative in the sense that they'll overstate the size of the current deadweight loss.) Using those numbers, we'd get a triangle with an area on the order of (400,000 - 200,000) x ($0.10 - $0.05) = $10,000 per day. For a network with a $6 billion market cap that's generating about $1.5 million total in new coins and transaction fees daily, that $10,000 figure seems ... fairly insignificant, no?

I've seen a lot of pessimism and even despair from supporters of Satoshi's original vision, with lots of us wondering: "how can it possibly be taking this long to fork?" But I wonder if, in light of the above, there isn't a case for the exact opposite conclusion. In other words, the real question might be: "how is Blockstream Core already having to work this hard to keep the 1-MB limit in place when the real pressure hasn't even started?"
You know, it never hit me until your post that we could already estimate the value of the deadweight loss! I think your figure of $10,000/day sounds about right.

Comparing it to other quantities with units of "dollars per day", the deadweight loss is 0.7% of the new coin issuance ($1.4M/day), 0.3% of the USD exchange trade volume ($3M/day), and 0.007% of the estimated transaction volume ($140M/day). So yeah, the deadweight loss due to the 1MB limit is fairly insignificant, as of today!
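Spelled out, using the $10,000/day estimate from above:

```cpp
// The "dollars per day" comparisons, computed directly.
#include <cstdio>

int main() {
    double dwl = 10000;                                  // deadweight, $/day
    std::printf("vs new coins ($1.4M/day): %.2f%%\n", 100 * dwl / 1.4e6);
    std::printf("vs USD trade ($3M/day):   %.2f%%\n", 100 * dwl / 3.0e6);
    std::printf("vs tx volume ($140M/day): %.4f%%\n", 100 * dwl / 1.4e8);
    // ~0.71%, ~0.33%, ~0.0071%
}
```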

One thing I'm hoping is that Bitcoin reveals that certain "socioeconomic constants" exist for large groups of humans. For example, perhaps only so much economic pressure can build up before we spontaneously reorganize. There might be a pattern such that, if we define

alpha = (deadweight loss - switching cost) / economic value [or something; this is just an example],

then whenever alpha gets to 2% the system spontaneously reorganizes. A kind of swarm behaviour of human economic systems.
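To make the shape of the idea concrete (the formula and every input here are invented for illustration, as noted):

```cpp
// Toy version of the hypothesized reorganization threshold.
#include <iostream>

int main() {
    double deadweight = 10000;     // $/day, estimated above
    double switching  = 5000;      // $/day, pure guess
    double econ_value = 1.5e6;     // $/day, new coins + fees
    double alpha = (deadweight - switching) / econ_value;
    std::cout << "alpha = " << 100 * alpha << "%, "
              << (alpha > 0.02 ? "reorganize\n" : "hold\n");
}
```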

On the topic of swarm behaviour, we would usually say that an individual ant is unintelligent but the collective behaviour of the ant colony is quite sophisticated. If aliens were watching the block size limit debate, and if somehow a new higher block size limit emerges, they could say the same thing about us: that individual bitcoiners are dumb as rocks, can't agree on anything, and vow to die fighting; but somehow, through the collective behaviour of the network, the consensus rules are adjusted at just the right time to keep the system operating near peak efficiency.

I'm sure this is blasphemy to Blockstream/Core, but it highlights the difference in their thinking compared to my own. Compared to the collective intelligence of the market, I think I'm fairly stupid, have enormous blind spots, and am not too different from anyone else. Blockstream/Core, on the other hand, seems to think that they have no blind spots and are smarter than the collective intelligence of the market. If they were ants, they'd call the Queen a whore and tell her to stop laying eggs until they come to consensus on the capacity of the nest.
 

jl777

Active Member
Feb 26, 2016
279
345
Pretty sure that any formula for complex behavior would need to have sqrt() and maybe even log() in it.
 

rocks

Active Member
Sep 24, 2015
586
2,284
In defence of @jl777: @rocks is right so long as there is consensus. If all retailers and exchanges upgraded to a client that would accept bigger blocks, the control miners have would be diminished; any one of them could break from the pack without risking being alienated from the network, and all the others would follow. However, when there is confusion, it's the miners who choose. I don't think Blockstream is confused; they are actively advising miners while their shills confuse the rest of the network.
The idea that miner or merchant "consensus" is required is vastly overstated, IMHO, simply because today there is zero consensus. 99.9999% of merchants and people use FED reserve dollars either directly or indirectly (other currencies indirectly use dollars through reserve balances). The only "consensus" that exists today is that FED dollars are money. Only 0.00001% of merchants and people use cryptocurrencies or even think of them as money. There is no consensus here.

A few miners in China spending $1.4M per day to build data structures on top of each other is nothing. Absolutely nothing. Thinking that we need that mere $1.4M/day to build the data structures we want is absurd. Central banks print more than $1,000M/day on a slow day.

Within cryptocurrencies there are multiple variants competing to one day fork money away from FED dollars and towards a cryptocurrency. Bitcoin is currently the leader, but there are many contenders in the form of various altcoins.

A forked version of Bitcoin is simply another altcoin option. In the end, the best option will gain the most traction and win. If Blockstream coin refuses to scale, then either an altcoin or a Bitcoin-fork altcoin will eventually win out. Neither of these needs to start off at launch with "consensus"; it just needs to be better and gain adoption over time.

Users who want to scale can simply decide for themselves to start building better data structures. You don't need miners in China to do it. Once a better data structure (larger blocks) starts to be built, if it is the right option more people will start to use it. That is how "consensus" is built: by slow adoption over time, not by getting everyone to move together at once.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Yes. Consensus isn't "let's all start building a house". It's: Joe starts digging the foundation; then Bob sees this and starts making some bricks; Dave starts chopping wood for lumber; Chuck is happy in his shack, so he goes fishing.

It's not "everyone go this way", it's going that way and seeing who comes along.

Consensus is not a thing to be organized or voted on, and it's not one singular thing. It's simply a number of individuals who agree.