Gold collapsing. Bitcoin UP.

BldSwtTrs

Active Member
Sep 10, 2015
196
583
The thing with consensus rule changes is that they have economic externalities; you never know what they are until after the fact, and it may be impossible to revert to a state before the change.

The prudent way to move forward is to wait until the need for change becomes pressing. In the interim, build and test solutions, so you have options should the need arise.
That looks damn close to what small blockers said.

The need for CTO is not evident yet. In theory it is; CTO is very exciting, but we don't need to change the consensus rules and introduce potential externalities until there is a problem to solve and it's the best thing to do given the circumstances.
The need for 128 MB blocks is not evident yet either.
 

jtoomim

Active Member
Jan 2, 2016
130
253
gmaxwell, sipa, wladmir, thebluematt, peter todd and company are all really smart people, and they were right about a lot of things. They were wrong about some things too. The main errors they made were quantitative. They thought that anything more than 1 or 2 MB would be too much, and they were wrong about that. We can do 8 MB pretty easily already, and will soon be able to do 30 MB pretty easily.

That said, the code isn't ready yet for sustained loads of 30 MB magnitude. We'll get it there as quickly as we can, and we'll get it there long before there's actually demand for that capacity. But please, guys, calm down with the childish "I Want It Now!" nonsense. It won't make things happen any faster. All it will do is make the devs annoyed with how entitled and shortsighted the userbase is.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
@jtoomim

I just feel we are being too hesitant and second-guessing every little fix.

I just hope development doesn't grind to a halt as we fight over ways of doing things.
There are always 1,000 ways of doing things. My point is that it's not necessary to constantly choose the "best possible" way forward; we need to choose the practical solutions we can apply today over the magical "will solve everything in a better way" solutions that are purely based on thought experiments.

I like ABC because they seem to get that, and they seem to be able to produce working code, not just theoretical improvements.

Remember when Bitcoin Cash had a serious problem with difficulty adjustments? Everyone was busy fighting over what EXACTLY to do about it; meanwhile, ABC came out with working code. That's the best way forward.

BU has shown it can do this as well; one example is Xthin blocks. Good stuff!

Still waiting to see wtf nChain delivers; so far it seems like empty promises, and A LOT of them.

guys, calm down with the childish "I Want It Now!" nonsense. It won't make things happen any faster. All it will do is make the devs annoyed with how entitled and shortsighted the userbase is.
I often go to extremes to prove a point, and often my posts/ideas are structured in a way to make people LOL when they read them. Did my post above sound like a child screaming "I Want It NOW"? Yeah, that was kind of the feeling I was expressing; I LOL'd while writing it. Please do not take me too seriously...
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@jtoomim
I know.
When I say I want 128 MB, I mean I want 128 MB, no excuses! Fix whatever the fuck needs to be fixed up.
There's a lot of complex game theory wrapped up in this plea, but I fully agree. In fact, I say remove the limit altogether and let the economic players who have the most to gain and the most to lose dictate the soft and hard cap limits of the system, and fix all the performance restrictions as they currently exist, such as AcceptToMemoryPool. They will be forced to learn the optimums under fire and fix them, as they alone are getting paid by the system to mine honest blocks. With the uncertainty of expert-dev "imposed" defaults finally removed, miners can take responsibility for maximally growing the system under the financial incentives laid out in the WP.

For example, with the limit removed, I'd foresee miners setting their hard caps at 32 MB and their soft caps at what they are today: anywhere from 2-8 MB. Any large miner trying to produce self-constructed attack blocks far above economic demand would have to sacrifice tx fees to do so, as well as pay the consequences when identified, such as being cut out of the small-world relay network or having pool hashers flee to more honest actors. Since miners don't advertise their limits to begin with, it'd be like trying to fill a bottomless well with barrels of pennies (assuming 1 sat/byte min fees).

Poison blocks of 1 GB are also a risk, as any attacking miner would have to be large enough to mine the attack block in a reasonable amount of time. Such attackers would most certainly be ostracized from the system. Even pausing the system to remove a poison block like this is not a problem, much like the disaster of 0.8. IOW, attack blocks are a significant RISK that even governments or large miners are unlikely to take; no one has even bothered to mine a 32 MB block yet.

Yes, miners are out to make money, and lots of it. Growing the overall pie in the absence of limits is the fastest way to do this. The problem with the idea of raising the limit when we get there is inertia and ossification, whether by true tech limitations or bikeshedding. We have a limited window to focus on getting the limit removed because, imo, ossification is occurring.
 

8up

Active Member
Mar 14, 2016
120
344
I am still of the opinion that Bitcoin is BTC+BCH, and as long as there are miners burning electricity on the same algo I'll keep that stance. BTC has declared it will follow one side of the coin; BCH should explore the other side in order to gain maximum information.

We should try to fuck up Bitcoin ASAP. BTC is currently doing its part.

If we don't break it in the process, Bitcoin can indeed become the cure the financial system needs. If we do succeed in breaking it, we at least limit the negative impact on everyday people.
 

Dusty

Active Member
Mar 14, 2016
362
1,172
What's to stop a large miner from faking the ordering (proofs)?
Exactly as it is right now, nothing stops a miner from creating an invalid block, except wasting time and money.

As I understand it (I may be wrong, and I would like confirmation), verifying correct TX ordering is also trivial and wastes no resources, because you can do it while receiving the block: just check that each transaction hash you read from the network is ordered with respect to the previous one.
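A minimal sketch of that streaming check (my illustration, not code from the proposal; it assumes the canonical order is plain lexicographic order of txids and that the coinbase transaction, which must come first, is exempt):

```python
# Sketch: verify canonical (lexicographic) transaction ordering while
# streaming a block. Assumes txids arrive as 32-byte hashes in block order
# and that the first (coinbase) transaction is exempt from the rule.
def check_canonical_order(txids):
    """Return True if every non-coinbase txid sorts strictly after the previous one."""
    prev = None
    for i, txid in enumerate(txids):
        if i == 0:                  # coinbase: not subject to the ordering rule
            continue
        if prev is not None and txid <= prev:
            return False            # out of order (or duplicate): reject the block
        prev = txid
    return True
```

A real node would do this in C++ inside block validation, but the cost is the same single comparison per transaction, which is why it adds essentially nothing to the work of receiving the block.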

The need for CTO is not evident yet. In theory it is; CTO is very exciting, but we don't need to change the consensus rules and introduce potential externalities until there is a problem to solve and it's the best thing to do given the circumstances.
I mostly agree with that, but if I understand correctly, having CTO also enables simpler and more compact inclusion and fraud proofs. Those can only be used on blocks mined after the hardfork mandating CTO, so the sooner we have it the better, because in the future they will be available for a greater percentage of the UTXO set.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
Just so you know, even if you increase the limit to 128 MB ASAP, there's currently no released full node software that is capable of mining a 128 MB block in 10 minutes. The performance bottleneck in AcceptToMemoryPool limits a high-end server to about 100 transactions per second, or around 20-25 MB per 10 minutes.
Just so your developers know: when someone says they want to move the limit to 128 MB, they know we're as likely to see a 128 MB block as we are a 32 MB block, and that it does not increase blockchain usage.

Just so you know, it is a figurative way of saying we'd like to see millions of organic users transacting with each other on the Bitcoin Cash blockchain.

Literally, there is no reason to want a bigger block. People who want bigger blocks for the sake of bigger blocks are idiots. We don't want big blocks; we want people transacting within the limits of what is physically possible, and competition to improve on what is practically possible. It's just coincidence that increased transaction demand is correlated with bigger blocks; people don't want the big blocks, they want the thing that makes big blocks, OK? Asking for bigger blocks is just a shortcut; it assumes you understand that we want to allow more people to transact.

What we want is adoption: organic growth according to the design expressed in the white paper, the design that's been running for about nine years, the one with loads of empirical data. What we don't want is a 1 MB hard cap, or a 32 MB hard cap, or a 128 MB hard cap. We want miners to keep blocks within a reasonable limit, and 4-8 MB sounds reasonable given current demand.

Just so you know, miners should limit blocks to about 8 MB or whatever they feel comfortable with, and we should move the consensus protocol limit well above what the miners think is reasonable.

On July 14th, 2010, when the 1,000 kB block size limit was introduced, the average block size was 0.4 kB, so the limit was 2,500x demand, or 249,900% greater. It was not until January 29th, 2017 that the first block was orphaned for trying to exceed that limit (by accident, by adding one too many transactions to the block).

Today the average block size is about 100 kB; if we were to introduce a limit today that was 249,900% above demand, it would be a 250 MB limit.

A limit 249,900% above demand was a mistake. It was way too conservative. The only reason it was introduced was that you could produce blocks that were 799,900% above demand for less than $1.00 and flood the network, discouraging adoption and use. That's not possible today. The threat is no longer with us, as it costs thousands of dollars, on top of hundreds of thousands of dollars of investment, to produce one block.
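A back-of-the-envelope check of those ratios (a sketch only; the 0.4 kB and 100 kB demand figures are the rough averages quoted above):

```python
# How large would a limit with 2010-era headroom be today?
demand_2010_kb = 0.4      # rough average block size, July 2010
limit_2010_kb = 1000.0    # block size limit introduced in 2010

ratio = limit_2010_kb / demand_2010_kb        # 2500x demand
pct_above = (ratio - 1) * 100                 # 249,900% above demand

demand_today_kb = 100.0   # rough average block size today
equivalent_limit_mb = demand_today_kb * ratio / 1000

print(f"2010 limit: {ratio:.0f}x demand ({pct_above:,.0f}% above)")
print(f"Same headroom today: ~{equivalent_limit_mb:.0f} MB limit")
```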

Just so you know, 32 MB is relatively far more conservative than the conservative 1 MB limit was. As you know, miners already limit themselves to 8 MB. And just as those whose hardware limited them to 10 kB blocks were left behind when the majority of hashrate upgraded, so too will the miners who don't upgrade. We don't want them to have the power to say "we're not upgrading"; we want the option to leave them behind and grow.

PS: that's not CSW talking, that's me talking (go back a few years). I've been saying this stuff since before CSW was with us; he could be following me for all I know. In fact, the only reason I think he could be a fraud is that he keeps saying the things I've been harping on about for years.
That looks damn close to what small blockers said.
It is, except they refused to engage in honest debate; they took the money and made promises to investors to facilitate a second-layer network on top of Bitcoin that would allow them to extract rents.

They held conferences where you were not allowed to talk about the obvious option for increasing the transaction limit. The developers' IRC made a rule that hard fork discussion was moderated away.

Discourse was censored on all the prominent discussion platforms. A narrative was constructed to ignore the evidence and work only with like-minded people. (Thousands of solutions popped up to alleviate the problem; BTC is losing its dominance as a result.)

The problem may have been that Bitcoin was too small and too centralized. We should learn from the mistakes and move forward.
The need for 128 MB blocks is not evident yet either.
This is 100% correct. The block size limit does not dictate the need, just the ceiling. It's a number that is hard to change.

Also true: the need to move past 1 MB blocks was not evident until it was too late. People thought we could change it later.

Well, guess what 92% of hashrate today is supporting? The 1 MB limit. Guess where that 92% of hashrate is going to go when BCH goes up in price? (The BCH network.)

Why do people think that the 92% of miners who don't want to change the limit are going to want to change the BCH limit? Why do people think the BCH hegemony in 2-5 years will be the same?
That said, the code isn't ready yet for sustained loads of 30 MB magnitude.
Moot point. The code and the network at the time the 1 MB limit was set were also probably not ready for sustained 1 MB blocks.
Exactly as it is right now, nothing stops a miner from creating an invalid block, except wasting time and money.
Same reason miners won't risk creating 32 MB blocks.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
this is 100% correct.
I disagree. The need for 128 MB blocks is evident!

Without 128 MB ASAP, without all the fixes required (e.g. fixing the AcceptToMemoryPool limits), without all that stuff... we will NEVER come close to >8 MB of tx demand. Because everyone thinks we're crazy, that we are on a fool's errand, working toward a stupid goal that could never possibly scale. That's what 90% of bitcoiners think of us.

We NEED to start showing off BIG blocks and NEED to start talking about GB blocks; that's what the whole Bitcoin Cash project is about. Lightning isn't going to wait for BTC blocks to fill up again to rush their implementation to market. This is a race!

If we play our cards right, we will be discussing GB blocks as people begin to realize that they CANNOT use Lightning for buying coffees.

If we play our cards wrong, many will lose interest in crypto in general once the Lightning bubble pops, and again, massive delays in adoption.

lol. I need to stop ranting on and on about how I want Bitcoin Cash to have GB blocks already. I was honestly hoping for faster-moving action and developments, but I guess that's just how dev works... everyone thinks it's just a few things to fix up, a three-month job tops, we'll see huge blocks SOON, and it ends up taking three years! :D
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
@AdrianX

It's the Fidelity effect all over again.

Conservative Devs (or those with hidden agenda)... If they come, we'll build it.
Everyone who wants adoption and real global use... If we build it, they'll come.

Bring on the stress test; let's break things. It actually pays the stronger miners to (lightly) break some of the less competent ones, and the ideologically positioned exchanges that don't follow the longest chain. It's about time some of this deadwood was removed; without that we'll not see new green shoots.

Too many years have been wasted.

I visited London recently and literally didn't have to open my wallet. The NFC chips now work so well: you flash the wallet over the scanner for any purchase under £20, no need to even remove the card, no receipt necessary, no extra charges. The Tube, buses, shops, bars - it's seamless and instant, tap and go.
 

jtoomim

Active Member
Jan 2, 2016
130
253
@AdrianX, @cypherdoc and others: I don't disagree with most of what you say about the consensus limit being a different thing from the miners' soft limits. However, I believe that the consensus limit has an important role in protecting the ecosystem, and that this role is currently served, quantitatively, by having the limit at around 30 MB.

I don't have much time right now, as I'm trying to fix some p2pool code before the stress test ends, but I'll give a quick overview of the issue.

The issue is that orphan rates cause mining unfairness. A pool never orphans its own blocks, so a pool with 30% of the network hashrate only races against the other 70% of the hashrate, while a 1% pool races against 99%; the large pool's orphan rate is therefore about 29% lower. Thus, if orphan rates rise for everyone, large pools benefit and small pools suffer. Miners will respond by switching to large pools, ideally to the largest of the large. This creates a positive feedback loop, which in theory will result in runaway pool centralization.

I've done the math on this many times, and based on the best data we have, a large pool will have a 1% profitability advantage over small pools when average blocksizes are about 30 MB. This 1% figure is comparable to the difference between a small pool's mining fee (e.g. ckpool) and a large pool's mining fee (e.g. Antpool), so I think that this represents the threshold at which runaway centralization is a significant concern.

Currently with Xthin/Compact Blocks, block propagation runs at about 1.6 MB of block data per second, or 18.75 seconds for a 30 MB block. That would give an overall orphan rate of 1 - e^(-18.75/600) = 3.07%, which in turn gives a 30% pool a 0.92% advantage.
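A quick sketch of that arithmetic (assuming block discovery is a Poisson process with a 600-second target, and that a pool's advantage is simply its hashrate share times the network orphan rate, as in the figures above):

```python
import math

block_interval_s = 600       # target block interval
propagation_mb_s = 1.6       # observed Xthin/Compact Blocks propagation rate, MB/s
block_size_mb = 30.0

delay_s = block_size_mb / propagation_mb_s                 # 18.75 s to cross the network
orphan_rate = 1 - math.exp(-delay_s / block_interval_s)    # ~3.1% of blocks orphaned

# A pool never orphans its own blocks, so a pool with hashrate share p avoids
# a fraction p of that orphan risk relative to a pool with negligible share.
big_pool_share = 0.30
advantage = big_pool_share * orphan_rate                   # ~0.92% profitability edge

print(f"delay {delay_s:.2f} s, orphan rate {orphan_rate:.2%}, 30% pool edge {advantage:.2%}")
```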

The main input for this calculation is block propagation velocity. If we can improve that, then this calculated safety limit goes up. Graphene should dramatically improve block propagation velocity. Once we have Graphene working, we can do some new benchmarks, and that will probably justify a large increase in the consensus limit.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Currently with Xthin/Compact Blocks, block propagation runs at about 1.6 MB of block data per second, or 18.75 seconds for a 30 MB block.
@jtoomim, why is this value for block propagation so low? The bloom filter of a 30 MB block would be around 30 MB / 24 = 1.25 MB. And you say that it will take 18.75 seconds to propagate this tiny amount of data in one hop on 50 megabit per second connections? My math tells me this would take 0.2 seconds.

What am I missing here?
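For reference, the naive arithmetic behind that 0.2-second figure looks like this (a sketch; the 24x compression factor and 50 Mbit/s link are the assumptions stated in the question):

```python
block_size_mb = 30.0
compression = 24              # assumed Xthin compression factor
link_mbit_s = 50              # assumed miner-to-miner link speed

payload_mb = block_size_mb / compression        # 1.25 MB to actually send
transfer_s = payload_mb * 8 / link_mbit_s       # 0.2 s for a single, ideal hop

print(f"{payload_mb:.2f} MB payload, {transfer_s:.2f} s at {link_mbit_s} Mbit/s")
```

The reply below explains why real long-haul TCP links fall far short of this ideal.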
 

jtoomim

Active Member
Jan 2, 2016
130
253
TCP sucks over long-distance, high-bandwidth, high-latency, medium-packet-loss links. TCP's congestion control means that we only get to use about 0.5% to 1.5% of the actual bandwidth in that situation (assuming 100 Mbps), according to data we collected in the Gigablock Testnet results and in my 2014 BIP101 testnet tests. The actual network throughput in those tests was generally in the 30 kB/s (2014) to 60 kB/s (2017) range. If your transmission crosses the China border, your goodput gets even lower, typically around 10 kB/s.

It's ridiculous, I know. It's also true. Actual throughput is not at all related to pipe size when you're using TCP across the planet.
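For intuition, the standard Mathis et al. approximation for TCP throughput under loss, roughly MSS / (RTT * sqrt(p)), reproduces numbers in that range. The MSS, RTT, and loss values below are illustrative assumptions, not measurements from those tests:

```python
import math

def tcp_throughput_kb_s(mss_bytes=1460, rtt_s=0.25, loss=0.02):
    """Mathis et al. approximation: throughput ~ MSS / (RTT * sqrt(p))."""
    return mss_bytes / (rtt_s * math.sqrt(loss)) / 1000.0

# Illustrative long-haul links (assumed RTT and loss, not measured values):
print(f"250 ms RTT,  2% loss: ~{tcp_throughput_kb_s(loss=0.02):.0f} kB/s")
print(f"250 ms RTT, 30% loss: ~{tcp_throughput_kb_s(loss=0.30):.0f} kB/s")
```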

> in one hop

The 1.6 MB/s of block size throughput figure was for crossing the entire network, not one hop.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Still, we both agree that the filter to be sent is just 1.25 MB. And you say that this takes 18.75 seconds to send from miner to miner on dedicated connections.

I find that very hard to believe. Do you see my point?

EDIT: By one hop, I mean from one bitcoin node to another. I'm not talking about the infrastructure of the internet.
 

jtoomim

Active Member
Jan 2, 2016
130
253
Yes, it's hard to believe. But it's true. Go view the data.

Overall block propagation with xthin is about 1.6 MB/s for the entire network, and network goodput was about 500 kbps.

Demo of single-hop bandwidth via wget across the China border, followed by bitcoin p2p block propagation data without xthin:

I recommend rewinding and watching both talks in their entirety, as they both have a lot of useful information in them.
 

jtoomim

Active Member
Jan 2, 2016
130
253
I'm sorry that the data that two independent groups have collected agree with each other but not with your preconceptions.

Perhaps you might find this article illuminating?

https://www.performancevision.com/blog/measuring-network-performance-latency-throughput-packet-loss/

Keep in mind that the latency for long-distance links (e.g. London to Tokyo) is usually around 120 to 250 ms, and is greater than shown in this article.

Our findings on Bitcoin's performance with TCP are not unusual. Take a look at the "TCP throughput with 2% packet loss" column in the performancevision page. It's pretty similar to the data we collected for crossing the China border. I've actually seen packet loss across the China border exceed 30% sustained for hours on end. 2% packet loss across the border is actually nearly a best-case scenario.
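Connecting this back to the ~1.25 MB Xthin payload discussed above, the same rough approximation at a few loss rates (illustrative figures only, assuming a 250 ms RTT and a 1460-byte MSS):

```python
import math

payload_kb = 1250                    # ~1.25 MB Xthin payload for a 30 MB block
mss_bytes, rtt_s = 1460, 0.25        # assumed segment size and round-trip time

for loss in (0.001, 0.02, 0.30):
    kb_s = mss_bytes / (rtt_s * math.sqrt(loss)) / 1000.0   # Mathis approximation
    print(f"{loss:5.1%} loss: ~{kb_s:6.1f} kB/s -> ~{payload_kb / kb_s:6.1f} s per hop")
```

Even at 2% loss, a single hop of that payload takes on the order of 30 seconds, which is why tens of seconds for whole-network propagation is not surprising.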
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
You double down on the claim that it takes 18.75 seconds to send 1.25 MB of data between two parties that are highly incentivized to have a good connection. Got it.
 

jtoomim

Active Member
Jan 2, 2016
130
253
And I cite data. And I cite logic (read up on the bandwidth-latency product, and TCP congestion control! Please!). You do not cite data, and apparently do not bother to read the articles that I cite.

You don't believe me? Fine. Test it yourself. Set up a VPS in Aliyun Shanghai and a VPS in London, and do a wget from one to the other. Tell me what kind of bandwidth you get.

Got a better test to show how long-distance TCP traffic over lossy backbone connections is not slow? Go ahead and describe it.

This is an empirical question. Empirical questions can be answered with experimentation and data collection.

I have also proposed solutions several times. Graphene can mostly solve this issue. UDP+FEC can also solve this issue. All I'm saying is that we should solve the issue before we get rid of the safety limits.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
gmaxwell, sipa, wladmir, thebluematt, peter todd and company are all really smart people, and they were right about a lot of things
It amazes me that to this day we're still battling this same anti-big-block argument, just in a different form. I really appreciate your math and testing over the years, and your apparent sincerity, but I really think there is a point where one has to step back from all that and apply common sense.

I take the position that there is NO incentive for a larger miner to become an even larger, more centralized runaway miner via the big-block attack, because they know, in aggregate, that it exposes them to state intervention. That's not politics. It's why the sound-money incentives of the Bitcoin system were designed the way they were (precisely to avoid centralization), and it's something miners, in aggregate, understand deep down based on the experience of centralized gold servers being seized in the 90's.

This also ignores the fact that in an uncapped system that promises unlimited growth, there is infinite motivation for legions of new miners to enter the space and compete, which by itself makes it impossible for a large miner to even run away.
 