Gold collapsing. Bitcoin UP.

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Norway,
I think the optimum default setting for BU block acceptance (EB) is the minimum of the hard limits of the other full node implementations currently or imminently being used for significant mining hashrate. This means min(ABC, SV) hard limits, i.e. 32MB today.

Of course, if the team with the minimum value updates its hard limit, then the BU default can be updated in the next BUCash release; likewise, if either implementation is no longer used much by miners, its hard limit can be ignored. You will recall that our philosophy is to support decentralized development, so we should not be aggrieved at having to make evaluations using information which is not always within our power to influence.
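
A rough sketch of that rule (illustrative values, not taken from any client's source code):

```python
# Illustrative sketch of the proposed default-EB rule, not actual BU code.
# Hard limits (in MB) of the other implementations with significant mining hashrate;
# the figures are the ones being discussed at the time of writing.
OTHER_IMPLEMENTATION_HARD_LIMITS_MB = {
    "ABC": 32,   # Bitcoin ABC
    "SV": 128,   # Bitcoin SV (imminent)
}

def default_eb_mb(hard_limits=OTHER_IMPLEMENTATION_HARD_LIMITS_MB):
    """Proposed BU default EB: the minimum of the other implementations' hard limits."""
    return min(hard_limits.values())

print(default_eb_mb())  # -> 32 (MB) today
```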
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
lol, the definition of a soft fork is itself being soft-forked down to exclude the concept of backwards compatibility and the freedom of choice to run an old node and relay blocks with limited resources

dude muh raspberry pi!
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
If I'm looking at that chart correctly, I estimate ~51-52% of the nodes are still susceptible to an attacking miner?
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@cypherdoc you are right indeed. Any version of Core between 0.14.0 and 0.16.2 is open to the DoS vector, whereas the subset of versions from 0.15.0 to 0.16.2 is also vulnerable to the inflation vector.

The real point IMHO is determining how many miners/merchants/exchanges are still running a vulnerable (unpatched) version of bitcoind, on both BCH and BTC.
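
To make the ranges concrete, here is a quick sketch (not code from any client) that classifies an unpatched Core version against the two windows:

```python
# Quick sketch, not from any client: classify an unpatched Core version string
# against the two vulnerability windows described above.
def vulnerable_vectors(version: str) -> dict:
    v = tuple(int(x) for x in version.split("."))
    return {
        "dos":       (0, 14, 0) <= v <= (0, 16, 2),  # DoS vector: 0.14.0 - 0.16.2
        "inflation": (0, 15, 0) <= v <= (0, 16, 2),  # inflation vector: 0.15.0 - 0.16.2
    }

print(vulnerable_vectors("0.14.2"))  # {'dos': True, 'inflation': False}
print(vulnerable_vectors("0.16.0"))  # {'dos': True, 'inflation': True}
print(vulnerable_vectors("0.16.3"))  # {'dos': False, 'inflation': False} (patched release)
```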
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Why do you want devs to be "responsible", as in having the responsibility of the miners?
@Norway I want devs to be responsible, but I never said they automatically have the responsibility of the miners or want that responsibility. Devs have their own responsibility - producing software that meets some requirements of its users.
People (miners included) usually expect developers to meet that responsibility, or else they won't run the software. If you're familiar with commercial software development, then this extends into penalties for failing to meet SLAs etc. Much of this doesn't apply to BU's software, which is open source - take it or leave it (or improve it). I know BU's Lead Developer and the rest of the development team have a healthy attitude: the software is effectively responsible for large amounts of money, and thus requires careful handling.

I don't know if contracts exist between BU and miners using it to mine. I suspect there aren't any.
So if a miner loses blocks due to BU software failing, all that BU loses right now is probably credibility. Still not a good thing to have happen.

The last couple of months have actually persuaded me that BU running its own mining pool would really be a good idea. Firstly, to obtain a revenue stream; secondly, to be able to speak "as a miner" as CG / nChain like to do. Out of interest, would you see something wrong with that? Would you consider it "developers assuming miners' responsibility"?

My take is that in a permissionless system, you are free to assume whichever responsibilities you consider yourself fit for. The market will reward or punish you according to the "correctness" of your judgment.

I don't want the laggards to be able to keep up without any effort.
Be more specific please - which laggards are you talking about?

As far as I can see, you're trying to persuade miners and other users that abolishing the limit should be the new Schelling point that everyone that matters to you (!) should move to.

When the knowledgeable among them explain to you the problems that need to be solved to make that feasible, you don't seem to acknowledge those, but pretend they're just being obstinate or ignorant.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@cypherdoc you are right indeed. Any version of Core between 0.14.0 and 0.16.2 is open to the DoS vector, whereas the subset of versions from 0.15.0 to 0.16.2 is also vulnerable to the inflation vector.

The real point IMHO is determining how many miners/merchants/exchanges are still running a vulnerable (unpatched) version of bitcoind, on both BCH and BTC.
so that amounts to over 5000 old vulnerable nodes, many of which have probably been abandoned and won't upgrade. it's funny watching the UASF'ers who suddenly found religion about how only mining nodes matter, but that's beside the point. if a motivated miner, like Bitmain, who's taken years of abuse from the likes of Bcore (and who holds loads of BCH), decides to attack BTC, now they have a reason and a vector. couple that with attack support from ViaBTC, bitcoin.com, and Coingeek among others, and BTC would have a problem. furthermore, what is there to prevent a bunch of anti-BTC guys from booting up a bunch of old BTC nodes? for these reasons, i'm amused at the claims that BTC hard forking in this patch is "non-contentious". the attack vector is asymmetric to BCH since we have way fewer nodes to upgrade, have already sold off much of our BTC (some of us), have much less value at risk, already understand that mining nodes matter most, and are willing to hard fork whenever we need to for a valid purpose. we're only one year old too, which helps. lots of interesting game theory.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@BldSwtTrs , @Inca & everyone else who I forgot: Thank you for the kind words.

@cypherdoc: On the upgrade and the old nodes, I think in the end we have to realize, like Jonald Fyookball said, that this is a self-balancing social contract by humans. If any of the non-upgraded nodes get out of lock-step and have inflated their money supply, I think the rest of the ecosystem will tell them to fuck off. And I think the economic majority of the exchanges and ecosystem and miners and so forth are aware, and this will shift things in the right direction (no inflation) for this BTC bug.

We want the machines to handle this as automatically and as "out of the way" as possible, of course, but in times like these, the (necessary!) human factor becomes very apparent. I have seen discussion along these lines on the BU slack and elsewhere; this bug seems to have also led to a bit of a 'rethink what Bitcoin is' moment.

And obviously the machine cannot do good if it runs buggy software. Suddenly, you need the human element to say, considering the incentives and the social contract, that 'this is actually a bug'. That is a meta decision that you need humans for.

It is also the very same reason why I believe Ethereum gets it wrong with their Turing-complete contracts that are supposed to be fully autonomous. Fully autonomous until the DAO debacle.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
So if a miner loses blocks due to BU software failing, all that BU loses right now is probably credibility. Still not a good thing to have happen.
So maybe it's a good idea to let the pools have the responsibility for the blocks they accept, right?


The last couple of months have actually persuaded me that BU running its own mining pool would really be a good idea. Firstly, to obtain a revenue stream; secondly, to be able to speak "as a miner" as CG / nChain like to do. Out of interest, would you see something wrong with that? Would you consider it "developers assuming miners' responsibility"?
This has been an idea for a long time, and I think it's great! I hope it comes to fruition.

Regarding BU projects, I think it would be cool if we invited pools and other node operators to run their software and hardware on the Gigablock testnet. It would be like the Nürburgring in Germany where anyone can bring their own car and see what it's good for. This is very different from getting "lab results" from BU.


Be more specific please - which laggards are you talking about?
The laggards that don't have good enough software and hardware when it's needed. It's a possibility that devs start to recommend low levels to wait for the laggards, because devs hate it when a computer crashes. We should break free from dev-recommended levels and have a constant race when the shit hits the fan with fast exponential network growth.


As far as I can see, you're trying to persuade miners and other users that abolishing the limit should be the new Schelling point that everyone that matters to you (!) should move to.
It's not about me. And I support BU's Emergent Consensus.


When the knowledgeable among them explain to you the problems that need to be solved to make that feasible, you don't seem to acknowledge those, but pretend they're just being obstinate or ignorant.
What do you think about this quote from Gavin Andresen?

"Yes, let’s eliminate the limit. Nothing bad will happen if we do. And if I’m wrong, the bad things would be mild annoyances, not existential risks, much less risky than operating a network near 100% capacity."
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
@Norway

For context on what Gavin said: you're right that he said he would be fine with eliminating the limit. This was in the context of blocks really filling up to the 1MB limit (2016):

https://np.reddit.com/r/btc/comments/4oadyh/i_believe_the_network_will_eventually_have_so/

He said he would be fine with either eliminating the limit or going for an algorithmic solution.

I know there were several change proposals within ABC for policy-driven adaptive blocksize capping, similar to how BU discussed a while back that people could plug in blocksize governors of their choosing, e.g. BIP101, or whatever they wanted. For some reason, probably other priorities, these algorithmic policy approaches don't seem to have been deployed.
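
To illustrate the "pluggable governor" idea, here is a rough sketch with an approximate BIP101-style schedule (8MB base, doubling every two years from early 2016) - an approximation for illustration, not the exact BIP text or anyone's actual implementation:

```python
# Rough sketch of a pluggable blocksize governor; the BIP101-style schedule is an
# approximation (8 MB base, doubling every two years, capped after ten doublings).
ONE_MB = 1_000_000
TWO_YEARS_S = 2 * 365 * 24 * 3600
BIP101_START = 1452470400  # 2016-01-11 00:00 UTC, the reference time BIP101 uses

def bip101_style_cap(block_timestamp: int) -> int:
    doublings = max(0.0, (block_timestamp - BIP101_START) / TWO_YEARS_S)
    return int(8 * ONE_MB * 2 ** min(doublings, 10))  # growth stops after ~20 years

def block_size_acceptable(size_bytes: int, block_timestamp: int,
                          governor=bip101_style_cap) -> bool:
    """A node could plug in any such governor here in place of a fixed constant."""
    return size_bytes <= governor(block_timestamp)
```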

A while back I used to think that an algorithmic solution might be a good idea. Nowadays, I lean toward thinking that policy algorithms are just a manifestation of a consensus limit -- one with increased complexity. My preference now would be that we don't bake such algorithms into the consensus protocol, but focus on creating clients that truly support capacity way in excess of demand, and that we provide clarity on the volume that the individual software clients can actually handle.

I would be fine with "abolishing the limit" within the BU client entirely if implementations like BU performed capacity stress tests at least with each major release and informed users about the performance characteristics of the current software - providing advice like "we've tested this on such-and-such hardware, here is how our software's performance scales, and we find it is capable of handling sustained such-and-such a load under a simulated realistic scenario".
Getting to where BU can provide such advice isn't necessarily easy or cheap, but I think the Gigablock testnet provided a valuable step towards this goal.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
The data collected by the seeder are not precisely a snapshot of all running nodes (both the ones that block incoming connections and the ones that allow them).

It is more an ever-growing database that contains every node the seeder was able to get info for, even if that happened just once since the seeder started.

The way the seeder is able to get such info is via the getaddr network message, which lets it record data even for nodes that do not accept incoming connections: the seeder sends a getaddr to a node that does accept incoming connections and in return gets a selection of random peers known by the contacted node.

Going back to the main point, the number of nodes recorded by the BU BTC seeder is currently 72,502, but among those ~45,500 were last contacted successfully more than 6 months ago...

The number of nodes that were reached in the last month is around 15K.

So computing stats the way Luke is doing does not make much sense to me.
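
In other words, something like the following filtering is needed before computing stats (a sketch with made-up field names, not the seeder's actual schema):

```python
# Sketch only: the seeder database keeps every node ever learned via getaddr, so
# meaningful stats should be restricted to nodes contacted successfully within a
# recent window. Field names here are made up for illustration.
from datetime import datetime, timedelta

def recently_reachable(records, window_days=30, now=None):
    """Keep records whose last successful contact falls within the window."""
    now = now or datetime.utcnow()
    cutoff = now - timedelta(days=window_days)
    return [r for r in records if r["last_success"] >= cutoff]

# e.g. ~72,502 records in total, but only ~15K contacted successfully in the last month
```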
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
Agreed with @freetrader
Gavin was referring to the hard limit, which was 1MB at the time. 2147MB is effectively unlimited for the software, as sustained capacity, measured by the BCH network stress test, is 16-32MB.

We have moved on since the debate started. Previously, we knew that network capacity was more than 1MB per 10 minutes. What should have been an easy job was increasing a simple constant in the software. This proved so difficult that the whole ledger had to be forked!

Now we have the opposite problem, where the hard limit is above network capacity. The difficult job is safely making many different improvements to the software to handle the volume which the hard limit allows. This work is being done tirelessly in the background by people like @Peter Tschipper and @theZerg. It is work thousands of times more difficult than changing a "1" in the software, which your granny could do (excepting the grannies of the core devs).

We need to move on from focusing on the block hard limit to focusing on true scalability: parallelism including sharding, optimisation, and many smaller techniques. This includes contributions such as Graphene, where we have a BUIP for funding phase II, and also evaluating CTOR/Merklix, where ABC is headed. True scalability is way more than naively changing a number and "getting out of the way of the users".
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@solex
First of all, I'm not naive.

The next version of BU has been tested at VISA's average level of 1746 transactions per second, which is just awesome. Still, you want to have the default setting at 32MB (about 100 transactions per second).
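
For reference, the rough arithmetic behind those figures (assuming ~550-byte average transactions and 10-minute blocks, both assumptions on my part):

```python
# Back-of-the-envelope arithmetic; average tx size and block interval are assumptions.
AVG_TX_BYTES = 550
BLOCK_INTERVAL_S = 600  # 10 minutes

def tx_per_second(block_size_mb: float) -> float:
    return block_size_mb * 1_000_000 / AVG_TX_BYTES / BLOCK_INTERVAL_S

def block_size_for_tps(tps: float) -> float:
    return tps * AVG_TX_BYTES * BLOCK_INTERVAL_S / 1_000_000

print(round(tx_per_second(32)))        # ~97 tx/s, i.e. "about 100" at 32MB
print(round(block_size_for_tps(1746))) # ~576 MB-equivalent blocks at VISA's average level
```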

There are clearly some aspects here I have tried to convey in different discussions that you don't see. I will write an article about these aspects to clarify before the voting starts.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
@Norway I certainly don't think you are naive.
Part of the problem is that we are applying relative importance differently. I think hard limits are much more important than soft limits for scalability. To be clear, the hard limit is what the software will permit, while default / soft limits are a user setting between zero and the hard limit.
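
A minimal sketch of that distinction (names are illustrative, not BU's actual configuration keys):

```python
# Illustrative only, not BU's actual configuration code: the hard limit is what the
# build of the software will permit at most, while the default/soft limit (EB) is a
# user setting clamped to the range [0, hard limit].
HARD_LIMIT_MB = 2147  # what this hypothetical build permits at most

def effective_eb_mb(user_setting_mb: int = 32) -> int:
    """Clamp the user's excessive-block setting to the software's hard limit."""
    return max(0, min(user_setting_mb, HARD_LIMIT_MB))

print(effective_eb_mb())        # 32   -> the shipped default
print(effective_eb_mb(10_000))  # 2147 -> cannot exceed what the software permits
```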

You think the default limit is important, as it is devs telling miners what to do. Two things there:
a) the BTC miners are ignoring the dev default of 750KB right now.
b) users want software to work out of the box. They expect defaults to be set sensibly so it is one less thing to learn about and have to change before running up a node. BU should be user-friendly and have considered defaults, based on testing, benchmarking and real-world metrics, which are updated ad hoc when new versions are being developed.

I think the benefit of BUIP101 is that it closes off the non-zero future risk that the BCH full node software ossifies with the block hard limit too small for global demand, i.e. a 1MB redux in 20 years' time.