Gold collapsing. Bitcoin UP.

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
interesting series of events. because of the CHECKDATASIG sigop count bug, 10 blocks were mined with only the coinbase transaction in them, which made BCH mining even less profitable vs. BTC than normal. this likely incentivized some miners to point their asics away from the BCH chain (you can compare who mined BCH blocks today vs. the rest of the week). so there was a kind of double defection: given the bug, miners using ABC software manually turned to building empty blocks, and some proportion of miners simply took their hash elsewhere.

yet the consequences were not catastrophic: the mempool swelled but the chain did not stop, and the bug was evident enough to be resolved quickly, so the worst fallout is that some txns were stuck for a couple of hours (par for the course on BTC).

if someone is inclined to quickly explain, i have two questions: 1) why did the bug not kick in with previous txns that used CHECKDATASIG? intuitively, if the sigop count involving that opcode is faulty, this would mess up block validation the first time it was used (in 11.2018) -- why did it take today's specially crafted txns for the problem to manifest? 2) was there a similar bug for CDSV in the ABC code?

also 3) given that ABC QA is bunk, just why are more miners not using BU? they lost money today because of that. should lobbying by elected officials intensify? should BU dedicate monetary resources to promote hashrate share growth?
 
Last edited:

rocks

Active Member
Sep 24, 2015
586
2,284
2a- The reason behind limiting sigops is that signature verification is usually the most expensive operation performed while checking that a transaction is valid. Without limiting the number of sigops a single block can contain, an easy DoS (denial of service) attack can be constructed by creating a block that takes a very long time to validate because it contains transactions that require a disproportionately large number of sigops. Blocks that take too long to validate (i.e. ones with far too many sigops) can cause a lot of problems, including slow block propagation, which disrupts user experience and can give the incumbent miner a non-negligible competitive advantage in mining the next block. Overall, slow-validating blocks are bad.
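To make the limit concrete, here is a minimal Python sketch of per-block sigop accounting. The MAX_BLOCK_SIGOPS value, the raw byte scan, and the function names are my own simplifications for illustration, not any client's actual consensus code:

[code]
# Minimal sketch of a per-block sigop cap. The cap value and the
# byte-scanning shortcut are illustrative assumptions only.

MAX_BLOCK_SIGOPS = 20_000   # assumed cap, not any client's real constant

OP_CHECKSIG = 0xAC          # standard sig-checking opcode
OP_CHECKDATASIG = 0xBA      # BCH opcode activated in November 2018


def count_sigops(scripts):
    """Very rough sigop count: scan each script for sig-checking opcodes.

    Real clients parse scripts opcode by opcode (and weight
    OP_CHECKMULTISIG differently); a raw byte scan over-counts whenever
    the same byte value appears inside pushed data.
    """
    targets = (bytes([OP_CHECKSIG]), bytes([OP_CHECKDATASIG]))
    return sum(script.count(t) for script in scripts for t in targets)


def block_within_sigop_limit(block_scripts):
    """Accept the block only if its total sigop count stays under the cap.

    block_scripts is a list of transactions, each a list of raw scripts.
    """
    return sum(count_sigops(tx) for tx in block_scripts) <= MAX_BLOCK_SIGOPS
[/code]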
I've commented on this before, but it is truly mind-boggling how fundamentally broken the bitcoind client is with global locks everywhere and its monolithic design. The sigop compute topic above is one of a million examples on how not to do things.

Here the compute required for block validation scales roughly linearly with the number of sigops in a block. So what was their solution? It was to limit the number of sigops allowed in a block (yet another example of a soft-fork limit being added under the covers)

Any 101 micro-service oriented approach would be to instead create small workers to perform this work and then farm it out to them in parallel. Initially they would be threads in the process, but from there it's easy to get to massive scale by moving workers to remote nodes. But no, core team adds a soft-fork limit on the protocol to make up for their crappy client code. Any scalable node design is going to have to start from scratch.
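As a rough sketch of that fan-out idea: the verify() placeholder below stands in for a real libsecp256k1 call, and none of this mirrors how ABC, BU, or Core are actually structured; only the parallel dispatch is the point.

[code]
# Sketch of farming signature checks out to a pool of local workers.
# verify() is a placeholder; a real worker would call into libsecp256k1.
from concurrent.futures import ProcessPoolExecutor


def verify(job):
    """Check one (pubkey, signature, digest) triple. Placeholder only."""
    pubkey, signature, digest = job
    return True  # stand-in for an actual ECDSA/Schnorr verification


def validate_block_signatures(jobs, workers=8):
    """Fan the checks out to worker processes and AND the results.

    The same interface could be backed by in-process threads or by
    remote worker nodes for larger scale. On platforms that spawn
    processes, call this under an `if __name__ == "__main__":` guard.
    """
    with ProcessPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(verify, jobs, chunksize=1024))
[/code]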
 

kostialevin

Member
Dec 21, 2015
55
147
in which Séchet does not waste the opportunity to kick some sand at BU
The multiple implementation motto is of no help on that front. For instance, the technical debt in BU is even higher than in ABC (in fact I raised flags about this years ago, and this led to numerous 0-days).
The hatred of Craig must be really strong to choose BCH over BSV with a boss like this one.
 

shadders

Member
Jul 20, 2017
54
344
I've commented on this before, but it is truly mind-boggling how fundamentally broken the bitcoind client is with global locks everywhere and its monolithic design. The sigop compute topic above is one of a million examples on how not to do things.

Here the compute required for block validation scales roughly linearly with the number of sigops in a block. So what was their solution? It was to limit the number of sigops allowed in a block (yet another example of a soft-fork limit being added under the covers)

Any 101 micro-service oriented approach would be to instead create small workers to perform this work and then farm it out to them in parallel. Initially they would be threads in the process, but from there it's easy to get to massive scale by moving workers to remote nodes. But no, core team adds a soft-fork limit on the protocol to make up for their crappy client code. Any scalable node design is going to have to start from scratch.
I built a PoC sig validation server over Xmas with a basic network protocol and a cluster aware client. On a single home machine I got 120k sigops/sec. Was able to replicate performance with a small cluster of digital ocean servers for a total cost of $80/month. That gets us to GB order of magnitude blocks. Was going to commission a team to work on hardware sig validation solutions but this experiment convinced me it's a non-problem.
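For a quick back-of-the-envelope check of those numbers: the 120k sigops/sec figure is from the experiment above, while the average transaction size and sigops per transaction are assumptions on my part.

[code]
# Back-of-the-envelope: how long does a 1 GB block of ordinary payments
# take to signature-check at 120k sigops/sec? Tx size and sigops-per-tx
# below are assumptions, not measurements.

SIGOPS_PER_SEC = 120_000        # single home machine, per the experiment
AVG_TX_BYTES = 400              # assumed typical payment transaction
AVG_SIGOPS_PER_TX = 2           # assumed inputs per transaction
BLOCK_BYTES = 1_000_000_000     # 1 GB block

txs = BLOCK_BYTES // AVG_TX_BYTES            # ~2.5 million transactions
sigops = txs * AVG_SIGOPS_PER_TX             # ~5 million sigops
print(sigops / SIGOPS_PER_SEC)               # ~42 seconds on one machine
# A handful of such workers keeps signature checking far inside the
# ~10-minute block interval, which is why it looks like a non-problem.
[/code]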
 

satoshis_sockpuppet

Active Member
Feb 22, 2016
776
3,312
in which Séchet does not waste the opportunity to kick some sand at BU, affirming that it has more technical debt than ABC, despite the fact that only the latter implementation crashed today, on his watch.
I almost puked over my keyboard when I read Amaury's post.

He fucks up big time (again) and his attitude is still: "Either work under me or don't work on BCH at all."

And then cries about too few devs. Yeah, no surprise, you arrogant fuck. XT closed shop, BU is under attack by you and your idiotic followers, and you wonder why people don't want to develop for BCH?

And why are people like Roger (who mines with BU) ok with the braindrain that Amaury is responsible for?


An unrelated question to the BSV supporters: Would your outlook on BCH/BSV change if BCH implemented a reasonable long term dynamic limit or something like BIP101 or the like?
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
And why are people like Roger (who mines with BU) ok with the braindrain that Amaury is responsible for?
Based on Reddit comments I thought Roger mines with both BU and ABC.

Which would be a sensible thing to do, and something that would have prevented a string of empty blocks during the upgrade had other pools followed a similar strategy.
BU is under attack by you and your idiotic followers
Apart from some questionable voting (about which I voiced my dissatisfaction long ago) and critical resignation letters, what kind of attack on BU are you talking about here?

I'd be more worried about SV members, who have clearly expressed a desire to break up and fragment Bitcoin Cash (a chain they ironically forked off from under false pretenses, and which their leadership probably supported under false pretenses in the first place).

Their members clearly voted as a bloc against a clarification BUIP for the Articles of Federation, and also to remove an actively contributing BU member, despite paying lip service to opposing witch hunts.

You may legitimately dislike Amaury or his conduct, and point out any inconsistencies or hypocrisy you may find, but that's not going to get anyone (you, BU, or BSV) further.
Just like when we criticized Core.
It took action to get somewhere.

The ABC code is open: fork it if you like, set up a better-governed project, and compete for hashpower.
Or don't.

But the ceaseless whining on this thread is starting to put me off.

XT closing up shop is ultimately XT's decision.

SV already forked. Hopefully you guys are helping it just as much as you are circling around in this thread.
 
Last edited: