BCH; try this one on: "Schnorr is an effective blocksize increase"

Man, the amount of critical thinking in this thread has hit a new low.
"Trust us, we have the best devs."

Looks like the sigop counting code for OP_CHECKDATASIG was faulty. Protocol changes ftw!
in which Séchet does not waste the opportunity to kick some sand at BU, affirming that it has more technical debt than ABC, despite the fact that only the latter implementation crashed today, on his watch.
I've commented on this before, but it is truly mind-boggling how fundamentally broken the bitcoind client is, with global locks everywhere and its monolithic design. The sigop compute topic above is one of a million examples of how not to do things.

2a- The reason behind limiting sigops is that signature verification is usually the most expensive operation performed while validating a transaction. Without limiting the number of sigops a single block can contain, an easy DoS (denial of service) attack can be constructed by creating a block that takes a very long time to validate because its transactions require a disproportionately large number of sigops. Blocks that take too long to validate (i.e. ones with far too many sigops) cause a lot of problems, including slow block propagation, which disrupts user experience and can give the incumbent miner a non-negligible competitive advantage in mining the next block. Overall, slow-validating blocks are bad.
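The per-block cap described above can be sketched as a simple admission check. This is a minimal illustration only: the names (`MAX_BLOCK_SIGOPS`, `Tx`, `accept_block`) and the cap value are assumptions for the sketch, not the actual bitcoind code.

```python
# Minimal sketch of a per-block sigop cap; names and the cap value are
# illustrative, not taken from the real bitcoind source.
MAX_BLOCK_SIGOPS = 20_000  # hypothetical consensus cap

class Tx:
    def __init__(self, sigops: int):
        self.sigops = sigops  # signature operations this tx requires

def block_sigops(txs) -> int:
    # Validation cost scales roughly linearly with total sigops.
    return sum(tx.sigops for tx in txs)

def accept_block(txs) -> bool:
    # Reject blocks whose worst-case validation cost exceeds the cap,
    # bounding the time an adversarial block can force validators to spend.
    return block_sigops(txs) <= MAX_BLOCK_SIGOPS
```

The point of the cap is exactly this one comparison: it turns an unbounded validation cost into a bounded one, at the price of a new consensus rule.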
in which Séchet does not waste the opportunity to kick some sand at BU
The hate versus Craig must be really very strong to choose BCH instead of BSV with a boss like this one.

"The multiple implementation motto is of no help on that front. For instance, the technical debt in BU is even higher than in ABC (in fact I raised flags about this years ago, and this led to numerous 0-days)."
I built a PoC sig validation server over Xmas with a basic network protocol and a cluster-aware client. On a single home machine I got 120k sigops/sec. I was able to replicate that performance with a small cluster of Digital Ocean servers for a total cost of $80/month. That gets us to GB-order-of-magnitude blocks. I was going to commission a team to work on hardware sig validation solutions, but this experiment convinced me it's a non-problem.

I've commented on this before, but it is truly mind-boggling how fundamentally broken the bitcoind client is, with global locks everywhere and its monolithic design. The sigop compute topic above is one of a million examples of how not to do things.
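A throughput number like the 120k sigops/sec above comes from a loop benchmark. A sketch of how such a measurement could look, with the caveat that `fake_verify` is a hypothetical stand-in: a real benchmark would call an actual verification routine (e.g. via libsecp256k1 bindings), which is far more expensive:

```python
import time

def fake_verify(i: int) -> bool:
    # Placeholder for a real signature verification; real ECDSA/Schnorr
    # checks are much costlier, which is why the rate is worth measuring.
    return (i * 2654435761) % (2 ** 32) >= 0

def sigops_per_sec(n: int = 100_000) -> float:
    # Time n verifications and report the sustained rate.
    start = time.perf_counter()
    for i in range(n):
        fake_verify(i)
    return n / (time.perf_counter() - start)
```

Swapping `fake_verify` for a real library call turns this into the kind of experiment described above.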
Here the compute required for block validation scales roughly linearly with the number of sigops in a block. So what was their solution? It was to limit the number of sigops allowed in a block (yet another example of a soft-fork limit being added under the covers).
Any micro-services-101 approach would instead be to create small workers to perform this work and farm it out to them in parallel. Initially they would be threads in the process, but from there it's easy to get to massive scale by moving workers to remote nodes. But no, the Core team adds a soft-fork limit to the protocol to make up for their crappy client code. Any scalable node design is going to have to start from scratch.
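The worker-farm idea above can be sketched with a pool of in-process workers. Assumptions: `check_sig` is a hypothetical stand-in predicate, not real crypto; in a real node the workers would run actual signature verification, and the same interface could later dispatch to remote machines instead of local threads:

```python
from concurrent.futures import ThreadPoolExecutor

def check_sig(job) -> bool:
    # Stand-in for an ECDSA/Schnorr verification so the sketch is runnable;
    # a real worker would verify (sig, msg, pubkey) with a crypto library.
    sig, msg = job
    return sig == (msg * 31) % 1000  # hypothetical validity check

def verify_block_sigs(jobs, workers: int = 4) -> bool:
    # Each signature check is independent, so the checks can be farmed out
    # in parallel; the block's signatures are valid iff every check passes.
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return all(pool.map(check_sig, jobs))
```

Because the checks share no state, scaling up is a matter of adding workers, local threads first, remote nodes later, rather than adding a protocol limit.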
I almost puked over my keyboard when I read Amaury's post, in which Séchet does not waste the opportunity to kick some sand at BU, affirming that it has more technical debt than ABC, despite the fact that only the latter implementation crashed today, on his watch.
Based on Reddit comments, I thought Roger mines with both BU and ABC.

"And why are people like Roger (who mines with BU) OK with the braindrain that Amaury is responsible for?"
Apart from some questionable voting (about which I voiced my dissatisfaction long ago) and critical resignation letters, what kind of attack on BU are you talking about here?

"BU is under attack by you and your idiotic followers"