@Christoph Bergmann :
And now we get transaction ordering, graphene, group, checkdatasigverify, weakblocks - all for what?
I am obviously a bit biased here. But I still think there are different degrees of change involved, and we're going back to the issue of 'what is consensus critical' that came up back in the Core days:
I started working on weakblocks exactly because it is one area that allows a very careful and gradual approach to deployment. You can start mining (with any fraction of your hashpower) in "weakblocks waste production" mode and then see how well they propagate through the network. The next step is to build on top of a previous weak block and see how well that goes. You can (and should) always keep a regular, non-weakblocks node running in parallel to switch over to, as a failsafe. This way, it can be tested at a pace everyone is comfortable with - and also switched off again, should any problems arise!
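To make the "any fraction of your hashpower" point concrete: a weak block is just a block whose header hash meets a relaxed target. A minimal sketch of that check in bitcoind-style C++ - the weakness factor and the function name are my own illustration, not the actual weakblocks code:

```cpp
#include "arith_uint256.h"  // bitcoind's 256-bit integer type for target math
#include "uint256.h"

// Hypothetical weakness factor: a weak block carries 1/16th of the work
// of a regular block, i.e. its target is 16x easier to hit.
static const uint32_t WEAKNESS_FACTOR = 16;

// Returns true if headerHash qualifies as a weak block under the target
// encoded in nBits. Names here are illustrative only.
bool IsWeakBlock(const uint256& headerHash, uint32_t nBits)
{
    bool fNegative, fOverflow;
    arith_uint256 target;
    target.SetCompact(nBits, &fNegative, &fOverflow);
    if (fNegative || fOverflow || target == 0)
        return false;

    // Relax the target by the weakness factor.
    arith_uint256 weakTarget = target;
    weakTarget *= WEAKNESS_FACTOR;

    return UintToArith256(headerHash) <= weakTarget;
}
```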
Except for a common protocol spec, it doesn't need a lot of coordination.
Graphene shares the same properties - if anything, even more so.
Note also that you can fuck up a single implementation with a seemingly innocuous or benign change.
There's code in an XT PR (adapted from some experimental early code on Core) that allows you to swap out the database layer in bitcoind and replace it with LMDB, which brings some performance improvements. But it is a change that can, of course, completely screw up node operation if something goes wrong. I like it, though, and would like to port it over to BU - also because it allows one to experiment further with more efficient DB implementations.
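The reason such a swap can be contained is that the backend can sit behind a small key/value interface, so consensus code never touches LevelDB or LMDB directly. A rough sketch of that idea - not the actual XT patch; the class names and the option name are made up:

```cpp
#include <map>
#include <memory>
#include <string>

// Minimal key/value interface; the rest of the node only ever talks to
// this, so LevelDB vs. LMDB becomes a startup-time choice instead of a
// detail baked into consensus code.
class DBBackend {
public:
    virtual ~DBBackend() = default;
    virtual bool Read(const std::string& key, std::string& value) const = 0;
    virtual bool Write(const std::string& key, const std::string& value) = 0;
};

// Stand-in backend (an in-memory map) so the sketch is self-contained;
// real backends would wrap leveldb::DB or an LMDB environment instead.
class MemoryBackend : public DBBackend {
    std::map<std::string, std::string> store;
public:
    bool Read(const std::string& key, std::string& value) const override {
        auto it = store.find(key);
        if (it == store.end()) return false;
        value = it->second;
        return true;
    }
    bool Write(const std::string& key, const std::string& value) override {
        store[key] = value;
        return true;
    }
};

// Hypothetical factory keyed on a -dbbackend=... style option.
std::unique_ptr<DBBackend> MakeBackend(const std::string& name) {
    // In a real patch: if (name == "lmdb") return an LMDB-backed instance.
    return std::make_unique<MemoryBackend>();
}
```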
Implementations are free to refactor their internals, and I don't really understand the uproar about it.
If one is concerned, as you seem to be about ABC, I think that's fine - you can run an older implementation or a different one that doesn't share the refactoring. Modulo practical problems regarding the dominance of the ABC implementation, of course ... and also modulo any consensus changes that would need to be backported to the pre-refactoring implementation you run, which, I guess, is part of the issue you are talking about.
However, as others have pointed out, we can't run 8GiB blocks on current-day nodes, as the code simply isn't there yet. We can, however, likely run 8GiB blocks using a protocol whose specification is very close to the implicit one the current implementations follow. But the actual implementations are a very different matter.
In contrast, e.g. OP_CHECKDATASIGVERIFY needs a coordinated effort across all implementations to get it going, of course.
Given that I saw the opportunity for a new 0-conf safety feature using this new opcode, I now wonder whether Haipo's goal is to stop the regular schedule before this next fork, or to let that one happen and then move over to a more considered, miner-based approach afterwards?
As I said, I am not opposed to a slow, even very slow, development pace either and am fine with either path. Though I have to say I grew quite a bit more fond of the two OP_CHECKDATASIG* opcodes after seeing that they could have this (IMO) real use case solving (IMO) real problems.
And as you might recall, I was more sceptical before - I still don't believe the betting schemes it enables will be an important feature of BCH, for example. In contrast to group tokens, it is also a very local change to the script interpreter, very much in line with all the other opcodes that were introduced in Bitcoin in the past.
Heck, you might theoretically (though I didn't check, and the effort to try would be quite big) even simulate OP_CHECKDATASIG using the current opcodes by implementing EC math in script. But the resulting scripts would certainly be huge. If so, OP_CHECKDATASIG would basically be just a compression scheme, compressing a much more complex expression down into a single opcode.
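For reference, the semantics that need the coordinated support are quite small: the opcode takes a signature, a message and a pubkey, and verifies the signature over the hash of the message. A sketch under my reading of the proposal - not ABC's actual code; the interpreter case would pop the three stack items and push this result:

```cpp
#include <vector>
#include "crypto/sha256.h"  // CSHA256
#include "pubkey.h"         // CPubKey
#include "uint256.h"

typedef std::vector<unsigned char> valtype;

// Core of OP_CHECKDATASIG: verify vchSig over SHA256(vchMsg) with
// vchPubKey. Unlike OP_CHECKSIG, the signed message is arbitrary stack
// data rather than the spending transaction - which is what enables
// oracle scripts and double-spend-proof (0-conf forfeit) constructions.
bool CheckDataSig(const valtype& vchSig, const valtype& vchMsg,
                  const valtype& vchPubKey)
{
    uint256 hash;
    CSHA256().Write(vchMsg.data(), vchMsg.size()).Finalize(hash.begin());
    return CPubKey(vchPubKey).Verify(hash, vchSig);
}
```

That is also where the 0-conf angle comes from: a script can demand a signature over arbitrary data, such as a signed conflicting transaction serving as a double-spend proof.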
I think the current plans for the November hardfork are OK in terms of change complexity, with the biggest risk likely being the replacement of the current transaction order. I don't follow ABC closely enough to know for sure - but the refactoring you're talking about is very likely related to supporting this change ...
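For context: the proposed replacement is a canonical ordering, i.e. non-coinbase transactions sorted by txid instead of topologically. The consensus check itself is tiny - the risk sits in all the mempool and block-assembly code that assumes topological order. A sketch of the check, my own illustration assuming the lexicographic proposal:

```cpp
#include <vector>
#include "primitives/transaction.h"  // CTransactionRef

// Returns true if the non-coinbase transactions of a block are sorted by
// txid (uint256 compares lexicographically). Not ABC's actual code.
bool IsCanonicallyOrdered(const std::vector<CTransactionRef>& vtx)
{
    // vtx[0] is the coinbase and is exempt from the ordering rule, so
    // start comparing at the second and third transactions.
    for (size_t i = 2; i < vtx.size(); ++i) {
        if (!(vtx[i - 1]->GetHash() < vtx[i]->GetHash()))
            return false;
    }
    return true;
}
```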
I like to remind everyone that there's one final obvious change that needs to happen before full ossification, though - the 32MB limit needs to be replaced with something, likely based on miner voting ...
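Just as one illustration of the shape that could take - BIP100-style voting, where miners publish a size vote (e.g. in the coinbase) and the limit follows a low percentile of a recent window. All numbers and names here are hypothetical:

```cpp
#include <algorithm>
#include <cstdint>
#include <vector>

// Hypothetical BIP100-style limit computation: the effective limit for
// the next period is a low percentile of the votes in the last window,
// so a small minority can neither raise nor lower it on its own.
uint64_t EffectiveBlockSizeLimit(std::vector<uint64_t> votes,
                                 uint64_t currentLimit)
{
    if (votes.empty())
        return currentLimit;

    // 20th percentile: 80% of voters must be at or above a value for it
    // to become the new limit (illustrative parameter).
    std::sort(votes.begin(), votes.end());
    uint64_t candidate = votes[votes.size() / 5];

    // Dampen: allow at most a 2x move per adjustment period.
    candidate = std::min(candidate, currentLimit * 2);
    candidate = std::max(candidate, currentLimit / 2);
    return candidate;
}
```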
That all said, I fully support Haipo when he says BCH's direction should be based upon miner voting. Maybe we should all work on enabling a good way to do so, at least after the next November HF.