Gold collapsing. Bitcoin UP.

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
... at some point, you have to ask yourself why the BCH price keeps dropping along with the ratio. maybe it's b/c BCH devs are going down the same path of constant tinkering with the base money protocol? encouraged by the q6mo hard forks?
I don't believe that the market is nearly sophisticated enough to price in the minutiae of development paths. Heck, even keeping the 1MB limit was largely ignored by the market until txn fees hit $50 again and again, pounding users like a sledgehammer.

In any bear market there is a flight to quality, and BTC still retains some of this aura, especially as volumes have fallen such that blocks are no longer persistently maxed out. So it looks to the market like it is working OK again. Of course, we know that is temporary, and the LN unicorn will be MIA when next needed.

So, as long as BCH development continues, even in a fractious state, it is activity - and activity evidences commitment, as well as the potential for future technical progress and future high valuations.
 


Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
you have to ask yourself why the BCH price keeps dropping along with the ratio.
I think these are not directly related but arise from the same situation: not much is happening at the moment. For the price (of BTC), there was the excitement of nearly reaching $20k, but now we're in a vaguely bearish, vaguely sideways miasma where nothing is happening. I organize a local bitlunch thing, and we were pulling in 20+ attendees at one point. Yesterday, there were three of us (two of us were Cash supporters and one was a newb).

As for the ratio, we are seeing a return to normal on Bitcoin. The lack of interest in general, coupled with disadoption (and I'm sure Core supporters would want me to throw in the dismal SegWit capacity "increase" and batching), has led to less usage pressure and a lowering of fees to something perhaps palatable to those used to banking and PayPal. In this way, Bitcoin Cash has become a bit less relevant to the crypto community.

Make no mistake though: as it has before, attention, money and FOMO will come around again. It will come to BTC, and BTC will be found wanting. LN will not be ready, and if the block size is not increased, BTC will experience real, catastrophic congestion - not when the run-up is already pretty much done, as in December, but before it even gets started. I can't say for sure that BCH will be able to step in, but that's what I'm hoping. So let's stick at it.
To be fair (and I was looking at that thread earlier, after the post on Reddit), I'm not sure that anyone who provided their PGP keys there is a BU developer, and even if so, I don't see anyone who is obviously qualified or positioned to handle serious security threats. Also, the apparent attempt to gather info on all BU members in one place includes no PGP keys at all.

https://github.com/BitcoinUnlimited/BitcoinUnlimitedWeb/blob/master/src/data/members.json

So I'm not sure exactly what's going on there.

If there is a conference coming up, perhaps we could get some members to check IDs and sign each other's keys or something.

Note that I'm not judging anyone for anything here, but it does seem like the suggestion of publishing public keys for key individuals might be a good one.
 
this says a lot about the arrogance of some BCH devs and the lack of testing. I guess I'm not the only one to notice that @deadalnix can't be bothered to use PGP keys.

and you guys want to make major changes to the code q6mo?

https://medium.com/@coryfields/http-coryfields-com-cash-48a99b85aad4
In this case, I have to agree with Cypherdoc and (shockingly) CSW: Every change has an inherent risk. If there is no STRONG reason for changing something, don't change it.

It was especially this quote that struck me:

While looking through Bitcoin ABC’s change-logs earlier this year, I noticed that one of the most critical pieces of transaction validation had been refactored. The changes jumped out at me immediately because they seemed so unnecessary. Curious about the reasoning behind them, I took a look at the public review the changes had undergone. There was no justification other than “encapsulation,” it had only two reviewers, and review only lasted a week before the code was accepted.
So there was a critical change, which was not really necessary, and it went more or less unreviewed?

And now we get transaction ordering, Graphene, GROUP, CHECKDATASIGVERIFY, weak blocks - all for what? I'm excited about UTXO commitments. But most of the other things spark nothing in me but worry that they will introduce new bugs for nothing.

When the old opcodes were reactivated, I asked what use case they would ever have. No one gave me a good answer, and now, some months later, there is still nobody pointing out an interesting application.

I agree with cypherdoc about 0-conf: for me it works great, and I'm not aware of any merchant / business for which BCH double spends are a problem. But now we do double-spend relays, fraud proofs, weak blocks ...

Xthin made bandwidth spikes a non-factor and heavily reduced bandwidth costs overall. Graphene might be better, but it doesn't add much to the whole picture - while being the reason for wanting canonical transaction ordering, another strong and deep change.
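
To put rough numbers on that intuition, here is a toy comparison in Python. All figures (average transaction size, short-ID size, overheads) are illustrative assumptions, not measurements:

[code]
# Toy back-of-envelope comparison of block propagation costs.
# All numbers (tx size, overheads) are illustrative assumptions, not measurements.

TX_SIZE = 400        # assumed average transaction size in bytes
SHORT_ID = 8         # Xthin-style 64-bit short transaction IDs
BLOCK_TXS = 2500     # transactions in a ~1 MB block

full_block = BLOCK_TXS * TX_SIZE               # send every transaction again
xthin = BLOCK_TXS * SHORT_ID + 20_000          # short IDs + assumed bloom filter overhead
graphene = 3_000                               # IBLT + bloom filter, a few kB (assumed)

for name, size in [("full block", full_block), ("xthin", xthin), ("graphene", graphene)]:
    print(f"{name:>10}: ~{size / 1000:.0f} kB")
[/code]

Both schemes rely on the receiver's mempool already holding the transactions. Graphene's extra savings over Xthin are real, but small compared to what Xthin already saved relative to full blocks - which is the point above.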

Sorry, I don't want to be demotivating. The Bitcoin Cash development teams do a great job, every single one, and it is great that you use the free atmosphere to think about and discuss changes that would have been choked off early on Bitcoin Core.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
well, the good news is you can be sure some of the more malicious Core devs will be digging around in the BCH code. that has already been done 3x with BU and now once with ABC.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,994
now that's an analogy i can relate to.

actually, the blind spot doesn't affect a one-eyed person. it's too small, and the brain fills it in anyway, so the one-eyed person doesn't even notice it.

so yeah, that's why you don't "fix" it.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Christoph Bergmann :
And now we get transaction ordering, Graphene, GROUP, CHECKDATASIGVERIFY, weak blocks - all for what?
I am obviously a bit biased here. But I still think there are different degrees of change here, and we're going back to the issue of 'what is consensus critical' that came up back in the Core days:

I started working on weak blocks exactly because it is one area that allows a very careful and gradual approach to implementation. You can start mining weak blocks (with any fraction of your hashpower) purely as "waste production" and then see how well they propagate through the network. The next step is to build on top of a previous weak block and see how well that goes. You can (and should) always keep a regular, non-weakblocks node running in parallel to switch over to, as a fail-safe. This way, it can be tested at a pace everyone's comfortable with - and also switched off again, should any problems arise!

Except for a common protocol spec, it doesn't need a lot of coordination.
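
For illustration, the core of a weak block is just a share-style proof of work below full difficulty. A minimal sketch (the function and parameter names are mine, and real weak-block proposals differ in detail):

[code]
import hashlib

def meets_target(header: bytes, target: int, weakness: int = 1) -> bool:
    """A block is 'weak' if its hash meets target * weakness instead of target.

    weakness=1 is a full block; weakness=16 accepts hashes that are 16x
    easier to find, so weak blocks arrive ~16x as often. Sketch only:
    real weak-block proposals differ in detail.
    """
    h = int.from_bytes(
        hashlib.sha256(hashlib.sha256(header).digest()).digest(), "little")
    return h <= target * weakness
[/code]

Because weak blocks are advisory, a node that ignores them loses nothing - which is what makes the gradual, opt-in deployment described above possible.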

Graphene shares the same properties (or even more so).

Note also that you can fuck up a single implementation with any seemingly innocuous or benign change.

There's code in an XT PR (adapted from some experimental early code on Core) that will allow you to switch out the database layer in bitcoind and replace it with LMDB, which brings some performance improvements. But it is a change that can, of course, completely screw up node operation if something goes wrong. I like it, though, and would like to port it over to BU - also because it allows one to play further with more efficient DB implementations.
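
To illustrate the shape of such a change: the node would talk to a narrow key-value interface, and the backend behind it can be swapped without touching consensus logic. A minimal sketch, assuming the third-party `lmdb` Python package; the class names here are mine, not bitcoind's:

[code]
from abc import ABC, abstractmethod

class KVStore(ABC):
    """Narrow storage interface the rest of the node would code against."""
    @abstractmethod
    def put(self, key: bytes, value: bytes) -> None: ...
    @abstractmethod
    def get(self, key: bytes) -> bytes | None: ...

class MemoryStore(KVStore):
    """Trivial reference backend."""
    def __init__(self):
        self._d = {}
    def put(self, key, value):
        self._d[key] = value
    def get(self, key):
        return self._d.get(key)

class LMDBStore(KVStore):
    """LMDB-backed store; LMDB is memory-mapped, which is where the
    performance gains would come from."""
    def __init__(self, path: str):
        import lmdb  # pip install lmdb
        self._env = lmdb.open(path)
    def put(self, key, value):
        with self._env.begin(write=True) as txn:
            txn.put(key, value)
    def get(self, key):
        with self._env.begin() as txn:
            return txn.get(key)
[/code]

The point being: validation code only ever sees KVStore, so the backend can be swapped (or rolled back) without touching consensus logic.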

Implementations are free to refactor their internals and I don't understand any real uproar about it.

If one is concerned about ABC, as you seem to be, I think that's fine, but you can run an older implementation or a different one that doesn't share the refactoring - modulo practical problems regarding dominance of the ABC implementation, of course ... and also any consensus changes that would need to be backported to the implementation you use before the refactoring, which, I guess, is part of the issue you are talking about.

However, as others have pointed out, we can't run 8GiB blocks on current-day nodes, as the code simply isn't there yet.

We can, however, likely run 8GiB blocks using a protocol that has a specification very close to the implicit one the current implementations use. But the actual implementations are a very different matter.

In contrast, e.g. OP_CHECKDATASIGVERIFY needs a coordinated effort across all implementations to get it going, of course.

Given that I see the opportunity for a new 0-conf safety feature using this new opcode, I now wonder whether Haipo's goal is to stop the regular schedule before this next fork, or to let it happen and then move over to a more considered, miner-based approach after that.

As I said, I am not opposed to a slow, even very slow, development pace either and am fine with either path. Though I have to say I grew quite a bit more fond of the two OP_CHECKDATASIG* opcodes after seeing that they could have this IMO real use case solving IMO real problems.

And as you might recall, I was more sceptical before - I still don't believe the betting schemes it enables will be an important feature of BCH, for example. In contrast to GROUP tokens, it is also a very local change to the script interpreter, very much in line with all the other opcodes that were introduced in Bitcoin in the past.

Heck, you might theoretically (though I didn't check, and the effort to try would be quite big) even simulate OP_CHECKDATASIG using the current opcodes by implementing EC math in script. But the resulting scripts would certainly be huge. If so, OP_CHECKDATASIG would basically just be a compression scheme that compresses a much more complex expression down into a single opcode.
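
For reference, the semantics boil down to verifying an ECDSA signature over arbitrary data instead of over a transaction digest. A minimal sketch using the third-party `cryptography` package (the curve and hashing details are my assumptions about the spec):

[code]
from cryptography.exceptions import InvalidSignature
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

def checkdatasig(sig: bytes, message: bytes,
                 pubkey: ec.EllipticCurvePublicKey) -> bool:
    """Verify an ECDSA/secp256k1 signature over SHA256(message).

    Unlike OP_CHECKSIG, the message is arbitrary data rather than a
    transaction digest - which is what enables oracle-style scripts."""
    try:
        pubkey.verify(sig, message, ec.ECDSA(hashes.SHA256()))
        return True
    except InvalidSignature:
        return False

# Demo: sign and verify an oracle-style message.
priv = ec.generate_private_key(ec.SECP256K1())
msg = b"BCH/USD=550 @ 2018-08-10"
sig = priv.sign(msg, ec.ECDSA(hashes.SHA256()))
assert checkdatasig(sig, msg, priv.public_key())
[/code]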

I think the current plans for the November hard fork are OK in terms of change complexity, with the biggest risk likely being the replacement of the current transaction order. I don't follow ABC closely enough to know - but the refactoring you're talking about is very likely related to supporting this change ...

I'd like to remind everyone that there's one final obvious change that needs to happen before full ossification, though - the 32MB limit needs to be replaced with something, likely based on miner voting ...

That all said, I fully support Haipo when he says BCH's direction should be based upon miner voting. Maybe we should all work on enabling a good way to do so, at least after the next November HF.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
I'd like to remind everyone that there's one final obvious change that needs to happen before full ossification, though - the 32MB limit needs to be replaced with something, likely based on miner voting ...
Agreeing with @awemany. If the mood is changing towards a "lockdown" of the protocol, then the 32MB limit can be replaced with Bitcoin XT's modified version of Jeff Garzik's BIP100:
https://bip100.tech/

This is good enough for a permanent solution, without the leap of faith into full-blown emergent consensus.
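
As a toy illustration of the percentile-vote idea (simplified, NOT the exact BIP100 rules): miners publish a size vote per block, and the new limit is taken from a low percentile of the window, so a large majority is needed to raise it:

[code]
def new_block_size_limit(votes: list[int], current: int,
                         percentile: float = 0.2, max_step: float = 2.0) -> int:
    """Toy miner-vote rule (simplified; NOT the exact BIP100 algorithm).

    votes: one size vote per block over the window (e.g. 2016 blocks).
    The new limit is the vote at the 20th percentile, so ~80% of blocks
    must vote at or above a size to raise the limit, and movement per
    period is capped at max_step in either direction."""
    ranked = sorted(votes)
    candidate = ranked[int(len(ranked) * percentile)]
    lo, hi = int(current / max_step), int(current * max_step)
    return max(lo, min(hi, candidate))

# Example: ~80% of miners vote 64 MB, ~20% hold out at 32 MB.
votes = [64_000_000] * 1600 + [32_000_000] * 416
print(new_block_size_limit(votes, 32_000_000))  # 32 MB: the holdouts prevail
[/code]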
 
@awemany @solex

I don't think there should be a final lock-in. There are a lot of exciting future changes which could be better with, or require, a hard fork: UTXO commitments, Schnorr signatures, BLS signatures, Confidential Transactions ... I'm in favor of keeping the 6-month periods, but not of doing hard forks for the sake of hard forking. Rather, keep it as a hard-fork option day.

A big problem, imho, is "premature optimization". We don't need to lift the 32MB limit, optimize bandwidth consumption beyond thin blocks, find pre-consensus for blocks, and so on. None of this is needed now. It is good to have the concepts ready for when they are needed (when Bitcoin Cash has 20MB blocks, when it is used by brick-and-mortar merchants around the world, and so on). Right now Bitcoin Cash needs adoption - and using hard forks to implement unneeded changes rather harms adoption.

@awemany Thank you for the long answer. I fail to understand many of the technical things in it. I can only say that I'm far from wanting you to stop optimization efforts. It made me sad to argue against weak blocks, because I was really happy to learn that you had started to work on them. I am also not against new opcodes in general. I just want to remind everyone that no risk should be taken when the benefit is not clear.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Christoph Bergmann : I hear you. I agree that this 'hard fork day' might be basically leading to a bit of 'let's keep busy with changes and devs gotta dev'.

That said, I think the proposed changes so far all come with absolutely benign intent, except for the extra unknown risk they carry. And that is definitely something to keep in mind, and something devs are not incentivized enough to care about.

Heck, BU doesn't have the new CHECKDATASIG* opcodes in the code yet, and they need lots of testing. And I have a feeling the transaction-reordering issue will lead to a big pile of work, of which nothing has been done yet in anything but ABC (AFAIK). That does indeed bother me a bit.

Though I am now favorable on the former change (having found a potential use for it that I find valuable :D) and am OK with the latter.

I think the HF day should be replaced with a staged approach: a "beauty contest" for sufficiently well-specified proposals (for everything that is consensus-critical, or sufficiently close to it), where miners vote during a period (a month or so) on whether a proposal should go into a second or even third iteration, with eventual final acceptance as the new on-chain consensus after two or three iterations of this kind of voting - and maybe with feedback from the miners on how long the iterations should take, so that there is sufficient time to test everything well on all implementations and supported platforms. I think @singularity had such a thing in mind, and I support anyone who wants to move things in that direction.
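
A toy model of what such staged acceptance could look like (the threshold and iteration count here are my assumptions for illustration, not a proposal detail):

[code]
def staged_vote(rounds: list[float], threshold: float = 0.66,
                needed: int = 3) -> bool:
    """Toy model of staged acceptance: a proposal must clear the approval
    threshold in `needed` consecutive voting periods before it becomes
    consensus. All parameters are assumptions for illustration."""
    streak = 0
    for approval in rounds:  # fraction of hashpower voting yes, per period
        streak = streak + 1 if approval >= threshold else 0
        if streak >= needed:
            return True
    return False

assert staged_vote([0.70, 0.72, 0.80])            # accepted after 3 good periods
assert not staged_vote([0.70, 0.60, 0.80, 0.90])  # streak broken; needs more time
[/code]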

But I also think we'll eventually end up there. I also think it is valuable that so many people are rather in favor of putting on the brakes and keeping a slow and steady approach. As I have said, I favor this as well. I believe others are worried that we might lose out to competing 'move fast and break things' altcoins, but I sense quite a strong sentiment from most old-timers here that moving slow but steady is the right approach. In contrast, e.g. @theZerg was a bit worried on this lack-of-shiny-features front, I believe, so maybe he should take note of this sentiment from you, @cypherdoc, @Peter R and others :)
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
If the 32MB cap is a limitation of the network layer and not part of the actual consensus, then arguably it is already lifted, and all that's required is an upgrade to the network code. I believe this was originally the case for the block size limit as well.

Again, we run into issues with the "implementation as spec" paradigm.
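
For context, and as I understand the Satoshi-lineage code (treat the constant as an assumption): the 32MB figure comes from a serialization sanity limit in the networking code, not from a consensus rule. Sketched as layers:

[code]
# Sketch of the layering point: the 32 MB cap is a network/serialization
# sanity check, separate from the consensus rules for block validity.
MAX_PROTOCOL_MESSAGE_SIZE = 0x02000000  # 32 MiB, like MAX_SIZE in serialize.h

def accept_network_message(payload: bytes) -> bool:
    """Network layer: refuse oversized messages before even parsing them."""
    return len(payload) <= MAX_PROTOCOL_MESSAGE_SIZE

def connect_block(raw_block: bytes) -> bool:
    """Consensus layer: block validity rules live here, not above.
    (Stub for illustration only.)"""
    raise NotImplementedError
[/code]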
I think an important question to ask when people want new features is "why Bitcoin?" (Cash or Core). There is always the option of alts, or even minority forks of BCH or BTC, if you want to try out cool new features. If the reason is "because it's already popular", I have to question the motivations. (This, in my opinion, is why BTC did not see a block size upgrade: certain people wanted to use an already established currency as a platform for their gee-whiz ideas.) Changes to the central functionality of established and mature software should be needs-based. The block size limit change was exactly that.

Note that I don't consider network-layer upgrades like Xthin or Graphene to be an issue.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Richy_T, on blocksize: Maybe that's a good way to see it, if only to nip any attempt to create another blocksize war in the bud ...


Note that I don't consider network-layer upgrades like Xthin or Graphene to be an issue.
I am curious: where would you place weak blocks, then?

I still place them in the same bin, but my criterion there is something like: "Can the update happen partially, and without full coordination of all participants?"
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Weak blocks are fine if I understand them correctly. If you think of the consensus as a black box (as you should be able to with true "core" software, or as libconsensus was intended to be), then the goal is that you can feed in a valid block and it's accepted. I'm assuming that weak blocks are optional here and are not required for participation.
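
In that black-box view, the consensus boundary is as narrow as the following sketch (libconsensus itself only ever exposed script verification, so this is an aspiration, not an existing API):

[code]
from typing import Protocol

class Consensus(Protocol):
    """The whole consensus engine as a black box: bytes in, verdict out.
    Weak blocks, Xthin, Graphene etc. live outside this boundary, which
    is why they can be deployed without touching it."""
    def verify_block(self, raw_block: bytes, chain_tip: bytes) -> bool: ...
[/code]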