Gold collapsing. Bitcoin UP.

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
An artificially constrained limit limits capacity planning
Au contraire: "no limit at all" and no orderly upgrade but a "hash war" is what destroys capacity planning and the incentive to use the chain.

This controversial uncertainty is one of the biggest drivers of the investment hype cycle.
Because hype is what we need, instead of solid, reliable and predictable performance.

After we have removed the 32MB limit there will be a new theoretical maximum limit that no other blockchain can boast, [...]
I have news for you my friend. Go hang out in /r/CryptoCurrency a while and see what other blockchains boast about. Heck, nowadays they even boast about it in /r/btc.

The only negative result that I can predict from removing the limit is the inability to extract rent when that limit is reached; consequently, there is no negative cost to removing the limit.
Adjust your vision.

Others in the ecosystem have real costs to upgrade their systems. Miners do feel some duty of care not to let rogues split off half of the ecosystem / coin with some suitably large block. This leads to some general agreement about a maximum blocksize, even if you don't want that.

As an investor I want the limit removed.
Miners don't owe you anything special. The ones in this fight are probably bigger investors than you or me. They consider their interests carefully, and removing the limit in an unsafe way is not high on their agenda.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Does anyone think the impending Stress Test will help? I'm hopeful.

IF enough continuous transactions are generated to significantly enlarge the mempool, I would hope miners would feel compelled to generate large blocks - to avoid Core calls of "look at that huge mempool rising!" Maybe we could see blocks larger than 8MB. I hold out hope we'll see some larger than 16MB, but what I'd like most of all is to see sustained traffic, and the appearance of a business-as-usual chain humming along without issue.

IF the mainstream press ran a few stories, that could give us good exposure.

Obviously this will all be "artificial" in that it won't consist of much "normal" economic activity. It could, however, show that BCH can and will support activity at that level.
The stress test is a very good idea.

Not sure if miners will make blocks bigger than 8 MB (I guess that's their soft limit for now? And there's a 16 MB hard limit right now?).

It will be interesting to see the results. I would expect some nodes to go down and some websites to break, and that's good; much better that things break during a test than in the middle of a boom in adoption.

Press headlines will read both "stress test proves bitcoin can scale" and "stress test proves bitcoin cannot scale", and the people of the internet will once again endlessly debate the issues.
Not sure this simplistic explanation can really refute the conspiracies
but if it's true, then it's a negative for CTOR.

so... lol!
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
As I lay awake unable to sleep last night, a thought crossed my mind:

Maybe Xthin will perform better in practice than Graphene.
With an Xthin block, you can technically start validating the block _as you're receiving it_. You don't need to wait to receive the entire thing (I do appreciate this may weaken your DoS resistance, but bear with me). You can start building the Merkle tree and verifying all the transactions in the order they're coming in against the UTXO set. And none of this work is wasted.

Let's imagine that in the future we've improved the network layer so that two nodes connected with 1 Gbps links can achieve 100 Mbps goodput for Xthin messages. A 5 GB block could be sent with about 100 MB of Xthin block information (50x compression). With 100 Mbps of goodput, it would take about 8 seconds to receive the complete Xthin message.

But within a second, you'll already have the first chunk of the block (I know I'm making some assumptions here about packets arriving in the right order, etc.). You can start processing this chunk and building up the Merkle tree. Each second you know more of the block, and so can validate more and more.

If validation is faster than block reception, I don't see why you can't essentially be done validating shortly after you've received the last packet to complete the Xthin message.

If, on the other hand, validation is the bottleneck, then I don't see why the time it takes to receive the Xthin message really matters.


The situation with Graphene seems fundamentally different. Unless I'm mistaken, you can't perform set reconciliation with the Graphene message until you have the entire thing. Only then can you start validating, sorting the block into lexical order, and building up the Merkle tree.
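To make the contrast concrete, here's a rough sketch of the two validation flows. It's only an illustration: the UTXO view, the per-transaction validate_tx function, the chunking, the tx.raw field, and the reconcile / canonical_order callables are all placeholders I'm assuming for the sake of the example, not real node code.

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves):
    """Bitcoin-style Merkle root over a list of txid hashes
    (endianness details glossed over for brevity)."""
    if not leaves:
        return sha256d(b"")
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:                       # odd count: duplicate last hash
            level.append(level[-1])
        level = [sha256d(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

class StreamingXthinValidator:
    """Xthin-style flow: transactions arrive already in block order, so each
    chunk can be checked against the UTXO set as it comes in and the Merkle
    leaves accumulate incrementally -- none of the work is wasted."""
    def __init__(self, utxo_view, validate_tx):
        self.utxo_view = utxo_view        # placeholder UTXO set
        self.validate_tx = validate_tx    # placeholder per-tx validation
        self.leaves = []

    def on_chunk(self, txs):
        """Called for each chunk of transactions as it arrives off the wire."""
        for tx in txs:
            if not self.validate_tx(tx, self.utxo_view):
                raise ValueError("invalid transaction in block")
            self.leaves.append(sha256d(tx.raw))

    def finalize(self, header_merkle_root):
        """After the last chunk, only the root comparison remains."""
        return merkle_root(self.leaves) == header_merkle_root

def graphene_style_validate(graphene_msg, mempool, utxo_view, validate_tx,
                            header_merkle_root, reconcile, canonical_order):
    """Graphene-style flow: set reconciliation needs the *entire* message
    before the transaction set can be recovered, sorted into canonical
    (e.g. lexical) order, validated, and hashed into a Merkle tree."""
    txs = reconcile(graphene_msg, mempool)        # requires the complete message
    txs = canonical_order(txs)
    v = StreamingXthinValidator(utxo_view, validate_tx)
    v.on_chunk(txs)                               # all the work starts here
    return v.finalize(header_merkle_root)
```

In the 5 GB example above, the ~100 MB Xthin message arrives over roughly 8 seconds at 100 Mbps of goodput, so a streaming validator that keeps pace with reception would be nearly finished when the last packet lands, whereas the Graphene-style flow can only begin its validation work at that point.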
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Maybe Xthin will perform better in practice than Graphene. [...]
Even if validation were the current bottleneck for GB blocks, making Graphene's super-efficient block propagation a moot point, it's more than likely that validation will (for one reason or another) become blindingly fast in the future, at which point Graphene trumps Xthin.

In the end, even if Bitcoin Cash makes mistakes and picks one upgrade over another that turns out to be the sub-optimal choice, it's no big deal... as long as we don't get RBF and SegWit and shit like that, we'll be fine.
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
And what do we lose with it?
I don't really understand all the finer technological arguments on this one, but the numerous criticisms should be cause enough for us all to wait for more critique, study and testing. One question I had about lexicographical ordering and other forms of pre-consensus was manipulation: what's to stop a large miner from faking the ordering (proofs)? If enough of the network relied upon CTOR or other pre-consensus ideas, could pre-created fake ordering be used to fool SPV wallets and double-spend during the 0-conf period?
nChain's hashpower is a flyspeck compared to Bitmain's. How could they get their way?
I have a sneaky feeling you're wrong on this one; there's lots of hidden hash right now on both BCH and BTC. I doubt we truly know the size of CoinGeek/nChain's balls just yet. They're speaking very confidently for miners that don't have enough hash. Plus, as @AdrianX points out:

nChain holds the ace card. Bitmain is doing an IPO; they are on their best behaviour, making positive-ROI decisions. I find it unlikely they would try to explain negative-ROI mining decisions to their new investors during an IPO and call it strategic. Bitmain also has no motivation to limit the network capacity to 1 MB or 32 MB.
 


NewLiberty

Member
Aug 28, 2015
70
442
"It can become the reference client for all that is left of Bitcoin."

Anathema detected.

"BU has a unique position here. Not a main party in the war, it can serve those that would switch for profit. It can provide the best telemetrics and controls for miners to wage such wars."

Why would miners not keep their "best telemetrics and controls" to themselves?
It makes no sense to me that they would come to BU, an open-source client, for that, although it might be good to keep an open ear.
I expect miners WILL keep the best telemetrics and controls to themselves. And the reference client is just that: a reference, a starting point for miners to craft their custom implementations.

I think you make a funny joke by equating a reference client for BCH with something that coordinates all miners, but you do know that is not the role of a reference client (even as much as Core wanted to make it that). A reference client is the common starting point upon which the private innovations each miner makes for themselves are built.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
@NewLiberty : I was being serious, but you make a good point about the usual purpose of a reference client.

I'll join you in hoping BU can achieve something of that role in the future, and that if it does, the idea of power associated with that doesn't go to the heads of the developers.

However, in terms of actual hash, I'm hoping for much more of a balance of power among the clients used.
 

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
on the topic of the sep.1 stress test: i see it as at least as much stunt and narrative prop as engineering benchmark. but it has led people to test cool scripts, and it does gauge capacity. moreover, images like this are forceful:



there are at least two ways to participate (this can be done at any time, but especially on sep.1):

1. using scale.cash. send a little BCH to a provided browser-embedded wallet. without waiting for confirmation, start the test (which takes up until the next block conf to finish). keep your browser window open. i just sent 1270 txns for USD$2.

2. using the txBlaster2 bot via memo.cash. go to https://memo.cash/topic/txBlaster2_bot, tip the bot 15000 or more satoshi (i.e. just "like" any message by @txBlaster2), and it will send the txns. if you type a reply to that same msg. you liked, your text will show up in the txns you send. the bot will give you a link to seashells.io, which pipes the outcome of the test so you can visualize your transactions. this starts immediately, finishes after 1 confirmation. i just sent 1728 txns for USD$2.

both of these methods rely on the BITBOX API. there's r/btc discussion on whether this might be a limiting factor here.
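for the curious, the basic trick behind these senders is just chaining unconfirmed transactions: each new transaction spends the change output of the previous one, so nothing waits for a block. a very rough sketch below; the build_tx / sign_tx / broadcast helpers and the fee figure are placeholders i'm assuming for illustration, not actual BITBOX (or any other library) calls.

```python
from dataclasses import dataclass

@dataclass
class Utxo:
    txid: str
    vout: int
    value: int            # satoshis

FEE_PER_TX = 250          # assumed flat fee for a 1-input/1-output transaction
DUST_LIMIT = 546          # stop chaining once the change would become dust

def blast(funding_utxo, key, build_tx, sign_tx, broadcast, max_txns=2000):
    """Send a chain of 0-conf transactions, each spending the previous
    change output, and return the list of txids.  build_tx, sign_tx and
    broadcast are hypothetical placeholders for whatever wallet library
    is actually used."""
    txids = []
    utxo = funding_utxo
    while len(txids) < max_txns and utxo.value - FEE_PER_TX > DUST_LIMIT:
        change = utxo.value - FEE_PER_TX
        tx = build_tx(inputs=[utxo],                      # spends an unconfirmed output
                      outputs=[(key.address, change)])    # pay the change back to self
        txid = broadcast(sign_tx(tx, key))                # no waiting for confirmation
        txids.append(txid)
        utxo = Utxo(txid, 0, change)                      # chain off the new change
    return txids
```

i believe the real tools also fan the funding output out into many parallel outputs first, since node mempool policy limits how long a chain of unconfirmed transactions can get.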

since the code used to send these transactions works for the BTC chain (mutatis mutandis), i suppose one thing that's been learnt as a corollary is how to DDOS it on the cheap.

otherwise, you might just put some coin in circulation via satoshidice or https://playbch.cash .
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
as long as we don't get RBF and SegWit and shit like that, we'll be fine.
The thing with consensus rule changes is that they have economic externalities; you never know what they are until after the fact, and it's possibly impossible to revert to the state before the change.

The prudent way to move forward is to wait until the need for change becomes pressing. In the interim, build and test solutions, so you have options should the need arise.

The more urgent the need for change becomes, the more we'll see competing ideas and more focused effort at understanding the problems, and hopefully we'll select the most practical solutions with empirically tested assumptions.

The result is that the best solution gets implemented. The more competition, the more information and the more choice, the more confident you can be in the direction chosen and the easier it is to accept the externalities (censorship of competing ideas notwithstanding).

Premature optimization is said to be the root of all evil for just this reason. In economics, it's equivalent to the misallocation of resources.

Necessity is the mother of invention: a need or problem encourages creative efforts to meet the need or solve the problem.

The need for CTOR is not evident yet. In theory it is, and CTOR is very exciting, but we don't need to change the consensus rules and introduce potential externalities until there is a problem to solve and it's the best thing to do given the circumstances.
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
If you're interested in tokens and haven't watched this yet, get on it, and prepare to have your mind blown.
This is the best way I've seen so far. (It's almost like Satoshi planned it this way all along.) ;-)


@theZerg your thoughts on this?
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
The thing with consensus rule changes is that they have economic externalities; you never know what they are until after the fact [...] The need for CTOR is not evident yet. [...]
I mostly agree with everything you say, don't get me wrong.

But obviously not everyone sees it exactly the same way, and not everyone defines when the "need for change becomes pressing" the same way.

What if I said...
Bitcoin has been advertised as "the future of money" for almost TEN YEARS,
AND STILL we have ZERO proof this is actually achievable.
128 MB ???? LOLLOLOLOL THAT'S BULLISH!
And hell, we can't even pull that off.
Bitcoin URGENTLY NEEDS proof that it can indeed handle and sustain GB blocks, one might argue.

You say we need to "select the most practical solutions with empirically tested assumptions."
I say bitcoin itself is an assumption we haven't really tested yet!

Time is NOW!!!!!!
I will prudently follow ANYONE who is willing to push forward.
I will NOT let fear prevent us from gaining any momentum.
I want 128 MB ASAP and then no limit soon after,
I want all kinds of software and hardware upgrades.






:cool:
 

jtoomim

Active Member
Jan 2, 2016
130
253
on the topic of the sep.1 stress test: i see it as at least as much stunt and narrative prop as engineering benchmark
I think it was intended as a PR stunt by the people who were organizing it, but as an engineer, I am really looking forward to it because of the data it will provide. Personally, I think that having a stress test once a month is a great idea, as it will give us devs something to look forward to, a chance to prove our code in the real world.

Also, for miners, it provides a kick in the butt to improve their system performance (potentially 1 day's lost hashing) without providing a strong enough orphan rate incentive to trigger irreversible runaway pool hashrate centralization.
I would hope miners would feel compelled to generate large blocks - to avoid Core calls of "look at that huge mempool rising!" Maybe we could see blocks larger than 8MB. I hold out hope we'll see some larger than 16MB, but what I'd like most of all is to see sustained traffic, and the appearance of a business-as-usual chain humming along without issue.
I personally will not be mining any blocks larger than 8 MB. I already know that the pool software I use (p2pool) can't handle anything beyond 8 MB, and struggles with 4 MB. I also know that Bitcoin ABC was strained a bit in getblocktemplate by 5 MB blocks during the Aug 1st test run, so I would not be surprised if all other pools choose to limit their blocks to 8 MB as well.
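For anyone who wants to watch this side of things during the stress test, one quick-and-dirty approach is simply timing how long the node takes to answer getblocktemplate as the mempool grows. Below is a sketch of that idea; the RPC URL and the user/pass credentials are placeholders, exact getblocktemplate arguments can differ between node implementations, and it only measures wall-clock template assembly time, nothing more.

```python
# Quick-and-dirty getblocktemplate latency probe.  Assumes a local node with
# JSON-RPC enabled; RPC_URL and the user/pass credentials are placeholders.
import base64
import json
import time
import urllib.request

RPC_URL = "http://127.0.0.1:8332"
RPC_USER, RPC_PASS = "user", "pass"     # placeholder credentials

def rpc(method, params=None):
    payload = json.dumps({"jsonrpc": "1.0", "id": "probe",
                          "method": method, "params": params or []}).encode()
    req = urllib.request.Request(RPC_URL, data=payload)
    req.add_header("Content-Type", "application/json")
    token = base64.b64encode(f"{RPC_USER}:{RPC_PASS}".encode()).decode()
    req.add_header("Authorization", f"Basic {token}")
    with urllib.request.urlopen(req) as resp:
        return json.loads(resp.read())["result"]

while True:
    pool = rpc("getmempoolinfo")                   # current mempool size
    start = time.time()
    template = rpc("getblocktemplate")             # the call pool software makes
    elapsed = time.time() - start
    print(f"mempool {pool['size']} tx ({pool['bytes'] / 1e6:.1f} MB) -> "
          f"template with {len(template['transactions'])} tx in {elapsed:.2f} s")
    time.sleep(30)
```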
 

jtoomim

Active Member
Jan 2, 2016
130
253
I want 128 MB ASAP and then no limit soon after,
Just so you know, even if you increase the limit to 128 MB ASAP, there's currently no released full node software that is capable of mining a 128 MB block in 10 minutes. The performance bottleneck in AcceptToMemoryPool limits a high-end server to about 100 transactions per second, or around 20-25 MB per 10 minutes.
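For a sense of where the 20-25 MB figure comes from, here's the back-of-envelope arithmetic; the ~400-byte average transaction size is my own assumption for illustration, not a number from the measurement.

```python
# Back-of-envelope: ~100 tx/s through AcceptToMemoryPool, accumulated over
# one 10-minute block interval.  The ~400-byte average transaction size is
# an assumption for illustration.
atmp_rate_tps = 100                 # transactions per second
block_interval_s = 600              # 10 minutes
avg_tx_size_bytes = 400             # assumed average transaction size

txs_per_block = atmp_rate_tps * block_interval_s        # 60,000 transactions
mb_per_block = txs_per_block * avg_tx_size_bytes / 1e6  # about 24 MB

print(f"{txs_per_block} tx per block, roughly {mb_per_block:.0f} MB per 10 minutes")
```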