Ok, this is not true.
Firstly, @theZerg, @Peter R, @Peter Tschipper and I set up the Gigablock Testnet, an actual geographically distributed network of nodes based on BU (8 miners and 12 transaction generators), and proved we can achieve 1GB blocks, even if not in a sustained fashion.
We went through all the software bottlenecks we found and removed them one by one. The most significant was the one limiting transaction admission to the mempool to a mere 100 transactions per second.
All those improvements have been ported to the BU code base; others are in the pipeline and we are testing them.
So we do think 64MB blocks are possible. We do think 1GB blocks are possible. More to the point, we produced a 1GB block. So saying that "no one thought" it was possible is plain wrong.
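To put that 100 tx/s figure in perspective, here is a rough back-of-envelope sketch (the 250-byte average transaction size and 10-minute block interval are assumptions of mine, not numbers from the Gigablock report):

```python
# Back-of-envelope: what a ~100 tx/s mempool admission cap means for block size.
# All numbers below are assumptions, not figures from the testnet report.

ADMISSION_RATE_TPS = 100      # assumed admission cap before the fix
BLOCK_INTERVAL_S = 600        # average block interval (10 minutes)
AVG_TX_BYTES = 250            # assumed average transaction size

txs_per_block = ADMISSION_RATE_TPS * BLOCK_INTERVAL_S    # 60,000 txs
bytes_per_block = txs_per_block * AVG_TX_BYTES            # ~15,000,000 bytes

print(f"txs admitted per block interval: {txs_per_block:,}")
print(f"max organically-filled block at that rate: ~{bytes_per_block / 1e6:.0f} MB")
# => ~15 MB per block, nowhere near 1 GB, which is why removing the admission
#    bottleneck was a prerequisite for the gigablock results.
```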
>We went through all the software bottlenecks we found and removed them one by one. The most significant was the one limiting transaction admission to the mempool to a mere 100 transactions per second.
this is a mischaracterization. you missed this one. it only became apparent b/c of the stress test.
of course i'm provoking BU. i'm fully aware of your excellent accomplishments and work to date with the gigablock testnet. however, when the rubber met the road for you guys to actually take these big blocks and concepts live to mainnet, imo, you failed to step up. do you deny the back and forth arguments that the BU guys and the SV supporters have had in this thread over the last 6 months or so of this debate about whether to fix the limit issue now vs later?
@Peter R, @theZerg, and @solex have all come out in favor of keeping a limit in place right now in deference to the ABC/CTOR/DSV plan, and you haven't emphasized an adaptive solution either. several of you hate CSW and some have had direct conflicts with him. i'm not aware of all the gory details of those disagreements but i do think they have gotten in the way of permanently resolving the blocksize limit problem, which i contend is still the biggest problem in Bitcoin. like i keep saying, i don't care if CSW is a fraud, a cheat, a liar, a FakeSatoshi, whatever. all i care about is the current SV plan for 128MB and re-enabling opcodes, and then his further plans to remove the limit. that's good enough for me right now and i'm not afraid of forking away from his ass if and when he decides to go rogue. the new pluses on the SV cake right now are the huge accomplishments he is proving are possible in terms of blocksizes w/o orphans.
I mean what really matters is not the size of the biggest block you found, but tps. if you're able to achieve a max block size of 128MB, but your tps remains capped at 100... well, you didn't achieve much, imho.
i disagree with this. i see the BU tx throughput issue you solved as a separate problem/step from block propagation.
first point, i keep saying it's great how fast you guys solved the former in BU under duress. would you have ever realized it was a problem w/o the stress test? no. even Greg didn't realize he created this problem. this is what i mean when i say "necessity is the Mother of invention". all your voluntary testing and tweaking failed to find this problem, b/c you were never stressed in terms of high tx throughput, and b/c you guys volunteer and don't have the money or foresight to have tested this. and the only reason the stress tests revealed this issue is b/c of the real big blockists wanting to probe and solve the blocksize limit, which is the biggest problem in Bitcoin.
second point, how do you know all the tx's in the 64MB block were received into the mempool from the shotgun and network propagation? i've seen talk that they were self-constructed. if true, the tx throughput issue (TPS) is not the issue. furthermore, once miners have enough tx's in the mempool, or even if they don't, since they can self-construct huge blocks, they push that huge block to the network and hope it propagates w/o getting orphaned. the fact that SV was able to propagate these huge blocks successfully w/o getting them orphaned is still a huge accomplishment. if anything, i'd think the 40m delay may be from having to validate such a large block. but i'm not sure about that part.
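for what it's worth, here's a rough order-of-magnitude check on that validation guess (every number below is an assumption, not a measurement):

```python
# Rough order-of-magnitude check: single-core signature validation of a 64 MB block.
# Every number here is assumed, not measured.

BLOCK_BYTES = 64 * 1_000_000
AVG_TX_BYTES = 250            # assumed average transaction size
SIGS_PER_TX = 2               # assumed average signature checks per tx
VERIFIES_PER_SEC = 3_000      # assumed ECDSA verifications/sec on one core

txs = BLOCK_BYTES // AVG_TX_BYTES        # ~256,000 txs
sig_checks = txs * SIGS_PER_TX           # ~512,000 signature checks
minutes = sig_checks / VERIFIES_PER_SEC / 60

print(f"txs: {txs:,}, signature checks: {sig_checks:,}")
print(f"single-core verify time: ~{minutes:.0f} minutes")
# => a few minutes of raw signature checking on one core; UTXO lookups add more,
#    while parallel validation and cached checks (for txs already seen in the
#    mempool) cut it down. self-constructed blocks full of unseen txs don't get
#    that cache benefit, so they validate slower.
```

so whether validation alone explains a 40-minute delay depends a lot on how parallel the validation path is and how many of those tx's were already in the receiving node's mempool.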
@freetrader, you are seriously losing it. The stress test is nothing like that. There is no SWAT team.
if there's one thing i've noticed over the last several months, it's that @freetrader has stepped up his trolly hyperbole.
I don't want to restrict my view.
Tell me about all the bad things Bitmain does.
That's a challenge not only to Christoph, but to all who support SV. Let's hear it, but back it up with verifiable FACTS. Not hearsay propagated by CSW or Calvin.
you said you don't know much about Wormhole. then read this and give us your thoughts. btw, i've always said i don't like the idea of burning BTC/BCH for altcoins. that hasn't changed:
https://medium.com/@craig_10243/vampire-securities-from-beyond-the-wormhole-8c4e691c809e