Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998

Dusty

Active Member
Mar 14, 2016
362
1,172
About the stress test: it was a very interesting thing even from the point of view of the wallets: many of them were not able to handle this volume.

Someone took the time to review how they behaved during the stress test and wrote a nice article about it: https://www.yours.org/content/stress-test-2018--android-wallet-results-609b40e690a4

I'm quite happy to report that the wallet I'm working on, Melis, was probably the only one that had no problems at all coping with the traffic.

I suppose that building it with on-chain scaling in mind (it uses a scalable architecture, heavily multithreaded) helped :sneaky:
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
i used the bitcoin.com wallet to send some BCH to the stress-test site during the test; everything seemed fine.
About the stress test: it was a very interesting thing even from the point of view of the wallets: many of them were not able to handle this volume.

Someone took the time to review how they behaved during the stress test and wrote a nice article about it: https://www.yours.org/content/stress-test-2018--android-wallet-results-609b40e690a4

I'm quite happy to report that the wallet I'm working on, Melis, was probably the only one that had no problems at all coping with the traffic.

I suppose that building it with on-chain scaling in mind (it uses a scalable architecture, heavily multithreaded) helped :sneaky:
6 months from now there needs to be another stress test, to see whether the wallets have improved.

but... is it reasonable to expect servers to provision for more than 100x current usage?

probably these wallets had trouble because the servers they connect to would need to be beefed up. and why would, say, bitcoin.com invest in better servers to the point where they can handle 100x the normal load? once there actually is 100x the normal load, presumably these wallets and services will be making that much more money, at which point they would be willing to spend more on better servers. until that time, i would expect these services not to do very much to improve their current, reasonably stable and comfortable situation.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@Peter R @awemany

I have been thinking about Lexical Transaction Order Rule (LTOR) and how it relates to subchains.

Seems to me the two are totally compatible. The only difference would be that with LTOR, when you receive a sub-block, you would insert the transactions into the appropriate place in the set, rather than appending them to the end of the list. This should be just as easy computationally I would think, and would be easily compatible with parallelized block construction in the future.

The Merkle tree would have to be re-built each time, but this is the case anyway with appending, since the coinbase transaction also changes with each sub-block. Plus, re-building the Merkle tree takes almost no time, and the only way to really avoid it is with something like a "Merklix" tree (which is almost exactly the same as a Merkle tree, but makes insertions efficient).

What are your thoughts? Was I wrong to have the impression you thought there was some issue with LTOR and subchains, or is there something I am overlooking?
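The insertion step described above can be sketched in a few lines of Python. This is a toy model, not any client's actual implementation: a minimal double-SHA-256 Merkle root (using Bitcoin's rule of duplicating the last node on odd levels), with `bisect` keeping the transaction set in lexicographic order as a sub-block arrives.

```python
import bisect
import hashlib

def dsha256(b: bytes) -> bytes:
    """Bitcoin-style double SHA-256."""
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    """Toy Merkle root (duplicates the last node on odd levels, as Bitcoin does)."""
    level = list(txids) or [dsha256(b"")]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

# Under LTOR the block's txids form a sorted set; a sub-block transaction is
# inserted at its lexicographic position instead of appended at the end.
block_txids = sorted(dsha256(bytes([i])) for i in range(8))
bisect.insort(block_txids, dsha256(b"sub-block tx"))  # finds the slot in O(log n)
root = merkle_root(block_txids)
assert block_txids == sorted(block_txids) and len(root) == 32
```

Finding the insertion point is cheap; the cost under discussion is re-building the tree afterwards.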
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@Mengerian as you said, from what I can see there are no striking incompatibilities between weakblocks/subchains and LTOR per se. Can't say they won't arise once we actually start working on it.

The problem is, as you already noted, that with LTOR updating the Merkle root will be more costly. With TTOR you just need to append at the end, which basically means computing a number of operations (double SHA-256 and concatenation) proportional to the Merkle tree's height. (The figure shows the operations needed to prepend, but this is identical to the append case.)



On the other hand, when inserting in the middle due to the LTOR constraint, you have to rebuild the Merkle tree from the point of insertion to the end.

With regard to completely rebuilding the Merkle tree because the coinbase changes, I don't think this is right. Replacing a transaction in the Merkle tree implies a little less work than appending/prepending one. Basically you just need to update a Merkle path.

A good thing to have would be a benchmark comparing the workload needed for weakblocks in terms of Merkle operations w/ and w/o LTOR.

WRT deadalnix's Merklix tree, you are indeed right that insertions and deletions become cheaper (log(n)), but of course it needs to be implemented, tested, and introduced into the code base before we see the benefits.
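To put rough numbers on the difference, here is a small back-of-the-envelope sketch (toy Python, illustrative only): a full rebuild of an n-leaf tree costs roughly n internal hashes, while refreshing a single Merkle path (an append or a replacement) costs roughly log2(n).

```python
import math

def full_rebuild_hashes(n: int) -> int:
    """Internal-node hashes needed to build an n-leaf Merkle tree
    (using Bitcoin's rule of duplicating the last node on odd levels)."""
    total = 0
    while n > 1:
        n = (n + 1) // 2
        total += n
    return total

def path_update_hashes(n: int) -> int:
    """Hashes needed to refresh a single Merkle path (append or replace)."""
    return max(1, math.ceil(math.log2(n)))

for n in (1_000, 1_000_000):
    print(n, full_rebuild_hashes(n), path_update_hashes(n))
```

For a million-transaction block this is roughly a million hashes for a full rebuild versus about 20 for a path update.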
 

imaginary_username

Active Member
Aug 19, 2015
101
174
@sickpig forgive me if I'm skipping obvious restrictions inherent to a Merkle tree - I'm a complete newbie in this - but wouldn't an insertion in the middle (say, to leaf 4 in your diagram) be possible without rebuilding the whole tree as well using what you have illustrated, as long as you allow the depth on that branch to go one deeper? Or is there a special requirement that mandates such operation only be carried out at the beginning and the end?
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
this pretty much sums up my feelings on this matter:


i'm still waiting for an answer from @jtoomim for these two questions:

1. why don't you spam the BCH network with a series of 32MB blocks to push out small miners to prove your point or why isn't a larger pool doing this now, as we speak? note: this also never happened via a 1MB attack when avg blocksizes were <<<1MB
2. what effect does CTOR have on FSFA? imo, it's very important to retain the properties and reliability of FSFA even tho it's difficult to prove what exactly miner nodes are doing; i suspect they ARE enforcing it.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
i think this demonstrates that Coingeek is NOT going to become a superpool as FUD'd. esp with Rawpool now entering the mix. there will be others coming:

 

Tomothy

Active Member
Mar 14, 2016
130
317
cypherdoc: "i think this demonstrates that Coingeek is NOT going to become a superpool as FUD'd. esp with Rawpool now entering the mix. there will be others coming"

How profitable is it mining btc vs bch currently?
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@sickpig forgive me if I'm skipping obvious restrictions inherent to a Merkle tree - I'm a complete newbie in this - but wouldn't an insertion in the middle (say, to leaf 4 in your diagram) be possible without rebuilding the whole tree as well using what you have illustrated, as long as you allow the depth on that branch to go one deeper? Or is there a special requirement that mandates such operation only be carried out at the beginning and the end?
My understanding is that for a list of N transactions, there is only one way to create the Merkle tree. For example, imagine Tx1 hashes first with Tx2, and Tx3 hashes first with Tx4. If you insert a Tx between 1 and 2, then Tx2 now needs to be hashed first with Tx3 instead of Tx1. So basically all the leaves get shifted which affects the entire row of hashes below them. Appending new leaves doesn’t have this cascading effect.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
no one seems interested. it's been weeks:

 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
congrats to BU:]
@cypherdoc, Your mention is much appreciated.

I think all the BU devs, in particular @theZerg and @Peter Tschipper, deserve real praise for having a continuous focus on scalability for several years. Xthin, Parallel Validation, Graphene, and "useful data" anti-DoS, amongst other improvements, made the BU client more robust under high-volume conditions. It is the first time we have seen a real-world, environmentally-driven comparison between the two major Bitcoin development paths since BU was forked from Bitcoin Core version 0.12 (November 2015).
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
Hi @sickpig
@Mengerian as you said from what I can see there's no striking incompatibilities between weakblock/subchains and LTOR per se.
OK, great, that was my understanding also. Like you said, it's always possible something unforeseen will be noticed when someone actually tries to implement it. But it's hard for me to imagine what that could be in this case.
The problem is what you noted already with LTOR updating the merkle root will be more costly.
With regard to completely rebuild the merkle tree due the coinbase changing I don't think this is right. Replacing a transactions in the merkle root implies a little less work you'd need to append/prepend transaction. Basically you need to update a merkle path.
OK, yeah, that's a good point. Updating the coinbase can be done by simply updating the left-most Merkle path. Then, if you are just appending sub-block transactions to the end of the list, the right part of the Merkle tree can be (mostly) just added to the pre-existing portion. This means the "inner" hashes for the pre-existing transactions don't need to be re-computed.

But is computing the Merkle tree really a significant part of block construction? My impression was that its computational cost is pretty minimal compared to everything else. I did some back-of-the-envelope calculations suggesting that calculating the Merkle tree for a 1GB block with 5 million transactions should take something like half a second.

I will try to find more information on the work of constructing the Merkle tree compared to other workloads, to get a better idea if Merkle construction is a significant factor or not.
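A rough way to sanity-check that estimate is to time a toy Merkle root over a smaller leaf count and extrapolate linearly (the total number of internal hashes is about equal to the number of leaves). Timings are machine-dependent, so no exact figure is claimed here.

```python
import hashlib
import time

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(leaves):
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

n = 1 << 17                      # 131,072 dummy txids
leaves = [hashlib.sha256(i.to_bytes(4, "big")).digest() for i in range(n)]
start = time.perf_counter()
root = merkle_root(leaves)
elapsed = time.perf_counter() - start
# Internal hashes scale linearly with n, so a 5M-tx block costs ~38x this run.
print(f"{n} leaves in {elapsed:.3f}s; est. for 5M txs: {elapsed * 5_000_000 / n:.2f}s")
```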

A good think to have would be a benchmark comparing the workload needed for weakblocks in terms of merkle operations w/ and w/o LTOR.
Yeah, I agree this would be good to find out. It may also be interesting to compare it against the Merklix insertion workload.

I'm also curious, does @awemany's subchains implementation do any of this efficient Merkle-updating? I would be surprised, since it seems like a small incremental gain. But I am interested to know if anyone has looked at how difficult or complicated it would be to actually implement.
 

shadders

Member
Jul 20, 2017
54
344
My understanding is that for a list of N transactions, there is only one way to create the Merkle tree. For example, imagine Tx1 hashes first with Tx2, and Tx3 hashes first with Tx4. If you insert a Tx between 1 and 2, then Tx2 now needs to be hashed first with Tx3 instead of Tx1. So basically all the leaves get shifted which affects the entire row of hashes below them. Appending new leaves doesn’t have this cascading effect.
Exactly. Replacement anywhere and appends are easy: only a Merkle path to update, which is roughly log2(n). I did this for the coinbase in 2011 for a pool server, and again for a UTXO commitment PoC that used 'holes' (zero hashes) in an on-disk Merkle tree to represent deleted outputs and refilled them with new outputs.
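The coinbase-replacement trick can be sketched as follows (a toy model, not the 2011 pool-server code): cache the sibling path for leaf 0, then recompute the root with only log2(n) hashes when the coinbase changes, and check it against a full rebuild.

```python
import hashlib

def dsha256(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root_and_branch(leaves, idx):
    """Root of a (simplified) Merkle tree plus the sibling path for leaf `idx`."""
    branch, level, i = [], list(leaves), idx
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[i ^ 1])          # sibling at this level
        level = [dsha256(level[j] + level[j + 1]) for j in range(0, len(level), 2)]
        i >>= 1
    return level[0], branch

def root_from_path(leaf, branch, idx):
    """Recompute the root from a replaced leaf and its unchanged sibling path."""
    h = leaf
    for sibling in branch:
        h = dsha256(sibling + h) if idx & 1 else dsha256(h + sibling)
        idx >>= 1
    return h

leaves = [dsha256(i.to_bytes(4, "big")) for i in range(16)]
_, branch = merkle_root_and_branch(leaves, 0)

# Replacing the coinbase (leaf 0) needs only len(branch) == log2(16) == 4 hashes.
leaves[0] = dsha256(b"new coinbase")
fast_root = root_from_path(leaves[0], branch, 0)
full_root, _ = merkle_root_and_branch(leaves, 0)
assert fast_root == full_root and len(branch) == 4
```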

Insertions bugger everything up, since all subsequent pairings change. This is not just a problem with sequential calculation, where on average you have to recalculate half of the tree. If you take an obvious parallelization strategy of breaking a tree up into multiple subtrees running on different cores or even on different cluster nodes, then for every insertion you push the last element out of its subtree and into the next. At worst this could mean a network round trip to another cluster node. At best you could overlap the trees or have the last elements in the subtree pre-sent. But you're still recalculating roughly n/2 instead of log2(n). To put it into perspective, for a million-tx block that's 500k vs 20 hashes per additional insertion.

The solution offered to this problem is replacing the Merkle tree (which I would argue is the heart of bitcoin) with the Merklix tree. Some may be OK with that, but it should be made clear: CTOR creates a requirement (at scale) for giving bitcoin a heart transplant.
 

shadders

Member
Jul 20, 2017
54
344
In terms of the computational load of Merkle tree calculations, I can only offer an anecdote from distant memory. It's important to note that the application was a pool server, not a full node. This was pre-Stratum, so each individual miner would poll using getwork and needed a unique response. At the time adjustable difficulty wasn't a thing, so as miners got faster the frequency of requests increased as they exhausted the nonce space faster. On BTC Guild, sustained request rates on the order of 4000/sec per server were not unusual, with massive bursts when a block was found and longpoll kicked in.

So one of the key engineering challenges was to pre generate get work responses faster than the requests came in.

The first version of the work maker stopped proxying getwork off bitcoind nodes and built work items internally using getmemorypool. That more than doubled performance and removed the requirement for many nodes to feed off. But the Merkle tree implementation in bitcoinj was a straight port from bitcoind, which calculated full trees only. After building an implementation that could just recalculate the Merkle path when the coinbase was updated, there were more performance multiples added; from memory, I think it was about 5x.

As I said, it's not quite the same as a bitcoind, and there was a lot of other stuff going on in the pool server as well, but I clearly recall nearly falling off my chair when I saw the performance results. So I would be hesitant to discount the performance impact of Merkle tree calculations without measurement.
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
damn girl ! :eek:

Two great articles demonstrating that Wormhole is toast. It really is Lightning 2.0, and it's not looking good for Bitmain.

https://medium.com/@craig_10243/vampire-securities-from-beyond-the-wormhole-8c4e691c809e

"Wormhole falsely advertises that it is backed by bitcoin. This means it is an asset backed security. Unfortunately, it is a security derived on the misrepresentation to investors by Bitmain. Bitmain are a global company seeking to raise money through a capital issue by an initial public offering. There is no room for ignorance nor would the law consider that an excuse. This statement is blatant security fraud."

"To be permission-less, development needs to occur in a manner that does not require being bound and answering to other parties. Developments within Wormhole are completely permissioned. Any proof of stake system is by nature permissioned.

The way to create a system of money that operates in a permission less fashion is simple, lock the BCH protocol and build upon it. Bitcoin Cash will remove the various caps allowing people to build within script. They will build wallets and applications and oracles and all sorts of new systems and they will do this without having to ask the permission of Core developers, companies such as Bitmain or any implementation developer at all.

That and only that is what a permission-less system is."


https://medium.com/@craig_10243/worm-a-nomics-e8d59107f6d0

"Basically, Wormhole is a Blackhole. What enters is destroyed.
The reason there is no deadline is the aim is to slowly take all value from BCH and move it into WHC."

"As one BCH token returns 100 WHC, if the value of a single WHC ever reaches more than 1% of a BCH, then, it is in the BCH holders economic interest to exchange a BCH for 100 WHC."

"BCH has no value as a mere digital asset, it has value as cash, so, with the supply retarded, the value of BCH decreases more and more until, all we have is WHC which then moves on to leech off BTC and other PoW coins one by one."

"The Bitcoin Checksum is simply a part of the wallet and is not needed as a part of the burn address. This is a UI function. Bitmain could have used a process where you send to an address on the WHC wallet as BCH first, then, you move to the WHC and are minted. This would be no hindrance to the user as they must follow this form of process now to have a WHC issued, but it would stop all accidental burning of BCH. As such, we can also say this loss is a part of the aims of the Wormhole in consuming BCH — this way, Bitmain gains as BCH are destroyed for nothing as well."
 
