Gold collapsing. Bitcoin UP.

torusJKL

Active Member
Nov 30, 2016
497
1,156
The current transaction spike is very interesting to observe.
Someone is creating many transactions with 1-2 inputs and 120+ outputs, so even full 8MB blocks can only include around 1,000 tx.

There might be a need for more criteria than just the fee in sat/B when selecting transactions for the next block, so that "regular" use cases are not disrupted too much.
e.g. coinage, number of outputs, etc.
(I'm assuming there is no economic disadvantage because the fee in sat/B is the same, but I could be wrong if it is cheaper for the miner to include a few large tx rather than many small ones.)
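For illustration, such a composite priority could be computed per transaction; a rough sketch in Python (the Tx fields and the weights are invented, not anything a BU node actually does):

```python
# Hypothetical multi-criteria selection score; weights are invented.
from dataclasses import dataclass

@dataclass
class Tx:
    fee: int          # total fee in satoshis
    size: int         # size in bytes
    coin_age: float   # average age of the inputs, in blocks
    n_inputs: int
    n_outputs: int

def priority(tx: Tx) -> float:
    fee_rate = tx.fee / tx.size                        # the usual sat/B criterion
    age_bonus = min(tx.coin_age / 144, 10.0)           # older coins rank higher, capped
    utxo_penalty = max(tx.n_outputs - tx.n_inputs, 0)  # penalize UTXO-set growth
    return fee_rate + age_bonus - 0.1 * utxo_penalty

# A 2-in/120-out flood tx at 1 sat/B now ranks below a plain payment:
flood = Tx(fee=4000, size=4000, coin_age=6, n_inputs=2, n_outputs=120)
plain = Tx(fee=250, size=250, coin_age=600, n_inputs=1, n_outputs=2)
assert priority(plain) > priority(flood)
```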

It would be interesting to know whether miners already do some advanced selection, given that the transactions all used the same fee (1 sat/B), or whether they just included them in chronological order.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
anything BU can do to facilitate?
I'm not sure, but it might be good to have a "sweep from X" tool, where X could be wallet.dats, private keys, Armory wallets, mnemonics, brainwallets, etc.

If it would create transactions for both/all chains, that might be nice too.
[doublepost=1515981705,1515981036][/doublepost]
So what is the result of their "work with their provider (bitpay?) to accept lower bitcoin amounts"? Accepting Bitcoin Cash!?!?
Quite possibly Bitpay is willing to eat a loss to keep Microsoft as a customer. This only makes sense long term if they intend to add something that would allow them to make a profit (such as supporting alternative cryptos).
[doublepost=1515982168][/doublepost]
also, is there a tool button on this site that allows an upload of a stored phone image? not that i want to store all these images but this is also a relic of the Android editing apps that force you to save your image markups before uploading to Imgur. arghh.
That's an interesting idea. The imgur API is pretty straightforward (Chartbuddy uses it), so it should be possible to integrate an upload button into this site. Perhaps there could be a qualification to prevent abuse (100 posts or something).
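The anonymous upload is basically one authenticated POST; a rough sketch in Python (imgur v3 API from memory, so treat the details as assumptions; the client ID is whatever you register with imgur):

```python
import base64
import requests

def upload_to_imgur(path: str, client_id: str) -> str:
    """Anonymous upload via the imgur v3 API; returns the direct image link."""
    with open(path, "rb") as f:
        image_b64 = base64.b64encode(f.read())
    resp = requests.post(
        "https://api.imgur.com/3/image",
        headers={"Authorization": f"Client-ID {client_id}"},
        data={"image": image_b64},
    )
    resp.raise_for_status()
    return resp.json()["data"]["link"]
```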
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
In a permissionless ledger the concept of spam does not exist. Each of these transactions paid a fee that miners accepted, so they were valid transactions to sender and miner alike.

I am confused though: we have been assured for years that Bitcoin would break at over 1MB and that the 1MB limit was "carefully balanced". But for some reason BCH was able to just eat through that mempool with no degradation of service, and the earth still revolves around the sun.
Likely bitcoin legacy proponents are desperately trying to render BCH just as unusable as their own chain. Luckily we just chomp thru those tx and commit them to cheap storage. The attacks will get worse before things get better.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
Can we get some kind of BTC version of this, but for loss of purchasing power due to fees? The symbolism of this kind of image has been very powerful over the years.



A concrete gold coin image like the coin in this might be a good base:



By the way, the "gold that crumbles when you move it" thing reminds me of that old joke:

"Dad, I need fifty dollars."

"Forty dollars?! I don't have thirty dollars! What do you need twenty dollars for? Here's ten dollars."

<hands the kid a five>

"Split it with your brothers...and bring back the change."
 
Last edited:

rocks

Active Member
Sep 24, 2015
586
2,284
Likely bitcoin legacy proponents are desperately trying to render BCH just as unusable as their own chain. Luckily we just chomp thru those tx and commit them to cheap storage. The attacks will get worse before things get better.
What is a Core supporter to do? They can attack BCH with a flood of transactions, but if the attack fails and the network chugs along as normal, then their own attack will demonstrate that larger blocks were fine this entire time and that they were lying.

To be disruptive they would have to flood the network with high-fee transactions. Filling 8MB blocks at 300 satoshi/byte costs 24 BCH per block, 144 BCH per hour, or 3,456 BCH per day. Flooding attacks are expensive, and at the end you simply lose money while the network goes on.

[doublepost=1516000478][/doublepost]BTW, at 1 satoshi/byte a 1GB block yields 10 BCH, more than enough to keep the network going and secure. The only negative is that people would only use 2nd-layer solutions if they were functionally useful, not because they had to.
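A quick sanity check of those numbers:

```python
SAT_PER_BCH = 100_000_000

def flood_cost_bch(block_bytes: int, sat_per_byte: int) -> float:
    """Total fees needed to fill one block at a given fee rate."""
    return block_bytes * sat_per_byte / SAT_PER_BCH

per_block = flood_cost_bch(8_000_000, 300)    # 24.0 BCH
per_hour = per_block * 6                      # 144.0 BCH (6 blocks/hour)
per_day = per_hour * 24                       # 3456.0 BCH
gb_block = flood_cost_bch(1_000_000_000, 1)   # 10.0 BCH at 1 sat/B
```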
 
Last edited:

molecular

Active Member
Aug 31, 2015
372
1,391
There might be a need for more criteria than just the fee in sat/B when selecting transactions for the next block, so that "regular" use cases are not disrupted too much.
e.g. coinage, number of outputs, etc.
The effect on the UTXO set size could be such a measure ("number of outputs minus number of inputs", for example), and to some extent it is economically relevant: a larger UTXO set requires more resources (RAM, storage), continually until the end of time (or until those outputs are spent).
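The measure itself is trivial to compute per transaction; a one-line sketch:

```python
def utxo_delta(n_inputs: int, n_outputs: int) -> int:
    """Net growth of the UTXO set caused by one transaction."""
    return n_outputs - n_inputs

utxo_delta(2, 120)  # 118: the flood pattern seen this week
utxo_delta(50, 1)   # -49: a consolidation actually shrinks the set
```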

I'm not sure I'd like to see such incentives from a user's point of view, since they would encourage potentially privacy-reducing behaviour: in the extreme, (some) miners could offer to mine consolidation transactions that shrink the UTXO set for free.
[doublepost=1516001735][/doublepost]
Likely bitcoin legacy proponents are desperately trying to render BCH just as unusable as their own chain. Luckily we just chomp thru those tx and commit them to cheap storage. The attacks will get worse before things get better.
It's not just storage. The UTXO set needs to be accessible. Afaik it's currently held in RAM, or at least cached? If someone wanted to attack node resources, they could generate loads of outputs and keep querying nodes offering SPV service in an attempt to exhaust RAM and make those nodes incredibly slow.

Not saying this is likely what's happening, but brushing this off as "we'll just dump it to cheap storage and be done with it" is not adequate.

But also, don't get me wrong: I'm not afraid of a huge UTXO set. There are a bunch of ideas already out there to make UTXO querying more efficient, and it's an engineering problem we can solve as it shows up as a bottleneck.
 
Last edited:

Epilido

Member
Sep 2, 2015
59
185
I'm not sure, but it might be good to have a "sweep from X" tool, where X could be wallet.dats, private keys, Armory wallets, mnemonics, brainwallets, etc.

If it would create transactions for both/all chains, that might be nice too.
I would find this extremely useful: BU could look at many wallet.dat files at once with a single chain refresh, then send....
 
  • Like
Reactions: Norway

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@molecular: I see it like this:

8MB, filled with whatever, is not a short- or long-term problem for any reasonable node today. Of course, recent traffic patterns don't really look organic, but so what. I also think 8MB is plenty for at least the next couple of months.

At $3k/BCH, a satoshi is worth about 30 microdollars, meaning a 300-byte transaction at 1 sat/B costs about 30 µ$ × 300 = 9 millidollars ≈ 0.9 ct. I calculated earlier that the cost of validating/storing a transaction across a thousand nodes is on the order of tens of microdollars, so 1 sat/B is still plenty of money if it flows into the right hands (which is not really happening yet, but I think that is a temporary thing, pending further technology as below).
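Spelled out, with the price as an assumption:

```python
price_usd = 3_000.0                  # assumed BCH price
sat_usd = price_usd / 100_000_000    # 3e-05 USD, i.e. ~30 microdollars per satoshi
tx_bytes = 300                       # a typical small transaction
fee_usd = sat_usd * tx_bytes         # at 1 sat/B: $0.009, i.e. ~0.9 cents
```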

If they are attackers who want to drive fees high: even forcing fees up to a level of just 5 ct/txn (which I find a reasonable figure for transacting on the network, though I am aware that others disagree) comes to about $200,000/day of fees the attacker has to pay. That's quite a bit of money. And if you pay that much money to 'attack' BCH, that also means you see it as a viable threat. I am not sure this won't heavily backfire.

Anyways, I do think just lifting the cap completely without anything else will likely expose some nodes' misconfiguration (too low minrelayfee, too high mempool size) and might lead to some unfortunate crashes etc. For example, I would not accept <1Sat/B on any node I'd run but it looks like the JoHoe mempool nodes accept free transactions as well.

Also, we still do not have the ability to start a node from a committed UTXO snapshot, and thus we're in a situation where we fought a long time against the (wrong) Core ideas but haven't yet had the time to implement what I think will make the blocksize problem a complete non-issue.

Personally, and depending on future UTXO set sizes and usage patterns, I'd also be ok with UTXOs being dropped and just the hashes of UTXO merkle trees being stored, with users having to provide merkle branches down those trees to prove their transactions valid.
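A minimal sketch of the user-side proof check, assuming a plain binary merkle tree over UTXOs and double-SHA256 (the ordering and hashing rules here are assumptions, not a spec):

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double-SHA256, as used elsewhere in Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_branch(utxo: bytes,
                  branch: list[tuple[bytes, bool]],
                  committed_root: bytes) -> bool:
    """branch holds (sibling_hash, sibling_is_left) pairs from leaf to root."""
    node = h(utxo)
    for sibling, sibling_is_left in branch:
        node = h(sibling + node) if sibling_is_left else h(node + sibling)
    return node == committed_root
```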

Others might be ok with the probabilistic approach to mining that Gavin proposed, using Cuckoo filters. I don't like that kind of randomness in that part of the system, and the Cuckoo filters would still have a size proportional to the UTXO set, all other things being equal, but at least this shows me that there's more than one viable path on this problem as well. If it even becomes a problem.

Regarding UTXO coalescing, it is an interesting question whether nodes/miners making acceptance of your transaction dependent on ancillary data that you have to provide is or is not a change to the consensus-critical rule set. I would argue it is not: I could start implementing such a node today, with my own little scheme to drop UTXOs and say "these are old".

And that means that, depending on market conditions and on whether mining nodes feel like storing everyone's UTXOs, a market might emerge both for archival nodes and for shifting the burden of long-term storage of valid UTXOs back to the users. Some really dislike this idea, but I must say I would be ok with it if it means I have to save a couple of merkle branches for my long-term UTXOs every decade or so.

On blocksize, what I would prefer right now, for political/marketing/being-ready reasons, is to implement a dynamic or miner-voted blocksize limit ASAP. I am not afraid that the miners would vote for insane blocksizes, nor am I afraid that they'll try to milk the users the way the BTC users are being milked now by the insanely small blocksize on the other chain. After all, the reason the miners can take those fees is that a large fraction of the latter cohort seems to (have been socially engineered to) want that.

A dynamic limit or miner voting would be a solution that keeps working, though perhaps not optimally, under all conditions in the long term, even in the IMO unlikely scenario that outside forces attempt and succeed at forcefully ossifying BCH like they did BTC.
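A BIP100-style version could be as simple as taking the median of size votes from a window of recent blocks (the window, floor, and vote encoding here are invented for illustration):

```python
def next_size_limit(coinbase_votes: list[int], floor: int = 8_000_000) -> int:
    """Median of the miners' votes over the window, never below the floor."""
    votes = sorted(max(v, floor) for v in coinbase_votes)
    return votes[len(votes) // 2]

next_size_limit([8_000_000, 16_000_000, 32_000_000])  # -> 16_000_000
```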
 
What is a Core supporter to do? They can attack BCH with a flood of transactions, but if the attack fails and the network chugs along as normal, then their own attack will demonstrate that larger blocks were fine this entire time and that they were lying.

To be disruptive they would have to flood the network with high-fee transactions. Filling 8MB blocks at 300 satoshi/byte costs 24 BCH per block, 144 BCH per hour, or 3,456 BCH per day. Flooding attacks are expensive, and at the end you simply lose money while the network goes on.

[doublepost=1516000478][/doublepost]BTW, at 1 satoshi/byte a 1GB block yields 10 BCH, more than enough to keep the network going and secure. The only negative is that people would only use 2nd-layer solutions if they were functionally useful, not because they had to.
Ironically, if Core starts spamming BCH with high-fee transactions, they will attract hashrate to Bitcoin Cash, thus slowing Core's own chain ...

It is extremely interesting to see this spam attack. Imho nodes should temporarily increase minrelayfees, and if these attacks continue, there eventually needs to be some option to automatically increase the minfee when the network is flooded.
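Such an option might simply scale the relay floor with the mempool backlog; a toy sketch with invented constants:

```python
def min_relay_fee(mempool_bytes: int,
                  base_fee_sat_per_byte: float = 1.0,
                  soft_limit_bytes: int = 300_000_000) -> float:
    """Relay floor in sat/B that doubles for each soft-limit's worth of
    backlog beyond the first."""
    pressure = mempool_bytes / soft_limit_bytes
    return base_fee_sat_per_byte * 2.0 ** max(pressure - 1.0, 0.0)

min_relay_fee(150_000_000)  # 1.0 sat/B: below the soft limit, no change
min_relay_fee(900_000_000)  # 4.0 sat/B: three times the soft limit
```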

Also I'm curious what the miners will do. I hope they will ignore the spam transactions after having sufficiently tested the network's capacity to handle this load. I'm not very eager to store those transactions and require new nodes to validate them.

The idea to "tax" transactions with many outputs seems forward-thinking. Building many outputs does not only grow the UTXO set; it is a kind of blockspace debt on the future. I remember Genesis Mining had this annoying habit of building 1-to-many transactions every day, resulting in hundreds of micro-inputs in my wallet, all of which have become absolutely unspendable. It made sense for Genesis to do so, as it was cheap and reduced the risk of being hacked. So I think such transactions can and should be taxed.

The nice thing is that any action and restriction on transactions is up to the miners. I'm optimistic they will do what's best for BCH.
 
  • Like
Reactions: majamalu and rocks

torusJKL

Active Member
Nov 30, 2016
497
1,156
It is extremely interesting to see this spam attack. Imho nodes should temporarily increase minrelayfees, and if these attacks continue, there eventually needs to be some option to automatically increase the minfee when the network is flooded.
I don't think this is a good approach, because it is a rule that users will not realize has been introduced. As a result, users would continue to send their tx at 1 sat/B, but they would never confirm (because they are not relayed).

It would also make it easier to push fees higher with few transactions, because the attacker would not need to keep sending low-fee tx; they could just pay the lowest fee that is still relayed until that one stops being relayed, then move up to the next, and repeat.

A rule that de-prioritizes tx with many outputs vs. inputs has a much better effect on usability.
e.g. while there were ~8,000 tx in the mempool I sent a 1 sat/B tx, and it was confirmed in the next block even though the block included only ~1,000 tx and there were ~7,000 tx older than mine already in the mempool. (Mine was prioritized over other tx that paid the same fee/B.)
 
  • Like
Reactions: throwaway

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Not saying this is likely what's happening, but brushing this off as "we'll just dump it to cheap storage and be done with it" is not adequate.
With parallel validation, it might be. If your transaction is old & crusty, we'll throw it on the hard-drive. If it doesn't make it out in time for the next block, perhaps a higher fee is needed to encourage us to move it to RAM for the next block.

One of the biggest changes in Bitcoin we'll see (but not for a while yet) is when we have to resolve the issue that a single fee is not sufficient incentive for storage of data forever.
[doublepost=1516047307,1516046480][/doublepost]
Also, we still do not have the ability to start a node from a committed UTXO snapshot
Is there any movement on this on the Cash side of things? I saw a roadmap but didn't see it on there. Perhaps BU could implement it itself somehow, though it seems it would have to be some kind of consensus rule to be properly trustless.

Another thing I'd like to see work on, which would help a lot, is validation. I'm currently syncing my old Windows wallet to my Linux node and I'm 14 weeks behind. Network usage: tiny. CPU usage: barely above idle. Disk usage: off the charts. I'm currently running at 14 blocks per 10-minute period, which is ridiculous.

Bitcoin's blockchain size points to spinning media but this points to SSD, better utilization of memory or just plain optimization. I know the Iguana dude (whatever happened to him?) claimed he could do the whole lot in an astoundingly small amount of time.

(Edit: Before anyone mentions it, yes, dbcache is set to a couple of gigs).
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
I would find this extremely useful: BU could look at many wallet.dat files at once with a single chain refresh, then send....
A while back, I proposed a more modular Bitcoin client (still thinking about it). Under this model, the wallet would be a completely separate piece of software. If the model were extended that far, the generation of private keys could also be modular, and it would be quite easy to add all this stuff that way. HD wallets might be a bit tricky, though: they need to know whether an address has been funded.
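Roughly, the wallet module would have to keep deriving addresses until it sees a long enough run of unused ones (the BIP44 "gap limit"); a sketch, where derive_address and is_funded stand in for whatever the wallet and chain-query modules would actually expose:

```python
from typing import Callable

def scan_hd_chain(derive_address: Callable[[int], str],
                  is_funded: Callable[[str], bool],
                  gap_limit: int = 20) -> list[str]:
    """Collect used addresses until gap_limit unused ones appear in a row."""
    used, gap, index = [], 0, 0
    while gap < gap_limit:
        addr = derive_address(index)
        if is_funded(addr):
            used.append(addr)
            gap = 0
        else:
            gap += 1
        index += 1
    return used
```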
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
You have a lot more storage options on Linux than you do on other platforms.

I frequently use spinning disks to make a RAID-5 or RAID-6 array, then add a PCIe SSD and use bcache to create a hybrid drive.

It's close to a "best of both worlds" solution.
 
With parallel validation, it might be. If your transaction is old & crusty, we'll throw it on the hard-drive. If it doesn't make it out in time for the next block, perhaps a higher fee is needed to encourage us to move it to RAM for the next block.

One of the biggest changes in Bitcoin we'll see (but not for a while yet) is when we have to resolve the issue that a single fee is not sufficient incentive for storage of data forever.
[doublepost=1516047307,1516046480][/doublepost]

Is there any movement on this on the Cash side of things? I saw a roadmap but didn't see it on there. Perhaps BU could implement it itself somehow, though it seems it would have to be some kind of consensus rule to be properly trustless.

Another thing I'd like to see work on, which would help a lot, is validation. I'm currently syncing my old Windows wallet to my Linux node and I'm 14 weeks behind. Network usage: tiny. CPU usage: barely above idle. Disk usage: off the charts. I'm currently running at 14 blocks per 10-minute period, which is ridiculous.

Bitcoin's blockchain size points to spinning media but this points to SSD, better utilization of memory or just plain optimization. I know the Iguana dude (whatever happened to him?) claimed he could do the whole lot in an astoundingly small amount of time.

(Edit: Before anyone mentions it, yes, dbcache is set to a couple of gigs).
Wasn't there recently the MasterBlocks paper, about storing a UTXO snapshot in a Masterblock ...?

About syncing: I had a similar problem with BU in recent days, very low resource usage, very slow syncing. It resolved for me when I used bitcoind instead of bitcoin-qt and disabled UPnP ...
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
A Bitcoin Unlimited membership voting period is open for two BUIPs

This includes the position of BU President for a two-year term. All current members were advised of the event by private forum email; however, only one candidate BUIP was submitted (my own).
A second BUIP is a bounty for coin splitting instructions on the BU website, intended to help new users release their BCH coins.

BU members are invited to use the voting system to record their decisions.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
On the Windows version, at least, it also doesn't seem to be very good at releasing memory. What was 600M earlier has climbed to over 3G.
[doublepost=1516076013][/doublepost]
You have a lot more storage options on Linux than you do on other platforms.

I frequently use spinning disks to make a RAID-5 or RAID-6 array, then add a PCIe SSD and use bcache to create a hybrid drive.

It's close to a "best of both worlds" solution.
It's a good work-around :)

It's nice when software behaves out-of-the-box though.
[doublepost=1516076184][/doublepost]
Wasn't there recently the MasterBlocks paper, about storing a UTXO snapshot in a Masterblock ...?

About syncing: I had a similar problem with BU in recent days, very low resource usage, very slow syncing. It resolved for me when I used bitcoind instead of bitcoin-qt and disabled UPnP ...
I don't know about masterblocks. For me, it would be something that ideally every miner would have to validate (or nominally be expected to). It can't really run alongside or off-chain; it has to be as trustworthy as the main chain, and that means (I think) consensus is required.

I could try bitcoind [Edit: fuck AVG, or more likely fuck Core false reporters. A pox on both their houses]. I don't have UPnP on my router, so that shouldn't make a difference, I would think. Not sure why the UI would make it disk-IO bound, but it's worth a try.
[doublepost=1516076814,1516075752][/doublepost]
A second BUIP is a bounty for coin splitting instructions on the BU website, intended to help new users release their BCH coins.
I'm guessing "email me your private keys" isn't going to fly?
 
  • Like
Reactions: AdrianX and Norway

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
@Richy_T If I didn't have too many other things to do, I'd build a node that worked as a cluster of message-passing oracles rather than a single daemon. Each (category of) oracle would have its own database rather than trying to cram everything into one. Using ZeroMQ for the messaging would make it easy to run all the oracles on a single machine when practical, or load-balance across a cluster of machines when necessary.

At some point the architectural limitations of the Satoshi prototype are going to become the bottleneck and that architecture will need to be replaced.
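A toy version of one such oracle in Python with pyzmq, answering UTXO lookups over REQ/REP (the endpoint and message format are made up for illustration):

```python
import zmq

def run_utxo_oracle(endpoint: str = "tcp://*:5555") -> None:
    ctx = zmq.Context()
    sock = ctx.socket(zmq.REP)    # one oracle, one socket; a broker in front
    sock.bind(endpoint)           # could load-balance several of these
    utxos: dict[str, int] = {}    # outpoint -> amount; this oracle's own DB
    while True:
        op, *args = sock.recv_json()
        if op == "add":                       # ["add", "txid:n", amount]
            outpoint, amount = args
            utxos[outpoint] = amount
            sock.send_json(True)
        elif op == "spend":                   # ["spend", "txid:n"]
            sock.send_json(utxos.pop(args[0], None))
        else:
            sock.send_json(None)
```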
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Justus, that's kind of the architecture I'm thinking of. Not sure about the messaging, but some kind of IPC. Basically, one module that accepts and validates transactions and blocks, another that handles the network side of things, and one that interfaces with whatever database the user chooses.

I'm currently mulling over the architecture that would be required for a multi-user, multi-wallet design. The current bitcoind breaks the Unix philosophy and is like being back on Windows 3.1. Wallet balances should be easy, but historical transactions are a slightly more complex issue.