Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@Peter R

>I don't think you're going to get miners to agree to accept and build on top of arbitrarily-sized blocks.

i think that's too pessimistic as they now know, for sure, that they can control blocksizes with soft limits. Bitmain and others had 4MB soft limits at the time of the 8MB attack, and those proved highly effective. i'm not afraid to give them control in this regard. this is how we've seemingly always agreed it should be.
 
Last edited:

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@cypherdoc: There seems to be a psychological problem with 'no limit' as this will make people uneasy - and then they'll rather stick to the status quo. I suspect in that sense, remembering the old BTC days, this might be counterproductive?

Just wondering though.

I am fine with no limit and I would be fine with any other open-ended (or nearly so, e.g. BIP101) proposal. Or, my very slight personal favorite, BIP100 @ simple "50%-median" (no 20%/80% logic).
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@cypherdoc

Yes, I agree. But one thing I've learned since we started this journey is that miners are more risk averse than I originally thought. They want to agree to orphan blocks over some particular size (the exact size doesn't really matter so long as it's well above average demand).

So I don’t think the miners will leave the max block size up to chance. If I’m wrong, so be it. I would not be too bothered if the miners try to accept all blocks regardless of their size. I just don’t think they will.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
On the lexicographic order:

@Peter R.: Very well put. I had similar "what is the motivation" questions in mind today, but you put it much more clearly than I could have.

I'd like to add that "a single reason" is sometimes a bit subtle:

I am a supporter of OP_CHECKDATASIG now. There are many different reasons why people support this. They can all be boiled down to "because it will allow checking signatures of arbitrary data in Bitcoin script", though. And it seems like supporters each individually have their one strong reason.

Can we similarly boil down the motivation for lexicographic ordering?


@jtoomim :

That said, aesthetics are not to be underestimated. Beautiful and elegant systems tend to be the most maintainable. If a system has a huge number of arbitrary details that simply need to be memorized in order to be used, then it's very difficult for new developers to join the community and start to become productive.
The trouble with aesthetics is that it has this metaphysical angle to it (like e.g. "quality") where folks just tend to have deep disagreements.

Now, if I understand you correctly, you see the aesthetics in the code that could be written or simplified?

I see the aesthetics more in the data structures. And there, a continuous partial order along the chain has greater appeal to me. Compare also the illustration in my criticism write-up.

If you want to ask me about the importance of aesthetics now, however, I have to say that data structure aesthetics is more important to me than code aesthetics.

Someone quipped, and I unfortunately don't have the right quote at hand, but paraphrasing from memory: "If you have defined the data structures, the programs on them become obvious."

And I think that is true for Bitcoin to a very large extent. If you want to describe Bitcoin, describing it via a C++ code base with its validation algorithms and everything will yield a much larger description than describing its data structures. The latter has a much higher compression factor.

With Graphene, encoding the order of the remaining 5% of transactions takes as much data as encoding the unordered set of all 100%. And that's only when the miner is cooperating. What happens when you get a selfish miner in adversarial conditions? They might want to have a block that propagates well to some people but not to others. In this circumstance, they can just randomize the order to make it as difficult as possible to transmit, and preload it on special servers near their friends.
Can you explain this? I don't see it right now. I can avoid encoding order for 95% of the items (resp. compress it down to a couple of bytes), and the last 5% I can encode with fewer bits.

Also, on the attack angle: You have argued that we have an algorithm that can be used equally with any transaction order now. Doesn't exactly that make the attack vector you described go away? (Now I still assert that partial ordering will likely still be best for validation even with this algorithm, but this needs more data to demonstrate either way.)

The attack is limited to causing log2(n) bits of extra graphene order data per transaction for a very unordered block.

I'm also not sure I fully understand the whole scenario of making hard-to-propagate blocks yet. That seems counterproductive to any miner, selfish or not.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
A miner that "attacks" by creating a bad graphene block is stupid. They can just not create a graphene block and send the raw block if they want the relay to take longer.

And as soon as you realize that, you also realize that an "evil" miner who creates a hard-to-propagate block is only hurting himself. Nobody else. His block has a much higher chance of being orphaned.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@jtoomim

Let’s imagine 1 GB blocks full of 500 byte transactions. That’s 2 million transactions per block, or 3,333 transactions per second sustained. In other words, Visa level throughput.

Let’s assume 98% of the block can be ordered canonically. That means we need to provide ordering instructions for 40,000 transactions.

Log2(40,000) ≈ 15 bits.

15 bits x 40,000 = 75 kB.

75 kB seems tiny in the world of gigabyte blocks.
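Peter R's arithmetic above can be checked in a few lines (a quick sketch; the 98%-canonically-orderable figure is his assumption):

```python
import math

block_bytes = 1_000_000_000             # 1 GB block
tx_bytes = 500                          # average transaction size
txs = block_bytes // tx_bytes           # 2,000,000 transactions
tps = txs / 600                         # ~3,333 tx/s at one block per 10 min

unordered = txs * 2 // 100              # the 2% needing explicit ordering
bits = round(math.log2(unordered))      # ~15 bits to index one of 40,000
order_kb = bits * unordered / 8 / 1000  # 75.0 kB of ordering instructions
```

Strictly, log2(40,000) is about 15.3, so a fixed-width encoding would need 16-bit indices and land at 80 kB; either way it's the same ballpark.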
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
A miner that "attacks" by creating a bad graphene block is stupid. They can just not create a graphene block and send the raw block if they want the relay to take longer.

And as soon as you realize that, you also realize that an "evil" miner who creates a hard-to-propagate block is only hurting himself. Nobody else. His block has a much higher chance of being orphaned.
Indeed, naively increasing the propagation time of your own block only results in a profit advantage if you control more than 50% of the hash power. To gain an advantage at lower hash-power fractions, your block withholding must be strategic, as explained by Eyal & Sirer.
 
  • Like
Reactions: Norway

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
What happens when you get a selfish miner in adversarial conditions? They might want to have a block that propagates well to some people but not to others. In this circumstance, they can just randomize the order to make it as difficult as possible to transmit, and preload it on special servers near their friends.
As far as I can tell, this "attack" can be done by simply waiting a little before you send the block, and is orthogonal to transaction ordering.


as @solex inferred, what i see happening is that the BCH protocol is already in the active process of ossifying. and this is in spite of the scheduled every-6-months hard forks, which i think are necessarily going to be abandoned. i know that comes as bad news to the more eager-beaver BCH devs around here wanting to insert their favored code changes, but that is what i see happening. everyone is disagreeing with everyone as to how to move forward, and i don't think this is a bad thing. on the contrary, it's a good thing. especially if you understand Bitcoin as Sound Money. if you don't, then you need to understand that money users/merchants need confidence, via stability, reliability, and predictability, that the rules won't change out from under them suddenly unless absolutely necessary to protect its money function. imo, now is the time to move on removing the block limit entirely. and not via some complicated mechanism as coded in BU with EB, AD, BV, FG, or MG. just remove it.
As much as I would like to see a functional GROUP token, the max blocksize limit is certainly the most important issue. Ossification could be closer than we think.

Removing the max blocksize is what we should push for when we communicate with the miners. They have the tools to handle it, and they don't need babysitters.

In the unlikely event that one miner with a fancy new node rig creates a block so big that the others can't handle it, there may be a small hiccup, but the block will either be orphaned or the others will pick up the slack.

I asked Joannes Vermorel this question in Hong Kong, and asked @Peter R the same question at a later point. (They gave different answers.)


If you had to choose, would you:

A) Have Bitcoin Cash choked at 128 MB for two years.

B) Have Bitcoin Cash crash and go down for 24 hours.

I would choose B without a doubt.

It's time to let the baby birds out of the nest.
 

jtoomim

Active Member
Jan 2, 2016
130
253
> If you want to ask me about the importance of aesthetics now, however, I have to say that data structure aesthetics is more important to me than code aesthetics.

I definitely agree on this point. I may have said something along the lines of simplifying code at a few points, but I also (and more vehemently) have said things like "remove cruft from Bitcoin's design and forge a leaner, purer, and more elegant Bitcoin Cash." By this I'm referring primarily to the protocol and the system elegance. The effects on implementation complexity are derivatives of the simplification of the data structures themselves.

> And there, a continuous partial order along the chain has greater appeal to me.

We disagree on this. As the transactions naturally form a DAG and not a sequence, I think that enforcing a linear sequential data structure is arbitrary and inelegant. I think it is more elegant to describe the chain as a sequence of sets, with no arbitrary data included beyond what is inherent in the DAG and the block-level timestamp.

>> With Graphene, encoding the order of the remaining 5% of transactions takes as much data as encoding the unordered set of all 100%...

> Can you explain this? I don't see how now

Encoding order for small-ish blocks isn't too hard. It scales O(n log n) with the number of out-of-order transactions (the offset values become larger and take more varint bytes), but that's not too bad in non-adversarial cases. The reason that the order information is a big deal is because IBLTs, the core part of Graphene, are ridiculously efficient. The size of a Graphene encoding is not linear with block size or transaction count. It's only dependent on the difference in transactions found in mempool (or a feerate-truncated version thereof) versus in the block. Typical blocks with 40,000 transactions can be encoded in around 2 kB with Graphene, or about 0.4 bits per transaction. On the other hand, encoding order information for a 40k tx block with 2k of out-of-order transactions will take around 3 kB for a near-optimal-efficiency encoding (which may take quadratic time to compute and will use int lengths that aren't a multiple of 8 bits) and around 8 kB for a more typical (linear time) encoding. So yeah, the order information for 5% of a block isn't huge in absolute terms, but it will more than double to quintuple the total message size in typical scenarios and do a lot worse in adversarial ones.

Source for Graphene stats: This article, as well as personal communications and public lectures at Scaling Bitcoin Dec 2015 with Rusty Russell back when he was working on IBLTs. It's possible that I'm a bit off on the 2 kB typical number. I'm not an expert on Graphene. I'm more certain about the order numbers, as I've worked on code to describe residual order information for partially sorted blocks.
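For intuition, here is how the size figures in the paragraph above can be read (my arithmetic, not jtoomim's; the 2 kB Graphene and 8 kB typical-encoding figures are his):

```python
import math

block_txs = 40_000
moved = 2_000                   # out-of-order transactions (5% of the block)

# Typical linear-time encoding: one 32-bit position per moved transaction.
typical_kb = moved * 32 / 8 / 1000                          # 8.0 kB

# Near-optimal bound: roughly log2(block_txs) bits per moved transaction,
# which lands in the same ballpark as the quoted ~3 kB.
near_optimal_kb = moved * math.log2(block_txs) / 8 / 1000   # ~3.8 kB
```

Against a ~2 kB Graphene encoding of the unordered set, either order encoding dominates the total message size, which is the point being made.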
> As far as I can tell, this "attack" can be done by simply waiting a little before you send the block, and is orthogonal to transaction ordering.

No, delaying the transmission of a block is a bit different, as it's harder to selectively slow propagation to certain individuals that way. You can delay the block's arrival at your enemies' servers by at most one hop's delay that way. If your block has a well-known or easy-to-determine compact encoding, then that one hop duration will be short.

By changing around the order, you can create a scenario where you (plus co-conspirators) know a simple encoding (based on a secret weak block), but for which there is no publicly known simple encoding. This means that you can transmit your block quickly to wherever you want it to go, but the block will travel slowly everywhere else. If you're the 2nd largest miner and you want to use the hashrate of the 1st miner without their active participation and collusion, you can set up a VPS in the same datacenter as theirs, and transmit your block quickly to that VPS with the simple encoding, but use the large encoding and high LAN bandwidth to move it the last hop.

As a selfish miner, being able to pick and choose who gets your block is the unholy grail. It allows you to punch well above your weight in terms of selfish mining efficiency. If your hashrate is 20% and you can get your block to a 30% miner with a 1 minute latency advantage over the rest of the network, that gives you and your unwilling collaborator an expected 5% orphan rate advantage over the rest of the network.
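The 5% figure above follows from a simple expected-value estimate (a sketch; it assumes the rest of the network mines on the stale tip for the full minute and an average 600 s block interval):

```python
attacker = 0.20       # selfish miner's hashrate
collaborator = 0.30   # unwilling collaborator who receives the block early
rest = 1 - attacker - collaborator   # 0.50: everyone else
latency_s = 60        # the latency advantage over the rest of the network
interval_s = 600      # average block interval

# While the rest of the network is a minute behind, any block it finds is
# on a stale tip. Expected stale (orphan-candidate) blocks per attacker block:
orphan_advantage = rest * latency_s / interval_s   # 0.05, i.e. 5%
```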

By eliminating the order information, we can eliminate an entire class of attack vectors all at once. Nearly all blocks would propagate well, and the remaining attack vectors are not as easy to profit from. The only remaining tools I can think of for an adversarial miner are blocks with secret transactions (which necessarily cost a lot in the opportunity cost of lost fees, and still take a long time to validate even with unwilling collaborators), and transactions that are designed to be slow to validate (which give no benefits with unwilling collaborators).
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@jtoomim: It feels to me like you're trying to come up with reasons to justify a decision you've already committed to.

I really don't think we should make a hard-forking change to the protocol based on half-baked ideas that it might make an attack we've never seen in the wild (at least for Bitcoin) more difficult, in the service of a block propagation technique that has not yet been shown to be superior in practice to Xthin.
 

jtoomim

Active Member
Jan 2, 2016
130
253
Peter, we agree on a lot of what you said. My opinion is that the November timeline is too aggressive and optimistic given the discomfort and opposition we have seen so far, and the timeline should be postponed indefinitely or changed to an "implementations available and benchmarked by" date. I am on the record as having said this in the past. I also want to see tests of validation performance and Graphene performance before we make a decision on this.

As for the block propagation stuff, this is absolutely not a post-hoc justification. Rather, it is something that might finally solve a problem I've been obsessed with for years. I have been saying for years (since 2015) that block propagation is the bottleneck on safe block sizes. In 2015-6, after Hearn's publication of thinblocks in XT, thebluematt's work on Relay Network, and Rusty/Kalle/Gavin's work on IBLTs, I noticed that all three systems would fail in adversarial conditions and fall back to plain getblock requests, so I decided that Bitcoin needed a propagation scheme that would work for adversarial conditions as well as normal conditions. I began work on the Blocktorrent proposal and implementation. Xthin, Falcon, and FIBRE beat me to the market with implementations, and performed well in typical conditions (and, for Falcon, also in adversarial conditions), so I stopped working on blocktorrent. Since then, I've worked on p2pool, and have struggled with inadvertent selfish mining being a standard feature, not an occasional mishap, due to poor share/block propagation performance. I even once took advantage of flaws in p2pool's design to intentionally use selfish mining techniques to punish a specific user who was generating invalid-but-difficult-to-detect blocks on Litecoin p2pool, so I'm well aware of what I could do to destroy p2pool if that were my aim.

The point I'm trying to make is that CBO has a lot of potential merits, and it's worth getting excited about. I discounted IBLTs in 2015 because they were so easy to attack with transaction order information, but CBO solves that entirely. (IBLTs can still be attacked with transactions that didn't come from mempool, but those attacks are expensive to attempt and hard to make profitable.) It's too early to tell whether this will help as much as we think it will, but it's definitely worth writing some test implementations to see how it goes. If it's worth implementing, then it's worth specifying how the implementations should work. We can decide later whether we want to do the fork, but we should be deciding now whether we want to invest in writing the code.
 
Last edited:

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
No, delaying the transmission of a block is a bit different, as it's harder to selectively slow propagation to certain individuals that way. You can delay the block's arrival at your enemies' servers by at most one hop's delay that way. If your block has a well-known or easy-to-determine compact encoding, then that one hop duration will be short.
If I understand this correctly, is this based on the assumption that an attacking miner can't fiddle with the open source code and insert a timer event for his enemies?

EDIT: Most miners are directly connected, right?
EDIT2: I'm also assuming that collaborating miners all delay their blocks to their enemies.
 
Last edited:

jtoomim

Active Member
Jan 2, 2016
130
253
@Norway, I'm not quite sure I understand what you mean by timer event. I assume you mean just making the code delay transmission to a.b.c.d by 15 seconds compared to everyone else. Is that correct?

If I choose to publish a block to Pool A but not to Pool B, then I can delay block propagation by the latency of 1 hop over the standard p2p network, since A will immediately forward the block to B upon receipt. We need to make sure that 1 hop of latency is within tolerable limits, so that we can keep the spread of orphan rates among different miners within a certain bound -- say, 1%. That means that a single hop should take about 6 seconds or less in typical conditions.

In adversarial conditions (i.e. nonstandard blocks), the chance of punishing a misbehaving miner can counterbalance the potential benefit of a selfish mining attack, but I think we should still endeavor to keep 1-hop latencies below 30 seconds or so in adversarial cases.
 

jtoomim

Active Member
Jan 2, 2016
130
253
I'm going to have to duck out of this conversation pretty soon, at least for a while, I'm afraid. I've got a 3,500-word proposal that I've submitted to my power company and will need to defend tomorrow at their Commission meeting. It's sort of one of those things on which $56 million per year might depend. I'll look into the math in a few days, I hope.

Here's another non-mathematical description:


There's probably a mathematical description somewhere already from the Core folks. They spend a lot of time worrying about this sort of thing. But I mostly know about it from my own analysis, which I have not yet formally written down. To-do.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@Norway, I'm not quite sure I understand what you mean by timer event. I assume you mean just making the code delay transmission to a.b.c.d by 15 seconds compared to everyone else. Is that correct?
That is correct.


If I choose to publish a block to Pool A but not to Pool B, then I can delay block propagation by the latency of 1 hop over the standard p2p network, since A will immediately forward the block to B upon receipt.
If you are colluding with Pool A, Pool A will also delay the block to Pool B.

Colluding miners will put the rest on hold. Nothing we can do about that.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@jtoomim, I tried to make a simple model for this. I think the attack is real, although it seems pretty weak to me. I did this math pretty fast, so it could be full of errors :)

Assume there are three groups:
  1. The attacker, who controls hash power fraction A
  2. His victims, who control hash power fraction B
  3. The other miners, who control hash power fraction C = 1 - A - B
Assume that the attacker uses a deviant strategy that results in blocks propagating normally to the other miners, but propagating to his victims with an extra delay of time t. Both his victims and the other miners use the default strategy of publishing blocks as fast as possible, without discrimination. To make the math easier, let's assume that if all miners used the default strategy, blocks would propagate quickly and the orphan rate for all miners would be approximately zero (I think the following analysis holds more generally, however).

Proof 1 below shows that the attacker will increase his share of the block rewards (i.e., he will benefit from the attack) if the ratio of his victims' orphan rate (rb) to his own orphan rate (ra) satisfies

rb / ra > (1 - A) / B.
(Eq. 2)​

Now let's try to calculate the orphan rates ra and rb for this attack, to determine how much of the network the attacker needs to isolate, as a function of the attacker's share of hash power.

In our simplified model, a block is orphaned if and only if the victim finds a block in the interval (0, t) immediately after the attacker finds a block. When such an event occurs, there are two competing chains of the same length: the chain with the attacker's block at the tip, being mined by the attacker and the other miners, and the chain with the victim's block at the tip, being mined only by the victims. One of these two chain tips will get orphaned -- it depends on who finds the next block.

If the attackers or the other miners find the next block, then the victims will re-org to the new longer chain and suffer an orphan. This event happens with conditional probability A + C = 1 - B. If the victims find the next block, then the attacker and the other miners will re-org, and the attacker will suffer an orphan. This event happens with conditional probability B.

These two conditional probabilities are enough to say that the orphan rate ratio between the victims and the attackers will be

rb / ra = (1 - B) / B

Substituting this into Eq. (2) gives

(1 - B) / B > (1 - A) / B

which simplifies to

A > B.​

In words, for the attack to be profitable, the attacker must partition off a fraction of the network no larger than the fraction of hash power he himself controls. For example, if he controls 10% of the hash power, he needs his blocks to propagate quickly to at least 80% of the network in order to benefit, and slowly to no more than 10%. If he accidentally partitions off too much of the network, and his block propagates slowly to 20% instead, then he will lose rather than gain. Or at least that's what this simple analysis suggests.



PROOF 1

The orphan rate can be defined as

r = number of blocks orphaned / number of blocks found,

from which it follows that the rate at which blocks are committed to the blockchain is

1 - r = number of blocks committed to the blockchain / number of blocks found.

The expected number of blocks committed to the blockchain by each group over time T is

Na = A T (1 - ra),

Nb = B T (1 - rb),

Nc = C T.

The total blocks added to the blockchain over this period is

N = Na + Nb + Nc = A T (1 - ra) + B T (1 - rb) + C T

The fraction of blocks won by the Attacker is clearly

Na / N = A (1 - ra) / [A (1 - ra) + B (1 - rb) + C].
(Eq. 1)

This is an important result. Since the network difficulty target adjusts so that the total number of blocks added to the blockchain per unit time remains approximately constant, if the attacker can increase his share of blocks then he will see a corresponding increase in revenue per unit time. That is, if application of his strategy results in

Na / N > A / (A + B + C),

he will come out ahead.

It is straightforward from Eq. (1) to show that the above inequality is equivalent to

rb / ra > (1 - A) / B.
(Eq. 2)
 
Last edited:

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@jtoomim: I read your described scenario as 'encode the order of the unordered set', which is why I was completely confused by your assertion. Thanks for clearing that up. Further thanks for erring on the side of caution regarding a push for ordering, when I feel there's a significant fraction of folks in BCH who don't see the light yet. Maybe we're all too stupid to see the advantages and there will be brilliant code that demonstrates its superiority - and I mean that without any sarcasm intended. But given the way the arguments for it have become, well, quite gray IMO, I just concur we should hold off for a while. In this sense, on the attack vectors:

The result from Peter R. is interesting and I am wondering whether it isn't - no offense meant - another red herring to look at the details of block propagation speed when the attack amounts to basically controlling all the network links of your opponents. And it looks to me that in this scenario, simple delays will be as good as block order shenanigans as the weapon to screw with my share of the network.

My approach would rather be to parallelize intake of new blocks - so parallel block validation like it exists in BU, and thus have the one who wants to do shenanigans by injecting a bullshit-ordered block do potentially quite some work but simply lose out to the other peers I have, that give me the block in sane and good order - and faster. As a remark on elegance, maybe this shows that the concept of parallel validation similarly has some aesthetics to it.

And further, as a general point on transmitting the order information, I'd like to point out that we do it right now by simply transmitting the order as integer numbers saying which txn goes into which slot.

But who's to say that we can't use an adaptation of IBLTs or another probabilistic data structure to transmit this order information more efficiently as well?

If I look at two mempools, the difference of the order information seems quite small.

Now I can of course trivially identify a map with a set by making it a set of pairs - in this case (transaction ID, &lt;position&gt;). If &lt;position&gt; were the absolute position in the mempool index or the absolute position in the block, I would agree that this scheme is NOT going to work, because the common set would be (almost) empty due to the positions never matching.

But here we are entering the unknown unknowns territory. Are we sure that there is no easy encoding of order differences between nodes that exploits the fact that the orders in the mempool are quite similar?
 
Last edited:

jtoomim

Active Member
Jan 2, 2016
130
253
@jtoomim
Let’s assume 98% of the block can be ordered canonically.
In the four BTC block templates I looked at when doing orderencode.py, this number was around 86.5%, 88.9%, 81.6%, and 81.0%. My code wasn't quite optimal in its search, so I would guess that the true number lies between the numbers I cited and half the difference to 100% (e.g. 86.5% to 93.25% for the first block template).

So I think 88% is a more likely estimate. But we can round that up to 90% for sanity.

That means we need to provide ordering instructions for 200,000 transactions.

Log2(200,000) ≈ 18 bits. But implementations might prefer 32 bits or 40 bits (varint encoding).

18 bits x 200,000 = 450 kB.

32 bits x 200,000 = 800 kB

40 bits x 200,000 = 1000 kB

Real-world global bandwidth for uncompressible data in the bitcoin p2p protocol often gets as low as 30 kB/s, especially when crossing the China border. I actually saw averages of 10 kB/s on a few of my China nodes in 2015. In the Gigablock tests, we got 1.6 MB/s of block size, but with an xthin compression of 50x, that means actual network bandwidth was around 32 kB/s.

This means that the 32-bit case would typically incur a delay of 25 seconds, at least on today's network. On a future network capable of 1 GB blocks (31x the current limit), we might have 5x higher network throughput, so it might only take 5 seconds on average. That's not terrible, and possibly tolerable, but it could be worth improving upon.

All that is in the non-adversarial case with natural transactions. A selfish miner currently can shuffle transactions around as desired, and get 10x higher encoded lengths. That's 20 bits x 1e6, or 2.5 MB. Over the China border at 30 kB/s, that could be as long as 83 seconds. Clearly too long, and long enough to do a propagation-delay selfish mining attack.
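Checking the arithmetic in the last few paragraphs (all figures are from the post; the bandwidth numbers are jtoomim's observations, not measurements of mine):

```python
txs = 2_000_000                          # 1 GB of 500-byte transactions
moved = txs // 10                        # ~10% out of order: 200,000

# Ordering-instruction sizes for the three encodings mentioned:
size_kb = {bits: bits * moved / 8 / 1000 for bits in (18, 32, 40)}
# -> {18: 450.0, 32: 800.0, 40: 1000.0}

bandwidth_kbs = 32                       # observed cross-border p2p throughput
delay_s = size_kb[32] / bandwidth_kbs    # 25.0 s for the 32-bit case

# Adversarial case: ~20 bits across 1e6 shuffled transactions:
adversarial_kb = 20 * 1_000_000 / 8 / 1000   # 2,500 kB = 2.5 MB
adversarial_delay_s = adversarial_kb / 30    # ~83 s at 30 kB/s
```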

(If we were using UDP with FEC instead of TCP, these awful network throughput issues would largely go away. Do you know if anyone is working on that?)

> But who's to say that we can't use an adaption of IBLTs or another probabilistic data structure to transmit this order information more efficiently as well?

In the non-adversarial case, this is possible. If you basically reimplement the CreateNewBlock algorithm when decoding graphene blocks, you can generate an order guess that will be very darn close, except for potential implementation differences between different bitcoinds. But this means making the CreateNewBlock algorithm and sorting a pseudo-consensus rule. You don't have to follow it, but if you don't your revenue will suffer, so everyone will follow it.

In the adversarial case, the number of possible sortings is factorial(txcount). That results in entropy of about 11.8 bits per tx for a 10k-tx block and 19.48 bits per tx for a 2M-tx block, with larger blocks taking more bits per tx. Practical encodings will probably end up with 32 bits per transaction, and then you're back at the calculations above.
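The per-transaction entropy of an arbitrary ordering is log2(n!)/n, which can be evaluated exactly via the log-gamma function without computing huge factorials (a quick check of the figures above):

```python
import math

def order_bits_per_tx(n):
    """Bits per transaction needed to encode an arbitrary order of n
    transactions: log2(n!) / n, using lgamma since lgamma(n+1) = ln(n!)."""
    return math.lgamma(n + 1) / math.log(2) / n

# order_bits_per_tx(10_000)    -> ~11.85
# order_bits_per_tx(2_000_000) -> ~19.49
```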

P.S.: Talking with you guys is too much fun. I'm probably going to sleep through my presentation tomorrow if this keeps up.

P.P.S.: My orderencode.py got about 11.8 bits per out-of-order occurrence on a 2k transaction block with 379 out-of-order occurrences. That number grows superlinearly for bigger blocks, though. Probably O(n log n).
 
Last edited: