Gold collapsing. Bitcoin UP.

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
The bright side of the Hong Kong round table

I don't even have to put my tinfoil hat on to suspect that Blockstream's mission is to stall Bitcoin's development.

But what I have really been worried about is that some big, nefarious entity has bought up the ASIC production and mining operations, distributing hashpower across different pools.

A few days ago, it suddenly became clear to me:
The Hong Kong round table would never have happened if the miners were corrupted!

It's a good sign. Greed and open source will make scaling and bitcoin happen!

┗(°0°)┛
 

Dusty

Active Member
Mar 14, 2016
362
1,172
To take an extreme example, say I had a node which was only connected to one other node. That node would know exactly which transactions it would send me and would not try to retransmit to me.
You mean that BU keeps track of the txs sent to you, so that it does not send them again when they are in a block?

So does BU keep track separately for each of its connected peers?

This brings up the proposition that perhaps it is possible to be *too* connected.
If the former is true, Xthin could also keep track of the INVs sent by its peers, so it knows which txs each one has: this would eliminate the problem of being "too" connected.
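A minimal Python sketch of that per-peer tracking idea (class and method names are hypothetical, not BU's actual implementation): remember which txids each peer has announced or been sent, and skip anything that peer already knows about.

```python
# Hypothetical sketch: per-peer inventory tracking, so a tx (or its INV)
# is never sent to a peer that has already seen it.

class PeerTxTracker:
    def __init__(self):
        # peer_id -> set of txids known to that peer (sent or received)
        self.known = {}

    def record(self, peer_id, txid):
        # Note a txid as known to this peer.
        self.known.setdefault(peer_id, set()).add(txid)

    def should_send(self, peer_id, txid):
        # Relay only if this peer hasn't seen the txid yet.
        return txid not in self.known.get(peer_id, set())

tracker = PeerTxTracker()
tracker.record("peer1", "aa11")
```

With this in place, a node that received a tx from a peer would never echo it back, which is the "too connected" overhead Dusty is describing.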
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Sounds about right in terms of throughput.

What thin blocks save is latency/burst bandwidth.
I don't agree that that's "about right". To do the napkin math: before Xthin, the txns and blocks were redundant, so that should be a 50% savings with Xthin blocks.

Actually your bandwidth using Xthin should be nearly that of "blocks only" mode. This makes perfect sense because both techniques send the data only once. With "blocks only" mode it is sent once in a block. With Xthin, it is sent once as transactions.

This analysis does not count txs that will "never" be committed (i.e., spam). Blocks-only mode won't be overwhelmed by spam, but we could pretty easily add a filter so nodes don't send you txs below a fee rate you request, which would mostly solve that, or add the fee/byte to the INV.
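The fee filter idea above might look like this toy Python sketch (function and parameter names are hypothetical, not an actual protocol message):

```python
# Hypothetical fee filter: a node announces a minimum fee rate, and its
# peers skip INVs for any tx paying below it.

def should_announce(tx_fee_sat, tx_size_bytes, peer_min_feerate):
    """Relay the tx INV only if its fee rate meets the peer's minimum."""
    feerate = tx_fee_sat / tx_size_bytes  # satoshis per byte
    return feerate >= peer_min_feerate

# A 250-byte tx paying 2500 sat is 10 sat/byte:
ok = should_announce(2500, 250, 5)        # peer asks >= 5 sat/byte
skipped = should_announce(2500, 250, 20)  # peer asks >= 20 sat/byte
```

Each node sets its own threshold, so spam simply never crosses the wire to nodes that don't want it.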

But of course, the huge advantage of thin blocks is that you get the transactions right away. And as others have said, the Xthin bandwidth use is spread over 10 minutes, rather than coming in all at once. This is very important, since it means your Netflix won't hiccup, the larger network is less likely to drop packets causing retransmission, and latency is much lower.


Now, if you happen to be relaying transactions to 8-10 nodes, then gmax's claim is about right (I'd guess, without doing tests).

But if you are relaying to 100 nodes, then it'll be more like 1%.

Given his position he should be a good enough engineer to realise this basic relationship.


0.12 does include a statistics tracking class. Run "./bitcoin-cli getstatlist" and "getstat". We just need to start instrumenting now.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
You mean that BU keeps track of the txs sent to you, so that it does not send them again when they are in a block?

So does BU keep track separately for each of its connected peers?
Probably not. Likely inelegant writing on my part, since I believe transactions are simply relayed to each connected node only once, when received. But my point is that being multiply connected requires network traffic to de-duplicate* in a simple flood model (though there are other, more complicated methods to reduce this).

*Assuming one bothers to de-duplicate.
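A toy flood model makes the duplication concrete: every forward costs a message, but only the first arrival at each node is useful.

```python
# Toy flood relay: every node forwards each new tx to all its peers, so
# total messages grow with connectivity while useful deliveries don't.

def flood(adjacency, origin):
    """Flood one tx from `origin`; return (messages_sent, nodes_reached)."""
    seen = {origin}
    frontier = [origin]
    messages = 0
    while frontier:
        node = frontier.pop()
        for peer in adjacency[node]:
            messages += 1           # every forward costs one message...
            if peer not in seen:    # ...but only first arrivals are useful
                seen.add(peer)
                frontier.append(peer)
    return messages, len(seen)

# Fully connected 4-node network: 12 messages to reach just 4 nodes.
adj = {i: [j for j in range(4) if j != i] for i in range(4)}
msgs, reached = flood(adj, 0)
```

Here 12 messages deliver the tx to only 3 new nodes, so 9 of them are pure duplicate traffic; that redundant fraction grows as connectivity rises.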

0.12 does include a statistics tracking class. Run "./bitcoin-cli getstatlist" and "getstat". We just need to start instrumenting now.
I'll have a look at that. Does it include the kind of data that would be needed for this?
 
Last edited:

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
I don't agree that that's "about right". To do the napkin math: before Xthin, the txns and blocks were redundant, so that should be a 50% savings with Xthin blocks.

Actually your bandwidth using Xthin should be nearly that of "blocks only" mode. This makes perfect sense because both techniques send the data only once. With "blocks only" mode it is sent once in a block. With Xthin, it is sent once as transactions.

This analysis does not count txs that will "never" be committed (i.e., spam). Blocks-only mode won't be overwhelmed by spam, but we could pretty easily add a filter so nodes don't send you txs below a fee rate you request, which would mostly solve that, or add the fee/byte to the INV.

But of course, the huge advantage of thin blocks is that you get the transactions right away. And as others have said, the Xthin bandwidth use is spread over 10 minutes, rather than coming in all at once. This is very important, since it means your Netflix won't hiccup, the larger network is less likely to drop packets causing retransmission, and latency is much lower.


Now, if you happen to be relaying transactions to 8-10 nodes, then gmax's claim is about right (I'd guess, without doing tests).

But if you are relaying to 100 nodes, then it'll be more like 1%.

Given his position he should be a good enough engineer to realise this basic relationship.


0.12 does include a statistics tracking class. Run "./bitcoin-cli getstatlist" and "getstat". We just need to start instrumenting now.
I had speculated a 100X improvement in block propagation time.
A 1MB block could be communicated with something like 10KB.
Are mempools really so out of sync that they don't all include the same txs?
 

jl777

Active Member
Feb 26, 2016
279
345
If we assume 1000 tx per 600-second block, that is about 1.7 tx per second. Let us say 15 hops at 300 milliseconds each, or about 5 seconds average propagation.

If the above numbers are anywhere close, the odds of any two nodes that are separated by more than a few hops having a different set of txs in mempool approach 100%.
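Writing that napkin math out with the same assumed figures: the number of txs "in flight" at any instant (heard by one node but not yet by a distant one) is just arrival rate times propagation delay.

```python
# jl777's napkin math: tx arrival rate x propagation delay gives the
# expected number of txs that differ between two distant mempools.

tx_per_block = 1000
block_interval_s = 600
hops = 15
per_hop_ms = 300

tx_rate = tx_per_block / block_interval_s   # ~1.67 tx/s
propagation_s = hops * per_hop_ms / 1000    # 4.5 s end to end
in_flight = tx_rate * propagation_s         # ~7.5 txs in flight on average
```

With a handful of txs always in flight, the chance that two distant nodes hold *identical* mempools at a given moment is essentially zero, which is the point.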
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Well sure, but the odds that their mempools match up by ~90% are nearly 100%.
I mean, it's only txs that are 5-10 seconds old or newer that could potentially be in one mempool but not another.
So ~90% of the txs in the block are communicated with TXID only and the rest have to be relayed in full, correct?
If miners simply do not add txs that they first heard about 5-10 seconds ago, this works much better.

The way I see it, the block propagation time improvement "should be" closer to something like 50-90X faster than relaying the whole block.
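A rough sanity check on those ratios, assuming ~500-byte transactions and 8-byte truncated hashes (illustrative figures, not measurements): txs already in the peer's mempool cost only a short hash, the rest go in full, and the full ones dominate quickly. Interestingly, a 90% hit rate alone would give under 10X; the 40-100X range needs near-complete mempool overlap.

```python
# Toy estimate of the thinblock compression ratio. Assumed figures
# (500-byte average tx, 8-byte truncated hash) are illustrative only.

def compression_ratio(n_tx=2000, avg_tx_bytes=500, hash_bytes=8, hit_rate=0.999):
    hits = int(n_tx * hit_rate)            # txs the peer already has
    misses = n_tx - hits                   # txs that must be sent in full
    thin = hits * hash_bytes + misses * avg_tx_bytes
    return (n_tx * avg_tx_bytes) / thin

r_high = compression_ratio(hit_rate=0.999)  # near-perfect sync: ~59X
r_low = compression_ratio(hit_rate=0.90)    # 90% sync: under 10X
```

One missing 500-byte tx costs as much as ~60 hashes, so the ratio is extremely sensitive to the last few percent of mempool agreement.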
Test Results

The following results highlight how thinblock sizes compare to actual block sizes. When the memory pool is sufficiently "warmed up", Xtreme Thinblocks are typically seen at 1/40 to 1/100th the size of regular blocks.

2016-01-20 13:20:20 Send xthinblock size: 13484 vs block size: 949164 => tx hashes: 1657 transactions: 1
2016-01-20 13:49:12 Send xthinblock size: 25024 vs block size: 949173 => tx hashes: 3095 transactions: 1
2016-01-20 13:52:18 Send xthinblock size: 6494 vs block size: 749124 => tx hashes: 781 transactions: 1
2016-01-20 14:15:07 Send xthinblock size: 9846 vs block size: 934453 => tx hashes: 1035 transactions: 4
2016-01-20 14:30:05 Send xthinblock size: 13448 vs block size: 949092 => tx hashes: 1648 transactions: 1
There you go: the tests show a 40-100X reduction in the block data sent, and a corresponding improvement in block propagation time.

fucking wonderful!
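For reference, the compression ratios implied by the log lines quoted above:

```python
# (xthinblock bytes, block bytes) pairs taken directly from the test log.
results = [
    (13484, 949164),
    (25024, 949173),
    (6494, 749124),
    (9846, 934453),
    (13448, 949092),
]

# Block size divided by thinblock size for each line.
ratios = [block / thin for thin, block in results]
```

The ratios run from roughly 38X to 115X, bracketing the quoted 1/40 to 1/100 range.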
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
FWIW, I believe the typical p2p model ends up using promoted nodes which are given more weight than other nodes. In such a model, one might only be relayed transactions from such a node (or several such). This may or may not play well with Bitcoin.
Would there be space in the protocol for a node to say to another node, "Dude, stop trying to send me transactions, I already have all the ones you've been trying to send. Try again in 10 minutes"?

I really need to look into the guts of this some more.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
I believe that's already in effect, sort of: Bitcoin uses a gossip network to relay txs across the network efficiently, although Gavin speculated there were some inefficiencies in its implementation.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
I had speculated a 100X improvement in block propagation time.
A 1MB block could be communicated with something like 10KB.
Are mempools really so out of sync that they don't all include the same txs?
Sometimes we have all the txs; other times 800 out of 3k are missing.
 

jl777

Active Member
Feb 26, 2016
279
345
"Sometimes we have all the txs; other times 800 out of 3k are missing."

My guess is that between any given pair of nodes, the number missing will tend to be the same, due to the average number of hops between the two nodes. If the historical missing txs are tracked, then the most optimal sharing would be finding the peers that are "equidistant" from other peers.

Then a merkle root can be used to verify that all txs are synced.
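A minimal sketch of that merkle-root comparison (double-SHA256 over sorted txids; simplified relative to Bitcoin's real block merkle tree, which orders txs by block position rather than sorting):

```python
# Two peers compare a single merkle root over their sorted mempool txids
# to check whether their mempools hold the same set of transactions.

import hashlib

def dhash(data: bytes) -> bytes:
    # Bitcoin-style double SHA-256.
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(txids):
    level = [dhash(t.encode()) for t in sorted(txids)]
    if not level:
        return b""
    while len(level) > 1:
        if len(level) % 2:             # odd level: duplicate the last
            level.append(level[-1])    # hash, as Bitcoin's tree does
        level = [dhash(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

# Same set in any order yields the same root; differing sets do not.
a = merkle_root(["tx1", "tx2", "tx3"])
b = merkle_root(["tx3", "tx1", "tx2"])
```

If the roots differ, the peers would still need a follow-up exchange to find *which* txs differ, but a matching root settles the question in 32 bytes.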
 

steffen

Active Member
Nov 22, 2015
118
163

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Probably not. Likely inelegant writing on my part, since I believe transactions are simply relayed to each connected node only once, when received. But my point is that being multiply connected requires network traffic to de-duplicate* in a simple flood model (though there are other, more complicated methods to reduce this).

*Assuming one bothers to de-duplicate.

I'll have a look at that. Does it include the kind of data that would be needed for this?

It's a generic template that "looks like" a number, but it keeps time-series data at a variety of granularities: 10 sec, 5 min, hourly, daily, monthly. And you can track min/current/max rather than just the current value as well.
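A rough Python sketch of what such a class might look like (structure and names are guesses, not BU's actual C++ template): a value that records min/current/max into time buckets at several granularities.

```python
# Hypothetical stat tracker: each set() updates the current value and
# the (min, max) of the time bucket it falls into, per granularity.

import time

class Stat:
    GRANULARITIES = {"10s": 10, "5min": 300, "hourly": 3600}

    def __init__(self):
        self.current = 0
        # granularity name -> list of (bucket_start, min, max) samples
        self.history = {g: [] for g in self.GRANULARITIES}

    def set(self, value, now=None):
        now = time.time() if now is None else now
        self.current = value
        for name, secs in self.GRANULARITIES.items():
            bucket = int(now // secs) * secs
            hist = self.history[name]
            if hist and hist[-1][0] == bucket:
                start, lo, hi = hist[-1]     # same bucket: widen min/max
                hist[-1] = (start, min(lo, value), max(hi, value))
            else:
                hist.append((bucket, value, value))

s = Stat()
s.set(5, now=1000)
s.set(9, now=1003)   # same 10 s bucket: min stays 5, max becomes 9
```

Instrumenting a counter then just means calling `set()` wherever the value changes; the buckets give the time-series views "getstat" would read out.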
"Sometimes we have all the txs; other times 800 out of 3k are missing."

My guess is that between any given pair of nodes, the number missing will tend to be the same, due to the average number of hops between the two nodes. If the historical missing txs are tracked, then the most optimal sharing would be finding the peers that are "equidistant" from other peers.

Then a merkle root can be used to verify that all txs are synced.
That makes theoretical sense, but I think that tx propagation time is << block discovery time, so very few txs are in flight. My guess is that mempools are mostly in sync, but there is a huge pool of old txs that are mostly gone from mempools but that miners are holding, and that miners include private txs.

I'd try to show this by changing the expedited algorithm to "include tx if it's not in my mempool", except that having missing txs exercises the tricky code paths.