Gold collapsing. Bitcoin UP.

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@jessquit

If we use the second model (each node has a direct connection to the Internet of a certain bandwidth), then it is easy to show that "hops" is faster. Consider a case with N=25 nodes.

Let

t = propagation time
B = bandwidth = 1 Gbps
Q = bits transmitted = 1 Gbits
L = lag (one-way ping time) 0.5 seconds

then

t = L + Q/B

If you send to all 25 nodes in parallel, it takes

t = 0.5 + (25 * 10^9 / 10^9) = 25.5 seconds

If you send to only four nodes first, this hop takes

t1 = 0.5 + (4 * 10^9 / 10^9) = 4.5 seconds

The second hop now has five (5) nodes, each sending data to four of the remaining 20 nodes that don't yet have the block. This last hop also takes 4.5 seconds:

t2 = 0.5 + (4 * 10^9 / 10^9) = 4.5 seconds

The total time is thus

t1 + t2 = 4.5 + 4.5 = 9 seconds

So, using this model, it takes 25.5 seconds to send the block in parallel, and 9 seconds to send the block over two hops.

But remember, what I'm using above is just a model. If we use the model where every node has a direct fiber connection (not packet switched) to every other node, then parallel _is_ fastest. The "truth" is probably somewhere between both models.
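The two cases above can be sketched in a few lines of Python (using the post's illustrative numbers; the loop that generalizes the hop count to an arbitrary fanout is my extrapolation, not part of the original model):

```python
L = 0.5   # one-way lag, seconds
B = 1e9   # per-node bandwidth, bits/s
Q = 1e9   # block size, bits

def parallel_time(n):
    # origin pushes the block to all n peers at once through one shared pipe
    return L + n * Q / B

def hop_time(total, fanout):
    # gossip: every node that has the block forwards it to `fanout` peers,
    # so the number of holders multiplies by (fanout + 1) each hop
    have, hops = 1, 0
    while have < total:
        have *= fanout + 1
        hops += 1
    return hops * (L + fanout * Q / B)

print(parallel_time(25))  # 25.5 s
print(hop_time(25, 4))    # 9.0 s
```

Note the trade-off the model captures: more hops add lag (the L term repeats), but each hop divides the bandwidth load across more senders.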

EDIT: fixed math error
 
Last edited:

jessquit

Member
Feb 24, 2018
71
312
@Peter R thanks for the model. I first want to say that I understand the point the model makes, and I'll expound on that later.

The model makes its point but it does so with some assumptions that seem odd to me.

Maybe I'm just very naive or missing something but a gigabit block payload seems extremely large. Why is the block payload so large?

Also, a typical one-way ping time of 0.5 seconds seems very large. And for a large mining operation, 1 Gbps seems very small, considering that's my home internet speed. If my pool is located in a reasonably sized data center, I should be able to utilize multiple 10 Gbps pipes these days.

And remember that once we distribute our block to even the first half-dozen miners/pools, we've already reached a majority of current hashpower, because hashpower is not evenly distributed, and the "full-graph" assumption is that we are prioritizing our connections based on hashpower.

I don't mean to beat you up over these numbers. I know the figures you used are just cocktail-napkin numbers used to illustrate a point, which is perfectly fine; I do understand the point they were intended to make, and you are correct. So in that spirit, I think we could agree that they might be adjustable somewhat, that's all.
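To make that concrete, re-running the same formula with different (purely illustrative) assumptions shows how sensitive the conclusion is: with a small compact-block payload, a fat pipe, and a more typical lag, latency dominates and parallel wins. All numbers below are my assumptions, not anyone's measurements:

```python
L = 0.075         # assumed one-way lag, seconds (~150 ms round trip)
B = 10e9          # assumed 10 Gbps data-center pipe, bits/s
Q = 8 * 20_000    # assumed ~20 KB compact-block payload, in bits
N = 25

parallel = L + N * Q / B          # one hop to everyone
two_hop  = 2 * (L + 4 * Q / B)    # fanout-4 gossip, two hops

print(parallel, two_hop)  # parallel comes out faster here
```

With a tiny payload the Q/B term is negligible, so each extra hop mostly just adds another L of latency.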

Past that -- yes, I do understand the benefits of swarm propagation, and you are not at all wrong about the benefits, but they depend on the assumptions. When the same content has to be distributed quickly to millions of endpoints all around the world, the benefits of swarm propagation are very clear. And that is the correct model to describe propagation among the thousands of nonmining nodes, I'm sure we agree here.

I guess my point is that I strongly see the mining topology - particularly among the hashpower that actually matters - looking much more like the (A) assumption. That is the paradigm I'm asking us to explore. The hashpower that matters is large operations with fat pipes and compact blocks so payloads are not bottlenecking.

Assuming that blocks are emanating from a well-connected data center somewhere in the world, and needing to reach a small number of other well-connected data centers in other parts of the world, all of whom have a strong financial incentive to be very well connected, a very high degree of parallelization should be expected. It doesn't literally have to be a physical fiber link from each point to each point to logically resemble this from the point of view of the model.

It strikes me as self-evident that since miners (pools) have a strong incentive to connect as well as they can to other known miners (pools), and since we're talking about a relative handful of operations with deep expenditures in infrastructure, the assumption should be that the miners that matter have this emergent topology. If it turns out that they really don't, I have to ask: why wouldn't they, since it is advantageous?

Thanks for your time.
 
  • Like
Reactions: freetrader

Tom Zander

Active Member
Jun 2, 2016
208
455
my issue is a whole slew of possibly incorrect assumptions we make about the mining network
Bingo!

Here is some third-party measured data:

https://gist.github.com/hellerbarde/2843375

Things like SSDs have gotten faster since 2012, but sending a packet has not.

Send packet CA->Netherlands->CA .... 150,000,000 ns = 150 ms

Then you have to realize that sending a new block isn't just one packet, it's many. And you have to realize that sending, for instance, 100 KB can't be neatly calculated as a pre-defined number of packets (see TCP window size).
Which means that you will get many round trips for larger blocks of data, where "larger" is anything over the standard window size (typically a couple KB, but it can grow large on a really good connection).

This means that the speed of your pipe is largely irrelevant if you end up sending a larger amount of data over a longer distance. The number of round trips is what matters.
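The window-growth effect can be sketched under simple assumptions (a hypothetical helper; this ignores congestion caps, packet loss, and delayed ACKs, and assumes the window doubles every round trip as in classic slow start):

```python
RTT = 0.150           # e.g. CA -> Netherlands -> CA round trip, seconds
MSS = 1460            # assumed bytes per segment
INIT_CWND = 10 * MSS  # assumed initial congestion window (10 segments)

def slow_start_rtts(size_bytes):
    # count round trips needed if the send window doubles each RTT
    sent, cwnd, rtts = 0, INIT_CWND, 0
    while sent < size_bytes:
        sent += cwnd
        cwnd *= 2
        rtts += 1
    return rtts

print(slow_start_rtts(100_000))  # round trips to push ~100 KB
```

The takeaway: transfer time for a cold connection scales with the number of RTTs, not with raw pipe speed, until the window has grown large enough.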
 

throwaway

Member
Aug 28, 2015
40
124
Don't forget that you only need to send the block header in order for other miners to begin mining on top of it.

It might be best to send the header to all miners simultaneously, and the rest of the block sequentially.
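For scale: a block header is a fixed 80 bytes (version, previous-block hash, merkle root, timestamp, bits, nonce), so it fits comfortably in a single packet. A minimal serialization sketch (field values here are made up for illustration):

```python
import struct

def serialize_header(version, prev_hash, merkle_root, timestamp, bits, nonce):
    # little-endian: 4 + 32 + 32 + 4 + 4 + 4 = 80 bytes
    return struct.pack("<I32s32sIII", version, prev_hash, merkle_root,
                       timestamp, bits, nonce)

hdr = serialize_header(0x20000000, b"\x00" * 32, b"\x11" * 32,
                       1524450646, 0x1802AEE8, 12345)
print(len(hdr))  # 80
```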
 
  • Like
Reactions: jessquit

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
Cross clears 0.14 :ROFLMAO:
BCH over $1300
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
I don't want that type of world more than my feelings of sharing freely with all mankind, because mankind can still use the tech, on BCH.
I'm not ruling out the possibility that we (or at least I and those who share my views) may need to abandon ship on BCH at some point. If that is impossible because of patent encumbrance, that becomes a real problem and in turn effectively means that nchain has captured BCH.

If BCH is worth a damn, it will stand on its own feet and not rely on government interference for success of any kind. Do we really want it to become what it fights against?
Which means that you will get many round-trips for larger blocks of data.
Sounds like a reason to drop TCP for UDP to me :) Especially since x-thin doesn't transmit whole blocks anyway and transmission of complete transactions may be requested (and those should fit completely in packets, I think).
 

jessquit

Member
Feb 24, 2018
71
312
Don't forget that you only need to send the block header in order for other miners to begin mining on top of it
I thought this was understood. If other miners mine headers-first (as we are told), then the total amount of data that needs to be transmitted to "win the race" is extremely low (an 80-byte header, a single packet).

If not, then assuming miners mine on compact blocks, then the "full" block is still rather small (definitely not a gigabit).

I wasn't aware that an assumption of SM strategy is that SMs mine low-payload / empty blocks, either. This might help their race, but also makes SM even more noticeable.

Also, if SM's "listening nodes" don't validate HM's block before reacting, this implies a very simple countermeasure: spoof these nodes with tiny invalid blocks to make them divulge their contents, then maybe blacklist them. But this is OT.

I stand by my assertion. Assuming the "fully connected" model, HM should be able to open connections to all relevant miners and begin transmitting blocks before any SM can react and begin transmitting blocks to the same miners.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Seeing a mention of UDP, which appears to be an instant improvement, makes it worth reviewing the pros and cons. This article gives an overview.
https://www.incapsula.com/ddos/attack-glossary/udp-flood.html
That vulnerability exists whether you use it in your applications or not. In fact, the lack of application makes it worse :) "When it sees that no associated application is listening, it replies with an ICMP Destination Unreachable packet."

I suspect that the likelihood is that mixed TCP/UDP would be most efficient. Heck, in some ways, it's a shame that multicast never really made it.
 
  • Like
Reactions: solex and jessquit

jessquit

Member
Feb 24, 2018
71
312
@Richy_T "I suspect that the likelihood is that mixed TCP/UDP would be most efficient." -- isn't this basically analogous to what FIBRE does?

That's the other thing about the SM discussion: it seems to leave out the fact that we already know miners use a mining-prioritized network that is not generally available to the thousands of nonmining Sybil / relay nodes. As I'm not an expert in the actual mechanics of these networks I try to avoid bringing them up, but I do know: they exist, miners use them to propagate new blocks, and nonminers generally don't play on them. In short, their existence merely confirms my hypothesis: miners prioritize connections to other miners.

____

I've always understood that miners begin mining on top of the first-seen block as soon as they see it, and don't wait around to see if someone else is making a "better block." In that case, none of the arguments about round-trip times or large payloads have any relevance to this discussion because HM merely needs to get his header to the other dozen or so major miners to "win" and he can be lazy about sending the rest of the transactions.

Assuming that the mining network takes the shape of a near-complete graph means that it is virtually impossible for SM to beat HM, no matter how many Sybils you use or where you put them. There is no way for SM's Sybil in Tokyo to learn about HM's block found in New York before HM has already started telling the local miner in Tokyo about it.
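One reason the near-complete graph is plausible: with only a handful of relevant pools, full connectivity is cheap. The link count is quadratic, but in a tiny n (a back-of-envelope sketch, not data about any actual topology):

```python
def links(n):
    # number of connections in a complete graph over n pools
    return n * (n - 1) // 2

print(links(12))  # 66 links fully connect a dozen major pools
print(links(25))  # 300
```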

____

Edit: I just want to add that what I'm discussing is not some sort of anti-SM countermeasure adopted by miners as a response to SM. The argument I'm advancing is that, as a result of mining incentives, we should expect the mining network to form a logical (not physical) "near-complete graph" structure, and that once we presume the existence of this block propagation topology among miners, several conclusions change, one of which may well be SM theory.
 

Tom Zander

Active Member
Jun 2, 2016
208
455
@jessquit

yesterday you wrote;

When a miner finds a block, the miner has an incentive to make sure as many other miners find out about the block as fast as possible. A miner does not want to waste its time broadcasting a newly found block to thousands of nonminers randomly.
And today you write;

it seems to leave out the fact that we already know that miners already use a mining-prioritized network that is not generally available to the thousands of nonmining Sybil / relay nodes. As I'm not an expert in the actual mechanics of these networks I try to avoid bringing them up, but I do know: they exist, miners use them to propagate new blocks, and nonminers generally don't play on them

I'll take that as a positive sign that you went and read up on how miners connect. They indeed do not use the p2p network that the rest of the network uses.

I'll close by stating that, as far as I know, nobody is worried about Selfish Mining being a problem. I certainly am not, and the only reason this has ever been an interesting topic is that the SM ideas make us learn a lot about how mining works.

And that's ultimately why we are all on this forum: to learn.
I've always understood that miners begin mining on top of the first-seen block as soon as they see it, and don't wait around to see if someone else is making a "better block."
It's important to realize that two sibling blocks are by definition equal. The PoW between them cannot be different.
 

throwaway

Member
Aug 28, 2015
40
124
@jessquit

I was agreeing with you when I talked about the headers.

If miners needed the full block, then an honest miner with a 1 GB block might have a disadvantage against a selfish miner with a 500 MB block.

But since only the header is needed, the selfish miner can't win the race.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
What about this: even if it's slower to transmit the new block to all other miners at once, they do it anyway. Miners only need the header; they'll mine an empty block until they've received and validated the full block. They don't really care if they mine on top of the previous block or the new one (mining being memory-less and all (right?)), but they choose to mine on top of the new one because that's what everyone else will do.

(Edit: I'm addressing the medium post with this.)
https://medium.com/@bloxroutelabs/the-scalability-problem-very-simply-explained-5c0656f6e7e6

Edit 2
Whenever there's a new block:
The miner who found it wants everyone else to get it ASAP so they stop looking for competing blocks of the same height.
The other miners want the header of the new block ASAP so they can stop wasting hashes on top of an old block.
@throwaway

we've actually known this to be the miners' strategy since spring 2015, when Wang Chun revealed the headers-first strategy (aka SPV mining) on the mailing list. It resulted in the accidental ~8-block hard fork of July 2015, triggered by Core's intended soft fork (the strict DER encoding fix to prevent negative signatures). The post-mortem found that Antpool had forgotten to upgrade one of their non-mining stratum nodes, which inadvertently relayed a block from another small miner (~5%) that hadn't upgraded, despite most miners saying they would. @Peter R and I had a field day taking that one apart.
 

bitsko

Active Member
Aug 31, 2015
730
1,532
I'm not ruling out the possibility that we (or at least I and those who share my views) may need to abandon ship on BCH at some point. If that is impossible because of patent encumbrance, that becomes a real problem and in turn effectively means that nchain has captured BCH.

If BCH is worth a damn, it will stand on its own feet and not rely on government interference for success of any kind. Do we really want it to become what it fights against?
I'm not sure I understand the sentiment...

Everything BCH is today, which is real and proven as it is, cannot be taken away from you. It is well distributed and open source.

It is not as though protocol development is at any risk of encumbrance, and if it were, it is simple enough to not upgrade if you hash at all...
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
I still don't get why ABC devs aren't signing their releases and providing checksums for verification. @deadalnix @sickpig
 
  • Like
Reactions: throwaway

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
Isn't that the job of the committers/maintainers like you? I hesitate to say "job" because presumably it's a voluntary process. But if you're going to take the lead and get all the glory that goes with that, I'd think you'd prioritize the security and confidence of your releases.