@Peter R : I might be interested in participating in this.
I am very excited to hear this! Your help would be very valuable. Your work on the "Block Size Olympics" was awesome!
I could add a physical node in Moses Lake, WA, USA. I can put in an SSD for the chainstate (240 GB, or larger upon request) plus an HDD for the unpruned blocks (as many TB as it takes). Depending on how long the experiment runs, I could either have it share our office's 100 Mbps symmetric fiber line or I could get a new (dedicated) line.
This would be fantastic.
(The initial experiment (Experiment #1) should be completed by the end of 2017 at the latest, and hopefully much sooner. But we intend to follow up with continued experiments and eventually grow the Gigablock Testnet into a permanent, self-sustaining resource for the Bitcoin community.)
I should warn you, I've had bad experiences in the past with VPSs from Contabo. It appears that they use spinning HDDs for most of their storage, and they overbook their VPSs with respect to storage bandwidth, so IO will almost certainly be the bottleneck. I've had a simple command like ls take 30 seconds over ssh. Utterly ridiculous.
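If anyone wants a quick sanity check on a candidate VPS before adding it to the testnet, something like the sketch below gives a rough write-throughput number. This is just an illustrative script (not a substitute for a real benchmark tool like fio), and all names in it are made up for the example:

```python
import os
import time

def disk_write_mbps(path="io_test.bin", size_mb=256, chunk_mb=4):
    """Rough sequential-write throughput in MB/s, fsync'd so the
    hypervisor can't hide the result in a cache."""
    buf = os.urandom(chunk_mb * 1024 * 1024)
    start = time.perf_counter()
    with open(path, "wb") as f:
        for _ in range(size_mb // chunk_mb):
            f.write(buf)
        f.flush()
        os.fsync(f.fileno())  # force the data to actually hit the disk
    elapsed = time.perf_counter() - start
    os.remove(path)
    return size_mb / elapsed

if __name__ == "__main__":
    print(f"~{disk_write_mbps():.1f} MB/s sequential write")
```

An oversubscribed host like the one described above would show wildly inconsistent numbers between runs, which is itself a red flag.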
Like @sickpig mentioned, this is a physical server in a rack in Munich.
We had good results from Linode. Digital Ocean's performance was okay, but not up to Linode's specs. Aliyun was also decent except for the issue with crossing the GFW. That wasn't Aliyun's fault, though.
All good to know. We should continue this discussion on Slack, as we're spinning up servers already...
Your node in Beijing will not keep up very well unless you're using a UDP-based relay method. I'm not sure what the state of the art in BU is these days -- do you have something equivalent to Corallo's FIBRE yet? (UDP with forward error correction and optimistic forwarding of packets before the full block has been transferred.) Xthin might work well enough with 1 MB blocks because it gets the block down to a handful of TCP packets, but once the Xthin message is a couple of megabytes in size you're going to have severe problems unless you are using UDP with FEC.
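To illustrate why FEC matters for lossy links like the GFW crossing: with erasure coding the receiver can reconstruct a dropped packet from parity data instead of waiting for a TCP retransmit. FIBRE uses far more sophisticated erasure codes than this, but the core idea can be sketched with a single XOR parity chunk (every name below is illustrative, not FIBRE's actual code):

```python
def xor_parity(chunks):
    """One parity chunk = byte-wise XOR of all equal-length data chunks."""
    parity = bytearray(len(chunks[0]))
    for chunk in chunks:
        for i, b in enumerate(chunk):
            parity[i] ^= b
    return bytes(parity)

def recover(received, parity):
    """Rebuild a single missing chunk (marked None) by XOR-ing the
    survivors back out of the parity chunk."""
    missing = bytearray(parity)
    for chunk in received:
        if chunk is None:
            continue
        for i, b in enumerate(chunk):
            missing[i] ^= b
    return bytes(missing)

# A 12-byte "block" split into three chunks; chunk 1 is lost in transit.
chunks = [b"abcd", b"efgh", b"ijkl"]
parity = xor_parity(chunks)
rebuilt = recover([chunks[0], None, chunks[2]], parity)
```

With one parity chunk you survive any single loss; real fountain codes generalize this so the receiver can rebuild the block from any sufficiently large subset of packets, which is what makes optimistic UDP forwarding viable.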
All very good points.
But keep in mind that the purpose of Experiment #1 is just to get a sort of "baseline" with all nodes connected in a standard way (i.e., with Xthin). It is totally fine if Experiment #1 shows that the nodes behind the GFW get partitioned at (say) Q = 100 MB. It gives us something to fix for Experiment #2 (or #3 or #4).
The goal after Experiment #1 isn't to say "look, we've sustained 3,000 TPS throughput -- all the problems are solved." The goal is to say:
"We achieved 3,000 TPS sustained throughput at 1 GB blocks. Although this is a significant milestone, we didn't really test for A, B and C; the Chinese nodes became partitioned; two nodes with less than 16 GB RAM crashed; etc. We've identified bottlenecks X, Y and Z to fix for the next test."
UTXO lookup is going to be a big bottleneck. Disk IO performance will probably be as important as network connectivity and more important than CPU.
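To make the scale of the lookup problem concrete: validating a block means one UTXO lookup per transaction input, so gigabyte blocks mean millions of lookups per block. A toy in-memory stand-in for the chainstate (these names are illustrative, not bitcoind's actual data structures, which live in LevelDB on disk):

```python
class UtxoSet:
    """Toy unspent-output set keyed by (txid, output_index)."""

    def __init__(self):
        self._utxos = {}  # (txid, vout) -> amount in satoshis

    def add(self, txid, vout, amount):
        """Record a new spendable output."""
        self._utxos[(txid, vout)] = amount

    def spend(self, txid, vout):
        """Look up and remove an output; None means a double-spend or
        unknown input. Every transaction input triggers one of these
        lookups, so at gigabyte scale cache misses here become disk IO."""
        return self._utxos.pop((txid, vout), None)
```

Once the set outgrows RAM, each miss becomes a random read on the chainstate disk, which is why SSDs for the chainstate (as offered above) matter more than raw CPU.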
Absolutely! But for Experiment #1, our UTXO set will not be _that_ large (we can fairly easily control this based on the algorithm our transaction-generating nodes use). Our current plan is to stress test the UTXO set (and disk IO, etc.) in Experiment #2.
Is there a chat group or a central point of contact for this project?
For now, I am the central point of contact. I have set up a channel in BU Slack and I just sent you an invitation.
Looking forward to your participation!