BUIP065: Gigablock Testnet Initiative

Peter R

Well-Known Member
BUIP 065: Gigablock Testnet Initiative

1. Project Title

Gigablock Testnet Initiative: A Global Test Network for Bottleneck Analysis Under Very High Levels of Transaction Throughput

2. Bitcoin Address

3NpUpMAFxtBfxY68nvM9sgJE3go6X7A1E7 (this is an address within the BU 2-of-3 multisig wallet)

3. Motivation

The Bitcoin network is currently plagued by record-high fees and unreliable confirmation times. For example, while fees of only a few pennies were sufficient in 2014 to get a transaction confirmed in the next block (~10 min), by June 2017 the average transaction fee had risen to $4 and the average confirmation time was over an hour. This has made using bitcoin as a means of payment impractical in many cases.

It is well understood that an increase in the network's block size limit (presently 1 MB) would dramatically reduce fees and make confirmation times reliable once again. However, concern regarding the ability of the Bitcoin network to safely and reliably handle the associated increase in transaction throughput is a primary technical factor preventing the block size limit from being raised.

What is needed is a global test network that can be "stress tested" at very high levels of transaction throughput. Such a test network will allow bottlenecks to be identified and fixed ahead of time, providing a safe path to larger block sizes for the Bitcoin network.

4. Objectives

The objectives of this project are to:
  • set up and maintain a global test network capable of supporting blocks up to 1 GB in size and sustained Visa-level transaction throughput (3,000 TPS; see the rough calculation below),
  • perform continuous experiments related to on-chain scaling on that test network,
  • identify bottlenecks based on the results of those experiments, and
  • disseminate those findings to the broader Bitcoin community.
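As a rough back-of-the-envelope check on how those two targets relate (a sketch only; the 250-byte average transaction size is an assumption for illustration, not a figure taken from this proposal or the test plan):

# Relating sustained throughput to block size (illustrative figures only).
TPS = 3_000                 # target sustained transactions per second
AVG_TX_BYTES = 250          # assumed average transaction size
BLOCK_INTERVAL_S = 600      # ~10-minute block interval

txs_per_block = TPS * BLOCK_INTERVAL_S            # 1,800,000 transactions
block_bytes = txs_per_block * AVG_TX_BYTES        # 450,000,000 bytes
print(f"~{block_bytes / 1e6:.0f} MB per block")   # ~450 MB

At 250 bytes per transaction this works out to roughly 450 MB blocks; with larger average transactions the same sustained throughput approaches the 1 GB target.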

5. Project Duration


The project is intended to run for five years. However, after the first 12 months, BU members can vote to wind down the project ahead of schedule. In such a scenario, the project would continue to be funded for 3 additional months during the wind-down period.

6. Project Team

The Gigablock Testnet is intended to eventually become a self-sustaining resource of the Bitcoin community, with contributors from across the world. The team committed to bootstrapping the project consists of individuals from Bitcoin Unlimited, nChain and the University of British Columbia (UBC):

Bitcoin Unlimited
  • Andrea Suisani
  • Peter Rizun
  • Peter Tschipper
  • Andrew Stone
  • Andrew Clifford
  • Erik Beijnoff
nChain
  • Stefan Matthews
University of British Columbia
  • Prof. Victoria Lemieux
  • Prof. Chen Feng
  • Prof. Alexandra Federova
  • Post-doctoral researcher 1
  • Student 1
  • Student 2

7. Summary of Work Completed to Date

The precursor to this project was the NOL (no-limit) test network set up by Andrew Stone in late 2015. He used the test network to ensure that the BU client dealt with excessive blocks and reorgs correctly prior to releasing the first version of Bitcoin Unlimited. Later, Stone used the test network to produce blocks over 50 MB in size. However, these experiments were limited in scope to proving that blocks significantly larger than 1 MB could indeed be mined, propagated, and verified using the BU client.

In July 2017, BU members met in Vancouver with representatives from nChain, where it was agreed that a global test network was needed in order to identify bottlenecks to scaling and carry out scaling-related research demonstrating the network’s ability (or lack thereof) to handle significantly increased transaction throughput. It was agreed that BU and nChain would contribute equally to setting up, maintaining, and carrying out experiments on this network, subject to receiving authorization from their respective governance bodies.

nChain recently received authorization from its board of directors to contribute up to $150,000 per year to this project for 5 years.

BU and nChain are now adding nodes to the network and carrying out Experiment #1 [test plan], paid for in part by an advance from nChain.

Additionally, Peter Rizun and Prof. Victoria Lemieux (who is spearheading the Blockchain UBC initiative) have identified synergies between BU’s research & development goals and the goals of Blockchain UBC. One potential strategy to simultaneously increase the hours of “scaling” research & development and amplify BU funds is to take advantage of Canadian government programs designed to facilitate collaboration between Canadian universities and industry, such as the NSERC Collaborative Research & Development Grants program and the various Mitacs programs (for funding students and post-docs).

7.1. Why not use the BSafe network?

The current BSafe focus is on monitoring and conducting research on the network as currently configured (i.e., with a 1 MB block size limit) and not on measuring performance under alternate design patterns or bottleneck analysis under heavy load. The objectives and capabilities of the two test networks may converge in the future, but in the short-term the Gigablock Testnet will enable rapid experimentation and testing with alternate network configurations and vastly higher levels of transaction throughput.

8. Description of Activities

In “Year 1” of this project, we intend to:
  • Complete “Experiment #1” [test plan] and disseminate the results (ideally at Scaling Bitcoin Stanford)
  • Develop a block explorer for the Gigablock Testnet
  • Apply for government funding (e.g., NSERC / Mitacs) through our UBC partner (Victoria Lemieux) to amplify funds and establish a small research initiative related to blockchain scaling and the Gigablock Testnet at UBC.
  • Design and carry out “Experiment #2” based on the findings of Experiment #1, and present these results at the next “Future of Bitcoin Conference.” This experiment will likely involve increasing the number of mining nodes beyond the 8 nodes used in Experiment #1.

[Figure: mining-node setup for Experiment #1. Refer to the test plan for more details.]

9. Anticipated Challenges and Uncertainties

We may find that Visa-level transaction throughputs (~3,000 TPS) and gigabyte blocks are not possible with current technology, or that the scope of changes to BU to enable these throughput levels is so vast that we are unable to test up to 1 GB blocks in Year 1. It is also possible that we determine that blockchain technology is simply not suitable for a global payment network, and that scaling must be carried out on “second layers.” In such a scenario, the Gigablock Testnet Initiative would likely be terminated prematurely.

10. Budget

The requested budget from BU is $150,000 per year. nChain has agreed to match BU funds 1:1, giving a project budget of $300,000 per year for five years ($1.5M total project budget). Approximately half of the budget is expected to be used to cover server costs, while the other half of the budget is expected to be paid out as wages for contractors, employees and students.

11. Impact

The scalability of blockchain-based cryptocurrencies is a hotly-debated topic. We suspect this project will clearly demonstrate that Visa-level throughput can be reached and sustained on a global test network of mining nodes with today’s technology and for costs affordable to businesses, universities, and hobbyists. The results from an ongoing series of experiments carried out on this test network will add to the growing body of evidence that Bitcoin can indeed scale into a payment network for planet Earth, following the design laid out in the original Bitcoin white paper written by Satoshi Nakamoto.
 

bitPico

New Member
1 GB is not enough; by 2020 Visa will be doing close to 35K TPS. You guys clearly don't have any bank friends involved in this project. The only way Bitcoin can scale to beat centralized systems is by "off chain" transactions and settle them later. This is identical to the Visa network. This proposal is DOA without an "off chain" overlay network to handle the micro-transactions. Lastly, nobody is going to wait 10 mins for 1 confirmation (Cash is Instant settlement) and you have no secure support for 0 confirmations like SegWit.

Conclusion: The real Bitcoin works best as Cash already because it needs 0 confirmations today, has no fees or backlog (go look now that Team Roger (ex-con) Ver has stopped the SPAM).

Screenshot of this post taken in case of censorship; it will be posted on /r/bitcoin.

Best of luck anyways!
 

solex

Moderator
Staff member
@bitPico
Screenshot of this post taken in case of censorship; it will be posted on /r/bitcoin.
Please don't wait. Post on r/bitcoin right away. The Gigablock Testnet is the most ambitious practical research project ever initiated for Bitcoin scalability, and it would be great to have the r/bitcoin readers know about it so they can comment and critique it.
 

Peter R

Well-Known Member
@bitPico:

Achieving up to 1 GB blocks is the goal of Experiment #1. Who knows how large a block we'll be able to handle a few years down the road, as more and more bottlenecks are removed. Finding out is the purpose of this project!

I second what @solex said: please post to r/bitcoin. We'd love to gain more exposure for this project, but our posts get censored on that forum.
 

chrism82

New Member
I think you need to specify what you define as "blockchain technology". I'm pretty sure it won't scale to these sizes if defined as the original "user=node=miner" model (that model already collapsed). A clear definition of what you mean by blockchain technology needs to be provided.

R3 already concluded it didn't make sense for their centrally managed platform.

I think it's a promising experiment, but in order to evaluate scalability you need to establish what the constraints and the goals are. For instance, what are the bandwidth and time required to launch a node and catch up with the blockchain? How many individual nodes running in different places is an acceptable minimum (is just 1 node ok?). Etc.

We suspect this project will clearly demonstrate that Visa-level throughput can be reached and sustained on a global test network of mining nodes with today’s technology and for costs affordable to businesses, universities, and hobbyists.
Cost for what? For running a node locally? for mining meaningfully? for running a node remotely? for running a wallet in an android phone?

Regards,
ChrisM82
 

satoshis_sockpuppet

Active Member
original "user=node=miner"
A) that was never the original model, that is luke-jr's model.
B) How is a user defined?

R3 already concluded it didn't make sense for their centrally managed platform.
Good, we talk about Bitcoin here, not R3 nor Ripple.

bandwidth and time required to launch a node and catch up with the blockchain.
That's something to explore, though I'd guess it's pretty well calculable from known data.
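For example, a first-order, download-bound estimate is just chain size divided by usable bandwidth (the figures below are illustrative assumptions, not Gigablock Testnet measurements):

# Download-bound estimate of initial sync time; both figures are assumptions.
chain_size_gb = 140          # rough Bitcoin chain size in late 2017
bandwidth_mbps = 100         # usable downstream bandwidth in Mbit/s

hours = (chain_size_gb * 8 * 1000) / bandwidth_mbps / 3600
print(f"~{hours:.1f} hours to download the chain")   # ~3.1 hours

Validation time and peer upload limits add to that, but the order of magnitude follows directly from public data.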

How many individual nodes running in different places is an acceptable minimum (is just 1 node ok?).
This is a research project; the planned nodes are listed.

Cost for what? For running a node locally? for mining meaningfully? for running a node remotely? for running a wallet in an android phone?
For running a node, how is that not understandable from the text?

(And yes, I am 100% sure, that the costs are laughable.)
 

chrism82

New Member
A) that was never the original model, that is luke-jr's model.
B) How is a user defined?
It's what's implied in the whitepaper and in the original software. In the beginning, you ran "Bitcoin", a concrete piece of software, and you were miner, node and user (its wallet was the only wallet). As explicitly defined, 1 CPU 1 vote was the original idea and that quickly evolved. The concept of bundling the miner was progressively phased out in practice. The idea of running a node just to operate is also outdated.

Several things have been outdated by the circumstances. The reality of Bitcoin crushed this model over time, there's no denying that.

I'm not defending any agendas. Just pointing out that the exact meaning of what a blockchain is - and the goals and constraints of the experiment - need to be defined more precisely beforehand or else it will not be an experiment in the scientific meaning of the word. It will be a study. Which is fine, too. But the claim to have proven anything unknown beforehand will be very questionable.

Good, we talk about Bitcoin here, not R3 nor Ripple.
That is fine, but their research is actually pretty decent. Ripple is a completely different system, not sure why you bring it up. R3 intended to be a transaction system just like we seem to be understanding Bitcoin in this thread.

For running a node, how is that not understandable from the text?
It really isn't, and you still have not clarified if running this node remotely is what you mean (pros and cons to that). Definitely, running the node locally is a much tougher benchmark to meet, even more so if we consider the different circumstances across the world.
 

satoshis_sockpuppet

Active Member
It's what's implied in the whitepaper
Read section 8 again.

1 CPU 1 vote was the original idea
Nothing changed; the CPUs just got better. The only guys who are arguing that principle are the Bitcoin Core devs and their minions.

The reality of Bitcoin crushed this model over time, there's no denying that.
Bitcoin worked pretty much like the idea Satoshi described for years. It only stopped when we ran into a retarded tx cap.

I'm not defending any agendas. Just pointing out that the exact meaning of what a blockchain is - and the goals and constraints of the experiment - need to be defined more precisely beforehand or else it will not be an experiment in the scientific meaning of the word. It will be a study. Which is fine, too. But the claim to have proven anything unknown beforehand will be very questionable.
You will see what block sizes are possible in practice with regard to bandwidth, distance and hardware power. That might prove that Bitcoin with 1 GB blocks is operable for a setup X. How is that "questionable"?
Or do you already know that 1 GB blocks work or don't work and what the constraints are? If so, why don't you publish your knowledge?

R3 intended to be a transaction system just like we seem to be understanding Bitcoin in this thread.
Did R3 include mining/PoW?

Definitely, running the node locally is a much tougher benchmark to meet, even more so if we consider the different circumstances across the world.
Local for one means remote for somebody else. What kind of question is that? For me it's definitely easier to run a node locally than in, e.g., China, where I wouldn't have any idea where to start. So your "definitely" is definitely wrong.
 

chrism82

New Member
Read section 8 again.
SPV has nothing to do with it.

Bitcoin worked pretty much like the idea Satoshi described for years. It only stopped when we ran into a retarded tx cap.
No, it certainly doesn't work the same way. Many milestones have changed the reality of the way it works. GPU mining changed the game. ASIC mining changed the game. Even pools changed the game. The first exchanges completely changed the game. Adoption progressively changed the game in many ways.

This is devolving into complete sophism if you are going to keep denying that, so I'm out of that part of the conversation.

Did R3 include mining/PoW?
Pretty sure they played with it, but it didn't make sense for any of their prototypes.

It may also be the conclusion of this experiment. Who knows. Depends on WHAT THE MODEL IS AND WHAT THE GOALS ARE.

Local for one means remote for somebody else. What kind of question is that? For me it's definitely easier to run a node locally than in, e.g., China, where I wouldn't have any idea where to start. So your "definitely" is definitely wrong.
Local as in your machine physically under your control, on an internet connection to your name.
 

satoshis_sockpuppet

Active Member
"user=node=miner"
vs.
SPV has nothing to do with it.
Hmhm... :cautious:

No, it certainly doesn't work the same way.
I guess you are aware of Satoshi's "server farm" comments and that everybody who could add 1 and 1 knew that hardware would be specialized (which is good).
The way Bitcoin worked never changed for the user.

Pretty sure they played with it, but it didn't make sense for any of their prototypes.

It may also be the conclusion of this experiment. Who knows. Depends on WHAT THE MODEL IS AND WHAT THE GOALS ARE.
The goal is to see if the Bitcoin described in the Bitcoin whitepaper works with up to 1 GB blocks. What's so hard to understand about that. Read the fucking whitepaper if you are interested in the model lol.

Local as in your machine physically under your control, on an internet connection to your name.
Yeah, you will see the specs of the used connections and hardware. It is not so hard to see if that will work at your place and for what cost...
 

chrism82

New Member
You're reading into user=node=miner something I didn't mean. User=node=miner was simply the mode of operation when Bitcoin started and the whitepaper implies it as well several times. The fact that SPV mode is outlined doesn't change that. It only means people are expected to be able to generate transactions without the blockchain, and to check the validity of transactions via trusted third parties (as many people do today). There's no hidden agenda there.
 

omersiar

New Member
1 CPU 1 vote was the original idea and that quickly evolved
The original idea was not 1 physical CPU = 1 vote; you should not get confused by that. Specialized server farms were already foreseen, and people (businesses, individuals) have been adding more CPU power to the network day by day since day 0. By CPU power we mean hash calculations, which can be performed on a general-purpose PC CPU, on a GPU, or on an ASIC. As you can see, there has been no change to the "1 CPU = 1 Vote" idea; it is still the same.

By adding an ASIC to the network, you are contributing to the security of Bitcoin; for 28,734,861 MH/s of CPU power you will have exactly 28,734,861 votes. As of 2017-09-05 the network hash rate is 7,682,319,667 GH/s. Your voting power is proportional to your CPU power, not to the number of physical CPUs you have.
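Putting those two figures in the same unit (a quick illustrative calculation using only the numbers quoted above):

# Share of total hash rate for the example miner above (figures as quoted).
miner_mhs = 28_734_861            # example miner, in MH/s
network_ghs = 7_682_319_667       # network hash rate on 2017-09-05, in GH/s

share = (miner_mhs / 1_000) / network_ghs          # convert MH/s to GH/s, then divide
print(f"share of hash power:  {share:.2e}")        # ~3.7e-06
print(f"expected blocks/day:  {share * 144:.4f}")  # ~0.0005

In other words, that miner holds roughly 0.0004% of the vote and would expect to find a block about once every five years.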

No, it certainly doesn't work the same way. Many milestones have changed the reality of the way it works. GPU mining changed the game. ASIC mining changed the game. Even pools changed the game.
If by "game" you mean the market value of a satoshi, you are right: GPU mining changed the game (the value of a single satoshi), but it did not change how Bitcoin works.

Let's see what the creator said:

http://satoshi.nakamotoinstitute.org/emails/cryptography/2/#selection-67.0-79.54

Long before the network gets anywhere near as large as that, it would be safe for users to use Simplified Payment Verification (section 8) to check for double spending, which only requires having the chain of block headers, or about 12KB per day. Only people trying to create new coins would need to run network nodes. At first, most users would run network nodes, but as the network grows beyond a certain point, it would be left more and more to specialists with server farms of specialized hardware.
Several things have been outdated by the circumstances. The reality of Bitcoin crushed this model over time, there's no denying that.
Please tell us what is outdated. The creator's quote is from 2008 and still makes sense, but your considerations on nodes, on votes and on game theory do not.
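As an aside, the "about 12KB per day" figure in that quote is easy to verify from the protocol constants (80-byte block headers, ~144 blocks per day):

# Verifying the "about 12KB per day" SPV header figure.
HEADER_BYTES = 80            # size of a Bitcoin block header
BLOCKS_PER_DAY = 24 * 6      # ~144 blocks at the 10-minute target interval

kb_per_day = HEADER_BYTES * BLOCKS_PER_DAY / 1024
print(f"~{kb_per_day:.2f} KB of headers per day")   # ~11.25 KB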
 

chrism82

New Member
You're quoting selectively in the middle of sentences, so I'm not going to bother. It's also tangential to the subject of the thread (the last post).
 

omersiar

New Member
Sorry, I only saw your last post right after sending my reply. I hope you understand that the Bitcoin protocol was designed to be future-proof. The original idea can scale beyond Visa levels, there will always be more than one miner, and you will not have to operate a full node just to transact. You can even calculate a transaction on a piece of paper and then broadcast it to the network from your wrist watch.

https://bitcointalk.org/index.php?topic=1391350.0
 

AdrianX

Well-Known Member
The only way Bitcoin can scale to beat centralized systems is by "off chain" transactions and settle them later.
@bitPico the legacy banking system is free to build a system on top of whatever settlement system it wants.

If your projection is true, they will. You provide no reason to limit on-chain scaling, nor a reason to change the incentive rules and force bitcoin users off the Bitcoin network onto layer-2 networks.

If your concerns about censorship are correct, your post on r/bitcoin should preserve this historic record for us to look back on. If you did post it, please let me know; I couldn't find it but would like to up-vote it.
 

satoshis_sockpuppet

Active Member
User=node=miner was simply the mode of operation when Bitcoin started
Naturally.

the whitepaper implies it as well several times.
Definitely not.

The fact that SPV mode is outlined doesn't change that.
You are too stupid to understand the text Satoshi has written, and you ignore the zillion emails and forum posts where he made it extremely clear that the goal isn't for Joe Doe to store the blockchain and/or mine blocks.

If you disagree with that you are either completely retarded or a troll. Sorry, but there aren't two ways of understanding that.

The people diverging from the original idea are the fucking idiots who think everybody needs to run a "full node", it's a perverted fetish for dickheads like luke-jr, not for thinking people.

This doesn't have to be discussed over and over again. The small blockers are wrong. Period.
 

jtoomim

Active Member
@Peter R : I might be interested in participating in this. I could add a physical node in Moses Lake, WA, USA. I can put in an SSD for the chainstate (240 GB, or larger upon request) plus an HDD for the unpruned blocks (as many TB as it takes). Depending on how long the experiment is to run, I could either have it share our office's 100 Mbps symmetric fiber line or I could get a new (dedicated) line.

I should warn you, I've had bad experiences in the past with VPSs from Contabo. It appears that they use spinning HDDs for most of their storage, and they overbook their VPSs with respect to storage bandwidth, so IO will almost certainly be the bottleneck. In the past, I've had it take 30 seconds to run a simple command like ls via ssh. Utterly ridiculous.

We had good results from Linode. Digital Ocean's performance was okay, but not up to Linode's specs. Aliyun was also decent except for the issue with crossing the GFW. That wasn't Aliyun's fault, though.

Your node in Beijing will not keep up very well unless you're using a UDP-based relay method. I'm not sure what the state of the art in BU is these days -- do you have something equivalent to Corallo's FIBRE yet? (UDP with forward error correction and optimistic forwarding of packets before the full block has been transferred.) Xthin might work well enough with 1 MB blocks because it gets the block down to a handful of TCP packets, but once the Xthin message is a couple of megabytes in size you're going to have severe problems unless you are using UDP with FEC.
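To make the packet-count point concrete, here is a rough sketch; the 8 bytes of short hash per transaction for an Xthin-style message and the ~1,460 bytes of payload per TCP segment are assumptions for illustration, not figures taken from the BU implementation:

# Rough TCP packet counts for a thin-block message (illustrative assumptions).
SHORT_ID_BYTES = 8            # assumed bytes of truncated tx hash per transaction
TCP_PAYLOAD_BYTES = 1_460     # assumed usable payload per TCP segment

def thin_block_packets(num_txs: int) -> int:
    msg_bytes = num_txs * SHORT_ID_BYTES
    return -(-msg_bytes // TCP_PAYLOAD_BYTES)     # ceiling division

for n in (2_500, 1_800_000):  # ~full 1 MB block vs. ~450 MB block
    print(f"{n:>9,} txs -> ~{thin_block_packets(n):,} TCP packets")

About a dozen packets is easy for TCP; roughly ten thousand packets means a single loss event stalls delivery of the whole message, which is exactly the case UDP with forward error correction is meant to handle.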

UTXO lookup is going to be a big bottleneck. Disk IO performance will probably be as important as network connectivity and more important than CPU.
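A quick illustration of why disk IO dominates (the inputs-per-transaction figure and the IOPS numbers are assumptions for the sake of the example, not measurements):

# Why UTXO lookups stress disk IO at high throughput (illustrative assumptions).
TPS = 3_000                   # target sustained throughput
INPUTS_PER_TX = 2             # assumed average inputs per transaction

lookups_per_s = TPS * INPUTS_PER_TX
print(f"~{lookups_per_s:,} UTXO reads/s if every lookup misses cache")  # ~6,000

# Order-of-magnitude random-read capability:
#   spinning HDD : ~100-200 IOPS  -> far short of 6,000
#   SATA SSD     : ~50,000+ IOPS  -> comfortable headroom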

Is there a chat group or a central point of contact for this project?
 