It's a very interesting post, thanks for writing it. The concept of defining goals and systematically searching for methods to achieve them is needed, and to some extent lacking, in the current dominant bitcoin development team.
I also highly admire the work you put into developing Bitcoin Classic. I'm optimistic that the research you do about scaling will help develop solutions to overcome current restrictions, and I also agree that a more systematic, goal- and demand-driven approach is what Bitcoin development requires. But ... your post misses some essential arguments of a large number of small-block advocates, and with them much of the discussion we have had for years. So I don't think you will get small blockers to agree, and at this stage of the community split that is what is needed to achieve anything at all.
So ...
1.) "mid-range hardware"
"Last week forum.bitcoin.com published a video about time it takes to download, fully validate and check 7 years, or 420000 blocks of Bitcoin history (from day one of Bitcoin). This is 75GB of data which took 6 hours and 50 minutes to fully validate on mid-range hardware. It wasn't cheap hardware, but it was certainly not top-of-the-line or server hardware. In other words, it is a good baseline."
I bought a new CPU not more than 3 months ago, and I know nobody who (privately) uses a better CPU than mine. I needed more than 12 hours, I think something like 18 hours. Getting 6h50 today requires, in most places on earth, an extraordinarily fast CPU, not mid-range hardware.
Today it's already impossible to set up a node on an old laptop, and the sync is one of the most intensive memory tests a system can be put through (an extremely tiny defect in the memory is enough to kill the verification process at some point while validating the 100 million transactions, which is why I was completely unable to set up a node on my old system, even though it was fine for every other everyday task).
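Just to put the disputed numbers side by side, here is a back-of-the-envelope comparison of the sustained validation rates implied by the video's claim and by my own run (the 18 hours is my rough recollection, as stated above):

```python
# Implied sustained validation throughput for 75 GB of chain history.
# The 6h50 figure is from the quoted video; the ~18 h figure is my own
# run on a recently bought consumer CPU.
gb = 75
claimed_hours = 6 + 50 / 60   # 6h50m
my_hours = 18

claimed_rate = gb * 1024 / (claimed_hours * 3600)   # MB/s
my_rate = gb * 1024 / (my_hours * 3600)
print(f"claimed: {claimed_rate:.2f} MB/s, observed: {my_rate:.2f} MB/s")
# → claimed: 3.12 MB/s, observed: 1.19 MB/s
```

A factor of roughly 2.6 between the two rates, which is why I don't think the cited machine counts as "mid-range".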
2.) What needs work?
You say your goal is to carry 50 million transactions a day. A full blockchain download at this time means downloading and verifying 100 million transactions. At 50 million transactions a day, the chain grows by another 100 million transactions every two days, so even the "mid-range hardware" would need an additional 6h50 of initial sync for every two days of history. After one year you'd need something like 50 days to set up your node.
If you have a design idea to solve this problem, something like selective validation or checkpoints, you should share and discuss it, because this could be a highly needed but also controversial subject. In fact I think node setup is the most critical part of scaling.
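The "50 days" figure above can be checked with a short calculation. The assumptions are the ones already stated: the chain is ~100 million transactions today, it validates in 6h50 on the cited machine, validation time scales linearly with transaction count, and the network carries 50 million transactions a day:

```python
# Back-of-the-envelope growth of initial-sync time at 50M tx/day.
# Assumes validation time scales linearly with transaction count,
# calibrated on the quoted baseline: 100M tx in 6h50.
baseline_tx = 100e6
baseline_hours = 6 + 50 / 60          # 6h50m
hours_per_tx = baseline_hours / baseline_tx

tx_per_day = 50e6
chain_after_year = baseline_tx + 365 * tx_per_day
sync_hours = chain_after_year * hours_per_tx
print(f"chain after one year: {chain_after_year / 1e9:.2f}B tx")
print(f"initial sync: {sync_hours:.0f} h = {sync_hours / 24:.1f} days")
# → initial sync: 1254 h = 52.2 days
```

So "something like 50 days" is not an exaggeration; it falls straight out of the stated goal.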
3.) typical home node
"This preserves Bitcoins greatest assets, you don't have to trust banks or governments. People trusting their local church or football club is much more normal and common to do."
Hm. I don't like discussions about what bitcoin's "greatest asset" is, since everybody has a different idea and nobody can prove anything. However, I guess many people thought it was that you have to trust NO ONE. Saying people have to trust their local church is somehow ... strange ...
4.) Bandwidth
"The last of the big components for our home node is Internet connectivity. To reach our goal we need to be able to download 50 million transactions at about 300 bytes each over a 24 hours period ... Ideally we go 5 times as fast to make sure that the node can 'catch up' if it were offline for some time. We may want to add some overhead for other parts as well. But 1.39Mbit minimum is available practically everywhere today. 2Gb which is 1440 times faster than what we need to reach our plan in 5 years is available in countries like Japan today."
I am aware that in the current implementation of the bitcoin protocol we have a lot of redundancies that waste bandwidth. But 1.39 Mbit for 50 million tx/day? This is based on wrong assumptions, even if you use thin blocks (I think the BU devs can confirm this).

1. A node does not only download, it also uploads. Upload capacity is usually scarcer, so it is the bottleneck.
2. For the network to work efficiently, every node should share its data with several other nodes, usually 8+, and it should do this even during transaction peaks, within a short frame of time.
3. If a node misses operation time, it has to download archival transactions and new UTXOs besides the ongoing transaction flow.
4. If other nodes have missed operation time, your node has to provide old blocks and transactions (more upload).
It is possible to imagine a bitcoin-like system that doesn't have these problems, maybe by finding a more efficient method to propagate transactions. But this is not the Bitcoin we know.
5.) Latency
If I read your post correctly, you didn't spend a section on latency, which is, according to the only person on earth who wrote his PhD about bitcoin scalability, Dr. Christian Decker, THE major restricting factor on scalability.
Please, don't misunderstand this. I read your post, I read it with delight, but I was not able not to talk about these issues. I'm curious about your answers, and I seriously hope your post and the research you do will result in great solutions to improve bitcoin scalability.