@Yoghurt114 Thanks for the constructive post.

> Each of the world's volunteer auditors cannot audit all the world's coffee transactions, agreed.
I'm arguing that they should [be able to]; we know of no way to build a system that drops the requirement of full auditability without compromising integrity.
> Only one direction has been explored to solve this problem, namely: LIMITING the capacity of the entire system.
Yes.
> I'm not sure you got the meaning of my earlier post. It suggested another direction: developing a scalable auditing system.
So the direction you propose exploring further is anything that scales more favorably than O(n), i.e. better than fully validating everything.
Bear in mind that 'fully validate everything' inherently scales as O(n) per validator, so it can pretty reasonably be assumed that an all-encompassing solution here is impossible without completely rethinking things.
But there are a ton of proposals that try to address this; none of them appear to work without compromising at least one of the security pillars we currently have.
- UTXO commitments, where miners commit to the composition of the UTXO set in each block, would allow nodes to fetch the UTXO set (which, really, is all that's needed to fully validate going forward). The compromise is that you have to trust that miners were not dishonest at any point in the past (which is reasonably safe, because presumably many auditors existed in the past to check up on miners). In any case, UTXO commitments have other advantages for SPV wallets (they allow them to determine whether an output exists without checking the full branch of ownership) and would help full nodes get up and running faster while they're validating the past. They should be implemented regardless, for those reasons alone. (How, specifically, is still an open question; a toy sketch of the basic idea follows this list.)
- zkSNARKs, an extremely sketchy branch of mathematics that I can't quite wrap my head around (despite trying). But they would allow you to retrieve a proof of validation of the full blockchain in O(1) time. If I understand correctly, the conditions under which this proof is constructed need to be honest, which, again, requires you to trust that they were. Similar compromise to UTXO commitments. (The shape of the interface is sketched after this list.)
- Sharding, similar in concept to Treechains; this is essentially a full p2p and conceptual rewrite. The idea in a sentence is that everyone doesn't validate everything: everyone validates a subset. Further, miners don't even mine everything; they mine a subset. Validation complexity under this model can be thought of as O(sqrt(n)), so that's great (a back-of-envelope sketch follows this list). As a downside, it's a full rewrite, unfinished (in concept), untested, unproven, and tends to depend on some clever economics to solve edge-case problems such as the way fees work and what happens with reorganisations/forks at the tip. (As I'm writing this, I discover I need to do more research on this concept - so thanks.)
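To make the UTXO-commitment idea a bit more concrete, here's a minimal sketch in Python. It is only illustrative: the hashing rules, leaf serialization and all the names are my own assumptions, not any proposed consensus format.

```python
import hashlib

def h(data: bytes) -> bytes:
    """Double SHA-256, as used elsewhere in Bitcoin."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root(leaves: list[bytes]) -> bytes:
    """Toy Merkle tree. A real proposal would pin down leaf ordering,
    domain separation and the odd-node rule."""
    if not leaves:
        return h(b"")
    level = [h(leaf) for leaf in leaves]
    while len(level) > 1:
        if len(level) % 2:              # duplicate the last node on odd levels
            level.append(level[-1])
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def utxo_commitment(utxos: dict) -> bytes:
    """Commit to the whole UTXO set: one leaf per (txid, vout) -> scriptPubKey
    entry, sorted so every node derives the same root."""
    leaves = [txid + vout.to_bytes(4, "little") + spk
              for (txid, vout), spk in sorted(utxos.items())]
    return merkle_root(leaves)

def check_fetched_set(fetched_utxos: dict, committed_root: bytes) -> bool:
    """A bootstrapping node fetches a claimed UTXO set from peers and checks
    it against the root a miner committed to in a recent block."""
    return utxo_commitment(fetched_utxos) == committed_root
```

The trust compromise shows up in `check_fetched_set`: a node that bootstraps this way never replays history; it takes the committed root at face value.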
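For zkSNARKs the mathematics is beyond me, but the interface that matters for scalability can at least be sketched. Everything below is hypothetical stubs - these names are mine and refer to no real SNARK library - the point is only where the O(n) work and the trust live:

```python
# Hypothetical interface only; bodies deliberately left as stubs.

def trusted_setup(validation_circuit):
    """One-time ceremony. If its secret randomness ('toxic waste') leaks,
    false proofs become possible -- this is where the trust compromise lives."""
    ...

def prove(proving_key, full_blockchain):
    """The prover does the O(n) work of validating every block once and
    emits a small, constant-size proof."""
    ...

def verify(verifying_key, chain_tip, proof) -> bool:
    """Anyone checks the validity of the entire chain behind chain_tip
    in O(1) time, without downloading or replaying it."""
    ...
```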
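And a back-of-envelope illustration of the O(sqrt(n)) sharding claim, again with made-up names: split n transactions into roughly sqrt(n) shards of sqrt(n) transactions each; a validator fully checks its own shard and only one commitment per other shard.

```python
import hashlib
import math

def shard_of(txid: bytes, num_shards: int) -> int:
    """Toy deterministic shard assignment by hashing the transaction id.
    (Real proposals derive assignment from consensus rules.)"""
    return int.from_bytes(hashlib.sha256(txid).digest()[:8], "big") % num_shards

def per_validator_work(n_txs: int) -> int:
    """With ~sqrt(n) shards of ~sqrt(n) transactions each, a validator fully
    validates its own shard and checks one commitment per other shard."""
    num_shards = math.isqrt(n_txs)     # ~sqrt(n) shards
    own_shard = num_shards             # ~sqrt(n) transactions validated in full
    cross_checks = num_shards          # one commitment check per shard
    return own_shard + cross_checks    # ~2*sqrt(n), i.e. O(sqrt(n))

print(per_validator_work(10**9))       # ~63k checks instead of a billion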
[edited the following into post]
- Bitcoin-NG. This isn't a 'fully validate everything' scalability solution, but it's an interesting proposal nonetheless. Also a full p2p rewrite. PoW is repurposed as a 'leader election': miners still mine/do work like they do now, but in this model, when a miner finds a block, everyone essentially agrees that until the next block is found, this miner is the sole entity allowed to add transactions to the block chain, in so-called microblocks it broadcasts every 10 seconds or so. It primarily solves the propagation problem and makes for more streamlined network activity. (This gives roughly zero propagation impedance and would allow full freedom in block composition - well, not really, because microblocks still need to propagate, but impedance can be arbitrarily lowered under this model; see the sketch below.) But it doesn't work (as proposed), because the incentives are skewed in favor of paying a miner out-of-band, in which case there is no incentive to maintain the longest/best chain.
[/edit]
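A toy model of Bitcoin-NG's two block types, just to show the structure. All names are mine; for brevity the leader's 'signature' is an HMAC under a key only the leader holds, whereas the real design uses public-key signatures so anyone can verify.

```python
import hashlib
import hmac
from dataclasses import dataclass

def sign(leader_key: bytes, payload: bytes) -> bytes:
    return hmac.new(leader_key, payload, hashlib.sha256).digest()

@dataclass
class KeyBlock:
    prev_hash: bytes       # tip of the chain (key block or microblock)
    leader_id: bytes       # the PoW winner; sole leader until the next key block
    nonce: int             # the PoW itself is omitted in this toy

@dataclass
class MicroBlock:
    prev_hash: bytes
    txs: list
    signature: bytes

def make_microblock(leader_key: bytes, prev_hash: bytes, txs: list) -> MicroBlock:
    """Between key blocks, the elected leader alone extends the chain,
    broadcasting a signed microblock every ~10 seconds."""
    payload = prev_hash + b"".join(txs)
    return MicroBlock(prev_hash, txs, sign(leader_key, payload))

def verify_microblock(mb: MicroBlock, leader_key: bytes) -> bool:
    """The core NG rule: a microblock is valid iff the current leader signed
    it; anything else is ignored until the next key block's leader election."""
    payload = mb.prev_hash + b"".join(mb.txs)
    return hmac.compare_digest(mb.signature, sign(leader_key, payload))
```

Because the leader alone serializes transactions between key blocks, transaction inclusion no longer races the next PoW solution - which is where the propagation win comes from.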
Only UTXO commitments are anywhere near a reality; the compromise there is well understood, and whether or not it's an acceptable one remains to be seen. I don't think it'd be wise to scale up hugely under the assumption that it is.
In any case, until such time as scalability solutions exist and actually work, we need to live with the current model and its inherent problems, not ignore them.