Agreed with @freetrader. Gavin was referring to the hard limit, which was 1MB at the time. 2147MB, essentially the signed 32-bit byte-count ceiling, is effectively unlimited for the software, since sustained capacity, as measured by the BCH network stress test, is 16-32MB.
We have moved on since the debate started. Even back then, we knew that network capacity was more than 1MB per 10 minutes. What should have been an easy job was increasing a simple constant in the software, yet this proved so difficult that the whole ledger had to be forked!
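For readers who never opened the code, the change being argued over really was of this shape. The snippet below is a minimal sketch only; the exact file, constant name, and surrounding checks differ between clients and versions, so the identifiers are illustrative rather than quoted from any codebase:

```cpp
// Minimal sketch, not quoted from any client: the shape of the old hard limit.
#include <cstdint>

// The consensus hard limit: a serialized block larger than this is invalid.
static const uint64_t MAX_BLOCK_SIZE = 1000000;  // 1MB, the value in question

// The "easy job" amounted to one edit, e.g. the 8MB value BCH launched with:
// static const uint64_t MAX_BLOCK_SIZE = 8000000;

// Every validating node enforces the limit with a check of roughly this form.
inline bool CheckBlockSize(uint64_t nSerializedBlockSize) {
    return nSerializedBlockSize <= MAX_BLOCK_SIZE;
}
```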
Now we have the opposite problem: the hard limit is above network capacity. The difficult job is safely making many different improvements to the software so it can handle the volume that the hard limit allows. This work is being done tirelessly in the background by people like @Peter Tschipper and @theZerg. It is work thousands of times more difficult than changing a "1" in the software, which your granny could do (excepting the grannies of the core devs).
We need to move our focus from the block hard limit to true scalability: parallelism (including sharding), optimisation, and many smaller techniques. This includes contributions such as Graphene, for which we have a BUIP to fund phase II, and evaluating CTOR/Merklix, where ABC is headed. True scalability is far more than naively changing a number and "getting out of the way of the users".
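To make one of these concrete: CTOR (canonical transaction ordering) simply requires that every transaction after the coinbase be sorted lexicographically by txid, so the block order becomes a pure function of the transaction set rather than something each miner chooses. A minimal sketch is below; the types and helper name are illustrative, not taken from any particular client:

```cpp
// Minimal sketch of canonical (lexicographic) transaction ordering, CTOR.
// Types and names are illustrative, not taken from any specific client.
#include <algorithm>
#include <array>
#include <vector>

using TxId = std::array<unsigned char, 32>;  // 32-byte transaction hash

struct Tx {
    TxId id;
    // ...other fields omitted for the sketch
};

// Sort every transaction except the coinbase (which must stay first) by txid.
// Any two nodes holding the same transaction set reconstruct the same order,
// which is what helps sharded/parallel block processing and set-reconciliation
// relay schemes such as Graphene, since the ordering no longer needs to be
// transmitted or validated topologically.
void ApplyCanonicalOrder(std::vector<Tx>& blockTxs) {
    if (blockTxs.size() <= 2) return;  // nothing to reorder
    std::sort(blockTxs.begin() + 1, blockTxs.end(),
              [](const Tx& a, const Tx& b) { return a.id < b.id; });
}
```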