@molecular : I see it like this:
8MB, filled with whatever, is not a short- or long-term problem for any reasonable node today. Of course, recent traffic patterns don't really look organic, but so what. I also think 8MB is plenty for the next couple of months at least.
At $3k/BCH, a satoshi is worth about 30 microdollars. That means a transaction paying 1 sat/B costs about 30 µ$ × 300 B = 9 millidollars, i.e. roughly 0.9 cents. I calculated earlier that the cost of validating/storing a transaction across a thousand nodes is on the order of tens of microdollars, so 1 sat/B is still plenty of money if it flows into the right hands (which is not really happening yet, but I think that's a temporary thing, depending on further technology as discussed below).
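A quick sanity check of that arithmetic (the 300-byte transaction size is my assumption for a typical-ish transaction):

```python
price_usd_per_bch = 3_000            # assumed BCH price
sats_per_bch = 100_000_000
usd_per_sat = price_usd_per_bch / sats_per_bch  # 3e-05, i.e. 30 microdollars

tx_size_bytes = 300                  # rough size of an ordinary transaction
fee_rate_sat_per_byte = 1
fee_usd = tx_size_bytes * fee_rate_sat_per_byte * usd_per_sat
print(f"fee at 1 sat/B: ${fee_usd:.4f}")  # $0.0090, i.e. ~0.9 cents
```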
If they are attackers who want to drive fees up, then even forcing fees to just 5 ct/txn (which I find a reasonable figure for transacting on the network, though I'm aware others disagree) means roughly $200,000/day of fees the attacker has to pay: full 8MB blocks hold about 27,000 transactions at 300B each, which comes to about 3.8 million transactions per day. That's quite a bit of money. And if you pay that much money to 'attack' BCH, that also means you see it as a viable threat. I am not sure this wouldn't heavily backfire.
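The same back-of-the-envelope in code (block size, transaction size, and the 5 ct target are the assumptions from above):

```python
block_size_bytes = 8_000_000
tx_size_bytes = 300
blocks_per_day = 144                 # one block per ~10 minutes

txs_per_day = block_size_bytes // tx_size_bytes * blocks_per_day
attack_cost_usd = txs_per_day * 0.05  # 5 cents per transaction
print(f"{txs_per_day:,} tx/day -> ${attack_cost_usd:,.0f}/day")
# ~3.8M tx/day -> ~$192,000/day
```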
Anyway, I do think that just lifting the cap completely, without anything else, will likely expose some nodes' misconfigurations (minrelayfee too low, mempool size limit too high) and might lead to some unfortunate crashes etc. For example, I would not accept <1 sat/B on any node I'd run, but it looks like the JoHoe mempool nodes accept free transactions as well.
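For reference, this is the kind of setting I mean, in bitcoin.conf terms (option names as in Bitcoin Core / Bitcoin ABC; exact defaults may differ by implementation and version):

```
# Refuse to relay/accept transactions below ~1 sat/B
# (minrelaytxfee is denominated in coins per kB, so 0.00001 ~= 1 sat/B).
minrelaytxfee=0.00001

# Cap mempool memory (in MB) so a flood of cheap transactions
# cannot exhaust the node's RAM.
maxmempool=300
```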
Also, we still do not have the ability to start a node from a committed UTXO snapshot, and thus we're in a situation where we fought Core's (wrong) ideas for a long time but haven't yet had the time to implement what I think will make the blocksize problem a complete non-issue.
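To make concrete what "committed UTXO snapshot" means here (an illustrative sketch, not any deployed scheme): miners would commit to a digest of the current UTXO set, e.g. in the coinbase, and a fresh node could then download a snapshot from anyone and verify it against that digest instead of replaying all history. A minimal version of such a digest:

```python
import hashlib

def utxo_snapshot_digest(utxos):
    """Hash a canonical serialization of the whole UTXO set.

    `utxos` maps (txid, vout) -> (amount_sats, script_pubkey).
    Sorting makes the digest independent of iteration order.
    Toy scheme: a real one would want an incrementally updatable
    structure so miners don't rehash the whole set every block.
    """
    h = hashlib.sha256()
    for (txid, vout), (amount, script) in sorted(utxos.items()):
        h.update(bytes.fromhex(txid))
        h.update(vout.to_bytes(4, "little"))
        h.update(amount.to_bytes(8, "little"))
        h.update(script)
    return h.hexdigest()
```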
Personally, and depending on future UTXO set sizes and usage patterns, I'd also be OK with UTXOs being dropped and just the root hashes of UTXO merkle trees being stored, with users having to provide merkle branches down those trees to prove their transactions valid.
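A sketch of what verifying such a user-supplied proof could look like under that model (hypothetical helper, not any existing node's API): the node keeps only the merkle root, the spender supplies the leaf plus the sibling hashes up the tree, and the node recomputes the root.

```python
import hashlib

def sha256d(b: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def verify_utxo_branch(leaf: bytes, branch: list, root: bytes) -> bool:
    """Recompute the merkle root from a leaf and its sibling path.

    `branch` is a list of (sibling_hash, side) pairs from leaf to root,
    where side is "L" if the sibling sits to the left of the running hash.
    """
    h = sha256d(leaf)
    for sibling, side in branch:
        pair = sibling + h if side == "L" else h + sibling
        h = sha256d(pair)
    return h == root
```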
Others might be OK with the probabilistic approach to mining that Gavin proposed, using cuckoo filters. I don't like that kind of randomness in that part of the system, and the cuckoo filters would still have a size proportional to the UTXO set, all other things being equal. But at least this shows me that there's more than one viable path on this problem as well.
If it even becomes a problem.
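For those unfamiliar with the structure (this is a generic toy cuckoo filter, not Gavin's actual design): each element is reduced to a short fingerprint stored in one of two candidate buckets, so lookups are fast and compact but false positives are possible, and storage still grows linearly with the number of entries, which is the point I'm making above.

```python
import hashlib, random

class CuckooFilter:
    """Toy cuckoo filter: 1-byte fingerprints, two candidate buckets each.

    Capacity is num_buckets * bucket_size fingerprints, i.e. memory
    still grows linearly with the number of stored items.
    """
    def __init__(self, num_buckets=1 << 16, bucket_size=4, max_kicks=500):
        assert num_buckets & (num_buckets - 1) == 0, "power of two required"
        self.n, self.bucket_size, self.max_kicks = num_buckets, bucket_size, max_kicks
        self.buckets = [[] for _ in range(num_buckets)]

    def _hash(self, data: bytes) -> int:
        return int.from_bytes(hashlib.sha256(data).digest()[:4], "big") % self.n

    def _fp(self, item: bytes) -> bytes:
        return hashlib.sha256(b"fp" + item).digest()[:1]

    def insert(self, item: bytes) -> bool:
        fp = self._fp(item)
        i1 = self._hash(item)
        i2 = i1 ^ self._hash(fp)          # partial-key cuckoo hashing
        for i in (i1, i2):
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        i = random.choice((i1, i2))       # both full: evict until something fits
        for _ in range(self.max_kicks):
            j = random.randrange(self.bucket_size)
            fp, self.buckets[i][j] = self.buckets[i][j], fp
            i ^= self._hash(fp)           # evicted fp's alternate bucket
            if len(self.buckets[i]) < self.bucket_size:
                self.buckets[i].append(fp)
                return True
        return False                      # filter effectively full

    def contains(self, item: bytes) -> bool:
        fp = self._fp(item)
        i1 = self._hash(item)
        return fp in self.buckets[i1] or fp in self.buckets[i1 ^ self._hash(fp)]
```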
Regarding UTXO coalescing, it is an interesting question whether nodes/miners making acceptance of your transaction dependent upon ancillary data that you have to provide is or is not a change of the consensus-critical rule set. I would argue it is not: I could go and start implementing such a node today, with my own little scheme to drop UTXOs and declare "these are old".
And that means that, depending on market conditions and on whether mining nodes feel like storing everyone's UTXOs, this could create a market for archival nodes as well as shift the burden of long-term storage of valid UTXOs back to the users. Some really dislike this idea, but I must say I would be OK with it if it means I have to save a couple of merkle branches for my long-term UTXOs every decade or so.
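As a sketch of what such a node-local policy could look like (entirely hypothetical, all names mine): the node keeps only UTXOs younger than some cutoff, and anything older must come with a merkle branch against a root the node retained, reusing verify_utxo_branch from above.

```python
PRUNE_DEPTH = 210_000  # blocks; arbitrary cutoff of roughly four years (assumption)

def accept_spend(node, outpoint, proof=None):
    """Node-local policy, not a consensus rule: recent UTXOs are looked
    up directly; pruned ones must be proven against a stored root."""
    utxo = node.utxo_db.get(outpoint)
    if utxo is not None:
        return True                       # still in the live set
    if proof is None:
        return False                      # old output, proof required
    root = node.roots_by_epoch[proof.epoch]
    return verify_utxo_branch(proof.leaf, proof.branch, root)
```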
On blocksize, what I would prefer right now, for political/marketing/being-ready reasons, is to implement a dynamic or miner-voted blocksize limit ASAP. I am not afraid that the miners would vote for insane blocksizes now, nor am I afraid that they'll try to milk the users the way they milk the BTC users now with an insanely small blocksize on the other end. After all, the reason the miners can take those fees is that a large fraction of the latter cohort seems to want that (or to have been socially engineered into wanting it).
A dynamic limit or miner voting would be a solution that would also work, though perhaps not optimally, under all conditions in the long term, even in the IMO unlikely scenario that outside forces attempt and succeed at forcefully ossifying BCH like they did BTC.
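To illustrate one shape miner voting could take (a sketch in the spirit of BIP100-style proposals, not a concrete spec): miners publish a desired size in their coinbase, and every adjustment period the limit moves to the median vote, clamped so it can't jump too fast. The period length and clamping factor below are illustrative assumptions.

```python
from statistics import median

def next_block_size_limit(votes, current_limit, max_step=2.0):
    """Median of miners' coinbase votes over the last period,
    clamped to at most double / halve the current limit per period."""
    target = median(votes)               # robust against outlier votes
    lo, hi = current_limit / max_step, current_limit * max_step
    return int(min(max(target, lo), hi))

# e.g. a 2016-block period with most miners voting to stay at 8 MB:
votes = [8_000_000] * 1500 + [32_000_000] * 516
print(next_block_size_limit(votes, 8_000_000))  # 8000000
```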