In what year was it introduced?
Does anybody believe that, if it were being introduced now, a 1MB limit is what either of them would promote?
1MB was a lot bigger, relatively speaking, when 1 Mbps was the typical home connection. Now that connections are 10x faster, even in much of the world not usually considered advanced, the tradeoff decisions made for practical reasons in the PAST are no longer valid NOW.
This adherence to ancient history would be like all the chip and OS manufacturers sticking to the 640KB RAM limit, since that was what it was originally, and since it still implements a Turing machine it is "equivalent" to any other RAM limit. Better not to change it; maybe something will break.
Is it? Doesn't it require agreement on which weak block to append to? And how is it negotiated which block to append to? How can you append to a block if you (still) don't even know all the details of one or more of its transactions? The bitcoin9000 paper solves this problem by letting miners independently choose where to append their diff blocks. I guess they will most often choose to append to the longest diff block chain.
I hope I am not just creating unnecessary confusion. I am still trying to learn about the different weak block proposals.
Some sort of deterministic affinity metric can be used to determine which stronger block a weaker block should append to. Maybe the Hamming distance of the hashes, but anything deterministic would resolve such conflicts (a rough sketch is below). And if all the tx within a subblock are valid, then why allow any of them to be invalidated?
The indeterminacy of a tx's validity based on totally external factors is not a good thing.
A valid tx should stay valid and not be overridable by some external factor. Isn't that the philosophical argument against RBF?
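To make the affinity idea concrete, here is a rough C++ sketch (all names are made up, this is not taken from any of the actual weak block proposals): a weak block attaches to whichever candidate strong block has the smallest Hamming distance between hashes, with a deterministic tie-break so every node picks the same parent without any negotiation.

#include <array>
#include <bitset>
#include <cstdint>
#include <vector>

using BlockHash = std::array<uint8_t, 32>;

// Number of differing bits between two 256-bit hashes.
static int HammingDistance(const BlockHash& a, const BlockHash& b)
{
    int dist = 0;
    for (size_t i = 0; i < a.size(); ++i)
        dist += static_cast<int>(std::bitset<8>(a[i] ^ b[i]).count());
    return dist;
}

// Deterministically choose which candidate strong block a weak block
// should append to: smallest Hamming distance wins, ties broken by the
// lexicographically smaller hash, so every node reaches the same answer.
static size_t ChooseParent(const BlockHash& weak,
                           const std::vector<BlockHash>& candidates)
{
    size_t best = 0;
    for (size_t i = 1; i < candidates.size(); ++i) {
        const int di = HammingDistance(weak, candidates[i]);
        const int db = HammingDistance(weak, candidates[best]);
        if (di < db || (di == db && candidates[i] < candidates[best]))
            best = i;
    }
    return best;
}

The specific metric doesn't matter much; the point is only that any pure function of data every node already has removes the need for negotiation.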
If memory serves, for the first year or so bitcoin clients ran without the cap; the only limit was an implementation constraint, namely that a network message couldn't be larger than 32 MB.
Even with a network message limit (which is a good thing), it is possible to do a tx-level sync, so if this was the determining factor it strongly indicates the cap was an implementation shortcut to get a stable system, not anything fundamentally deeper. Using some magical historical implementation choice to justify never changing things is the luddite position, isn't it?
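For reference, and going from memory, the 32 MB figure is just the generic serialization sanity cap in serialize.h, not a consensus rule about blocks:

// In serialize.h (early clients and, I believe, still present today):
static const unsigned int MAX_SIZE = 0x02000000; // 32 MiB cap on any serialized object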
remove the limits
subchains, interleaves, thin blocks, encoded blocks, compressed blocks
these take time and effort as opposed to:
#define ARBITRARY_BLOCKSIZE 1000000 ->
#define ARBITRARY_BLOCKSIZE 10000000
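(Of course the constant isn't literally called ARBITRARY_BLOCKSIZE; if I remember right, the real one in the Core source is along the lines of

static const unsigned int MAX_BLOCK_SIZE = 1000000;

and the "big block" patch is nothing more than changing that number.)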
So if minimizing the dev time needed to increase tx capacity is the real constraint, so that the core devs can take long vacations, have island retreats, and spend all their time on politics and business issues, OK, do the above.
But if improving bitcoin is the goal, then we need to find the devs who are able to implement the scalable solutions (gee, where can we find such devs?) and get a working version onto testnet.