Everybody bashes Greg Maxwell for his misunderstanding of economics, but check out this statement, which falls squarely within his "core competency", his bread and butter if you will: crypto and computer science:
He's putting the cart before the horse... if you must send N pieces of unrelated data, then from a theoretical standpoint that takes more information than sending N-M pieces (where 0 < M < N, of course). I think that is the "theoretical perspective" he is arguing from.
However, that is not the situation here. The information he wants removed is not pieces of unrelated data but information about how the bytes in the data stream are related. If that information is removed, the predictability of the data stream is reduced, and that must necessarily result in a larger minimum data size.
Think about it from an information compression perspective. If you have a white image it can be compressed into a very small file, because there is a simple relationship between all the data (every pixel is the same as the first one). Images of trees, people, etc. are still compressible, but less so than a white image, depending on how predictable their content is. However, if you have an image of random colors it cannot be compressed at all -- the data stream size cannot be reduced.
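Here's a quick sketch of what I mean in Python (my own toy example, nothing from any actual client code; the buffer sizes and variable names are made up): a buffer where every byte is identical compresses to almost nothing, while a random buffer actually comes out slightly bigger than it went in.

```python
import os
import zlib

SIZE = 1 << 20  # 1 MiB of raw "pixel" data

white = b"\xff" * SIZE            # every byte identical, like the white image
random_pixels = os.urandom(SIZE)  # no structure at all

# The uniform buffer shrinks to roughly a kilobyte; the random buffer
# comes back slightly larger than the input (compression overhead).
print(len(zlib.compress(white)))
print(len(zlib.compress(random_pixels)))
```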
Now let's create a reversible mathematical function that transforms the white image (or any image) into pseudo-random colors. If I do that, the relationship between all the pixels will be obfuscated, and the resulting image file will be incompressible -- that is, its compressed size will be larger than that of the original. The same should be true for Bitcoin transaction streams. Any function that transforms the data in such a way as to obfuscate its inter-relationships must necessarily produce output that is less compressible than the original data.
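To make that concrete, here's a toy reversible transform, again just my own sketch: it XORs the data with a SHA-256 counter-mode keystream (the function names and the key are invented for the example). Applying it twice recovers the original exactly, yet the transformed buffer no longer compresses at all.

```python
import hashlib
import zlib

def keystream(key: bytes, length: int) -> bytes:
    # SHA-256 in a simple counter mode, standing in for any
    # pseudo-random source used by a reversible transform.
    out = bytearray()
    counter = 0
    while len(out) < length:
        out += hashlib.sha256(key + counter.to_bytes(8, "big")).digest()
        counter += 1
    return bytes(out[:length])

def transform(data: bytes, key: bytes) -> bytes:
    # XOR with the keystream; applying the same transform twice
    # returns the original data, so no information is lost.
    ks = keystream(key, len(data))
    return bytes(a ^ b for a, b in zip(data, ks))

white = b"\xff" * (1 << 16)                # the "white image" again, 64 KiB
scrambled = transform(white, b"demo key")

assert transform(scrambled, b"demo key") == white  # fully reversible

print(len(zlib.compress(white)))      # tiny
print(len(zlib.compress(scrambled)))  # essentially the full 64 KiB
```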
I mean, it's basic cryptography. If I encrypt the message "11111111111111111111111111111" (obfuscating the message), I'll get an incompressible, random-looking result. One of the most basic and effective approaches in cryptanalysis is to analyse the data stream and find out how it is not random. The original Enigma cipher had a fatal flaw: it never allowed a letter to be encrypted to itself. That tiny piece of predictability was used to crack it (along with other data). Anyway, I digress; this obfuscation is why Bitcoin transactions and blocks don't compress well.
Anyway, I'm posting here, not there, because I don't have time for a pissing contest... and instructing the opposition is rarely valuable unless you think it will change their minds about the point under contention.