Gold collapsing. Bitcoin UP.

bluemoon

Active Member
Jan 15, 2016
215
966
jonny1000 said:
GMax doesn't see any reason not to support a hardfork to 2MB of non-witness data, after SegWit, as long as it is done appropriately. It now looks like Core may have this done in a few months. Is this sufficient for you guys now, or do you still insist on complaining and attacking?
So Core remains non-committal and drags its heels even now, yet it is somehow our fault?

Get real, Jonny. Start thinking outside your blinkers.
 

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
Hey - there is no dislike-button...
I know, the reality hurts. Who has dominated the marketplaces since their invention?
The warlords (state and church), together with their corrupt minions and accomplices at Monsanto, Goldman Sachs, Amazon, Axa, PwC, GM and the like.

Of course I prefer our Swiss market collectivism to the US/EU/Chinese/Saudi variant, but no market at all (self-sufficiency / anarchy) was still far better than becoming a caricature of a homo sapiens: a homo oeconomicus (slave) who has to produce surplus (tribute to the warlords) on a marketplace. Times gone by ...
 

albin

Active Member
Nov 8, 2015
931
4,008
Is it technically possible for a soft-forked extension block scheme to have no size limit criterion for block validity? Crufty for sure, but that could be an end run around the politics, advocating Unlimited in a way that can't be a FUD target for this "strong consensus" nonsense.
 

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797

jonny1000 said:
GMax doesn't see any reason not to support a hardfork to 2MB of non-witness data, after SegWit, as long as it is done appropriately. It now looks like Core may have this done in a few months. Is this sufficient for you guys now, or do you still insist on complaining and attacking?
That you are supporting the terror of that vandal is very sad.
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
If there is significant doubt as to which chain is "Bitcoin", then Bitcoin is a failed experiment
Failed experiment in whose eyes?

● Certainly not current holders, because their purchasing power is unaffected by any ledger copies resulting from two persistent chains.

● Certainly not prospective investors, since the worst case is that their investment options become "Bitcoin" and its minor offshoot, called Kitcoin or whatever (the chain that was defeated in fork futures trading). Nor is that a problem for current holders: whichever coin the new investors put money into, we all get richer by default, exactly the same as now and in exactly the same proportions as now.*

Here's where it might seem confusing if you have your eyes on the hairy CS details of the "chains" like a coder, rather than on the simpler ledger aspect like an economist or investor. If you look at the ledger (noting, importantly, that copying the ledger doesn't change the nature of sound money, because current holders' purchasing power is completely unaffected), the picture becomes clearer. No one really cares what is and isn't "Bitcoin." They care whether their purchasing power is preserved without their having to make investing judgments (i.e., pick the right altcoins). They thus care only that the World Wide Ledger is preserved.

And a property of ledgers is that they can be copied without affecting purchasing power. (Note the same is true of gold and even fiat money: the ONLY reason central bank inflation steals the purchasing power of the common man is that the new money is distributed disproportionately across the population; a magical "double every bill in everyone's pockets and every figure in every bank database" would be completely harmless "inflation," apart from the hassle of changing all the price labels and such.)

The Bitcoin experiment is to see if sound money can be preserved. A persistent chain split resulting in an effective ledger copy, even if both ledger copies retain significant value, preserves sound money. Completely. It does not make the experiment fail at all.

It is in fact the ultimate mechanism by which the experiment can succeed in overcoming all obstacles. Bitcoin can fork and split as many times and in as many ways as the market allows, and the market will ensure that our sound money is preserved - without inflation (read: excess mining subsidy, no going over 21M) but also without the inflexibility that your Extreme Consensus view entails.

I saw you getting so pessimistic on reddit today about the politics of mining and the HK "dipshit" agreement. You lamented how fragile Bitcoin was for such an agreement to be necessary, hoping some success could somehow be eked out. It was unnecessary. Bitcoin is far more flexible and powerful - and more solid and immutable sound money - than you have yet imagined. You just have to get out of the trees of the CS aspects a little to see the forest of market reality.

*A dollar newly invested into Bitcoin or Kitcoin increases current hodlers' purchasing power the exact same amount as a dollar invested in Bitcoin now - do the math to see this for yourself.
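(A worked version of that math, with made-up round numbers: say you hold 1% of all coins and the market cap is $10B. A new $1 invested raises the cap, to a first approximation, by $1, so your 1% share gains $0.01. After a split the ledger is copied, so you hold 1% of Bitcoin and 1% of Kitcoin; a new $1 into either chain raises that chain's cap by $1 and your holdings by the same $0.01. Same gain, same proportions.)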
 

Erdogan

Active Member
Aug 30, 2015
476
855
jonny1000 said:
GMax doesn't see any reason not to support a hardfork to 2MB of non-witness data, after SegWit, as long as it is done appropriately. It now looks like Core may have this done in a few months. Is this sufficient for you guys now, or do you still insist on complaining and attacking?
So Core remains non-committal and drags its heels even now, yet it is somehow our fault?

Get real, Jonny. Start thinking outside your blinkers.
No, jonny1000, it is not sufficient.
 

Dusty

Active Member
Mar 14, 2016
362
1,172
- The nodes keep sending each other updated Bloom filters as they get new transactions in their mempool, say every 10 seconds (this frequency would have to be tuned).
Can heuristics be used to eliminate the bloom filter exchange? What heuristics work?
Since every new transaction hash is broadcast to every peer in the form of a Bitcoin INV message, a node should be able to update its picture of a peer's Bloom filter autonomously by checking the INVs that peer sends it: if a peer announces a certain tx hash via INV, it means the peer already knows that transaction, and hence its filter can be updated accordingly.
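In sketch form, the idea might look like the following (illustrative Python with a toy Bloom filter; none of this is actual BU code, and all the names are made up):

```python
import hashlib

class SimpleBloomFilter:
    """Toy Bloom filter: k hash functions over an m-bit array."""
    def __init__(self, m_bits=8192, k=4):
        self.m = m_bits
        self.k = k
        self.bits = bytearray(m_bits // 8)

    def _positions(self, item):
        for i in range(self.k):
            digest = hashlib.sha256(bytes([i]) + item).digest()
            yield int.from_bytes(digest[:8], "big") % self.m

    def add(self, item):
        for pos in self._positions(item):
            self.bits[pos // 8] |= 1 << (pos % 8)

    def __contains__(self, item):
        return all(self.bits[pos // 8] & (1 << (pos % 8))
                   for pos in self._positions(item))

class PeerView:
    """Our local estimate of which txs a peer already knows."""
    def __init__(self):
        self.known_txs = SimpleBloomFilter()

    def on_inv(self, tx_hash):
        # The peer announced this tx via INV, so it must have it:
        # record that locally instead of waiting for its filter.
        self.known_txs.add(tx_hash)

    def probably_has(self, tx_hash):
        return tx_hash in self.known_txs

# Usage: fold every INV a peer sends into our view of that peer.
peer = PeerView()
peer.on_inv(b"\xab" * 32)           # peer announced this tx hash
assert peer.probably_has(b"\xab" * 32)
```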
 

jonny1000

Active Member
Nov 11, 2015
380
101
Certainly not current holders, because their purchasing power is unaffected by any ledger copies resulting from two persistent chains.
In my view this is very naive. Most of the liquid capital is not following the nuances of this blocksize debate. To most people, Bitcoin is already a confusing, abstract and vague concept.

If we split into two and there is a significant dispute about which chain is Bitcoin, market participants may give up and divest. Potential new investors will be even more confused and will regard Bitcoin as a failed experiment. In my view, the combined value of all parts of the system would fall by at least an order of magnitude compared to the current system.
You lamented how fragile Bitcoin was for such an agreement to be necessary, hoping some success could somehow be eked out. It was unnecessary. Bitcoin is far more flexible and powerful - and more solid and immutable sound money - than you have yet imagined.
Let me try to put it this way, then.
Which attitude do you think better ensures long-term success: being paranoid about vulnerabilities and fragility, always looking to improve resilience, or being complacent and overconfident about the strengths of the system? Which side would you rather we lean towards? Which is healthier?

The truth is of course that we need balance. People like GMax have played a crucial role in identifying risks and advocating a cautious stance. Please respect that this is a valid and important part of the community.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Is it technically possible for a soft-forked extension block scheme to have no size limit criterion for block validity? Crufty for sure, but that could be an end run around the politics, advocating Unlimited in a way that can't be a FUD target for this "strong consensus" nonsense.
Yes, I posted how to do it on this forum about six months to a year ago...
 

jonny1000

Active Member
Nov 11, 2015
380
101
You just have to get out of the trees of the CS aspects a little to see the forest of market reality.
I do have a technical background, in mathematics and elliptic curves, which I am sure you will associate with my views on capacity and scaling in a negative way. It may surprise you, but I am actually a professional investor rather than a CS professional.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
We haven't presented our results on the number of bytes required to propagate an Xthin block, so I don't understand what Greg is talking about. We will present
@Peter R

I think gmax is referring to this post https://bitco.in/forum/threads/buip014-testing-a-bu-x-relay-network-for-miners-in-china.929/page-2#post-14335

and he's right: in the early stats we didn't take the Bloom filter size into account in the compression ratio calculation.

I don't know if that's still the case; maybe @Peter Tschipper could clarify. In any case, the data we gathered contains all the info needed to compute it correctly.

On a related note, the latest changes by @Peter Tschipper (see branch 12bu_priority_bloom) reduce the Bloom filter size to ~5KB without any significant loss in terms of either re-requests or propagation time.
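To illustrate with invented numbers (not from our measurements): a 1,000,000-byte block relayed as a 25,000-byte thin block looks like 40x compression on its own, but counted together with a 16KB (16,384-byte) filter the ratio is 1,000,000 / (25,000 + 16,384) ≈ 24x, and with a ~5KB (5,120-byte) filter it recovers to 1,000,000 / (25,000 + 5,120) ≈ 33x.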
 

Peter Tschipper

Active Member
Jan 8, 2016
254
357
@sickpig those results were pretty old, and yes, back then we were just calculating the compression of the xthin itself, not including the Bloom filter. The results from our most recent testing do include the Bloom filter size. The new and more comprehensive "thinstats" coming in the next release (BU PR#31) also factor the Bloom filter size into the overall daily compression rates (RPC getnetworkinfo command). In the log files you will still see things broken down by xthin, Bloom filter, timing info, etc., but that is for debugging purposes and should stay that way IMO. The client-facing data derived from getnetworkinfo, however, has all the relevant data accounted for.

Also, yes, you are right about Bloom filter sizes. For instance, when the mempool is overrun, as has been the case of late, Bloom filters can get quite big, lately averaging around 16KB from what I see. If we had bigger blocks that problem would go away, because the mempool would be much better behaved and smaller, and hence the Bloom filters smaller. Given that this is the reality today, however, I've been working on "Targeted Bloom Filters", which is pretty much finished but still being tested.

With targeted Bloom filters (a suggestion by @hfinger quite a while back) we create a filter only from the transactions in the pool that are most likely to be mined. I didn't really think it would work well, but after testing it out it seems very good at reducing the average filter size from 16KB to just 4KB no matter what the size of the mempool, while also improving our overall compression rate by just over 1%! The only downside is that the targeting process eats up some time: on my i7 it takes about 20ms, but on my 7-year-old laptop it takes 150ms... so not terribly bad, and not that important for p2p anyway. Still, going over the GFC a 4KB filter will travel much faster, which may make up for the loss and then some.
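Roughly, the targeting amounts to something like the sketch below (illustrative Python only; the parameter names and the one-block fee-rate cutoff are assumptions based on the description above, not the actual BU code):

```python
# MAX_BLOCK_BYTES and the "one block's worth" cutoff are assumptions
# for illustration, not values from the 12bu_priority_bloom branch.
MAX_BLOCK_BYTES = 1_000_000

def select_target_txs(mempool):
    """mempool: iterable of (tx_hash, size_bytes, fee_satoshis)."""
    # Rank by fee rate, highest first, the way a miner fills a block.
    by_feerate = sorted(mempool, key=lambda t: t[2] / t[1], reverse=True)
    selected, total = [], 0
    for tx_hash, size, fee in by_feerate:
        if total + size > MAX_BLOCK_BYTES:
            break
        selected.append(tx_hash)
        total += size
    return selected

def build_targeted_filter(mempool, bloom_filter):
    # Only the likely-to-be-mined subset goes into the filter, so it
    # stays small even when the mempool is badly backlogged.
    for tx_hash in select_target_txs(mempool):
        bloom_filter.add(tx_hash)
    return bloom_filter

# Usage with any object exposing add(), e.g. a plain set for testing:
mempool = [(b"\x01" * 32, 250, 5000), (b"\x02" * 32, 400, 2000)]
print(build_targeted_filter(mempool, set()))
```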
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@Peter Tschipper thanks for the explanation. If you don't mind, could you also briefly explain how the targeting process works? (Even a pointer to where an explanation has already been given is OK.)