Gold collapsing. Bitcoin UP.

go1111111

Active Member
If just 1/3 of the miners who pinky-swear that they will honor SW decide to change their mind, then the longest chain being built will spend those ANYONE CAN SPEND outputs, and the majority of non-updated nodes will go along.
And what will the merchants and exchanges do? Non-updated nodes don't have any significant power if they're not participating in the Bitcoin economy.

This is not too much different than the protection you have against miners stealing your coins. Sure, miners could all start running code that lets them steal your coins, but you rely on the rest of the network not accepting blocks that they produce. Same thing for segwit. Miners could choose not to enforce the new soft fork rules, but if the ecosystem adopts the soft fork, then the ecosystem is protecting you from this.

It does raise the question of "how do I know when the ecosystem has accepted segwit to a degree where I trust them to continue to enforce its rules?"
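To see why legacy nodes would "go along", here is a toy sketch of why a pre-segwit node treats a segwit output as spendable by anyone. The interpreter below is a deliberately simplified, hypothetical model (real script validation is far more involved), but it captures the key point: old nodes see `OP_0 <hash>` as plain data pushes with no signature check.

```python
# Hypothetical, simplified model of legacy script evaluation.
# A P2WPKH output has scriptPubKey: OP_0 <20-byte pubkey hash>.
# Pre-segwit nodes don't know the witness rules, so they just
# push both items and check that the top stack item is truthy.

def legacy_eval(script_sig, script_pubkey):
    """Evaluate scriptSig then scriptPubKey as plain data pushes."""
    stack = []
    for item in script_sig + script_pubkey:
        stack.append(item)
    # Legacy rule of thumb: spend is valid if the top item is non-zero.
    return bool(stack) and any(stack[-1])

witness_program = [b"", b"\xab" * 20]  # OP_0 (empty push), <pubkey-hash>
empty_script_sig = []                  # segwit spends carry no scriptSig

# To a legacy node, an empty scriptSig already satisfies this output:
print(legacy_eval(empty_script_sig, witness_program))  # True
```

Only upgraded nodes additionally demand a valid witness, which is why the quoted scenario depends on whether the economy enforces the new rules.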
 

jl777

Active Member
Feb 26, 2016
279
345
I think I have solved pretty much all the issues needed for scaling, other than the blocksize increase (or maybe interleaving), which will need a hardfork. Still waiting for any actual design flaws to be found in my method.

Once all nodes can catch up to the current block in an hour (assuming a 200 Mbps connection), staying current is not much of an issue. By halving the data needed to get started and using the BitTorrent network, anything in the read-only dataset means even relay nodes won't need to incur the bandwidth, and what bandwidth is used is halved. The vin data is half the total data, though, so I understand the desire for segwit, but it is still data that is needed, and I haven't bothered to optimize its space usage since a full node doesn't have any problems with 30 GB right now.

New CPUs are not getting faster clock rates, but rather more cores. The parallel design takes advantage of this and allows multicore searching of the dataset, and since bitcoin RPC-level queries alone are not enough for a block explorer, I pack in the data that allows that too. Yesterday I realized I can make it able to calculate the balance of any address as of any block, for a cost of 4 bytes (actually less, as there is no need for 32 bits for the blockheight, and the compression will squeeze out this "waste").
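A minimal sketch of how balance-as-of-any-block can work when each balance change carries a (compressible) 32-bit block height; this is my illustration of the general idea, not iguana's actual data layout:

```python
import bisect

# Per-address history: (blockheight, running_balance) pairs, appended
# whenever the address's balance changes. The ~4-byte cost mentioned
# above is the 32-bit blockheight stored alongside each entry.
history = {
    "addr1": [(100, 50), (250, 30), (400, 90)],
}

def balance_at(address, height):
    """Return the address balance as of the given block height."""
    entries = history.get(address, [])
    heights = [h for h, _ in entries]
    # Binary-search for the last balance change at or before `height`.
    i = bisect.bisect_right(heights, height) - 1
    return entries[i][1] if i >= 0 else 0

print(balance_at("addr1", 300))  # 30: last change was at block 250
print(balance_at("addr1", 50))   # 0: no activity yet at that height
```

Because the history per address is append-only, it fits naturally into the read-only bundle scheme described above.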

I found someone who will generate a brute-force list of all balances as of block 400,000 to use as validation for iguana. Once that passes, what is left is tracking and integrating the realtime bundle into the search results.

Over time, as we get more bundles, new read-only files just get pushed into the torrent space. Initial sync time will continue to grow linearly with blocksize, but I don't see it taking more than a day even if things get quite a bit larger, assuming small, regular bandwidth increases for the network.

HDD space usage will grow too, but iguana takes advantage of reused addresses, so such transactions take a small fraction of the space required for a one-time-used address.

Looking for some alpha testers to fine-tune and verify the initial sync speeds.
 

rocks

Active Member
Sep 24, 2015
586
2,284
The public test trial for Satoshi’s Bitcoin is now ready for use.

To participate you only need to compile and start the trial client; everything has been set up to run automatically from there. The public trial fork is scheduled for this coming Sunday at around noon Eastern time US (block 403562 specifically). I have run many trial tests and everything is working great. The actual launch will follow this test and activate in mid-April. To participate in the test:
  • Download and compile the public test branch “0.11.2_PublicTest_At403562” from github. The build environment is identical to Classic (link below).
  • Back up your datadir; after the fork the datadir may no longer be compatible with the Core client
  • Run bitcoind or bitcoin-qt
The public test branch to download is here:
https://github.com/satoshisbitcoin/satoshisbitcoin/tree/0.11.2_PublicTest_At403562

You can compare all code changes from Classic 0.11.2 here (click Files Changed to see the diff):
https://github.com/bitcoinclassic/bitcoinclassic/compare/0.11.2...satoshisbitcoin:0.11.2_PublicTest_At403562

I hope you will join the public trial: there is zero risk in participating, and you can run a true full node that mines blocks at home again for fun. If the project does not take off, nothing is lost, but if it does, you have the chance to mine early-adopter blocks.

Difficulty for the trial is set so that a smaller number of nodes will find blocks every 10 minutes. If more nodes join, blocks will come faster; this is intentional, both to stress-test a fast-block scenario and to let the test run through a few difficulty adjustments without waiting months. Also, the test client will only run for 10K blocks post-fork before automatically stopping, to prevent the test chain from continuing.

For the actual fork, difficulty will be adjusted so that the number of nodes that joined the trial test mine at the expected 10-minute interval.

It was a lot of fun digging into the code to figure out how to implement this fork. Below is a list of some of the main work that went in. If you feel something else is needed please let me know.
  • Following BIP9 conventions for parallel soft forks, a higher block version is used to tag blocks as full-fork compatible. Version 0x00000100 (256) is used for the fork.
  • The block height for the fork is set to 403562. From that point on, only blocks tagged for the full fork are accepted
  • The block size post-fork automatically increases to 2MB (follows Classic)
  • A new DNSSeed server was setup to help forked peers find each other after the fork
  • The difficulty adjustment at the fork point is reset to a value where the expected number of nodes will create blocks every 10 minutes.
  • A difficulty retargeting overflow bug in core was found and fixed
  • A new modified scrypt POW algorithm was implemented in the crypto library to re-enable CPU mining
  • The new POW algorithm activates at the fork height, using the new version tag to select which algorithm to apply
  • To improve performance, given how long the new hash algorithm takes (~1 sec/hash), a caching method was implemented to save previous block hashes.
  • Startup performance issues related to the new POW were found and fixed
  • Multiple performance improvements were made to the bitcoind miner
  • The difficulty adjustment band was increased to allow for faster difficulty adjustment in case hash power increases rapidly
  • Informational debug.log messaging was improved to better communicate block rejection after the fork
  • The alert key was updated, since it has been compromised by Theymos
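The version-tag gating described in the list above could look roughly like this. This is a hypothetical sketch under my own assumptions, not the actual patch; the function names are invented, and the real code lives in the consensus-validation paths:

```python
FORK_HEIGHT = 403562
FORK_VERSION = 0x00000100  # 256, tags a block as full-fork compatible

def block_is_acceptable(height, version):
    """After the fork height, only fork-tagged blocks are accepted."""
    if height >= FORK_HEIGHT:
        return version >= FORK_VERSION
    return True  # pre-fork blocks follow the old rules

def pow_algorithm(version):
    """The same version tag selects which proof-of-work to check."""
    return "modified-scrypt" if version >= FORK_VERSION else "sha256d"

print(block_is_acceptable(403561, 4))      # True: pre-fork block
print(block_is_acceptable(403562, 4))      # False: untagged post-fork
print(block_is_acceptable(403562, 0x100))  # True: fork-tagged block
print(pow_algorithm(0x100))                # modified-scrypt
```

Using the version field for both acceptance and PoW selection means no separate activation flag has to be threaded through the block-processing code.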
Assuming the public trial is successful, the official release client will be made publicly available within a few days. The release version will be identical to the public test with just the fork height changed and the automatic stop removed. The official release will be set to fork in mid-April. This will provide several weeks to enable and build the ecosystem. The idea is that as more clients appear on the P2P network this will generate interest and encourage others to run a client as well.

Please post questions to the main thread here
https://bitco.in/forum/threads/announcement-bitcoin-project-to-full-fork-to-flexible-blocksizes.933/page-5#post-15421
 
Last edited:

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@Justus Ranvier Interesting anecdote about Adam Back and his "ossification" worries. Although I think his worry is misplaced, I do have some sympathy for that point of view.

It can be difficult to understand how Bitcoin can simultaneously 1) be adaptable and upgradable for desired technical improvements, and 2) be resistant to changes that would undermine its qualities as sound money.

Many people are attracted to Bitcoin for its sound money properties, and subscribe to an "ossification" type theory to give them comfort that these properties will not be violated. Phrases like "Bitcoin is backed by math" exemplify this mindset.

But the real solution to this seeming discord is to realize that it is ultimately the market which is in control.

In our world of fiat money and central banks, people are not used to thinking about money as a market phenomenon. It takes an understanding of Austrian School monetary theory to understand how freely interacting individuals, acting according to their subjective preferences, will imbue value to whatever good in the market best fulfills the criteria of sound money.

Once this understanding is fully appreciated, it leads to a realization that the possibilities for Bitcoin's future are truly amazing. Not only can its sound money properties be preserved, they can also be improved.

For example, let's imagine the inflation schedule were trivially user-configurable. I think it is more likely that the market would converge on a lowering of inflation than on an increase (although it's most likely the 21M coin limit Schelling point will remain unchanged). Similarly, when the market is unleashed, Bitcoin can become more fungible, more transportable, easier to store, more secure, more durable, more identifiable, more divisible, more convenient, and more widely marketable.
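As an aside, the 21M Schelling point is not an arbitrary number: it falls out of the halving schedule, and a few lines verify it:

```python
# The ~21M coin limit is the sum of the block subsidy schedule:
# 50 BTC per block for 210,000 blocks, then halved (with integer
# division, as in the protocol) every 210,000 blocks until zero.
subsidy_satoshis = 50 * 100_000_000
total = 0
while subsidy_satoshis > 0:
    total += 210_000 * subsidy_satoshis
    subsidy_satoshis //= 2  # integer halving

print(total / 100_000_000)  # just under 21 million BTC
```

Changing that schedule would be a one-parameter edit technically; the point above is that the market, not the code, is what keeps it fixed.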
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
Once this understanding is fully appreciated, it leads to a realization that the possibilities for Bitcoin's future are truly amazing. Not only can its sound money properties be preserved, they can also be improved.
You've highlighted the positive side, but that's only half the story.

There is no scam more profitable than corrupting the money, so the fight to make and keep sound money will never end because the efforts to undermine it will never end.
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
@Justus Ranvier Yeah, I agree that the fight will never end. But I also think it is important to highlight why this fight is so important.

It is generally under-appreciated how valuable sound money is. It could be so wonderful and enriching to all of society. The problem is that the benefits are dispersed, so that incentives to defend it are less pronounced, and coordination of action can be difficult. The scammers and underminers have the advantage of concentrating their ill-gotten gains among a smaller group. So although the profit from their scam pales in comparison to the value they destroy, it is easier for them to focus their attacks on any chink in the sound-money armor.

The sound money defenders, though, have the advantage that they can use market mechanisms to coordinate their actions. This allows them to be innovative, use resources efficiently, and pursue multiple strategies simultaneously.

Maybe if we understand the dynamics of this battle better (even perceiving that there is a battle), it will help us to be more effective in achieving our goals.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
Jorge is my new hero (after @Peter Tschipper of course)

https://www.reddit.com/r/btc/comments/4ata42/jorgestolfi_on_bct_nailed_it_the_only_cost_that/

Jstolfi on BCT said:
The only cost that really matters is the marginal cost for a miner to add another transaction to his candidate block. That is the cost that the transaction fees have to cover. The magnitude of that cost is one of the great mysteries of bitcoin, extensively discussed but never estimated. But it seems to be very small (at least for competent miners) and is probably dependent only on the total size of the transaction. But anyway the developers have no business worrying about that cost: the fees are payment for the miners, it should be the miners who decide how much to charge, and for what.
https://bitcointalk.org/index.php?topic=1398994.msg14224749#msg14224749
 

Dusty

Active Member
Mar 14, 2016
362
1,172
Well, to be precise, these are not the only costs: the whole network of full nodes has to bear the cost of increased block size, but we have Moore's law on our side for this problem.

What I don't understand exactly is the bandwidth saving he talks about using a new set of RPC APIs. Can someone elaborate on the subject?
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@Dusty

WRT full nodes bearing the cost: just use BU 0.12; you can save 90% of the bandwidth while relaying/receiving blocks (and also reduce block validation time).

Or, if you are willing to give up a little bit of trustlessness, you can use the -blocksonly option and reduce the overall bandwidth requirement by ~80% (see https://bitcointalk.org/index.php?topic=1377345.0).

That said, I don't think gmax's solution (blocksonly) is OK; at the least, a full node run with blocksonly=1 would not be considered a "proper" full node by me.

Mind you, your objection applies equally to SegWit, though I don't think Core will integrate Xthin blocks anytime soon.
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Has anyone looked into Maxwell's claim that Xthin blocks save at most 12%? I suspect this is "correct", but only under specific conditions. Most relevantly, I think the numbers would be different if someone were restricting connections with maxconnections=, and possibly if one were on a bandwidth-restricted connection.
 
  • Like
Reactions: AdrianX

Dusty

Active Member
Mar 14, 2016
362
1,172
Is it possible to enable some kind of debugging info in BU so as to collect real-world statistics after a few days of running the node?

This would put to rest all speculations.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@sickpig, I agree, the node cost is a moot point.

What about this?
We should ask Jorge, I suppose. But the gist of it is that there's more than one way to skin a cat...

Has anyone looked into Maxwell's claims that x-thin blocks is only a 12% saving at most?
He is correct; the fact is that that 12% of the bandwidth is far more critical than the remaining 88%.

For the latter you have, on average, 10 minutes to relay it; the former needs to be relayed as fast as possible.

Luckily, the 12% happens to be a subset of the 88% chunk... that's the leverage Xthin uses.

Is it possible to enable some kind of debugging info in BU so to collect real world statistics after a few days of running the node?
Sure, put debug=thin in your bitcoin.conf and you'll have plenty of data to work with.

By the way, the claim of 90% BW reduction in the BU 0.12 announcement is based on a lot of data coming from that logging facility.
 
Last edited:
  • Like
Reactions: Dusty and bluemoon

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Sounds about right in terms of throughput.

What thin blocks save is latency/burst bandwidth.
Ah, but my point is: does it improve in certain circumstances?

To take an extreme example, say I had a node which was only connected to one other node. That node would know exactly which transactions it would send me and would not try to retransmit to me. There would also not be any other nodes trying to transmit duplicate transactions to me. I would also not be transmitting any transactions to any nodes. In that case, x-thin blocks would be close to a 50% saving on bandwidth.

So there would be some function which would be 50% at x=1 and 12% at x=(whatever node count Maxwell didn't publish), allowing one to tune the savings one could achieve.
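A toy model of that function, using illustrative assumptions of my own (a 300-byte average transaction, a 36-byte INV announcement, and each extra peer adding one INV per transaction), reproduces both endpoints:

```python
TX_BYTES = 300   # assumed average transaction size
INV_BYTES = 36   # assumed size of an inv announcement per extra peer

def xthin_savings(peers):
    """Fraction of total bandwidth saved if the block no longer
    retransmits transaction data the node already received."""
    # Without Xthin: each tx is relayed once, announced by the
    # other peers, then retransmitted in full inside the block.
    inv_overhead = INV_BYTES * (peers - 1)
    total = TX_BYTES + inv_overhead + TX_BYTES
    return TX_BYTES / total  # the block's share of the total

print(round(xthin_savings(1), 2))   # 0.5: the one-peer extreme above
print(round(xthin_savings(50), 2))  # shrinks toward Maxwell's ~12%
```

Under these assumptions the savings fall to roughly 12% somewhere around 50 peers, which is consistent with the idea that the figure depends heavily on connection count.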

This brings up the proposition that perhaps it is possible to be *too* connected.

Also, maybe the way you retransmit on the network might be better to be different for blocks and transactions.

Out of interest, is it possible to disable x-thin blocks on BU?
 

rocks

Active Member
Sep 24, 2015
586
2,284
The average chinese bitcoin forum member is in favour of Classic:
http://8btc.com/thread-30703-1-1.html
The problem is that the average US/EU forum member has been in favor of larger blocks / Classic for some time as well, yet this has not resulted in a supermajority (>75%) of the hash rate following suit. Between Mow and others there is enough hash rate to block the upgrade, despite what the average forum member in any country prefers. That is a fundamental problem of losing one-CPU-one-vote: we all have many CPUs idling all day long at home.
 
  • Like
Reactions: majamalu and Norway