Gold collapsing. Bitcoin UP.

@sickpig

Ok. I use a system I bought in mid-2013, with monitor, for ~400€, so maybe my hdd is low quality.
My processor is a dual-core 3.3 GHz with 16 GB RAM. dbcache in the GUI maxes out at 1024; I set it to 1000.
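For reference, the same value can be set in the config file instead of the GUI (a sketch; the path is the default Linux location, adjust for your system):

Code:
# ~/.bitcoin/bitcoin.conf
# database/UTXO cache size in MiB; larger speeds up sync at the cost of RAM
dbcache=1000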

my download is far from its limit, so that is not the bottleneck. The CPU runs at ~100% most of the time, though it sometimes drops for a few minutes.

Would it be possible that a node with a faked blockchain is still a node? And how could anyone profit from any kind of blockchain manipulation if every node checks the UTXO set?

Edit: a few weeks ago I tried it with an old laptop (bought mid-2013 for 200€). After three days I gave up. But maybe I had the wrong CPU settings.

Could Unlimited integrate some "modes" into the configuration, like weak system, medium system, fast system, minimal resources, maximal resources? As I see it, the default configuration is minimal resources on a weak system, right?
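There is no single mode switch today, but a few standard config options already approximate such profiles. A rough sketch (the option names are real Bitcoin options; the values are only illustrative):

Code:
# "weak system" profile (illustrative values)
dbcache=300        # small database cache
par=1              # one script-verification thread
maxconnections=8   # fewer peers, less bandwidth and CPU

# "fast system" profile (illustrative values)
dbcache=4000       # big cache, much faster initial sync
par=4              # verify signatures on four cores
maxconnections=40  # more peers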
 
Last edited:

Dusty

Active Member
Mar 14, 2016
362
1,172
What I really miss while setting up a new node is the ability to define a manual checkpoint up to which signature verification is disabled (but leaving all the other checks in place, like PoW, hashes, etc.).
Core does that, but only for (old) hardwired checkpoints.

E.g.: I already have a running node I trust; I ask it for the hash of a recently mined block with sufficient confirmations, and I fire up the new node, skipping signatures up to that block.

I think Classic is trying to do something similar by automatically choosing a block from 24h in the past.
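A minimal sketch of the gating logic, assuming a hypothetical -trustedcheckpoint=<hash> option (the flag name is invented; Core gates script checks on its hardwired checkpoints in a similar way):

Code:
// Hypothetical: skip expensive signature checks up to a user-supplied
// trusted block, while PoW, merkle and the other checks stay in place.
bool ShouldVerifyScripts(const CBlockIndex* pindex,
                         const CBlockIndex* pindexTrusted)
{
    if (pindexTrusted == nullptr)
        return true;  // no trusted checkpoint given: verify everything
    // only blocks past the trusted checkpoint get full script checks
    return pindex->nHeight > pindexTrusted->nHeight;
}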
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
What I really miss while setting up a new node is the ability to define a manual checkpoint up to which signature verification is disabled (but leaving all the other checks in place, like PoW, hashes, etc.).
Core does that, but only for (old) hardwired checkpoints.

E.g.: I already have a running node I trust; I ask it for the hash of a recently mined block with sufficient confirmations, and I fire up the new node, skipping signatures up to that block.

I think Classic is trying to do something similar by automatically choosing a block from 24h in the past.
if you already run a node you trust, simply rsync the .bitcoin folder. very fast and reliable.
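For example (a sketch; the host is a placeholder, and both nodes should be shut down first so the database files are in a consistent state):

Code:
# stop bitcoind/bitcoin-qt on both machines, then:
rsync -av --exclude wallet.dat trusted-host:.bitcoin/ ~/.bitcoin/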
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Could Unlimited integrate some "modes" into the configuration, like weak system, medium system, fast system, minimal resources, maximal resources? As I see it, the default configuration is minimal resources on a weak system, right?
i'm guessing you are syncing, and the fact that the system is trying to verify every single TX as it is downloaded is eating up 100% of your CPU time.

once you're fully synced, will it continue to kill your CPU? ( i doubt it.. )
it should be possible to implement what you're asking.
simply having a mechanism to reduce the rate at which it verifies TXs should do it?
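A crude sketch of that idea (entirely hypothetical; CheckTxSignatures is a stand-in name, not an existing function): sleep between expensive checks so a weak machine stays responsive while syncing.

Code:
#include <chrono>
#include <thread>

// Hypothetical throttle for initial sync: 0 = full speed.
static int g_verifyDelayMs = 0;  // would come from a config option

bool CheckTxSignaturesThrottled(const CTransaction& tx)
{
    if (g_verifyDelayMs > 0)
        std::this_thread::sleep_for(
            std::chrono::milliseconds(g_verifyDelayMs));
    return CheckTxSignatures(tx);  // stand-in for the real check
}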
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
anyone know how to verify this claim?

 
i'm guessing you are syncing, and the fact that the system is trying to verify every single TX as it is downloaded is eating up 100% of your CPU time.

once you're fully synced, will it continue to kill your CPU? ( i doubt it.. )
it should be possible to implement what you're asking.
simply having a mechanism to reduce the rate at which it verifies TXs should do it?
No, once I'm synced it doesn't eat my CPU. Maybe 10 percent at peak times. It runs absolutely stable.

(And the fact that my cheap computer and my really bad internet connection let me run a node without problems is one of the reasons I think we should increase the blocksize. Syncing is the only problem I take seriously.)

Core discussed this problem recently; see the Core website. Jonas Schnelli proposed something like Core offering a blockchain verified by Core, but the rest of the gang vehemently refused it.
 

Dusty

Active Member
Mar 14, 2016
362
1,172
if you already run a node you trust, simply rsync the .bitcoin folder. very fast and reliable.
Of course, but my point is expressly to avoid the need to copy filesystem data.
For example, I could ask a friend for a hash, or my reference node could have little bandwidth, etc. etc.
There are plenty of use cases for this, and it would be a great performance enhancement.
 
what happens when I sync?
I download the blocks. These files, do they change after they are downloaded?
Then there are the index files, which I build by validating signatures?

I'm really a bit clueless about the details.
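Roughly, for Core-derived clients like Unlimited, the data directory looks like this; the raw block files are append-only and do not change once written, while the databases are what your node builds while validating:

Code:
~/.bitcoin/
  blocks/blk*.dat    raw blocks as downloaded (append-only)
  blocks/rev*.dat    undo data, used to roll back blocks on a reorg
  blocks/index/      database mapping each block to its place on disk
  chainstate/        database of the current UTXO set

Signature validation does not itself produce the index files; it is a pass/fail gate each block must clear before its transactions are applied to the chainstate.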

Couldn't some kind of check of the index files be included, so that a new client just has to download the blockchain and then receives the index files once they have been checked by a random number of random other nodes?

I really think solving this problem could be a breakthrough, both for bitcoin and for unlimited.
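Nothing like this exists in the protocol today, but a sketch of the idea (QueryPeerUtxoHash is an invented call; the gettxoutsetinfo RPC already exposes a hash of the local UTXO set that could serve as the compared value):

Code:
// Hypothetical cross-check: accept our UTXO snapshot only if several
// randomly chosen peers report the same UTXO-set hash for this height.
// uint256 and CNode are the node software's hash and peer types.
bool UtxoSetConfirmedByPeers(const uint256& ourHash, int height,
                             const std::vector<CNode*>& sampledPeers)
{
    for (CNode* peer : sampledPeers) {
        if (QueryPeerUtxoHash(peer, height) != ourHash)  // invented call
            return false;  // any disagreement: fall back to full validation
    }
    return !sampledPeers.empty();
}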
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
great graph from Bitpay with orange line representing the ongoing block limit in an adaptive scenario:


https://bitpay.github.io/blockchain-data/
so this is a significant piece of info: Luke-jr is actually a co-founder of Blockstream, contrary to how he has been representing himself. he's been calling himself a contractor, not an employee and certainly not a founder. what a piece of shit:

 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
No, once I'm synced it doesn't eat my CPU. Maybe 10 percent at peak times. It runs absolutely stable.

(And the fact that my cheap computer and my really bad internet connection let me run a node without problems is one of the reasons I think we should increase the blocksize. Syncing is the only problem I take seriously.)

Core discussed this problem recently; see the Core website. Jonas Schnelli proposed something like Core offering a blockchain verified by Core, but the rest of the gang vehemently refused it.
i believe the validation during syncing is extremely redundant.

it's checking every single TX, right?

if segwit does what I think it does, future blockchains will not include signatures, and old TXs will simply not be possible to validate. the idea is that you do not need to check every single TX; validating the block itself is OK, because we can assume that once a TX is part of a block it has already been validated. so if the block passes the check, there is no need to check every single TX within that block.

so in theory clients could simply NOT check every single TX, only check the block's hash.
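The hashing that would remain looks roughly like this (a sketch in Core style; CheckProofOfWork and BlockMerkleRoot are real helpers in that codebase):

Code:
// Sketch: the checks a node still performs if it skips signature
// verification for deep blocks. PoW plus the merkle root bind every
// txid to the header, so any altered TX changes the block hash.
bool CheckBlockWithoutSigs(const CBlock& block,
                           const Consensus::Params& params)
{
    // header hash must meet the difficulty target
    if (!CheckProofOfWork(block.GetHash(), block.nBits, params))
        return false;
    // txids must hash up to the merkle root committed in the header
    if (BlockMerkleRoot(block) != block.hashMerkleRoot)
        return false;
    return true;  // scripts/signatures deliberately not checked here
}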
typical core retardedness:

unwilling to compromise on things that matter to people ( 2MB blocks )

but perfectly willing to rush a complex implementation which no one actually cares about except for its "let's pretend we increased block space" property.

this is deeply concerning...

the lie about segwit being the "fastest way to increase block size" is mind-blowing. they WILL be implementing a 75% trigger + grace period to activate segwit too ( right? ), so how exactly is this the "fastest way"?

i am deeply concerned.
 
@adamstgbit

Yes, it does. My poor CPU has to check every single signature, after I got the blockchain pieces from several peers that already checked them. "Extremely redundant" nails it.

You think we need segwit to delete signatures so that the client can stop checking them???

Why can't we just tell it not to check signatures older than one month, or something like that? Or only check signatures younger than 100 blocks? Or do something like checking every n-th signature, with the sampling rate decreasing with the depth of the blocks?

And, btw, I think segwit doesn't do that; it saves the signatures in another place, so my poor CPU has to read the hash in the transaction that shows where it can find the signature to validate. This is what I think it does, but I'm not sure. (Nobody seems to be sure.)
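That sampling idea could look something like this (purely hypothetical, not implemented anywhere; the probability curve is arbitrary):

Code:
#include <random>

// Hypothetical: verify a signature with probability that falls off
// with block depth, so recent blocks are fully checked and ancient
// ones only spot-checked. The 1/(1 + depth/1000) curve is arbitrary.
bool ShouldCheckSignature(int blockDepth, std::mt19937& rng)
{
    if (blockDepth < 100)
        return true;  // always verify everything near the tip
    std::uniform_real_distribution<double> u(0.0, 1.0);
    return u(rng) < 1.0 / (1.0 + blockDepth / 1000.0);
}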
 
  • Like
Reactions: steffen

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
I also roll my eyes when I read Luke's reasons for rushing SegWit, while he uses community sentiment to justify avoiding initial-download optimization.

He's learning from Maxwell but is still bad at reconciling the conflicts in his justifications.
 
  • Like
Reactions: majamalu

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@Christoph Bergmann

And, btw, I think segwit doesn't do that; it saves the signatures in another place, so my poor CPU has to read the hash in the transaction that shows where it can find the signature to validate. This is what I think it does, but I'm not sure. (Nobody seems to be sure.)
yes, your poor cpu does have to do that. somehow the data is mapped to the sig in the witness block through some ordering mechanism. seems like it would take longer, but i'm not sure. also, iirc, the plan from Core is to remove checkpointing at some point. personally, if they can speed this process up enough to allow that, i'm for it. checkpointing is cheating, imo. i remember when the altcoin space was really raging a few years ago, one of the altcoin core devs' chief strategies was to checkpoint their blockchains repeatedly, as close to the tip as possible, to prevent 51% attack rewrites of their weak, unstable chains. it was scummy.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
@Christoph Bergmann
Yes, it does. My poor CPU has to check every single signature, after I got the blockchain pieces from several peers that already checked them. "Extremely redundant" nails it.
I used to think along similar lines, but I've come to the conclusion that this is not a download problem or a tax on network resources. It's not redundant in my mind.

It's needed for me to trust the UTXO dataset. Without it, I can't trust that some old bug or some other hack couldn't someday be exploited to somehow claim dormant or old coins.

It's a one-off cost an individual has to pay to run a node.
 
Last edited:

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
@AdrianX
you don't need to check every TX to know your UTXO set is perfect.
after checking a block, you can assume the TXs within that block are valid,
so the client should only do the hashing necessary to check the block itself.

not sure we can trust this FOOL but luke-jr said:
The other day, I realised a potential cleanup that might make it practical to do the IBD (initial blockchain download) optimisation (that is, skipping signatures on very old blocks) apply to pre-segwit transactions as well
post-segwit, it will not be possible to validate individual TXs in old blocks; we will need to assume they are valid by checking the block's hash.

if this is OK, then it can be applied to pre-segwit old blocks as well.
 
  • Like
Reactions: AdrianX

jl777

Active Member
Feb 26, 2016
279
345
my strategy for iguana is to skip sig validation until everything else is done.
so it would be up to the user whether they want to transact while sigs are not yet validated, ie. they check the tip and it matches trusted nodes, so all the sigs can be assumed to match.

if they didn't, some txid or merkle root or blockhash would have changed, and you couldn't end up with the same tip.

validating signatures is one of the most CPU-expensive operations, and sig data is about half the total space, so I also segregate the vin data into a separate directory. After you have validated it (or decided you don't want to), you can just delete it.

when there are enough iguana nodes live, the streamlined dataset without sigs can be synced; that would be a bit less than 20GB. an optimization would be to use the actual bittorrent network to sync all the read-only bundle files. that would mean you can sync the full chain with about 1% of the network load on relay nodes.

Oh, this is all via totally local changes, ie. no forks of any sort needed.
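A compressed sketch of that flow (invented names; iguana's actual code differs):

Code:
#include <deque>
#include <vector>

// Hypothetical outline of deferred signature validation:
// 1. connect blocks with sigs unchecked, queueing the deferred work
// 2. compare our tip hash against trusted nodes
// 3. drain the queue in the background, or delete the sig data entirely
// uint256 is the node software's 256-bit hash type.
struct DeferredSigCheck { uint256 txid; int inputIndex; };
static std::deque<DeferredSigCheck> g_sigQueue;  // filled during sync

bool TipMatchesTrustedNodes(const uint256& ourTip,
                            const std::vector<uint256>& trustedTips)
{
    // a mismatch anywhere in history would cascade into a different
    // tip hash, so agreement here covers every ancestor block
    for (const uint256& tip : trustedTips)
        if (tip != ourTip) return false;
    return !trustedTips.empty();
}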
 
  • Like
Reactions: awemany