Gold collapsing. Bitcoin UP.

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
I think it's a Russian buttcoiner parodying the whole situation from various angles:

- crackdown on Bitcoin in Russia & elsewhere
- the amount of unity in the Bitcoin space
- private blockchain ventures

I didn't read all of it.
 
That number rings a bell. I think when I had a Raspberry Pi node, it got stuck on that block. Unfortunately, the filesystem took a dump and it wouldn't boot, so I couldn't look into it much.
hehe, that's funny. When the "monster block" was propagated, I didn't even notice it.

I'll try it again. It has been running for ~16 hours, and I'm on block 351129. I gave it max CPU priority. What was the reason we can't use the GPU to validate signatures?*

Sidestory: One of my readers is also trying to set up a node and told me about problems. She is always stuck ~28 weeks behind. This could be the monster block (I didn't check the week count). So block 374963 seems to be an obstacle to running a node.

Some questions:
- what would have happened if we had 2/4/8 MB blocks?
- what would have happened if everybody had used XT? I remember Mike Hearn implemented a solution to the O(n²) sighash problem, maybe not as elegant as core's ideas, but would it have prevented some nodes from getting stuck on one transaction?
- would the fixes in Classic prevent such a block from emerging?
- what would happen if everybody used Unlimited?
- block 374963 seems to be nothing special - https://blockchain.info/block/0000000000000000028d61ec5c8bae802c03eef856fdd53310df15f8a87661b4 - it just has a lot of outputs, but that should be no problem. Do you know what's wrong with it?
F2Pool just mined a Classic block. Cool!

http://nodecounter.com/#block_explorer
Yay! Just 1% more and we break core's holy "95% consensus".

What I like about nodecounter currently is that
- core's node count is finally dropping. Not fast, but it is happening, and they are at an all-time low.
- core 0.12 is being adopted very slowly and seems to be stagnating. That means less than 25% of the network supports their update.

How long until they wake up and realize they have lost the support of the network?

*That there is no optimization of syncing, no optimization of block relaying, and so on, while core cries that we can't have bigger blocks, is one of the reasons I'm happy that they are losing support. It would have been so easy if they had said: let's achieve some optimizations in all of these fields, together, and then we launch a version with bigger blocks, and when this version has x% support from miners and nodes, we activate. But no: they have been asked so often and refused; they reject all performance improvements not made by core, dismiss every developer who is not core as stupid, and treat every disagreement with them as an offense to those brilliant minds. So I'm happier every day when I see their share of nodes drop.

But let's stop talking about core. Let's talk about blockstream-core - the gang of core developers on Blockstream's payroll.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
But no: they have been asked so often and refused; they reject all performance improvements not made by core, dismiss every developer who is not core as stupid, and treat every disagreement with them as an offense to those brilliant minds. So I'm happier every day when I see their share of nodes drop.

But let's stop talking about core. Let's talk about blockstream-core - the gang of core developers on Blockstream's payroll.
it's very important that all devs scorned by core dev come out and make that known.

Lukey et al continue to go around and say crap like "i'm not aware of any devs out there who are unhappy with our policies of free and open access to core development. afaik, we've been fair and welcoming in our policies to all" (paraphrased).
 

Aquent

Active Member
Aug 19, 2015
252
667
Luke-jr can say what he likes. It's like Putin saying the Russian soldiers were on holiday in Ukraine: they just happened to be enjoying their annual leave when they annexed Crimea.

Don't expect them to acknowledge anything. It's not like we're dealing with honest people when Adam Back says Classic is "semi-tested alpha code".
Whatever they may say, that's pushing an intentionally deceptive narrative: Gavin and Jeff somehow being inexperienced, and Greg, who hardly does much coding and dishonestly attributes others' commits to himself, being somehow an expert, although he joined bitcoin development three years later than Gavin and Jeff and is therefore by definition less experienced.

Those are just facts, though. Nothing binds people like Putin to facts. Nothing stops him from saying the uniformed Russian soldiers are civilians.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@Aquent

note my question to Jonas about his facts in that tweet convo. all i see in his image is ipv6 addresses and no evidence of them being channeled thru one ipv4 address or subnet, unless i'm missing something. also, grepping from a 125-connection max on his local pc isn't the same as a network-wide crawler like the one Bitnodes is using. he never answered my question. i think it's important to pin this guy down on this to establish whether or not he's a credible actor. i'm beginning to have my doubts, given that he spams the Classic Slack continuously and his Soundcloud interview a few weeks ago demonstrated severe logical insufficiencies in what he deems fair play.


 

jl777

Active Member
Feb 26, 2016
279
345
it's very important that all devs scorned by core dev come out and make that known.

Lukey et al continue to go around and say crap like "i'm not aware of any devs out there who are unhappy with our policies of free and open access to core development. afaik, we've been fair and welcoming in our policies to all" (paraphrased).
does being treated like a bad dog count?

"Shame on you, and shame on you for having no shame."
https://bitcointalk.org/index.php?topic=1398994.msg14222399#msg14222399

Not sure exactly what I did wrong, as I pointed out that (N + 2*numtx + numvins) > N,
where the left side is the HDD space and bandwidth used by segwit and the right side is that of a 2MB hard fork.
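
To put rough numbers on that inequality (a sketch with purely illustrative figures, not measurements from any real block):

Code:
/* Illustrative check of (N + 2*numtx + numvins) > N.
   All three inputs below are made-up, hypothetical values. */
#include <stdio.h>

int main(void)
{
    long N = 1000000;     /* hypothetical: bytes of raw transaction data */
    long numtx = 2500;    /* hypothetical: transactions in the block     */
    long numvins = 5000;  /* hypothetical: total inputs across those txs */
    long segwit = N + 2*numtx + numvins;  /* left side: segwit usage     */
    long hf2mb = N;                       /* right side: plain 2MB HF    */
    printf("segwit: %ld bytes, 2MB HF: %ld bytes, overhead: %ld bytes\n",
           segwit, hf2mb, segwit - hf2mb);
    return 0;
}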

I guess it's because I didn't read in detail all the various places, including some YouTube video of a conference where it is clearly explained how, in the future, some new signature method could save 30%. Instead I just read the BIP.

I didn't realize that I was supposed to analyze a hypothetical future improvement when I was analyzing the effect of the upcoming segwit softfork in 6 weeks?

He keeps saying that I don't understand bitcoin, but then when I describe what iguana does, he gets mad and says bitcoin has been doing that for years.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
Whatever they may say, that's pushing an intentionally deceptive narrative: Gavin and Jeff somehow being inexperienced, and Greg, who hardly does much coding and dishonestly attributes others' commits to himself, being somehow an expert, although he joined bitcoin development three years later than Gavin and Jeff and is therefore by definition less experienced.
And you know what is even funnier? All of them are using or have used code written by Jeff when he was a Linux kernel developer.

He was the main developer[1] of the Linux kernel's SATA subsystem (libata). Hence, if you're currently using a Linux PC/server/VPS, you're running on Jeff's code.

[1] https://www.kernel.org/doc/htmldocs/libata/
 

albin

Active Member
Nov 8, 2015
931
4,008
Is this going to be a tasteless ambush, like when they got the wrong Toomim brother and tried to pull all that insulting, pointless gotcha-type stuff?
 

nomnomnom

New Member
Mar 14, 2016
2
4
Because all the effort went into optimizing secp256k1 on CPUs and if we don't use that code for at least a few major revisions, it will offend the people who spent two years writing it.
Someone already made a version of secp256k1 which uses OpenCL. It seems only slightly faster for single signature validations, but maybe it can be optimized further in the future: https://github.com/hhanh00/secp256k1-cl
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
it's very important that all devs scorned by core dev come out and make that known.

Lukey et al continue to go around and say crap like "i'm not aware of any devs out there who are unhappy with our policies of free and open access to core development. afaik, we've been fair and welcoming in our policies to all" (paraphrased).
Quotes, give me quotes!
Someone already made a version of secp256k1 which uses OpenCL. It seems only slightly faster for single signature validations, but maybe it can be optimized further in the future: https://github.com/hhanh00/secp256k1-cl

I don't get the stress here about sig validation. Why not just take advantage of homomorphism to gain arbitrary speedup (up to your ability to load the data)?
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
you guys might appreciate this: 4 hops of an xthin block in 0.25 seconds. Each hop validates the difficulty and then forwards.

(San Jose)
2016-03-18 02:40:38.092887 Sending expedited block 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 to 169.55.99.89:41273.

(Washington DC)
2016-03-18 02:40:38.134474 Received new expedited thinblock 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 from peer 169.45.95.196:8333 (5). Size 6844 bytes. (status 0,0x0)
2016-03-18 02:40:38.134574 Sending expedited block 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 to 159.8.161.36:8333.

(London)
2016-03-18 02:40:38.201305 Received new expedited thinblock 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 from peer 169.55.99.89:35863 (5). Size 6844 bytes. (status 0,0x0)
2016-03-18 02:40:38.201460 Sending expedited block 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 to 159.122.78.214.

(Frankfurt)
2016-03-18 02:40:38.241775 Received new expedited thinblock 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 from peer 159.8.161.36:45173 (19). Size 6844 bytes. (status 0,0x0)
2016-03-18 02:40:38.241916 Sending expedited block 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 to x.y.z.q:52623.

(Boston)
2016-03-18 02:40:38.345334 Received new expedited thinblock 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 from peer 159.122.78.214 (38). Size 6844 bytes. (status 0,0x0)
2016-03-18 02:40:38.416495 Reassembled thin block for 000000000000000003b276adfbb4a70b7895283636fc0cff7db21a80e6f9bc42 (522648 bytes). Message was 6844 bytes, compression ratio 76.37
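(Sanity check on the log's numbers: 522648 / 6844 ≈ 76.4, matching the reported compression ratio, and 02:40:38.345334 - 02:40:38.092887 ≈ 0.252 s from San Jose to Boston over the four hops.)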

 

jl777

Active Member
Feb 26, 2016
279
345
I'd expect the speedup would come from being able to process more signatures in parallel than a CPU can handle, rather than from speeding up the individual signature operations.

I'd really like to know what one of those 4-GPU mining rigs could do in terms of ECDSA verifications.
The problem with mapping algos to a GPU is that it is super-optimized for SIMD computational tasks: single instruction, multiple data.

Basically, if you can't map the task to a vectorized algorithm, you are reduced to executing on the GPU's actual independent cores. Usually there are 8 to 32 of those, versus thousands of fully parallel data flows.

Basically, an Intel core will totally outperform a GPU if you use the GPU as independent cores. I highly doubt the signature process is easy to map to vectors.

I would estimate a 4-GPU mining rig would do about as much as a single high-end Intel CPU. Also, there is the latency to start tasks and to get data in and out of the GPU. For a mass-validation scenario, a pipeline can be created to work around part of this, but it is a lot of work, and unless the signing can be fully vectorized, moot.

I haven't looked at the secp256k1 signing, but if it is anything like curve25519, each signature ends up executing different paths, so first it needs to be changed to a fixed algo, like changing qsort to a sorting network.
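
To make the qsort-to-sorting-network analogy concrete, here is a generic sketch (illustrative only, not the actual secp256k1 or curve25519 code) of turning a data-dependent branch into a branchless conditional swap, so every SIMD lane executes the exact same instruction sequence:

Code:
#include <stdint.h>

/* Branchless conditional swap: 'swap' is 0 or 1; the mask trick avoids
   any data-dependent branch, so all lanes follow one code path. */
static void cond_swap(uint32_t *a, uint32_t *b, uint32_t swap)
{
    uint32_t mask = (uint32_t)0 - (swap & 1); /* all-ones iff swap == 1 */
    uint32_t t = mask & (*a ^ *b);
    *a ^= t;
    *b ^= t;
}

/* A 4-element sorting network: a fixed sequence of five compare/swaps,
   the same for every possible input, unlike qsort's data-driven recursion. */
static void sort4(uint32_t v[4])
{
    cond_swap(&v[0], &v[1], v[0] > v[1]);
    cond_swap(&v[2], &v[3], v[2] > v[3]);
    cond_swap(&v[0], &v[2], v[0] > v[2]);
    cond_swap(&v[1], &v[3], v[1] > v[3]);
    cond_swap(&v[1], &v[2], v[1] > v[2]);
}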
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
@jl777 @Justus Ranvier I am running GPU rigs with six cards per board. I bet that with some very powerful cards it could potentially validate very quickly, though it definitely does seem a bit excessive for the transaction volume we have now, especially considering that the gains for a pool implementing something like this would be very marginal.

It is a cool idea, and it definitely shows that there is so much possibility countering the false meme that "Bitcoin can not scale". Bitcoin can scale, and if it does not, it will simply be out-competed and made obsolete by alternative cryptocurrencies that can and are willing to scale. Since the security, value, and utility of cryptocurrencies increase with scale, this virtuous cycle was also the original vision for Bitcoin; it is this vision that Core is now diverging from.

@rocks @Justus Ranvier You made a good point in regard to the blocksize limit: demand will stay aligned with the artificial scarcity enforced through the limit, so even if Bitcoin loses its dominant position in the cryptocurrency space, these Core developers can always still claim they did not need to increase the blocksize limit. I suppose their egos would only be threatened if the blocksize limit were actually increased and Bitcoin were allowed to thrive, proving them wrong.
 

jl777

Active Member
Feb 26, 2016
279
345
Code:
    /* R = nonce*G */
    secp256k1_ecmult_gen(ctx, &rp, nonce);
    /* convert R to affine coordinates and normalize */
    secp256k1_ge_set_gej(&r, &rp);
    secp256k1_fe_normalize(&r.x);
    secp256k1_fe_normalize(&r.y);
    /* r = R.x serialized and reduced mod the group order */
    secp256k1_fe_get_b32(b, &r.x);
    secp256k1_scalar_set_b32(sigr, b, &overflow);
    if (secp256k1_scalar_is_zero(sigr)) {
        /* P.x = order is on the curve, so technically sig->r could end up zero, which would be an invalid signature. */
        secp256k1_gej_clear(&rp);
        secp256k1_ge_clear(&r);
        return 0;
    }
    if (recid) {
        /* recovery id: bit 1 = R.x overflowed the order, bit 0 = parity of R.y */
        *recid = (overflow ? 2 : 0) | (secp256k1_fe_is_odd(&r.y) ? 1 : 0);
    }
    /* s = nonce^-1 * (message + r*seckey) */
    secp256k1_scalar_mul(&n, sigr, seckey);
    secp256k1_scalar_add(&n, &n, message);
    secp256k1_scalar_inverse(sigs, nonce);
    secp256k1_scalar_mul(sigs, sigs, &n);
The above is the inner workings of the secp256k1 signing, and looking a bit into the various functions, they actually look SIMD-able. You would load a large number of hashes to be signed, then all the GPU cores would execute identical instructions on different data and spit out the sigs.

I was wrong in my assumption about the internal code flow; an OpenCL version can probably be made, and I would think that if you are generating 100+ sigs it will be faster on a GPU, but that is just a guess and depends on GPU specifics, motherboard bus bandwidth, etc.
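
For shape, here is a minimal CPU-side sketch of that batching idea using libsecp256k1's public API (the sign_batch helper and its array layout are just illustrative): a flat loop of identical operations over different data, which is exactly what a GPU kernel would parallelize, one signature per lane.

Code:
#include <secp256k1.h>

/* Hypothetical batch-signing helper: every iteration runs the same
   instruction sequence on different data, the SIMD-friendly shape. */
int sign_batch(const unsigned char (*digests)[32],
               const unsigned char (*seckeys)[32],
               secp256k1_ecdsa_signature *sigs, int n)
{
    secp256k1_context *ctx = secp256k1_context_create(SECP256K1_CONTEXT_SIGN);
    int i, ok = 1;
    for (i = 0; i < n; i++) {
        /* NULL nonce function selects the default (RFC 6979) */
        ok &= secp256k1_ecdsa_sign(ctx, &sigs[i], digests[i], seckeys[i],
                                   NULL, NULL);
    }
    secp256k1_context_destroy(ctx);
    return ok; /* 1 only if every signature succeeded */
}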

With on-CPU GPUs coming, this bodes well!
Also, the iguana parallel-sync data is well suited to GPU searching.

What scalability problem?
 
