Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
this is why i don't like the USD even in an upcoming recession:

Fully 53% of respondents said the dollar was the most crowded trade (short commodities was a distant second at 16%). That’s up sharply from just over 30% last month.

http://www.zerohedge.com/news/2015-12-15/long-usd-trade-has-never-been-more-crowded

10yr weekly $DXY. looks like a double top to me. and if you need a "reason" it'd be: what do you think the Fed is going to do in a deflationary recession/depression? i highly doubt the US is going to be so lucky as to have *both* the USD and UST's rise like they did in 2008-9. we've abused the power of the USD way too much with TARP, bailouts, corruption, the military industrial complex, and QE:


[doublepost=1450725083][/doublepost]from the same zerohedge article linked above. note first item:


[doublepost=1450725178][/doublepost]
 
  • Like
Reactions: solex and majamalu

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
besides, what is central banking all about?

they're all about abusing the currency, period. it's their policy tool (their only tool), which they use as a hammer for every nail they see. it's about transferring the value of hard work from those who do it to those who don't. IOW, from workers to speculators; from the middle/lower classes to the financial elites, which is what Wall Street is all about.

imo, we're going to do a Japan.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@bucktotal

thank goodness. thought i might have to go back to trolling the trolls.
[doublepost=1450729694][/doublepost]how brutal. though not for those following this thread:



ETF Investors Have Spent $24 Billion Trying to Call a Bottom in Oil

No matter which path they chose, the trade has been a disaster, as seen in the chart of USO and XLE below.

In 2013, investors went wild trying—and also failing—to call a bottom in gold miners.

When you see a chart like that, it means investors are losing their shirts.

http://www.bloomberg.com/news/articles/2015-12-21/etf-investors-have-spent-24-billion-trying-to-call-bottom-in-oil

 
Last edited:
  • Like
Reactions: majamalu

rocks

Active Member
Sep 24, 2015
586
2,284
@Aquent
No one should use this library. There is no way to gain a 700% performance improvement on straightforward algebraic calculations on modern systems unless you fundamentally change the calculations that are performed. Modern compilers do an excellent job of optimizing cryptography-type functions, which consist of straightforward arithmetic instructions inside a handful of loops. And modern hardware with out-of-order execution and branch prediction will pick up any remaining missed optimizations and ensure the hardware pipeline remains full.

The only way to get a 700% performance gain is to change the very nature of the algorithms, and in cryptography that breaks the entire security model. Period.

Greg simultaneously claims that an unprecedented amount of testing has been done while also saying the techniques they used have not been seen elsewhere. Those two statements are not compatible. An unprecedented amount of testing would mean open academic testing and probing for flaws over ~20 years; that is the level of testing new crypto algorithms are put through.

If a flaw is found here (and there almost always is a flaw found eventually) then that will be worse for Bitcoin than RBF and 1MB combined, because it breaks all trust.

Greg's stated belief that performance is critical to the security model is flat out wrong. Trust in the algorithms is critical to the security model; if performance is a problem, just scale out to more cores, which is what all applications do.

Given what I've read on this topic, I would never consider using the newer crypto toolkit they're providing; well, maybe after 20 years of testing and verification. I guess that means I will stay with an older client or a forked client, which I know will protect my coins. But if just one person gets their coins stolen as a result of this, then the value of those coins will probably be zero...
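Coming back to the point above about scaling out to more cores: signature checks on different transactions are independent, so they parallelise trivially across a worker pool. Here is a minimal Python sketch of that pattern; expensive_verify is just a CPU-bound stand-in I made up for a real ECDSA check, not any library's actual API.

[code]
# Sketch of the "scale out to more cores" point: independent checks fan out
# across a process pool. expensive_verify() is a made-up CPU-bound stand-in
# for a real signature check; only the parallelisation pattern matters here.
import hashlib
import os
import time
from multiprocessing import Pool

def expensive_verify(payload: bytes) -> bool:
    h = payload
    for _ in range(10000):                 # stand-in for a costly verification
        h = hashlib.sha256(h).digest()
    return len(h) == 32                    # shape of a pass/fail result

if __name__ == "__main__":
    batch = [i.to_bytes(8, "big") for i in range(128)]   # "transactions"

    t0 = time.perf_counter()
    serial_ok = all(map(expensive_verify, batch))
    t1 = time.perf_counter()

    with Pool(os.cpu_count()) as pool:     # one worker per core
        parallel_ok = all(pool.map(expensive_verify, batch))
    t2 = time.perf_counter()

    print(f"serial: {t1 - t0:.2f}s  parallel: {t2 - t1:.2f}s  "
          f"same result: {serial_ok == parallel_ok}")
[/code]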
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@Peter R and everybody else.

One sentence got my attention during the Scaling Bitcoin conference in Hong Kong, and I've been thinking about it ever since.

In jtoomim's presentation about BIP 101 test results, he says this:

"We need to get the Chinese pools to move their block creation out of the country. They can stick the coinbase transaction into the header in China, but everything else can be done outside of China. This would only require 1 kilobyte of traffic through the Great Firewall, without increasing or decreasing block size."

This of course goes both ways: Western miners can/should also have nodes in China.

Is this a good solution to the problem of the great firewall?

I don't think miners need to have everything handed to them on a silver platter as a level playing field. It never is. They have to make all kinds of considerations, from electricity price and capacity, local weather, insulation of the building, rent of property, deals with ASIC manufacturers, transportation of hardware, workforce, number and size of windows in the building... the list goes on and on.

At the same time, it's a problem if the playing field is too uneven due to the firewall.

So... is jtoomim's suggestion a reasonable trick to solve the block propagation issue?
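For anyone wondering why jtoomim's number is so small: the 80-byte block header commits to every transaction through the merkle root, and the coinbase is the leftmost leaf, so only the header fields, the coinbase itself and a short merkle branch ever need to cross the firewall; the rest of the block can be assembled on the other side. A rough Python sketch of that commitment structure (the txids below are made up, and this is just the textbook merkle construction, not any pool's actual protocol):

[code]
# Sketch: why only the coinbase + a merkle branch must cross a slow link.
# The header commits to all txs through a double-SHA256 merkle root; given
# the coinbase txid and its branch, the root is recomputable anywhere.
import hashlib

def dsha256(b):
    return hashlib.sha256(hashlib.sha256(b).digest()).digest()

def merkle_root(txids):
    level = txids[:]
    while len(level) > 1:
        if len(level) % 2:            # Bitcoin duplicates the last hash
            level.append(level[-1])
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

def branch_for_first_leaf(txids):
    # sibling hashes needed to link the coinbase (leaf 0) to the root
    branch, level = [], txids[:]
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])
        branch.append(level[1])       # sibling of the leftmost node
        level = [dsha256(level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return branch

def root_from_coinbase(coinbase_txid, branch):
    h = coinbase_txid
    for sibling in branch:            # the coinbase is always the left child
        h = dsha256(h + sibling)
    return h

# made-up txids standing in for a block's transactions
txids = [dsha256(i.to_bytes(4, "little")) for i in range(2000)]
branch = branch_for_first_leaf(txids)

assert root_from_coinbase(txids[0], branch) == merkle_root(txids)
# 2000 txs -> an 11-hash branch (352 bytes) + 80-byte header + the coinbase
print(f"branch length: {len(branch)} hashes ({32 * len(branch)} bytes)")
[/code]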
 
  • Like
Reactions: majamalu

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
No one should use this library.
hmmm. given his unexplained disappearance, this seriously needs to be considered.

isn't it already released in a working implementation? pwuille also needs to be confronted with this.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
honest question: why does a particular feature, judged controversial if introduced by a hard fork, magically become accepted as uncontroversial if deployed through a soft fork? (e.g. SegWit)

to put it another way, if I were able to find a way to introduce a max block size increase through a soft fork, would such a change become uncontentious?

the latter is a rhetorical question to me.

I spent quite a bit of time thinking about the above questions.

the only answer I was able to find is: "control".

Core devs are able to somewhat influence miner behaviour (as the Scaling Bitcoin HK miners panel has shown); on the other hand, they have almost zero control over the rest of the ecosystem, the economic majority if you will.
[doublepost=1450731580][/doublepost]
hmmm. given his unexplained disappearance, this seriously needs to be considered.

isn't it already released in a working implementation? pwuille also needs to be confronted with this.
I share @rocks' worries.

@cypherdoc 0.12 will be the first release where openssl will be ditched in favour of pwuille's libsecp256k1.

If memory serves, it was previously used only for signing (0.11); in 0.12 it will be used both for signing and validation.
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
of course introducing a blocksize increase via a softfork would still be contentious, as Core dev would reject it b/c it would interfere with their alternative scaling plans of LN & SC's.

pwuille is terrified to do SW as a hardfork b/c he is afraid it might expose his lack of support, with the wider community not upgrading to his new version, which risks him ending up as a core dev of a small, poorly supported fork. imo, it's a reflection of his deeper worry that he is in fact not doing what is broadly viewed as the best thing for a community that instead wants bigger blocks. therefore, he feels a need to "hide" his SW by forcing forward compatibility via a softfork. otoh, if he were confident he was doing what was best for the economic majority, he wouldn't hesitate to risk his reputation and control of Core by doing a hardfork, b/c we would all be likely to come along by actively upgrading.

nope, that's not happening.
 
Last edited:

rocks

Active Member
Sep 24, 2015
586
2,284
hmmm. given his unexplained disappearance, this seriously needs to be considered.

isn't it already released in a working implementation? pwuille also needs to be confronted with this.
The standard for testing in the cryptography space is literally decades of testing and probing. Whenever a new standard is being decided on, several existing options are put on the table as candidates for years while everyone tries to break them (and these are drawn from the pool of the best known candidates, which have already been tested for years).

For an algorithm to make it as a standard, it will be publicly attacked for over a decade. That is standard testing. Greg's unprecedented testing would have to mean something more, but it is actually much less. I always become worried when people present their own work as thoroughly tested when it plainly has not been tested at the level required, because that means they are overestimating how strong their work is. Maybe it's fine, maybe it's not. That is too weak a guarantee, IMHO.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@rocks

you are absolutely right.

since Greg's disappearance, we have an entirely new situation with libsecp256k1. even though pwuille was involved, Greg flat out admits that he personally wrote large swaths of the code, which as you point out needs extensive testing and review. it's not enough for core devs to do it; it needs serious worldwide academic and industry scrutiny over long periods of time to be acceptable.
[doublepost=1450732514][/doublepost]just wanna fully copy this here by nullc:

Libsecp256k1 has many custom algorithms, it would not have this performance otherwise.


The group law and constant time group law are algebraic optimizations not known elsewhere, the particular windowing construction is not implemented elsewhere, we use an algebraic optimization I invented to eliminate a modular inversion, we use a curve isomorphic trick invented by dettman to allow using the faster gej+ge adds inside the exponentiation ladder, and so on. Some of these are specific to secp256k1 (or at least j-invariant 0 curves), some are generic-- but in all these cases they were first implemented in this library.


CT is fairly 'boring' by comparison.


If you're going to fuel trolling and attacks, at least get the details right.


What our dear OP sockpuppet, and you both miss is that for Bitcoin performance is a security consideration; because without sufficient performance the decentralized security arguments for the system will fail. There are risks in libsecp256k1, though we've done an unprecedented amount of review, testing, and analysis to mitigate them; just as there are risks that OpenSSL is not consistent with itself or other implementations. There are-- in our opinion-- even greater risks in not using it: We've been anticipating this improvement for some time-- counting on it to keep up with the growth of the blockchain, and it's overdue... and we think our work is also at this point better reviewed and tested than the part of OpenSSL that Bitcoin Core was previously using for this (in particular, our tests allowed Pieter to find a serious vulnerability in OpenSSL).
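to put some concrete (toy) numbers on the "gej+ge adds" and "eliminate a modular inversion" items above: the textbook version of that idea is to run the scalar-multiplication ladder in Jacobian coordinates with an affine base point, so the expensive field inversion happens once at the end instead of once per point addition. the pure-Python sketch below uses the secp256k1 parameters but is only my own toy illustration of that general class of optimization, not code from libsecp256k1; it ignores constant-time behaviour, windowing, endomorphisms, etc.:

[code]
# Toy comparison: affine double-and-add (one inversion per group operation)
# vs. a Jacobian accumulator with an affine base point (one inversion total).
# Same function computed both ways; only the intermediate algebra differs.
import time

P  = 2**256 - 2**32 - 977        # the secp256k1 field prime
GX = 0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798
GY = 0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8

def inv(a):
    # field inversion via Fermat's little theorem: ~hundreds of
    # multiplications, which is why avoiding one per addition matters
    return pow(a, P - 2, P)

# --- affine arithmetic: one inversion inside every group operation ---------
def affine_add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    x1, y1 = p1
    x2, y2 = p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                              # point at infinity
    if p1 == p2:
        lam = 3 * x1 * x1 * inv(2 * y1) % P      # a = 0 on this curve
    else:
        lam = (y2 - y1) * inv(x2 - x1) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def mult_affine(k, pt):
    acc = None
    for bit in bin(k)[2:]:                       # plain double-and-add
        acc = affine_add(acc, acc)
        if bit == "1":
            acc = affine_add(acc, pt)
    return acc

# --- Jacobian accumulator + affine base point: no inversion in the loop ----
def jac_double(pt):
    X, Y, Z = pt
    S = 4 * X * Y * Y % P
    M = 3 * X * X % P                            # a = 0 term drops out
    X3 = (M * M - 2 * S) % P
    Y3 = (M * (S - X3) - 8 * Y**4) % P
    return X3, Y3, 2 * Y * Z % P

def jac_add_affine(p1, p2):
    # mixed "Jacobian + affine" addition; assumes distinct, non-inverse
    # operands, which holds throughout this ladder for the scalar below
    X1, Y1, Z1 = p1
    x2, y2 = p2
    U2 = x2 * Z1 * Z1 % P
    S2 = y2 * Z1 * Z1 * Z1 % P
    H = (U2 - X1) % P
    R = (S2 - Y1) % P
    H2 = H * H % P
    H3 = H * H2 % P
    X3 = (R * R - H3 - 2 * X1 * H2) % P
    Y3 = (R * (X1 * H2 - X3) - Y1 * H3) % P
    return X3, Y3, Z1 * H % P

def mult_jacobian(k, pt):
    acc = (pt[0], pt[1], 1)
    for bit in bin(k)[3:]:                       # top bit consumed by the init
        acc = jac_double(acc)
        if bit == "1":
            acc = jac_add_affine(acc, pt)
    z = inv(acc[2])                              # the single inversion, at the end
    return acc[0] * z * z % P, acc[1] * z * z * z % P

k = int.from_bytes(b"an arbitrary 32-byte test scalar", "big")

t0 = time.perf_counter()
res_affine = mult_affine(k, (GX, GY))
t1 = time.perf_counter()
res_jacobian = mult_jacobian(k, (GX, GY))
t2 = time.perf_counter()

assert res_affine == res_jacobian                # same answer, different cost
print(f"affine:   {(t1 - t0) * 1e3:.1f} ms")
print(f"jacobian: {(t2 - t1) * 1e3:.1f} ms")
[/code]

the gap is far larger in optimized C, where a constant-time inversion costs hundreds of times more than a field multiplication.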
 
  • Like
Reactions: Windowly

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
to put it another way, if I were able to find a way to introduce a max block size increase through a soft fork, would such a change become uncontentious?
I don't assume anything Core and Co. say about "consensus," "contentiousness," or "hard/soft forks" serves as anything more than convenient ad hoc excuses. I don't claim this is a conscious strategy; for some people at some times it may be, but it could easily be unintentional - a natural result of people settling into power positions.

Power doesn't just corrupt in the sense of changing people's incentives. Power corrupts in an even more pernicious way: it distorts your own reasoning about your actions. If you're the Core maintainer and hope that certain changes aren't pushed through, the idea that consensus is required for changes starts to sound like a reasonable and obvious principle, regardless of whether it is, because you have the power to make it so. Then if you want a change pushed through but someone is holding out, it suddenly starts to seem reasonable and obvious that certain exceptions should apply to the consensus requirement. Not because it is, but because you can make it so.

If it becomes more convenient to interpret "consensus" in a more relaxed way, in terms of the numbers and scope of participants, you will tend to do so and feel it is reasonable, without noticing how your bias from being in a power position is affecting your judgment. Power corrupts your mind itself, by diminishing your ability to adjust for your own bias. You start to reinterpret words as you see fit, not noticing the contradictions. This corrupts your own judgment systematically. This game can continue for a long time, creating more and more "obvious and reasonable" exceptions to the stated rules.

This hides from everyone, often even yourself, that you are merely exercising your own judgment under the guise of impartiality. Your decisions will always be perfectly objective and in accordance with the rules when you get to interpret the rules, even if you are simply dictating policy as you see fit. And that holds even once you get to the stage where people try to force you to have hard and fast rules. You never want something concrete like "5 out of 5 of the Core committers must agree, or the change will not be merged." Instead you can do something like this, keeping it as vague as possible and adjusting things as needed:



You can do better still if you have various people deliver your pronouncements on different aspects of these rules. That way, if anyone is pinned down on some contradiction, they can always say that isn't their understanding of the rules and you'll have to ask the other guy.

Even when the contradictions become too obvious to ignore, you still have an easy way out: just say you're changing policy (to something more "reasonable" of course, hopefully with a lot of vagueness and shaped most conveniently for the agenda you want to push, while still appearing plausibly objective).

It's a tiresome game, and it ends when people get fed up and decide to fork Core off.
 
Last edited:

molecular

Active Member
Aug 31, 2015
372
1,391
@Aquent
No one should use this library. There is no way to gain a 700% performance improvement on straightforward algebraic calculations on modern systems unless you fundamentally change the calculations that are performed. Modern compilers do an excellent job of optimizing cryptography-type functions, which consist of straightforward arithmetic instructions inside a handful of loops. And modern hardware with out-of-order execution and branch prediction will pick up any remaining missed optimizations and ensure the hardware pipeline remains full.

The only way to get a 700% performance gain is to change the very nature of the algorithms, and in cryptography that breaks the entire security model. Period.

Greg simultaneously claims that an unprecedented amount of testing has been done while also saying the techniques they used have not been seen elsewhere. Those two statements are not compatible. An unprecedented amount of testing would mean open academic testing and probing for flaws over ~20 years. That is the level of testing new crypto algorithms are put through.

If a flaw is found here (and there almost always is a flaw found eventually) then that will be worse for Bitcoin than RBF and 1MB combined, because it breaks all trust.
I think you might be overreacting. It's not a new crypto algorithm being invented. It's just an implementation of existing stuff (EC math, EC signature generation, checking, ...). I'm no expert, but it makes sense to me that specific properties of the specific curve used in bitcoin can be exploited, in addition to other optimization techniques. I have been optimizing code for a long time, and there are things a compiler can't do automatically that don't change the function being calculated and yet yield massive performance gains.

There's a simple explanation why those optimizations haven't been done to this extent before sipa did them for bitcoin: there just hasn't been the need to justify the amount of work. No other application needed such a massive amount of EC math done and could have gained such a huge upside from it. Especially not for the secp256k1 curve.
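One concrete example of the kind of algebraic trick a compiler can never find on its own, and which doesn't change what is computed: Montgomery's batch-inversion identity, which replaces n field inversions with a single inversion plus roughly 3(n-1) multiplications. The Python sketch below is generic and purely illustrative; I'm not claiming it is what libsecp256k1 does internally.

[code]
# Montgomery's batch-inversion trick: invert many field elements with ONE
# modular inversion plus ~3(n-1) multiplications. Same results as inverting
# each element separately; a compiler cannot discover this because it rests
# on algebra (a^-1 = (a*b)^-1 * b), not on instruction scheduling.
import time

P = 2**256 - 2**32 - 977                   # secp256k1 field prime

def inv(a):
    return pow(a, P - 2, P)                # Fermat inversion: expensive

def batch_inverse(values):
    # prefix[i] = values[0] * ... * values[i]  (mod P)
    prefix, acc = [], 1
    for v in values:
        acc = acc * v % P
        prefix.append(acc)
    acc = inv(acc)                          # the only inversion
    out = [0] * len(values)
    for i in range(len(values) - 1, 0, -1):
        out[i] = acc * prefix[i - 1] % P    # peel one factor off the product
        acc = acc * values[i] % P
    out[0] = acc
    return out

values = [pow(3, i + 1, P) for i in range(4096)]   # arbitrary nonzero elements

t0 = time.perf_counter()
naive = [inv(v) for v in values]
t1 = time.perf_counter()
batched = batch_inverse(values)
t2 = time.perf_counter()

assert naive == batched                     # identical outputs
print(f"naive: {(t1 - t0) * 1e3:.1f} ms   batched: {(t2 - t1) * 1e3:.1f} ms")
[/code]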
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
@molecular: Thanks for running a BU. I'm going to add signatures to the BU download page tonight and test the 32-bit build. At that point we should be good for all Linux users.

(Anyone else want to do a gitian build? @awemany, @sickpig, @bitcartel -- if so, verify the sha256sum and then use the Bitcoin client to sign a message)

@sickpig RE hard vs soft fork controversy: I posted how you could soft-fork BIP-101 here using the same mechanism as SW: https://bitco.in/forum/threads/soft-fork-bip101.461/. I think that the idea that this is possible needs to get a lot more visibility so people really understand the power of using the ANYONECANPAY "trick" and how it makes practically any change soft-forkable.
 

rocks

Active Member
Sep 24, 2015
586
2,284
I think you might be overreacting. It's not a new crypto algorithm being invented. It's just an implementation of existing stuff (EC math, EC signature generation, checking, ...). I'm no expert, but it makes sense to me that specific properties of the specific curve used in bitcoin can be exploited, in addition to other optimization techniques. I have been optimizing code for a long time, and there are things a compiler can't do automatically that don't change the function being calculated and yet yield massive performance gains.

There's a simple explanation why those optimizations haven't been done to this extent before sipa did them for bitcoin: there just hasn't been the need to justify the amount of work. No other application needed such a massive amount of EC math done and could have gained such a huge upside from it. Especially not for the secp256k1 curve.
Assembly-level optimizations that do not change an algorithm generally yield very little benefit over modern compilers and hardware (though they can yield some). Usually, in order to get a 2x performance benefit (let alone the 7x being claimed), you need to make significant changes to the algorithm itself. And I don't mean unrolling loops (Intel's branch prediction is essentially perfect now for loops of fewer than 64 iterations); I mean changing what is inside and outside of the critical loops themselves.

From a code perspective, the EC algorithm is very straightforward to write and implement. To get a 7x performance improvement (on a single thread), I'm guessing the only way to do so is to eliminate some of the calculations being performed, i.e. reduce the number of multiply and addition instructions by moving some from inside a loop to outside a loop. But that is an algorithm change.

I've always found this explanation of ECDSA to be the most straightforward and comprehensive. The read only takes an hour or so, and I'd suggest anyone interested in Bitcoin's public key algorithms read it.
http://kakaroto.homelinux.net/2012/01/how-the-ecdsa-algorithm-works/

Once you understand the math, it is very straightforward to write a C++ version of ECDSA. For any normal implementation, I can't imagine there exist 700% worth of performance gains to be had by tweaking. I can only imagine significant changes to the computations themselves, which is dangerous.

I'd like to understand exactly what is different in the new version compared to a standard ECDSA implementation. If the changes are tweak-level optimizations that leave the main computation intact, then great. If they changed the nature of the computations themselves, then everyone should have a problem with that. But no one seems to have shown this one way or the other.

This is such a critical layer of Bitcoin that there should be a very simple and easy-to-follow FAQ for what is being done here and exactly where the 700% came from.
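For reference, here is roughly how small the "textbook" computation is: a bare-bones ECDSA sign/verify over secp256k1 in pure Python. This is deliberately naive (affine math, OS-random nonce, no constant-time hardening, no DER encoding), so everything a production library adds on top of this is exactly the optimization and hardening layer in question.

[code]
# Bare-bones "textbook" ECDSA over secp256k1. A sketch, not production code:
# naive affine arithmetic, random nonce, no constant-time hardening.
import hashlib
import secrets

P = 2**256 - 2**32 - 977                                    # field prime
N = 0xFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFFEBAAEDCE6AF48A03BBFD25E8CD0364141
G = (0x79BE667EF9DCBBAC55A06295CE870B07029BFCDB2DCE28D959F2815B16F81798,
     0x483ADA7726A3C4655DA4FBFC0E1108A8FD17B448A68554199C47D08FFB10D4B8)

def add(p1, p2):
    if p1 is None: return p2
    if p2 is None: return p1
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2 and (y1 + y2) % P == 0:
        return None                                         # point at infinity
    lam = ((3 * x1 * x1) * pow(2 * y1, -1, P) if p1 == p2
           else (y2 - y1) * pow(x2 - x1, -1, P)) % P
    x3 = (lam * lam - x1 - x2) % P
    return x3, (lam * (x1 - x3) - y1) % P

def mult(k, pt):                                            # double-and-add
    acc = None
    for bit in bin(k)[2:]:
        acc = add(acc, acc)
        if bit == "1":
            acc = add(acc, pt)
    return acc

def h(msg):
    return int.from_bytes(hashlib.sha256(msg).digest(), "big") % N

def sign(privkey, msg):
    while True:
        k = secrets.randbelow(N - 1) + 1                    # per-signature nonce
        r = mult(k, G)[0] % N
        s = (h(msg) + r * privkey) * pow(k, -1, N) % N
        if r and s:
            return r, s

def verify(pubkey, msg, sig):
    r, s = sig
    if not (0 < r < N and 0 < s < N):
        return False
    w = pow(s, -1, N)
    u1, u2 = h(msg) * w % N, r * w % N
    pt = add(mult(u1, G), mult(u2, pubkey))
    return pt is not None and pt[0] % N == r

priv = secrets.randbelow(N - 1) + 1
pub = mult(priv, G)
sig = sign(priv, b"gold collapsing, bitcoin up")
print(verify(pub, b"gold collapsing, bitcoin up", sig))     # True
print(verify(pub, b"tampered message", sig))                # False
[/code]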
 
Last edited:

sickpig

Active Member
Aug 28, 2015
926
2,541
@theZerg mine was a rhetorical question, made just to prove the point that @Zangelbert Bingledack described so well in the post above.

That said I find your idea brilliant.

Nevertheless, sometimes it is better to use a hard fork, so that the change becomes active for the whole network as soon as the activation rule is triggered.

E.g. SegWit will give you a 75% discount on signature (witness) data, meaning you can fit as many txs into a 1MB SegWit-enabled block as into a 1.6-2MB normal block; the exact figure depends on the composition of the block in terms of tx types.

So, hypothetically, if wallet/payment processor/full node adoption takes 6 months to reach 50% after the SegWit soft-fork activation, this means that actual network capacity at that point will have been increased by a factor of:

1.75 x 0.5 + 1 x 0.5 = 1.375
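Spelled out as a general formula (treating the 1.75x figure as the assumed effective capacity of a SegWit block, i.e. the midpoint of the 1.6-2x range above):

[code]
# Effective network capacity as a weighted average of SegWit and non-SegWit
# usage. segwit_factor is the assumed capacity of a SegWit block relative to
# a 1MB block; adoption is the fraction of txs that actually use SegWit.
def effective_capacity(segwit_factor, adoption):
    return segwit_factor * adoption + 1.0 * (1.0 - adoption)

print(effective_capacity(1.75, 0.5))   # 1.375, as in the calculation above
print(effective_capacity(2.0, 1.0))    # best case: 2.0 at full adoption
[/code]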

re deterministic build: I'll be able to work on it tomorrow morning (UTC).