Gold collapsing. Bitcoin UP.

satoshis_sockpuppet

Active Member
Feb 22, 2016
776
3,312
I don't think designing the perfect, egalitarian pow algorithm is possible or necessary. "Good enough" is why bitcoin works (worked).

The current mining algorithm combined with a stagnating bitcoin economy* favors the big Chinese mining farms. If the Chinese mining farms don't serve the users, they should be abandoned for a forked bitcoin with a new pow and new players. If they serve the users, I don't care whether it's Chinese or American or Mars people who are mining, or why it's profitable for them. But if the communist party took over the Chinese mining farms tomorrow and made it their Chinese people's coin, we would have to fork immediately. The question is whether blockstream-core is already bad enough to make this step reasonable. (There are astonishing similarities between these two entities..)

*I firmly believe that we would have a much more diverse mining community if something like bip101 had been implemented. If Bitcoin were allowed to grow, there would be so many opportunities for small mining devices everywhere. But nobody invests in a 3 tps blockstream database secured by the communist party.

p.s. I already see Adam Back talking to his lawyer friends, asking them to please write an article arguing that changing the pow is high treason and punishable by death.

p.p.s The forum spell checker marks "bitcoin" as well as "Bitcoin". Is there no right way to write it? ;)
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
Namecoin being a better-known example of this. It was a conventional fifty-one percent attack and the project was simply abandoned.
Just a quick fact check here. I believe they had a problem where one pool had the majority of the hashpower, but to my knowledge the project is still ticking along quite nicely. In fact I'd say the argument that a single pool having 51%+ hashing power 'is always a bad thing' has been disproven by Namecoin.
Project seems very active https://zmoazeni.github.io/gitspective/#/timeline/namecoin
 

rocks

Active Member
Sep 24, 2015
586
2,284
At the end of the day, electricity cost and equipment cost are all that matters.
This is exactly my point, and different algorithms when fully optimized come to different balances of electricity cost vs. equipment cost. SHA256 mining when optimized creates a cost structure that is dominated by electricity costs (>90%), which puts those with low-cost electricity at a significant advantage. Other algorithms when fully optimized will come to a different balance; if that balance is dominated by equipment costs, we have a fairer system.
specialized hardware always > generalized hardware when it comes to such things. Even if you came up with something that was absolutely ASIC resistant, gains could still be had vs a general computer system by tearing out everything else that is irrelevant to the job at hand and bundling 100 of what's left on a die, 100 to a board.

If ASICs hadn't come along, it would have been GPU farms. If GPUs had not been viable, it would have been CPU farms. There was, and is, just too much money on the table.
True, but as long as the "economic unit" is small enough then it is still profitable for small miners to participate.

Let's use your example of large-scale GPU farms. If, let's say, the GPU were the optimal efficiency point, then yes, we would see some large-scale farms. But since the base unit is the GPU, small-scale miners with 1-2 GPUs could participate in the system as well.

SHA256 ASIC mining is the same: although there are large farms, the base unit is a 1U-sized board which small miners can compete with. The problem, though, is that unless you have cheap electricity you will be at a disadvantage.
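To make the cost-structure point concrete, here is a rough back-of-the-envelope sketch. All the hardware prices, wattages and lifetimes below are made-up illustrations of mine, not real figures:

```python
# Toy cost model: what share of total mining cost is electricity vs. amortized equipment?
# All numbers are made-up illustrations, not real hardware or market figures.

def cost_shares(hw_price, hw_lifetime_months, watts, kwh_price, months=12):
    """Return (electricity_share, equipment_share) of total cost over `months`."""
    equipment = hw_price * months / hw_lifetime_months   # straight-line amortization
    kwh = watts / 1000.0 * 24 * 30 * months               # approx. kWh consumed
    electricity = kwh * kwh_price
    total = equipment + electricity
    return electricity / total, equipment / total

# Hypothetical SHA256 ASIC: cheap per unit, power hungry -> electricity dominates,
# so a cheap power contract is the decisive advantage.
print(cost_shares(hw_price=500, hw_lifetime_months=24, watts=1300, kwh_price=0.06))

# Hypothetical equipment-heavy miner (e.g. for a memory-hard algorithm): pricey
# hardware, modest power draw -> equipment dominates and the electricity rate
# matters far less to who can compete.
print(cost_shares(hw_price=2500, hw_lifetime_months=24, watts=300, kwh_price=0.06))
```

The exact percentages don't matter; the point is that the algorithm determines which side of that split dominates.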
@bitcartel The reason is that mining will evolve towards ASICs anyway; it is inevitable. Unless an algorithm can be developed that is completely ASIC-proof (I would change my position if that were the case), but as far as I understand that is theoretically impossible. Therefore we need to embrace this as part of what a proof-of-work cryptocurrency is.
As someone who has ported multiple software algorithms to ASIC implementations, I have to disagree. There are most definitely code paths that do not translate well to a hardware implementation and are best done by a generic CPU core. Unless you have spent a lot of time in VHDL and Verilog it may seem as if everything can be ported, but there are significant limitations in what can be done in hardware blocks. If anyone who has spent significant time with VHDL or Verilog disagrees with that, I'm happy to dig into it with them and to provide code that I'd love to see ported efficiently.

I will take that discussion over to @bitcartel 's thread, but I have to strongly disagree that ASIC resistance is not possible.
 
  • Like
Reactions: freetrader

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
I think this exchange crystallizes what /r/bitcoin has degenerated into.

When asked if he supports Classic, Aaron Voisine gives a very thorough, well-reasoned answer:


Which is then met with this garbage:

lol, and they were championing Voisine when he first said he liked SW!

how fast those twits pounce.
 

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
Just a quick fact check here. I believe they had a problem where one pool had the majority of the hashpower, but to my knowledge the project is still ticking along quite nicely. In fact I'd say the argument that a single pool having 51%+ hashing power 'is always a bad thing' has been disproven by Namecoin.
Project seems very active https://zmoazeni.github.io/gitspective/#/timeline/namecoin
You are missing one important aspect of this: Namecoin is a merge-mined coin. Fifty-one percent attacks are not fatal, but they are certainly not good either. I know Namecoin is doing just fine; like I said, cryptocurrencies tend to refuse to die even when attacked or abandoned.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
You can design a chip to do anything; $100 buys you billions of transistors at scale today. I'm not sure how this would help.

I think what you are getting at is a dynamically changing POW that makes specialization hard. There was exploration of this in the alt space and there are methods. I think the right approach is not a fixed algorithm change which can be pre-optimized for, but a dynamically changing algorithm based on the hash of the previous block, which makes changes unpredictable. This plus off-chip communication would create a very different economic situation than we have today.
I've mentioned this a few years back, but let me remind you all that it's impossible to make an "ASIC proof" algorithm because a CPU IS an ASIC. Bitcoin created a POW algorithm that was inefficient on a general-purpose CPU and then people built optimal ASICs for it. Next time, we build the POW algorithm such that the Intel Core i5 (and generally other general-purpose CPUs) IS the optimal ASIC to run it on. This will require an analysis of the real estate of the CPU and a set of arbitrarily chosen problems to exercise that real estate -- memory buses, caches, FPU, multimedia extensions, threading, GPU, etc. CPU benchmarking would be a great place to start pulling algorithms from...
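A minimal sketch of what I mean, assuming the previous block hash picks which kind of work runs. The stage choices, constants and sizes here are arbitrary illustrations of mine, not a worked-out proposal:

```python
# Toy PoW whose inner loop is selected by the previous block hash and deliberately
# exercises different parts of a general-purpose CPU (memory/cache, integer pipeline,
# FPU). Purely illustrative; every constant and stage choice is an assumption.
import hashlib
import struct

def memory_stage(seed, scratch_mb=4):
    # Pseudo-random walk over a scratchpad: stresses caches and memory buses.
    size = scratch_mb * 1024 * 1024 // 8
    pad = [(i * 2654435761) & 0xFFFFFFFF for i in range(size)]
    idx, acc = seed % size, 0
    for _ in range(200_000):
        acc = (acc + pad[idx]) & 0xFFFFFFFF
        idx = pad[idx] % size
    return acc

def integer_stage(seed):
    # Long dependent chain of integer ops: stresses the ALU pipeline.
    x = seed & (2**64 - 1)
    for _ in range(200_000):
        x = (x * 6364136223846793005 + 1442695040888963407) & (2**64 - 1)
    return x

def float_stage(seed):
    # Iterated logistic map: stresses the FPU.
    x = (seed % 997 + 1) / 1000.0
    for _ in range(200_000):
        x = 3.9999 * x * (1.0 - x)
    return struct.unpack('<Q', struct.pack('<d', x))[0]

STAGES = [memory_stage, integer_stage, float_stage]

def pow_hash(prev_block_hash: bytes, nonce: int) -> bytes:
    seed = int.from_bytes(
        hashlib.sha256(prev_block_hash + nonce.to_bytes(8, 'little')).digest(), 'little')
    stage = STAGES[seed % len(STAGES)]   # the previous block decides the work mix
    return hashlib.sha256(stage(seed).to_bytes(8, 'little')).digest()

# Example: hash a candidate and compare against a difficulty target as usual.
print(pow_hash(b'\x00' * 32, nonce=1).hex())
```

A real design would rotate through many more stages and size the scratchpad against actual cache hierarchies, but the selection-by-previous-block-hash mechanism is the point.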
@Norway: While the coming roundtable already has a reasonably good list of confirmed attendees, I find it a little unfortunate that in terms of Bitcoin dev, only Core is represented (Gavin and Jeff technically being there not under the Classic banner, XT not having any rep there AFAICT, likewise Unlimited).

@theZerg: any plans on someone from BU attending?

Also, while this is a roundtable in North America, I hope that it gets a little balance from the Asia Pacific region (big mining), and that it can shed its somewhat closed-off image in favor of a more open forum in future. We don't need a G8 meeting for crypto.
AFAIK no one was invited... I think BU is still pretty young for this old-boys'-network kind of thing...
 

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
So their "plan" is a hard fork by July 2017?

This is so ridiculous, I can't even take it seriously. I mean, do they expect everyone to just sit around passively for the next year and a half waiting on the Blockstream/core go-ahead?

So much can happen in that time, I can't imagine the signatories to the statement seriously expect things to play out that way. The miners who signed may not even be significant players by then.

The other thing to consider is that the details of the "planned" hard fork are completely unspecified. I suspect they will try to push some kind of "flex-cap" scheme that trades increased block size against difficulty or coinbase reward. They were pushing flex-cap openly a few months ago, but have been strangely silent on that as of late (perhaps because the miners won't like it?)

This "roundtable consensus" has made two things crystal clear:
1) The "statements" from miners have little value. Their "support" of Classic is obviously quite soft, so why should we think their "support" of this scheme is any more solid?
2) Blockstream/core are using stalling tactics and cannot be counted on to follow through on their "plan". We already knew this, but it is even clearer now.

So now we have more information. This is a good thing. It is not a time to get depressed, but instead to incorporate new information as we keep pursuing a path towards a better Bitcoin. Miners may not stick to their statements, but they are profit motivated. So the best way to influence their behavior is to appeal to their financial interest. If we really believe that Bitcoin can work, based on reason and evidence, then let's use what we learn to improve our efforts for positive progress.
 

dlareg

Member
Feb 19, 2016
39
202
I've noticed very often that those advocating the Core roadmap will use the term "we get," like "we get SegWit in April" and "we get a hard fork in 2017". The use of this particular phrase always strikes me as odd, like terminology a child would use. If we are good little boys and girls "we get to go to the mall" or "we get to go swimming". If we diligently run Core, we get SegWit in April. And we get a hard fork in 2017. But only if we're good little miners! It is a strange reinforcement of this idea that the Core devs know best and that they are our great and wise parents who are looking out for us and must be obeyed. Once noticed, it sticks out like a sore thumb.
 

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
This is exactly my point, and different algorithms when fully optimized come to different balances of electricity cost vs. equipment cost. SHA256 mining when optimized creates a cost structure that is dominated by electricity costs (>90%), which puts those with low-cost electricity at a significant advantage. Other algorithms when fully optimized will come to a different balance; if that balance is dominated by equipment costs, we have a fairer system.
That is a good point, though I am not convinced that a different POW would bring about higher equipment costs. I am not well informed enough to know these things, but I would have thought that the initial development costs would be high while the chips could then be mass-produced at a similar cost, keeping in mind that the complexity of the rest of the unit would not be that different. I am at least wise enough to recognize the limits of my knowledge here; I do not know enough about the intricacies of chip manufacturing and the economies of scale involved.

My tendency was to think that the more decentralized manufacturing is, the more competitive and larger an industry it becomes, and the more egalitarian it becomes because of the forces of competition. That would be more likely to bring about a situation where electricity costs are better balanced against equipment costs, like you say.

That said, I do not think mining will ever stop centralizing around places with low electricity cost; that is just how mining works, and it makes economic sense. I think the only times when mining is more about equipment cost than electricity cost are those rare opportunistic moments, like the golden days of Bitcoin mining, and I do not think that is a sustainable reality. This is even the case with GPU mining now: it only became highly profitable again when Ethereum was launched, which I would consider one of those rare opportunistic moments where it just takes time for more infrastructure to be built up before it returns to a competitive equilibrium in which most GPU mining is done in large GPU server farms. Once Ethereum switches to POS, or even before that I suspect, this will happen again to the present GPU mining space.
True, but as long as the "economic unit" is small enough then it is still profitable for small miners to participate.
I do see how this is relevant. We can still have small "economic units" with ASICs; we can even have smaller units with ASICs than are possible with CPUs and GPUs (remember the ASIC USB miners?). It all just depends on the market and demand for ASICs, which I think is still maturing and improving in Bitcoin.
Let's use your example of large-scale GPU farms. If, let's say, the GPU were the optimal efficiency point, then yes, we would see some large-scale farms. But since the base unit is the GPU, small-scale miners with 1-2 GPUs could participate in the system as well.
I think you are mistaken in this assessment. There is no difference in terms of economies of scale when it comes to mining. GPU mining is no different from ASIC mining: what matters is equipment cost and electricity cost, and in a healthy market these should be very similar for ASICs and GPUs.

The amount that average users contribute with a few GPUs is very small. I think the vast majority of GPU mining power comes from more serious mining operations like my own, as opposed to gamers doing some mining overnight with their GPUs, for instance. It is hard to get a good grip on these numbers, but I suspect that is already the case with GPU mining today.

Unless you live in the Netherlands, where I am residing at the moment: here, the more energy you consume as a business, the less tax you pay on it, and tax accounts for two thirds of the cost, by the way. This is an exception to the point that I am making, but I put it down more to social engineering than to free-market dynamics. ;)
SHA256 ASIC mining is the same: although there are large farms, the base unit is a 1U-sized board which small miners can compete with. The problem, though, is that unless you have cheap electricity you will be at a disadvantage.
Yes, my point exactly; I would argue that it is the same for GPU mining. I have a few GPU rigs set up next to my ASICs. They were on a three-year ROI and would not have been profitable if I did not have a competitive electricity rate; that was before Ethereum was released, and since then they have ROI'd mainly thanks to Ethereum. It was less than a year ago that I set them up. My point here is that even with GPU mining, this centralization towards cheap electricity still happens. Though I would consider many competing miners "centralized" in places with cheap electricity to still be decentralized.
As someone who has ported multiple software algorithms to ASIC implementations, I have to disagree. There are most definitely code paths that do not translate well to a hardware implementation and are best done by a generic CPU core. Unless you have spent a lot of time in VHDL and Verilog it may seem as if everything can be ported, but there are significant limitations in what can be done in hardware blocks. If anyone who has spent significant time with VHDL or Verilog disagrees with that, I'm happy to dig into it with them and to provide code that I'd love to see ported efficiently.
Sounds like you have far more technical knowledge on this subject than I have. Like I said, I am open to being proven wrong.
I will take that discussion over to @bitcartel 's thread, but I have to strongly disagree that ASIC resistance is not possible.
I think ASIC resistance is possible, just not a completely ASIC-proof algorithm, though I am sure that is what you meant. Like I said, I am open to being proven wrong on this point.

Maybe there should be two chain forks of Bitcoin, one with the original POW and one with the new POW, if people feel strongly enough about this. You can already see the beginnings of the splintering. Hold on to those original-chain keys, people; there might be multiple spin-offs in the future that could be worth a lot. By holding the original keys you hold keys to all of the chains spawned off from it, like branches of a tree. ;)
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
So their "plan" is a hard fork by July 2017?

This is so ridiculous, I can't even take it seriously. I mean, do they expect everyone to just sit around passively for the next year and a half waiting on the Blockstream/core go-ahead?

So much can happen in that time, I can't imagine the signatories to the statement seriously expect things to play out that way. The miners who signed may not even be significant players by then.

The other thing to consider is that the details of the "planned" hard fork are completely unspecified. I suspect they will try to push some kind of "flex-cap" scheme that trades increased block size against difficulty or coinbase reward. They were pushing flex-cap openly a few months ago, but have been strangely silent on that as of late (perhaps because the miners won't like it?)

This "roundtable consensus" has made two things crystal clear:
1) The "statements" from miners have little value. Their "support" of Classic is obviously quite soft, so why should we think their "support" of this scheme is any more solid?
2) Blockstream/core are using stalling tactics and cannot be counted on to follow through on their "plan". We already knew this, but it is even clearer now.

So now we have more information. This is a good thing. It is not a time to get depressed, but instead to incorporate new information as we keep pursuing a path towards a better Bitcoin. Miners may not stick to their statements, but they are profit motivated. So the best way to influence their behavior is to appeal to their financial interest. If we really believe that Bitcoin can work, based on reason and evidence, then let's use what we learn to improve our efforts for positive progress.
one thing i've learned, esp from my talk with Samson Mow, is that "price will go UP" is all-important to the Chinese. i'm used to avoiding talking in terms of price b/c of my extensive experience with speculation and manipulation playing a significant role in price movements in the stock mkt, certainly in the short term and sometimes even the intermediate term. i'm also a believer in cycles in technical analysis, as a result of the business cycle influenced by central bank monetary policy. watching the Chinese emphasize price so much while demonstrating such a lack of understanding of Bitcoin economics has been a surprise. which is why i am now going to couch my arguments more in those terms for effect.
Phasing Out Third Party Trust
In the long term assurance based on third party trust is not great because you are trusting someone and it is possible for that trust to be broken, willingly or not. Lightning is posing to be the superior mechanism for instant confirmation without the need to trust anyone. We intend to support Lightning in the future and believe that RBF will be extremely important for maintaining the trustlessness of Lightning (as you need to be able to update the fee for transactions timing out from lightning peers).

http://blog.greenaddress.it/2015/12/09/why-replace-by-fee-is-good-for-bitcoin/
 

VeritasSapere

Active Member
Nov 16, 2015
511
1,266
I've mentioned this a few years back, but let me remind you all that it's impossible to make an "ASIC proof" algorithm because a CPU IS an ASIC. Bitcoin created a POW algorithm that was inefficient on a general-purpose CPU and then people built optimal ASICs for it. Next time, we build the POW algorithm such that the Intel Core i5 (and generally other general-purpose CPUs) IS the optimal ASIC to run it on. This will require an analysis of the real estate of the CPU and a set of arbitrarily chosen problems to exercise that real estate -- memory buses, caches, FPU, multimedia extensions, threading, GPU, etc. CPU benchmarking would be a great place to start pulling algorithms from...
This blows my mind; this could actually be the solution for the perfect algorithm. It is almost the opposite of the approach usually taken to achieve ASIC resistance. However, I see one glaring problem with this: what happens when CPU technology changes over time? Does that mean such an algorithm would then lose its ASIC-resistant properties?
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Compared to SHA256 ASICs vs. CPUs, you'd find that all general-purpose CPUs perform reasonably similarly, including future ones (judging by past CPUs). But yes, it is possible that over a 20-year time frame new capabilities are added to CPUs that are not reflected in the chosen algorithm. In that case, someone might be better served using the silicon to create a 1000-core i5 rather than a 4-core i100. But I still think the speedup would be orders of magnitude less than what we see with bitcoin.
 
  • Like
Reactions: VeritasSapere

rocks

Active Member
Sep 24, 2015
586
2,284
I've mentioned this a few years back, but let me remind you all that it's impossible to make an "ASIC proof" algorithm because a CPU IS an ASIC. Bitcoin created a POW algorithm that was inefficient on a general-purpose CPU and then people built optimal ASICs for it. Next time, we build the POW algorithm such that the Intel Core i5 (and generally other general-purpose CPUs) IS the optimal ASIC to run it on. This will require an analysis of the real estate of the CPU and a set of arbitrarily chosen problems to exercise that real estate -- memory buses, caches, FPU, multimedia extensions, threading, GPU, etc. CPU benchmarking would be a great place to start pulling algorithms from...
Yes, yes, all hardware chips are ASICs....

What matters for this discussion is the mode of computation. For these purposes we have a few main buckets (to keep it simple):

CPU - Computation is performed as a series of software instructions on a generic core that are processed one after another
FPGA - Computation is performed as a hardware block on a chip that can be re-configured (I have extensive domain knowledge here, BTW)
ASIC - Computation is performed as a fixed hardware block on a chip that cannot be changed
GPU - Same as the CPU bucket, just a different configuration (simpler cores with less functionality; multiple cores run the same sequence of steps together on different data)

When we say ASIC-resistant we are not saying an algorithm cannot be ported to an ASIC. Instead we are saying that doing so provides such limited benefit over a standard CPU that there is no point in doing so, and it is always more economical to run the algorithm on some CPU core than on a custom chip, regardless of scale.
 
  • Like
Reactions: VeritasSapere

sickpig

Active Member
Aug 28, 2015
926
2,541
A solution to the blocksize debate? Or am I being too optimistic? o_O

https://bitcointalk.org/index.php?topic=1330553.msg13977157#msg13977157
I didn't read it carefully enough. I simply can't do it anymore; I've developed some sort of allergic reaction to threads like those hosted on BCT. But what caught my eye is this phrase by Lauda:
Lauda said:
Segwit now -> block size limit increase in 2017. That's also a sort of compromise, is it not?
Independently of any considerations about block space and its scarcity, SegWit introduces two different economic goods, namely tx space and signature space.

The latter would be blessed with an arbitrary discount of 75% (50% in Corallo's latest proposal) when accounting for block space.

I've inferred after reading scattered references from various sources that the discount is justified by the fact that signatures could be discarded by full nodes after the validation step and are not necessary at all for SPV clients.

Nevertheless, signatures have to be relayed by the network and consume bandwidth just as the txs themselves do. For typical LN settlement txs, witness bandwidth could be as much as 4 times higher than base space.

Take into account that multisig (m-of-n) txs by definition have an advantage with respect to "normal" ones; in fact, as a general rule, the higher the number of signatures, the bigger the witness space.
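For concreteness, here is how the accounting plays out under the discount as I understand the proposal. The byte counts are made-up examples, and the exact weight formula may differ in the final spec:

```python
# Sketch of the witness discount arithmetic: witness bytes count 1x and base bytes
# count 4x towards the block "weight", i.e. signatures get a 75% discount.
# Byte counts below are made-up examples, not measured transaction sizes.

def virtual_size(base_bytes, witness_bytes, base_factor=4):
    weight = base_bytes * base_factor + witness_bytes
    return weight / base_factor        # "virtual" bytes charged against the limit

# Plain 1-in 1-out tx: everything is base data, no discount applies.
print(virtual_size(base_bytes=250, witness_bytes=0))     # -> 250.0 vbytes

# Hypothetical multisig / LN-style settlement tx: small base, large witness.
# It is charged only 350 vbytes even though 650 real bytes cross the wire,
# which is exactly the bandwidth asymmetry described above.
print(virtual_size(base_bytes=250, witness_bytes=400))   # -> 350.0 vbytes
```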

I would say the above should be enough to make people pay more attention than has so far been given to SegWit deployment and its consequences.

Weren't core devs obsessed with orphan rates and bandwidth consumption?

Still, I haven't seen from SegWit proponents an economic/game-theory analysis of the above changes, and to me economic incentives are even more important than all the tech aspects considered together.

That said, I dare say Core's proposal is not a compromise, not by a long shot.

Even worse, the time for compromises has ended.

The max block size cap has already changed Bitcoin's economic incentives; as @Justus Ranvier already mentioned on Twitter, for the first time since its birth Bitcoin was not able to double its number of txs.

edit: grammar
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@sickpig

Thank you for starting to pay attention to this important matter.
Note the trollish comments coming from Samson Mow these days, mixed with arrogance. Matches what I told you about our discussion, right?

I'll bet he loved seeing his picture in the news:

Armstrong has been nothing but cordial and conciliatory to Mow on Twitter, congratulating him on his efforts at the Roundtable. Look at what he gets in return.

I think it's no coincidence that the two of them are each so close to running one of the two largest exchanges in the world, East vs West. Maybe a little cultural envy going on?
 
