Gold collapsing. Bitcoin UP.

satoshis_sockpuppet

Active Member
Feb 22, 2016
776
3,312
BTW, huge kudos to XT for deliberately not including the fatal commit from core and seeing the danger from the start.

Maybe it wasn't too smart of core to surround themselves with yes-sayers and exclude anybody critical..
And maybe it isn't too smart to attack the people disclosing bugs to them...
 

lunar

Well-Known Member
Aug 28, 2015
1,001
4,290
While everyone is scrambling around patching bluematt's t̶r̶o̶j̶a̶n̶ 'premature optimization', I've been reading this very timely article from @zhell

Maybe optimizing the base protocol of Bitcoin Cash does not even make economical sense: A strange idea.

I don't think this is a strange idea at all. If we can agree bitcoin is primarily an economic system, based around the ruthless competition between miners, then it stands to reason that making unilateral changes to the protocol to 'aid mining' is probably a bad idea, as it will reduce the window where true competition can occur. Essentially these global optimizations can be seen as well-intentioned, but they actually introduce an incompatible, Socialist aspect into what is fundamentally a Capitalistic system. Many of these recent 'improvements' act like subsidies for weaker miners.

"Money is one of the very few things that actually benefits from being stable as a protocol since it is based on trust. Trust grows in the absence of change, but we can still innovate on the entire ecosystem built above the protocol.
"Improving" the base protocol can be seen as only a way to delay the competition process of miners for a few months or years. What is really gained from it ?"

"I am not saying that any improvement is a bad idea. For example if improvements can be done on a individual miner's level, then the miner will do it as long as he still follows the consensus rules, aka the improvement is private. This is competition 101."


"We need to make sure that upgrades are done when they actually improve the USER EXPERIENCE, (lower fees) not the miner experience. For example, raising the blocksize improves the user experience, at the expense of the weakest miners who do not have good hardware, good software or good connectivity to process larger blocks. Yet it is a positive change."

"So what's the point of changing the consensus to "help" miners when the goal is to have miners compete as hard as they can ? This is why we do not try to "improve" the energy that is spent in bitcoin mining at a protocol level, but each miner has incentives to individually and privately improve it to remain competitive."

"In a nutshell: the system is already designed such that the competitive mining incentives drive only the optimizations that secure and improve the network."

This is going to be controversial here, but in general I think Devs need to get out of the way.

Perhaps a rule of thumb could be: if the proposed changes help mining, then the code should be proprietary and the dev employed by miners/pools who see its value. Whereas if the changes are for users' benefit, then work open source and think about starting a business.

This is Capitalism.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
I had almost given up on these recordings coming out, but what a nice surprise on this day!

https://www.thefutureofbitcoin.cash/
what's sad is i kept dozing off to the monotonous drone of dev after dev insisting that we "need" to keep a dev-enforced protocol limit in place to prevent nodes from crashing. #GTFOofTheWay
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
While everyone is scrambling around patching bluematt's t̶r̶o̶j̶a̶n̶ 'premature optimization', I've been reading this very timely article from @zhell

Maybe optimizing the base protocol of Bitcoin Cash does not even make economical sense: A strange idea.

[...]

This is Capitalism.
that was good. the only thing i wish he'd have mentioned was what drives this competitive process: Sound Money.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
what's sad is i kept dozing off to the monotonous drone of dev after dev insisting that we "need" to keep a dev-enforced protocol limit in place to prevent nodes from crashing. #GTFOofTheWay
Bitcoin Unlimited lets miners set their own limit. But the devs still set a default value. And this default value carries a lot of power. The miners don't have a leader, so they tend to look to the devs for a number they can all agree on.

I understand that a lot of people may have been baffled by the 10 terabyte number in BUIP101. But what are the alternatives? 32MB? 128MB? The capacity of BU nodes, which will soon be a lot higher than 128MB on a range of hardware?

Should devs manage these limits every 6 months?
 

satoshis_sockpuppet

Active Member
Feb 22, 2016
776
3,312
Don't like the 10 TB proposal, I see no reason for it.

I don't see a reason for any hard coded max block size anymore.

That the network can't handle more than ~20 MB blocks at the moment seems to be one of the conclusions of the stress test, right? But we didn't even hit the 32 MB limit then, so why should we hit a 10 TB or infinite limit now?

"We" can't do 128 MB blocks now, the networks client apparently need a huge amount of work in regards to handling bigger blocks. And there is a lot of research to be done, which size of blocks is doable with which kind of technology.

But just remove the fucking limit from the consensus code already.

Yes, in the end it's all just a coming together of parties with some shared goals, whether you hard-fork the limit away when necessary or remove it now and let the network converge on some kind of organic max block size (for the time being). But, as history has shown, the fucking hard-coded limit is a mental barrier for builders and investors.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Don't like the 10 TB proposal, I see no reason for it.

I don't see a reason for any hard coded max block size anymore.
BUIP 101 is a way to remove the default max blocksize. 10 TB and infinity are pretty much the same in practice.

The alternative to 10 TB is 32MB, 128MB or something else the BU devs figure out.
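To put rough numbers on why 10 TB and infinity are interchangeable in practice, here is a quick sketch (the throughput and transaction-size figures are illustrative assumptions, not measurements):

```python
# What sustained throughput would it take to actually fill blocks of a
# given size? All figures below are illustrative assumptions.

BLOCK_INTERVAL_S = 600   # target seconds between blocks
AVG_TX_BYTES = 250       # assumed average transaction size

def implied_rates(block_size_bytes):
    """Bandwidth (MB/s) and tx/s needed to sustain blocks of this size."""
    bandwidth_mb_s = block_size_bytes / BLOCK_INTERVAL_S / 1e6
    tx_per_s = block_size_bytes / AVG_TX_BYTES / BLOCK_INTERVAL_S
    return bandwidth_mb_s, tx_per_s

for label, size in [("32 MB", 32e6), ("128 MB", 128e6), ("10 TB", 10e12)]:
    bw, tps = implied_rates(size)
    print(f"{label:>7}: {bw:>12,.2f} MB/s sustained, {tps:>14,.0f} tx/s")

# 32 MB  ->      0.05 MB/s,        213 tx/s
# 10 TB  -> 16,666.67 MB/s, 66,666,667 tx/s -- so far beyond any node
# today that, as a default, it is functionally "no limit at all".
```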
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Challenging BU member @dgenr8 on BUIP101 in this twitter thread:


Long story short, Tom thinks the default value should be 32MB. I think this number is influenced by his anti-CSW position.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Don't like the 10 TB proposal, I see no reason for it.

I don't see a reason for any hard coded max block size anymore.

That the network can't handle more than ~20 MB blocks at the moment seems to be one of the conclusions of the stress test, right? But we didn't even hit the 32 MB limit then, so why should we hit a 10 TB or infinite limit now?
I didn't listen to ALL of the BKK recordings, but even on Day 1, @deadalnix explained the answer to your question quite early, before lunch.

It is because a hostile miner can easily produce a huge block at very little cost, and the size of the block is not covered by PoW, so it's easy to DoS the network this way.

This has nothing to do with non-hostile miners or regular conditions.
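To put rough numbers on that asymmetry, a toy sketch (every constant here is an illustrative assumption, not a measurement):

```python
# Toy model of the bloat-block asymmetry: the attacker's PoW cost is
# flat in block size, while the network's validation cost grows with it.
# Every constant is an illustrative assumption, not a measurement.

POW_COST_PER_BLOCK_USD = 100_000   # assumed expected cost to mine one block
VALIDATE_MB_PER_S = 5.0            # assumed per-node validation throughput
NODE_COUNT = 1_000                 # assumed number of validating nodes

def attacker_cost_usd(block_mb):
    # Stuffing the block with junk doesn't raise the expected PoW cost.
    return POW_COST_PER_BLOCK_USD

def network_cost_node_seconds(block_mb):
    # Every node must download and validate the whole block.
    return block_mb / VALIDATE_MB_PER_S * NODE_COUNT

for mb in (1, 32, 1_000, 100_000):
    print(f"{mb:>7} MB block: attacker ${attacker_cost_usd(mb):,}, "
          f"network {network_cost_node_seconds(mb):,.0f} node-seconds")

# The left column is constant; the right column is linear in block size.
# Without a size cap, the attacker can push the ratio arbitrarily high.
```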

But just remove the fucking limit from the consensus code already.
Solution is simple (for all of you proposing this):

Fork BU and remove the fucking limit, and see who runs your code. Or just write up a BUIP to do it, or vote for the 10TB proposal. If you're going to argue that 10TB constitutes "a mental barrier for builders and investors" then IDK what to say.

Me, I'm not going to vote for it. There is a good engineering way, even for BU, to scale the capability of its software. Simply removing the limit or pushing it way beyond what you know the software is capable of -- I don't see how that's helping BU.

Maybe it's all psychology and BU users will be happy if their systems crash or get forked off onto another chain with bigger blocks than other nodes.

P.S. just in case you think I've suddenly turned into a small blocker, I'd be perfectly fine with BU changing its default EB to > 32MB *today* if realistic tests indicate that it could easily handle that.
I'm not one for "everyone has to extremely agree on a blocksize consensus lalala". If a client can safely do more, it should market itself by all means.

Although I do still think the more responsible approach would be to set the default to something sensible that you know is sufficiently above current demand (I think 32MB qualifies currently; I agree with @dgenr8's tweet there) but still within the capabilities of most of the network (sadly, 32MB doesn't qualify there right now, mainly because of networking factors that @jtoomim has pointed out). So yes, I think the current 32MB default is a bit on the overly optimistic side (I'd call it marketing); it should probably have been set to 16 until the kinks were worked out, and then we could have forked to 32 to celebrate regaining Satoshi's original limit.
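For a feel of those networking factors, a first-order orphan-risk sketch (assuming exponential block arrivals and a single effective propagation rate; the bandwidth figure is an assumption):

```python
import math

# First-order orphan-risk model: if a block takes t seconds to reach the
# rest of the hashrate, the chance that a competing block is found in the
# meantime is ~ 1 - exp(-t / 600). The bandwidth figure is an assumption.

BLOCK_INTERVAL_S = 600.0
EFFECTIVE_MBIT_S = 10.0   # assumed effective block propagation rate

def orphan_risk(block_mb):
    t = block_mb * 8 / EFFECTIVE_MBIT_S        # propagation time, seconds
    return 1 - math.exp(-t / BLOCK_INTERVAL_S)

for mb in (1, 16, 32, 128):
    print(f"{mb:>4} MB: ~{orphan_risk(mb) * 100:5.2f}% orphan risk")

# At these assumed numbers, 16 MB costs ~2% in expected orphans and
# 32 MB ~4% -- the kind of "networking factor" that makes a 32 MB
# default look optimistic until propagation improves.
```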
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
BUIP 101 is a way to remove the default max blocksize. 10 TB and infinity are pretty much the same in practice.

The alternative to 10 TB is 32MB, 128MB or something else the BU devs figure out.
To be clear, the alternative to 10 TB is 2147MB (presumably the 32-bit signed-integer ceiling: 2^31 bytes ≈ 2147 MB), which the BU client has already permitted for nearly 3 years. I still don't get why BU is getting criticism for this while the low limits on ABC and SV are ignored in this debate.

Except, maybe it's because BU's governance model enables community input into its development.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
Although I do still think the more responsible approach would be to set the default to something sensible that you know is sufficiently above current demand (I think 32MB qualifies currently; I agree with @dgenr8's tweet there) but still within the capabilities of most of the network (sadly, 32MB doesn't qualify there right now, mainly because of networking factors that @jtoomim has pointed out). So yes, I think the current 32MB default is a bit on the overly optimistic side (I'd call it marketing); it should probably have been set to 16 until the kinks were worked out, and then we could have forked to 32 to celebrate regaining Satoshi's original limit.
Why do you want devs to be "responsible", as in carrying the miners' responsibility? This mindset is holding bitcoin back. I don't want the laggards to be able to keep up without any effort.

The next BU client handles 1746 transactions per second according to @solex.

Why put the bar so low, @freetrader ?

To be clear, the alternative to 10 TB is 2147MB, which the BU client has already permitted for nearly 3 years.
This is just wrong, @solex. I have pointed it out several times in the BUIP101 thread. BUIP101 is about the default EB setting.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
It is because a hostile miner can easily produce a huge block at very little cost, and the size of the block is not covered by PoW, so it's easy to DoS the network this way.
what hostile miner do you speak of? be specific please; a large one or a small one? how much hash do they have to manufacture or buy/blow, as in waste, to accomplish this bloat-attack block in a reasonable period of time? as in, how many $millions or $hundredsofmillions would they have to invest in ASICs? or do they risk buying them from Bitmain? and what do they do with all that hardware once they destroy the Bitcoin network?

do they buy up several dozen acres to set up a manufacturing plant, or do they rent warehouse facilities or apply for permits? if so, in what jurisdiction, and how exactly do they prevent discovery or avoid public documents/permits for land, building, electricity use, waste, water, etc? who do they source parts from, and who do they hire for expertise in manufacturing/engineering to prevent discovery? is it a gvt or a private actor who executes this attack?

what stops the rest of the network from deciding to orphan this block? and how does this block overcome the propagation deficiencies we already know about? at what point do the motivations flip to where they might find it more profitable to mine honestly? what exactly did Satoshi mean by this statement? i need details, please. b/c afaic, Satoshi included bad actors, incl gvts, in his game theory:

The incentive may help encourage nodes to stay honest. If a greedy attacker is able to assemble more CPU power than all the honest nodes, he would have to choose between using it to defraud people by stealing back his payments, or using it to generate new coins. He ought to find it more profitable to play by the rules, such rules that favour him with more new coins than everyone else combined, than to undermine the system and the validity of his own wealth.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
My view is that default limits do not hold miners back. The default limit in BTC is 750KB, yet it was never of consequence to the scaling debate. That was, and still is, all about the hard limit of 1MB, now slightly redefined as base + witness.

We have real-world evidence that miners lift soft limits when market demand requires. It is the hard limits of what the software permits that are the real constraint.

To clarify the 1746 txns per second: this is a benchmark of the improved wallet. The separate txn propagation and block-building overhead still comes afterwards.
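For scale, converting that wallet benchmark into block-size terms (average transaction size is an assumed figure, and this ignores the downstream overheads just mentioned):

```python
# What would a sustained 1746 tx/s mean in block-size terms?
# Average tx size is an assumption, and the benchmark covers only the
# wallet, so this is an upper bound rather than whole-system capacity.

TX_PER_S = 1746
BLOCK_INTERVAL_S = 600
AVG_TX_BYTES = 250   # assumed

tx_per_block = TX_PER_S * BLOCK_INTERVAL_S
block_mb = tx_per_block * AVG_TX_BYTES / 1e6
print(f"{tx_per_block:,} tx per block -> ~{block_mb:.0f} MB blocks")
# 1,047,600 tx per block -> ~262 MB blocks, if nothing downstream chokes.
```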