Gold collapsing. Bitcoin UP.

bluemoon

Active Member
Jan 15, 2016
215
966
Lots of non-segwit slush blocks recently: https://www.blocktrail.com/BTC/pool/slush
I've just been looking at the Slush blocks.

Of their last hundred blocks (449564 - 451416):

65 = segwit
15 = BIP109/16MB
5 = BIP109
3 = 8MB
12 = unmarked

According to Slushpool's stats page, miners are currently divided:

23% Core
18% BU
10% Classic
4% 8MB
1% Bitpay's bitcoin
28% Don't care
15% Not voted

How do Core end up with so many segwit blocks?

In my innocence I had thought Slush was dividing the don't cares and unvoted pro rata, but it's obviously not so.
I see this was discussed before:

https://np.reddit.com/r/btc/comments/5gle2o/slush_mines_80_segwit_but_just_29_are_voting_for/

At least Core has weakened from its 32% around November 21st, and BU (then 13%), Classic (then 6%), and 8MB (then 1%) have each strengthened.
 

bluemoon

Active Member
Jan 15, 2016
215
966
@Norway
I'm sure I read statements by Mr. Slush, long ago, that he intended to divide the miners who had no opinion pro rata among those who did (which then encourages interested miners to join the pool). So it shocks me to see most Slush resources voting segwit: don't care + not voted + Core = 66%, versus 65% segwit blocks, when Core only musters 23% support.

Clearly he wants segwit, as you say, but it is disappointing.
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@bluemoon
I would not mine on Slushpool, because there is no way to verify what the real votes are. If 70% voted BU, he could in theory say that just 30% did it, and mine according to the fake number. I'm not accusing him of that, but it's better to be sure and mine on a pool 100% aligned with your preferences.
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
Nice article
Luke-Jr stunned the bitcoin community recently when he “recommended” a 70% reduction of the blocksize to 300KB or an increase of the current transaction capacity in seven years. Zhuoer’s view of that proposal was to state:
This could be seen as small-block propaganda, with PR/spin infecting the framing of the situation. It's not just Luke-Jr but the SegWit and BS/Core representatives who put forward that solution. Luke is being used as a scapegoat.

Luke-Jr submitted a fork proposal that would reduce the block size limit to 300KB. He refers to **we**, as in "We have been doing our best" and "So either way, we have kept the promise" (the HK promise).

The "we" that Luke submitted the BIP on behalf of is:
Adam Back CEO Blockstream
Johnson Lau Bitcoin Core Contributor
Matt Corallo Bitcoin Core Contributor
Peter Todd Bitcoin Core Contributor
Bobby Lee BTCC
Samson Mow BTCC
and more.

source:
https://medium.com/@bitcoinroundtable/bitcoin-roundtable-consensus-266d475a61ff#.c1ld4l6a4


 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
@Mengerian
My favourite quote from that article:

“We have prepared $100 million USD to kill the small fork of CoreCoin, no matter what POW algorithm, sha256 or scrypt or X11 or any other GPU algorithm. Show me your money. We very much welcome a CoreCoin change to POS.”
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
While this is admittedly tangential to your point, I struggle to understand the fetishization of the quadratic hash time issue. While any single node may fall victim to its characteristics, mining incentives -- already today, with no changes to the protocol -- are such to render this a non-problem from the standpoint of the system as a whole.

If one miner churns endlessly to validate an aberrant block, it will certainly be out-competed by another miner that gets back to hashing. Any block built atop the aberrant block cannot be validated before the aberrant block itself. Another solved block -- peer to the aberrant block, and mined by one operating in the mode described above -- will be validated and built atop before the aberrant block can be validated. Net result is that the aberrant block gets orphaned.

Why the angst?
This is only true because of parallel validation!
Is parallel validation a standard?
Probably not yet?

Most likely, with the current standard bitcoin mining software, miners would end up producing empty blocks one on top of the other until the long-to-validate block gets validated (the impact of which is currently kept to a minimum by the 1MB limit).
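That empty-block behavior can be sketched as a toy model (the function name and arguments are hypothetical; real mining software is vastly more involved):

```python
# Toy model of "head-first" mining: while a newly received block is still
# being validated, a miner keeps hashing on top of its header but includes
# no transactions (it can't yet know which ones are already spent), so it
# produces empty blocks until validation of the tip completes.

def build_block_template(tip_validated, mempool):
    """Return the transaction list for the next block a miner works on."""
    if tip_validated:
        return list(mempool)   # normal block: include pending transactions
    return []                  # tip not yet validated: mine an empty block

print(build_block_template(False, ["tx1", "tx2"]))  # -> []
print(build_block_template(True, ["tx1", "tx2"]))   # -> ['tx1', 'tx2']
```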

Thank you BU for introducing the very obvious and necessary upgrade that is parallel validation.

Any node not doing parallel validation is a retarded node; unfortunately most nodes are Core nodes (read: retarded nodes).

Edit: I'm sorry, did that make sense? It's Friday night and I've had a few drinks... point is, BU rocks!
 

Richy_T

Well-Known Member
Dec 27, 2015
1,085
2,741
Why the angst?
I agree. Generally it is a non-issue. The only time it has actually cropped up was a miner trying to do a good deed. Nonetheless, it would be better if it were addressed.

Do we know if miners currently have the ability to abandon a block if it is taking too long to validate? My understanding was that this is what parallel validation is supposed to bring to the table (amongst other things).
 

Roger_Murdock

Active Member
Dec 17, 2015
223
1,453
“I know big blocks are very important for bitcoin, but in October 2015 I only had thousands of bitcoins, but no btc hashrate or pool. To protect my btc, I begun to buy ASIC miners, get into the btc mining industry, and created BTC.TOP in 2017.”
...
“I have worked for 1 year to protect my BTC,” – he says. “My flight mileage for this year is more than the past 10 years combined. You know what that means.”
...
It’s an interesting twist in bitcoin’s story. To protect their bitcoin investments or due to personal convictions, new miners are entering, some leaving their cushy jobs. As their hashrate grows to meet market demand, established miners gradually reduce in hashrate, eventually being fully replaced.​

Yes! This is exactly the kind of thing I've been waiting for and predicting: the harm from Core's mismanagement finally getting bad enough to motivate major holders to decisive action. Sounds like the Ents have begun their march!
 

bluemoon

Active Member
Jan 15, 2016
215
966
The article shows how much Core relies on censorship, misinformation, and the language barrier:

“China has many big miners. They don’t know what happen and what is BU before I contacted them one by one, but they all support big blocks once they understand what we are arguing about. They are the silent majority.”

Now, despite huge inertia, their wall is falling.

Great article!
 

Zarathustra

Well-Known Member
Aug 28, 2015
1,439
3,797
Perhaps a saving grace, though, is that there's a possibility that the shift away from inefficient State organization of society could unleash immense economic energy, creating incredible prosperity. Similar to how increases in productivity (and imports from China) have masked the effects of monetary inflation in the US. If there is a large productive surplus, it might keep standards of living rising even for those who lose their state-supported privileges.

So that's my hope.
There is only one engine that creates economic prosperity: the state. People who are not (yet) nationalized don't create ever-growing surplus production. They never did. They still produce the same amount as 100,000 years before. Zero growth. You produce a surplus as soon as you are forced to: paying tribute to the warlords (aka church and state).
That's not music for Austrian ears, but it's a historic fact. ;)
 

jbreher

Active Member
Dec 31, 2015
166
526
This is only true because of parallel validation!
Is parallel validation a standard?
Parallel validation does not need to be 'a standard' in order to render the quadratic hash time attribute a non-problem. The protocol already allows it. As such, no specific change is necessary in order to protect the network from this quadratic hash time issue.

If at least one miner implements this idea, the effect of aberrant blocks is nullified. Further, any such miner will win over any other miner who spins too long trying to validate an aberrant block. The incentives already ensure this 'problem' gets resolved.

If blocks containing these long-validation-time transactions ever occur in any significant number, rational miners will modify their code to operate in this manner. In order to avoid orphaning, in time all miners will cease including such transactions in their blocks. This will put back-pressure on users not to create such nasty transactions, lest they never be confirmed.
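The race described above can be sketched as a toy model (names and timings are hypothetical, purely for illustration):

```python
# Toy model: two competing blocks at the same height are validated in
# parallel; the one whose validation finishes first becomes the new tip,
# and the slower ("aberrant") block is abandoned and orphaned.

def next_tip(candidates):
    """Pick the competing block whose validation finishes first.

    candidates: list of (name, validation_seconds) tuples.
    """
    return min(candidates, key=lambda c: c[1])

# A normal block validates in seconds; a block stuffed with
# quadratic-sighash transactions can take minutes.
blocks = [("normal_block", 2.0), ("aberrant_block", 600.0)]

winner, _ = next_tip(blocks)
print(winner)  # -> normal_block; the aberrant block gets orphaned
```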

By all means, we should in the longer term transition to a linear algorithm. However, it need not be a priority.
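The quadratic cost itself comes from the legacy SIGHASH_ALL scheme: each of a transaction's inputs hashes a serialization roughly proportional to the size of the whole transaction, so total bytes hashed grow roughly as the square of the input count. A back-of-the-envelope sketch (the sizes are illustrative approximations, not the exact serialization):

```python
# Rough model of legacy sighash cost: for each input signed, the node
# hashes a serialization whose size grows with the whole transaction, so
# total bytes hashed grow ~quadratically with the number of inputs.

INPUT_SIZE = 148   # approximate bytes per input (illustrative)
OUTPUT_SIZE = 34   # approximate bytes per output (illustrative)

def bytes_hashed(n_inputs, n_outputs=2):
    tx_size = n_inputs * INPUT_SIZE + n_outputs * OUTPUT_SIZE
    # One sighash computation per input, each over ~the full transaction.
    return n_inputs * tx_size

for n in (100, 1000, 10000):
    print(n, bytes_hashed(n))
# 10x more inputs -> roughly 100x more bytes hashed.
```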

People talk as if this quadratic hash time issue has implications for the health of the network as a whole. This attitude or belief is pervasive. Such claims are full of shit.
Do we know if miners currently have the ability to abandon a block if it is taking too long to validate?
Of course they have the ability to abandon the validation of any given block. Who is holding a metaphorical gun to their collective head?
 

albin

Active Member
Nov 8, 2015
931
4,008
Super speculative shower thought --

Suppose you had a txid collision in the utxo set (which I understand is, of course, astronomically improbable, but obviously not impossible).

I'm not totally sure how the code that tests tx validity works, but presumably, when a tx spending either of these UTXOs is issued, you would be able to decipher what's going on by looking at the signature.

Now suppose we have segwit UTXOs deployed. One of the selling points of this scheme, as far as rationalizing why it's OK for full-node resources, is that you can prune the signatures once you no longer need them.

Couldn't this potentially break re-syncing old nodes, or even bootstrapping a new node, completely, if such a tx were ever mined in the past relative to the current tip? In the event of a txid collision, there would no longer be any way to determine which tx was actually spent.
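The worry can be illustrated by treating the UTXO set as a map keyed by txid — a toy model, not Core's actual data structure (`fake_txid` and the literal payloads are hypothetical):

```python
import hashlib

# Toy UTXO set keyed by txid. In a real (astronomically unlikely)
# collision, two distinct transactions would share the same key, and the
# second insertion would silently shadow the first.

def fake_txid(tx_bytes):
    # Bitcoin's txid is double SHA-256 of the (non-witness) serialization.
    return hashlib.sha256(hashlib.sha256(tx_bytes).digest()).digest()

utxo_set = {}
utxo_set[fake_txid(b"tx-A")] = "outputs of tx A"
utxo_set[fake_txid(b"tx-B")] = "outputs of tx B"

# Under a collision, fake_txid(b"tx-A") == fake_txid(b"tx-B") would hold
# and the map could no longer tell the two apart. With witnesses pruned,
# a re-syncing node would also lack the signatures that might otherwise
# disambiguate which transaction was actually spent.
print(len(utxo_set))  # -> 2 here; a collision would leave only 1 entry
```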
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
Is it just me, or does libsecp256k1 look more and more like a well-thought-out distraction by Core to have something to point to in regards to 'look at us we are doing much for scalability' while at the same time not improving those areas which would be much more urgent to improve? (e.g. UTXO commitments)

Because, if you take an honest look at all this, CPU validation speed is not the main scalability concern at all.

Of course, I am neither complaining about libsecp256k1 (I am thankful for that improvement), nor am I arguing that I have any say on how Blockstream/Core wants to spend their time and money developing.

What I am complaining about, however, is the constant, dishonest blathering about how 'Core is very concerned and doing a lot about on-chain scalability'.

Because their actions simply don't fit their words. And also and especially not in this regard.