Gold collapsing. Bitcoin UP.

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
I think no one has a fix for the quadratic scaling of tx validation in the current implementation.

Core plans to fix it with SegWit activation.
BU ????

I look forward to a solid response on this important issue.
 

albin

Active Member
Nov 8, 2015
931
4,008
The current solution in the alternate-clients community seems to be parallel validation, which I think is a great idea even on top of other solutions, because it exposes malicious blocks to orphan risk at the validation stage, analogous to the orphan risk they already face during propagation.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Parallel validation isn't really much of a fix for the "quadratic problem".

I guess with parallel validation a block which validates faster is more likely to be the next valid block, while a block which contains a "quadratically hard to compute TX" is more likely to be orphaned.
Is that the idea?

Anyway, that doesn't really solve the issue. Solving it with parallel validation only means it's very unlikely miners will ever want to include a "quadratically hard to compute TX", which is as bad as setting a limit on how complex a TX can be.

Simply changing the way the signature hash is calculated, with a HF, is the best way to go IMO.
 

molecular

Active Member
Aug 31, 2015
372
1,391
I don't know. Maybe this quadratic validation problem is really a non-issue? If there is a disincentive for miners to mine these huge transactions, why limit tx size by other means at all?

What are the supposed attack vectors here? A "malicious" miner mining a hard-to-validate block doesn't seem to make much sense, other than to create an incentive for validationless mining.

What else?

(Sorry, this has probably been discussed at length elsewhere, but I haven't thought this one through yet)
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@molecular

I'm sure you already know this, but just to be clear, the "quadratic validation issue" applies to large transactions, not large blocks: a transaction of size 2q typically requires 4 times more hashing than a transaction of size q. However, a block of size 2Q requires only 2 times more validation time than a block of size Q, assuming a similar mix of transactions in both blocks.

Our plan at BU at the moment is to do two things:
  • Temporarily retain the 1 MB effective size limit for a single transaction
    • Right now no transaction can be larger than 1 MB because no block can be larger than 1 MB
    • BU will treat any block that contains a TX larger than 1 MB as "excessive" and apply the same acceptance-depth algorithm that it does for excessive block sizes.
    • The user will be able to change this TX size limit
  • Begin rolling out parallel validation
    • This will add a cost due to orphaning risk of approximately $30 / second of extra validation time for slow-to-validate blocks, assuming today's bitcoin exchange rate.
The limit on the max TX size can then be removed in the future if there is demand and once parallel validation is widely deployed and understood.
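
A back-of-the-envelope sketch of the scaling claim above, in Python (the 180 bytes per input is an illustrative assumption, not a protocol constant): under legacy signing, each of a transaction's n inputs hashes a message covering roughly the whole n-input transaction, so the total bytes hashed grow as n².

```python
def legacy_sighash_bytes(num_inputs: int, bytes_per_input: int = 180) -> int:
    """Approximate bytes hashed when signing a legacy transaction:
    each of the n inputs hashes the whole ~n-sized transaction."""
    tx_size = num_inputs * bytes_per_input
    return num_inputs * tx_size          # n inputs x O(n) bytes = O(n^2)

for n in (1_000, 2_000, 4_000):
    print(n, legacy_sighash_bytes(n))    # doubling n quadruples the work
```

Doubling a transaction from size q to 2q quadruples the bytes hashed, while two separate size-q transactions in the same block only double it, which is why the issue attaches to transaction size rather than block size.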
 

albin

Active Member
Nov 8, 2015
931
4,008
It could conceivably become an issue if UTXO set hash commitments are ever added to the protocol: if you require the hash of the canonically-ordered resulting set to be included in the block, that would eliminate the possibility of validationless mining. And there probably would be some strong incentive to add such commitments at some point, to dramatically improve pruning functionality (i.e. a pruning node could just start at the tip).
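
A minimal sketch of the kind of commitment described here, assuming a sorted-serialization scheme (the function name and encoding are made up for illustration, not any concrete proposal):

```python
import hashlib

def utxo_commitment(utxos: dict) -> str:
    """Hash the UTXO set in a canonical (sorted) order.
    `utxos` maps (txid_hex, vout) -> serialized output bytes."""
    h = hashlib.sha256()
    for txid_hex, vout in sorted(utxos):
        h.update(bytes.fromhex(txid_hex))
        h.update(vout.to_bytes(4, "little"))
        h.update(utxos[(txid_hex, vout)])
    return h.hexdigest()
```

Producing this digest requires having actually applied every transaction in the block to the UTXO set, which is exactly why such a commitment would rule out validationless mining.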
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
"What are the supposed attack vectors here?"

The problem is that right now Bitcoin Core's validation engine is single-threaded. Let's imagine that someone mines a slow-to-validate block, as shown below, and shortly afterwards another miner mines a normal block. With parallel validation, we start validating both blocks as soon as we know about them. In the case shown below, the smaller block wins and the larger block gets orphaned. But the way Bitcoin Core is written right now, a Core node won't even bother to look at the smaller block: the smaller block will lose no matter what. There is no disincentive to making your block slow to validate.

[diagram omitted: a slow-to-validate block and a normal block being validated in parallel; the normal block wins]

What's worse is that the Core node will be sorta "paralyzed" while it's processing this block and (@theZerg correct me if I'm wrong) won't relay new transactions either. Parallel Validation for Bitcoin Unlimited fixes that too.
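
A toy illustration of the race parallel validation creates, with sleeps standing in for script-execution time (the block structure and timings are invented for the example):

```python
import threading, time

def validate(block, winner, lock):
    """Stand-in for full block validation; the sleep models script cost."""
    time.sleep(block["validation_seconds"])
    with lock:
        winner.setdefault("first", block["name"])   # first to finish wins

slow   = {"name": "slow-to-validate", "validation_seconds": 2.0}
normal = {"name": "normal",           "validation_seconds": 0.1}

winner, lock = {}, threading.Lock()
threads = [threading.Thread(target=validate, args=(b, winner, lock))
           for b in (slow, normal)]
for t in threads: t.start()
for t in threads: t.join()
print(winner["first"])   # "normal": the slow block ends up orphaned
```

As for the size of the disincentive: with blocks arriving every ~600 seconds on average, each extra second spent validating gives a rival block roughly a 1/600 chance of winning the race, so the $30/second figure quoted above corresponds to a block worth about $18,000 (600 × $30).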
 

jonny1000

Active Member
Nov 11, 2015
380
101
Say I make a SegWit TX. An old miner sees this as anyone-can-spend, so HE spends it and mines a block including my TX and his TX! Are you telling me the SegWit miners will say this is OK even though they know it's not really an anyone-can-spend TX?
No. I am saying the old miner won't put this transaction in a block, since it's non-standard. Miners need to upgrade to a new client to include these transactions in blocks.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Please consider the following scenario:
1. You run a BU node with AD = 4 and EB = 1MB
2. A 1.1MB block is mined
3. This 1.1MB block then receives three additional confirmations and now has a 4 block lead over the chain with blocks less than 1.1MB
4. Shortly after the 4 block lead is taken, a miner extends the smaller block chain by one block

What happens next?
Like many other people have said, your Option B was the best answer:

"B: Since the larger block chain had a 4 block lead in the past, the node remembers this and does not build on the smaller block chain, therefore the node continues to build on the larger block chain until the smaller block chain actually retakes the lead."

Here's the diagram:

[diagram omitted: the excessive block's chain reaching a 4-block lead, then the chain "straightening out"]
Once the new chain tip is created, mining a new block on the old chain tip would be no different than mining a new block at some other random point in the chain. I tried to explain this visually by showing the blockchain "straighten out," thereby forgetting the previous chain tip completely.

So your Option B was wrong in the sense that the node doesn't have to remember anything; it simply forgets the old chain tip.
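
A stateless sketch of that behavior (the types and field names are illustrative, not BU's actual code): acceptance depth is evaluated against the branches as they stand now, so no past lead ever needs to be recorded.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class Tip:
    work: int                       # cumulative proof-of-work of the branch
    excessive_depth: Optional[int]  # blocks built on top of an "excessive"
                                    # block in this branch, or None

def choose_tip(active: Tip, candidate: Tip, ad: int = 4) -> Tip:
    """Stateless tip selection: nothing about past leads is stored."""
    if candidate.work <= active.work:
        return active               # ordinary most-work rule
    if candidate.excessive_depth is None or candidate.excessive_depth >= ad:
        # Once adopted, the candidate simply becomes the active tip; the
        # old tip is forgotten and must out-work it like any other branch.
        return candidate
    return active                   # excessive block not yet buried AD deep
```

Once the excessive chain has been adopted, the shorter chain is just another branch with less work; it would need to retake the lead outright, which matches the diagram's "straightening out".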
 

jonny1000

Active Member
Nov 11, 2015
380
101
No, it is not "apparently how BU works".

AD is Acceptance Depth. It refers to how many blocks are built on top of the "Excessive Block". If AD is 4, and 4 blocks are built on top of the Excessive Block, BU will track that chain, and build on it for miners.

It has nothing to do with a "lead" that has to be tracked. It has nothing to do with the order blocks are received.

BU is convergent. It converges to the longest Proof of Work chain.
OK, you seem to have not understood my original point. My point was: if the lead reaches 4 (with AD = 4) and then falls to 3, nodes need to track that it was 4 in the past, according to you.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
Begin rolling out parallel validation
  • This will add a cost due to orphaning risk of approximately $30 / second of extra validation time for slow-to-validate blocks, assuming today's bitcoin exchange rate.
I must say I don't like this "solution".
It basically limits how complex a TX can be... because no miner will ever include a TX which gives him too much orphan risk.

I guess, if we're talking huge TXs that are sure to be BS attack TXs that get rejected by miners, fine, OK.

But long term, TXs are only going to get more and more complex, and this solution does NOTHING to help legitimate TXs compute faster.

IMO there is no way out of this: BU must plan to make it so TXs don't validate quadratically.
 

jonny1000

Active Member
Nov 11, 2015
380
101
Yes, exactly.

And then this raises the question: WHY hasn't it been done in this way?

The only answers I was able to come up with turned out to be very disturbing.
That would freeze everybody's money. People need to be able to spend their funds.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
I must say I don't like this "solution".
It basically limits how complex a TX can be... because no miner will ever include a TX which gives him too much orphan risk.

I guess, if we're talking huge TXs that are sure to be BS attack TXs that get rejected by miners, fine, OK.

But long term, TXs are only going to get more and more complex, and this solution does NOTHING to help legitimate TXs compute faster.

IMO there is no way out of this: BU must plan to make it so TXs don't validate quadratically.
Good point.

BU is not against defining a new transaction type that doesn't have the quadratic hashing issue, and is in fact looking into this. The general feeling is just that this is lower priority than increasing the block size limit.

If people want to make complex transactions that are cheaper, then they'll have to use the new transaction format that doesn't have the quadratic hashing cost.
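
For contrast with the quadratic sketch earlier, a simplified BIP143-flavored illustration of how such a format achieves linear hashing (the field selection and preimage layout here are illustrative, not consensus-exact): the digests shared by all inputs are computed once and reused, so each signature only hashes a constant amount of extra data.

```python
import hashlib

def dsha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def linear_sighashes(inputs, outputs):
    """Each input's digest covers fixed-size shared hashes plus its own
    data, so total hashing is O(n) in the number of inputs, not O(n^2)."""
    hash_prevouts = dsha256(b"".join(i["outpoint"] for i in inputs))
    hash_outputs  = dsha256(b"".join(outputs))
    return [dsha256(hash_prevouts + hash_outputs + i["outpoint"] + i["script"])
            for i in inputs]
```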
 

jonny1000

Active Member
Nov 11, 2015
380
101
Once the new chain tip is created, mining a new block on the old chain tip would be no different than mining a new block at some other random point in the chain. I tried to explain this visually by showing the blockchain "straighten out" thereby forgetting the previous chain tip completely.

So your Option B was wrong in the sense that the node doesn't have to remember anything, it simply forgets the old chain tip.

Sorry Peter, I still do not understand what you mean. Can you please try to explain it again? The diagrams do not seem to include the scenario I provided.

If AD = 4, and the larger block chain had a lead of 4 in the past (but the lead is now 3), yet the node does not remember this 4-block lead, how can the node know to build on the larger block chain? Please try to explain with the simpler example of a new node with AD = 4 joining the network when the lead is 3 (but was 4 before the node joined). This new node will build on the smaller block chain, 3 behind the larger block chain, right?
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@jonny1000: Your scenario:
  1. You run a BU node with AD = 4 and EB = 1MB
  2. A 1.1MB block is mined [THE RED BLOCK IN THE DIAGRAM]
  3. This 1.1MB block then receives three additional confirmations and now has a 4 block lead over the chain with blocks less than 1.1MB [THE THREE BLOCKS AFTER THE RED BLOCK ON THE RIGHT SIDE]
  4. Shortly after the 4 block lead is taken, a miner extends the smaller block chain by one block [THE BLUE BLOCK ON THE RIGHT SIDE]
Here's an annotated version of the diagram that might be easier to understand:

[annotated diagram omitted: the red excessive block, its three confirmations, and the blue block extending the smaller chain]
 

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
@molecular

I'm sure you already know this, but just to be clear, the "quadratic validation issue" applies to large transactions, not large blocks: a transaction of size 2q typically requires 4 times more hashing than a transaction of size q. However, a block of size 2Q requires only 2 times more validation time than a block of size Q, assuming a similar mix of transactions in both blocks.

Our plan at BU at the moment is to do two things:
  • Temporarily retain the 1 MB effective size limit for a single transaction
    • Right now no transaction can be larger than 1 MB because no block can be larger than 1 MB
    • BU will treat any block that contains a TX larger than 1 MB as "excessive" and apply the same acceptance-depth algorithm that it does for excessive block sizes.
    • The user will be able to change this TX size limit
  • Begin rolling out parallel validation
    • This will add a cost due to orphaning risk of approximately $30 / second of extra validation time for slow-to-validate blocks, assuming today's bitcoin exchange rate.
The limit on the max TX size can then be removed in the future if there is demand and once parallel validation is widely deployed and understood.
@Peter R do you have any reason why nodes can't set a parameter to avoid relaying excessive transactions, or only relay transactions of a certain size on condition that the fee complies with xyz rules?

Miners obviously would want to avoid transactions that don't propagate through the network efficiently when using Xthin.

I can see applications for excessively sized transactions, but it looks like they degrade the primary function of the network and should be avoided, the exception being the user or service that benefits from such transactions. I would prefer to see a fee table that increases exponentially relative to transaction size; the net benefit is to penalize those who use the blockchain to do work and to encourage off-chain solutions to carry the cost and risk.

The moral reason to avoid large transactions is that all users indirectly pay for security as a user tax, first by way of the inflation subsidy and then with user fees. Those services that offload work onto the blockchain with excessively sized transactions are actually guilty of a tragedy-of-the-commons violation.
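
One way to express the exponentially increasing fee table suggested above (every number here is an illustrative policy knob, not a concrete proposal):

```python
def required_fee(tx_size: int,
                 base_rate: float = 1.0,     # satoshis/byte, illustrative
                 threshold: int = 100_000,   # bytes before the penalty starts
                 factor: float = 2.0) -> float:
    """Linear fee up to `threshold`; the per-byte rate then doubles for
    each further `threshold` bytes, so oversized transactions pay steeply
    for the extra validation work they impose on the network."""
    fee, remaining, rate = 0.0, tx_size, base_rate
    while remaining > 0:
        chunk = min(remaining, threshold)
        fee += rate * chunk
        remaining -= chunk
        rate *= factor
    return fee

print(required_fee(50_000))    # 50,000 sats: normal transaction, linear fee
print(required_fee(400_000))   # 1,500,000 sats: 8x the size, 30x the fee
```

Such a schedule prices in roughly what is described above: the users of excessively sized transactions, rather than everyone else, carry the cost they impose.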