What do Classic developers think of BIP131?

priestc

Member
Nov 19, 2015
94
191
I am the author of BIP131. It is a proposed enhancement to the protocol that makes on-chain scaling more feasible. You can read the text of the BIP here: https://github.com/bitcoin/bips/blob/master/bip-0131.mediawiki

I wrote the text of the BIP to be easy to understand. Even if you're not a cryptography expert, you should still be able to read the BIP and understand what's going on, as long as you know the basics of how bitcoin works.

I've heard Gavin say multiple times at various speaking events something like "I wish satoshi had included some way to include multiple inputs to a transaction without having to list out every single one". Well, this BIP does exactly that.

I've already spent a lot of time discussing this with core developers, but I have not heard the opinions of anyone from the Classic team.

Previous discussions are here:

Me and luke jr:
https://github.com/bitcoin/bips/pull/268

Me and sipa:
https://github.com/bitcoin/bips/pull/353

Me and G. Maxwell:
https://bitcointalk.org/index.php?topic=1377298.0

As you can see, the Blockstream establishment does not like this change. I don't think any of their arguments hold any weight.

I want to know what the Classic developers think of this change.
 

jl777

Active Member
Feb 26, 2016
279
345
iguana internally coalesces as much as possible, since a lot of txs are to the same addresses. But each signature is different (might want to clarify that in the BIP text), incompressible, and unique, so your idea makes too much sense.

address reuse???? What are people supposed to do if they have more than one unspent to the same address? Just never spend it?

And almost everybody has public BTC addresses, so each payment they receive is a potential duplicate.

Assume there is a tx with multiple vins from the same address, i.e. a tx that would be:

sig0 pubkey | voutscript
sig1 pubkey | voutscript
sig2 pubkey | voutscript
sig3 pubkey | voutscript

For a tx spending funds that have been sent to the same pay-to-pubkeyhash script. So even using compressed pubkeys, we are looking at (33 + 72) * 3 bytes of data that never has to be on the blockchain, i.e. 315 bytes of savings, at the cost of a little encoding. I did not look at the encoding in detail, but I am not smart enough to see any downside, and this tx is created and signed by the node with the privkey; instead of doing it 4 times, it is done once. So it saves CPU time too.
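For concreteness, a little sketch of that arithmetic (the function name is mine, and the sizes are the usual rough figures, not exact):
Code:
    #include <stdio.h>

    // rough sizes: ~72-byte DER signature plus 33-byte compressed pubkey,
    // repeated for every duplicate-script input beyond the first
    #define SIG_BYTES 72
    #define PUBKEY_BYTES 33

    // hypothetical helper: bytes saved when n vins share one voutscript
    static int coalesce_savings(int n)
    {
        return n < 2 ? 0 : (SIG_BYTES + PUBKEY_BYTES) * (n - 1);
    }

    int main(void)
    {
        printf("%d bytes saved\n", coalesce_savings(4)); // prints 315, as above
        return 0;
    }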

Maybe just make the tx that is signed have all the voutscripts put into the vin slots when calculating the signature. And maybe define a new SIGHASH type instead of a tx version; it is, after all, about the signing.

#define SIGHASH_MULTIPLE 8

Something like that. I am not 100% sure about constraints in the existing implementations regarding sighash types, but in iguana I have one function that just needs to be changed, and supporting SIGHASH_MULTIPLE would just verify all vins with a matching voutscript, so I don't think anything else needs to be changed.

I have a verify_vins function:
Code:
    msgtx->vins[vini].spendscript = vp->spendscript;
    msgtx->vins[vini].spendlen = vp->spendlen;
    msgtx->vins[vini].sequence = vp->sequence;
changing the above to:

Code:
    for (i=0; i<numvins; i++)
    {
        // set the spend script on vin i if it is the vin being signed, or if
        // SIGHASH_MULTIPLE is set and vin i spends an output with the same
        // voutscript (compare lengths before comparing bytes)
        if ( i == vini || (hashtype == SIGHASH_MULTIPLE && vp->spendlen == msgtx->vins[i].spendlen && memcmp(vp->spendscript,msgtx->vins[i].spendscript,vp->spendlen) == 0) )
        {
            msgtx->vins[i].spendscript = vp->spendscript;
            msgtx->vins[i].spendlen = vp->spendlen;
            msgtx->vins[i].sequence = vp->sequence;
        }
    }
I think the above is pretty close to implementing your BIP. Some more bookkeeping is needed to mark all the vins as already signed, but not much. Both signing and verifying use the same verify_vins, so the above might be the only change needed, but I would need to look a bit deeper to make sure.

James
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
The privacy argument is as follows. Let's say addresses A and B both pay to C. Now if I kidnap A and force him to tell me who C is, then I also know who B paid.

However, let's be realistic: everybody uses public addresses, and no change in plumbing that is not understood by 99.99% of users is going to make a whit of difference. This proposal is orthogonal to that issue, but it actually might be very valuable for micropayments of a non-payment-channel sort. Devs who want to fix the address-reuse issue should fix the user experience, not block plumbing optimizations.

So without examining the technicalities of the proposed methodology I'd support the concept (BU developer).
 
  • Like
Reactions: priestc

jl777

Active Member
Feb 26, 2016
279
345
Your wallet optimizes tx size if and only if it has multiple unspents to the same address; I cannot see that as incentivizing people to reuse addresses. They get no direct benefit from it, and if they care about privacy, they wouldn't reuse addresses anyway.

It is an under-the-hood change, but one that might well reduce the average size per tx significantly.
 
  • Like
Reactions: priestc

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
I think coalescing transactions could be useful in some cases. The best way to make them work would be to use QR codes or similar, where you switch to another address once you have coalesced ("used up") the old address. This would not prevent other users from continuing to pay the old address, but in practice it would minimize it.

This pretty much counters most of the core devs' criticism - just allow users to design around the system instead of straitjacketing them.

However, one point remains, and that is how to implement this efficiently. Gmax notes that you'd have to comb through the UTXO set, and that takes time. Maybe this would work best in conjunction with @jl777 's optimizations, and in any case together with a hard fork with other features.
 

jl777

Active Member
Feb 26, 2016
279
345
a wallet has to comb through the whole UTXO set whenever it creates a new transaction.
it ALREADY does this

if the BIP is changed to add a new SIGHASH type, then it only affects the signing and nothing else: no change to transaction formats/versions, etc.

I posted the code changes needed above. Granted, for the reference bitcoind it would probably take more than 5 lines of code changes, but it still can't take that much effort to do.
 

priestc

Member
Nov 19, 2015
94
191
Hi James, thanks for your post,

I think an actual implementation of this BIP will require more changes than what you posted. The wildcard input is supposed to include every UTXO in the entire mempool, which implies some kind of database query, which I don't see in your code.

Also, in your post you bring up the idea of implementing the "wildcard bit" as an opcode or sighash type rather than via the tx version field... I guess it doesn't matter. Whichever is easiest and makes the most sense to implement is how it should be.
 

jl777

Active Member
Feb 26, 2016
279
345
A SIGHASH type only affects signatures, and I believe it is all that is needed to make a combined signature. I think your usage of "aggregate" confused the core devs, as that has a specific meaning in cryptography, i.e. aggregating sigs from many different keys.

Your use case is just having one signature apply to multiple things in the same tx. So all the vins for that tx are already selected, and it just affects signrawtransaction and transaction validation, not the creation of the transaction.

It has nothing to do with the mempool. Using the mempool or not is a wallet-local implementation detail and should not be part of the protocol.
 

priestc

Member
Nov 19, 2015
94
191
jl777: I think you have the mempool and the UTXO pool confused. The mempool contains unconfirmed transactions; the UTXO pool contains all confirmed unspent outputs found throughout the entire blockchain. The terms sound similar, but they are very different.

My main language is not C++, so I have not yet looked much at the bitcoin codebase and I don't know for sure, but I assume the UTXO pool is implemented through a database layer (which would be LevelDB). If this is true, then the LevelDB indexes and scanning algorithms should ensure there are no major performance implications. The UTXO pool as of a few days ago is 35 million rows. If those 35 million rows are not handled by some kind of database apparatus, then implementing this BIP will have to come after doing that.
 

freetrader

Moderator
Staff member
Dec 16, 2015
2,806
6,088
Chris, have you tried talking to the Classic developers directly on their Slack channel? Might be worth a try. FYI, the right channel to use would be #implementation.

https://bitcoinclassic.slack.com

If you want, I could post a link there referring them to this thread here, but I'm not sure which of them have forum accounts here.
 

Gavin Andresen

New Member
Dec 9, 2015
19
126
> I've heard Gavin say multiple times at various speaking events something like "I wish satoshi had included some way to include multiple inputs to a transaction without having to list out every single one". Well, this BIP does exactly that.
Weird, I don't remember ever saying anything like that.

I don't understand this wording in the BIP:

> A wildcard input beings the value of all inputs with the exact same scriptPubKey in a block lower or equal to the block the wildcard input is confirmed into.

"brings the value of all inputs..." ? And by "lower" I assume you really mean any previously unspent transaction outputs up to the point where the transaction is included in the chain?

That means all full-node implementations need to maintain an index of scriptPubKeys that are in the UTXO set. Which isn't great-- new features that cost something even if they are never used are generally a bad idea, unless you're certain they'll be used most of the time. If coalescing transactions are almost never used, full nodes will waste a lot of time adding and removing entries in the unspent-scriptPubKeys multimap.
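To make the churn concrete, a back-of-envelope sketch (the per-block numbers are illustrative assumptions, not measurements):
Code:
    #include <stdio.h>

    int main(void)
    {
        // illustrative per-block figures, not measurements
        int txs_per_block = 2000;
        double outs_per_tx = 2.5;   // index inserts (new unspent scriptPubKeys)
        double ins_per_tx = 2.5;    // index removals (spent outputs)
        int wildcard_lookups = 0;   // queries, if the feature goes unused

        double writes = txs_per_block * (outs_per_tx + ins_per_tx);
        printf("index writes per block: %.0f, lookups: %d\n",
               writes, wildcard_lookups);
        // ~10000 insert/remove operations per block that buy nothing
        // unless coalescing transactions actually appear
        return 0;
    }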

I don't think the benefits outweigh the costs.
 
  • Like
Reactions: Peter Tschipper

priestc

Member
Nov 19, 2015
94
191
I think you're overestimating how expensive an index is. In all my years of building stuff, I've never once had to remove a db index for performance reasons. Indexes only really become a performance problem when you're updating orders of magnitude more often than reading. Have you ever used a database index before?

The idea behind the BIP is to make on-chain micropayments feasible. People always talk about how terrible bitcoin is for micropayments, but I think bitcoin is actually *very good* at doing micropayments. The only problem with on-chain microtransactions is the problem this BIP solves.

Without this BIP, on-chain microtransactions are pretty much dead in the water. No one is going to spend $50 in fees to move 10,000+ inputs worth $50. Microtransactions will move off-chain, à la the Brave browser, where the off-chain entity has to take a cut of the donations.

> If coalescing transactions are almost never used,

Lots of people reuse addresses. Lots of people will use this feature if it becomes part of the protocol. If it saves users money, they will be motivated to use it.

> I don't think the benefits outweigh the costs.

This is the kind of stupidity that I'd expect from GMaxwell, but not you. You realize that bitcoin runs on computers, and computers are machines? It's not like there's a tiny human being inside your computer who can storm off if you give it too much work. The machine can handle a few extra instructions.

It's a shame that such an improvement will never go live for such petty reasons.
 

Gavin Andresen

New Member
Dec 9, 2015
19
126
Have I ever used a database index before: Yes, I spent a few years writing a content management system that used MySQL.

If you want to convince me, run some numbers -- take the last few blocks, and compute:

+ How many transactions would benefit from this BIP
+ How many wouldn't
+ How much smaller blocks would be if all transactions that could benefit upgraded to use coalescing transactions
+ How much extra work miners would need to do to keep the index up-to-date (how many inserts/removes versus lookups, again assuming that all transactions that could benefit, did benefit).
 

jl777

Active Member
Feb 26, 2016
279
345
I think aggregating the sigs for the inputs used to fund the tx makes sense, as it either saves space or has no effect.

Why not split this into 2 BIPs? One for a new SIGHASH mode and one for searching all the unspents.

Also, searching all the unspents is an internals issue, and I don't think it should creep into the protocol. Some implementations will be fast and others slow, but both would conform to the protocol.
 

priestc

Member
Nov 19, 2015
94
191
> Have I ever used a database index before: Yes, I spent a few years writing a content management system that used MySQL.
Then you should know how little effect indexes have on performance. Maybe if you have a table with 150 columns and an index attached to every single one, that will result in performance problems. In the case of the UTXO database, there are 4 columns (txid, amount, block_height, and scriptPubKey), and only one of those columns needs an index (scriptPubKey). One single database index is not going to affect performance in any major way. The 25 extra CPU cycles per write are, in my opinion, worth it when you consider that it allows people to use bitcoin in a *monetary capacity* that was not previously possible.
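To be concrete, here's a sketch of the row layout I have in mind (field names and the fixed 25-byte script are illustrative guesses, not Core's actual LevelDB schema; a real row would also need the output index to identify the outpoint):
Code:
    #include <stdint.h>

    // hypothetical UTXO row with the four logical columns described above;
    // Bitcoin Core's actual on-disk layout differs
    struct utxo_row {
        uint8_t txid[32];         // transaction id
        int64_t amount;           // value in satoshis
        int32_t block_height;     // confirmation height
        uint8_t scriptPubKey[25]; // p2pkh script; the only column needing an index
    };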

> If you want to convince me, run some numbers -- take the last few blocks, and compute:

> + How many transactions would benefit from this BIP
Any transaction that has an output that uses a non-unique scriptPubKey.

> + How many wouldn't
Any transaction whose outputs all go to unique scriptPubKeys (unique across the entire blockchain).

I'm not sure how to get the actual numbers. I asked this a few days ago: http://bitcoin.stackexchange.com/questions/43725/where-can-i-query-the-utxo-database but no answers yet. I predict 20% to 60% of all unspent outputs belong to non-unique scriptPubKeys.

> + How much smaller blocks would be if all transactions that could benefit upgraded to use coalescing transactions
It's not about keeping blocks smaller, even though it will incidentally have that effect. The main purpose of this is to save people money on fees. The main factor keeping microtransactions off-chain is that fees are so high when spending money received through microtransactions.

I'm going to go off on a rant now... Humans are the masters and the computers are the slaves, not the other way around. When you say "how much smaller blocks would be", it sounds like you're negotiating on behalf of the computer. The computer does not get to decide how big blocks get to be; humans do. It's like a plantation owner negotiating with his slaves on how much cotton will be picked in a day. To me this is completely backwards. It would be one thing if there were an actual human being inside the computer who has feelings and does the work with pen and paper, but instead it's just electricity and silicon. Blocks don't need to be any smaller than they already are, as that would mean humans get a degraded experience.

If your slaves can't handle picking the cotton they have been assigned, then you either whip your slaves harder (this is a terrible example) or you buy more slaves. If your node can't handle the size of blocks, then you upgrade the machine. Either way, there is no negotiating. OK, rant over.

> + How much extra work miners would need to do to keep the index up-to-date (how many inserts/removes versus lookups, again assuming that all transactions that could benefit, did benefit).
Each time the UTXO database is updated, the index has to be updated. I don't know how many CPU cycles get used in updating an index, but it can't be much. It's really hard to tell what the exact benefit will be, due to the chicken-and-egg problem: people aren't doing on-chain microtransactions today because it's not profitable. If this change goes into the protocol, people are incentivized to move on-chain.
 

jl777

Active Member
Feb 26, 2016
279
345
CPU usage has nothing to do with index performance. iguana doesn't even use a DB, and still HDD performance becomes an issue when needing to do seeks. One seek is 10 milliseconds, which nowadays is time enough for 100 million+ CPU operations.

So the performance question is the number of DB requests and how they perform on a largish dataset, on home computers and smartphones.

I like the idea of removing redundant info, i.e. multiple sigs from the same signer in the same tx. This is achieved with a new SIGHASH mode without any other changes needed.

Combining that with mandatory DB searching, and making that part of the protocol, assumes that all implementations have a DB, or an efficient one, or can achieve the same result. So this is getting into some very implementation-specific things.

James

P.S. Try doing an importprivkey and measure how long it takes with bitcoind, with and without txindex=1.
 

priestc

Member
Nov 19, 2015
94
191
> CPU usage has nothing to do with index performance.
There are two aspects to database indexes: updating the index and querying through the index. When querying through an index, you are right, it's the underlying hardware that determines performance. But when updating the index, you have to do a bit of computation as well as one extra write operation. The amount of CPU it takes to update a db index is a function of how big the index is.
> I like the idea of removing redundant info, i.e. multiple sigs from the same signer in the same tx. This is achieved with a new SIGHASH mode without any other changes needed.
That will only result in a roughly 20% decrease in fees. Full wildcard input support can potentially cut fees by a factor of 100 or more.

The formula for determining fees today is:

fee = (Nouts * 34 + 148 * Nins + 10) * recommended_fee_per_byte

so if you have 2 inputs, 3 outputs, and the recommended fee is 80 satoshi/byte, your fee is:

fee = (3 * 34 + 148 * 2 + 10) * 80
fee = 32640 satoshi

with your proposal, where each input keeps its ~76 non-signature bytes and one shared ~72-byte signature is added back once per transaction, the formula becomes

fee = (Nouts * 34 + 76 * Nins + 10 + 72) * recommended_fee_per_byte

or

fee = (3 * 34 + 76 * 2 + 10 + 72) * 80
fee = 26880 satoshi

or a decrease in fee of roughly 18% for a typical transaction.

with wildcard inputs, the upside is much larger:

the new formula for determining fee is as follows:

fee = (Nouts * 34 + 158) * recommended_fee_per_byte

(essentially always one 148-byte input plus 10 bytes of overhead, i.e. 158 bytes, no matter how many *actual* inputs get swept)

for the same 2-input / 3-output transaction as in the above example, coalescing both inputs instead of listing them explicitly, the fee is:

fee = (3 * 34 + 158) * 80
fee = 20800 satoshi

or a reduction in fee by 36%

The best savings come from large numbers of inputs. The cost to include an input in a transaction is the fee divided by the number of actually spent inputs:

A typical transaction with a fee of 32640 satoshi and 2 inputs has a cost of 16320 satoshi per input. For a coalescing transaction sweeping 10,000 inputs with 3 outputs, the cost is 20800 / 10000 ≈ 2.08 satoshi per actually spent input. A 10,000-input transaction without coalescing will cost (3 * 34 + 148 * 10000 + 10) * 80 = 118408960 satoshi in fees, or about 11841 satoshi per actually spent input; in other words, a savings of over 99.9%.
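Putting the three formulas in one place (a quick sketch; the function names are mine, the constants are the ones used above):
Code:
    #include <stdio.h>

    // fee formulas from this thread, in satoshis; rate is sat/byte
    static long fee_regular(long nins, long nouts, long rate)
    {
        return (nouts * 34 + 148 * nins + 10) * rate;
    }
    // one shared signature covers all same-script vins
    static long fee_shared_sig(long nins, long nouts, long rate)
    {
        return (nouts * 34 + 76 * nins + 10 + 72) * rate;
    }
    // one wildcard input, however many utxos it sweeps
    static long fee_wildcard(long nouts, long rate)
    {
        return (nouts * 34 + 158) * rate;
    }

    int main(void)
    {
        long rate = 80;
        printf("regular 2-in/3-out:    %ld\n", fee_regular(2, 3, rate));     // 32640
        printf("shared-sig 2-in/3-out: %ld\n", fee_shared_sig(2, 3, rate));  // 26880
        printf("wildcard 3-out:        %ld\n", fee_wildcard(3, rate));       // 20800
        printf("regular 10000-in:      %ld\n", fee_regular(10000, 3, rate)); // 118408960
        return 0;
    }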
 

jl777

Active Member
Feb 26, 2016
279
345
most transactions use a bit less than 10000 inputs...

Let us be more realistic in the cases we analyze.

Why not use iguana indexes to specify the spend? Then it is ~8 bytes per (txid/vout) vs (32 + 2).

The cost per tx is all about the blockchain space it takes, so to reduce the cost we could create a highly efficient encoding language so any transaction can be specified very efficiently: maybe a way to encode a list of destinations and a list of input addresses, so future referrals to them can be done with an index into the list instead of the list itself. And why not require whatever sort of lookup and optimization logic to also be encoded...

The reality is that any change that requires modification to all daemons and wallets will be very difficult to get adopted.

Purely local optimizations do not need any approvals or changes to the protocol.

It is the wallet that selects the inputs to spend. It is the daemon that does the signing. To me this indicates that two different BUIPs are needed to cover the scope of what you want to do.

James
 
