Gold collapsing. Bitcoin UP.

My point is that it's not an issue for the larger community until it actually starts causing problems. Whatever the position of most of us in this thread, we are used to paying fairly close attention and extrapolating conditions. When we see that BTC is going to hit the 1MB limit in two years' time, we say "Guys, isn't it time we start doing something?". But there's a large part of the community which is all about moon-lambos and trusting "the world's greatest developers" as long as the price keeps wandering upwards. This was, and continues to be, BCH's problem with displacing BTC, and it will be BSV's problem with displacing BCH if they fail to address the limit (which may still be several years away as an issue).

Of course, if you buy into BSV as a filesystem, you'll probably take a different position, but that's a different discussion.
Yes, maybe it is not much of a problem, idk.

For me, I joined BU in 2016 because I wanted a permanent solution. That solution was to replace developer consensus with emergent consensus. ABC has made it very clear they will not allow this and instead want central planning of capacity.

It's more of a mindset. You see the central-planning-by-devs approach in BCH all the time, and as so often with devs, they dev for devs and want to decide what's important. You also get constant political gaming when devs do the central planning, and since devs have no natural power in the system, you get PoSM.

Just ask yourself: if you have the choice between a permanent solution - miner/emergent consensus instead of dev central planning - and a solution that requires trusting devs indefinitely - which would you pick?

And the file system... Well... This is not what I expected, but given that a thousand coins compete to be paycoins in a tiny market of people actually wanting to use them, it seems like a good way to survive the time needed. Also, it increases the requirements for wallets and payment solutions, as users need to create complicated transactions without knowing it. This improves the payment functionality itself.

Also, BSV has proven it can provide capacity up to 128MB blocks, and soon much more. BCH just promises it for the future, while not allowing stress tests to prove anything.
 

Norway

Well-Known Member
why is IBS an improvement over xthin, ignoring that xthin was the root of the bug attacks x3, and that patches have been applied?
Short version: As a miner, you stream the block you are building on the fly to the other miners. When you find a block, the other miners already know what your block looks like. All you have to send is the block header. Block propagation time is less than 1 second, no matter how big the block is.

You should read the paper. It's just 2.5 pages with graphics and lots of white space. IBS Paper
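To make the short version concrete, here's a toy sketch in Python (my own hypothetical names, not the wire protocol from the paper). The miner streams each transaction as it is added to the block candidate, so when the block is found, the header is all that's left to send:

```python
# Toy sketch of incremental block streaming (hypothetical names, not
# the actual IBS wire protocol).
import hashlib

def txid(raw_tx: bytes) -> bytes:
    """Double-SHA256, as Bitcoin uses for transaction ids."""
    return hashlib.sha256(hashlib.sha256(raw_tx).digest()).digest()

class MinerStream:
    """Miner side: stream each tx to peers as it enters the candidate."""
    def __init__(self, peers):
        self.peers = peers
        self.candidate = []                # txs in candidate order

    def add_to_candidate(self, raw_tx: bytes):
        self.candidate.append(raw_tx)
        for peer in self.peers:            # stream on the fly
            peer.receive_streamed_tx(txid(raw_tx), raw_tx)

    def announce_block(self, header: bytes):
        for peer in self.peers:            # header only; peers have the rest
            peer.receive_header(header)

class PeerView:
    """Peer side: the streamed order is the block order, so when the
    header arrives, the block is already assembled locally."""
    def __init__(self):
        self.streamed = []                 # (txid, raw_tx) in received order

    def receive_streamed_tx(self, tid: bytes, raw_tx: bytes):
        self.streamed.append((tid, raw_tx))

    def receive_header(self, header: bytes) -> list:
        return [raw for _, raw in self.streamed]
```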
 

cypherdoc

Well-Known Member
i did read your paper, and well done. it's just that i happened to reread the xthin tldr and wondered how IBS would be better. isn't it the same general concept, except using bloom filters? don't bloom filters take up even less bandwidth?
 

Norway

Well-Known Member
isn't it the same general concept, except using bloom filters?
No, it's different. With IBS, you stream the block while mining. When you find the block, the other miners already know what the block looks like. With Xthin, you send the whole filter after a block is found.

It's possible to stream different kinds of info to identify the transaction itself:
- The full transaction.
- The INV message (we used this in our paper).
- The tx ID (I think the Bitcoin SV node is going for this one).
- Another type of dynamic Bloom filter/IBLT (Invertible Bloom Lookup Table), like Xthin or Graphene.

EDIT:
I wrote "dynamic" because the population of transactions for a filter would change constantly over time. Then it hit me: the data to identify the transactions could be compressed A LOT if the population for a filter is just the transactions received in a - say 10 second - window. This would make the filter really tiny compared to a filter for 10 minutes. (y)

EDIT2: Let's say you identify a tx in a 10 second window population with just 3 bytes (16,777,216 combinations). That would cover a helluva lot of traffic.

EDIT3: But this optimization would come at a cost. Maybe simple is better. With sub-second block propagation independent of blocksize, the bandwidth may not be a bottleneck at all. Maybe a filter like this would in fact introduce a bottleneck.
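EDIT4: For the technically inclined, a toy sketch of the window idea (a hypothetical scheme of my own, just to make the EDIT2 numbers concrete): hash each txid down to 3 bytes, scoped to a 10-second window, and fall back to the full id on collision.

```python
# Toy sketch of window-scoped 3-byte short ids (hypothetical scheme).
import hashlib
import time

WINDOW_SECONDS = 10

def current_window() -> int:
    return int(time.time()) // WINDOW_SECONDS

def short_id(txid: bytes, window: int) -> bytes:
    # Salt with the window number so an id is only meaningful within
    # one 10-second population.
    h = hashlib.sha256(window.to_bytes(8, "little") + txid).digest()
    return h[:3]                          # 3 bytes = 16,777,216 values

class WindowEncoder:
    """Sender side: emit 3-byte ids, full 32-byte id on collision."""
    def __init__(self):
        self.window = current_window()
        self.seen = {}                    # short id -> txid this window

    def encode(self, txid: bytes) -> bytes:
        w = current_window()
        if w != self.window:              # new population, reset the map
            self.window, self.seen = w, {}
        sid = short_id(txid, w)
        if self.seen.get(sid, txid) != txid:
            return txid                   # collision: send the full id
        self.seen[sid] = txid
        return sid
```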
 

cypherdoc

Well-Known Member
great. so then how is it different from a relay network, which sends txs amongst miners on the fly as well?
 

Norway

Well-Known Member
It's not a relay method.
(Page 1, paragraph 4 in the paper).

Let's take a sci-fi scenario. A BSP (Blockchain Service Provider) pays each miner to send their IBS stream to him (miners need incentives). EDIT: 1 miner could send all the IBS streams to the BSP.

The BSP can now confirm to a (paying) merchant that x% of the hashpower has included a transaction in their block candidates. In practice, this would be like a 0.99 confirmation in seconds.
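A rough sketch of the BSP's bookkeeping (hypothetical code, not from the paper): track which miners have streamed a tx into their candidate, and weight by hashpower.

```python
# Sketch of the BSP idea: weight candidate inclusion by hashpower.
class BSP:
    def __init__(self, hashpower: dict):
        # hashpower: miner id -> estimated fraction of total hashrate
        self.hashpower = hashpower
        self.candidates = {m: set() for m in hashpower}

    def on_streamed_tx(self, miner: str, txid: bytes):
        self.candidates[miner].add(txid)

    def inclusion_weight(self, txid: bytes) -> float:
        """Fraction of hashpower whose candidate includes txid."""
        return sum(hp for m, hp in self.hashpower.items()
                   if txid in self.candidates[m])

# A merchant might accept when, say, 95% of hashpower is mining on a
# candidate containing the payment.
bsp = BSP({"pool_a": 0.4, "pool_b": 0.35, "pool_c": 0.25})
bsp.on_streamed_tx("pool_a", b"\x01" * 32)
bsp.on_streamed_tx("pool_b", b"\x01" * 32)
print(bsp.inclusion_weight(b"\x01" * 32))   # 0.75
```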
 

cypherdoc

Well-Known Member
For me, I joined BU in 2016 because I wanted a permanent solution. That solution was to replace developer consensus with emergent consensus. ABC has made it very clear they will not allow this and instead want central planning of capacity.
agreed. i think the key point to understand regarding the blocksize limit is where the exact point of coordination for ABC and Core lies: in their github repositories, where any protocol changes need to be signed off by the github key holders (the devs). BU was designed to move blocksize decision making away from the github repository and its highly centralized nature and out to the actual network (where decentralization is the emphasis), via miner "signaling" using the user agent string (a strategy which i never agreed with) and the feedback generated by actual blocksizes produced (based on the creeping linear blocksize increases produced between 2009 and when we hit the 1MB limit a couple of years ago). that was the definition of "emergent consensus", ie, let the relevant market economic actors out on the network (miners, users, merchants, payment processors, etc.) work out the tx fees based on supply and demand, which then determined the resultant blocksizes produced, while removing the devs from the process. anyone closely examining what ABC has done and said in this regard should realize that what they are doing is NOT emergent consensus as originally defined, as their coordination point for blocksizes "allowed" has been forcibly moved back to the github.
 

Epilido

Member
This binary thinking regarding security is part of how bitcoin got on the wrong track for many years.
Are you ascribing this "binary thinking" to me?

Your arguments for wasting the useful data of transaction order for CTOR because it's not 100%
I have never made an argument for or against CTOR.

If you can't step out of a theoretical binary mindset, I can't explain to you how this ordering data may have value in the future. Remember, bitcoin was never 100% or binary in terms of security. It's an economic game where honesty is rewarded.
You seem to know my mindset better than I do.

From your example:
The board calls for a meeting with the CEO. There have been large sums of money going to and from the company account. The CEO is the only person who can sign transactions on behalf of the company, and the two transactions happened in the same block.

"What happened?", asks the chairman.
"Oh, nothing important. I just sent money to the company account by mistake and retrieved it.", answers the CEO.

With BCH, the board has to accept this; they have no indication that anything else happened.

What really happened was that the CEO was drowning in debt from coke, hookers and gambling. In a desperate move, he borrowed money from the company, placed a large bet at an online casino, won(!) and paid the company back just in time, before a new block was found.

Assuming bitcoin is unfucked and miners see the value of ordering transactions in blocks as they arrive, the CEO would not get away with these shenanigans on BSV.
Your example seems to have a few problems.

The "casino" accepts zero conf for large sums. Then they also pay out immediately. I find this dubious.

The CEO could also just say he accidentally sent some money out and returned it.

The recourse for the chairman in either case would be to have the CEO sign a message with the private key of the incoming transaction. If he had sent the funds to a casino, he would be unable to sign and would be shown to be a liar.
 

Norway

Well-Known Member
@Epilido
I didn't mean to strawman or offend you in any way. I tried to explain my point of view. I will not try to convince you about the value of transactions being ordered as the miner sees them. I don't have to convince anyone.

But I do believe it will be possible, within 4 years, to have a reliable timestamp for transactions down to milliseconds, based on the tx order and statistics ;)
 

cypherdoc

Well-Known Member
who determines what gets merged?
 

Norway

Well-Known Member
The owner of the repo, I guess.

Wladimir van der Laan can be the boss of an onchain GitHub repo too o_O

You can fork the code if the right license is in place.
 

79b79aa8

Well-Known Member
a select group of devs having control over the repository and the protocol was the glaring single point of failure for BTC. yet the problem got exactly replicated in BCH, with analogous consequences resurfacing immediately. the model does not work, it did not get fixed in BTC, and i don't anticipate it will be fixed in BCH. for even if --improbably-- miners tire of ABC and make BU the reference implementation, the can will just have been kicked down the road, with the BU membership now acting as gatekeepers to what purports to be a decentralized financial instrument, liable to make similar mistakes as their predecessors, and exposed to similar flak.

power is corrosive. bitcoin aims to disperse all power among a group of economically vested profit-seeking parties, the miners. when someone other than those parties wields any power, the system is not operating as designed. the facts witnessed over the last half decade bear this out.

before i get told that BSV is no different from BTC and BCH, as it is currently under fully centralized development control: yes it is, currently. but they have a plan to definitively abandon the model, and appear to be steadily bringing it to fruition, opening prospects not available for the other variants. to present and bring about a solution to the single point of failure requires clarity not on display elsewhere. this, not the ability or willingness to satisfy requirements imposed by entitled newcomers, is the important datum.
 

Richy_T

Well-Known Member
We'll have to see if they maintain the will to do so and, even if they do, whether that is attainable. It's the biggest problem I have with crypto at the moment, and I'm quite pessimistic about it being a solvable issue.
 

trinoxol

Active Member
I just noticed that there is an impediment to parallelizing the processing of an incoming block: The transactions are serialized linearly in the block. The format is "[count of transactions] [tx1] [tx2] ...". This means that transactions must be deserialized sequentially. There is no random access.

This means that deserialization cannot be parallelized. I wonder what the single-core speed is for deserializing a stream of transactions. Deserialization can be a fairly slow process.
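To illustrate (a simplified sketch, legacy tx format only; real deserialization does much more work): locating transaction N requires walking transactions 0..N-1, because each transaction's length is only known after parsing its structure.

```python
# Why block parsing is sequential: tx N can't be found without first
# walking tx 0..N-1. Handles the legacy (non-segwit) format only.
def read_compact_size(buf: bytes, pos: int) -> tuple:
    """Bitcoin's variable-length integer ("CompactSize")."""
    first = buf[pos]
    if first < 0xfd:
        return first, pos + 1
    width = {0xfd: 2, 0xfe: 4, 0xff: 8}[first]
    return int.from_bytes(buf[pos+1:pos+1+width], "little"), pos + 1 + width

def skip_tx(buf: bytes, pos: int) -> int:
    """Walk one transaction to find where it ends."""
    pos += 4                                    # version
    n_in, pos = read_compact_size(buf, pos)
    for _ in range(n_in):
        pos += 36                               # prev txid + output index
        slen, pos = read_compact_size(buf, pos)
        pos += slen + 4                         # scriptSig + sequence
    n_out, pos = read_compact_size(buf, pos)
    for _ in range(n_out):
        pos += 8                                # value
        slen, pos = read_compact_size(buf, pos)
        pos += slen                             # scriptPubKey
    return pos + 4                              # locktime

def tx_offsets(block: bytes) -> list:
    """Byte offset of each tx; inherently a sequential scan."""
    n_tx, pos = read_compact_size(block, 80)    # skip 80-byte header
    offsets = []
    for _ in range(n_tx):
        offsets.append(pos)
        pos = skip_tx(block, pos)
    return offsets
```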

For that reason, it seems bad to IBS-stream the raw block data. It would be better to stream individual transactions.

Or, as @Norway points out, we can stream TxIDs or use even more sophisticated synchronization algorithms. If we transmit each transaction as an 8-byte identifier and assume an average size of 200 bytes per transaction, we obtain a 25x data reduction, which is very powerful.

That way, we can use bandwidth-efficient gossip synchronization for transactions. The quadratic cost of all miners IBS-streaming to all other miners is then only incurred for the much smaller TxIDs.

When a block is discovered there would be a final synchronization step in which the receiving peers ask the sender for any transactions which are still missing. This data should be the size of a few seconds worth of transactions. It's small.
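The receiving side could look something like this (hypothetical sketch; the message names are made up):

```python
# Resolve 8-byte short ids against the local mempool, then fetch
# whatever is still missing from the sender.
def resolve_block(short_ids: list, mempool: dict, fetch_missing) -> list:
    """
    short_ids:     8-byte ids in block order.
    mempool:       short id -> raw tx for everything seen via gossip.
    fetch_missing: callback asking the sender for unknown ids.
    """
    missing = [sid for sid in short_ids if sid not in mempool]
    if missing:                        # a few seconds' worth at most
        for sid, raw_tx in fetch_missing(missing).items():
            mempool[sid] = raw_tx
    return [mempool[sid] for sid in short_ids]
```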

Also, newer CPU models have SHA256 hashing helper instructions which give us 2.1 GB/s on a single core. https://crypto.stackexchange.com/questions/50618/how-fast-can-a-sha-256-implementation-go/50620
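For scale, my own back-of-envelope based on that number:

```python
# Back-of-envelope: time to SHA256 a 128MB block's data at 2.1 GB/s on
# one core (ignoring the second hash pass and parsing overhead).
block_bytes = 128 * 1024 * 1024
rate = 2.1e9                                   # bytes/second
print(f"{block_bytes / rate * 1000:.0f} ms")   # ~64 ms
```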
 

theZerg

Moderator
Staff member
IBS is like the expedited flavor of xthin, except that since every forwarded transaction is going to go into the next block, there's no need to send the short hashes of the transactions that have actually gone into the block when the block is discovered.

The order of INV receipt determines the order of txs in the block, so parallelization needs to happen carefully.

Note also that there is a lot of redundancy in having every connected node pass an INV with a 32-byte tx hash (this is a problem today; it's not specific to IBS). You can pass a shorter hash, saving a lot of bandwidth. To avoid deliberate collisions, you could use the siphash scheme, although this is still probabilistic.

You can minimize the probabilistic collisions by making sure your short hash doesn't collide with any prior INVs you've already sent, and if it does, send the whole hash. This eliminates collisions between txs that the same node relays, but there could still be a collision where 2 different nodes relay different txs with the same short hash.
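In sketch form (my hypothetical scheme in the spirit of the above; real siphash isn't in Python's standard library, so a keyed blake2b stands in):

```python
# Short-hash INVs with a per-link salt: send 8 bytes instead of 32,
# falling back to the full hash on a local collision.
import hashlib

SHORT_LEN = 8   # bytes

def salted_short_hash(txid: bytes, salt: bytes) -> bytes:
    # Stand-in for siphash: keyed hash truncated to SHORT_LEN bytes.
    return hashlib.blake2b(txid, key=salt, digest_size=SHORT_LEN).digest()

class ShortInvSender:
    """Send short hashes; fall back to the full hash on collision."""
    def __init__(self, salt: bytes):
        self.salt = salt
        self.sent = {}               # short hash -> txid already INVed

    def make_inv(self, txid: bytes) -> bytes:
        short = salted_short_hash(txid, self.salt)
        prior = self.sent.get(short)
        if prior is not None and prior != txid:
            return txid              # collides with our own prior INV
        self.sent[short] = txid
        return short
        # Two *different* nodes can still collide with each other;
        # the receiver must handle that case.
```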
 

Richy_T

Well-Known Member
I just noticed that there is an impediment to parallelizing the processing of an incoming block: The transactions are serialized linearly in the block. The format is "[count of transactions] [tx1] [tx2] ...". This means that transactions must be deserialized sequentially. There is no random access.
The truth is that there are very few dependent transactions in blocks. I believe I ran the stats once, and the number was almost vanishingly small. So it becomes a fine method to simply defer processing any transactions which can't be immediately validated.
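In sketch form (hypothetical helper names; a sequential illustration of the deferral logic):

```python
# Validate txs in any order, parking the rare tx that spends an output
# we haven't seen yet and retrying once its parent is processed.
def validate_block_txs(txs, utxo_txids, validate_tx):
    """
    txs:         list of (txid, tx).
    utxo_txids:  set of txids whose outputs are currently spendable.
    validate_tx: returns the set of parent txids the tx spends.
    """
    deferred = []
    for txid, tx in txs:
        parents = validate_tx(tx)
        if parents <= utxo_txids:        # all inputs known: accept now
            utxo_txids.add(txid)
        else:
            deferred.append((txid, tx, parents))
    progress = True                      # second pass: almost always tiny
    while deferred and progress:
        progress = False
        for item in deferred[:]:
            txid, tx, parents = item
            if parents <= utxo_txids:
                utxo_txids.add(txid)
                deferred.remove(item)
                progress = True
    if deferred:
        raise ValueError("block contains unresolvable transactions")
```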
 

Norway

Well-Known Member
IBS is like the expedited flavor of xthin, except that since every forwarded transaction is going to go into the next block, there's no need to send the short hashes of the transactions that have actually gone into the block when the block is discovered.
No, IBS is not a flavor of Xthin. IBS can be used with or without a filter. Xthin is a filter.