BUIP010 (passed): Xtreme Thinblocks

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Peter Tschipper:

With 'cut bandwidth in half', I am simply doing the rough approximation that currently, each transaction gets gossiped twice: free-flying and as part of a block. Your code removes 98% of the latter part, so from a bird's-eye view, that should cut Bitcoin's bandwidth use roughly in half, or shouldn't it?
 

Zangelbert Bingledack

Well-Known Member
Aug 29, 2015
1,485
5,585
What does this mean for feasible blocksizes? Like, what blocksize cap could we have, if everyone were using Xtreme Thinblocks, that would still be acceptable to small blockists who think 1MB is the ideal cap now?

2MB? 40MB? 100MB? Or is it more complicated than that?
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@Zangelbert Bingledack : Good question. I'd say at least 2MB as bandwidth was the worry brought forward by all the small blockists in the war. (Personally, I of course think open-ended is nice and absolutely feasible... :D)

I remember people saying (and I think that includes gmax) that the sudden line saturation from block propagation is a huge problem for them, because they lack proper traffic shaping or whatever.

With this patch, we (or rather @Peter Tschipper ) pretty much reduce traffic to just the Geiger tick noise of randomly arriving transactions.

Excuse my language, but I personally think that BC as well as BU should market the shit out of this feature!

BC/BU have not only 'fiddled with some constants' but are adding more and more worthwhile features, including features that matter to miners and their block propagation issues.

The traffic shaping code was nice, but IMO this is the biggest and most user-oriented positive change Bitcoin has seen in a long while.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
Frankly, the new version of XT should also use Peter's Xtreme thinblocks. If Classic used it too then the whole block limit debate would be all but kicked into touch.

Xthin: The rail-gun for blocks

 

Peter Tschipper

Active Member
Jan 8, 2016
254
357
@awemany Yeah, I see where you're coming from with reducing overall bandwidth by half. I suppose not quite half, since a lot of bandwidth also gets used serving up historical blocks. Probably we'll save more like a third. But really this is about block relay, although the bandwidth savings are certainly good to have in addition.
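A quick back-of-envelope (the equal weighting of the three traffic classes below is a placeholder assumption, not a measurement) showing how the "roughly half" and "more like a third" figures arise:

```python
# Back-of-envelope only: weights are placeholder assumptions, not measurements,
# chosen to show how the "half" vs "third" estimates arise.
tx_relay    = 1.0   # gossiping free-flying transactions (normalized units)
block_relay = 1.0   # relaying the same transactions again inside blocks
historical  = 1.0   # serving historical blocks to syncing peers

xthin_block_relay = block_relay * 0.02   # Xtreme Thinblocks strips ~98% of block-relay bytes

savings_ignoring_history = 1 - (tx_relay + xthin_block_relay) / (tx_relay + block_relay)
savings_with_history     = 1 - (tx_relay + xthin_block_relay + historical) / (tx_relay + block_relay + historical)

print(f"savings, ignoring historical serving:  {savings_ignoring_history:.0%}")  # ~49%, "about half"
print(f"savings, including historical serving: {savings_with_history:.0%}")      # ~33%, "more like a third"
```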
 
  • Like
Reactions: auxon

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
This is an exciting project! Congratulations @Peter Tschipper!!

I had always assumed that Bloom filters would be really complicated and hurt my brain, so I didn't actually dig into them until I saw this thread on Reddit. I realize now that they are a very easy-to-understand and elegant data structure!

Just to make sure I understand, am I correct that this is how Node A gets the thin block and any missing transactions from Node B?:

- Node A encodes the transactions in its mempool using a Bloom filter.

- A Bloom filter is just an array that contains m (initially-empty) slots.

- Node A figures out which slots in the array to set to "1" by passing each transaction in its mempool through a special hash function that "maps" each transaction to k of the slots.

- For example, if the node's mempool contained Transactions x, y and z, then these transactions might get mapped into the Bloom filter as shown below (from Wikipedia):

[Wikipedia Bloom filter diagram: x, y and z each map to k bit positions in the m-bit array]
- When Node A makes its "get_data()" request to download the thin block from Node B, it sends this Bloom filter at the same time.

[OK, let's imagine that the block actually contains Transactions x, y and w....]

- Node B hashes Transaction x and confirms that all of the bits that it "maps to" have been set in the Bloom filter. It now knows for sure that Node A has Transaction x.

- Node B hashes Transaction y and again confirms that Node A has Transaction y as well (all its bits were set).

- Node B hashes Transaction w but this time finds that one of the bits that w maps to is NOT set. This means that Node A must NOT know about Transaction w.

- Node B thus sends the thin block (containing 64-bit hashes of all the TXs) to Node A along with Transaction w in full.

- Node A receives the thin block and the missing transaction and can verify that it has the complete block information!
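To make the walkthrough above concrete, here is a minimal, self-contained sketch (a toy illustration, not the BUIP010 code; seeded SHA-256 stands in for the murmur hashes the real filter uses) of how Node A's Bloom filter lets Node B decide which transactions to send in full:

```python
# Toy illustration of the exchange described above (not the BUIP010 implementation).
import hashlib

class ToyBloomFilter:
    def __init__(self, m_bits: int = 1024, k_hashes: int = 4):
        self.m = m_bits
        self.k = k_hashes
        self.bits = bytearray(m_bits)        # one byte per slot, for simplicity

    def _positions(self, item: bytes):
        # Derive k array positions from k seeded digests of the item.
        for seed in range(self.k):
            digest = hashlib.sha256(bytes([seed]) + item).digest()
            yield int.from_bytes(digest[:4], "big") % self.m

    def add(self, item: bytes):
        for pos in self._positions(item):
            self.bits[pos] = 1

    def maybe_contains(self, item: bytes) -> bool:
        # False means "definitely absent"; True means "probably present" (false positives possible).
        return all(self.bits[pos] for pos in self._positions(item))

# Node A: summarize the mempool in a Bloom filter and attach it to the thin block request.
mempool_a = [b"tx_x", b"tx_y", b"tx_z"]
filter_a = ToyBloomFilter()
for tx in mempool_a:
    filter_a.add(tx)

# Node B: the block contains x, y and w; anything the filter rules out is sent in full.
block_txs = [b"tx_x", b"tx_y", b"tx_w"]
full_txs = [tx for tx in block_txs if not filter_a.maybe_contains(tx)]
thin_block = [hashlib.sha256(tx).digest()[:8] for tx in block_txs]   # 64-bit tx hashes

print("sent in full:", full_txs)            # expect [b'tx_w']: A's filter shows it as absent
print("thin block entries:", len(thin_block))
```

The asymmetry is what matters here: a miss in the filter is definitive, while a hit is only probabilistic, which is exactly the false positive question raised below.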

Questions:

1. [theory] Am I correct in assuming that Node B needs to cross-check each TX in the block against Node A's Bloom filter? [I.e., there is no way to take the intersection of two Bloom filters or something faster than checking for elements one by one]

2. [implementation] How many bits long is the Bloom filter used for Xtreme thin blocks (i.e., what is m)?

3. [implementation] What are the hash functions used?

4. [implementation] How many bits in the Bloom filter does each transaction map to (i.e., what is k)?
 
  • Like
Reactions: Mengerian

Peter Tschipper

Active Member
Jan 8, 2016
254
357
@Peter R It's getting late but I'll try to answer:

1) Yes, you are correct: Node B will check every tx in the block against the Bloom filter sent from A. It will know which txs Node A definitely doesn't have. However, there may be some "false positives": it may think that Node A has something when in fact it doesn't. That's one of the limitations of Bloom filters and why we need to be able to re-request a tx if it's missing. But fortunately we can set the false positive rate we are willing to accept.

1b) The checking one by one is actually very fast; there are only 500 to 2000 entries in a block. As far as intersections go, I don't believe you can do that with Bloom filters; there's no way to know, once you've got the intersection, what each entry represents. With IBLT I believe you can do that, but IBLT structures are going to be larger, though perhaps a little faster to create... a trade-off: you lose on the one hand, you gain on the other. I don't really believe it's worth the added complexity, but I don't have numbers to back that up.

2) The size of the Bloom filter and how many bits we will have are again set by us, to some extent. We currently allow a maximum size of 36KB... that will typically allow 20,000 entries at a 0.01 percent false positive rate, if my memory is correct.

3) The hashing algorithm is murmur, which was well chosen by, I believe, Matt Corallo or Mike Hearn when they built the Bloom filter class. Murmur is an extremely fast hashing algorithm, well suited to what we need.

4) It varies. In the beginning, when you start up the client, the Bloom filter will be sparse... 1.5 times the number of memory pool entries and a 0.001 percent false positive rate. However, with so few txs in the memory pool, the Bloom filters are very small, starting at 11 bytes and moving upwards as txs arrive in the mempool. As the mempool grows we use a linear decay to drop the number of bits relative to the mempool entries, as well as to slowly increase the false positive rate. You might think that is backwards from what we want, but it works for our needs and, for one thing, keeps the Bloom filter from getting overly large. More importantly, as the mempools fill and get more in sync, we are in a probabilistic sense leaning on the mempool to have the txs and relying less on the Bloom filter, so we can allow the Bloom filter to degrade and become less sparse the bigger the mempool gets, while still preventing re-requests for txs. Occasionally there is a re-request; so far I see about one a day... so it's a trade-off. If we wanted to use sparse filters all the way, the Bloom filters would end up being double in size... for example, during this latest spam attack we see Bloom filters getting up to 15KB in size, so if we went to sparse filters they would be 30KB; we save 15KB per block request while maybe incurring one re-request for a tx in a day.

The other option I haven't investigated is creating a sparse filter and then compressing it... I read somewhere that sparse filters compress very well, but I haven't done any experimentation on that yet.

As an aside, on a normal day Bloom filters would be in the 1 to 3KB range.
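For readers who want to tie answers 2 and 4 to numbers, here is the textbook Bloom filter sizing math (a generic sketch, not the exact schedule BU uses; the (n, p) pairs below are only illustrative):

```python
# Textbook Bloom filter sizing (illustrative, not the exact BU schedule):
#   m = -n * ln(p) / (ln 2)^2   bits for n entries at false positive rate p
#   k = (m / n) * ln 2          hash functions
import math

def filter_size(n_entries: int, fp_rate: float):
    m_bits = math.ceil(-n_entries * math.log(fp_rate) / (math.log(2) ** 2))
    k = max(1, round((m_bits / n_entries) * math.log(2)))
    return m_bits, k

for n, p in [(100, 0.001), (2000, 0.001), (20000, 0.001), (20000, 0.0001)]:
    m_bits, k = filter_size(n, p)
    print(f"n={n:>6}  p={p:<7}  ->  {m_bits / 8 / 1024:5.1f} KB, k={k}")

# 20,000 entries at p=0.001 lands right around the 36KB cap mentioned above;
# tightening p by another factor of ten pushes the filter toward ~47KB, which is
# one reason (per answer 4) to let the false positive rate relax as the mempool
# grows rather than keep the filter maximally sparse.
```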
 

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
What does this mean for feasible blocksizes? Like, what blocksize cap could we have, if everyone were using Xtreme Thinblocks, that would still be acceptable to small blockists who think 1MB is the ideal cap now?

2MB? 40MB? 100MB? Or is it more complicated than that?
I think we could, little by little, start thinking about the deeper aspects of big blocks.
Beyond block propagation time problems (ameliorated by this BUIP solution), the other issue is the time and processing power required for validating blocks. The tax on resources grows roughly quadratically with block size: for block size B, the worst-case time to validate scales as O(B^2).
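As a gloss on the quadratic claim (the standard legacy-sighash argument, not part of YarkoL's post):

```latex
% Worst-case validation cost under legacy (pre-BIP143) signature hashing:
% each of a transaction's n inputs re-hashes roughly the whole transaction of size s.
\[
  W_{\mathrm{tx}}(s) \;\approx\; n \cdot s, \qquad n \propto s
  \;\;\Longrightarrow\;\; W_{\mathrm{tx}}(s) = O(s^{2}).
\]
% If a block of size B may be filled by a single such transaction (s = B),
% the worst-case block validation work is
\[
  W_{\mathrm{block}}(B) \;=\; O(B^{2}).
\]
```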

There is an interesting discussion on reddit, where it is proposed that instead of block size limits, we could have limits on incoming block complexity, and this leads to the idea of developing a block validation metric.
https://www.reddit.com/r/btc/comments/41j0j7/different_look_on_the_block_limit_is_it_possible/

Such a metric would be hugely useful for BU, since we could then have an algorithm in the software that tries to evaluate the optimal excessive block size based on the user's processing resources. This would IMO be preferable to users simply selecting settings that "feel right".

So it is a little more complicated and XThinblocks aren't a panacea, but certainly a huge step in the right direction.
 

sickpig

Active Member
Aug 28, 2015
926
2,541
@Peter Tschipper:

With 'cut bandwidth in half', I am simply doing the rough approximation that currently, each transaction gets gossiped twice: free-flying and as part of a block. Your code removes 98% of the latter part, so from a bird's-eye view, that should cut Bitcoin's bandwidth use roughly in half, or shouldn't it?
The good thing is that we are cutting the "bad" part of bandwidth consumption by 98% :)

Fortunately we have 10 mins on avg to relay the other part.
 
  • Like
Reactions: awemany

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
I think we could, little by little, start thinking about the deeper aspects of big blocks.
Beyond block propagation time problems (ameliorated by this BUIP solution), the other issue is the time and processing power required for validating blocks. The tax on resources grows roughly quadratically with block size: for block size B, the worst-case time to validate scales as O(B^2).
It was my impression that limiting the size of a transaction until a saner, linear signature validation scheme has been implemented was pretty uncontroversial. Is that not the case?

With 1MB-limited transactions (as e.g. suggested by some people on bitcoinclassic.consider.it) we won't have that O(n^2) problem.
 

YarkoL

Active Member
Dec 18, 2015
176
258
Tuusula
yarkol.github.io
@awemany
I don't think there is much controversy regarding the precautions. But hard-coded limits have the same problem as the max block size: they are artificially imposed from "outside" and should therefore be taken as temporary measures.
To get a correct picture of the state of the network, it would be very good if the excessive block size settings adequately reflected the readiness of nodes to accept and validate blocks.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@YarkoL: That's a good point. Maybe I am not creative enough, but I think an IMO generous limit on transaction size, such as 1MB, has a much lower likelihood of damaging Bitcoin than the 1MB block size limit does, and already did.

Because it cannot be used to cripple Bitcoin as effectively as sitting on the 1MB limit, I think the paid-for bullshitting around lifting it (when it works) should be much less.

The only scenario I could imagine is some centralized company offering 'huge Bitcoin transactions', profiting from a small limit. Bitcoin would be very usable as a worldwide electronic P2P cash system without a blocksize limit, on today's hardware. The market of use cases for very large transactions (though there might be valid reasons, such as Lighthouse) is, as far as I can see, much smaller.
@awemany Yeah, I see where you're coming from with reducing overall bandwidth by half. I suppose not quite half, since a lot of bandwidth also gets used serving up historical blocks. Probably we'll save more like a third. But really this is about block relay, although the bandwidth savings are certainly good to have in addition.
One could further argue that nodes which are connected just intermittently to the Bitcoin network are nicer in the sense that they use less bandwidth already - they only need the transactions once (in a block) and not twice (block + instantaneous).
 

Peter Tschipper

Active Member
Jan 8, 2016
254
357
Well, I really don't want to get sidetracked here too much. This BUIP is about improving block relay for nodes; it wasn't written with the intention of replacing Matt's network. It does, however, come close (and is decentralized), and there are improvements in the pipeline already, including a good idea by @YarkoL that, as incredible as it seems, improves the compression ratio by a good margin over this BUIP. In the end, though, this is not about any one person's opinion (my own included); here at Bitcoin Unlimited it's the community of participants that will decide the future of any implementation. Everybody gets an equal vote, and only one vote.

As far as Greg's understanding of datastream compression goes, he is really not correct and is comparing apples and oranges. What Matt tried to do is use LZMA compression. It's a great compression library, probably the best in terms of compression ratio; however, it's very slow, especially at decompressing, something like 20 or 30 times slower than LZO. LZO is what I recommended after a great deal of testing and analysis, which showed not only a fairly decent 20 to 30% compression but also a benefit of about the same amount in response times. LZO is well regarded, portable and lightning fast, but you lose a little on the compression ratio.

What I was really thinking about for this BUIP, in the future, is experimenting with LZO to compress a sparse Bloom filter, as we might see a really good benefit there. As far as compressing our already small xthinblocks, I don't see much need right now, but in the future, when all the performance and scalability issues are worked out, we could layer LZO in to squeeze the last drop of blood, so to speak, out of not only block relay but transaction relay as well. By bundling transactions together and compressing them we could achieve an extremely high tx rate... enough said, I think we're getting off topic here.
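For anyone who wants to reproduce the flavour of that trade-off, here is an illustrative micro-benchmark shape (not the BUIP test harness; Python's standard library has no LZO binding, so zlib at its fastest level stands in for the "fast codec" end, with lzma at the "tight but slow" end):

```python
# Illustrative micro-benchmark shape only (not the BUIP test harness). The stdlib
# has no LZO binding, so zlib at its fastest level stands in for the "fast codec"
# end of the trade-off, with lzma at the "best ratio but slow" end.
import lzma, os, time, zlib

payload = os.urandom(200_000) + b"\x00" * 800_000   # part random, part highly compressible

def bench(name, compress, decompress):
    t0 = time.perf_counter(); blob = compress(payload); t1 = time.perf_counter()
    decompress(blob);                                    t2 = time.perf_counter()
    print(f"{name:4s} ratio={len(blob) / len(payload):.2f} "
          f"compress={t1 - t0:.3f}s decompress={t2 - t1:.3f}s")

bench("zlib", lambda d: zlib.compress(d, 1), zlib.decompress)   # fast, modest ratio
bench("lzma", lzma.compress, lzma.decompress)                   # best ratio, much slower
```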

@theZerg By the way, I just wanted to thank you for putting this community together. I listened to your talk online yesterday and was very impressed. I already had the feeling this was a community that valued decency and respect for others, but it was also great to hear you talk about bringing in younger or new members and not only giving them a place where they can be heard but also helping them develop their skills and knowledge in a positive environment. That's really what attracted me to Bitcoin Unlimited, in addition to the underlying idea behind the implementation.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Peter R It's getting late but I'll try to answer:

1) Yes, you are correct: Node B will check every tx in the block against the Bloom filter sent from A. It will know which txs Node A definitely doesn't have. However, there may be some "false positives": it may think that Node A has something when in fact it doesn't. That's one of the limitations of Bloom filters and why we need to be able to re-request a tx if it's missing. But fortunately we can set the false positive rate we are willing to accept.

1b) The checking one by one is actually very fast; there are only 500 to 2000 entries in a block. As far as intersections go, I don't believe you can do that with Bloom filters; there's no way to know, once you've got the intersection, what each entry represents. With IBLT I believe you can do that, but IBLT structures are going to be larger, though perhaps a little faster to create... a trade-off: you lose on the one hand, you gain on the other. I don't really believe it's worth the added complexity, but I don't have numbers to back that up.
Thanks for the response. I would expect that checking the TXs one-by-one would be fast--I was asking more out of theoretical curiosity.
2) The size of the Bloom filter and how many bits we will have are again set by us, to some extent. We currently allow a maximum size of 36KB... that will typically allow 20,000 entries at a 0.01 percent false positive rate, if my memory is correct.
Here's my first shot at some calculations:

[attached image of calculations not reproduced]
3) The hashing algorithm is murmur, which was well chosen by, I believe, Matt Corallo or Mike Hearn when they built the Bloom filter class. Murmur is an extremely fast hashing algorithm, well suited to what we need.
Agreed.
4) It varies. In the beginning, when you start up the client, the Bloom filter will be sparse... 1.5 times the number of memory pool entries and a 0.001 percent false positive rate. However, with so few txs in the memory pool, the Bloom filters are very small, starting at 11 bytes and moving upwards as txs arrive in the mempool. As the mempool grows we use a linear decay to drop the number of bits relative to the mempool entries, as well as to slowly increase the false positive rate. You might think that is backwards from what we want, but it works for our needs and, for one thing, keeps the Bloom filter from getting overly large. More importantly, as the mempools fill and get more in sync, we are in a probabilistic sense leaning on the mempool to have the txs and relying less on the Bloom filter, so we can allow the Bloom filter to degrade and become less sparse the bigger the mempool gets, while still preventing re-requests for txs. Occasionally there is a re-request; so far I see about one a day... so it's a trade-off. If we wanted to use sparse filters all the way, the Bloom filters would end up being double in size... for example, during this latest spam attack we see Bloom filters getting up to 15KB in size, so if we went to sparse filters they would be 30KB; we save 15KB per block request while maybe incurring one re-request for a tx in a day.

The other option I haven't investigated is creating a sparse filter and then compressing it... I read somewhere that sparse filters compress very well, but I haven't done any experimentation on that yet.

As an aside, on a normal day Bloom filters would be in the 1 to 3KB range.
Makes sense! That's pretty cool!

So I suppose Node B can determine what Node A used for m and k (and any other hashing details required) based on some header bytes attached to Node A's Bloom filter?
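For what it's worth, the BIP 37-style filterload serialization does carry that information alongside the filter bytes. A hedged parsing sketch (field layout per BIP 37 as I understand it; parse_filterload is an invented helper name):

```python
# Sketch of the BIP 37-style filterload layout as I understand it (var_int-prefixed
# filter bytes, then nHashFuncs, nTweak, nFlags); the short var_int case is assumed.
import struct

def parse_filterload(payload: bytes):
    n = payload[0]                           # var_int length of the filter data (< 0xfd assumed)
    vdata = payload[1:1 + n]
    n_hash_funcs, n_tweak = struct.unpack_from("<II", payload, 1 + n)
    n_flags = payload[1 + n + 8]
    return {"m_bits": len(vdata) * 8, "k": n_hash_funcs,
            "tweak": n_tweak, "flags": n_flags}

# Example: an 11-byte filter (the small startup size mentioned earlier) with 4 hash functions
msg = bytes([11]) + b"\x00" * 11 + struct.pack("<IIB", 4, 0, 0)
print(parse_filterload(msg))   # {'m_bits': 88, 'k': 4, 'tweak': 0, 'flags': 0}
```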
 
  • Like
Reactions: YarkoL

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Regarding the discussion about 32-bit versus 64-bit hashes, etc., would it be possible to make this more Unlimitedesque?

Could the latest release default to some smart decision (e.g., optimized based on current network traffic stats) but allow the operator to adjust it as the need arises? [You'd still have a simple fall-back plan to deal with collisions]

The type of thin block Node A would request could be encoded in the header bytes of the Bloom filter it sends with the get_data() request. Node B would just need to know how to prepare thin blocks in each of the possible formats. [I'm assuming the Bloom filter already encodes information about the m and k parameters and hashing details of the Bloom filter anyways].
 
  • Like
Reactions: Peter Tschipper

Peter Tschipper

Active Member
Jan 8, 2016
254
357
@Peter R Yup, you're on the right track; that's what @YarkoL was thinking about regarding 32-bit and larger tx hashes, but I think the implementation, and especially the testing of it, will require time, so I wouldn't opt for adding that in the first release. Keep the first release simple and build on success, I would say.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
"Keep the first release simple and build on success, I would say." -- @Peter Tschipper

Yes, completely agree. Don't mind me--I'm a theorist...I like to try to figure out what would be possible in the future!