Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
great video demonstration of why Core Commissars like gmax, Todd, & LukeJr are idiots for believing they can force a fee mkt. how many times have i said we should leave the price of block size supply and txs to negotiation btwn the economic actors involved, namely miners and users, and not core devs (who have neither the information nor the knowledge to do so)?
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
You know the best thing about Bitcoin?

You can sit around and buy even on Thanksgiving ;)
 
Last edited:

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
This article is kind of interesting:

https://bitcoinmagazine.com/articles/how-the-magic-of-iblts-could-boost-bitcoin-s-decentralization-1448382673

Does this undermine Friedenbach's persistent naysaying that such a compression solution has to work in an adversarial situation? If that scheme works then clearly a malicious miner could only slow down the first hop.
@albin
This is also interesting because Greg is effectively writing off IBLT and quoting Gavin to back up his naysaying view:

Gavin's own comment was that IBLT probably doesn't make sense with blocks under "hundreds of megabytes":
09:49 < gavinandresen> morcos: e.g. the IBLT work really doesn’t make any sense until blocks are in the hundreds of megabytes size range.
Given my experience with an attempted implementation of the earlier block network coding proposal, I wouldn't be shocked to find there was no size at which using set reconciliation over the whole block was a win for normal connectivity and normal CPU speeds (as opposed to things like satellite connectivity) though we won't know for sure until it's implemented.
https://bitcointalk.org/index.php?topic=68655.msg11997363#msg11997363

(my bold emphasis)

Yet, now we have Rusty (who is one of the few people to actually write an IBLT test harness for Bitcoin block creation), saying that it is so quick that it might be viable even if each node encoded the block during propagation.

So, who is right? This is really important because IBLT is far superior to other efficiencies such as the relay network, especially as it is democratic, i.e. all nodes can have it as part of their basic client software.

I love this from Rusty (and wish he was partnering on XT with Gavin rather than working for BS):
“Ideally, if we can cram this thing into two IP packets,” he said. “We are lightning fast.”
 
Last edited:

albin

Active Member
Nov 8, 2015
931
4,008
People argue that CPFP only allows the receiver to speed up the TX but this is not true: in (nearly) all cases, the TX would have a change-to-self output. The user would just re-spend this change output back to himself with a high fee.
This is a super critical point that I can't even recall seeing articulated anywhere.
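To make the trick concrete, here is a minimal sketch of sender-side CPFP via one's own change output, written against Bitcoin Core's raw-transaction RPCs. The rpc() helper and the amounts are assumptions; the RPC method names existed in Core at the time, but fee math and error handling are simplified:

```python
# Sketch: sender-side CPFP by re-spending our own change output with a
# generous fee. A miner who wants the child's fee must also confirm the
# stuck parent. The rpc() callable is a hypothetical wrapper around
# bitcoind's JSON-RPC interface.

def cpfp_bump_own_change(rpc, parent_txid, change_vout, change_address,
                         change_amount_btc, child_fee_btc):
    inputs = [{"txid": parent_txid, "vout": change_vout}]
    # Pay the change back to ourselves, minus the high child fee.
    outputs = {change_address: round(change_amount_btc - child_fee_btc, 8)}
    raw = rpc("createrawtransaction", inputs, outputs)
    signed = rpc("signrawtransaction", raw)["hex"]
    return rpc("sendrawtransaction", signed)
```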

@solex

That situation with the politics surrounding IBLT reminds me strongly of something you sometimes see in a business context with toxic managers. I've seen a few situations where, in the context of a project, subordinates are undermined and/or put in their place by progressively broadening the scope the moment it appears they've produced a real deliverable on some action item. A great example is creating a system/process to automate some currently manual process; the moment there is a very workable prototype or proposal in place, the manager playing games will start subtly (or not so subtly) creeping in more requirements, even to the point of totally sabotaging whatever process improvement was gained. Some very simple procedure capable of producing undeniably measurable soft savings and quality improvements transforms into an unending albatross around the neck of the responsible parties. Usually these people don't last, because the inability to produce results catches up to them, but over the short term they can do serious damage.

I get that vibe from IBLT because it seems like there's always going to be something to invent to naysay about even if it turns out to be a very elegant and efficient solution.

In a way the entire blocksize "debate" resembles this kind of situation, because somehow we went from "hey guys, maybe we should look at scheduling a block size increase sometime" to multiple conferences about scalability in general and endless pontification on highly theoretical, nowhere-near-implementable topics.
 
Last edited:

rocks

Active Member
Sep 24, 2015
586
2,284
This is also interesting because Greg is effectively writing off IBLT and quoting Gavin to back up his naysaying view:
...
I love this from Rusty (and wish he was partnering on XT with Gavin rather than working for BS):
“Ideally, if we can cram this thing into two IP packets,” he said. “We are lightning fast.”
One way to ensure adoption of XT or BU would be to add IBLT or thin blocks to these alternatives. Doing so immediately reduces a miner's orphan risk, which should make most miners adopt them.
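Back-of-envelope on the orphan-risk claim (the numbers and the simple Poisson model are assumptions): the probability that a competitor finds a block while yours is still propagating falls roughly exponentially with propagation time, so shrinking propagation from ~15 s to ~1 s cuts the expected orphan rate by an order of magnitude.

```python
import math

def orphan_probability(propagation_seconds, block_interval=600.0):
    """Chance a competing block appears while ours still propagates,
    under Poisson block arrivals (back-of-envelope model)."""
    return 1.0 - math.exp(-propagation_seconds / block_interval)

print(orphan_probability(15.0))  # full block, ~15 s to propagate: ~2.5%
print(orphan_probability(1.0))   # thin block / IBLT, ~1 s:        ~0.17%
```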

The fact that some of the main BS devs are not actively working on these projects shows they are not interested in addressing people's needs, but are instead pursuing their own projects.
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
The advantage of techniques like IBLT does not necessarily come from reducing gross bandwidth usage - it comes from temporally smoothing block transmission over the entire 10-minute hashing period, and thus requiring less burst bandwidth to rapidly propagate a block once a solution is found.

It's entirely possible that using IBLT to allow miners to pre-broadcast their blocks would require more bytes to be transferred in total, but would still be a net win because the bandwidth would be easier to provision since it would be more temporally consistent.
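A back-of-envelope comparison (all numbers assumed) shows the provisioning difference:

```python
# Peak vs. smoothed bandwidth for a hypothetical 10 MB block. Pre-forwarding
# the block's contents over the whole interval can cost more total bytes
# (30% reconciliation overhead assumed here) yet needs a far smaller pipe.
block_bytes = 10_000_000

burst = block_bytes / 5              # whole block in ~5 s after solving
smoothed = 1.3 * block_bytes / 600   # same data plus overhead over 10 min

print(f"burst:    {burst / 1e6:.1f} MB/s")     # 2.0 MB/s
print(f"smoothed: {smoothed / 1e3:.1f} kB/s")  # 21.7 kB/s
```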
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
One way to ensure adoption of XT or BU would be to add IBLT or thin blocks to these alternatives. Doing so immediately reduces a miner's orphan risk, which should make most miners adopt them.

The fact that some of the main BS devs are not actively working on these projects shows they are not interested in addressing people's needs, but are instead pursuing their own projects.
That's a really good idea, and I wonder what the sentiment around here is with regard to implementing something like that with other people?

The advantage of techniques like IBLT does not necessarily come from reducing gross bandwidth usage - it comes from temporally smoothing block transmission over the entire 10-minute hashing period, and thus requiring less burst bandwidth to rapidly propagate a block once a solution is found.

It's entirely possible that using IBLT to allow miners to pre-broadcast their blocks would require more bytes to be transferred in total, but would still be a net win because the bandwidth would be easier to provision since it would be more temporally consistent.
We have transactions being broadcast essentially twice - flooded when they first arrive, then again when mined into a block. So IBLT should be able to roughly cut bandwidth usage in half, or am I missing something?
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
We have transactions being broadcast essentially twice - flooded when they first arrive, then again when mined into a block. So IBLT should be able to roughly cut bandwidth usage in half, or am I missing something?
There are two general techniques you can use to compress block announcements.

The first way is to compress blocks at the time they are produced. Thin blocks (only transmitting the transaction hashes) is one technique. IBLT can also be used this way. These techniques can asymptotically approach half the gross bandwidth usage of the existing network.

At even higher transaction rates, those techniques won't be good enough. At some point you have to use the entire 10 minute hashing period to transmit the information about what will appear in the next block, which means miners have to broadcast information about the blocks they are working on ahead of time.

Not all miners will be working on the exact same block, even if they want to. Speed of light delays and the fact that transactions enter the network from many different locations make that impossible. This means the network is carrying information about more than one potential block, only one of which will actually become the next block.

All these set reconciliation messages have overhead, so depending on how many miners exist in the network, how much their blocks differ, and how much overhead the reconciliation messages contain, gross bandwidth usage might be higher than the lowest achievable compression (that waits until a block solution is found to broadcast anything), but that will be acceptable because even though the total number of bytes transferred is higher, the large peaks in bandwidth needed every 10 minutes will be eliminated.
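For anyone curious what "set reconciliation" looks like mechanically, here is a toy invertible Bloom lookup table in Python. The cell count, hash count, and checksum choices are arbitrary illustrations, not the proposed Bitcoin encoding: each side inserts its transaction IDs, one table is subtracted from the other, and peeling recovers the symmetric difference - the few transactions the peers disagree on - in space proportional to the difference rather than the block.

```python
import hashlib

NUM_HASHES = 3

def _cells(item: bytes, num_cells: int):
    # Map an item to NUM_HASHES cell indices via salted SHA256.
    return [int.from_bytes(hashlib.sha256(bytes([i]) + item).digest()[:4],
                           "big") % num_cells
            for i in range(NUM_HASHES)]

def _checksum(item: bytes) -> int:
    return int.from_bytes(hashlib.sha256(b"chk" + item).digest()[:4], "big")

class IBLT:
    def __init__(self, num_cells=64, item_len=8):
        self.n, self.item_len = num_cells, item_len
        self.count = [0] * num_cells
        self.key_sum = [0] * num_cells    # XOR of items in each cell
        self.hash_sum = [0] * num_cells   # XOR of item checksums

    def insert(self, item: bytes, sign=1):
        x = int.from_bytes(item, "big")
        for i in _cells(item, self.n):
            self.count[i] += sign
            self.key_sum[i] ^= x
            self.hash_sum[i] ^= _checksum(item)

    def subtract(self, other):
        # Cell-wise difference of two tables built over different sets.
        for i in range(self.n):
            self.count[i] -= other.count[i]
            self.key_sum[i] ^= other.key_sum[i]
            self.hash_sum[i] ^= other.hash_sum[i]

    def decode(self):
        # Repeatedly "peel" pure cells (count of +/-1 with a matching
        # checksum) until no progress; returns the two one-sided sets.
        ours, theirs = set(), set()
        progress = True
        while progress:
            progress = False
            for i in range(self.n):
                if self.count[i] in (1, -1):
                    item = self.key_sum[i].to_bytes(self.item_len, "big")
                    if self.hash_sum[i] == _checksum(item):
                        (ours if self.count[i] == 1 else theirs).add(item)
                        self.insert(item, sign=-self.count[i])
                        progress = True
        return ours, theirs
```

A quick check of the round trip, with 8-byte stand-in txids:

```python
a, b = IBLT(), IBLT()
for txid in (b"tx_aaaa1", b"tx_same1", b"tx_same2"):
    a.insert(txid)
for txid in (b"tx_bbbb1", b"tx_same1", b"tx_same2"):
    b.insert(txid)
a.subtract(b)
print(a.decode())   # ({b'tx_aaaa1'}, {b'tx_bbbb1'})
```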
 

kyuupichan

Member
Oct 3, 2015
95
348
wow, just wow. I've just managed to watch the video. her thinking of bitcoin "maximalists" as a bunch of naive folks proves her own naivety, IMHO.

further demonstration of her attitude is the continuous reference to the blockchain-tech narrative in which all the inefficiencies of transnational transactions will be magically washed away.

tech to solve such problems existed well before bitcoin's appearance: distributed databases, two-phase commits, etc etc.

those mechanisms were not applied simply because banks and financial institutions had no incentive to do so.

now such institutions fear losing their dominant position and they are slowly reacting, of course using the wrong paradigm.
Sorry, I'm quite behind as you can probably tell. Just watching this video - people pontificating on things they know nothing about, occasionally letting slip that it's really all about control, and putting "en" in front of the words beginning with "crypt" when talking about Bitcoin - just convinces me they are clueless and out of their depth. Long may they stay that way.

But still at least they revealed their disgusting world view.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
Date: 2015-11-26 22:20:52 Size: 21665 NumTx: 95 Ver: 20000007 Hash: 00000000000076179998ebfbdeabeb09765597c7e8e3c9cf4242e5ae7907d3cd
** Date: 2015-11-26 22:00:34 Size: 2230046 NumTx: 5866 Ver: 40000007 Hash: 00000000e205579ee2e0e027ad5197619374e31b81ac4b802e0a1b7400be28b1
Date: 2015-11-26 21:40:20 Size: 261418 NumTx: 638 Ver: 20000007 Hash: 00000000000003b1613837114cf698b31dd8ed5e897222b323d65554d0a7dd08

BU produced a 2.2MB block on testnet last night! It's all done via GUI configuration; I haven't even shut the client down since I produced the <1MB blocks two nights ago.
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
There are two general techniques you can use to compress block announcements.

The first way is to compress blocks at the time they are produced. Thin blocks (only transmitting the transaction hashes) is one technique. IBLT can also be used this way. These techniques can asymptotically approach half the gross bandwidth usage of the existing network.

At even higher transaction rates, those techniques won't be good enough. At some point you have to use the entire 10 minute hashing period to transmit the information about what will appear in the next block, which means miners have to broadcast information about the blocks they are working on ahead of time.

Not all miners will be working on the exact same block, even if they want to. Speed of light delays and the fact that transactions enter the network from many different locations make that impossible. This means the network is carrying information about more than one potential block, only one of which will actually become the next block.

All these set reconciliation messages have overhead, so depending on how many miners exist in the network, how much their blocks differ, and how much overhead the reconciliation messages contain, gross bandwidth usage might be higher than the lowest achievable compression (that waits until a block solution is found to broadcast anything), but that will be acceptable because even though the total number of bytes transferred is higher, the large peaks in bandwidth needed every 10 minutes will be eliminated.
Yes, thanks, I get all that. I was more referring to this idea in the context of advertising BU.
Because we could get down to about half the bandwidth without thin blocks or any other additional change. When/if BU is finally accepted by many, fancier schemes might later be implemented - and yes - bandwidth might actually rise (together with functionality!). But that would happen only in the scenario where Bitcoin doesn't die or get overtaken anyway - and where we 'bigblockers' won.

I also suspect people who are annoyed by Bitcoin's bandwidth usage are mostly bothered by the line-saturating aspect of block transmission, not so much the regular Poisson-distributed TXNs trickling in.
Greg's engineering the Bitcoin system

(And that's the problem - he's apparently thinking of himself as the engineer of a whole system, with a lot of people in it whom he's also engineering...)

The best bit is this:

When it comes down to it: The vast bulk of the engineers working on the system are not going to do things which they believe break the damn system.
Nice way of twisting everything 180° - he's currently, visibly breaking the system by inaction and blockade.
 
Last edited:

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
bullish:

 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
The bull is horny, the cow willing but the shed's roof doesn't allow him to get onto the cow. The roof is just a thin layer of wood, but workers arrived already to pour concrete on top. Will the bull be able to break the roof in time?

;-)
 
  • Like
Reactions: majamalu

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
Peter Todd's Opt-In Full Replace-by-Fee patch has been pulled in to Bitcoin Core: https://github.com/bitcoin/bitcoin/pull/6871

It basically allows the user to create transactions that are flagged with a sequence number that indicates they can be replaced with a higher fee transaction. So a node receiving transactions with these sequence numbers will know that they have a higher chance of being double-spent.
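Mechanically the flag is simple (this is my reading of the rule in that pull request, later written up as BIP 125; the dataclasses below are illustrative stand-ins, not Core's types): a transaction opts in if any input's nSequence is below 0xfffffffe, and a merchant's software can treat such transactions as riskier at 0-conf.

```python
from dataclasses import dataclass
from typing import List

SEQUENCE_OPT_IN_THRESHOLD = 0xFFFFFFFE

@dataclass
class TxInput:
    prev_txid: str
    prev_vout: int
    sequence: int = 0xFFFFFFFF  # default: does not signal replaceability

@dataclass
class Transaction:
    inputs: List[TxInput]

def signals_rbf(tx: Transaction) -> bool:
    """True if any input opts in to replacement (nSequence < 0xfffffffe)."""
    return any(inp.sequence < SEQUENCE_OPT_IN_THRESHOLD for inp in tx.inputs)

# A merchant's node could flag such a transaction as higher-risk at 0-conf:
tx = Transaction(inputs=[TxInput("ab" * 32, 0, sequence=0xFFFFFFFD)])
print(signals_rbf(tx))  # True
```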

Features like this remind me of @Zangelbert Bingledack's thought experiment of making the block reward user-configurable. Because all it's really doing is taking something that is already possible (double spending) and making it more convenient. 0-conf security can't be maintained by trying to prevent these features; in the longer run the system has to develop a kind of anti-fragility, and merchants will also push for features that help them assess their confidence in 0-conf transaction reliability.

Whatever people think of Peter Todd, I like that he codes up his ideas and puts them out there. It would be nice if Bitcoin software could develop into a more vibrant ecosystem of node implementations where different people could code up their ideas and put them out there to compete for mindshare. I expect this to happen in the future, we are just going through growing pains at the moment.
 

rocks

Active Member
Sep 24, 2015
586
2,284
@Mengerian
The problem is that yes, anything may be possible, but what matters is which functionality/behavior the ecosystem considers socially normal.

Currently the socially normal behavior is for nodes to refuse to propagate double-spend transactions, which limits their movement through the network. Similarly, the socially normal behavior for pools is to reject double-spend transactions. Yes, today pools can mine them, but doing so is considered anti-social and risks losing their miner base (think Ghash.IO).

Peter Todd's fork (and it is a fork), if accepted, changes what is considered socially normal behavior. That is significant.

Today if a pool is caught enabling double spends for themselves or others, there is a backlash because it is considered illegal behavior. If, after Peter's fork, double spends become socially normal, will we still see the same backlash? Maybe, maybe not. That weakens the zero-confirm security model.

There is absolutely zero reason to include this functionality. CPFP works perfectly fine for fee bumping and fully maintains current zero-confirm security.

Honestly, it looks to me like Peter, Greg and crew are trying to intentionally break bitcoin so they can then say "see, it was never going to work, now here is our LN centralized DB to replace the broken bitcoin".
 

awemany

Well-Known Member
Aug 19, 2015
1,387
5,054
@rocks, @Inca:
Agreed at trusting.

However, I don't think it is that damaging. I think it will simply fizzle because people are actually not interested in FRBF. I wrote slightly more on what I think on reddit.

In any case, it is unnecessary complexity and otherwise not needed at all - a sign of the unhealthy environment that is bitcoin-core-dev.

Someone just wanted to do something, mess with the code, because 'Hey I am Peter Todd and I am important!'
 
  • Like
Reactions: majamalu