Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995

jtoomim

Active Member
Jan 2, 2016
130
253
If you insert a transaction at position 1 (after the coinbase), that will change every node in the tree. The amount of hashing needed to do this is equal to 1 + 0.5 + 0.25 + 0.125 + ... = 2 times the summed length of the txids, multiplied by two because we do two rounds of SHA256. For a 1 GB block with 2.5 million transactions, that's 320 MB of hashing. SHA256 can do around 300 MB/s on a single core, so this will take about 1 second without any parallelization.
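For anyone who wants to sanity-check that arithmetic, here is a minimal Python sketch (mine, not Bitcoin Core's implementation; see the merkle.cpp link below for the real thing) that rebuilds a Merkle root with double-SHA256 and tallies bytes hashed the same way as the estimate above:

import hashlib

def double_sha256(data: bytes) -> bytes:
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def merkle_root_with_cost(txids: list[bytes]) -> tuple[bytes, int]:
    # Returns (root, bytes_hashed). Follows Bitcoin's rule of duplicating
    # the last node on odd-length levels. bytes_hashed counts both SHA256
    # rounds at the 64-byte pair length, matching the estimate above.
    level = list(txids)
    bytes_hashed = 0
    while len(level) > 1:
        if len(level) % 2 == 1:
            level.append(level[-1])
        next_level = []
        for i in range(0, len(level), 2):
            pair = level[i] + level[i + 1]      # two 32-byte txids
            next_level.append(double_sha256(pair))
            bytes_hashed += 2 * len(pair)
        level = next_level
    return level[0], bytes_hashed

# 2.5M leaves -> roughly 2.5M internal nodes x 128 bytes ~= 320 MB, as above.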
Sounds great, can you make a prototype and show us some data before changing the BCH consensus rules in an irreversible way?
Here you go, I hopped into my time machine and wrote the algorithm up for you:
https://github.com/bitcoin/bitcoin/blob/78dae8caccd82cfbfd76557f1fb7d7557c7b5edb/src/consensus/merkle.cpp#L46
 
  • Like
Reactions: adamstgbit

Tomothy

Active Member
Mar 14, 2016
130
317
I'm not sure if this was posted, but here's the writeup of the ongoing evaluation by Rawpool. A lot of this stuff is above me, but I like how everyone is discussing and doing research. I still kinda have some DAA/EDA deja vu going on, where other superior options were found but disregarded. I dunno. I'd rather we measure twice and cut once than cut this all to pieces. lol.

https://medium.com/@Maylee0508/research-on-ctor-and-ttor-of-bitcoin-447ffde49a91
 
  • Like
Reactions: AdrianX

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
Here you go, I hopped into my time machine and wrote the algorithm up for you:
Seriously, build it into a client, let me take part on a testnet with gigabyte blocks, and let's see how it measures up to BU, XT, ABC and SV.
Are you sure you don't mean BUIP 101, Adrian? ;)
BUIP 101 is too shocking for many people; kids need training wheels for confidence. They only become a problem when kids begin to depend on them and you can't take them away.

The stepped block limit ends at the same place, but it does so in little steps, giving those who want to manage the block size the illusion of a limit, and it gives them dates to work towards so they can prepare for each new limit.
 

adamstgbit

Well-Known Member
Mar 13, 2016
1,206
2,650
I still think BU's EC is the way to go, and this BUIP 101 seems like a fine idea too; by default, any block size should be accepted.

I wish that come November most are mining BU blocks, so that neither Bitcoin ABC nor SV can fork off with majority hash rate. That would be nice.
 

jtoomim

Active Member
Jan 2, 2016
130
253
Seriously, build it into a client, let me take part on a testnet with gigabyte blocks, and let's see how it measures up to BU, XT, ABC and SV.
*whoosh*
This algorithm is in BU, XT, ABC, and SV already. It was already tested with gigabyte blocks in the Gigablock Testnet Initiative. If you want to test this algorithm yourself, all you need to do is download any Bitcoin full node client ever made and see how it runs.

It appears to me that you're trying to take the moral high ground, and are saying "Prototype and test it before changing the BCH consensus rules in an irreversible way!" as a political/rhetorical talking point, and a method of shifting the burden of proof. The way you are framing it, it's my burden to prove things so air-tightly that even you can understand it.

Unfortunately, that would be a futile endeavor. The fact that you did not recognize such a basic algorithm as the Merkle tree root calculation indicates that your understanding of technical Bitcoin matters is full of holes on even the basic points.

It may be the obligation of someone proposing a fork (i.e. not me! I just support it) to provide enough supporting evidence such that most reasonable people who are technically knowledgeable about Bitcoin would be convinced. However, it is not a dev's obligation to repeatedly explain the technology to every person whose opinions exceed their knowledge.

I personally tend to be a lot more patient than most in this respect, and will spend many hours patiently explaining things to nearly anyone who shows legitimate curiosity. But even I have my limits. When someone is arguing for the sake of argument, or actively trying to waste my time, I'm likely to just walk away and get back to coding.

And if a person twice asks me to prototype an algorithm that was written by Satoshi himself, that sounds a lot like someone actively trying to waste my time.
 
Last edited:
  • Like
Reactions: imaginary_username

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
  • Like
Reactions: Norway

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
BIP 101 would mean cutting the current limit down to 16 MB in November
I'm not a fundamentalist: start at 32 MB and double it from there, every 18 months rather than every two years. It's the principle I like, not the numbers.
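Roughly this, as a hypothetical sketch (the 32 MB start and 18-month doubling are the numbers above; the function name and monthly step granularity are just illustrative, not BUIP 101's actual spec):

def stepped_block_limit(months_since_activation: int,
                        start_mb: int = 32,
                        doubling_period_months: int = 18) -> int:
    # Block size limit in MB, rising in discrete steps rather than all at once.
    doublings = months_since_activation // doubling_period_months
    return start_mb * (2 ** doublings)

# 0 months -> 32 MB, 18 -> 64 MB, 36 -> 128 MB, 54 -> 256 MB, ...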
*whoosh*
This algorithm is in BU, XT, ABC, and SV already.
I haven't seen parallelized CTOR gigabyte-block test results. Where can I read about this fully parallelized implementation with empirical test results?

Let's not rush anything. We shouldn't be adding consensus rules without actually validating the benefits. Developers should be simplifying the protocol kernel and innovating on top of it.
 
Last edited:

jtoomim

Active Member
Jan 2, 2016
130
253
Serialized LTOR has been tested, and performs just as well as serialized TTOR for validation. It also performs substantially better (2.16x for Xthinner, 7x for Graphene) than the status quo for block propagation, which is the actual bottleneck.

It is not necessary to prove that parallelized LTOR is faster than serialized TTOR, as easier parallelization is merely a side-benefit.

Let's not rush anything
Better yet, let's not bother making any progress at all. That way, we will never have to worry about annoying anybody by changing rules or code. /s

CTOR is not rushed. ABC put CTOR on the roadmap in 2017. The only reason why it feels rushed is that some people (e.g. you) have decided to wait until the 11th hour before objecting to it.

Developers should be simplifying the protocol kernel and innovating on top of it.
Funny, that's exactly what we're trying to do. Lexical order is simpler than TTOR. It reduces the amount of entropy in blocks and simplifies the assumptions one needs to make when validating blocks. It also allows innovative new techniques like Xthinner or transaction proofs-of-absence.
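To make that concrete, here is a minimal sketch of both properties (the function names are illustrative, and I'm assuming the coinbase stays exempt from the ordering rule, as in ABC's proposal):

import bisect

def is_lexically_ordered(txids: list[bytes]) -> bool:
    # CTOR validity: every txid after the coinbase in ascending order.
    body = txids[1:]
    return all(body[i] < body[i + 1] for i in range(len(body) - 1))

def absence_witness(txids: list[bytes], target: bytes):
    # In a sorted block, two adjacent txids bracketing `target` (together
    # with their Merkle branches, omitted here) prove `target` is absent.
    body = txids[1:]
    i = bisect.bisect_left(body, target)
    if i < len(body) and body[i] == target:
        return None  # present; no absence proof exists
    left = body[i - 1] if i > 0 else None
    right = body[i] if i < len(body) else None
    return (left, right)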
 
Last edited:

AdrianX

Well-Known Member
Aug 28, 2015
2,097
5,797
bitco.in
It also allows innovative new techniques
Just saying: let's actually write the code, make the product, and put these innovative new techniques into a beta release to see how innovative they are before we commit to changing consensus rules. If they are good, add them; if not, shelve them.
It doesn't matter when ideas are scheduled; we should only go to market with the best ideas after they have been built and tested.

I like the ideas proposed, but you're justifying consensus rule changes based on a 2017 scheduling date. That is irrational. Build the application that's going to use this, run it on a 50-core machine in a network with multiple miners in multiple locations all over the world, and validate empirically that the changes are worth it.

You are trying to justify a change that is irreversible based on a narrow band of theoretical assumptions.

Software developers far too often think that, because the cost of deploying their code is so low, they can deploy it and fix it on the fly, and they tend to avoid building anything that won't end up being used. This is wrong: Bitcoin is money, not software.

In the real world, people build hundreds of prototypes. There are multiple theoretical design iterations for each prototype, and only the viable ones are developed. Each prototype and every assumption is tested, then the empirical data from each prototype is analyzed before committing to beta testing and production.

The design process goes: 1) concept evaluation, 2) theoretical design, 3) prototype design, 4) prototype building, 5) prototype testing (repeated hundreds of times as necessary), 6) data analysis, 7) redesign: Alpha prototype design and building, 8) Alpha prototype testing, 9) Beta prototype design and testing, 10) submit production prototype candidate 1 for market evaluation, 11) schedule release.

ABC developers are jumping from step 2 to step 11, and your justification is a random roadmap that makes ABC the network authority that governs consensus rule changes.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995

Feb 27, 2018
30
94
I think this is the important part:

But it also proves that the current version is not helpful for any performance improvement.
They are correct that there is no short-term performance improvement. I think we won't see any gains from CTOR until Graphene is in widespread use. The gain will be that ordering information can be trivially removed, i.e., we won't have to wait for more code to be added to handle special orderings.
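A back-of-envelope estimate of what that ordering information costs (my numbers, not from the article): an arbitrary order of n transactions carries log2(n!) bits, which a canonical order makes redundant:

import math

def ordering_bits(n: int) -> float:
    # log2(n!) via lgamma; Stirling gives approximately n * log2(n / e).
    return math.lgamma(n + 1) / math.log(2)

n = 2_500_000  # the 1 GB block discussed earlier in the thread
print(f"{ordering_bits(n) / 8 / 1e6:.1f} MB of pure ordering information")
# -> about 6.2 MB per block that no longer needs to be transmitted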

(When they say "worse performance", by the way, that is just their guess; the actual benchmarks show that the additional code adds no extra burden.)
 

Norway

Well-Known Member
Sep 29, 2015
2,424
6,410
You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.

https://bitcointalk.org/index.php?topic=144895.msg1537333#msg1537333
Great Gavin Andresen quote, @cypherdoc . I will use it on my quest to get a majority vote for BUIP 101.

Full quote:
I really don't understand this logic.

Yes, it is a fact of life that if you have a system where people are competing, the people who are less efficient will be driven out of business. So there will be fewer people in that business.

You seem to be saying that we should subsidize inefficient miners by limiting the block size, therefore driving up fees and making users pay for their inefficiency.

All in the name of vague worries about "too much centralization."
 
Last edited: