Gold collapsing. Bitcoin UP.

imaginary_username

Active Member
Aug 19, 2015
101
174
@Dusty I've had this same discussion with the Electron Cash folks just two days ago! It really comes down to how much you care about the "order" within the same block: is it that important? UI-wise you might be able to sidestep the issue by "grouping" all the tx within the same block, and only displaying the balance once per block. Probably much easier than attempting to reconstruct topo order.
 

Dusty

Active Member
Mar 14, 2016
362
1,172
It really comes down to how much you care about the "order" within the same block: is it that important?
Yes, very much so.

Melis is an advanced wallet: it records all movements and needs to rebuild the history of transactions, give them labels, add meta information, and so on.

In order to create the right data in the database I need to know the exact order of processing, and the "natural ordering" inside a block is perfect for that.

If the order changes I need to reorder the transactions into a valid topological order first, and then process them; otherwise I may skip some intermediate transactions between the last one of the previous block and the last one (topologically speaking) of the current block.

Of course this can be quite a burden when processing very big blocks, and it seems natural that this kind of work should be done by miners.
 

79b79aa8

Well-Known Member
Sep 22, 2015
1,031
3,440
@Dusty It really comes down to how much you care about the "order" within the same block: is it that important? UI-wise you might be able to sidestep the issue by "grouping" all the tx within the same block, and only displaying the balance once per block. Probably much easier than attempting to reconstruct topo order.
that's just a compromise. when i bet on satoshi dice, i want my wallet to show me my bet go out first, and my payout (if any) come back as a consequence. these happen in the same block.
 
  • Like
Reactions: Dusty

Mark B. Lundeberg

Feb 27, 2018
30
94
Yes, very much so.

Melis is an advanced wallet: it records all movements and needs to rebuild the history of transactions, give them labels, add meta information, and so on.

In order to create the right data in the database I need to know the exact order of processing, and the "natural ordering" inside a block is perfect for that.

If the order changes I need to reorder the transactions into a valid topological order first, and then process them; otherwise I may skip some intermediate transactions between the last one of the previous block and the last one (topologically speaking) of the current block.

Of course this can be quite a burden when processing very big blocks, and it seems natural that this kind of work should be done by miners.
I was part of the aforementioned conversation with im_uname. It's definitely true that CTOR means traditional tx history systems (which use in-block position) will now show negative balances, and I confirmed this with Electron Cash on the CTOR testnet. A few answers were proposed:

* Just show negative balances, doesn't matter.
* Only show updated balance once per block.
* Within each block, order all balance increasers before all balance decreasers (a sketch of this option follows below).
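
As a toy illustration of the third option, here is a minimal Python sketch. It assumes a hypothetical `wallet_delta(tx)` helper that returns the net effect of a transaction on the wallet's balance, and a history kept as (height, tx) pairs in chain order:

```python
# Sketch only: within each block, show balance-increasing transactions
# before balance-decreasing ones, so the running balance never dips
# negative mid-block. `wallet_delta` is an assumed helper, not a real API.
from itertools import groupby

def display_order(history):
    """history: list of (block_height, tx) pairs in chain order."""
    ordered = []
    for _, group in groupby(history, key=lambda pair: pair[0]):
        txs = [tx for _, tx in group]
        # Credits (delta >= 0) sort before debits; Python's sort is stable,
        # so the original relative order is kept within each side.
        txs.sort(key=lambda tx: wallet_delta(tx) < 0)
        ordered.extend(txs)
    return ordered
```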

On a philosophical level, I think CTOR implies a certain view of the blockchain: that all transactions in one block happen at the same time, i.e., that they were all processed simultaneously. This is definitely at odds with other philosophical views of the blockchain!

(I am not sure what you precisely mean by wallet checking "order of processing", since miners can currently mess with the order of any non-dependent transactions. Your labelling system is not bothered by this?)
 

imaginary_username

Active Member
Aug 19, 2015
101
174
otherwise I may skip some intermediate transactions between the last one of the previous block and the last one (topologically speaking) of the current block.
That's exactly what I was suggesting for minimal work - whether you want to adopt it is up to you.

You will probably face the same problem whether it's CTOR or AOR (which, incidentally, a lot of the people who are against CTOR are for).

>Of course this can be quite a burden when processing very big blocks

If a wallet already has the transactions you care about at hand, a small subset of a given block, it's trivial (computationally; I very much sympathize with the burden of writing new code) to rearrange them in topological order; in no way will the wallet need to re-organize the whole block, all the irrelevant transactions included, topologically. It seems to me that whether blocks get really big or not will not have an effect on this.
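
A minimal sketch of that rearrangement, assuming the wallet holds its same-block transactions as objects with hypothetical `txid` and `input_txids` fields. Only dependencies within the wallet's own subset matter, so the cost scales with the wallet's transaction count, not the block size:

```python
# Kahn's algorithm over just the wallet's transactions for one block;
# `txid` / `input_txids` are assumed field names, not a real wallet API.
from collections import deque

def topo_order(txs):
    by_id = {tx.txid: tx for tx in txs}
    # For each tx, count how many of its parents are also in the subset.
    pending = {tx.txid: sum(1 for p in tx.input_txids if p in by_id)
               for tx in txs}
    children = {txid: [] for txid in by_id}
    for tx in txs:
        for parent in tx.input_txids:
            if parent in by_id:
                children[parent].append(tx.txid)
    ready = deque(txid for txid, n in pending.items() if n == 0)
    ordered = []
    while ready:
        txid = ready.popleft()
        ordered.append(by_id[txid])
        for child in children[txid]:
            pending[child] -= 1
            if pending[child] == 0:
                ready.append(child)
    return ordered  # parents always precede their children
```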

Also: as @MarkBLundeberg said, if you're trying to "order" tx that are not mutually dependent, including other people's transactions, miners already can and will order them however they please; due to propagation differences, there can be no real consensus on which of two non-dependent transactions "came first" anyway.
 

Mark B. Lundeberg

Feb 27, 2018
30
94
I suspect it may not always be true that transactions can be topologically ordered by a light wallet, at least not in the 'correct' way. Consider this chain of 3 dependent txes:

Alice send -> Bob
Bob send -> Charlie
Charlie send -> Alice

Alice's light wallet won't know about the Bob -> Charlie transaction since it doesn't involve her address. Thus for Alice's wallet, the first and third transactions appear to be independent and could be sorted either way.
 

bitsko

Active Member
Aug 31, 2015
730
1,532
Not only does Luke-Jr refuse to share how he gets those stats, he is undoubtedly double counting nodes (archival validators). Using something akin to the Gini coefficient to measure decentralization has no basis in reality.

network decentralization of the kind that creates censorship resistance occurs no matter the ratio of validation activists to the whole.
 
  • Like
Reactions: AdrianX and Richy_T

imaginary_username

Active Member
Aug 19, 2015
101
174
I suspect it may not always be true that transactions can be topologically ordered by a light wallet, at least not in the 'correct' way. Consider this chain of 3 dependent txes:

Alice send -> Bob
Bob send -> Charlie
Charlie send -> Alice

Alice's light wallet won't know about the Bob -> Charlie transaction since it doesn't involve her address. Thus for Alice's wallet, the first and third transactions appear to be independent and could be sorted either way.
The Bob->Charlie transaction can be fetched by Alice, since Charlie's input will refer to the prevout txid; if the prevout points to a transaction from a previous block then there's no further need to check, otherwise more links (Bob->Charlie, or further back) can be fetched until the chain reaches Alice->Bob. This is more work for the light wallet, but the work still doesn't grow with blocksize.

From Charlie->Alice, Alice can fetch all the related txids all the way back to where her coins were generated from coinbases, if she wants to.
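
A sketch of that fetching strategy, under assumed helpers: `fetch_tx(txid)` retrieves a transaction from the server (the Electrum protocol's blockchain.transaction.get could serve this role) and `height_of(txid)` its confirmation height; both names are hypothetical:

```python
# Walk a transaction's inputs back through the same block until every
# ancestor chain ends at a known wallet tx or leaves the block.
def fetch_intermediates(tx, block_height, known_txids):
    found = []
    for txin in tx.inputs:
        parent_id = txin.prevout_txid
        if parent_id in known_txids:
            continue  # already have it (e.g. our own Alice->Bob tx)
        if height_of(parent_id) != block_height:
            continue  # confirmed earlier: no in-block ordering ambiguity
        parent = fetch_tx(parent_id)
        known_txids.add(parent_id)
        found.append(parent)
        # The parent may itself spend same-block outputs, so recurse.
        found.extend(fetch_intermediates(parent, block_height, known_txids))
    return found
```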
 

cypherblock

Active Member
Nov 18, 2015
163
182
Not only does Luke-Jr refuse to share how he gets those stats, he is undoubtedly double counting nodes
Luke has actually admitted to screwing up and double counting nodes due to IPv4 and IPv6. Who knows what else is incorrect. But no matter, it was "merely a doubling" in his words.

luke-jr [8:30 PM]
cypherblock: it was merely a doubling - so they were accurately showing growth still

cypherblock [8:30 PM]
lol

luke-jr [8:31 PM]
it's also a doubling present on every other node counter

[8:31]
IPv4 + IPv6

[8:31]
I "fixed" it by ignoring IPv6
From May 2017.

I was eventually kicked from Core slack for bringing this up.
 
  • Like
Reactions: AdrianX and bitsko

Mark B. Lundeberg

Feb 27, 2018
30
94
The Bob->Charlie transaction can be fetched by Alice, since Charlie's input will refer to the prevout txid; if the prevout points to a transaction from a previous block then there's no further need to check, otherwise more links (Bob->Charlie, or further back) can be fetched until the chain reaches Alice->Bob. This is more work for the light wallet, but the work still doesn't grow with blocksize.

From Charlie->Alice, Alice can fetch all the related txids all the way back to where her coins were generated from coinbases, if she wants to.
Oh, right! I wasn't thinking about using the information of whether the ancestor tx is in the same block or not. Yeah, definitely more work, but it's not related to block size.
 
  • Like
Reactions: imaginary_username

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
Agreed with @freetrader
Gavin was referring to the hard limit, which was 1MB at the time. 2147MB is effectively unlimited for the software as sustained capacity, measured by the BCH network stress test, is 16-32MB.

We have moved on since the debate started. Previously, we knew that network capacity was more than 1MB per 10 minutes. What should have been an easy job was increasing a simple constant in the software. This proved so difficult that the whole ledger had to be forked!

Now we have the opposite problem where the hard limit is above network capacity. The difficult job is safely making many different improvements to the software to handle volume which the hard limit allows. This work is being done tirelessly in the background by people like @Peter Tschipper and @theZerg. It is work thousands of times more difficult than changing a "1" in the software, which your granny could do (excepting the grannies of the core devs).

We need to move the focus from the block hard limit to true scalability: parallelism including sharding, optimising, and many smaller techniques. This includes contributions such as graphene, where we have a BUIP for funding phase II, and also evaluating CTOR/Merklix where ABC is headed. True scalability is way more than naively changing a number and "getting out of the way of the users".
in general i agree with this but let us remember: Necessity is the mother of invention.
To be clear, the hard limit is what the software will permit, while default / soft limits are a user setting between zero and the hard limit.
is your definition different than the way i've seen stated elsewhere, that being, the hard limit is what the miners choose to accept while the soft limit is what they choose to produce?
On a philosophical level, I think CTOR implies a certain view of the blockchain: that all transactions in one block happen at the same time, i.e., that they were all processed simultaneously. This is definitely at odds with other philosophical views of the blockchain!
if this is the case, and i'm not sure it is, doesn't this totally screw FSFA? which would be a huge mistake.
 
  • Like
Reactions: Norway

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
The hard limit in the software is the max block size value supported. The physical capacity of the network is a fuzzy value which changes slightly every day and can only be estimated.
isn't this all that is needed @solex? by the time a bloat-block attacker gets big enough to execute his attack, he will have had to invest enormous hardware resources and risk to produce this one-off block in a reasonable period of time, while each day prior to this he has to run faster to catch up with an ever-increasing hashrate that diminishes his chance of success. at that point, he'd be better off just mining honestly.
I don't expect that scenario again in BCH
why not? look at all the arguing we're having now about what's to go into the next hardfork. it's best to solve the blocksize issue today once and for all.
 

Mark B. Lundeberg

Feb 27, 2018
30
94
if this is the case, and i'm not sure it is, doesn't this totally screw FSFA? which would be a huge mistake.
Well, this has to do with blocks alone; the selection of transactions to go into a block is another matter. Currently, we unfortunately still have a lot of fee market code left over in ABC and BU, which is frustrating applications like memo.cash that are hitting the 25-dependent-transaction limit (a limit introduced due to inefficiencies in fee market code).

Back to the topic of the philosophical view for CTOR... I was talking with Amaury about this and it's interesting to see his perspective. I don't think he will mind me quoting him here:
block always have been transactional (as in transactional databases)
either the block is valid and the whole content of the block is applied, or the block isn't and it has no effect.
Intermediary steps are not shown to users
also, I don't think the negative balance thing is a huge deal.
From what I can tell, this is indeed a precise description of what happens inside the bitcoin software. A block is treated as an atomic database commit that updates the "state of bitcoin" (i.e., the UTXO set). This commit can be unrolled (as happens in a reorg) and reapplied as need be. But unless they need to be unrolled, bitcoin doesn't even care about the contents of old blocks, because everything important has been fully incorporated. This point of view has been there at the core from the very start, as seen in Satoshi's pruning concept.
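
A toy illustration of that "atomic commit" view (field names like `inputs`, `outputs`, and `prevout` are assumptions, and coinbase handling is omitted): connecting a block spends its inputs and creates its outputs in one step, while a per-transaction undo log makes the commit exactly reversible for reorgs.

```python
def connect_block(utxo_set, block):
    undo = []  # per-tx list of spent coins, enabling exact rollback
    for tx in block.txs:  # assumes parents precede children in the block
        spent = [(i.prevout, utxo_set.pop(i.prevout)) for i in tx.inputs]
        undo.append(spent)
        for n, out in enumerate(tx.outputs):
            utxo_set[(tx.txid, n)] = out
    return undo

def disconnect_block(utxo_set, block, undo):
    # Reverse of connect: delete each tx's outputs, restore what it spent.
    for tx, spent in reversed(list(zip(block.txs, undo))):
        for n, _ in enumerate(tx.outputs):
            del utxo_set[(tx.txid, n)]
        for prevout, coin in reversed(spent):
            utxo_set[prevout] = coin
```

Under CTOR a node could instead add all outputs in a first pass and spend all inputs in a second, but either way the block remains a single all-or-nothing commit.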

Other concepts, like 'the history/balance of address 1XXX', are not intrinsic to bitcoin, even though they are the basis of third-party services like block explorers and Electrum wallets. Still to this day, the Bitcoin Core software does not even build an address history database (even though it could). Even the historical transaction index is optional and turned off by default.
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,693
is your definition different than the way i've seen stated elsewhere, that being, the hard limit is what the miners choose to accept while the soft limit is what they choose to produce?
While that is true with software like Core's, the EB is flexible based upon the AD value. If the AD is zero the EB has no effect. If AD is a high value then the EB behaves like a hard limit.

I am thinking that the best way to consider a hard limit is as a value which gets reinforced with a new release of the software, i.e., if software X has a block limit constant of 32MB, and a user changes his own value, say to 64MB, then installs a new version of software X, the limit once again becomes 32MB.
In contrast, a user-set limit which persists after a new release is a soft limit. This is the case with BU, where a user can set their EB to any value and that value remains unchanged when new BU releases are installed by the user.

In a decentralised environment this is important because the Schelling point strength of a hard limit grows proportionately with the number of users.
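
A rough sketch of the EB/AD behaviour described above (illustrative only, not BU's actual implementation): a block over the user's EB is treated as excessive, and its chain is only followed once AD further blocks have been built on top of it.

```python
def follow_tip(block_size, depth_on_top, eb, ad):
    """Should this chain tip be followed?  All names are illustrative."""
    if block_size <= eb:
        return True            # within the excessive-block setting
    # Excessive: wait for AD blocks on top.  With AD = 0 this is
    # immediately true (the EB has no effect); with a very high AD the
    # EB behaves like a hard limit, as described above.
    return depth_on_top >= ad
```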

edit: The point about "attack" blocks is not always to think about what is economically sound, but also to consider the effects from a miner for whom economics is not a concern.
With regard to the block limit, and a repeat of the 1MB situation, I am sympathetic to the goal of BUIP101 to do away with the limit; however, I do not agree with abandoning dev-determined defaults, which just help users get their node running with a minimum learning curve.
 
Last edited:
  • Like
Reactions: AdrianX

Mengerian

Moderator
Staff member
Aug 29, 2015
536
2,597
Webinar / Q&A with Bitcoin ABC Devs and guests:


From the description:
Participants/panel:

Amaury Séchet (Bitcoin ABC), Shammah Chancellor (Bitcoin ABC), Antony Zegers (Bitcoin ABC), Jason B. Cox (Bitcoin ABC), Chris Pacia (BCHD & OB1), Jonathan Toomim (Toomim Brothers Mining), Juan Garavaglia (Bitprim) & Guillermo Paoletti (Bitprim).



Pre-announced subjects were Canonical/Lexical Transaction Ordering, OpCheckDataSig, the 100-byte limit for transactions, and Block Size. Other subjects were also brought up by the attendees and addressed by the panel.