Is Trustlessness Terminating Valid Agreement?


New Member
Dec 28, 2015

A short paper that starts from an established view of trustlessness and argues that distributed, permissioned, tokenized ledgers cannot prove their own consistency, which is a necessary condition for concurrency.

Various attempts have been made in recent years to state necessary and sufficient conditions for someone’s trusting a given transaction. The attempts have often been such that they can be stated in a form similar to the following:

(a) S trusts that P IFF
i. P is a valid transaction,
ii. S agrees that P, and
iii. S is settled in agreeing that P.

For example:

(b) S trusts that P IFF
i. S accepts P,
ii. S holds for P,
iii. P is a valid transaction.

Necessary and sufficient conditions for trustlessness have also been stated as follows:

(c) S trusts that P IFF
i. P is a valid transaction,
ii. S is sure that P is a valid transaction, and
iii. S has a model for P.

I shall argue that (a) is false in that the conditions stated therein do not constitute a sufficient condition for the truth of the notion that S trusts that P. The same argument will show that (b) and (c) fail if ‘holds for’ or ‘has a model for’ ('model', 'ledger', etc.) is substituted for ‘is settled in agreeing that’ throughout.
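Definition (a) can be rendered as a bare conjunction of its three conditions. A minimal sketch follows; the three predicates are placeholders, stand-ins for whatever account of validity, agreement, and settlement one prefers, and the names are mine, not the paper's:

```python
# Toy rendering of definition (a). The predicates is_valid, agrees, and
# is_settled are assumptions supplied by the caller, not fixed theories.
def trusts_a(is_valid, agrees, is_settled, subject, p):
    """S trusts that P iff (i) P is a valid transaction,
    (ii) S agrees that P, and (iii) S is settled in agreeing that P."""
    return is_valid(p) and agrees(subject, p) and is_settled(subject, p)

# A subject for whom all three conditions hold counts as trusting P
# under (a), whatever our intuitions say:
result = trusts_a(lambda p: True,
                  lambda s, p: True,
                  lambda s, p: True,
                  "S", "P")
```

The counterexample in Case I attacks exactly this conjunction: all three conjuncts can be true while, intuitively, S does not trust that P.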

I shall begin by noting three points. First, in that sense of “terminating” in which S’s being settled in agreeing P is a necessary condition of S’s trusting that P, it is possible for a person to be settled in agreeing to a transaction that is in fact invalid. Secondly, for any transaction P, if S is settled in agreeing P, and P entails Q, and S deduces Q from P and agrees Q as a result of this deduction, then S is settled in agreeing Q. Finally, we attach a negligible amount of metadata as a proposition that describes the transaction itself, on the assumption that transaction identity rules satisfy people identity requirements. Keeping these three points in mind, I shall now present a case in which the conditions stated in (a) are true for some transaction, though it is at the same time false that the person in question trusts that transaction.

This data is signed by a member of a ring of public keys, such that verifying the signed data does not reveal the signer’s unique key. We imagine that transactions, generally, are predictions (costly propositions about the world).
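As a minimal sketch of how such ring authentication can work, here is an AOS-style (Schnorr-based) ring signature over a toy group. The group parameters are illustrative only and far too small for real security; everything here is an assumption for demonstration, not the paper's construction:

```python
import hashlib
import random

# Toy group: p = 2*q + 1 is a safe prime, and g = 4 generates the
# order-q subgroup of Z_p*. Real deployments use much larger groups.
p, q, g = 2039, 1019, 4

def H(msg, point):
    """Hash a message and a group element into Z_q."""
    data = f"{msg}|{point}".encode()
    return int.from_bytes(hashlib.sha256(data).digest(), "big") % q

def keygen():
    """Return a (private, public) key pair."""
    x = random.randrange(1, q)
    return x, pow(g, x, p)

def ring_sign(msg, pubkeys, s, x_s):
    """Sign msg with the s-th key; the signature names no index."""
    n = len(pubkeys)
    c, r = [0] * n, [0] * n
    u = random.randrange(1, q)
    c[(s + 1) % n] = H(msg, pow(g, u, p))
    i = (s + 1) % n
    while i != s:  # close the ring with random responses
        r[i] = random.randrange(1, q)
        c[(i + 1) % n] = H(msg, pow(g, r[i], p) * pow(pubkeys[i], c[i], p) % p)
        i = (i + 1) % n
    r[s] = (u - x_s * c[s]) % q  # real response, using the private key
    return c[0], r

def ring_verify(msg, pubkeys, sig):
    """Recompute the challenge chain; valid iff it closes on c0."""
    c0, r = sig
    c = c0
    for i, y in enumerate(pubkeys):
        c = H(msg, pow(g, r[i], p) * pow(y, c, p) % p)
    return c == c0
```

Verification treats every ring position identically, so a valid signature proves that *some* key in the ring signed without revealing which one, which is the property the paragraph above relies on.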

Case I

Suppose that Pim and Godot have placed a bid for the same item. And suppose that Pim has strong evidence for the following conjunctive transaction (with metadata):

(d) Godot is the man who will get the item, and Godot has ten
coins in his pocket.

One caveat: all processes that decide choose the same value[0]. Pim and Godot both decide on (d), and therefore on (e) below.

Pim’s settlement (he sells) for (d) might be that the president of the company assured him that Godot would in the end be selected, and that he, Pim, had counted the coins in Godot’s pocket ten minutes ago. Transaction (d) entails:

(e) The man who will get the item has ten coins in his pocket.

Let us suppose that Pim sees the entailment from (d) to (e), and accepts (e) on the grounds of (d), for which he has strong evidence. In this case, Pim is clearly settled in agreeing that (e) is valid.

But imagine, further, that unknown to Pim, he himself, not Godot, will get the item. And, also, unknown to Pim, he himself has ten coins in his pocket. Transaction (e) is then valid, though transaction (d), from which Pim inferred (e), is invalid. In our example, then, all of the following are true: (i) (e) is valid, (ii) Pim agrees that (e) is valid, and (iii) Pim is settled in agreeing that (e) is valid. But it is equally clear that Pim does not trust that (e) is valid; for (e) is valid in virtue of the number of coins in Pim’s pocket, while Pim does not trust how many coins are in Pim’s pocket, and bases his agreement in (e) on a count of the coins in Godot’s pocket, whom he falsely agrees to be the man who will get the item.
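The moving parts of Case I can be made explicit in a small model. The world state and predicates below are invented for illustration; "settled" simply records that Pim deduced (e) from (d), for which he had strong but misleading evidence:

```python
# The actual state of affairs, unknown to Pim:
world = {"winner": "Pim", "coins": {"Pim": 10, "Godot": 10}}

def valid_d(w):
    # (d) Godot is the man who will get the item, and Godot has ten coins.
    return w["winner"] == "Godot" and w["coins"]["Godot"] == 10

def valid_e(w):
    # (e) The man who will get the item has ten coins in his pocket.
    return w["coins"][w["winner"]] == 10

# Pim's epistemic position:
pim_agrees_e = True    # he deduced (e) from (d)
pim_settled_e = True   # his settlement rests entirely on (d)

# All three conditions of definition (a) hold for (e)...
conditions_a_hold = valid_e(world) and pim_agrees_e and pim_settled_e
# ...yet (d), the sole ground of Pim's agreement, is invalid:
grounds_valid = valid_d(world)
```

Running this, `conditions_a_hold` is true while `grounds_valid` is false, which is exactly the gap the paper points at: (a)'s conditions are satisfied even though Pim's agreement rests on an invalid transaction.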

This example shows that definition (a) does not state a sufficient condition for someone’s trusting a given transaction. The same case, with appropriate changes, suffices to show that neither definition (b) nor definition (c) does so either.

Found originally at



Well-Known Member
Aug 28, 2015
Welcome and thanks.
What is IFF?

And for the ADD-afflicted like myself, could you include a conclusion or an abstract before launching into your proof? Nonetheless, it sounds interesting.

Thanks for posting.


Active Member
Feb 26, 2016
Any derived crypto is subject to the unstoppable hardfork attack and is thus nearly worthless as any sort of trustless instrument. There is implicit trust that the underlying crypto will not be hardforked to change some or all aspects of any instrument built on top of it.

NXT core vs NXT assets
ETH vs DAO contract

With recent examples of the hardfork attack, it cannot be assumed that there won't be a hardfork that changes fundamental things. And if fundamental things can be changed, how can any agreement built on top of them be trustless?


New Member
Dec 28, 2015
It's clear that proving only transitive trust is bad for trust. At the same time, trustlessness seems like an immanent property rather than an emergent one (the metaphor of /foundations/ or /fundamentals/ becomes errant when applied to immanent entities or states of the physical world, much as Cartesian dualism became errant when applied to psychological laws in the 20th century), and so it cannot be conferred by "tried and tested" methods of capitalism.

One option is to expand the definition of "trustlessness" to include a fourth condition, such as a reliability condition; another is to remove a premise of trustlessness and devise systems that are "minimalistically trustless". That is, trustlessness becomes (p is a valid transaction) and (S has a model for p), strictly and only, while we redefine what "trust" is for. What role does it play in a particular handshake process, for instance? Or how does it play a role in the actual development of the algorithms we presumably trust developers to implement?

I've been working on a thesis concerning the concept of what I call "deflationary orders of trust", hoping to expand on and strike up a viable response to such an inexorable state of affairs. Generally, I'd say the problem is deeper than another code library; it is more akin to the philosophical debates between the logical positivists and their interlocutors in the early 20th century. Remember, many of them came to the conclusion that "folk meaning" is systematically in error, and that almost every religious "proposition" was technically a meaningless expression, albeit one "in use".

But I think it's a question now of: okay, do we know what properties our system needs upfront before we unveil it? As a system, rather than merely a hodgepodge of philosophically questionable assumptions.

For instance, it begins with the very possibility of dialogue:

"Every economic consensus protocol gives proportionately more weight to rich people." - some random guy on the internet

This is what a very important member of the cryptocommunity has said. Is it true? Is it false? What is its status as a proposition? Is it a conjecture, a prediction, or a hypothesis? Is it testable? Why are developers building toward end goals that are fundamentally untestable? Why have we seen "consensus" implemented by almost every successful anarchism, where it is anarchism, yet we seem to think technology must obey our cognitive and epistemic breaks with facts of the world, and is such that we are tossing around ontological commitments as if they were necessarily true?
I might suggest that the author is reviewing an established view of trustlessness such that

Q is trustless iff S trusts P iff HJK

The problem is that the established view holds trustlessness to be meaningful, but under a model of trust as inflationary: truth-apt sentences about the world need to correspond to something — mental states, physical states, length of the chain, consensus of vote (assuming 1 Person = 1 Vote).

The paper demonstrates that this model enables counterexamples under ideal circumstances of exchange wherein atomically the actor does not in fact trust the transaction; nor can we prove a correspondence of the scope of the transaction to either actor. I think an interesting corollary is the consequences for proof of stake, such that the value of a coin might leak through the errancy of excess (think XMLbombs or "proof of encoding" contracts) in transactions that are UTXOs for longer than our expectations.


Active Member
Feb 26, 2016
I don't really understand a lot of the things you are saying, but maybe you can make sure your theory takes into account the very real possibility that fundamental properties most people assume can (and likely will) be violated.

like the 21 million bitcoin cap being increased
tx fees going up 100x
other clever ways a new hardfork could transfer value from bitcoin holders to some specific party, i.e. enabling certain companies' tech solutions while disabling alternatives