Block space as a commodity (a transaction fee market exists without a block size limit)

Bagatell

Active Member
Aug 28, 2015
728
1,191
From the perspective of participating in this network as an independent auditor, I must consider it likely that the block size would grow 1000-fold - or to any size which prohibits my ability to validate the network - in the next year or so (had there been no limit).
What a curious perspective. Is an independent auditor possible (let alone necessary)? Don't the miners validate the network?
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
@Yoghurt114

Thank you again for the thoughtful post. I am going to break your post down into three pieces:

1. Historical growth rate as an indicator of the future growth rate
2. Fast block propagation
3. Bitcoin as a settlement layer versus a payment network

We clearly disagree on the usefulness of #1, and #3 is more a question of personal opinion and ideology than science. That leaves #2 up for discussion...

Things such as IBLT (and, sort of, also the relay network) don't work in a way like '7 seconds per MB'; they work like '7 seconds for the full block, regardless of its size'. Yes, it will take time for this constant-sized message to propagate through the network, but it does not affect the orphan rate...
I agree that if messages contain precisely the same amount of information (Shannon Entropy) then, all else being equal, they will take the same amount of time to propagate. I disagree, however, that such a scheme is feasible in a scenario where miners are free to build blocks according to their own volition.

Consider IBLTs, for example. It is true that we could all agree on some IBLT design that would allow miners to announce their solved blocks with a constant amount of bytes. However:

(a) A miner's solved block may still include a small percentage of transactions the other miners are not aware of (even after reception of the IBLT). These need to be communicated in full.

(b) The IBLT needs to be "designed" for a certain operating condition. If it's designed for 8 MB blocks at 95% mempool homogeneity (or whatever the proper lingo is), then it will not be constant size if people try to use it for 128 MB blocks.

(c) If even 10% of the hash power doesn't play along, then the network propagation impedance grows dramatically (the cost to produce block space for all miners increases).

So, there are at least two reasons the full propagation time will still depend on the size of the block. (1) As the blocks get bigger, and if we imagine a certain fraction of transactions need to be communicated in full (due to mempool heterogeneity), then the total amount of information still grows with the size of the solved block. (2) If a rogue miner decided to spam a massive block, then he couldn't even take advantage of the IBLT coding gain and he'd be at a greater relative disadvantage than if blocks were propagated in full.
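To put some made-up numbers on reason (1), here is a toy sketch (my own illustration; the IBLT size and the degree of mempool synchronization are assumed values, not figures from any real implementation):

```python
# Toy model (my own illustration, not from any implementation): even with an
# IBLT-style scheme, the bytes a miner must push at announcement time still
# grow with block size once mempools are not perfectly synchronized.

def announcement_bytes(block_size_bytes, iblt_bytes=20_000, mempool_sync=0.95):
    """Bytes sent at announcement: a (roughly) fixed-size IBLT plus the
    transactions the peer has never seen, which must be sent in full.
    Both iblt_bytes and mempool_sync are assumed values."""
    unknown_fraction = 1.0 - mempool_sync
    return iblt_bytes + unknown_fraction * block_size_bytes

for mb in (1, 8, 32, 128):
    size = mb * 1_000_000
    print(f"{mb:>4} MB block -> ~{announcement_bytes(size) / 1_000:,.0f} kB announced")
```

At 95% synchronization the announcement is far smaller than the full block, but it still grows linearly with the block size - the coding gain lowers the propagation impedance rather than eliminating it.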



Now here's a challenge for you. If you read this post and the responses (you'll have to skip over some unrelated posts), there are a lot of good arguments that it will not be possible to use a block size limit to drive up fees: either the protocol would fork around the limit or people would voluntarily exit the currency system. What is your opinion on this?

https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-39#post-1255
 
Last edited:
  • Like
Reactions: awemany

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
What a curious perspective. Is an independent auditor possible (let alone necessary)? Don't the miners validate the network?
what's more curious to me is that somehow a 1000 fold increase in usage of the network is considered a bad thing.
 
  • Like
Reactions: awemany and Peter R

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@Yoghurt114

thx for participating. we've had pleasant convos on reddit. here's my lazy man's counterpoint list. sorry, no time for detail:

pruning
the "large miner uses large blocks to attack small miners" argument contradicts itself
price increases with increased use
increasing merchant adoption
increasing full nodes
decentralization means users, not full nodes
min fee adjustment
SPV mining as a defense
governance dynamics
miners are good, not bad
 

Yoghurt114

New Member
Sep 17, 2015
21
8
What a curious perspective. Is an independent auditor possible (let alone necessary)? Don't the miners validate the network?
An independent auditor (also described as an economically dependent auditor) is a full node. They are not required if you are prepared to trust that miners and miners alone have ownership of the knowledge that the system works as intended (or doesn't); if that were true, then we are back to an inefficiently distributed and costly form of Paypal.

The ability for anyone to audit the blockchain (at reasonable cost) is what makes Bitcoin Bitcoin.

What is curious is the idea of this perspective being curious. Why would you use a money if you cannot be certain of its integrity and correctness - or have no way to find out?

cypherdoc said:
what's more curious to me is that somehow a 1000 fold increase in usage of the network is considered a bad thing.
It isn't - unless it puts the network in a situation where the ability to validate is strained, or must otherwise be deferred to incentivized miners and the like that require trust.

Peter R said:
We clearly disagree on the usefulness of #1, and #3 is more a question of personal opinion and ideology than science. That leaves #2 up for discussion...
Agreed.

I will digest your response and linked topic, and reply after.
 

dgenr8

Member
Sep 18, 2015
62
114
But what if propagation impedance drops to, effectively, 0 - as an IBLT or some such scheme could pull off? What then? Do we have a boundless block size limit? Would transaction fees not be 0 satoshi? And would not a single entity in the world be able to validate all throughput?
@Yoghurt114

Since IBLT is used to forward validated transactions, assuming 0 impedance also assumes that validation is costless. That's not the case, and the hypothetical is uninteresting.

More generally, showing that orphan costs are extremely low in no way "breaks" Peter's work. It only changes the quantitative results toward a higher optimum block size.

You seem to think an end goal is to establish non-zero transaction fees. In fact, zero transaction fees would be fantastic, unless achieving them came at too high a cost to some other metric, such as network hashrate. Peter's model is more holistic because it does not require an assumption about the desirability of any absolute level of transaction fees.

You are not engaging Peter on a coincident plane. Your questions assume results that Peter's model is capable of finding with more primitive inputs. In other words, he is trying to solve the problem, whereas you think you already know the solution. You neither improve his model nor suggest a superior one.
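For concreteness, here is a deliberately simplified numeric sketch of the kind of trade-off Peter's model captures (my own toy version with made-up fee numbers, not his actual equations): a miner weighs the extra fees from a larger block against the higher chance the block is orphaned while it propagates.

```python
import math

BLOCK_REWARD = 25.0          # BTC block subsidy (2015-era value)
BLOCK_INTERVAL = 600.0       # average seconds between blocks

def expected_revenue(size_mb, fee_per_mb, impedance_s_per_mb):
    """Expected revenue = (subsidy + fees) * P(not orphaned), taking
    P(not orphaned) ~ exp(-propagation_time / block_interval)."""
    prop_time = impedance_s_per_mb * size_mb
    return (BLOCK_REWARD + fee_per_mb * size_mb) * math.exp(-prop_time / BLOCK_INTERVAL)

def best_size(fee_per_mb, impedance, sizes=range(1, 10001)):
    """Block size (in MB) that maximizes expected revenue, by brute force."""
    return max(sizes, key=lambda s: expected_revenue(s, fee_per_mb, impedance))

# Assumed fee density of 0.5 BTC per MB of transactions, purely for illustration.
for z in (7.0, 0.7, 0.07):   # propagation impedance in s/MB: ~today, 10x, 100x better
    print(f"impedance {z:>5} s/MB -> revenue-maximizing block size: "
          f"{best_size(fee_per_mb=0.5, impedance=z)} MB")
```

In this toy version each ten-fold drop in impedance pushes the revenue-maximizing size up by roughly an order of magnitude - a higher optimum block size, not an unbounded one.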
 
  • Like
Reactions: awemany and Peter R

Yoghurt114

New Member
Sep 17, 2015
21
8
@Yoghurt114
Consider IBLTs, for example. It is true that we could all agree on some IBLT design that would allow miners to announce their solved blocks with a constant amount of bytes. However:

(a) A miner's solved block may still include a small percentage of transactions the other miners are not aware (even after reception of the IBLT). These need to be communicated in full.
So long as the base set of data is sufficiently similar, these transactions may be extracted from the IBLT; it is an error correcting data structure. But yes, a fallback to getblock is required if IBLT reconciliation fails.
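For the curious, here is a toy sketch of that peeling recovery - nothing like the real proposal's wire format, just the idea: the sender's txids go into a small fixed table, the receiver subtracts what it already has, and the few unknown transactions fall out.

```python
import hashlib

CELLS, HASHES = 64, 3   # tiny table, purely for illustration

def _positions(txid):
    """The table cells a txid maps to."""
    return [int.from_bytes(hashlib.sha256(bytes([i]) + txid).digest()[:4], "big") % CELLS
            for i in range(HASHES)]

def _check(txid):
    """Per-txid checksum used to detect 'pure' cells."""
    return int.from_bytes(hashlib.sha256(b"chk" + txid).digest()[:8], "big")

def encode(txids, table=None, sign=+1):
    """Add (sign=+1) or remove (sign=-1) txids from the table."""
    if table is None:
        table = [[0, 0, 0] for _ in range(CELLS)]   # [count, id_xor, checksum_xor]
    for t in txids:
        for p in _positions(t):
            table[p][0] += sign
            table[p][1] ^= int.from_bytes(t, "big")
            table[p][2] ^= _check(t)
    return table

def peel(table):
    """Recover txids present on the sender's side but not the receiver's."""
    recovered, progress = set(), True
    while progress:
        progress = False
        for count, id_xor, chk in table:
            t = id_xor.to_bytes(32, "big")
            if count == 1 and _check(t) == chk:      # a 'pure' cell: exactly one unknown txid
                recovered.add(t)
                encode([t], table, sign=-1)          # strip it out and keep peeling
                progress = True
    return recovered

# Receiver already knows 997 of the sender's 1000 txids:
sender = [hashlib.sha256(str(i).encode()).digest() for i in range(1000)]
table = encode(sender)                    # what the sender transmits (fixed size)
encode(sender[:-3], table, sign=-1)       # receiver subtracts its own mempool
print(sorted(t.hex()[:8] for t in peel(table)))   # -> the 3 txids the receiver was missing
```

The real scheme has to size the table and define fallback behaviour carefully, which is exactly where the getblock fallback comes in.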

@Yoghurt114
(b) The IBLT needs to be "designed" for a certain operating condition. If it's designed for 8 MB blocks at 95% mempool homogeneity (or whatever the proper lingo is), then it will not be constant size if people try to use it for 128 MB blocks.

(c) If even 10% of the hash power doesn't play along, then the network propagation impedance grows dramatically (the cost to produce block space for all miners increases).
Yes, the requirement for mempool homogeneity among miners is perhaps the fatal flaw of an IBLT-like scheme to achieve true O(1) block propagation. Rusty Russell gave an excellent talk at the Scaling Bitcoin conference addressing this. It'd make for a less magical-sounding IBLT scheme than was originally perpetuated, but it would still drastically reduce propagation impedance and orphan rates, to the point where the block sizes implied by your paper are huge.

But, lest we forget, IBLT is not all there is; a failure of IBLT does not mean O(1) block propagation is impossible:

Other O(1) block propagation proposals exist. Take, for example, the recent discussion about 'weak blocks' or 'near blocks', which could be thought of as adding a p2pool-type of share propagation method on top of bitcoin's p2p network, where miners announce what they are working on before finding a valid block solution by broadcasting 'nearly valid' blocks (valid at lower difficulty).

These blocks can be pre-validated by competitors and built upon: when a miner has found a valid block, a constant-sized header plus an O(log n)-sized merkle branch containing the coinbase is passed to them, allowing anyone to complete the pre-announced block with some 800 bytes of data. That is pretty much exactly as much information as a Stratum protocol job-announce message, which is even less than IBLT in the best scenario.
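A rough back-of-the-envelope check of that ~800-byte figure (my own arithmetic; the coinbase size is just an assumed typical value):

```python
import math

def announce_bytes(num_txs, header=80, coinbase=250, hash_len=32):
    """Block header + coinbase tx + merkle branch linking the coinbase to the root."""
    branch_depth = math.ceil(math.log2(max(num_txs, 2)))
    return header + coinbase + branch_depth * hash_len

for n in (2_000, 8_000, 2_000_000):   # roughly 1 MB, 4 MB and 1 GB blocks' worth of txs
    print(f"{n:>9} txs -> ~{announce_bytes(n)} bytes to complete a pre-announced block")
```

Even at a couple of million transactions per block the completion message stays around a kilobyte, since only the merkle-branch term grows, and only logarithmically.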

This last scheme holds up under all 3 of your arguments, and would similarly push propagation impedance toward 0.

@Yoghurt114
So, there are at least two reasons the full propagation time will still depend on the size of the block.…

Now here's a challenge for you. If you read this post and the responses (you'll have to skip over some unrelated posts), there are a lot of good arguments that it will not be possible to use a block size limit to drive up fees: either the protocol would fork around the limit or people would voluntarily exit the currency system. What is your opinion on this?

https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-39#post-1255

I think you're referring to this post:

https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-40#post-1275

Right. So, if I understand correctly, the implication is that Bitcoin is not interesting if pushing a transaction onto the blockchain is perceived to be expensive. If that perception exists, then it would follow that incentives through fees on valued content are not a good way at all to run a decentralized broadcast system. Perhaps that's true; we haven't seen a fee market form out of dust yet, so I couldn't argue one way or the other. Incentives in the longer term are interesting, but one can only speculate.

But consider: where would these people move to? If it's a limitless-block-size Litecoin fork, then why would this fork suddenly have solved the problem that is the reason Bitcoin *does* have a limit? The reason there is a limit in Bitcoin is not an arbitrary one: while the limit is currently perceived to be quite low, and is rather a nuisance for a great many people, myself included, it undeniably allows the system to remain decentralized with much greater ease while the underlying problems it is protecting us from are being fixed.

Check out this thread: http://lists.linuxfoundation.org/pipermail/bitcoin-dev/2015-May/008091.html

But fine, let's speculate (and this is 10-year long term). I can conceive of a future where, if the Lightning Network exists and turns out to be useful, on-chain transactions will become very expensive. 1, 2, 10 dollars, perhaps more (or, optimistically, less). Those, say, 10 dollars - then considered an on-ramp similar to opening a bank account - allow you to participate in a highly efficient payment network that scales, allowing you to transact at sub-sub-cent fees. I don't think paying for coffee with an on-chain transaction is feasible in the long term, as has been said many, many times, by many people, before.

These costly on-the-blockchain transactions and super-cheap off-the-blockchain-but-still-Bitcoin transactions keep the system as a whole relatively economical; the quadratic broadcast network cost problem is dodged by using a scalable network on top, while still providing sufficient incentives to contributors on the bottom layer.

I also think miners will mature; they will actually know what they're on about instead of saying 'just tell us what software to run and we'll run it, we'll also leave all defaults unchanged and irresponsibly focus on nothing-but-short-term profits'. That would massively mitigate 'spam/stress attacks', as well as make the chance of a recurrence of a July 4th fork negligible. They will develop appropriate fee policies according to all manner of constraints. Quite possibly orphan risk will be a part of that, though I think by that time we'll have O(1) block propagation - so the size of blocks will not be as relevant. Instead, I think we'll see fee policies according to some 'time of day/week' schedule; we'd be seeing low-priority transactions form that can wait for, say, half a day before being included.

As for user demand: there's always money in the banana stand. Bitcoin provides unique utility and it is only logical a market would form around it so long as that is true.
 

Yoghurt114

New Member
Sep 17, 2015
21
8
@Yoghurt114
Since IBLT is used to forward validated transactions, assuming 0 impedance also assumes that validation is costless. That's not the case, and the hypothetical is uninteresting.
Validation of an IBLT is constant-time, so it would result in 0 (but sort of agreed; other variables would make the value negligibly close to 0 rather than actually 0). So long as the impedance does not change, or changes utterly negligibly, with respect to the block size, the result breaks.

@Yoghurt114
You seem to think an end goal is to establish non-zero transaction fees. In fact, zero transaction fees would be fantastic, unless achieving them came at too high a cost to some other metric, such as network hashrate.
Zero fees *would* be fantastic, but unless there is some other way to incentivize miners to secure the network - and I would very happily be shown such an incentive - I don't really think it's that great a feat.

@Yoghurt114
You are not engaging Peter on a coincident plane. Your questions assume results that Peter's model is capable of finding with more primitive inputs. In other words, he is trying to solve the problem, whereas you think you already know the solution. You neither improve his model nor suggest a superior one.
Right.

I'm exploring the validity of the claims perpetuated (widely) in the paper. If that is something to object to then I withdraw everything I've said here.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
My problems with LN hubs:

Centralized
Possible MSB's
Untested
Vaporware
Fee siphoning weakening mining
Bitcoin constraining thus centralizing
Requires sunk money
Requires 24/7 uptime
Requires several soft forks
Requires bigger blocks
Code complexity
Probable high fees
Contrary to Satoshi's original vision
User complexity-Bitcoin ed & LN ed required
 
Last edited:
  • Like
Reactions: dgenr8

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Here's a new "half-baked" diagram I've been working on to help understand how information enters the Bitcoin Network and how we come to consensus on the state of the ledger.

[diagram: ledger uncertainty growing between block solutions - grey region = uncertainty removed in advance by weak blocks / mempool syncing, purple region = residual uncertainty resolved when the block is announced]
Assume that (transactional) information enters the Bitcoin Network at some constant rate. The uncertainty of the state of the ledger thus increases with time as miners work to find a new block solution. Using techniques like weak blocks, mempool syncing, etc., a lot of this "uncertainty" can be eliminated ahead of time (the grey region). However, there will still exist some uncertainty (represented by the purple region). The point of finding blocks is to sort of "collapse the wave function" and come to a new state of agreement. This necessitates the communication of information to resolve the remaining uncertainty. The amount of information depends on how much uncertainty exists. I argue that (all other variables held constant) the amount of uncertainty depends on the rate at which new information enters the system in the first place (i.e., on the number of TXs per second).
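To illustrate with made-up numbers (the average transaction size and the fraction of uncertainty removed in advance are both assumptions):

```python
BLOCK_INTERVAL = 600     # seconds between blocks, on average
TX_BYTES = 500           # assumed average transaction size
PRE_SYNCED = 0.95        # assumed fraction of uncertainty removed in advance (grey region)

def residual_kilobytes(tx_per_sec):
    """Information still to be communicated at block time (the purple region)."""
    new_info = tx_per_sec * BLOCK_INTERVAL * TX_BYTES
    return (1 - PRE_SYNCED) * new_info / 1_000

for rate in (3, 30, 300):    # roughly 1 MB, 10 MB and 100 MB blocks' worth of TXs
    print(f"{rate:>4} TX/s -> ~{residual_kilobytes(rate):,.0f} kB left to resolve at block time")
```

Even with 95% of the uncertainty removed ahead of time, what remains to be communicated scales with the TX rate - which is the point of the diagram.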
 
Last edited:

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
Yes, the requirement for mempool homogeneity among miners is perhaps the fatal flaw of an IBLT-like scheme to achieve true O(1) block propagation. Rusty Russell gave an excellent talk at the Scaling Bitcoin conference addressing this. It'd make for a less magical-sounding IBLT scheme than was originally perpetuated, but it would still drastically reduce propagation impedance and orphan rates, to the point where the block sizes implied by your paper are huge.
It sounds like we are in agreement then. IBLTs could significantly reduce the propagation impedance, but they won't in general make the impedance zero. Reduced propagation impedance would allow miners to produce large blocks for a lower "orphan-adjusted" cost per byte.

This seems like a good thing to me.
 
  • Like
Reactions: awemany

Yoghurt114

New Member
Sep 17, 2015
21
8
Kind of getting off-topic, but I'm happy to engage.

My problems with LN hubs:

Centralized
Possible MSB's
Untested
Vaporware
Fee siphoning weakening mining
Bitcoin constraining thus centralizing
Requires sunk money
Requires 24/7 uptime
Requires several soft forks
Requires bigger blocks
Code complexity
Probable high fees
Contrary to Satoshi's original vision
User complexity-Bitcoin ed & LN ed required
I will try and be as concise as you are ;)

> LN hubs

They are not hubs, they are nodes.

> Centralized

It is a peer-to-peer network much like Bitcoin's; the difference is it's point-to-point, not broadcast.

> Untested

Well yes. There isn't even a fully fledged implementation yet. It's being worked on on the Alpha sidechain. But agreed, it will be interesting to see how this works in the wild.

> Fee siphoning weakening mining

Demand begets demand.

> Bitcoin constraining thus centralizing

I don't understand that.

> Requires sunk money

It requires active money; holding/saving and doing nothing with it is sunk money.

> Requires 24/7 uptime

With BIP62's malleability fix it allows outsourcing and further reduction of risk (such as outages).

> Requires several soft forks

None of which is controversial, and they allow for an entire plethora of additional smart contracts that are useful.

> Requires bigger blocks

Doesn't necessarily, but yes big blocks are implied if this takes off in a big way ... this is a problem now?

> Code complexity

None of which siphons through to the existing network.

> Probable high fees

This is possibly a fundamental disagreement I've expanded on earlier, but you're right.

> Contrary to Satoshi's original vision

Don't know of any proof of that, nor do I find it likely (he himself designed the smart contract system and proposed a method of doing high-frequency transactions using nSequence) - and whatever the case, I don't think this is very relevant.

> User complexity-Bitcoin ed & LN ed required

Smart wallet designs can actually make the general Bitcoin UX simpler; no more waiting for confirmations, no more zero-confs. It's literally 'pay, send, move on'. As for privacy (also a complex issue in Bitcoin) - LN is a huge step forward.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,995
@Yoghurt114

let's make a deal ;)

you give us a no limit now and we'll give you the LN. by the time the network explodes next year as a result of no limit, LN will be ready to take over as either a stress reducer or as the primary pmt network as you envision it.

how's that?
 

Bloomie

Administrator
Staff member
Aug 19, 2015
511
803
We would actually benefit from a thread discussing the LN specifically. Any takers?
 

Yoghurt114

New Member
Sep 17, 2015
21
8
The point of finding blocks is to sort of "collapse the wave function" and come to a new state of perfect agreement. This necessitates the communication of information to resolve the remaining uncertainty.
(if I understand that diagram correctly - which is nice by the way)

There is no reason for uncertainty not to remain after a block is found by a miner; finding a block does not make all uncertainty dissipate into it. Only that which has been included in the block becomes 'more certain' as new miners are working on top of it, that which has not been included remains uncertain.

For example, the stress test that happened a few weeks ago allowed for the formation of a backlog of, what was it? 50 or so thousand transactions? Those transactions remained uncertain for a long time, and the 'wave function collapse' left most of those transactions untouched for many blocks.

----

The amount of information depends on how much uncertainty exists.
So, again, this assumes O(1) block propagation is impossible, which I argue it isn't. But regardless, see below.

I argue that the amount of uncertainty depends on the rate at which new information enters the system in the first place (i.e., on the number of TXs per second).
Is it not more logical this depends on _the way_ a miner composes his block, *or* how much peers know of its contents?

I'll break down what O(1) block propagation methods are being tinkered with today.

- Weak blocks: they allow peers to fully know the composition and contents of a block a peer is working on (with some delay - admittedly causing orphan risk), but that would fully cover knowledge of contents.

- IBLT (proper), which requires homogeneous mempools - requiring everyone to remember everything, or everyone to forget equally as much. But it covers knowledge of the contents of a block regardless. There's orphan risk when peers don't play along.

- IBLT (Rusty's variant): it makes some assumptions about block composition policies by assuming transactions are sorted by fee density (the same assumption your paper makes in order for a fee market to form) - that covers knowledge of the method of composition, at the cost that this assumption may not hold, which introduces orphan risk.

- Fast relay network, uses 'last-seen' tx selection, which in combination with IBLT would also result in highly efficient O(1) propagation, but it's fairly centralized and vulnerable to DDoS.

So, the critical reader will observe all of these current designs do (possibly or otherwise) introduce orphan risk.

In other words:

It sounds like we are in agreement then. IBLTs could significantly reduce the propagation impedance, but they won't in general make the impedance zero.
Somewhat agreed with current proposals, to a point. I must stress I am thoroughly unconvinced O(1) propagation is outright impossible - which would eliminate block-size dependent orphan risks, and I think it is unsafe to assume that it is.

But. At an impedance indistinguishable from zero (as even current O(1) designs would pull off), block sizes far greater than what is currently proposed by even BIP 101 would be possible under your result. From previous observation in this thread I noticed this is generally regarded as a good thing; it is not. Not if these block sizes (looping back to my original objection) prohibit independent auditors from validating the network and the system. If you cannot know that the system is what it claims to be, Bitcoin holds no interest as a decentralized money and network, and if that is true, then there is no reason not to just use Ripple or Paypal instead. Auditability is not something we can just throw in the bin.
 

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595


There is no reason for uncertainty not to remain after a block is found by a miner; finding a block does not make all uncertainty dissipate into it. Only that which has been included in the block becomes 'more certain' as new miners are working on top of it, that which has not been included remains uncertain.
Agreed. Like I said, the diagram is only "half-baked." I think it still communicates the point I'm trying to make:

- the blocks serve to remove uncertainty regarding the state of the ledger
- the amount of uncertainty they remove is proportional to how much information needs to be communicated during the block solution announcement (by the Shannon-Hartley theorem; see the relation below)
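For reference, the relation I'm appealing to is just the standard Shannon-Hartley capacity limit (my notation): if H bits of uncertainty remain when the block is found, and the announcement travels over a channel of bandwidth B with signal-to-noise ratio S/N, then

$$ t_{\text{announce}} \;\ge\; \frac{H}{C}, \qquad C = B \log_2\!\left(1 + \frac{S}{N}\right) $$

so, for a fixed channel, the announcement time scales with the residual uncertainty H; the impedance can be driven down, but not to zero while H > 0.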

I think the above two points are facts. Agree?

I argue that the amount of uncertainty depends on the rate at which new information enters the system in the first place (i.e., on the number of TXs per second).
Is it not more logical this depends on _the way_ a miner composes his block, *or* how much peers know of its contents?
Yes, that matters too. I agree. But I'm holding that constant to isolate just the effect of more transactions per second.

It sounds like we are in agreement then. IBLTs could significantly reduce the propagation impedance, but they won't in general make the impedance zero.
Somewhat agreed with current proposals, to a point. I must stress I am thoroughly unconvinced O(1) propagation is outright impossible - which would eliminate block-size dependent orphan risks, and I think it is unsafe to assume that it is.
Good. Originally, people were saying that IBLT proved that my paper was "fundamentally flawed." Now we agree that the results still apply even considering IBLTs.

I believe that any scheme for true network-wide (100% hash power participation) O(1) block propagation will have similar "gotchas" if you permit miners to build blocks according to their own volition.
 
Last edited:

Yoghurt114

New Member
Sep 17, 2015
21
8
Do you have any comments on the effect of depriving nodes of the ability to validate - my primary concern?
 

dgenr8

Member
Sep 18, 2015
62
114
Zero fees *would* be fantastic, but unless there is some other way to incentivize miners to secure the network - and I would very happily be shown such an incentive - I don't really think it's that great a feat.
That's an easy one. The incentive will come from the same place it's come from so far: rising exchange rate (and equivalently, in some far off post-fiat era, deflation).

The big worry about miner incentives is due to halving. But the exchange rate increase since the last halving has not only nullified its effect, but paid for the next 3 halvings as well.

You don't help the exchange rate by artificially restricting the capacity of the system. Quite the opposite.
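Rough numbers behind that claim (the prices are approximations from memory, so take the arithmetic as order-of-magnitude only):

```python
import math

price_at_2012_halving = 12.0    # ~USD/BTC around Nov 2012 (approximate)
price_now = 230.0               # ~USD/BTC in Sep 2015 (approximate)

print(f"subsidy value then: ${50 * price_at_2012_halving:,.0f} per block")
print(f"subsidy value now:  ${25 * price_now:,.0f} per block")

# Each halving cuts the subsidy in half, so a price rise of 2^k "pays for" k halvings.
halvings_covered = math.floor(math.log2(price_now / price_at_2012_halving))
print(f"the ~{price_now / price_at_2012_halving:.0f}x price rise covers {halvings_covered} halvings: "
      f"the 2012 one plus the next {halvings_covered - 1}")
```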


Not if these block sizes (looping back to my original objection) prohibit independent auditors from validating the network and the system.
The auditors can just plug into the same weak blocks, IBLTs, or whatever scheme you claim allows "zero propagation impedance."


Validation of an IBLT is constant-time, it would result in 0
Weak blocks and IBLT do not increase a miner's validation rate.

If the network ever produces transactions faster than our miner is able to validate them, at those times his blocks will be limited in size by his validation speed.

At those times, such miners will produce smaller blocks than more powerful miners who are only constrained by the chance of OTHER miners not being able to receive and validate their blocks fast enough (orphan risk).

If by some miracle our miner knows that everyone else is totally caught up with the tx set he is about to produce (an impossible condition), then the model places no restraint on the blocksize. The consequences of an impossible antecedent aren't very interesting.
 
  • Like
Reactions: awemany

Peter R

Well-Known Member
Aug 28, 2015
1,398
5,595
"Do you have any comments on the effect of prohibiting nodes from the ability of validating; my primary concern?"

I do, but, like LN, it is not relevant to this thread. The hypothesis in this thread is that a fee market exists without a block size limit. Whether it results in an equilibrium block size ten years down the road that allows Raspberry Pis to act as full nodes, or one that requires specialized hardware and Gbps connections, is outside the scope of this thread.

I will say one thing, however. Right now, the network propagation impedance is ~7 sec/MB. We would need to divide that by about one thousand in order for miners to efficiently produce GB blocks. So I'm more worried about the propagation impedance not falling fast enough than I am about it falling too fast.
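The arithmetic behind "divide by about one thousand", for anyone who wants it spelled out:

```python
BLOCK_INTERVAL = 600.0      # seconds
GB_BLOCK_MB = 1000          # a 1 GB block, in MB

for impedance in (7.0, 0.007):     # s/MB: roughly today vs. ~1000x better
    prop_time = impedance * GB_BLOCK_MB
    print(f"{impedance} s/MB -> a 1 GB block takes ~{prop_time:,.0f} s to propagate "
          f"({prop_time / BLOCK_INTERVAL:.0%} of the block interval)")
```

At ~7 sec/MB a 1 MB block spends about 1% of the block interval propagating; to give a GB block the same footing, the impedance needs to fall by roughly a factor of one thousand.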
 
Last edited:
  • Like
Reactions: awemany and dgenr8

Yoghurt114

New Member
Sep 17, 2015
21
8
That's an easy one. The incentive will come from the same place it's come from so far: rising exchange rate (and equivalently, in some far off post-fiat era, deflation).

The big worry about miner incentives is due to halving. But the exchange rate increase since the last halving has not only nullified its effect, but paid for the next 3 halvings as well.
So far it hasn't come from a rising exchange rate; it's come from inflation due to the generation of new coins.

(Price) deflation of a money happens when the economy it is employed by increases in productivity and efficiency; it is not an inherent property of a scarce money. You therefore cannot base the incentive structure that maintains the scarce properties of the money on price deflation (except by somehow guaranteeing continually increasing productivity of said economy).

You also can't guarantee continual increase of the exchange rate because the free market equilibrium of the exchange rate would immediately compensate.

The auditors can just plug into the same weak blocks, IBLTs, or whatever scheme you claim allows "zero propagation impedance."
None of the O(1) block propagation proposals fix the inherent O(n^2) scalability problem of a broadcast network.
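To spell out what I mean by O(n^2), with assumed per-user numbers (an illustration, not a measurement):

```python
TX_PER_USER_PER_DAY = 2     # assumed
TX_BYTES = 500              # assumed average transaction size

def network_bytes_per_day(users, nodes):
    """Under broadcast, every node must receive (and validate) every user's transactions."""
    return users * TX_PER_USER_PER_DAY * TX_BYTES * nodes

for users in (10_000, 100_000, 1_000_000):
    nodes = users // 100                      # assume 1 full node per 100 users
    tb = network_bytes_per_day(users, nodes) / 1e12
    print(f"{users:>9} users -> ~{tb:,.3f} TB/day moved network-wide")
```

Ten times the users means roughly a hundred times the total work the network as a whole performs, which O(1) block announcement does nothing to change.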

If the network ever produces transactions faster than our miner is able to validate them, at those times his blocks will be limited in size by his validation speed.

At those times, such miners will produce smaller blocks than more powerful miners who are only constrained by the chance of OTHER miners not being able to receive and validate their blocks fast enough (orphan risk).

If by some miracle our miner knows that everyone else is totally caught up with the tx set he is about to produce (an impossible condition), then the model places no restraint on the blocksize. The consequences of an impossible antecedent aren't very interesting.
Miners are given massive incentives, which allow them to validate and compose an inconceivable number of transactions, the likes of which full nodes cannot. Full nodes have an indirect incentive they can use to subsidize their validating, but it is not nearly as boundless as the incentives given to miners - and for this system to be viable, there need to be many independent auditors (or economically dependent ones, whichever you prefer). The weakest link in the situation where a block size limit does not exist - full nodes - would be left dead in the water.
 
