Gold collapsing. Bitcoin UP.

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
GMO!-->GMI!
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
The author of this book probably didn't intend this, but he exactly described why the DAC/DAO (or whatever they're calling it these days) concept will always be vaporware:

http://www.roboticstrends.com/article/why_self_driving_cars_should_never_be_fully_autonomous/

If robotics in extreme environments are any guide, Mindell says, self-driving cars should not be fully self-driving. That idea, he notes, is belied by decades of examples involving spacecraft, underwater exploration, air travel, and more. In each of those spheres, fully automated vehicles have frequently been promised, yet the most state-of-the-art products still have a driver or pilot somewhere in the network. This is one reason Mindell thinks cars are not on the road to complete automation.

“That’s just proven to be a loser of an approach in a lot of other domains,” Mindell says. “I’m not arguing this from first principles. There are 40 years’ worth of examples.”
Just replace "cars" with "businesses" and all the arguments are still valid.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
@Peter R

Seems to me you're talking about pre-activation whereas I've been talking about post-activation. Once the switch is flipped to activate BU, there become two separate chains, each with its own rules.

But remember that BU chooses the longest chain. There is no "activation switch" like with BIP100 or BIP101. The BU chain will simply be the one with at least 51% of the mining power -- if this happens to have blocks > 1MB then I guess there might be a minority (and rapidly diminishing) chain used by the 1MB holdouts.
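A minimal sketch of the selection rule being described, with hypothetical names (BU's eventual implementation may differ): the node simply follows the chain with the most accumulated proof-of-work, and block size never enters the comparison.

Code:
# Hypothetical fork-choice sketch: the most-work chain wins; a node's
# size setting only affects what it relays, not which tip it follows.
def best_chain(tips):
    """Pick the tip with the greatest cumulative proof-of-work."""
    return max(tips, key=lambda tip: tip.cumulative_work)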
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
The author of this book probably didn't intend this, but he exactly described why the DAC/DAO (or whatever they're calling it these days) concept will always be vaporware:

http://www.roboticstrends.com/article/why_self_driving_cars_should_never_be_fully_autonomous/
Just replace "cars" with "businesses" and all the arguments are still valid.
I don't think that you really understand DAC/DAOs... people are important parts of them, just like the author above is suggesting that people remain an important part of cars. The "autonomous" part is meant differently -- it means that the "corporate strategy" (and other overarching planning) is an emergent property, not a top-down decision. The theory underpinning a DAC is really the same theory that suggests that a free market outperforms a managed one.

Basically a DAC tries to capture the free market inside the bottle of a single corporation.
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
I don't think that you really understand DAC/DAOs... people are important parts of them, just like the author above is suggesting that people remain an important part of cars.
So a DAC/DAO has nothing to do with proposals like this?

https://bitcointalk.org/index.php?topic=919116.msg12615925#msg12615925

Or like this?

https://blog.ethereum.org/2014/05/06/daos-dacs-das-and-more-an-incomplete-terminology-guide/

A full autonomous agent, or a full artificial intelligence, is the dream of science fiction; such an entity would be able to adjust to arbitrary changes in circumstances, and even expand to manufacture the hardware needed for its own sustainability in theory. Between that, and single purpose agents like computer viruses, is a large range of possibilities, on a scale which can alternatively be described as intelligence or versatility. For example, the self-replicating cloud service, in its simplest form, would only be able to rent servers from a specific set of providers (eg. Amazon, Microtronix and Namecheap). A more complex version, however, should be able to figure out how to rent a server from any provider given only a link to its website, and then use any search engine to locate new websites (and, of course, new search engines in case Google fails). The next level from there would involve upgrading its own software, perhaps using evolutionary algorithms, or being able to adapt to new paradigms of server rental (eg. make offers for ordinary users to install its software and earn funds with their desktops), and then the penultimate step consists of being able to discover and enter new industries (the ultimate step, of course, is generalizing completely into a full AI).
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
One purpose in allowing a user-selected non-verify and non-forwarding size is to limit the damage a rogue miner could do by creating a monster block.
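A minimal sketch of such a policy, with made-up names and an illustrative limit (not BU's actual code): blocks above the operator's chosen size are neither verified nor forwarded, so a monster block can cost the node at most one download.

Code:
# Hypothetical local relay policy; the 16 MB figure is illustrative only.
EXCESSIVE_BLOCK_SIZE = 16_000_000  # bytes, chosen by the node operator

def accept_for_relay(block_size: int) -> bool:
    """Skip verifying and forwarding oversize blocks, bounding the
    damage a rogue miner's monster block can do to this node."""
    return block_size <= EXCESSIVE_BLOCK_SIZE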
 

Justus Ranvier

Active Member
Aug 28, 2015
875
3,746
So a guy on Reddit, commenting on a thread about an image that displays a measured exponential increase in block size, says: "There's no reason to believe the block size is increasing exponentially."

That is some Baghdad Bob level of shameless mind control propaganda being attempted right there.

Or maybe this:

 

rocks

Active Member
Sep 24, 2015
586
2,284
One purpose in allowing a user-selected non-verify and non-forwarding size is to limit the damage a rogue miner could do by creating a monster block.
A monster block would most likely be orphaned; it would take too long to propagate, and if the other miners are smart they wouldn't build on it anyway.
 

theZerg

Moderator
Staff member
Aug 28, 2015
1,012
2,327
But how are they going to be "smart" without hacking the code? The options proposed in BU are exactly what is needed to allow miners (and others) to make that decision.
 

rocks

Active Member
Sep 24, 2015
586
2,284
@theZerg

Agreed, there is no good method for them to be smart today except to update their code. And that is not a reliable deterrent to a monster block anyway.

I still think it is bad if some level of uncertainty is introduced for miners on whether or not the network will accept a block. A configurable limit creates uncertainty around block creation.

The only uncertainty today relates to the increased orphan-rate probability of larger blocks, but this can be calculated from known statistics, making it more of a calculated decision. A p2p network where nodes set their own limits is not visible to miners and creates uncertainty.
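For reference, the usual first-order model behind that calculation (an assumption of this sketch, not something stated in the thread): with Poisson block arrivals averaging one per 600 seconds, a block that takes an extra tau seconds to propagate is orphaned with probability roughly 1 - e^(-tau/600).

Code:
import math

BLOCK_INTERVAL = 600.0  # seconds, expected time between blocks

def orphan_probability(extra_delay_s: float) -> float:
    """Chance a competing block is found during the extra propagation
    delay, assuming Poisson block arrivals."""
    return 1.0 - math.exp(-extra_delay_s / BLOCK_INTERVAL)

print(f"{orphan_probability(30.0):.1%}")  # a 30 s slower block: ~4.9%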

Mining is a business, and businesses are supposed to minimize risk by minimizing uncertainty. If they are uncertain, they will mine smaller blocks to be safe.

Maybe if there were a mechanism for nodes to announce the limit they are using, that would reduce uncertainty. Miners could constantly poll the network for node limits and then make educated decisions based on that information.
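A rough sketch of how a miner might use such announcements, assuming a hypothetical poll of peer limits (no such announcement message exists today): pick the largest size that at least, say, 90% of polled nodes would still relay.

Code:
import math

def safe_block_size(peer_limits: list[int], coverage: float = 0.9) -> int:
    """Largest block size still accepted by at least `coverage` of the
    polled peers: the k-th largest announced limit."""
    limits = sorted(peer_limits, reverse=True)
    k = math.ceil(len(limits) * coverage)  # peers that must accept
    return limits[k - 1]

# Three peers at 2 MB+ but one 1 MB holdout caps a 90%-coverage miner:
print(safe_block_size([1_000_000, 2_000_000, 8_000_000, 2_000_000]))  # 1000000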
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@rocks

All miners have to do is monitor the average block size of, say, the last 100 blocks, or whatever number they feel comfortable with, and then mine at that size.
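A minimal sketch of that heuristic (window size and names are illustrative): keep a rolling window of recent block sizes and target their mean.

Code:
from collections import deque

WINDOW = 100  # blocks -- "whatever number they feel comfortable with"
recent_sizes: deque[int] = deque(maxlen=WINDOW)

def on_block(size: int) -> None:
    recent_sizes.append(size)

def target_block_size() -> int:
    """Mine at (or just under) the observed network average."""
    return sum(recent_sizes) // max(1, len(recent_sizes))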

@theZerg

I think leaving out excessive blocks would be better. It has to do with respecting the right of a node to choose what block size it wants. If it wants to participate thereafter in a bigger-block chain, it can change its setting upwards.

I'd also consider leaving out even traffic shaping or anything else, as you don't want to give anyone excuses not to run your code. Just my opinion.
 

rocks

Active Member
Sep 24, 2015
586
2,284
@cypherdoc
Then how do miners increase block size in a manner where they have strong confidence that a new, larger block will be propagated? The average of the past 100 blocks provides no information on whether even slightly larger blocks will be transmitted.

It seems the only real method for miners to test the network is to try larger sizes, but this runs the risk of being orphaned, so they are financially motivated to keep to smaller, known-OK sizes.
 

cypherdoc

Well-Known Member
Aug 26, 2015
5,257
12,998
@cypherdoc
Then how do miners increase block size in a manner where they have strong confidence that a new, larger block will be propagated? The average of the past 100 blocks provides no information on whether even slightly larger blocks will be transmitted.

It seems the only real method for miners to test the network is to try larger sizes, but this runs the risk of being orphaned, so they are financially motivated to keep to smaller, known-OK sizes.
I go into that here: https://bitco.in/forum/threads/gold-collapsing-bitcoin-up.16/page-66#post-2431

There will be a distribution of block sizes around an average. Risk-taking pools, like new or small ones, can mine blocks slightly above the average but within the bounds of the distribution, which will gradually push up the average and the distribution.
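A toy simulation of that dynamic, under assumed numbers (roughly normal sizes; ~10% of blocks mined one standard deviation above the trailing mean): the average drifts upward without any single outsized jump.

Code:
import random, statistics

sizes = [random.gauss(1_000_000, 100_000) for _ in range(100)]
for _ in range(1000):
    window = sizes[-100:]
    mu, sigma = statistics.mean(window), statistics.stdev(window)
    if random.random() < 0.1:          # risk-taking pool
        sizes.append(mu + sigma)       # slightly above average
    else:
        sizes.append(random.gauss(mu, sigma))

print(f"start ~{statistics.mean(sizes[:100]):,.0f} B, "
      f"end ~{statistics.mean(sizes[-100:]):,.0f} B")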
 

solex

Moderator
Staff member
Aug 22, 2015
1,558
4,695
@Peter R
Indeed. Just done so.
I am disappointed to see Rusty displaying some uncharacteristically woolly thinking, i.e. that 90% full-to-the-max blocks can be seen for 7 days. Many miners produce small or empty blocks regardless of the number of tx in the mempool. This will be the case until the block reward falls below block fees.

He may have missed the "brownout" color which starts at 75% and is beyond red, when legitimate user tx are left piling up.

Smart IT people do not let market-facing systems run at 60% of capacity if they can avoid it. That is a "red" level for action. The difference here is that most Core devs still want the 1MB limit to squeeze out "frivolous" tx, and are hoping for the settlement-layer dream which fits the BS/LN paradigm.

PS. I admire your energy in hammering away at the unwashed masses of the 1MB troll army (picture Thorin wading into the Orcs). However, I think the real way forward is Bitcoin XT pressing on and, even better, Bitcoin Unlimited, with more big-block-tolerant software rolled out.
 