bitsko
Active Member
- Aug 31, 2015
- 730
- 1,532
Based on that BSV roadmap I'd like to call for BU devs to help implement parallel block validation on BSV.
What do the BU members think?
Renaissance of the Genesis ...

New road map out https://bitcoinsv.io/2019/04/17/the-roadmap-to-genesis-part-1/
"We have previously signalled an intent to raise the cap to 512MB however after consultation with the Bitcoin Association (the owner of the Bitcoin SV project) and miners representing a significant majority of hash rate it has been decided that the Bitcoin SV software will implement a default of 2GB in July. "
haters r gunna pop soon.
You make an excellent point, except that it is backwards:

"@Zangelbert Bingledack If we want to be entirely objective, no one here or on reddit seemed to mind massive ad hominem arguments against Bitcoin Core team members, but somehow Craig Wright deserves unbiased treatment now?"
"This is the man who created Bitcoin"

You were doing well until you got to this part.
It was pointed out earlier that pruning for full nodes is available. This allows a node that has already caught up to the head of the network to operate from a UTXO set that it has verified and is keeping track of.
However, if a new node is trying to join the network, can it catch up to the current block and function if the only blocks available were pruned? My understanding is that today the answer is no.
Note, this isn't just for transaction data. It's possible with really large blocks that most mining and full nodes will drop large OP_RETURN data for cost purposes, or simply because they don't want to pay for the network traffic.
In that case new nodes need to be able to join the network using some form of Merkle blocks, with a mix of available transactions and Merkle-tree hashes where transactions are not available.
Until client software can handle this, it does seem risky to enable very large datasets. 2GB blocks every 10 minutes works out to roughly 100TB a year; I know large-scale systems, and there is a non-trivial cost to not just storing, but processing and serving that volume of data. It will be pruned by online nodes, which means new nodes need to be able to handle a pruned history.
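To make the Merkle-block idea concrete: a node syncing against a pruned peer would verify an individual transaction against a block header using a Merkle proof, without needing the full block. A minimal sketch, assuming hypothetical helper names (this is not BU or BSV code):

```python
import hashlib

def sha256d(data: bytes) -> bytes:
    """Bitcoin's double SHA-256."""
    return hashlib.sha256(hashlib.sha256(data).digest()).digest()

def verify_merkle_proof(tx_hash: bytes, proof, merkle_root: bytes) -> bool:
    """Walk from the leaf up to the root.

    proof is a list of (sibling_hash, sibling_is_right) pairs, one per
    tree level; the verifier only needs these hashes plus the header's
    Merkle root, not the block's transaction data.
    """
    h = tx_hash
    for sibling, sibling_is_right in proof:
        h = sha256d(h + sibling) if sibling_is_right else sha256d(sibling + h)
    return h == merkle_root
```

A pruned node serving such proofs only has to keep the header chain and whatever transactions it chose to retain, which is exactly the mix of data and hashes described above.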
"there is a non-trivial cost to not just storing, but processing and serving that volume of data."

Yes, it's for big businesses. Not for hobbyists.
"You were doing well until you got to this part"

Still don't get it. Ok, you'll get it later.
"@Zangelbert Bingledack If we want to be entirely objective, no one here or on reddit seemed to mind massive ad hominem arguments against Bitcoin Core team members, but somehow Craig Wright deserves unbiased treatment now?"

I remember this thread discussing far more about core ideas with blocksize than slandering core devs. Remember the endless discussions with Jonny?
Be removed.
"But the blog does mention."

It used to just exit the script and evaluate pass or fail based on what was on the stack, as any other script would. That allowed you to use OP_1 OP_RETURN in the scriptSig, which will always allow you to spend no matter what is in the scriptPubKey. IIRC it was Gavin and Satoshi who changed it to make OP_RETURN always fail the script, as a quick bug fix. One solution is to make it illegal in the scriptSig, but at the time the code didn't treat the scriptSig and scriptPubKey as separate invocations of the script engine. It concatenated them with an OP_CODESEPARATOR in between and ran both scripts in a single invocation, so the proper fix would have been more complex and error-prone.
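To illustrate the difference, here is a toy stack machine (an illustrative sketch, not the real interpreter) contrasting the original early-exit OP_RETURN with the always-fail quick fix:

```python
def run_script(ops, original_op_return=True):
    """Toy evaluator: ints are data pushes; 'OP_RETURN' behaves per the flag.

    original_op_return=True  -> early exit, verdict taken from the stack
    original_op_return=False -> the quick-fix rule: OP_RETURN always fails
    """
    stack = []
    for op in ops:
        if op == 'OP_RETURN':
            if original_op_return:
                break        # original semantics: stop and judge the stack
            return False     # quick-fix semantics: unconditional failure
        stack.append(op)     # everything else in this toy is a data push
    return bool(stack) and stack[-1] != 0

# scriptSig = OP_1 OP_RETURN, concatenated ahead of any scriptPubKey; under
# the original semantics the scriptPubKey part is never even reached.
combined = [1, 'OP_RETURN', 99, 0]  # trailing ops stand in for a scriptPubKey
```

Under the original rules `run_script(combined)` succeeds with a 1 on the stack regardless of what follows the OP_RETURN, which is exactly the anyone-can-spend problem; with the quick fix the same script fails unconditionally.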
Perhaps before my time, but I don't remember any changes. Does anyone know what's different from the old functionality?
- Restoration of original OP_RETURN functionality (with a fix for the OP_1 OP_RETURN vulnerability)