Checksig and the value of bitcoin - Fintech District

For devs and advanced users who are still in the dark: Read this to get redpilled about why Bitcoin (SV) is the real Bitcoin

This post by cryptorebel is a great intro for newbies. Here is a continuation for a technical audience. I'll be making edits for readability and maybe even add more content.
The short explanation of why BSV is the real Bitcoin is that it implements the original L1 scripting language, and removes hacks like p2sh. It also removes the block size limit, and yes that leads to a small number of huge nodes. It might not be the system you wanted. Nodes are miners.
The key thing to understand about the UTXO architecture is that it is maximally "sharded" by default. Logically dependent transactions may require linear span to construct, but they can be validated in sublinear span (actually polylogarithmic expected span). Constructing dependent transactions happens out-of-band in any case.
The fact that transactions in a block are merkelized is an obvious sign that Bitcoin was designed for big blocks. But merkle trees are only half the story. UTXOs are essentially hash-addressed stateful continuation snapshots which can also be "merged" (validated) in a tree.
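To make the parallelism concrete, here is a minimal Python sketch of Bitcoin-style merkle root computation (illustrative, not consensus-exact; real implementations also care about txid byte order). Every pair at every level is independent of every other pair, which is exactly why validation span collapses to logarithmic once you throw workers at it:

    import hashlib

    def dsha256(b: bytes) -> bytes:
        return hashlib.sha256(hashlib.sha256(b).digest()).digest()

    def merkle_root(txids: list) -> bytes:
        level = list(txids)
        while len(level) > 1:
            if len(level) % 2:
                level.append(level[-1])  # Bitcoin duplicates the odd leaf out
            # every pair is independent of every other pair, so each level
            # can be hashed entirely in parallel: O(log n) span with workers
            level = [dsha256(level[i] + level[i + 1])
                     for i in range(0, len(level), 2)]
        return level[0]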
I won't even bother talking about how broken Lightning Network is. Of all the L2 scaling solutions that could have been used with small block sizes, it's almost unbelievable how many bad choices they've made. We should be kind to them and assume it was deliberate sabotage rather than insulting their intelligence.
Segwit is also outside the scope of this post.
However I will briefly hate on p2sh. Imagine seeing a stunted L1 script language, and deciding that the best way to implement multisigs was a soft-fork patch in the form of p2sh. If the intent was truly backwards-compatibility with old clients, then by that logic all segwit and p2sh addresses are supposed to only be protected by transient rules outside of the protocol. Explain that to your custody clients.
As far as Bitcoin Cash goes, I was in the camp of "there's still time to save BCH" until not too long ago. Unfortunately the galaxy brains behind BCH have doubled down on their mistakes. Again, it is kinder to assume deliberate sabotage. (As an aside, the fact that they didn't embrace the name "bcash" when it was used to attack them shows how unprepared they are when the real psyops start to hit. Or, again, that the saboteurs controlled the entire back-and-forth.)
The one useful thing that came out of BCH is some progress on L1 apps based on covenants, but the issue is that they are not taking care to ensure every change maintains the asymptotic validation complexity of bitcoin's UTXO.
Besides that, the BCH devs missed something big. So did I.
It's possible to load the entire transaction onto the stack without adding any new opcodes. Read this post for a quick intro on how transaction meta-evaluation leads to stateful smart contract capabilities. Note that it was written before I understood how it was possible in Bitcoin, but the concept is the same. I've switched to developing a language that abstracts this behavior and compiles to bitcoin's L1. (Please don't "told you so" at me if you just blindly trusted nChain but still can't explain how it's done.)
It is true that this does not allow exactly the same class of L1 applications as Ethereum. It only allows those that can be made parallel, those that can delegate synchronization to "userspace". It forces you to be scalable, to process bottlenecks out-of-band at a per-application level.
Now, some of the more diehard supporters might say that Satoshi knew this was possible and meant for it to be this way, but honestly I don't believe that. nChain says they discovered the technique 'several years ago'. OP_PUSH_TX would have been a very simple opcode to include, and it does not change any aspect of validation in any way. The entire transaction is already in the L1 evaluation context for the purpose of checksig, it truly changes nothing.
But here's the thing: it doesn't matter if this was a happy accident. What matters is that it works. It is far more important to keep the continuity of the original protocol spec than to keep making optimizations at the protocol level. In a concatenative language like bitcoin script, optimized clients can recognize "checksig trick phrases" regardless of their location in the script, and treat them like a simple opcode. Script size is not a constraint when you allow the protocol to scale as designed. Think of it as precompiles in EVM.
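To ground the "checksig trick" a bit, here is a rough Python sketch of the object it manipulates. The field layout follows the BIP143-style sighash algorithm that BCH/BSV use; this is my illustration of the idea, not nChain's actual construction, and the helper names are made up:

    def sighash_preimage(n_version, hash_prevouts, hash_sequence, outpoint,
                         script_code, value, n_sequence, hash_outputs,
                         n_locktime, sighash_type):
        # The exact byte string that OP_CHECKSIG signs. A script that forces
        # the spender to push this preimage and then verifies a signature
        # over it gets read access to every field of the spending transaction.
        return (n_version + hash_prevouts + hash_sequence + outpoint +
                script_code + value + n_sequence + hash_outputs +
                n_locktime + sighash_type)

    def covenant_hash_outputs(preimage: bytes) -> bytes:
        # nLocktime (4 bytes) and the sighash type (4 bytes) always sit at
        # the end, so hashOutputs is preimage[-40:-8]: a covenant that pins
        # these 32 bytes commits the spender to an exact output set, which
        # is where the "stateful smart contract" capability comes from.
        return preimage[-40:-8]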
Now let's address Ethereum. V. Buterin recently wrote a great piece about the concept of credible neutrality. The only way for a blockchain system to achieve credible neutrality and long-term decentralization of power is to lock down the protocol rules. The thing that caused Ethereum to succeed was the yellow paper. Ethereum has outperformed every other smart contract platform because the EVM has clear semantics with many implementations, so people can invest time and resources into applications built on it. The EVM is apolitical, the EVM spec (fixed at any particular version) is truly decentralized. Team Ethereum can plausibly maintain credibility and neutrality as long as they make progress towards the "Serenity" vision they outlined years ago. Unfortunately they have already placed themselves in a precarious position by picking and choosing which catastrophes they intervene on at the protocol level.
But those are social and political issues. The major technical issue facing the EVM is that it is inherently sequential. It does not have the key property that transactions that occur "later" in the block can be validated before the transactions they depend on are validated. Sharding will hit a wall faster than you can say "O(n/64) is O(n)". Ethereum will get a lot of mileage out of L2, but the fundamental overhead of synchronization in L1 will never go away. The best case scaling scenario for ETH is an L2 system with sublinear validation properties like UTXO. If the economic activity on that L2 system grows larger than that of the L1 chain, the system loses key security properties. Ethereum is sequential by default with parallelism enabled by L2, while Bitcoin is parallel by default with synchronization forced into L2.
Finally, what about CSW? I expect soon we will see a lot of people shouting, "it doesn't matter who Satoshi is!", and they're right. The blockchain doesn't care if CSW is Satoshi or not. It really seems like many people's mental model is "Bitcoin (BSV) scales and has smart contracts if CSW==Satoshi". Sorry, but UTXO scales either way. The checksig trick works either way.
Coin Woke.
submitted by -mr-word- to bitcoincashSV

Technical: The `SIGHASH_NOINPUT` Debate! Chaperones and output tagging and signature replay oh my!

Bitcoin price isn't moving oh no!!! You know WHAT ELSE isn't moving?? SIGHASH_NOINPUT that's what!!!
Now as you should already know, Decker-Russell-Osuntokun ("eltoo") just ain't possible without SIGHASH_NOINPUT of some kind or other. And Decker-Russell-Osuntokun removes the toxic waste problem (i.e. old backups of your Poon-Dryja LN channels are actively dangerous and could lose your funds if you recover from them, or worse, your most hated enemy could acquire copies of your old state and make you lose funds). Decker-Russell-Osuntokun also allows multiparticipant offchain cryptocurrency update systems, without the drawback of a large unilateral close timeout that Decker-Wattenhofer does, making this construction better for use at the channel factory layer.
Now cdecker already wrote some code implementing SIGHASH_NOINPUT before, which would make it work in current pre-SegWit P2PKH, P2SH, as well as SegWit v0 P2WPKH and P2WSH. He also made and published BIP 118.
But as is usual for Bitcoin Core development, this triggered debate, and thus many counterproposals were made and so on. Suffice it to say that the simple BIP 118 looks like it won't be coming into Bitcoin Core anytime soon (or possibly at all).
First things first: This link contains all that you need to know, but hey, maybe you'll find my take more amusing.
So let's start with the main issue.

Signature Replay Attack

This is the Signature Replay Attack: because a SIGHASH_NOINPUT signature does not commit to the specific coin (outpoint) being spent, the same signature can be replayed to spend any other coin of the same value protected by the same key. It is the reason why SIGHASH_NOINPUT has triggered debate as to whether it is safe at all and whether we can add enough stuff to it to ever make it safe.
Now of course you could point to SIGHASH_NONE, which is even worse because all it does is say "I am authorizing the spend of this particular coin of this particular value protected by my key" without any further restrictions, like which outputs it goes to. But SIGHASH_NONE is intended to be used to sacrifice your money to the miners: for example, if it's a dust attack trying to get you to spend, you broadcast a SIGHASH_NONE signature, and some enterprising miner will go get a bunch of such SIGHASH_NONE signatures and gather up the dust in a transaction that pays to nobody and gets all the funds as fees. And besides, even if we already have something you could do stupid things with, that's not a justification for adding more things you could do stupid things with.
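To make the miner's side of that concrete, a toy sketch in Python (the data shapes are made up). One wrinkle worth hedging: combining independently signed inputs into one transaction like this strictly requires SIGHASH_NONE|ANYONECANPAY, since plain SIGHASH_NONE still commits to the full input list:

    def sweep_dust(none_signed_utxos):
        # Each entry is assumed to carry a SIGHASH_NONE|ANYONECANPAY signature.
        total = sum(u["value"] for u in none_signed_utxos)
        return {
            "inputs":  [{"outpoint": u["outpoint"], "scriptSig": u["sig"]}
                        for u in none_signed_utxos],
            "outputs": [],     # pays to nobody...
            "fee":     total,  # ...so the whole amount goes to whoever mines it
        }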
So yes, SIGHASH_NOINPUT makes Bitcoin more powerful. Now, Bitcoin is a strong believer in "Principle of Least Power". So adding more power to Bitcoin via SIGHASH_NOINPUT is a violation of Principle of Least Power, at least to those arguing to add even more limits to SIGHASH_NOINPUT.
I believe nullc is one of those who strongly urges for adding more limits to SIGHASH_NOINPUT, because it distracts him from taking pictures of his autonomous non-human neighbor, a rather handsome gray fox, but also because it could be used as the excuse for the next MtGox, where a large exchange inadvertently pays to SIGHASH_NOINPUT-using addresses and becomes liable/loses track of their funds when signature replay happens.

Output Tagging

Making SIGHASH_NOINPUT safer by not allowing normal addresses to use it.
Basically, we have 32 different SegWit versions. The current SegWit addresses are v0, the next version (v1) is likely to be the Schnorr+Taproot+MAST thing.
What output tagging proposes is to limit SegWit versions to the range 0->15 in the bech32 address scheme (instead of the 0->31 it currently has). Versions 16 to 31 would then not be valid bech32 SegWit addresses, and exchanges shouldn't pay to them.
Then, we allow the use of SIGHASH_NOINPUT only for version 16. Version 16 might very well be Schnorr+Taproot+MAST, with a side serving of SIGHASH_NOINPUT.
This is basically output tagging. SIGHASH_NOINPUT can only be used if the output is tagged (by paying to version 16 SegWit) to allow it, and addresses do not allow outputs to be tagged as such, removing the potential liability of large custodial services like exchanges.
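A sketch of what the rule could look like at the script level, assuming the usual segwit scriptPubKey encoding (OP_0 or OP_1..OP_16 followed by a push of the witness program); the proposal's exact consensus rules were never finalized, so treat this as illustrative:

    def witness_version(script_pubkey: bytes):
        # Return the segwit version of a native witness program, else None.
        if len(script_pubkey) < 4:
            return None
        op = script_pubkey[0]
        if op == 0x00:
            return 0
        if 0x51 <= op <= 0x60:            # OP_1 .. OP_16
            return op - 0x50
        return None

    def address_encodable(script_pubkey: bytes) -> bool:
        v = witness_version(script_pubkey)
        return v is not None and v <= 15  # no bech32 address for v16+

    def noinput_allowed(spent_script_pubkey: bytes) -> bool:
        return witness_version(spent_script_pubkey) == 16  # tagged outputs only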
Now, Decker-Russell-Osuntokun channels have two options:
The tradeoffs in this case are:
The latter tradeoff is probably what would be taken (because we're willing to pay for privacy) if Bitcoin Core decides in favor of tagged outputs.
Another issue here is --- oops, P2SH-Segwit wrapped addresses. P2SH can be used to wrap any SegWit payment script, including payments to any SegWit version, including v16. So now you can sneak in a SIGHASH_NOINPUT-enabled SegWit v16 inside an ordinary P2SH that wraps a SegWit payment. One easy way to close this is just to disallow P2SH-SegWit from being valid if it's spending to SegWit version >= 16.

Chaperone Signatures

Closing the Signature Replay Attack by adding a chaperone.
Now we can observe that the Signature Replay Attack is possible because only one signature is needed, and that signature allows any coin of appropriate value to be spent.
Adding a chaperone signature simply means requiring that the SCRIPT involved have at least two OP_CHECKSIG operations. If one signature is SIGHASH_NOINPUT, then at least one other signature (the chaperone) validated by the SCRIPT should be SIGHASH_ALL.
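A minimal sketch of such a SCRIPT, assembled in Python (the opcode byte values are the standard ones; the overall pattern is my illustration rather than a ratified spec):

    OP_CHECKSIGVERIFY = bytes([0xad])
    OP_CHECKSIG       = bytes([0xac])

    def push(data: bytes) -> bytes:
        assert len(data) < 76          # a direct push covers 33-byte pubkeys
        return bytes([len(data)]) + data

    def chaperone_script(noinput_pubkey: bytes, chaperone_pubkey: bytes) -> bytes:
        # <P_noinput> CHECKSIGVERIFY <P_chaperone> CHECKSIG
        # The first signature may be SIGHASH_NOINPUT; the second (the
        # chaperone) must be SIGHASH_ALL, pinning the spend to one coin.
        return (push(noinput_pubkey) + OP_CHECKSIGVERIFY +
                push(chaperone_pubkey) + OP_CHECKSIG)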
This is not so onerous for Decker-Russell-Osuntokun. Both sides can use a MuSig of their keys, to be used for the SIGHASH_NOINPUT signature (so requires both of them to agree on a particular update), then use a shared ECDH key, to be used for the SIGHASH_ALL signature (allows either of them to publish the unilateral close once the update has been agreed upon).
Of course, the simplest thing to do would be for a BOLT spec to say "just use this spec-defined private key k so we can sidestep the Chaperone Signatures thing". That removes the need to coordinate to define a shared ECDH key during channel establishment: just use the spec-indicated key, which is shared to all LN implementations.
But now look at what we've done! We've subverted the supposed solution of Chaperone Signatures, making them effectively not there, because it's just much easier for everyone to use a standard private key for the chaperone signature than to derive a separate new keypair for the Chaperone.
So chaperone signatures aren't much better than just doing SIGHASH_NOINPUT by itself, and you might as well just use SIGHASH_NOINPUT without adding chaperones.
I believe ajtowns is the primary proponent of this proposal.

Toys for the Big Boys

The Signature Replay Attack is Not A Problem (TM).
This position is most strongly held by RustyReddit I believe (he's the Rusty Russell in the Decker-Russell-Osuntokun). As I understand it, he is more willing to not see SIGHASH_NOINPUT enabled, than to have it enabled but with restrictions like Output Tagging or Chaperone Signatures.
Basically, the idea is: don't use SIGHASH_NOINPUT for normal wallets, in much the same way you don't use SIGHASH_NONE for normal wallets. If you want to do address reuse, don't use wallet software made by luke-jr that specifically screws with your ability to do address reuse.
SIGHASH_NOINPUT is a flag for use by responsible, mutually-consenting adults who want to settle down some satoshis and form a channel together. It is not something that immature youngsters should be playing around with, not until they find a channel counterparty that will treat this responsibility properly. And if those immature youngsters playing with their SIGHASH_NOINPUT flags get into trouble and, you know, lose their funds (as fooling around with SIGHASH_NOINPUT is wont to do), well, they need counseling and advice ("not your keys not your coins", "hodl", "SIGHASH_NOINPUT is not a toy, but something special, reserved for those willing to take on the responsibility of making channels according to the words of Decker-Russell-Osuntokun"...).

Conclusion

Dunno yet. It's still being debated! So yeah. SIGHASH_NOINPUT isn't moving, just like Bitcoin's price!!! YAAAAAAAAAAAAAAAAAAA.
submitted by almkglor to Bitcoin

Upcoming Updates to Bitcoin Consensus

Price and Libra posts are shit boring, so let's focus on a technical topic for a change.
Let me start by presenting a few of the upcoming Bitcoin consensus changes.
(as these are consensus changes and not P2P changes it does not include erlay or dandelion)
Let's hope the community strongly supports these upcoming updates!

Schnorr

The sexy new signing algo.

Advantages

Disadvantages

MuSig

A provably-secure way for a group of n participants to form an aggregate pubkey and signature. Creating their group pubkey does not require their coordination other than getting individual pubkeys from each participant, but creating their signature does require all participants to be online near-simultaneously.
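Here's a minimal sketch of the key-aggregation step, with point_add/point_mul standing in for real secp256k1 group operations (assumed helpers, not a real library API). The per-key coefficients derived from a hash of the whole key set are what defeat rogue-key attacks:

    import hashlib

    def h_int(*parts: bytes) -> int:
        h = hashlib.sha256()
        for p in parts:
            h.update(p)
        return int.from_bytes(h.digest(), "big")

    def musig_aggregate(pubkeys, point_add, point_mul):
        # Commitment to the whole key set; every coefficient depends on it,
        # so no participant can pick a key that cancels out the others.
        ell = hashlib.sha256(b"".join(sorted(pubkeys))).digest()
        agg = None
        for P in pubkeys:
            a_i = h_int(ell, P)                  # per-key coefficient
            term = point_mul(P, a_i)
            agg = term if agg is None else point_add(agg, term)
        return agg                               # group key: sum of a_i * P_i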

Advantages

Disadvantages

Taproot

Hiding a Bitcoin SCRIPT inside a pubkey, letting you sign with the pubkey without revealing the SCRIPT, or reveal the SCRIPT without signing with the pubkey.
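A sketch of the core construction (point_add and g_mul are stand-ins for real secp256k1 operations, and real proposals use tagged hashes; illustrative only):

    import hashlib

    def taproot_tweak(P: bytes, script: bytes) -> int:
        # t = H(P || script): the output key commits to both at once
        return int.from_bytes(hashlib.sha256(P + script).digest(), "big")

    def taproot_output_key(P: bytes, script: bytes, point_add, g_mul):
        t = taproot_tweak(P, script)
        return point_add(P, g_mul(t))    # Q = P + t*G

    # Key-path spend: sign with (p + t) where P = p*G; the script stays hidden.
    # Script-path spend: reveal P and the script; anyone can recompute Q.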

Advantages

Disadvantages

MAST

Encode each possible branch of a Bitcoin contract separately, and only require revelation of the exact branch taken, without revealing any of the other branches. One of the Taproot script versions will be used to denote a MAST construction. If the contract has only one branch then MAST does not add more overhead.
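A sketch of the verification side, with simplified hashing (real proposals use tagged hashes and stricter ordering rules):

    import hashlib

    def h(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def mast_verify(root: bytes, leaf_script: bytes,
                    proof: list, leaf_is_left: list) -> bool:
        node = h(leaf_script)
        for sibling, left in zip(proof, leaf_is_left):
            node = h(node + sibling) if left else h(sibling + node)
        return node == root   # the untaken branches stay hidden behind hashes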

Advantages

Disadvantages

submitted by almkglor to Bitcoin

Crypto Telegram Groups

Cryptocurrencies:
@AelfBlockchain - ELF
@Aeternity - AE
@ArdorPlatform - ARDR
@ArkEcosystem - ARK
@AugurProject - REP
@BATProject - BAT
@BeamPrivacy - BEAM
@LetsLiveBela - BELA
@BitbayOfficial - BCN
@Bitcoin - BTC
@BitcoinCore - BTC
@BitcoinCashFork - BCH
@BitcoinGoldHQ - BTG
@Bitshares_Community - BTS
@BSVChat - BSV
@BTTBitTorrent - BTT
@BytecoinChat - BCN
@BytomInternational - BTM
@CallistoNet - CLO
@CardanoGeneral - ADA
@CentralityOfficialTelegram - CENNZ
@CloakProject - CLOAK
@ChainLinkOfficial - LINK
@CosmosProject - ATOM
@Counterparty_XCP - XCP
@CryptoComOfficial - MCO
@CyberMilesToken - CMT
@Dash_Chat - DASH
@Decred - DCR
@Dfinity - DFN
@DigiBytecoin - DGB
@DigixDAO - DGD
@TheDogeHouse - DOGE
@Electracoin - ECA
@Emercoin_Official - EMC
@EnigmaProject - ENG
@EOSProject - EOS
@EthClassic - ETC
@Ether - ETH
@EUNOofficial - EUNO
@Everipedia - IQ
@FactomFCT - FCT
@Filecoin - FIL
@GnosisPM - GNO
@Grincoin - GRIN
@Groestl - GRS
@Hyperledger
@IOTAtangle - IOTA
@KomodoPlatform_Official - KMD
@KyberNetwork - KNC
@LAToken - LA
@Litecoin - LTC
@MaidSafeCoin - MAID
@MakerDAOOfficial - MKR
@Monero - XMR
@Namecoin - NMC
@Navcoin - NAV
@Nemred - XEM
@Neo_Blockchain - NEO
@NervaXNV - XNV
@Nimiq - NIM
@NxtCommunity - NXT
@OmiseGo - OMG
@OmniLayer - OMNI
@OntologyNetwork - ONT
@Peercoin - PPC
@PolymathNetwork - POLY
@QtumOfficial - QTUM
@RavencoinDev - RVN
@Ripple - XRP
@RSKOfficialCommunity - RIF
@Siacoin - SIA
@SirinLabs - SRN
@Sonm_Eng - SNM
@StellarLumens - XLM
@StratisPlatform - STRAT
@TezosPlatform - XTZ
@TronNetworkEN - TRX
@UnobtaniumUNO - UNO
@Vechain_Official_English - VET
@VertcoinCrypto - VTC
@Viacoin - VIA
@ViberateOfficial - VIB
@VSYSOfficialGroup - VSYS
@WavesCommunity - WAVES
@ZB_English - ZB
@ZCashco - ZEC
@ZClassicCoin - ZCL
@ZCoinProject - XZC

Crypto Communities:
@Aetrader - EN
@Allemaalrijk - NL
@Altcoins - EN
@ArgenPool - ES
@AussieCrypto - EN
@UKBitcoin - EN
@Binarydotcom - EN
@BitcoinChat - EN
@BitcoinInvestimento - PT
@BitNovosti - RU
@BitUniverse - EN
@BlockhcainMinersGroup - EN
@BsodPool - RU
@BTCFinland - EN
@BTChat - RU
@BullBearr - EN
@CoinFarm - EN
@CoinGecko - EN
@CoinMarketCap - EN
@CoinPaprika - EN
@CrypticIndia - EN
@Crypto_CN - ZH
@Crypto_ON - RU
@CryptoAdvisorOfficial - EN
@CryptoAquarium - EN
@CryptoBeats - EN
@CryptoBoerderij - NL
@CryptoCharity - EN
@CryptoCoinClub - EN
@CryptoExpo_Moscow - RU
@CryptoGene - EN
@CryptoGifs - EN
@CryptoGurusOfficial - EN
@CryptoInsidersLobby
@CryptoHispanonet - ES
@CryptoMining - EN
@CryptoMondayDE - DE
@CryptoOnMining - RU
@CryptoRomania - RO
@CryptoTipsChatFR - EN
@CrypVision - DE
@ElijaBoomC - EN
@FanaticosCriptos - PT
@ICOCountdown - EN
@ILoveNina - EN
@IndiaBits - EN
@Kampungkoin - EN
@KriptoTurkiye - TR
@KryptoCoinsDE - DE
@KryptoDETrading - DE
@KryptoVerSteuerung - DE
@MinerSpeak - EN
@MiningBazar - RU
@MMCryptoENG - EN
@RepublicCrypto - EN
@SideChains - EN
@SmartContracts - EN
@SportsBet - EN
@StrapeCharts - EN
@TamilBTC - EN
@TheCoinFarm - EN
@TheCryptoMob - EN
@TokenMarket - EN
@TrezorTalk - EN
@Trollbox - EN
@WCSETalks - EN
@WhaleClub - EN (Invite Only)
@WhaleClubClassRoom - EN
@WhalePoolBTC - EN
@WhaleTankChat - EN
@XMRMine - EN
Crypto News Channels:
@AltCoin - EN
@Avalbit - EN
@Bit_Novosti - RU
@BitcoinBravado - EN
@BitcoinChannel - EN
@BitcoinExchangeGuide - EN
@BitcoinMagazinebot - EN
@BitOracle - RU
@BitRu - RU
@CDiamonds - EN
@Coin_Analyse - DE
@CoinCentral - EN
@CoinDesk - EN
@CoinGape - EN
@CoinNewsChannel - EN
@CoinNewsDE - DE
@CoinTelegraph - EN
@Cointified - EN
@Cripto247 - ES
@CriptoNoticias - ES
@Crypto_News_Channel - EN
@CryptoAlerts - EN
@CryptoAMB - EN
@CryptoAsiaNews - ZH
@CryptoChan - RU
@CryptoClubAlerts - EN
@CryptoCurrency - EN
@CryptoExplorerChannel - EN
@CryptoMartez - EN
@CryptoNinja_News - EN
@CryptoNyka - RU
@CryptoRankNews - EN
@CryptoSentinel - EN
@CryptoSeson2020 - EN
@CryptoSlateNews - EN
@CryptoSnippets - EN
@DecentralBox - DE
@ForkLog - RU
@JingBao - ZH
@Krepta_News - RU
@Krypto_Deutschland - DE
@KryptoNachrichten - DE
@NewCryptoJournal - EN
@MGonCrypto - EN
@OneMinuteLetter - EN
@RichardsCalls - EN
@SmartLiquidNews - EN
@TheBCJ - RU
@TONorg - EN
@Unfolded - EN
@WhalebotAlerts - EN
@Xblockchain - FA

Trading Analysis:
@Altchica - EN
@AltcoinWhales - EN
@AnhemTrader - VI
@Bitafta - EN
@CacheStation - EN
@Checksig - EN
@CryptoCharters - EN
@CryptoCredTA - EN
@CryptoInMinutes - EN
@CryptoScanner100Eyes - EN
@ExcavoChannel - EN
@KXiantu - ZH
@Pierre_Crypt0_Public - EN
@PsychoChromatic - EN
@SalsaTekila - EN
@ScalpingMF - EN
@T45Investments - RU
@TASmartAlerts - EN
@TheLionDen - EN
@TraderMillClub - EN
@WCSEChannel - EN
@WCSERussia - RU
@WhaleTank - EN
@WinterWolvesTA - EN
@WolfPackSignals - EN
Indicator Bots:
@Crypto_Scanner - EN
@CryptoQuantBotChannel - EN
@BitmexRekts - EN
@BounceBotBin - EN
@Coin_Pulse - EN
@Coin_Pulse_Listing - EN
@CoinTrendz - EN
@CryptoChan_HighLowPulse - EN
@DataLightMe - EN
@WallMonitor - EN
@Whale_Alert_io - EN
@WhaleCalls - EN
@WhalepoolBTCFeed - EN
@WhaleSniper - EN
submitted by Aztek_btc to cryptogroups

PSA: Guide on how to recover your lost Segwit coins using Electron Cash

How to get your recovered SegWit funds using Electron Cash

Background

Thousands of BCH on thousands of coins that were accidentally sent to Segwit 3xxx addresses were recovered by BTC.TOP in block 582705.
This was a wonderful service to the community. This had to be done quickly as the coins were anyone-can-spend and needed to be sent somewhere. This all had to be done before thieves could get their dirty paws on them.
So.. How were they recovered? Did BTC.TOP just take the coins for themselves? NO: They were not taken by BTC.TOP. This would be wrong (morally), and would open them up to liability and other shenanigans (legally).
Instead, BTC.TOP acted quickly and did the legally responsible thing with minimal liability: the coins were sent on to the intended destination address of the SegWit transaction (translated to a normal BCH address).
This means BTC.TOP did not steal your coins and/or does not have custody of your funds!
But this does mean you now need to figure out how to get the private key associated with where they were sent -- in order to unlock the funds. (Which will be covered below).
Discussions on why this was the most responsible thing to do and why it was done this way are available upon request. Or you can search this subreddit to get to them.

Ok, so BTC.TOP doesn't have them -- who does?

You do (if they were sent to you)! Or -- the person / address they were sent to does!

HUH?

The Segwit transactions have a bad/crazy/messed-up format which contains an output (destination) which contains a hash of a public key inside. So they "sort of" contain a regular bitcoin address inside of them, with other Segwit garbage around them. This hash was decoded and translated to a regular BCH address, and the funds were sent there.
Again: The funds were forwarded on to a regular BCH address where they are safe. They are now guarded by a private key -- where they were not before (before they were "anyone can spend"). It can be argued this is the only reasonable thing to have done with them (legally and morally) -- continue to send them to their intended destination. This standard, if it's good enough for the US Post Office and Federal Mail, is good enough here. It's better than them being stolen.

Ok, I get it... they are on a regular BCH address now. The address of the destination of the Tx, is it?

Yes. So now a regular BCH private key (rather than anyone can spend) is needed to spend them further. Thus the Segwit destination address you sent them to initially was effectively translated to a BCH regular address. It's as if you posted a parcel with the wrong ZIP code on it -- but the USPS was nice enough to figure that out and send it to where you intended it to go.

Why do it this way and not return to sender?

Because of the ambiguity present, it's not entirely clear which sender to return them to. There is too much ambiguity there, and it would have led to many inputs not being recovered in a proper manner. More discussion on this is available upon request.

Purpose of this guide

This document explains how to:
Complications to watch out for:

Step 1: Checking where your coins went

To verify if this recovery touched one of your lost coins: look for the transaction that spent your coins and open it on bch.btc.com explorer.

Normal aka "P2PKH"

Let’s take this one for example.
Observe the input says:
P2SH 160014d376cf1baff9eeed943d58551d53c48377adb98c 
And the output says:
P2PKH OP_DUP OP_HASH160 d376cf1baff9eeed943d58551d53c48377adb98c OP_EQUALVERIFY OP_CHECKSIG 
Notice a pattern?
The fact that these two highlighted hexadecimal strings are the same means that the funds were forwarded to the identical public key, and can be spent by the private key (corresponding to that public key) if it is imported into a Bitcoin Cash wallet.
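If you'd rather check this programmatically, here is a small sketch; it assumes raw hex for both scripts (the P2PKH ASM shown above serializes to 76a914<hash>88ac):

    def same_pubkey_hash(input_hex: str, output_hex: str) -> bool:
        # input:  16 (push 22 bytes) 00 14 <20-byte key hash>  (P2WPKH redeem script)
        # output: 76 a9 14 <20-byte key hash> 88 ac            (standard P2PKH)
        assert input_hex.startswith("160014")
        assert output_hex.startswith("76a914") and output_hex.endswith("88ac")
        return input_hex[6:46] == output_hex[6:46]

    # same_pubkey_hash("160014d376...b98c", "76a914d376...b98c88ac") -> True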

Multisig aka "P2SH"

If the input starts with “P2SH 220020…”, as in this example, then your segwit address is a script -- probably a multisignature. While the input says “P2SH 22002019aa2610492ee2c18605597136294596d4f0f9bc6ce0974ed3a975d65da4ca1e”, the output says “P2SH OP_HASH160 21bdc73fb15b3bb7bd1be365e92447dc2a44e662 OP_EQUAL”. These two strings actually correspond to the same script, but they are different in content and length due to segwit’s design. However, you just need to RIPEMD160 hash the first string and compare to the second -- you can check this by entering the input string (after the 220020 part) into this website’s Binary Hash field and checking the resulting RIPEMD160 hash. The resulting hash is 21bdc73fb15b3bb7bd1be365e92447dc2a44e662, which corresponds to the output hex above, and this means the coins were forwarded to the same spending script but in "non-segwit form". You will need to re-assemble the same multi-signature setup and enough private keys on a Bitcoin Cash wallet. (Sorry for the succinct explanation here. Ask in the comments for more details perhaps.)
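The same check in Python, for convenience (hashlib's "ripemd160" needs OpenSSL support on your system, and the slicing assumes the exact 220020... form described above):

    import hashlib

    def p2sh_hash_matches(input_hex: str, output_hash_hex: str) -> bool:
        assert input_hex.startswith("220020")
        sha256_of_witness_script = bytes.fromhex(input_hex[6:])      # 32 bytes
        h = hashlib.new("ripemd160", sha256_of_witness_script).digest()
        return h.hex() == output_hash_hex  # the 20 bytes inside OP_HASH160 <...> OP_EQUAL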

No match -- what?!

If the string does not match (identically in the Normal case above, or after properly hashing in the Multisig case above), then your coins were sent elsewhere, possibly even taken by an anonymous miner. :'(

Step 2: How To Do the Recovery

Recover "Normal" address transactions (P2PKH above)

This is for recoveries where the input string started with “160014”.
Option 1 (BIP39 seed):
Option 2 (single key):
Option 3 (xprv -- many keys):
Code:
    mkey = "yprvAJ48Yvx71CKa6a6P8Sk78nkSF7iqqaRob1FN7Jxsqm3L52K8XmZ7EtEzPzTUWXAaHNfN4DFAuP4cdM38yrE6j3YifV8i954hyD5rhPyUNVP"

    from electroncash.bitcoin import DecodeBase58Check, EncodeBase58Check

    # Strip the 4-byte version prefix from the decoded extended key and
    # re-encode it with the xprv version bytes (0x0488ADE4), so the key
    # can be imported as a regular BIP32 xprv.
    EncodeBase58Check(b'\x04\x88\xad\xe4' + DecodeBase58Check(mkey)[4:])
Option 4 (hardware wallet):

How to Recover Multisignature wallets (P2WSH-in-P2SH in segwit parlance)

This is for recoveries where the input string started with "220020".
Please read the above instructions for how to import single keys. You will need to do similar but taking care to reproduce the same set of multisignature keys as you had in the BTC wallet. Note that Electron Cash does not support single-key multisignature, so you need to use the BIP39 / xprv approach.
If you don’t observe the correct address in Electron Cash, then check the list of public keys by right clicking on an address, and compare it to the list seen in your BTC wallet. Also ensure that the number of required signers is identical.
submitted by NilacTheGrim to btc

Why CHECKDATASIG Does Not Matter

In this post, I will prove that the two main arguments against the new CHECKDATASIG (CDS) op-codes are invalid. And I will prove that two common arguments for CDS are invalid as well. The proof requires only one assumption (which I believe will be true if we continue to reactivate old op-codes and increase the limits on script and transaction sizes [something that seems to have universal support]):
ASSUMPTION 1. It is possible to emulate CDS with a big long raw script.

Why are the arguments against CDS invalid?

Easy. Let's analyse the two arguments I hear most often against CDS:

ARG #1. CDS can be used for illegal gambling.

This is not a valid reason to oppose CDS because it is a red herring. By Assumption 1, the functionality of CDS can be emulated with a big long raw script. CDS would not then affect what is or is not possible in terms of illegal gambling.

ARG #2. CDS is a subsidy that changes the economic incentives of bitcoin.

The reasoning here is that being able to accomplish in a single op-code, what instead would require a big long raw script, makes transactions that use the new op-code unfairly cheap. We can shoot this argument down from three directions:
(A) Miners can charge any fee they want.
It is true that today miners typically charge transaction fees based on the number of bytes required to express the transaction, and it is also true that a transaction with CDS could be expressed with fewer bytes than the same transaction constructed with a big long raw script. But these two facts don't matter because every miner is free to charge any fee he wants for including a transaction in his block. If a miner wants to charge more for transactions with CDS he can (e.g., maybe the miner believes such transactions cost him more CPU cycles and so he wants to be compensated with higher fees). Similarly, if a miner wants to discount the big long raw scripts used to emulate CDS he could do that too (e.g., maybe a group of miners have built efficient ways to propagate and process these huge scripts and now want to give a discount to encourage their use). The important point is that the existence of CDS does not impede the free market's ability to set efficient prices for transactions in any way.
(B) Larger raw transactions do not imply increased orphaning risk.
Some people might argue that my discussion above was flawed because it didn't account for orphaning risk due to the larger transaction size when using a big long raw script compared to a single op-code. But transaction size is not what drives orphaning risk. What drives orphaning risk is the amount of information (entropy) that must be communicated to reconcile the list of transactions in the next block. If the raw-script version of CDS were popular enough to matter, then transactions containing it could be compressed as
....CDS'(signature, message, public-key)....
where CDS' is a code* that means "reconstruct this big long script operation that implements CDS." Thus there is little if any fundamental difference in terms of orphaning risk (or bandwidth) between using a big long script or a single discrete op code.
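As a toy illustration of that compression idea (the escape byte and the template table are made up):

    KNOWN_TEMPLATES = {0x01: b"<big long raw script body emulating CDS>"}
    ESCAPE = 0xfe  # hypothetical marker byte

    def compress(script: bytes) -> bytes:
        # Before relay: swap each well-known template body for a 2-byte code.
        for code, body in KNOWN_TEMPLATES.items():
            script = script.replace(body, bytes([ESCAPE, code]))
        return script

    def expand(wire: bytes) -> bytes:
        # Before validation: restore the original raw script byte-for-byte.
        for code, body in KNOWN_TEMPLATES.items():
            wire = wire.replace(bytes([ESCAPE, code]), body)
        return wire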
(C) More op-codes does not imply more CPU cycles.
Firstly, all op-codes are not equal. OP_1ADD (adding 1 to the input) requires vastly fewer CPU cycles than OP_CHECKSIG (checking an ECDSA signature). Secondly, if CDS were popular enough to matter, then whatever "optimized" version could be created for the discrete CDS op-codes could be used for the big long version emulating it in raw script. If this is not obvious, realize that all that matters is that the output of both functions (the discrete op-code and the big long script version) must be identical for all inputs, which means that it does NOT matter how the computations are done internally by the miner.

Why are (some of) the arguments for CDS invalid?

Let's go through two of the arguments:

ARG #3. It makes new useful bitcoin transactions possible (e.g., forfeit transactions).

If Assumption 1 holds, then this is false because CDS can be emulated with a big long raw script. Nothing becomes possible that wasn't possible before.

ARG #4. It is more efficient to do things with a single op-code than a big long script.

This is basically Argument #2 in reverse. Argument #2 was that CDS would be too efficient and change the incentives of bitcoin. I then showed how, at least at the fundamental level, there is little difference in efficiency in terms of orphaning risk, bandwidth or CPU cycles. For the same reason that Argument #2 is invalid, Argument #4 is invalid as well. (That said, I think a weaker argument could be made that a good scripting language allows one to do the things he wants to do in the simplest and most intuitive ways and so if CDS is indeed useful then I think it makes sense to implement in compact form, but IMO this is really more of an aesthetics thing than something fundamental.)
It's interesting that both sides make the same main points, yet argue in the opposite directions.
Argument #1 and #3 can both be simplified to "CDS permits new functionality." This is transformed into an argument against CDS by extending it with "...and something bad becomes possible that wasn't possible before and so we shouldn't do it." Conversely, it is transformed to an argument for CDS by extending it with "...and something good becomes possible that was not possible before and so we should do it." But if Assumption 1 holds, then "CDS permits new functionality" is false and both arguments are invalid.
Similarly, Arguments #2 and #4 can both be simplified to "CDS is more efficient than using a big long raw script to do the same thing." This is transformed into an argument against CDS by tacking on the speculation that "...which is a subsidy for certain transactions which will throw off the delicate balance of incentives in bitcoin!!1!." It is transformed into an argument for CDS because "... heck, who doesn't want to make bitcoin more efficient!"

What do I think?

If I were the emperor of bitcoin I would probably include CDS because people are already excited to use it, the work is already done to implement it, and the plan to roll it out appears to have strong community support. The work to emulate CDS with a big long raw script is not done.
Moving forward, I think Andrew Stone's (thezerg1) approach outlined here is an excellent way to make incremental improvements to Bitcoin's scripting language. In fact, after writing this essay, I think I've sort of just expressed Andrew's idea in a different form.
* you might call it an "op code" teehee
submitted by Peter__R to btc

Transcript of the community Q&A with Steve Shadders and Daniel Connolly of the Bitcoin SV development team. We talk about the path to big blocks, new opcodes, selfish mining, malleability, and why November will lead to a divergence in consensus rules. (Cont in comments)

We've gone through the painstaking process of transcribing the linked interview with Steve Shadders and Daniel Connolly of the Bitcoin SV team. There is an amazing amount of information in this interview that we feel is important for businesses and miners to hear, so we believe it was important to get this in a written form. To avoid any bias, the transcript is taken almost word for word from the video, with just a few changes made for easier reading. If you see any corrections that need to be made, please let us know.
Each question is in bold, and each question and response is timestamped accordingly. You can follow along with the video here:
https://youtu.be/tPImTXFb_U8

BEGIN TRANSCRIPT:

Connor: 0:02:19.68,0:02:45.10
Alright so thank You Daniel and Steve for joining us. We're joined by Steve Shadders and Daniel Connolly from nChain and also the lead developers of the Satoshi’s Vision client. So Daniel and Steve do you guys just want to introduce yourselves before we kind of get started here - who are you guys and how did you get started?
Steve: 0:02:38.83,0:03:30.61
So I'm Steve Shadders and at nChain I am the director of solutions in engineering and specifically for Bitcoin SV I am the technical director of the project which means that I'm a bit less hands-on than Daniel but I handle a lot of the liaison with the miners - that's the conditional project.
Daniel:
Hi I’m Daniel I’m the lead developer for Bitcoin SV. As the team's grown that means that I do less actual coding myself but more organizing the team and organizing what we’re working on.
Connor: 0:03:23.07,0:04:15.98
Great so we took some questions - we asked on Reddit to have people come and post their questions. We tried to take as many of those as we could and eliminate some of the duplicates, so we're gonna kind of go through each question one by one. We added some questions of our own in and we'll try and get through most of these if we can. So I think we just wanted to start out and ask, you know, Bitcoin Cash is a little bit over a year old now. Bitcoin itself is ten years old but in the past a little over a year now what has the process been like for you guys working with the multiple development teams and, you know, why is it important that the Satoshi’s vision client exists today?
Steve: 0:04:17.66,0:06:03.46
I mean yes, well, we've been in touch with the developer teams for quite some time - I think a bi-weekly meeting of Bitcoin Cash developers across all implementations started around November last year. I myself joined those in January or February of this year and Daniel a few months later. So we communicate with all of those teams and I think, you know, it's not been without its challenges. It's well known that there's a lot of disagreements around it, but something I do look forward to in the near future is a day when the consensus issues themselves are all rather settled, and if we get to that point then there's not going to be much reason for the different developer teams to disagree on stuff. They might disagree on non-consensus related stuff but that's not the end of the world because, you know, Bitcoin Unlimited is free to go and implement whatever they want in the back end of Bitcoin Unlimited and Bitcoin SV is free to do whatever they want in the backend, and if they interoperate on a non-consensus level, great. If they don't, not such a big problem - there will obviously be bridges between the two. So yeah, I think going forward the complications of having so many personalities with wildly different ideas are going to get less and less.
Cory: 0:06:00.59,0:06:19.59
I guess moving forward now another question about the testnet - a lot of people on Reddit have been asking what the testing process for Bitcoin SV has been like, and if you guys plan on releasing any of those results from the testing?
Daniel: 0:06:19.59,0:07:55.55
Sure, yeah - so our release was concentrated on stability, right, with the first release of Bitcoin SV, and that involved doing a large amount of additional testing, particularly not so much at the unit test level but more at the system test level - so setting up test networks, performing tests, and making sure that the software behaved as we expected, right. Confirming the changes we made, making sure that there aren't any other side effects. Because, you know, it was quite a rush to release the first version, we've got our test results documented, but not in a way that we can really release them. We're thinking about doing that but we're not there yet.
Steve: 0:07:50.25,0:09:50.87
Just to tidy that up - we've spent a lot of our time developing really robust test processes and the reporting is something that we can read on our internal systems easily, but we need to tidy that up to give it out for public release. The priority for us was making sure that the software was safe to use. We've established a test framework that involves a progression of code changes through multiple test environments - I think it's five different test environments before it gets the QA stamp of approval - and as for the question about the testnet, yeah, we've got four of them. We've got Testnet One and Testnet Two. A slightly different numbering scheme to the testnet three that everyone's probably used to – that’s just how we reference them internally. They're [1 and 2] both forks of Testnet Three. [Testnet] One we used for activation testing, so we would test things before and after activation - that one’s set to reset every couple of days. The other one [Testnet Two] was set to post activation so that we can test all of the consensus changes. The third one was a performance test network which I think most people have probably have heard us refer to before as Gigablock Testnet. I get my tongue tied every time I try to say that word so I've started calling it the Performance test network and I think we're planning on having two of those: one that we can just do our own stuff with and experiment without having to worry about external unknown factors going on and having other people joining it and doing stuff that we don't know about that affects our ability to baseline performance tests, but the other one (which I think might still be a work in progress so Daniel might be able to answer that one) is one of them where basically everyone will be able to join and they can try and mess stuff up as bad as they want.
Daniel: 0:09:45.02,0:10:20.93
Yeah, so we recently shared the details of Testnet One and Two with the other BCH developer groups. The Gigablock test network we've shared with one group so far, but yeah, we're building it, as Steve pointed out, to be publicly accessible.
Connor: 0:10:18.88,0:10:44.00
I think that was my next question I saw that you posted on Twitter about the revived Gigablock testnet initiative and so it looked like blocks bigger than 32 megabytes were being mined and propagated there, but maybe the block explorers themselves were coming down - what does that revived Gigablock test initiative look like?
Daniel: 0:10:41.62,0:11:58.34
That's what the Gigablock test network is. So the Gigablock test network was first set up by Bitcoin Unlimited with nChain's help and they did some great work on that, and we wanted to revive it. So we wanted to bring it back and do some large-scale testing on it. It's a flexible network - at one point we had eight different large nodes spread across the globe, sort of mirroring the old one. Right now we've scaled back because we're not using it at the moment, so there are, I think, three. We have produced some large blocks there and it's helped us a lot in our research into the scaling capabilities of Bitcoin SV, so it's guided the work that the team's been doing for the last month or two on the improvements that we need for scalability.
Steve: 0:11:56.48,0:13:34.25
I think that's actually a good point to kind of frame where our priorities have been, in kind of two separate stages. I think, as Daniel mentioned before, because of the time constraints we kept the change set for the October 15 release as minimal as possible - it was just the consensus changes. We didn't do any work on performance at all and we put all our focus and energy into establishing the QA process and making sure that that change was safe, and that was a good process for us to go through. It highlighted what we were missing in our team - we got our recruiters very busy recruiting a Test Manager and more QA people. The second stage after that is performance related work where, as Daniel mentioned, the results of our performance testing fed into what tasks we were gonna start working on for the performance related stuff. Now that work is still in progress - for some of the items that we identified the code is done and that's going through the QA process, but it's not quite there yet. That's basically the two-stage process that we've been through so far. We have a roadmap that goes further into the future that outlines more stuff, but primarily it's been QA first, performance second. The performance enhancements are close and on the horizon, but some of that work should be ongoing for quite some time.
Daniel: 0:13:37.49,0:14:35.14
Some of the changes we need for the performance are really quite large and really get down into the base level view of the software. There's kind of two groups of them mainly. One that are internal to the software – to Bitcoin SV itself - improving the way it works inside. And then there's other ones that interface it with the outside world. One of those in particular we're working closely with another group to make a compatible change - it's not consensus changing or anything like that - but having the same interface on multiple different implementations will be very helpful right, so we're working closely with them to make improvements for scalability.
Connor: 0:14:32.60,0:15:26.45
Obviously for Bitcoin SV one of the main things that you guys wanted to do, that some of the other developer groups weren't willing to do right now, is to increase the maximum default block size to 128 megabytes. I kind of wanted to pick your brains a little bit about that - a lot of the objection to either removing the block size limit entirely or increasing it on a larger scale is this idea of the infinite block attack, right, and that kind of came through in a lot of the questions. What are your thoughts on the "infinite block attack" - is it something that really exists, is it something that miners themselves should be more proactive on preventing, or I guess what are your thoughts on that attack that everyone says will happen if you uncap the block size?
Steve: 0:15:23.45,0:18:28.56
I'm often quoted on Twitter and Reddit - I've said before the infinite block attack is bullshit. Now, that's a statement that I suppose is easy to take out of context, but I think the 128 MB limit is something there are probably two schools of thought about. There are some people who think that you shouldn't increase the limit to 128 MB until the software can handle it, and there are others who think that it's fine to do it now so that the limit is already raised when the software improves and can handle it. Obviously we're from the latter school of thought. As I said before we've got a bunch of performance increases, performance enhancements, in the pipeline. If we wait till May to increase the block size limit to 128 MB then those performance enhancements will go in, but we won't be able to actually demonstrate it on mainnet. As for the infinite block attack itself, I mean there are a number of mitigations that you can put in place. I mean firstly, you know, going down to a bit of the tech detail - when you send a block message or send any peer to peer message there's a header which has the size of the message. If someone says they're sending you a 30MB message and you're receiving it and it gets to 33MB then obviously you know something's wrong so you can drop the connection. If someone sends you a message that's 129 MB and you know the block size limit is 128 you know it's kind of pointless to download that message. So I mean these are just some of the mitigations that you can put in place. When I say the attack is bullshit, I mean it is bullshit in the sense that it's really quite trivial to prevent it from happening. I think there is a bit of a school of thought in the Bitcoin world that if it's not in the software right now then it kind of doesn't exist. I disagree with that, because there are small changes that can be made to work around problems like this. One other aspect of the infinite block attack - and let's not call it the infinite block attack, let's just call it the large block attack - is that it takes a lot of time to validate, and that we've gotten around by having parallel pipelines for blocks to come in. So you've got a block that's coming in that's got an unknown stuck on it for two hours or whatever, downloading and validating it. At some point another block is going to get mined by someone else, and as long as those two blocks aren't stuck in a serial pipeline then you know the problem kind of goes away.
Cory: 0:18:26.55,0:18:48.27
Are there any concerns with the propagation of those larger blocks? Because there's a lot of questions around you know what the practical size of scaling right now Bitcoin SV could do and the concerns around propagating those blocks across the whole network.
Steve: 0:18:45.84,0:21:37.73
Yes, there have been concerns raised about it. I think what people forget is that compact blocks and xThin exist, so a 32MB block does not send 32MB of data in most cases, almost all cases. The concern here that I think I do find legitimate is the Great Firewall of China. Very early on in Bitcoin SV we started talking with miners on the other side of the firewall and that was one of their primary concerns. We had anecdotal reports of people who were having trouble getting a stable connection any faster than 200 kilobits per second, and even with compact blocks you still need to get the transactions across the firewall. So we've done a lot of research into that - we tested our own links across the firewall, or rather CoinGeek's links across the firewall as they've given us access to some of their servers so that we can play around, and we were able to get sustained rates of 50 to 90 megabits per second, which pushes that problem quite a long way down the road into the future. I don't know the maths off the top of my head, but the size of the blocks that can sustain is pretty large. So we're looking at a couple of options - it may well be the chattiness of the peer-to-peer protocol causes some of these issues with the Great Firewall, so we have someone building a bridge concept/tool where you basically just have one kind of TX vacuum on either side of the firewall that collects them all up and sends them off every one or two seconds as a single big chunk to eliminate some of that chattiness. The other is we're looking at building a multiplexer that will sit and send stuff up to the peer-to-peer network on one side and send it over splitters, to send it over multiple links, and reassemble it on the other side, so we can sort of transit the Great Firewall without too much trouble. But I mean getting back to the core of your question - yes there is a theoretical limit to block size propagation time and that's kind of where Moore's Law comes in. Putting in faster links kicks that can further down the road, and you just keep on putting in faster links. I don't think 128 meg blocks are going to be an issue though with the speed of the internet that we have nowadays.
Connor: 0:21:34.99,0:22:17.84
One of the other changes that you guys are introducing is increasing the max script size - I think right now it's going from 201 to 500 [opcodes]. A few of the questions we got were, I guess, #1: why not uncap it entirely - I think you guys said you ran into some concerns while testing that - and then #2, specifically we had a question about how certain you are that there are no remaining n squared bugs or vulnerabilities left in script execution?
Steve: 0:22:15.50,0:25:36.79
It's interesting, that decision - we were initially planning on removing that cap altogether, and the next cap that comes into play after that (the next effective cap) is a 10,000 byte limit on the size of the script. We took a more conservative route and decided to wind that back to 500 - it's interesting that we got some criticism for that when the primary criticism I think that was leveled against us was that it's dangerous to increase that limit to unlimited. We did that because we're being conservative. We did some research into these n squared bugs, sorry - attacks, that people have referred to. We identified a few of them and we had a hard think about it and thought - look, if we can find this many in a short time we can fix them all (the whack-a-mole approach), but it does suggest that there may well be more unknown ones. So we thought about taking the whack-a-mole approach, but that doesn't really give us any certainty. We will fix all of those individually, but a more global approach is to make sure that if anyone does discover one of these scripts it doesn't bring the node to a screaming halt. The problem here is that because the Bitcoin node is essentially single-threaded, if you get one of these scripts that locks up the script engine for a long time, everything that's behind it in the queue has to stop and wait. So what we wanted to do, and this is something we've got an engineer actively working on right now, is once that script validation code path is properly parallelized (parts of it already are), then we'll basically assign a few threads for well-known transaction templates, and a few threads for any type of script. So if you get a few scripts that are nasty and lock up a thread for a while, that's not going to stop the node from working, because you've got these other kind of lanes of the highway that are exclusively reserved for well-known script templates and they'll just keep on passing through. Once you've got that in place, then I think we're in a much better position to get rid of that limit entirely, because the worst that's going to happen is your non-standard script pipelines get clogged up but everything else will keep ticking along. There are other mitigations for this as well - I mean, I know you could always put a time limit on script execution if you wanted to, and that would be something that would be up to individual miners. Bitcoin SV's job I think is to provide the tools for the miners and the miners can then choose, you know, how to make use of them - if they want to set time limits on script execution then that's a choice for them.
Daniel: 0:25:34.82,0:26:15.85
Yeah, I'd like to point out that a node here, when it receives a transaction through the peer to peer network, doesn't have to accept that transaction - you can reject it. If it looks suspicious to the node it can just say, you know, we're not going to deal with that, or if it takes more than five minutes to execute, or more than a minute even, it can just abort and discard that transaction, right. The only time we can't do that is when it's in a block already, but then it could decide to reject the block as well. Those are all possibilities that could be in the software.
Steve: 0:26:13.08,0:26:20.64
Yeah, and if it's in a block already it means someone else was able to validate it so…
Cory: 0:26:21.21,0:26:43.60
There's a lot of discussion about the re-enabled opcodes coming - OP_MUL, OP_INVERT, OP_LSHIFT, and OP_RSHIFT. Could you maybe explain the significance of those opcodes being re-enabled?
Steve: 0:26:42.01,0:28:17.01
Well I mean one of the most significant things is that, other than two which are minor variants of MUL and DIV, they represent almost the complete set of original op codes. I think that's not necessarily a technical issue, but it's an important milestone. MUL is one that I've heard some interesting comments about. People ask me why are you putting OP_MUL back in if you're planning on changing them to big number operations instead of the 32-bit limit that they're currently imposed upon. The simple answer to that question is that we currently have all of the other arithmetic operations except for OP_MUL. We've got add, divide, subtract, modulo - it's odd to have a script system that's got all the mathematical primitives except for multiplication. The other answer to that question is that they're useful - we've talked about a Rabin signature solution that basically replicates the function of DATASIGVERIFY. That's just one example of a use case for this - most cryptographic primitive operations require mathematical operations, and bit shifts are useful for a whole ton of things. So it's really just about completing that work and completing the script engine, or rather not completing it, but putting it back the way that it was meant to be.
Connor: 0:28:20.42,0:29:22.62
Big Num vs 32 Bit. I've seen Daniel - I think I saw you answer this on Reddit a little while ago - but the new opcodes use logical shifts while Satoshi's version used arithmetic shifts. The general question that I think a lot of people keep bringing up is, maybe in a rhetorical way, why not restore it back to the way Satoshi had it exactly - what are the benefits of changing it now to operate a little bit differently?
Daniel: 0:29:18.75,0:31:12.15
Yeah there's two parts there - the big number one, and the LSHIFT being a logical shift instead of arithmetic. So when we re-enabled these opcodes we looked at them carefully and adjusted them slightly, as we did in the past with OP_SPLIT. So the new LSHIFT and RSHIFT are bitwise operators. They can be used to implement arithmetic shifts - I think I've posted a short script that did that - but we can't do it the other way around, right. You couldn't use an arithmetic shift operator to implement a bitwise one. It's because of the ordering of the bytes in the arithmetic values, so the values that represent numbers. They're little-endian, which means they're swapped around compared to what many other systems use - what I'd consider normal - big-endian. And if you start shifting that properly as a number then the shifting sequence in the bytes is a bit strange, so it couldn't go the other way around - you couldn't implement bitwise shift with arithmetic, so we chose to make them bitwise operators - that's what we proposed.
Steve: 0:31:10.57,0:31:51.51
That was essentially a decision made in May - or rather, a consequence of decisions that were made in May. In May we reintroduced OP_AND, OP_OR, and OP_XOR, and the decision to replace three different string operators with OP_SPLIT was also made then. So that was not a decision we made unilaterally; it was made collectively with all of the BCH developers - well, not all of them were actually in all of the meetings, but they were all invited.
Daniel: 0:31:48.24,0:32:23.13
Another example of that is that we originally proposed OP_2DIV and OP_2MUL, I think - OP_2MUL is a single operator that multiplies the value by two, right - but it was pointed out that that can very easily be achieved by just doing a multiply by two with OP_MUL instead of having a separate operator for it. So we scrapped those, we took them back out, because we wanted to keep the number of operators to a minimum.
Steve: 0:32:17.59,0:33:47.20
There was an appetite for keeping the operators minimal. I mean, the idea to replace OP_SUBSTR, OP_LEFT, and OP_RIGHT with the OP_SPLIT operator actually came from Gavin Andresen. He made a brief appearance in the Telegram workgroups while we were working out what to do with the May opcodes, and obviously Gavin's word carries a lot of weight and we listen to him. But because we had chosen to implement the May opcodes (the bitwise opcodes) to treat the data as big-endian data streams (well, sorry - big-endian isn't really applicable, just plain data strings), it would have been completely inconsistent to implement LSHIFT and RSHIFT as integer operators, because then you would have had a set of bitwise operators that operated on two different kinds of data, which would have been nonsensical and very difficult for anyone to work with. I mean, it's a bit like P2SH - it wasn't part of the original Satoshi protocol, but once some things are done they're done, and if you want to make forward progress you've got to work within the framework that exists.
Daniel: 0:33:45.85,0:34:48.97
When we get to the big number ones it gets really complicated, because then you can't change the behavior of the existing opcodes - and I don't mean OP_MUL, I mean the other ones that have been there for a while. You can't suddenly make them big number operations without seriously looking at what scripts there might be out there and the impact of that change on those existing scripts, right. The other point is that you don't know what scripts are out there, because of P2SH - there could be scripts whose content you don't know, and you don't know what effect changing the behavior of these operators would have on them. The big number thing is tricky; I don't know what the options are - it needs some serious thought.
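(For readers wondering what behavior would actually be at stake: the sketch below shows, in illustrative Python rather than the node's actual C++ implementation, how script numbers are encoded today - little-endian, sign-magnitude, with arithmetic inputs historically capped at 4 bytes. Any big-number change has to decide what happens to scripts relying on these rules.)

    def encode_num(n: int) -> bytes:
        if n == 0:
            return b""
        neg, n = n < 0, abs(n)
        out = bytearray()
        while n:
            out.append(n & 0xFF)
            n >>= 8
        if out[-1] & 0x80:           # top bit taken: add a padding byte for the sign
            out.append(0x80 if neg else 0x00)
        elif neg:
            out[-1] |= 0x80          # otherwise fold the sign into the top bit
        return bytes(out)

    def decode_num(b: bytes, max_size: int = 4) -> int:
        if len(b) > max_size:
            raise ValueError("script number overflow")   # the 32-bit-era limit
        if not b:
            return 0
        n = int.from_bytes(b, "little")
        if b[-1] & 0x80:                                  # sign bit set -> negative
            return -(n & ~(0x80 << (8 * (len(b) - 1))))
        return n

    assert decode_num(encode_num(-255)) == -255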
Steve: 0:34:43.27,0:35:24.23
That's something we've reached out to the other implementation teams about - we actually really would like their input on the best ways to go about restoring big number operations. It has to be done extremely carefully, and I don't know if we'll get there by May next year, or when, but we're certainly willing to put a lot of resources into it, and we're more than happy to work with BU or XT or whoever wants to work with us on getting that done, and getting it done safely.
Connor: 0:35:19.30,0:35:57.49
Along a similar vein, you know, Bitcoin Core introduced this concept of standard and non-standard scripts. I had a pretty interesting conversation with Clemens Ley about use cases for "non-standard scripts", as they're called. I know at least one developer on Bitcoin ABC is very hesitant, or kind of pushed back on him about doing that. So what are your thoughts about non-standard scripts, and the IsStandard check in its entirety?
Steve: 0:35:58.31,0:37:35.73
I'd actually like to repurpose the concept. I think I mentioned before multi-threaded script validation and having some dedicated well-known script templates - and when you say the words "well-known script template", there's already a check in Bitcoin that kind of tells you whether a script is well-known or not, and that's IsStandard. I'm generally in favor of getting rid of the notion of standard transactions, but it's actually a decision for miners, and it's really more of a behavioral change than a technical change. There's a whole bunch of configuration options that miners can set that affect what they consider to be standard and not standard, but the reality is not too many miners are using those configuration options. So standard transactions as a concept is meaningful only to an arbitrary degree, I suppose. But yeah, I would like to make it easier for people to get non-standard scripts into Bitcoin so that they can experiment, and from discussions I've had with CoinGeek, they're quite keen on making their miners accept, at least initially, a wider variety of transactions.
Daniel: 0:37:32.85,0:38:07.95
So I think IsStandard will remain important within the implementation itself for efficiency purposes, right - you want to streamline the base use case of cash payments and prioritize them. That's where it will remain important. But on the interfaces from the node to the rest of the network, yeah, I could easily see it being removed.
Cory: 0:38:06.24,0:38:35.46
Connor mentioned that there are some people who disagree with Bitcoin SV and what they're doing. A lot of questions have come up around, you know, why November? Why implement these changes in November - some think that maybe a six-month delay might avoid a split. So first off, what do you think about the idea of a potential split, and what is the urgency for November?
Steve: 0:38:33.30,0:40:42.42
Well, in November there's going to be a divergence of consensus rules regardless of whether we implement these new opcodes or not. Bitcoin ABC released their spec for the November hard fork - I think on August 16th or 17th, something like that - and their client as well, and it included CTOR and it included DSV. Now, for the miners that commissioned the SV project, CTOR and DSV are controversial changes, and once they're in, they're in. They can't be reversed - I mean, CTOR maybe you could reverse at a later date, but DSV, once someone's put a P2SH transaction - or even a non-P2SH transaction - into the blockchain using that opcode, it's irreversible. So it's interesting that some people refer to the Bitcoin SV project as causing a split - we're not proposing to do anything that anyone disagrees with. There might be some contention about changing the opcode limit, but what we're doing - I mean, Bitcoin ABC already published their spec for May, and it is our spec for the new opcodes. So in terms of urgency - should we wait? Well, the fact is that we can't. Come November, you know, it's a bit like Segwit - once Segwit was in, yes, you arguably could get it out by spending everyone's anyone-can-spend transactions, but in reality it's never going to be that easy, and it's going to cause a lot of economic disruption. So yeah, that's it. We're putting our changes in because it's not going to make a difference either way in terms of whether there's going to be a divergence of consensus rules - there's going to be a divergence whatever our changes are. Our changes are not controversial at all.
Daniel: 0:40:39.79,0:41:03.08
If we didn't include these changes in the November upgrade we'd be pushing ahead with a no-change, right? But the November upgrade is there, so we should use it while we can, adding these non-controversial changes to it.
Connor: 0:41:01.55,0:41:35.61
Can you talk about DATASIGVERIFY? What are your concerns with it? The general concept that's been floated around, because of Ryan Charles, is the idea that it's a subsidy, right - that it takes a whole megabyte and crunches that down, and the computation time stays the same but the cost is lower. Do you share his view on that, or what are your concerns with it?
Daniel: 0:41:34.01,0:43:38.41
Can I say one or two things about this? There are different ways to look at that, right. I'm an engineer - my specialization is software - so on the economics of it I hear different opinions. I trust some more than others, but I am NOT an economist. With my limited expertise I kind of agree with the ones who say it's a subsidy - it looks very much like one to me - but yeah, that's not my area. What I can talk about is the software. Adding DSV adds really quite a lot of complexity to the code, right - it's a big change to add that. And what are we going to do - every time someone comes up with an idea, we're going to add a new opcode? How many opcodes are we going to add? I saw reports that Jihan was talking about hundreds of opcodes or something like that, and it's like, how big is this client going to become - how big is this node - is it going to have to handle every kind of weird opcode that's out there? The software is just going to get unmanageable. And DSV - my main consideration at the beginning was that, you know, if you can implement it in script, you should do it, because that way it keeps the node software simple, it keeps it stable, and it's easier to test that it works properly and correctly. It's almost like adding (?) code from a microprocessor - why would you do that if you can already implement it in the script that is there?
Steve: 0:43:36.16,0:46:09.71
It's actually an interesting inconsistency, because when we were talking about adding the opcodes in May, the philosophy that seemed to drive the decisions we were able to form a consensus around was to simplify and keep the opcodes as minimal as possible (i.e. where you could replicate a function by using a couple of primitive opcodes in combination, that was preferable to adding a new opcode that replaced them). OP_SUBSTR is an interesting example - you use a combination of the SPLIT, SWAP and DROP opcodes to achieve it. So at the really primitive script level we've got this philosophy of let's keep it minimal, and at this other (?) level the philosophy is let's just add a new opcode for every primitive function. Daniel's right - it's a question of opening the floodgates. Where does it end? If we're just going to go down this road, it almost opens up the argument: why have a scripting language at all? Why not just hard-code all of these functions in, one at a time? You know, pay-to-public-key-hash is a well-known construct (?) - why bother executing a script at all? But once we've done that, we take away all of the flexibility for people to innovate. So it's a philosophical difference, I think, but it's one where the position of keeping it simple does make sense. All of the primitives are there to do what people need to do. The things people feel they can't do are because of the limits that exist. If we had no opcode limit at all, if you could make a gigabyte transaction - so a gigabyte script - then you could do any kind of crypto that you wanted, even with 32-bit integer operations. Once you get rid of the 32-bit limit, of course, a lot of those scripts come out a lot smaller, so a Rabin signature script shrinks from 100MB to a couple hundred bytes.
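(Context for the Rabin references in this discussion: verifying a Rabin signature only needs a squaring and a modular reduction, which is why it maps onto multiply/modulo opcodes at all. Below is a minimal, illustrative Python version of the verification equation; the padding convention shown is one common variant, not necessarily what the proof of concept mentioned later uses.)

    # Rabin signature verification: accept iff s^2 == H(m || padding) (mod n).
    # The signer, who knows the factorization n = p*q, tweaks `padding` until
    # the hash is a quadratic residue; the verifier just squares.
    import hashlib

    def rabin_verify(n: int, msg: bytes, s: int, padding: bytes) -> bool:
        h = int.from_bytes(hashlib.sha256(msg + padding).digest(), "big") % n
        return pow(s, 2, n) == h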
Daniel: 0:46:06.77,0:47:36.65
I lost a good six months of my life diving into script, right. Once you start getting into the language and what it can do, it is really pretty impressive how much you can achieve within script. Bitcoin was designed - was released originally - with script. I mean, it didn't have to be: instead of having a transaction with script, you could have accounts, and you could say transfer, you know, so many BTC from this public key to this one - but that's not the way it was done. It was done using script, and script provides so many capabilities if you start exploring it properly, if you start really digging into what it can do. I'm really looking forward to seeing some very interesting applications come from that. I mean, Awemany's zero-conf script was really interesting, right. It relies on DSV, which is a problem (and there are some other things I don't like about it), but him diving in and using script to solve this problem was really cool - it was really good to see that.
Steve: 0:47:32.78,0:48:16.44
I actually asked a couple of people in our research team who have been working on the Rabin signature stuff this morning, because I wasn't sure where they were up to with it. They're working on a proof of concept (which I believe is pretty close to done) of a Rabin signature script - it will use smaller signatures so that it can fit within the current limits, but it will be effectively the same algorithm (as DSV). I can't give you an exact date on when that will happen, but it looks like we'll have a Rabin signature in the blockchain soon (a mini-Rabin signature).
Cory: 0:48:13.61,0:48:57.63
Based on your responses I think I kind of already know the answer to this question, but there are a lot of questions about ending experimentation on Bitcoin. Let me turn that into this: with the plan that Bitcoin SV is on, do you guys see a potential one final release - you know, no new opcodes ever released, maybe five years down the road we just solidify the base protocol and move forward with that - or are you more of the idea that, with appropriate testing, new opcodes can still be introduced?
Steve: 0:48:55.80,0:49:47.43
I think you've got to factor in what I said before about the philosophical differences. I think new functionality can be introduced just fine. Having said that - yes, there is a place for new opcodes, but it's probably a limited place, and in my opinion it's the cryptographic primitive functions. For example, CHECKSIG uses ECDSA with a specific elliptic curve, and HASH256 uses SHA-256. At some point in the future those are going to no longer be as secure as we would like them to be, and we'll replace them with different hash functions and verification functions at some point, but I think that's a long way down the track.
Daniel: 0:49:42.47,0:50:30.3
I'd like to see more data too. I'd like to see evidence that these things are needed. The way I could imagine that happening is that, with the full scripting language, some solution is implemented and we discover that it's really useful, and over a period measured in years, not days, we find a lot of transactions are using this feature - then maybe we should look at introducing an opcode to optimize it. But optimizing before we even know whether it's going to be useful - yeah, that's the wrong approach.
Steve: 0:50:28.19,0:51:45.29
I think that optimization is actually going to become an economic decision for the miners. From the miner's point of view: does it make more sense for them to optimize a particular process - does it reduce costs for them such that they can offer a better service to everyone else? So ultimately these decisions are going to be miners' decisions, not developer decisions. Developers of course can offer their input - I wouldn't expect every miner to be an expert on script - but as we're already seeing, miners are starting to employ their own developers. I'm not just talking about us - there are other miners in China that I know have got some really bright people on their staff who question and challenge all of the changes, study them, and produce their own reports. We've been lucky to be able to talk to some of those people and have some really fascinating technical discussions with them.
submitted by The_BCH_Boys to btc [link] [comments]

Trying to get a deeper understanding of atomic swaps

Hi everyone,
I'm an IT student writing a thesis about atomic swaps on BTC and BTC-like blockchains. For the thesis I decided to use BTC, LTC, BCH and DCR. These chains have a somewhat similar codebase and the same scripting language (I'm not a professional, so there might be differences, but they are not that serious), and they all have a high enough market cap to be relevant for atomic swaps.
The goal of the thesis is to find hashed timelock contracts (HTLCs) and connect matching HTLCs from different chains to identify the atomic swaps. I first searched the web for anything on atomic swaps [1] and analyzed the input script of this transaction [2] to get a basic understanding of how atomic swaps work and what they look like.
Then I wrote a Go program to search for any script longer than a simple P2PKH script. This gave me a list of many different scripts, which I analyzed by hand to keep only the HTLC ones. (Besides many multisig scripts, there is not much to find on BTC^^)
At this point I found multiple different types of HTLCs, as listed below (a small classification sketch follows the templates). Afterwards I crawled* BTC again, saving all transactions with HTLC scripts and storing the interesting data like tx-id, input value, pubKeyHashes, the secrets and their hashes. I found about one hundred HTLCs on BTC so far.
I did the same for LTC and found about 400 HTLCs.
As far as I understand, the secrets of HTLCs have to be the same on both chains. So I wrote another Go program to match the found HTLCs from BTC and LTC, and got around 30 matches. The next steps would then be to crawl BCH and DCR and also match the HTLCs found there.
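(The matching step itself is simple once the HTLCs are extracted: two HTLCs on different chains belong to the same swap when they lock on the same secret hash. A minimal Python sketch of that join - the thesis code is in Go, and the record layout here is hypothetical:)

    # Pair up HTLCs across chains by their shared secret hash.
    # Each record is assumed to look like {"txid": ..., "secret_hash": ...}.
    from collections import defaultdict

    def match_htlcs(btc_htlcs, ltc_htlcs):
        by_hash = defaultdict(list)
        for htlc in btc_htlcs:
            by_hash[htlc["secret_hash"]].append(htlc)
        matches = []
        for htlc in ltc_htlcs:
            for counterpart in by_hash.get(htlc["secret_hash"], []):
                matches.append((counterpart, htlc))   # (BTC side, LTC side)
        return matches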
* Crawling in this case means that I search the blockchain backwards (to get the newest transactions first - the beginning years are not that interesting in this case^^) until the beginning of 2017, so about 18 months. As stated in [1], the first known atomic swap between BTC and LTC was made on 19th April 2017 (or April 19th 2017 or 19.4.2017 or whatever you like), so there is not much sense in crawling any further.
I'm open to any constructive input and hope you have a few answers for me. Thank you in advance.
Type 1: sha256 secret, length=97byte
63 if 82 size 01 data1 20 88 equalverify a8 sha256 20 data32  88 equalverify 76 dup a9 hash160 14 data20  67 else 04 data4  b1 checklocktimeverify 75 drop 76 dup a9 hash160 14 data20  68 endif 88 equalverify ac checksig 
Type 2a: sha256 secret, length=94byte
63 if a8 sha256 20 data32  76 dup a9 hash160 14 data20  88 equalverify ac checksig 67 else 04 data4  b1 checklocktimeverify 75 drop 76 dup a9 hash160 14 data20  88 equalverify ac checksig 68 endif 
Type 2b: sha256 secret, length=93byte
63 if a8 sha256 20 data32  88 equalverify 76 dup a9 hash160 14 data20  67 else 04 data4  b1 checklocktimeverify 75 drop 76 dup a9 hash160 14 data20  68 endif 88 equalverify ac checksig 
Type 3: ripemd160 secret, length=81byte
63 if a6 ripemd160 14 data20  88 equalverify 76 dup a9 hash160 14 data20  67 else 04 data4  b1 checklocktimeverify 75 drop 76 dup a9 hash160 14 data20  68 endif 88 equalverify ac checksig 
Type 4a: hash160 secret, length=86byte
63 if 03 data3  b1 checklocktimeverify 75 drop 76 dup a9 hash160 14 data20  88 equalverify ac checksig 67 else 76 dup a9 hash160 14 data20  88 equalverify ad checksigverify 82 size 01 data1 21 -> 33 88 equalverify a9 hash160 14 data20  87 equal 68 endif 
Type 4b: hash160 secret, length=82byte
63 if 03 data3  b1 checklocktimeverify 75 drop 76 dup a9 hash160 14 data20  88 equalverify ac checksig 67 else 76 dup a9 hash160 14 data20  88 equalverify ad checksigverify a9 hash160 14 data20  87 equal 68 endif 
Type 5a: hash160 secret, length=81byte
63 if a9 hash160 14 data20  88 equalverify 76 dup a9 hash160 14 data20  67 else 04 data4  b2 checksequenceverify 75 drop 76 dup a9 hash160 14 data20  68 endif 88 equalverify ac checksig 
Type 5b: hash160 secret, length=78byte
63 if a9 hash160 14 data20  88 equalverify 76 dup a9 hash160 14 data20  67 else 01 data1  b2 checksequenceverify 75 drop 76 dup a9 hash160 14 data20  68 endif 88 equalverify ac checksig 
Type 6: hash160 secret, length=79byte
63 if 54  b1 checklocktimeverify 75 drop 76 dup a9 hash160 14 data20  88 equalverify ac checksig 67 else 76 dup a9 hash160 14 data20  88 equalverify ad checksigverify a9 hash160 14 data20  87 equal 68 endif 
Type 7: multiple ripemd160 secrets, length=80 + n*23byte
63 if a6 ripemd160 14 data20  88 equalverify a6 ripemd160 14 data20  ... 88 equalverify a6 ripemd160 14 data20  88 equalverify 21 data33  ac checksig 67 else 04 data4  b1 checklocktimeverify 75 drop 21 data33  ac checksig 68 endif 
Type 8: multiple ripemd160 secrets, length=81 + n*23byte
74 depth 60 16 87 equal 63 if a6 ripemd160 14 data20  88 equalverify a6 ripemd160 14 data20  ... 88 equalverify a6 ripemd160 14 data20  88 equalverify 21 data33  67 else 03 data3  b1 checklocktimeverify 75 drop 21 data33  68 endif ac checksig 
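(A sketch of how classification against the templates above could work: normalize a decoded script into the opcode/dataN notation used in the listings, then compare against each known pattern. Python for illustration, with only Type 2b spelled out; the actual thesis code is in Go and may differ.)

    # Match a decoded script (a list of tokens in the notation above,
    # e.g. ["if", "sha256", "data32", ...]) against known HTLC templates.
    TEMPLATES = {
        "type2b": ("if sha256 data32 equalverify dup hash160 data20 "
                   "else data4 checklocktimeverify drop dup hash160 data20 "
                   "endif equalverify checksig").split(),
        # ... the other templates would be listed the same way ...
    }

    def classify(tokens):
        for name, template in TEMPLATES.items():
            if tokens == template:
                return name
        return None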
[1] http://www.cryptovibes.com/crypto-news/charlie-lees-atomic-swap-between-litecoin-and-bitcoin-was-a-success/
[2] https://insight.bitpay.com/tx/0bb5a53a9c7e84e2c45d6a46a7b72afc2feffb8826b9aeb3848699c6fd856480
submitted by noobWithAComputer to Bitcoin [link] [comments]

A reminder why CryptoNote protocol was created...

CryptoNote v 2.0 Nicolas van Saberhagen October 17, 2013
1 Introduction
“Bitcoin” [1] has been a successful implementation of the concept of p2p electronic cash. Both professionals and the general public have come to appreciate the convenient combination of public transactions and proof-of-work as a trust model. Today, the user base of electronic cash is growing at a steady pace; customers are attracted to low fees and the anonymity provided by electronic cash and merchants value its predicted and decentralized emission. Bitcoin has effectively proved that electronic cash can be as simple as paper money and as convenient as credit cards.
Unfortunately, Bitcoin suffers from several deficiencies. For example, the system’s distributed nature is inflexible, preventing the implementation of new features until almost all of the network users update their clients. Some critical flaws that cannot be fixed rapidly deter Bitcoin’s widespread propagation. In such inflexible models, it is more efficient to roll out a new project rather than perpetually fix the original project.
In this paper, we study and propose solutions to the main deficiencies of Bitcoin. We believe that a system taking into account the solutions we propose will lead to a healthy competition among different electronic cash systems. We also propose our own electronic cash, “CryptoNote”, a name emphasizing the next breakthrough in electronic cash.
2 Bitcoin drawbacks and some possible solutions
2.1 Traceability of transactions
Privacy and anonymity are the most important aspects of electronic cash. Peer-to-peer payments seek to be concealed from third party’s view, a distinct difference when compared with traditional banking. In particular, T. Okamoto and K. Ohta described six criteria of ideal electronic cash, which included “privacy: relationship between the user and his purchases must be untraceable by anyone” [30]. From their description, we derived two properties which a fully anonymous electronic cash model must satisfy in order to comply with the requirements outlined by Okamoto and Ohta:
Untraceability: for each incoming transaction all possible senders are equiprobable.
Unlinkability: for any two outgoing transactions it is impossible to prove they were sent to the same person.
Unfortunately, Bitcoin does not satisfy the untraceability requirement. Since all the transactions that take place between the network’s participants are public, any transaction can be unambiguously traced to a unique origin and final recipient. Even if two participants exchange funds in an indirect way, a properly engineered path-finding method will reveal the origin and final recipient.
It is also suspected that Bitcoin does not satisfy the second property. Some researchers stated ([33, 35, 29, 31]) that a careful blockchain analysis may reveal a connection between the users of the Bitcoin network and their transactions. Although a number of methods are disputed [25], it is suspected that a lot of hidden personal information can be extracted from the public database.
Bitcoin’s failure to satisfy the two properties outlined above leads us to conclude that it is not an anonymous but a pseudo-anonymous electronic cash system. Users were quick to develop solutions to circumvent this shortcoming. Two direct solutions were “laundering services” [2] and the development of distributed methods [3, 4]. Both solutions are based on the idea of mixing several public transactions and sending them through some intermediary address; which in turn suffers the drawback of requiring a trusted third party. Recently, a more creative scheme was proposed by I. Miers et al. [28]: “Zerocoin”. Zerocoin utilizes cryptographic one-way accumulators and zero-knowledge proofs which permit users to “convert” bitcoins to zerocoins and spend them using anonymous proof of ownership instead of explicit public-key based digital signatures. However, such knowledge proofs have a constant but inconvenient size - about 30kb (based on today’s Bitcoin limits), which makes the proposal impractical. The authors admit that the protocol is unlikely to ever be accepted by the majority of Bitcoin users [5].
2.2 The proof-of-work function
Bitcoin creator Satoshi Nakamoto described the majority decision making algorithm as “one-CPU-one-vote” and used a CPU-bound pricing function (double SHA-256) for his proof-of-work scheme. Since users vote for the single history of transactions order [1], the reasonableness and consistency of this process are critical conditions for the whole system.
The security of this model suffers from two drawbacks. First, it requires 51% of the network’s mining power to be under the control of honest users. Secondly, the system’s progress (bug fixes, security fixes, etc...) requires the overwhelming majority of users to support and agree to the changes (this occurs when the users update their wallet software) [6]. Finally, this same voting mechanism is also used for collective polls about implementation of some features [7].
This permits us to conjecture the properties that must be satisfied by the proof-of-work pricing function. Such a function must not enable a network participant to have a significant advantage over another participant; it requires a parity between common hardware and high cost of custom devices. From recent examples [8], we can see that the SHA-256 function used in the Bitcoin architecture does not possess this property as mining becomes more efficient on GPUs and ASIC devices when compared to high-end CPUs.
Therefore, Bitcoin creates favourable conditions for a large gap between the voting power of participants as it violates the “one-CPU-one-vote” principle since GPU and ASIC owners possess a much larger voting power when compared with CPU owners. It is a classical example of the Pareto principle where 20% of a system’s participants control more than 80% of the votes.
One could argue that such inequality is not relevant to the network’s security since it is not the small number of participants controlling the majority of the votes but the honesty of these participants that matters. However, such an argument is somewhat flawed since it is the possibility of cheap specialized hardware appearing, rather than the participants’ honesty, which poses a threat. To demonstrate this, let us take the following example. Suppose a malevolent individual gains significant mining power by creating his own mining farm through the cheap hardware described previously. Suppose the global hashrate then decreases significantly, even for a moment; he can now use his mining power to fork the chain and double-spend. As we shall see later in this article, it is not unlikely for the previously described event to take place.
2.3 Irregular emission
Bitcoin has a predetermined emission rate: each solved block produces a fixed amount of coins. Approximately every four years this reward is halved. The original intention was to create a limited smooth emission with exponential decay, but in fact we have a piecewise linear emission function whose breakpoints may cause problems to the Bitcoin infrastructure.
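(The emission schedule being described is the standard Bitcoin subsidy function; as a concrete reference, in Python:)

    # Bitcoin block subsidy: 50 BTC, halved every 210,000 blocks (~4 years).
    # (Core does this in integer satoshis with a right-shift; floats are
    # fine for illustration.)
    def block_subsidy(height: int) -> float:
        halvings = height // 210_000
        return 50.0 / 2 ** halvings if halvings < 64 else 0.0

    assert block_subsidy(209_999) == 50.0
    assert block_subsidy(210_000) == 25.0   # the November 28, 2012 drop
    assert block_subsidy(630_000) == 6.25   # the figure projected for 2020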
When the breakpoint occurs, miners start to receive only half of the value of their previous reward. The absolute difference between 12.5 and 6.25 BTC (projected for the year 2020) may seem tolerable. However, the 50 to 25 BTC drop that took place on November 28, 2012 was felt to be inappropriate by a significant number of members of the mining community. Figure 1 shows a dramatic decrease in the network’s hashrate at the end of November, exactly when the halving took place. This event could have been the perfect moment for the malevolent individual described in the proof-of-work function section to carry out a double spending attack [36].
Fig. 1. Bitcoin hashrate chart (source: http://bitcoin.sipa.be)
2.4 Hardcoded constants
Bitcoin has many hard-coded limits, where some are natural elements of the original design (e.g. block frequency, maximum amount of money supply, number of confirmations) whereas others seem to be artificial constraints. It is not so much the limits, as the inability to quickly change them if necessary, that causes the main drawbacks. Unfortunately, it is hard to predict when the constants may need to be changed, and replacing them may lead to terrible consequences.
A good example of a hardcoded limit change leading to disastrous consequences is the block size limit set to 250kb. This limit was sufficient to hold about 10000 standard transactions. In early 2013, this limit had almost been reached and an agreement was reached to increase the limit. The change was implemented in wallet version 0.8 and ended with a 24-blocks chain split and a successful double-spend attack [9]. While the bug was not in the Bitcoin protocol, but rather in the database engine, it could have been easily caught by a simple stress test if there was no artificially introduced block size limit.
Constants also act as a form of centralization point. Despite the peer-to-peer nature of Bitcoin, an overwhelming majority of nodes use the official reference client [10] developed by a small group of people. This group makes the decision to implement changes to the protocol and most people accept these changes irrespective of their “correctness”. Some decisions caused heated discussions and even calls for boycott [11], which indicates that the community and the developers may disagree on some important points. It therefore seems logical to have a protocol with user-configurable and self-adjusting variables as a possible way to avoid these problems.
2.5 Bulky scripts
The scripting system in Bitcoin is a heavy and complex feature. It potentially allows one to create sophisticated transactions [12], but some of its features are disabled due to security concerns and some have never even been used [13]. The script (including both senders’ and receivers’ parts) for the most popular transaction in Bitcoin looks like this: <sig> <pubKey> OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG. The script is 164 bytes long whereas its only purpose is to check if the receiver possesses the secret key required to verify his signature.
Read the rest of the white paper here: https://cryptonote.org/whitepaper.pdf
submitted by xmrhaelan to CryptoCurrency [link] [comments]

What's the f*****ng benefit of the reactivated OP_Codes?

Nobody has explained what we can do with the soon-to-be-reactivated opcodes for Bitcoin Cash, and nobody has explained why we need them. It's a fact that there are risks associated with them, and there is no sufficient testing of these risks by independent developers, nor is there a sufficient explanation of why they carry no risk. BitcoinABC developers, explain yourselves, please.
Edit: Instead of calling me a troll, please answer the question. If not, ask someone else.
Edit Edit: tomtomtom7 provided a refreshing answer to the question:
https://www.reddit.com/btc/comments/7z3ly4/to_the_people_who_thing_we_urgently_need_to_add/dulkmnf/
The OP_Codes were disabled because bugs were found, and worry existed that more bugs could exist.
They are now being re-enabled with these bugs fixed, with sufficient test cases and they will be put through thorough review.
These are missing pieces in the language for which various use cases have been proposed over the years.
The reason to include these, is because all developers from various implementations have agreed that this is a good idea. No objections are raised.
Note that this does not mean that all these OP_Codes will make it in the next hardfork. This is obviously uncertain when testing and reviewing is still being done.
This is not yet the case for OP_GROUP. Some objection and questions have been raised which takes time to discuss and time to come to agreement. IMO this is a very healthy process.
Another good comment is here
https://www.reddit.com/btc/comments/7z49at/whats_the_fng_benefit_of_the_reactivated_op_codes/dullcek/
One precise thing: allowing more bitwise logical operators can (and will) yield smaller scripts; this saves data on the blockchain - the hex code gets smaller.
Here is a detailed answer. I have not gone through it to check whether it is satisfying, but at least it is a very good start. Thank you, silverjustice.
But further, if you want specific advantages for some of these, then I recommend you check out the below from the scaling Bitcoin conference:
These opcodes are very useful. For example, with CAT you can do tree signatures: even if you have a very complicated multisig design, using CAT you could reduce its size to log(n) - it would be much more compact. Or with XOR we could do some kind of deterministic random number generator by combining secret values from different parties, so that nobody could cheat - they could combine them and generate a new random number. If people think-- ... we could use LEFT to make a weaker hash. These opcodes were re-enabled in the sidechain Elements project, a sidechain from Bitcoin Core. We can reintroduce these functions to bitcoin.
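(The log(n) claim for CAT tree signatures comes from Merkle proofs: commit to n keys with a Merkle root, then spend by revealing one key plus log2(n) sibling hashes, each proof step being hash(CAT(left, right)) - exactly what OP_CAT plus a hash opcode recomputes on the stack. An illustrative Python sketch, assuming a power-of-two number of leaves:)

    import hashlib

    def H(b: bytes) -> bytes:
        return hashlib.sha256(b).digest()

    def merkle_root(leaves):
        level = [H(leaf) for leaf in leaves]        # e.g. leaves = public keys
        while len(level) > 1:
            level = [H(level[i] + level[i + 1])     # the on-stack CAT-then-hash step
                     for i in range(0, len(level), 2)]
        return level[0]

    def verify_path(leaf, path, root):
        """path: [(sibling_hash, sibling_is_left), ...], log2(n) entries."""
        acc = H(leaf)
        for sibling, sibling_is_left in path:
            acc = H(sibling + acc) if sibling_is_left else H(acc + sibling)
        return acc == root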
The other problem is the ... numeric operations which were disabled by Satoshi. There's another problem, which is that the range of values accepted by script is limited and confusing, because the CScript.. is processed as ..... bit integers internally, but for these opcodes it's only 32 bits at most. So it's quite confusing. The other problem is that we have this.. it requires 2^51 to encode or calculate or manipulate this number, so we need at least 52 bits, but right now it is only 32 bits. So the proposal is to expand the valid input range to 7 bytes, which would allow 56 bits, and to limit the maximum size to 7 bytes so we could have the same size for inputs and outputs. For these operations, we could re-enable them within these safe limits. It would be safe for us to have these functions again.
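(The arithmetic behind the 52-bit remark checks out: the maximum possible bitcoin amount in satoshis needs 51 bits, so 32-bit script math cannot even represent an output value.)

    MAX_MONEY = 21_000_000 * 100_000_000   # total supply in satoshis
    assert MAX_MONEY.bit_length() == 51    # 2.1e15 needs 51 bits, hence >= 52-bit math
    assert MAX_MONEY > 2**32               # far beyond the current 4-byte limit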
The other problem is that we currently cannot commit to additional scripts. In the original design of bitcoin, we could have script operations inside of the signature, but the problem is that the scriptSig is not covered by the signature itself, so any script in the scriptSig is modifiable by any third party in the network. For example, if we tried to do a CHECKSIG operation in the signature, people could simply replace it with an OP_0 and invalidate the transaction. This is a bypass of the signature check in the scriptSig. But actually this function is really useful - for example, we can do delegation: people could add additional scripts to a new UTXO without first spending it, so people could do something like letting their son spend their coin within a year if it is not first spent otherwise. And also, people talk about replay protection. So with some other new opcode, like pushing the blockhash to the stack, we could have replay protection to make sure the transaction is valid only in a specified blockchain.
So the proposal is that in the future CHECKSIG should have the ability to sign additional scripts and to execute those scripts. And finally, the other problem is that script has limited access to different parts of the transaction. There is only one type of operation that is allowed to inspect different parts of the transaction, which is CHECKSIG and CHECKMULTISIG, but it is very limited. There are sighash limitations here... there are only 6 types of sighash. The advantage of doing it this way is that it's very compact and uses only one byte to indicate which components to sign. But the problem is that it's inflexible. The meaning of a sighash is set at the beginning and you can't change it - you'd need a new witness version to have another kind of checksig. And the other problem is that sighash handling can be complex and people might make mistakes - Satoshi made mistakes in the sighash design, such as the well-known bug in validation time and also the SIGHASH_SINGLE bug. It's not easy to prevent.
The proposal is that we might have the next generation of sighash (sighash v2), expanded to two bytes, allowing it to cover different parts of the transaction and allowing people to choose which components they would like to sign. This would allow more flexibility and hopefully not be overly complicated. But still, this is probably not enough for more flexible designs.
Another proposal is OP_PUSHTXDATA, which pushes the value of different components of a transaction onto the stack. It's easy to implement: for example, we could just push the scriptPubKey of the second output onto the stack. We could do something more than sighash allows - with sighash we can only check whether something is equal to a specified value, but if we could push a value, like the value of an output, onto the stack, then we could use other operations like greater-than or less-than, and we could check, say, that the value of output x must be at least y bitcoin, where y is a fixed value.
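(A toy illustration of the OP_PUSHTXDATA idea - it is a hypothetical opcode, so the field names below are made up for the sketch: once a transaction component is on the stack, ordinary comparisons can constrain it.)

    # Hypothetical evaluator step: push a chosen transaction field, then let
    # normal stack ops (greater-than etc.) enforce conditions on it.
    def op_pushtxdata(stack, tx, field, index):
        if field == "output_value":
            stack.append(tx["outputs"][index]["value"])
        elif field == "output_script":
            stack.append(tx["outputs"][index]["script"])

    # e.g. "the value of output 0 must be at least 1000 satoshis":
    stack, tx = [], {"outputs": [{"value": 1500, "script": b""}]}
    op_pushtxdata(stack, tx, "output_value", 0)
    assert stack.pop() >= 1000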
There are some other useful functions, like MAST, which would allow for more compact scripts by hiding the unexecuted branches. There's also signature aggregation, which would allow n-of-n multisig to be reduced to a single signature, and so on. In the Elements project they implemented CHECKSIGFROMSTACK, where they don't check the transaction structure but instead verify a message on the stack. It could be some message that's not bitcoin - perhaps a cross-chain swap, or another bitcoin UTXO. And we might also have elliptic curve point addition and operations, which are useful in lightning network designs.
Here are some related works in progress. If you are interested in this topic, I would like to encourage you to join our discussions, because it's a very active topic: jl2012's BIP114 (MAST), maaku's MBV, luke-jr's version-1 witness program, Simplicity, etc.
So you have your script template, the amount value, and there is a block impactor(?) because we have the SHA chain which allows you to have the hashes.. we can have that error rate constant because you need the HTLC hashes to properly revoke the prior states, and if you can't do that then you can't construct the redeem script. Right now it needs a signature for every state, you need all the HTLCs, it needs the network verification state, and there's another cool thing you can do with it, which is like trap-door verification - you can include it in the transaction itself, and there can be a clause where there is some margin for it.. which makes it powerful, and then you can make it more private with these constructs.
One further thing is that in the transformation we have a privacy issue, because we need to keep going forward, we need to have the private state. So there's a history to this. The current approach uses revocations, which was one of the cool things about lightning. We used to have deckman(?) signatures: we had a sequence value of like 30 days; we did an update, we had to switch sides, then we make it 29, then 27, etc. You can only broadcast the most recent state, because otherwise the other party can transact the other transaction. If you start with 30 days then you can only do about 30 bidirectional switches. Then there were cdecker's payment channels, where you have a root tree and every time you need to- you had two payment channels, you had to rebalance them, and then on your part of the channel you can reset the channel state. You can do 30 this way, you have another tree, you can do it that way, and then there's a new version of it with indefinite lifetime... by keeping the transaction in CSV. The drawback of that approach is that you have a large validation tree - in the worst case you have 8 or 10 on the tree - and then you need the prior state, and then you do the 12 per day, and every time you make a state you have to revoke the preimage from the prior state. This is cool because if they ever broadcast the entire state, each one has the clause so that you can draw the entire amount of money in the event of a violation. There are some limitations for doing more complex verifications, and you have this log(n) state that you have to deal with when you deal with that.
We're going to do the key power(?) on the stack to limit key verifications on this main contract. This is all composable. You can do discreet log contracts. You can now check signatures on arbitrary messages. You can sign a message, and then we can enforce structure on the messages themselves. Right now you need to have sequence numbers: each state, we are going to increment the sequence numbers. So you give me a sequence number on that state. On the outputs we have a commitment to the sequence number and the value r. So people on chain will know how many places we did in that(?). The cool part about this is that, because we have a sequence number, I have the one if it's high enough. Then I am opening that commitment to say this is state 5, and I present to you a new signed commitment and open that as well - that's in a validation state. The cool thing is that you only need one of those. So we have some auxiliary state, and each time I have a new state I can drop the old state. I have a signed commitment to revoke the prior state. This is a big deal because the state is much smaller. Currently we require you to- if we use a state machine on state 2, and it also has implications for verifications and watchtowers.
So on lightning, there's this technique itself- it's the timelock CSV value, and if you can't react within that value then you can't go to court and enforce judgement on the attacker. So the watchtower is a requirement: you delegate the state watching to the watchtower. They know which channels you're watching. You send some initial points, like a script template. For every state you send the signature and the verification state. They can use the verification state that collapses into a log(n) tree - you can basically use state where you send half the txids, and you can decrypt this in... some time.
submitted by Der_Bergmann to btc [link] [comments]

This is how we will recover coins sent to the wrong address or an unowned address

Don't worry, I'm NOT advocating that transactions should be reversible.
Many of us have accidentally sent coins to the wrong address or an unowned address, resulting in those coins being permanently unrecoverable and unspendable. I haven't made this mistake (yet), but damn it makes me nervous when I send larger transactions.
Unfortunately, we'll never be able to revert those past mistakes, but with a small change to the bitcoin protocol, we can make it so that we can recover the coins when we make this sort of mistake in the future.
Please let me know your thoughts about my solution below, and if something like this is already in the works.

The solution, conceptually

If everybody knew everyone else's public keys, we could prevent these permanent mistakes with multisig scripts. The change I'm proposing will make it so we can prevent the mistakes without knowing each other's public keys, but I'll explain it in terms of multisig, because the solution is conceptually the same, and easier to explain:
Instead of sending coins directly to a recipient address, send your coins to a 1-of-2 multisig account, shared by both you and the recipient.
This means that effectively, the transaction is "cancellable", but only until the recipient sends the coins to his own account. At that point the coins are irreversibly his.
The downside of this is that when receiving a payment, you must explicitly accept it before the coins are truly yours -- you should not consider the coins as yours until you do this. The upside is that it guarantees that coins are never lost at inactive addresses.

Problems that this solves

  1. Sending to an unowned address (base58Check almost always protects against this)
  2. Sending to an address that was owned, but the private keys were lost and nobody has control of the address anymore
  3. Sending to the wrong (but owned) address, unless the unintended recipient is quick to claim the coins
  4. Sending your coins to the wrong address on an exchange (i.e. an address for a forked blockchain)

Implementation and technical details

We can accomplish the above without knowledge of each others' public keys, if we use a custom pubkey script. Nodes only accept transactions with standard pubkey scripts, so we'd need to define a new standard script.
The typical P2PKH script looks like this:
scriptPubKey: OP_DUP OP_HASH160 <pubKeyHash> OP_EQUALVERIFY OP_CHECKSIG
scriptSig: <sig> <pubKey>
The new standard script I'm proposing is this:
scriptPubKey: OP_DUP OP_HASH160 OP_DUP <yourPubKeyHash> OP_EQUAL OP_SWAP <recipientPubKeyHash> OP_EQUAL OP_ADD OP_VERIFY OP_CHECKSIG
(<yourPubKeyHash> would be the hash for your address, and <recipientPubKeyHash> the hash for the recipient's address)
scriptSig: <sig> <pubKey>
This script allows the coins to be spent by either the owner of <yourPubKeyHash> or <recipientPubKeyHash>.
I call this new transaction type Pay To Either Public Key Hash (P2EPKH), or colloquially, "fuck-up protection".
Of course, wallets would have to be able to recognize the new transaction type and offer controls to claim coins from incoming P2EPKH transactions or to cancel unclaimed P2EPKH transactions.
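(For concreteness, here is a minimal Python sketch serializing the proposed scriptPubKey from the two 20-byte HASH160s. The opcode byte values are the standard Bitcoin ones; the P2EPKH layout itself is of course just this proposal, not an existing standard.)

    OP_DUP, OP_HASH160, OP_EQUAL, OP_SWAP = 0x76, 0xA9, 0x87, 0x7C
    OP_ADD, OP_VERIFY, OP_CHECKSIG = 0x93, 0x69, 0xAC

    def p2epkh_script(sender_h160: bytes, recipient_h160: bytes) -> bytes:
        """Spendable by whichever key hashes to either of the two values."""
        assert len(sender_h160) == 20 and len(recipient_h160) == 20
        push20 = bytes([20])                      # direct push of 20 bytes
        return (bytes([OP_DUP, OP_HASH160, OP_DUP]) + push20 + sender_h160
                + bytes([OP_EQUAL, OP_SWAP]) + push20 + recipient_h160
                + bytes([OP_EQUAL, OP_ADD, OP_VERIFY, OP_CHECKSIG]))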

Feedback, please.

What do you all think? Is this generally a decent idea? Has this idea been floated around before? Is there another solution for this issue in the works? If this is a good idea, how do I get the attention of the devs?
submitted by ransoing to btc [link] [comments]

As part of my ongoing effort to develop stupid shit for Garlicoin, I present you: W-addresses!

“Wait, what?!” I hear you asking? Well…(buckle up, this is another one of my technical posts that goes on, and on…)
For some time now, I have been using native SegWit (Pay-to-Witness-Public-Key-Hash, P2WPKH) transactions for myself. Mostly because they have a 75% fee subsidy on signature data (which comes out to a ~50% fee subsidy on the entire transaction, depending on the type of transaction) - and I am Dutch, after all ;-)
It turns out that Garlicoin Core kind of supports them and kind of doesn't. If you manually register the transaction redeem script with your wallet (using the addwitnessaddress command), it will start recognizing them on the blockchain, but it gets kind of confused about how to deal with them, so it registers them all as 'change' transactions. Still, this means you can receive coins using these types of transactions and pay with them in all the ways you can with regular Garlicoins, except your transactions are cheaper.
However, sending coins using native SegWit is quite a hassle. You can basically only do it by creating your own raw transactions (createrawtransaction, edit it to make it native SegWit, fundrawtransaction, signrawtransaction, sendrawtransaction). On top of this, any change addresses the wallet creates are still legacy/normal Garlicoin addresses, so you will end up with a bunch of unspent transaction outputs (UTXOs) for which you have to pay the full fee anyway. I decided we (I) could do better than this.
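(For reference, the manual flow sketched as JSON-RPC calls via the python-bitcoinrpc package - the URL, port and credentials are placeholders, and the "edit it to make it native SegWit" step stays manual:)

    from bitcoinrpc.authproxy import AuthServiceProxy

    rpc = AuthServiceProxy("http://user:pass@127.0.0.1:42068")  # placeholder credentials/port

    # Build an output-only skeleton first (to a regular address; the output
    # script then has to be hand-edited into the native witness program).
    raw = rpc.createrawtransaction([], {"G-address-here": 1.0})
    # ... manually rewrite the output script to 0x00 0x14 <pubkey-hash> ...
    funded = rpc.fundrawtransaction(raw)["hex"]     # let the wallet pick inputs
    signed = rpc.signrawtransaction(funded)["hex"]  # older-style call, as in the post
    txid = rpc.sendrawtransaction(signed)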
But first a few steps back. What is this native SegWit anyway and weren’t people already using SegWit? Wasn’t there a user that just after mainnet launched accidentally made a SegWit transaction? So what the hell am I talking about?
To understand this, you will need to know a few things about what SegWit is and how Bitcoin/Garlicoin transactions work in general. Note that this bit gets really technical, so if you are not interested, you might want to skip ahead. A lot.
First thing to understand is that addresses are not really a thing if you look at the blockchain. While nodes and explorers will interpret parts of a transaction as addresses, in reality addresses are just an abstraction around Bitcoin Script and an easy way to send coins, instead of asking people "hey, can you send some coins to the network in such a way that only the private key that corresponds to public key XYZ can unlock them?". Let's look at an example: say I ask you to send coins to my address GR1Vcgj2r6EjGQJHHGkAUr1XnidA19MrxC. What ends up happening is that you send out a transaction where you say that the coins are locked in the blockchain and can only be unlocked by successfully executing the following script:
OP_DUP OP_HASH160 4e9856671c3abb2f03b7d80b9238e8f5ecd9f050 OP_EQUALVERIFY OP_CHECKSIG
Now, without getting too technical, this means something like: duplicate the public key the spender supplies, hash it, check that the hash equals 4e9856671c3abb2f03b7d80b9238e8f5ecd9f050, and then verify the spender's signature against that public key.
As you can see, the address actually represents a well-known piece of script. This starts making sense if you look at the decoded address:
26 4E9856671C3ABB2F03B7D80B9238E8F5ECD9F050 F8F1F945
The first byte (0x26, or 38 decimal) is the version byte. This tells clients how to interpret the rest of the data. In our case, 38 means Pay-to-Public-Key-Hash (P2PKH), in other words the script mentioned above. The part after that is the HASH160 (SHA-256 followed by RIPEMD-160) of the public key, and the final 4 bytes are a checksum to verify you did not make a typo when entering the address.
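(The decode step in Python, for the curious - a minimal Base58Check sketch, assuming the standard 25-byte version+hash+checksum layout shown above:)

    import hashlib

    ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def b58check_decode(addr: str):
        n = 0
        for char in addr:
            n = n * 58 + ALPHABET.index(char)
        raw = n.to_bytes(25, "big")               # version(1) + hash160(20) + checksum(4)
        payload, checksum = raw[:-4], raw[-4:]
        digest = hashlib.sha256(hashlib.sha256(payload).digest()).digest()
        if digest[:4] != checksum:
            raise ValueError("typo in address: checksum mismatch")
        return payload[0], payload[1:]            # (version byte, pubkey hash)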
Enter SegWit. What SegWit exactly is depends on who you are talking to; however, it is mostly a different transaction format/protocol. The main change of SegWit is that signature data is no longer included in the transaction (fixing transaction malleability). Instead, witness data is sent separate from the transaction itself and outside of the (main) blocks.
This is not really that much of an issue, except for the fact that people wanted to enable SegWit as a soft-fork instead of a hard-fork. This means that somehow unupgraded nodes needed a way to deal with these new transaction types without being able to verify them.
The solution turned out to be to make use of an implementation detail of Bitcoin Script: if a piece of script executes without any errors, the last piece of data left on the stack determines whether the transaction is valid (non-zero) or invalid (zero). This is what they used to implement SegWit.
So what does this look like? Well, you might be surprised how simple a P2WPKH (the SegWit way of doing a P2PKH transaction) script looks:
00 4e9856671c3abb2f03b7d80b9238e8f5ecd9f050
Yes. That’s it.
The first byte is the witness program version byte, i.e. it tells you how the other data should be interpreted (very similar to how addresses work). Then there is the hash of the public key. As you can see, SegWit does not actually use Bitcoin Script - mostly because it needs old nodes to 'just accept' its transactions. Interestingly enough, though, while the transaction format changed, the transaction data is pretty much the same.
This means that these kinds of SegWit transactions need a new way of addressing them. Now, you might think that this is where the '3' addresses on Bitcoin or the 'M' addresses on Garlicoin come in. However, that is not the case.
These addresses are what are called Pay-to-Script-Hash (P2SH) addresses. Their script looks like this:
OP_HASH160 35521b9e015240942ad6b0735c6e7365354b0794 OP_EQUAL
Huh? Yeah, these are a very special type of transaction, which kind of goes back to the "hey, can you send some coins to the network in such a way that only the private key that corresponds to public key XYZ can unlock them?" issue.
These transactions are a way to have arbitrary smart contracts (within the limits of Bitcoin Script) determine whether a transaction output can be spent or not, without the sender of the coins having to deal with your scripts. Basically, they use a hash of the "real" script, which whoever owns the coins has to provide when they want to spend them, together with the specific inputs required by the script. This functionality is for example used in multi-signature (MultiSig) wallets, without requiring someone sending money to these wallets to deal with random bits of information like how many signatures are required, how many private keys belong to the wallet, etc.
This same method is used for so called P2SH-wrapped SegWit transactions (or P2SH-P2WPKH). Consider our earlier SegWit transaction output script:
00 4e9856671c3abb2f03b7d80b9238e8f5ecd9f050
Or 00144e9856671c3abb2f03b7d80b9238e8f5ecd9f050 in low-level hex. The P2SH script for this would be:
OP_HASH160 a059c8385124aa273dd3feaf52f4d94d42f01796 OP_EQUAL
Which would give us the address MNX1uHyAQMXsGiGt5wACiyMUgjHn1qk1Kw. This is what is now widely known and used as SegWit, but it is P2SH-wrapped SegWit, not native or "real" SegWit. Native would be using the data-only SegWit script directly.
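(The 'M' address above is derived mechanically from the witness program; a small Python sketch of that wrapping - note hashlib's ripemd160 requires OpenSSL support:)

    import hashlib

    def hash160(data: bytes) -> bytes:
        return hashlib.new("ripemd160", hashlib.sha256(data).digest()).digest()

    pubkey_hash = bytes.fromhex("4e9856671c3abb2f03b7d80b9238e8f5ecd9f050")
    redeem_script = bytes([0x00, 0x14]) + pubkey_hash   # the native witness program
    # The P2SH script commits to HASH160(redeem_script); this should print the
    # a059c838... hash embedded in the script above.
    print(hash160(redeem_script).hex())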
The reason for using the P2SH variant is mostly about compatibility. While SegWit nodes understand these newer transactions, they were never officially assigned a way to convert them to addresses. Hence, they will show up in blockchain explorers as Unparsed address [0] or something similar. Then there is also the whole thing about old nodes and relaying non-standard transactions, but I will skip that bit for now.
Bitcoin is using/going to use the new Bech32 addresses for native SegWit transactions, which look completely different from the old Base58-encoded addresses. However, right now, especially on Garlicoin, you cannot really use them and have to use the P2SH variant. But why not use these new cool transaction types without having to deal with all that useless and complex P2SH wrapping, right? Right? …
Well, I went ahead and gave them their (unofficial) address space.
So last Thursday I made some changes to Garlicoin Core, to make dealing with these native SegWit transactions a lot easier. In short, the changes consist of:
  • Assigning address version byte 73 to them, in other words addresses starting with a ‘W’ (for ‘witness’).
  • Allowing the use of ‘W’ addresses when sending coins.
  • Make the wallet automatically recognize the SegWit transaction type for any newly generated address.
  • Add the getwitnessaddress command, which decodes a version 38 ‘G’ address and re-encodes the same data as a version 73 ‘W’ address (unfortunately it is not as simple as just changing the first letter - see the sketch after this list). Note that this can be any address, not just your own. (That said, you should not send SegWit transactions to people not expecting them; always use the address given to you.)
  • Added the usewitnesschangeaddress configuration setting, to automatically use the cheaper SegWit transaction outputs for transaction change outputs.
  • (Since using the 'W' address only changes the way coins are sent to you, and the private key used for both transaction types is the same:) when receiving coins, they all show up under the original ‘G’ address, whether a SegWit or legacy/normal transaction type was used. The idea behind this is that both are actually the same "physical" (?) address; just the way coins are sent to it differs. Address book entries are also merged and default to the ‘G’ address type.
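(What the getwitnessaddress conversion boils down to, as a Python sketch - decode, swap version byte 38 for 73, re-encode. The checksum changes with the version byte, which is why editing the first letter alone cannot work. Version byte 73 is the unofficial assignment from this patch.)

    import hashlib

    ALPHABET = "123456789ABCDEFGHJKLMNPQRSTUVWXYZabcdefghijkmnopqrstuvwxyz"

    def b58check_encode(version: int, payload: bytes) -> str:
        data = bytes([version]) + payload
        data += hashlib.sha256(hashlib.sha256(data).digest()).digest()[:4]
        n = int.from_bytes(data, "big")
        out = ""
        while n:
            n, rem = divmod(n, 58)
            out = ALPHABET[rem] + out
        return out

    def b58check_decode(addr: str):
        n = 0
        for char in addr:
            n = n * 58 + ALPHABET.index(char)
        raw = n.to_bytes(25, "big")
        assert hashlib.sha256(hashlib.sha256(raw[:-4]).digest()).digest()[:4] == raw[-4:]
        return raw[0], raw[1:-4]

    def g_to_w(g_addr: str) -> str:
        version, pubkey_hash = b58check_decode(g_addr)
        assert version == 38                      # 'G' (P2PKH) address
        return b58check_encode(73, pubkey_hash)   # 73 -> leading 'W', per the patch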
Anyway, I don’t expect people to actually use this or it getting merged into mainline or anything. I actually mostly made this for myself, but thought I should share anyway. I guess it was also a nice opportunity to talk a bit about transactions and SegWit. :-)
Btw, I also changed my pool to allow mining to ‘W’ addresses, to make coin consolidation cheaper (due to the SegWit fee subsidy). Right now this is only for instant payout though (as I would have to update the wallet node the pool is using for daily payout, which I haven’t done yet).
Also note that you can actually mine to a ‘W’ address (and therefore use cheaper transactions) even if you are running the official, non-patched version of Garlicoin Core, however:
  • You need to manually convert your ‘G’ address to a ‘W’ address.
  • You need to run the addwitnessaddress command (Help -> Debug Window -> Console) to make the wallet recognize SegWit transactions (you can ignore the ‘M’ address it produces).
  • The wallet might get a bit confused as it does not really understand how it got the coins. This is mostly notable in the ‘Coin Control’ window if you have it enabled. Apart from that everything should still work though.
submitted by nuc1e4r5n4k3 to garlicoin [link] [comments]
