To view Part 1 of this blog series, click here.
Circling back to our main interest, the world of IoT: to create a blockchain shared between autonomous devices that fulfills the security properties required to ensure operation of the ecosystem, the ‘good’ devices need to hold at least a 51% share of the compute power in the system. To put this requirement in perspective, consider a Raspberry Pi 3, which represents a fairly well-equipped IoT device in terms of memory, storage capacity and CPU power (most current IoT devices are far behind it in computing capabilities). An RPi3 can generate about 10 hashes per second for the Ethereum Proof of Work. Your kid’s gaming rig, equipped with an Nvidia GTX 1070 GPU, performs the same task at 25.1 million hashes per second. This means that, to have the same probability of completing the Proof of Work before a single hacker with a modern-day PC, the system needs to comprise at least 2.5 million RPi3 devices. Put differently, any IoT system using the same distributed trustless consensus paradigm as Bitcoin needs to grow beyond 2.5 million devices before it could be deemed secure against DoS and transaction-reversal attacks by individuals. This does not even account for government-sponsored or organized-crime hackers, who have access to far more powerful systems, or for attackers with purpose-built hardware based on the FPGAs typically used to mine Bitcoin efficiently.
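The arithmetic behind the 2.5 million figure can be checked with a quick back-of-the-envelope calculation (the hash rates are the illustrative values quoted above, not benchmarks):

```python
# Back-of-the-envelope check: how many RPi3-class devices are needed to
# match a single GPU miner's hash rate (figures taken from the text).
rpi3_hashrate = 10      # hashes/second for Ethereum PoW on a Raspberry Pi 3
gpu_hashrate = 25.1e6   # hashes/second for an Nvidia GTX 1070

devices_needed = gpu_hashrate / rpi3_hashrate
print(f"{devices_needed:,.0f} RPi3 devices to match one GPU")
# prints "2,510,000 RPi3 devices to match one GPU"
```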
Another consideration for enabling IoT devices as full nodes in a blockchain is the storage requirement. Every full node stores the complete ledger, so storage demand grows continuously with the number and size of the transactions added to the chain. A Bitcoin Core client, for example, requires more than 100GB of storage to download the blockchain, and that is a real deal breaker for the limited-bandwidth, limited-storage devices that make up most IoT systems.
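As a rough illustration of how fast a full node's ledger grows, assuming Bitcoin-like parameters (a 1MB average block every 10 minutes; these are assumed figures, not measurements from the text):

```python
# Rough full-node storage growth estimate under assumed Bitcoin-like
# parameters: 1 MB average block size, one block roughly every 10 minutes.
block_size_mb = 1.0
blocks_per_day = 24 * 6  # 144 blocks/day at a 10-minute interval

growth_per_year_gb = block_size_mb * blocks_per_day * 365 / 1024
print(f"~{growth_per_year_gb:.0f} GB of new chain data per year")
```

Tens of gigabytes of growth per year on top of an already 100GB+ chain puts a full node well out of reach for typical IoT hardware.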
Considering this, I would argue that a blockchain based on Bitcoin technology, or more generally speaking on Proof of Work for implementing trustless consensus, will not be able to provide the required levels of security with current-generation IoT systems, at least not in a fully distributed and autonomous way. Solutions present themselves in the form of centralized cloud servers and cloud storage that accommodate the compute and storage needs of the blockchain, reverting to a more centralized model in which IoT devices become light clients, such as proposed in the light client protocol for Ethereum.
Hard and Soft Forks
Vulnerabilities are a fact of digital life. A hard fork on a blockchain is a software upgrade that introduces new consensus rules that are not compatible with the older software. New consensus rules can be required to fix vulnerabilities, extend capabilities or add new functionality. Consider a new rule that allows a block size of 4kB instead of the previous 2kB. In a distributed system, not all nodes will be upgraded at the same time, so at some point there will be a critical mass of nodes running the new software while others still run the old software. Because blocks generated by the upgraded nodes are 4kB, nodes running the old software will not validate them. At that point the blockchain breaks into two chains that start independent lives. For cryptocurrencies, the entity in charge of the blockchain software will have to convince the majority of the miners to upgrade to the new software and hope other miners follow their lead. If there is no incentive for miners or no human relationship to leverage, the hard fork could result in a split. If the majority of the miners agree to upgrade, however, the others will be stuck on a chain split off from the main blockchain and will start to lose incentive. The economics of the blockchain currency will push those miners back to the new chain by upgrading to the newest software, until the next hard fork. There might also be a conflict of interest between two major stakeholders in the blockchain: while one wants to go in a certain direction, the other wants to stay put and has other plans for the future. This can result in a permanent hard fork where two separate blockchains continue their independent lives. This is what happened to Ethereum in July 2016, when it split into two separate chains and currencies, Ethereum and Ethereum Classic, each with its own currency and ethos.
A soft fork happens when there is a backwards-compatible change in the software. Continuing our previous example, imagine a new rule that allows a maximum block size of 1kB while the previous rule supported 2kB blocks. Non-upgraded nodes will accept blocks from the newer versions, as those blocks still respect the old consensus rules. However, any new blocks mined by non-upgraded nodes will be refused by the newer-version nodes, and as the number of upgraded nodes increases, the old-version miners will no longer be rewarded for new blocks and will be incentivized to upgrade in order to profit again.
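The asymmetry between the two fork types comes down to which side’s consensus rules accept the other’s blocks. A minimal sketch, using block size as the only consensus rule (sizes in kB as in the examples above):

```python
def valid(block_kb: int, max_kb: int) -> bool:
    """A node accepts a block iff it respects that node's size limit."""
    return block_kb <= max_kb

OLD_LIMIT = 2  # kB, the rule before any upgrade

# Hard fork: the new rule RAISES the limit to 4 kB.
# Old nodes reject the new 4 kB blocks -> the chain splits.
assert not valid(4, OLD_LIMIT)

# Soft fork: the new rule LOWERS the limit to 1 kB.
# Old nodes still accept the new, smaller blocks -> backwards compatible,
# but upgraded nodes reject old-style 2 kB blocks.
assert valid(1, OLD_LIMIT)
assert not valid(2, 1)
```

Tightening the rules stays compatible in one direction; loosening them is compatible in neither, which is why only the latter forces a split.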
Upgrading software and breaking backwards compatibility is a delicate matter, but without it, innovation becomes inhibited. In a blockchain environment, where one needs to convince third parties in order to prevent the system from splintering, breaking backwards compatibility becomes even more delicate. Most of the convincing in blockchain is based on economics: incentives drive miners to follow the lead and vision of the blockchain developers.
Merge-Mined Sidechains – piggy-backing on established blockchains
Different blockchains are emerging, each solving different problems and providing more or less flexibility and extensibility; some are public while others are kept private. It is clear that no single blockchain will cover all use cases and solve all our distributed problems. Bitcoin is the godfather of all blockchains and is still considered the most secure and most mature, with the highest commerce volume, the most developers, the highest market cap, the most code review, the highest mining hash rate and the most academic analysis. So why not leverage the existing Bitcoin blockchain for new applications? It would benefit a great many new applications, because distributed trustless consensus based on Proof of Work requires the good actors to hold a majority share of computing power to secure the ecosystem against attacks. Piggy-backing on the oldest, most elaborate and most established blockchain, Bitcoin, seems like a great idea for new and innovative applications.
Technically speaking, the Bitcoin blockchain can accommodate any data, not only currency transactions. This was the initial idea behind the BitDNS proposal: a project to extend Bitcoin’s technology to a domain name service, expanding the software to support transactions for registering, updating and transferring domains. The proposal was released by Vincent Durham, a pseudonymous author who later disappeared, much like Satoshi. The project eventually became an altchain with its own altcoin, known as Namecoin. BitDNS morphed into an altchain project because Satoshi Nakamoto objected to the original idea, based on the view that Bitcoin is a social contract: users agree to pay the cost of storing the Bitcoin blockchain and in return get to use the currency that results from it for free. If Bitcoin users had to store non-currency data such as DNS information, that would violate the social contract. Satoshi did, however, propose a modified Proof of Work scheme that would allow miners to mine Bitcoin, BitDNS and any other ‘BitX’ blocks that might come later, without performance loss. This idea, called Auxiliary Proof of Work (AuxPoW), allows an auxiliary blockchain to accept the Proof of Work performed on another blockchain as its own. Satoshi argued that this would allow multiple blockchains to co-exist without being a danger to each other, even if many chains’ miners ganged up on one chain. The proposal resulted in a merged-mining specification and merge-mined sidechains such as Namecoin.
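The core of AuxPoW is that one successful proof of work can be claimed by both chains: the parent-chain block header commits to (a hash of) the auxiliary chain’s block, so a single winning hash satisfies both chains’ difficulty targets. A toy sketch of that idea (real merged mining embeds the commitment in Bitcoin’s coinbase transaction; the names, targets and header layout here are purely illustrative):

```python
import hashlib

def h(data: bytes) -> int:
    """Interpret a SHA-256 digest as a big integer, like a PoW hash."""
    return int.from_bytes(hashlib.sha256(data).digest(), "big")

# The parent header commits to the auxiliary chain's candidate block.
aux_block = b"namecoin-style auxiliary block"
aux_commitment = hashlib.sha256(aux_block).digest()

parent_target = 2**252  # easy toy difficulty so the demo finishes instantly
aux_target = 2**253     # the auxiliary chain may have its own (easier) target

# Mine ONCE, on the parent header that embeds the aux commitment.
nonce = 0
while True:
    header = b"parent-header" + aux_commitment + nonce.to_bytes(8, "big")
    pow_hash = h(header)
    if pow_hash < parent_target:
        break
    nonce += 1

# The same single proof of work is valid on both chains.
assert pow_hash < parent_target  # parent chain accepts it
assert pow_hash < aux_target     # auxiliary chain accepts it too
```

The miner does no extra hashing for the sidechain, which is exactly the “without performance loss” property Satoshi described.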
Merge-mining incentivizes miners by rewarding them not only with the main chain’s currency but also with the sidechain’s currency, for little extra effort: again an idea driven by the economics of the system. For the sidechain developers, however, it means they need to provide software that is interoperable with both the main chain, e.g. Bitcoin, and their own chain (the sidechain). This severely limits the developers of the sidechain, as they are completely dependent on the advances of the main chain. When Bitcoin performs a hard fork, for example, all sidechains need to release a new version of their software that is compatible with the new Bitcoin Core client, or they risk losing the miners who agreed to run their software. So while merge-mined sidechains gain additional security from a critical share of compute power (protected by the economics that make an attack too expensive to perform), they have to follow closely any change in direction of the blockchain they are merge-mined with, even if they do not agree with it or see an incentive for doing so.
Proof of Stake
Even if the theory is still catching up and the system works better in reality than on paper, Bitcoin has proven that Proof of Work can achieve consensus in a distributed system: clients simply pick the longest valid chain, corresponding to the highest amount of work, as the correct chain. Proof of Work, however, is very inefficient in terms of energy consumption, and that cost is one of the economic foundations that secures the chain from malicious nodes in the trustless system. The same property has also led miners to centralize their hashing power, which is obviously not in line with the spirit of a network intended to minimize dependence on a few trusted parties.
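The “longest valid chain” rule is, more precisely, the chain with the most cumulative work. A minimal sketch of that selection rule (chains modeled as lists of per-block work values; all numbers illustrative):

```python
def chain_work(chain: list[dict]) -> int:
    """Total proof of work accumulated in a chain (sum of per-block work)."""
    return sum(block["work"] for block in chain)

def best_chain(chains: list[list[dict]]) -> list[dict]:
    """Clients follow the valid chain with the highest cumulative work."""
    return max(chains, key=chain_work)

chain_a = [{"work": 10}, {"work": 10}, {"work": 10}]  # 3 blocks, 30 work
chain_b = [{"work": 25}, {"work": 25}]                # 2 blocks, 50 work

# A shorter chain can still win: cumulative work, not block count, decides.
assert best_chain([chain_a, chain_b]) is chain_b
```

This is why an attacker needs a majority of hash power: only then can they consistently build the heaviest chain.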
Proof of Stake (POS) is not about mining but about validating. With Proof of Stake, every participant in the network holds an amount of stake, and the larger that stake, the higher the probability of being allowed to add the next block to the chain. In POS blockchains, blocks are said to be ‘minted’ rather than ‘mined.’ Minting a block does not create new coins as a reward, so in POS-based cryptocurrencies the total number of coins typically does not change over time the way it does in POW-based cryptocurrencies.
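The stake-weighted selection can be sketched as follows (a deliberately simplified model; real POS protocols add randomness beacons, slashing conditions and more, and the validator names and stakes here are made up):

```python
import random

def pick_minter(stakes: dict, rng=random):
    """Choose the next block minter with probability proportional to stake."""
    validators = list(stakes)
    weights = [stakes[v] for v in validators]
    return rng.choices(validators, weights=weights, k=1)[0]

stakes = {"alice": 60, "bob": 30, "carol": 10}

# Over many rounds, selection frequency tracks stake share (~60/30/10%).
random.seed(42)
wins = {v: 0 for v in stakes}
for _ in range(10_000):
    wins[pick_minter(stakes)] += 1
print(wins)
```

Economic weight replaces computational weight: influence over the next block is bought with coins at stake rather than with hashes per second.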
Proof of Stake is non-trivial for public blockchain implementations, but it is expected to be usable as a consensus algorithm in private and consortium chain settings. Ethereum (ETH), one of the blockchains most trusted and experimented with by enterprises, is preparing to move to the Proof of Stake model by the end of 2017. Expert opinions differ, though: some claim that Proof of Stake will never be able to provide the same level of security as Proof of Work, with its inherent economic cost of attack.
Stay tuned for Part 3 of our series tomorrow.
Download “When the Bots Come Marching In, a Closer Look at Evolving Threats from Botnets, Web Scraping & IoT Zombies” to learn more.
Recognized Cyber Security and Emerging Technology thought leader with 20+ years of experience in Information Technology. As the EMEA Cyber Security Evangelist for Radware, Pascal helps execute the company's thought leadership on today’s security threat landscape. Pascal brings over two decades of experience in many aspects of Information Technology and holds a degree in Civil Engineering from the Free University of Brussels. As part of the Radware Security Research team, Pascal develops and maintains the IoT honeypots and actively researches IoT malware. Pascal discovered and reported on BrickerBot, did extensive research on Hajime and closely follows new developments of threats in the IoT space and the applications of AI in cyber security and hacking. Prior to Radware, Pascal was a consulting engineer for Juniper working with the largest EMEA cloud and service providers on their SDN/NFV and data center automation strategies. As an independent consultant, Pascal gained expertise in several programming languages, designed industrial sensor networks, automated and developed PLC systems, and led security infrastructure and software auditing projects. At the start of his career, he was a support engineer for IBM's Parallel System Support Program on AIX and a regular teacher and presenter at global IBM conferences on the topics of AIX kernel development and Perl scripting.