On July 10, 2018, news broke that cryptocurrency wallet and decentralized exchange Bancor had been hit with a hack. A wallet the Bancor team used to update the protocol’s smart contracts was compromised, and the breach allowed the hackers to run off with some $23.5 million in crypto assets: $12.5 million in ETH, $1 million in NPXS tokens and $10 million of Bancor’s own BNT token.
Following the hack, the Bancor team froze the BNT in question in an effort to stanch its losses.
The latest of its kind, the attack is an unfortunate reminder that smart contracts are not foolproof. Even built as they are on the blockchain’s security-intensive network, they can feature bugs, backdoors and vulnerabilities that are ripe for exploitation.
Before Bancor, we saw the popular Ethereum wallet Parity drained of 150,000 ETH (now worth just over $68 million) in July of 2017. In November of the same year, even more value was locked away when an inexperienced coder accidentally froze some $153 million worth of ether and other tokens held in Parity wallets.
In perhaps the most infamous smart contract hack in the industry to date, The DAO, a decentralized venture fund, lost 3.6 million ether in June of 2016. The stolen funds are now worth $1.6 billion, and the fallout of the attack saw Ethereum hard fork to recoup losses.
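The DAO attacker exploited what is now commonly called a reentrancy bug: the contract sent ether out before updating its internal ledger, so a malicious caller could re-enter the withdrawal function and drain funds repeatedly. The following is a simplified, hypothetical Solidity sketch of that pattern (illustrative names only, not The DAO’s actual code):

```solidity
pragma solidity ^0.4.24;

// Simplified illustration of a reentrancy flaw; hypothetical code, not The DAO's.
contract VulnerableVault {
    mapping(address => uint256) public balances;

    function deposit() public payable {
        balances[msg.sender] += msg.value;
    }

    // BUG: ether is sent before the balance is zeroed, so a malicious
    // fallback function can call withdraw() again and drain the vault.
    function withdraw() public {
        uint256 amount = balances[msg.sender];
        require(msg.sender.call.value(amount)()); // external call first...
        balances[msg.sender] = 0;                 // ...state update only afterward
    }
}
```

Moving the balance update ahead of the external call (or avoiding raw calls altogether) closes this particular loophole.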
The Why and How: Making the Same Mistake
If three’s company, then The DAO, Parity and now Bancor have become the poster triplets of smart contract vulnerabilities. But they’re not alone in their weakness, and similar smart contract bugs have been exploited or nearly exploited on other networks.
For such a nascent technology, flaws may be expected; but given the massive sums these contracts are meant to protect, it is striking that truly stalwart security measures are not yet routinely employed.
To Hartej Sawhney, co-founder of the cybersecurity firm Hosho, the sheer amount of funds at stake is enough of an incentive to attract black hats to these smart contracts, especially if there’s a central point through which they can probe for access.
“There’s money behind every smart contract, so there’s an incentive to hack into it. And the scary part of smart contracts like Bancor is that they’ve coded their smart contracts in a way that gives centralized power to the founders of the project. They’ve put this backdoor in there,” Sawhney told Bitcoin Magazine in an interview.
Sawhney is referring to Bancor’s ability to confiscate and freeze tokens at will, as the smart contracts that govern their wallet and exchange feature central points of control. This degree of control has been widely criticized as centralized to the point that Bancor shouldn’t be able to advertise itself as a decentralized exchange.
And it may have even provided the hackers with an entry point into the network. While Bancor has not revealed the specifics of the hack and its execution, the team wrote in a blog post that “a wallet used to upgrade some smart contracts was compromised.” Sawhney indicated in our interview that “most smart contracts are coded to be irreversible,” while Bancor’s own are completely mutable. The hackers could have exploited — and likely did exploit — the same backdoor that the developers put into place to manage their project.
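Bancor has not published the technical details of the compromised contracts, but the kind of centralized control Sawhney describes generally takes the shape of owner-only administrative functions. Below is a hypothetical, simplified Solidity sketch of such controls (not Bancor’s actual code), in which a single owner key can halt transfers, confiscate balances or point the system at new logic:

```solidity
pragma solidity ^0.4.24;

// Hypothetical sketch of owner-controlled "backdoor" functions; not Bancor's code.
contract AdminControlledToken {
    address public owner;
    address public upgradedLogic;         // logic can be re-pointed after deployment
    bool public transfersEnabled = true;
    mapping(address => uint256) public balances;

    modifier onlyOwner() { require(msg.sender == owner); _; }

    constructor() public { owner = msg.sender; }

    // Whoever holds the owner key can halt all transfers...
    function disableTransfers(bool _disable) public onlyOwner {
        transfersEnabled = !_disable;
    }

    // ...confiscate any holder's balance...
    function destroyTokens(address _from, uint256 _amount) public onlyOwner {
        require(balances[_from] >= _amount);
        balances[_from] -= _amount;
    }

    // ...or swap in entirely new contract logic.
    function upgrade(address _newLogic) public onlyOwner {
        upgradedLogic = _newLogic;
    }

    function transfer(address _to, uint256 _value) public {
        require(transfersEnabled && balances[msg.sender] >= _value);
        balances[msg.sender] -= _value;
        balances[_to] += _value;
    }
}
```

Functions like these are convenient for the team that deploys them, but they also mean that whoever controls (or steals) the owner key controls the contract.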
Bancor aside, Dmytro Budorin, CEO of cybersecurity community Hacken, echoed Sawhney’s belief that the industry’s treasure trove of assets is a powerful impetus for hackers to dirty their hands. He also believes that the relative youth of the technology makes it vulnerable to detrimental exploits.
“Coding on blockchain is something new,” Budorin added in an interview with Bitcoin Magazine. “We still lack security standards and best practices on how to properly code smart contracts. Also, when coding smart contracts, programmers think more about functionality than about security, since a programmer’s main task is to simply make the code work, and security is usually an afterthought.”
When software engineers work with new programming languages, security can take a back seat to functionality. More than just the casualty of a steep learning curve, Sawhney believes that security can slip past engineers because they “don’t have a quality assurance (QA) mindset.”
With millions at stake and potential holes in the code to exploit, hackers are bound to drum up a scheme to breach these contracts, according to Budorin. Even if a team has audited their code for expected or known vulnerabilities, “a new type of attack can be developed any time and nothing can protect you from this.”
All it takes is a spurt of intuitive thinking to probe a smart contract’s code for an unexplored opening, Amy Wan, CEO and co-founder of Sagewise, noted in a separate interview with Bitcoin Magazine.
“It is not often that developers are able to write perfect code that works the first time around — and even when that happens the code cannot be adapted to unforeseen situations. Code is also static, which makes smart contracts very rigid. However, humans are anything but static and very creative when it comes to problem solving. This combination creates something of a perfect storm, making smart contracts ill-suited where there are bugs in coding or loopholes/situation changes.”
Wan believes that “technology isn’t about tech itself as much as it is about how humans interact with it,” meaning that we “are always going to have folks looking for opportunities to test the shortcomings of technology, which may result in hacks.”
To Wan, smart contracts feature intrinsic vulnerabilities. To make security matters worse, she also holds that they “cannot be amended or terminated (or in technologist speak, evolved or upgraded),” and their static nature renders them susceptible to the dynamic, adaptive strategies of black hats.
“Code aside, with every situation, there are an infinite number of things that can go awry. The rigidity of smart contracts presently cannot accommodate the fluidity of the real world,” she said.
Mending the Achilles Heel
If technical flexibility is the crux of smart contract weakness, then the fix lies in the inception and follow-through of their development. Developers should put preventive measures in place to ensure that their code can bend without breaking, the executives expressed.
“We need to have a more comprehensive approach in order to solve this problem in the long term,” Budorin argued. “First of all, even though it is impossible to make all contracts absolutely secure, smart contract risks can be reduced. The best way to secure a smart contract is to have a security engineer on staff, conduct two different independent audits, and launch a bug bounty program for a dedicated period of time before deployment.”
Hacken itself facilitates such bug bounties, and the platform, called HackenProof, has seen its white hat community audit and test such industry projects as VeChainThor, Neverdie, Legolas Exchange, NapoleonX, Shopin and Enecuum. Budorin and his team find that bug bounties provide a reliable if tertiary buffer for projects before they go public.
“We believe that the only efficient way to mitigate modern cybersecurity threats is to host bug bounty programs on bug bounty platforms. This is called a crowdsourced security approach,” Budorin explained.
“Bug bounty platforms attract a crowd of third-party cybersecurity experts (dozens if not hundreds at a time) to test the client’s software. Testing can be ongoing for months or even years.”
Sawhney agrees that projects need to house more on-staff security experts to police vulnerabilities, lamenting the fact that some projects lack a CIO or CTO to fill this role. But he also indicated that, in some cases, companies need only submit themselves to a proper audit to avoid a fate similar to Bancor’s.
“Some of these companies believe that they have the world’s best engineers, so they think they don’t need an audit. And if they get one, chances are they’ve done a third-party audit that was in their favor. Even if they’re getting an audit, some of these audit companies aren’t doing what we deem to be a professional audit. They’re taking the code and putting it through automated tooling. They’re not taking the time to do some of the more manual tasks which includes a dynamic analysis, quality assurance,” he explained.
The manual tasks that Sawhney lauds are at the heart of Hosho’s own auditing processes. They allow Hosho’s team to sniff out coding errors that automated tooling might miss, like discrepancies between the smart contract’s token algorithms and a white paper’s business model.
“So the most manual part of conducting an audit is marrying the code to the words — we call it dynamic analysis. Most of the time when we find errors with a smart contract, we’re finding colossal errors in the business logic. We’re finding everything from mathematical errors to errors in token allocation,” Sawhney said.
He went on to reveal that Hosho’s team includes professionals “from the infosec, defcon communities that are white hats who have spent years doing QA.” QA, shorthand for quality assurance, is the process by which coders test code against its intended function, checking for malfunctions, defects and other flaws that may render it vulnerable or inoperable.
As Sawhney indicated earlier, part of the reason these projects and their auditors don’t do QA is simply because they lack the professional experience to do so. It’s easier, he claimed, to teach Solidity (a smart contract coding language) to those who know how to conduct sound QA than the other way around.
When lack of QA training or a learning curve isn’t the issue, however, Sawhney suggested that, at times, projects won’t secure a thorough audit because they’re simply cutting corners.
“Sometimes I think it’s sheer laziness and being cheap. They see that cost to code a smart contract was only $10k and [an auditor] is charging $30k to review it. They say, ‘Nah, we don’t need that. We have the best engineers in the world so we’re good.’”
To Sawhney, there’s no substitute for a thorough audit. He also holds that, once an audit has been completed, the smart contract should come with a seal of approval, one that both attests to the audit’s quality and reassures users that no code has been altered after the fact. For Hosho’s work, this comes in the form of a GPG file, a cryptographic stamp that simultaneously functions like a certificate of authenticity and denotes the final (or at least most recent) version of audited code, acting rather like the seal on a bottle of cough syrup that proves it hasn’t been tampered with since it last passed quality control.
“Having central governments, regulators, lawyers, PR firms, investors, token holders — everyone — looking for this GPG file, this sign of approval [answers the question]: Has this code been sealed? Because we can monitor this code once we’ve put this seal on it to prove that no one has touched this code, not one line of this code has been changed since a third party audited it. If code changes you’re opening up room for security vulnerabilities.”
Wan’s own solution offers a different sort of prescription, in that she adds post-audit safety nets like Sagewise’s software as a smart contract’s third line of defense.
“Going forward, I believe that blockchain companies will be able to prevent smart contract disasters by using a smart contract developer whose sole focus is developing smart contracts, hiring a reputable security auditing firm, and including a catch-all safety net into smart contracts, such as Sagewise’s SDK.”
The Sagewise SDK integrates with smart contracts to police malicious inputs. It gives developers the chance to freeze the smart contract in question and adjust it accordingly.
“It starts with a monitoring and notification service so users are aware of what’s happening with their smart contract. Paired with our SDK, which basically acts as an arbitration clause in code, users are notified of functions executing on their smart contract and, if such functions are unintended, [they have] the ability to freeze the smart contract. They then can take the time they need to fix whatever needs to be fixed, whether that’s merely fixing a coding error to amending the smart contract or resolving a dispute,” she said.
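Sagewise has not detailed its SDK’s internals here, but the freeze-and-fix capability Wan outlines resembles the familiar “circuit breaker” (pausable) pattern in smart contract design. A minimal, hypothetical Solidity sketch of that general pattern (not Sagewise’s actual code) might look like this:

```solidity
pragma solidity ^0.4.24;

// Hypothetical "circuit breaker" sketch; not Sagewise's SDK.
contract CircuitBreaker {
    address public arbiter;   // party trusted to halt the contract
    bool public frozen;

    event Frozen(address indexed by);
    event Unfrozen(address indexed by);

    modifier notFrozen() { require(!frozen); _; }
    modifier onlyArbiter() { require(msg.sender == arbiter); _; }

    constructor(address _arbiter) public { arbiter = _arbiter; }

    // Halt state-changing functions while a bug or dispute is resolved.
    function freeze() public onlyArbiter {
        frozen = true;
        emit Frozen(msg.sender);
    }

    function unfreeze() public onlyArbiter {
        frozen = false;
        emit Unfrozen(msg.sender);
    }
}

contract Escrow is CircuitBreaker {
    mapping(address => uint256) public deposits;

    constructor(address _arbiter) public CircuitBreaker(_arbiter) {}

    // Normal business logic only runs while the breaker is closed.
    function deposit() public payable notFrozen {
        deposits[msg.sender] += msg.value;
    }

    function withdraw(uint256 _amount) public notFrozen {
        require(deposits[msg.sender] >= _amount);
        deposits[msg.sender] -= _amount;
        msg.sender.transfer(_amount);
    }
}
```

Here, a designated arbiter can pause the contract while a coding error is fixed or a dispute is resolved, then lift the freeze once the issue is settled.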
A Community Problem, a Community Solution
In our interview, Wan claimed that “[less than] 2 percent of the population is able to read code.” Fewer people still are able to read Solidity, let alone at the level needed to insulate it with airtight security features.
So even if projects and companies want to take the measures necessary to vet and protect their code properly, they may be wanting for talent and resources. This problem will likely be educated out of existence as more software engineers develop a thorough, more sophisticated understanding of Solidity and other smart contract programming languages. More mature coding languages may present a solution to this ailment, as well.
But for the time being, the community can help developers and teams err on the side of caution. Like arbiters with skin in the game, the people using these services need to step up and demand action and change, Wan believes; otherwise, these types of security breaches will continue to happen.
“[B]ecause much of the population cannot read code, it is difficult for them to hold developers accountable for when they do things like code an administrative backdoor into their smart contract (which many large projects have done),” said Wan.
“Just in 2017 alone, half a billion dollars in value was lost in smart contracts, but that apparently has not been enough to get developers to consider adding additional safety nets or community members to demand them. Perhaps we will need to lose billions more to get people to realize that this isn’t how the system should work.”
Sawhney also reiterated this point: “[More] people need to be outspoken, call people out. I think people are scared because the community is tight-knit and everybody knows everybody. No one wants to shun people. There’s not enough self-governance in this space, and I think that’s the biggest step this community needs to take.”
He added, “[not] enough pressure [is] being put on security; there’s not enough regulation around security.”
In an effort to bring self-regulation to the forefront of the industry’s to-do list, Hosho will take part in a summit for cybersecurity firms in Berlin. Slated for this September, the ETHBerlin gathering, Sawhney hopes, will spawn a self-regulatory organization (SRO) from its attendees, “complete with a certificate for our work, kind of like the Big Four for financial audits.”
Adding to the conversation on self-regulation, Budorin finds that the community would do well to document exploited vulnerabilities. Doing so would create a library of case studies for developers to study, helping them craft the solutions necessary to avoid the same pitfalls in the future.
“…the blockchain community needs to collect, store and analyze all known vulnerabilities that have been found in smart contracts and host regular security conferences that will cover security issues in blockchain and develop security guidelines so that new generation of blockchain programmers is more prepared for these problems,” he said.
In answer to Budorin’s call for security-centric conferences, Hosho is hosting HoshoCon this October, one of the first conferences to focus on blockchain cybersecurity. From October 9-11, 2018, community members will come together in Las Vegas, Nevada, to listen to, learn from and discuss with one another what the industry could be doing better to bolster its security.
While working toward proper security is a community effort, the onus is not on the community alone; the lion’s share of responsibility rests on developers to ensure that their code is as sound as possible before it reaches an audience. Together, however, the industry’s community and its architects may combine perspectives to make smart contract hazards a thing of the past.
Until then, Sawhney, Budorin and Wan’s perspectives, and their respective companies’ purposes, provide a healthy reality check for the industry’s pain points. For mainstream adoption and acceptance, these points need to be addressed if there is to be any sustained sense of confidence in this new technology.