Mega breaches often garner the biggest headlines. Target in 2013 signalled the start of this, but since then we’ve seen Yahoo!, our own Red Cross Blood Service, the US Office of Personnel Management and others suffer data exfiltration either by malicious parties or through human error. But something more troubling has been happening and it has me worried. Many of the protocols we rely on are under threat.
In 2014 we saw the Heartbleed flaw exposed, with almost 20% of the Internet’s secure web servers potentially affected. Heartbleed, or CVE-2014-0160 to use its proper name, was a flaw in OpenSSL, the SSL/TLS library many of our networks depend on for security.
Not long after, there was the CCS Injection vulnerability, or CVE-2014-0224, another flaw in OpenSSL, which ships with most open source *nix distributions. This one let a man-in-the-middle attacker weaken the session keys and decrypt or tamper with supposedly secure traffic.
Shellshock came soon after (CVE-2014-6271), a flaw in the Bash shell itself. It allowed bad guys to smuggle their own commands into specially crafted environment variables, which a vulnerable Bash would execute, granting them access to the computer system.
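To see how simple that exploit mechanism is, the classic (and safe) Shellshock probe plants a trailing command after a function definition in an environment variable; a vulnerable Bash executes it on startup. The sketch below runs that probe from Python; it assumes `bash` is on the system path.

```python
import subprocess

# The classic Shellshock probe: an environment variable that looks like
# an exported Bash function ("() { :;}") with a trailing command appended.
# A vulnerable Bash executes the trailing command when it imports the
# function; a patched Bash ignores it (often with a warning on stderr).
result = subprocess.run(
    ["bash", "-c", "echo completed"],
    env={"x": "() { :;}; echo VULNERABLE", "PATH": "/usr/bin:/bin"},
    capture_output=True,
    text=True,
)

# If the trailing command ran, its output appears before our own echo.
vulnerable = "VULNERABLE" in result.stdout
print("vulnerable to Shellshock:", vulnerable)
```

On a patched system this prints `vulnerable to Shellshock: False`; the point is how little an attacker needs, since any service that passes attacker-controlled data into environment variables (CGI scripts being the notorious example) becomes a remote-execution vector.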
In 2015, a remotely exploitable flaw in the BIND DNS software (CVE-2015-5477) allowed attackers to crash name servers with a single crafted packet, exposing that critical internet infrastructure to attack.
A couple of weeks ago, we had the WannaCry ransomware worm which caused all sorts of grief. Thankfully, Australian companies were largely unaffected. WannaCry spreads by exploiting a flaw in Microsoft’s implementation of the SMB (Server Message Block) file-sharing protocol.
I mention all these exploits because they share a common thread: they take advantage of flaws in code that is very widely deployed. For hackers, a single attack vector can reach hundreds of thousands, perhaps millions, of systems. Bad guys don’t have to hunt for application-specific vulnerabilities; they can exact maximum damage by going after the protocols and libraries we all rely on.
Application-specific and platform-specific threats will continue to be exploited. And if we fail to patch known issues, those applications will continue to be vulnerable.
However, I suspect we are on the cusp of a new era in the fight against cyber-crime. I think we’re going to see more attacks that depend on protocol-level exploits. While application patching can be challenging, updating a network protocol or core service can be more difficult still. The deeper the layer the bad guys target in our infrastructure, the harder it becomes to patch.
The level of testing a large enterprise might need to carry out in order to ensure all systems remain operational if a core encryption or communications library needs to be updated could be significant. And that might delay rectifying a known issue, creating a window of opportunity for threat actors.
What’s the solution? I’m still pretty old school when it comes to security strategy. I like the protect, detect and respond model. If you can’t patch systems (part of the protect process) then detection and response are critical.
If a new flaw is identified and you can’t patch, put other steps in place to detect any incoming threat that uses the flaw, and have monitoring in place to detect if the exploit is active in your network. Then you can respond by either taking the system offline or isolating it.
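As one illustration of that detect step, a compensating control while a patch is delayed can be as simple as scanning incoming request logs for a known exploit signature. The sketch below uses the Shellshock pattern (`() {` appearing in a request header) against hypothetical web-server log lines; the sample entries and function name are illustrative, not from any particular product.

```python
import re

# Shellshock payloads embed a Bash function definition, "() {", in a
# request header such as User-Agent. Matching that token in access logs
# is a crude but effective tripwire while patching is still under way.
SHELLSHOCK_PATTERN = re.compile(r"\(\)\s*\{")

def flag_suspicious(log_lines):
    """Return the log lines whose payload matches the exploit signature."""
    return [line for line in log_lines if SHELLSHOCK_PATTERN.search(line)]

# Hypothetical access-log entries: one exploit attempt, one normal request.
sample = [
    '203.0.113.5 - - "GET /cgi-bin/status HTTP/1.1" 200 "-" "() { :;}; /bin/cat /etc/passwd"',
    '198.51.100.7 - - "GET /index.html HTTP/1.1" 200 "-" "Mozilla/5.0"',
]

hits = flag_suspicious(sample)
for line in hits:
    print("ALERT:", line)
```

A real deployment would feed this from a log pipeline or IDS rule rather than a list in memory, and an alert would then trigger the respond step: isolate the host or take the vulnerable service offline.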
While application security, identity management and other measures are still important, I think it’s time to make sure the foundational protocols we depend on are looked at more closely as a potential vulnerability.