When I started working in IT, back in the 1990s, our primary focus was on reliability first and performance second. Viruses were on the scene – Word macro viruses like Melissa were probably the most significant threat of the day. But as long as our anti-virus software was up to date things were pretty good. Then the world changed.
By the early 2000s the Code Red worm was big news. And while many focussed on its virulence and its ability to let hackers execute their own malicious code, the big lesson from Code Red, Melissa, and all those early threats was that they exploited vulnerabilities in software.
Threat actors didn’t have to break into systems per se. They just had to find the cracks left behind by developers. Even the bug behind the recent Cloudbleed leak was over four years old. And let’s not forget Heartbleed, which had been sitting in OpenSSL for two years before it was found.
Flash forward to today and we see hackers becoming increasingly sophisticated at exploiting vulnerabilities that have not yet been publicly disclosed. These are zero-day attacks.
And if we listened exclusively to security vendors, we’d think zero days were the most significant threat to businesses. But I disagree.
If you’re working in infosec you’ll no doubt have read the annual reports put out by the IBM-sponsored Ponemon Institute, which tell us malicious attacks go undetected for around 230 days. That number is interesting and gets a lot of hype, but I think there’s a more important number to focus on.
In Verizon’s annual security report lies a number that should have us worried.
Their report looked at 100,000 incidents, of which 3,141 were confirmed data breaches. A massive proportion of those breaches were carried out using vulnerabilities that had been known for over a year. Some of the attacks used exploits that were over a decade old.
I get that patching is a massive pain in the backside. It involves testing, planning, scheduled downtime, inconvenience to the business, after-hours work for the IT team, and the risk of something going wrong.
Verizon’s most recent report, published last year, looks at what happened in 2015, so the data is not as current as I’d like. But the report is clear that there is a long-term trend of threat actors using old threats and exploits:
“the tally of really old CVEs which still get exploited in 2015 suggests that the oldies are still goodies. Hackers use what works and what works doesn’t seem to change all that often.”
So, why aren’t we patching?
I think a big reason is that our infrastructure designs don’t make it easy to update systems without incurring some downtime. In the 1990s, people expected systems to go down from time to time, so that was OK.
By the end of the 2000s, as emerging technologies such as virtualisation and new approaches to compute and storage took hold, we started to redesign or outsource our data centres. But we didn’t really change how applications were developed and deployed.
But we now have models, through AWS, Azure, Facebook and other large online properties, that demonstrate it’s possible to keep systems patched and as secure as possible without suffering downtime (at least, not scheduled downtime).
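The pattern those large properties rely on can be sketched with a modern orchestration tool. Here’s a hypothetical Kubernetes Deployment (the app name, image and replica count are invented for illustration) configured so that patched instances are rolled in one at a time while the old ones keep serving traffic, with no scheduled downtime:

```yaml
# Hypothetical sketch: roll out a patched image with zero scheduled downtime.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-web            # invented name for illustration
spec:
  replicas: 4                  # enough instances to absorb one being replaced
  selector:
    matchLabels:
      app: example-web
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 0        # never drop below the desired replica count
      maxSurge: 1              # start one patched pod before retiring an old one
  template:
    metadata:
      labels:
        app: example-web
    spec:
      containers:
        - name: web
          image: example/web:1.2.4   # the patched build being rolled out
          readinessProbe:            # route traffic only once the new pod is healthy
            httpGet:
              path: /healthz
              port: 8080
```

Updating the image tag triggers the rollout; the scheduler replaces pods one at a time, and the readiness probe keeps traffic away from a pod until it can actually serve. The same idea applies to OS patching through rolling node replacement.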
It’s been about three years since I last worked in a hands-on IT role. If I were to take on another IT management role, I’d look to build internal infrastructure and deploy applications in a way that kept systems as up to date as possible, and I’d promise the business no scheduled downtime.
And I suspect showing a board of directors the risk of not patching, through reports such as those produced by Verizon, the Ponemon Institute and others, would help make the business case for why up-to-date systems are critical to mitigating corporate risk.
Verizon Data Breach Investigations Report [Verizon – requires sign-up to read]