Why Are We Slack About Patching Servers?

Keeping systems patched is one of the most basic security protections available. However, ongoing scanning of web hosts in Australia and New Zealand suggests that many servers are neither regularly updated nor using basic security mechanisms such as SSL.


In a presentation at Linux.conf.au in Canberra today, which I'm attending as part of our World Of Servers coverage, Joh Pirie-Clarke discussed some of the findings from an ongoing project scanning servers in .au and .nz domains for information about the software and services they use.

As Pirie-Clarke emphasised, judging the state of a server purely through this kind of analysis won't always produce precise results. "It's not a perfect way of checking that the box is on someone's radar," she said.

Server identification strings can vary from a basic Apache banner to a very detailed description of what is in place. Nonetheless, one trend is very evident: many machines continue to run old and unpatched software, never updated after it was initially installed.

Testing connections to port 443 also suggests that many of these servers are not using SSL. "The older the version of Apache or IIS that you're running, the less likely it is to have an SSL component," Pirie-Clarke said.
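
For a sense of what such a scan involves, here's a minimal sketch in Python (my illustration, not the project's actual tooling) that grabs a host's HTTP Server banner and checks whether anything completes a TLS handshake on port 443. The hostname in the usage note is a placeholder.

```python
import socket
import ssl

def probe(host, timeout=5):
    """Grab the HTTP Server banner and test for a TLS listener on port 443."""
    # Banner grab: a bare HEAD request usually returns a Server header,
    # which can be anything from "Apache" to a full version and module list.
    with socket.create_connection((host, 80), timeout=timeout) as sock:
        sock.sendall(b"HEAD / HTTP/1.0\r\nHost: " + host.encode() + b"\r\n\r\n")
        reply = sock.recv(4096).decode(errors="replace")
    banner = next((line for line in reply.splitlines()
                   if line.lower().startswith("server:")), "Server: (not disclosed)")

    # TLS check: we only care whether a handshake completes, not whether
    # the certificate is valid, so verification is switched off.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with socket.create_connection((host, 443), timeout=timeout) as raw:
            with ctx.wrap_socket(raw, server_hostname=host) as tls:
                tls_version = tls.version()  # e.g. "TLSv1.2"
    except OSError:
        tls_version = None

    return banner, tls_version

# Usage (placeholder hostname):
# banner, tls = probe("www.example.com.au")
# print(banner, "| TLS:", tls or "no TLS listener")
```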

"Patching is like our version of simple passwords or backup. We know it's important but it doesn't happen."

The growth in machine-to-machine connections is only likely to make the problem worse, Pirie-Clarke suggested. "I strongly suspect that when an embedded system goes in, it will never get patched again. And these devices are going to be around forever because people think of them as an appliance."

Lifehacker's World Of Servers sees me travelling to conferences around Australia and around the globe in search of fresh insights into how server and infrastructure deployment is changing in the cloud era. This week, I'm in Canberra for Linux.conf.au, paying particular attention to the systems administration mini-conference and sessions on virtualisation and best practice.


Comments

    The fact that operating systems require servers to be rebooted after patching is the biggest issue. Even in this virtualised world, where you can get a console on a server that's failed to reboot, people expect access around the clock. Our servers get more traffic at midnight than they do at midday.

      I'm guessing you're not running a *nix based server? I've never had to restart the OS after patching. Even if you do, then you should be using a load balancer so you can take one server out of rotation while you upgrade, then swap them and upgrade the second. No downtime.
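
      For example, a rough rolling-patch script in Python. Everything here is a stand-in - the host names, the health check URL and the load balancer hook would all depend on your actual setup:

```python
import subprocess

SERVERS = ["web1.example.com", "web2.example.com"]  # hypothetical hosts

def set_in_rotation(server, enabled):
    # Stand-in: real load balancers each have their own mechanism
    # (HAProxy admin socket, nginx upstream reload, a cloud API call...).
    print(f"{'enabling' if enabled else 'draining'} {server}")

def patch_and_verify(server):
    # Patch over SSH (package manager varies by distro), then hit a
    # hypothetical health endpoint before putting the box back in.
    subprocess.run(["ssh", server, "sudo apt-get update && sudo apt-get -y upgrade"],
                   check=True)
    subprocess.run(["ssh", server, "curl -fsS http://localhost/healthz"],
                   check=True)

def rolling_patch(servers):
    # One server at a time, so the rest keep taking traffic.
    for server in servers:
        set_in_rotation(server, enabled=False)
        try:
            patch_and_verify(server)
        finally:
            set_in_rotation(server, enabled=True)

# rolling_patch(SERVERS)
```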

        @jess - most Unix systems require a reboot for kernel patching - even RedHat tell you to - http://rhn.redhat.com/errata/RHBA-2013-0006.html

        But yep - put everything behind a load balancer if you can, or set expectations that you will be patching on a regular basis
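
        On that note, you can at least tell whether a kernel reboot is actually pending by comparing the running kernel with the newest one installed. A rough sketch for RPM-based systems (the version sort here is simplistic):

```python
import subprocess

def newest_installed_kernel():
    # `rpm -q kernel` lists every installed kernel package.
    out = subprocess.run(
        ["rpm", "-q", "kernel", "--qf", "%{VERSION}-%{RELEASE}\n"],
        capture_output=True, text=True, check=True).stdout
    # Crude: a lexical sort stands in for a proper RPM version comparison.
    return sorted(out.split())[-1]

def running_kernel():
    # `uname -r` reports the kernel the box actually booted with.
    return subprocess.run(["uname", "-r"], capture_output=True, text=True,
                          check=True).stdout.strip()

installed = newest_installed_kernel()
running = running_kernel()
if running.startswith(installed):
    print(f"Running the newest installed kernel ({running}).")
else:
    print(f"Reboot pending: running {running}, newest installed {installed}.")
```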

    The reason is a lack of code confidence. Upgrade Apache, and all of a sudden a required module stops working. Or upgrade PHP, and half of the program's expected behaviours change.

    Most get around this by setting up a mirror dev server with the latest patches, then manually checking that there are no issues. However, this almost never catches every issue, so many people opt not to bother rather than risk annoying customers.

    The ideal way is to use automated tests that cover as close to 100% of execution branches as possible. Sadly, even with small applications, going back and writing automated tests retroactively is a time-consuming task that most companies aren't willing to put the effort and cost into.

    The best way to avoid the issue is with Test Driven Development where the automated tests are written as the code is written (this has many other added benefits such as ease of refactoring code, fewer bugs, better code design, etc).
    Then, you simply patch your server, ensure the tests pass, and you're good to go :)
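
    As a toy example of that flow in Python with pytest (all the names here are invented for illustration) - the tests pin down the behaviour you depend on, so a platform upgrade that changes it fails loudly instead of silently:

```python
# test_pricing.py -- written first, TDD-style, before the implementation.
import pytest
from pricing import total_with_gst

def test_gst_adds_ten_percent():
    assert total_with_gst(100.00) == 110.00

def test_rejects_negative_amounts():
    with pytest.raises(ValueError):
        total_with_gst(-5.00)
```

```python
# pricing.py -- the minimal implementation that makes the tests pass.
GST_RATE = 0.10  # Australian GST

def total_with_gst(amount: float) -> float:
    if amount < 0:
        raise ValueError("amount must be non-negative")
    return round(amount * (1 + GST_RATE), 2)
```

    After patching the server, running `pytest` either passes and you deploy, or it points at exactly which expected behaviour changed.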
