Machine learning is changing the way systems are designed and how we process information. That's true in security as well. But can an ML-based approach protect us against attack vectors and exploits that haven't been seen before? I spoke with Cylance's VP of engineering, Milind Karnik.
As computer users, we face new security threats regularly. Last year it was WannaCry and NotPetya; more recently, cryptojacking and the Spectre/Meltdown vulnerabilities have come to light. When each of these new issues emerged, software had to be patched and new security tools deployed to detect them, so that any potential damage could be mitigated.
But these, and other threats, do share some common characteristics: the use of processor power, access to particular types of memory, how they interact with the file system, and the types of network traffic they generate. By building a profile of what malicious activity looks like, you can potentially detect new types of attack without ever having seen that precise attack before. This isn't just about what's happening at a software level, but about looking, at a very low level, at what's happening within the hardware.
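To make the idea concrete, here is a deliberately minimal sketch of behavioural profiling, not Cylance's actual model: build a statistical baseline of benign process behaviour over a few features (the feature names and thresholds here are illustrative assumptions), then score new samples by how far they deviate from that baseline. A never-before-seen attack can still score highly if its behaviour is unlike anything benign.

```python
# Toy behavioural-profiling sketch (illustrative only, not Cylance's model).
# Feature names, sample values, and the score threshold are all assumptions.
from math import sqrt

FEATURES = ["cpu_pct", "mem_reads", "file_writes", "net_conns"]

def profile(samples):
    """Mean and standard deviation per feature over benign training samples."""
    stats = {}
    for f in FEATURES:
        vals = [s[f] for s in samples]
        mean = sum(vals) / len(vals)
        var = sum((v - mean) ** 2 for v in vals) / len(vals)
        stats[f] = (mean, sqrt(var) or 1.0)  # fall back to 1.0 if std is zero
    return stats

def anomaly_score(stats, sample):
    """Sum of per-feature z-scores: how unlike benign behaviour is this sample?"""
    return sum(abs(sample[f] - stats[f][0]) / stats[f][1] for f in FEATURES)

# Hypothetical benign behaviour observed during training.
benign = [
    {"cpu_pct": 5, "mem_reads": 100, "file_writes": 2, "net_conns": 1},
    {"cpu_pct": 8, "mem_reads": 120, "file_writes": 3, "net_conns": 2},
    {"cpu_pct": 6, "mem_reads": 110, "file_writes": 2, "net_conns": 1},
]
stats = profile(benign)

# A never-seen "ransomware-like" sample: heavy CPU use and file writes.
suspect = {"cpu_pct": 90, "mem_reads": 5000, "file_writes": 400, "net_conns": 50}
print(anomaly_score(stats, suspect) > 10)  # prints True: far from the benign profile
```

Production systems use far richer features and trained classifiers rather than simple z-scores, but the principle is the same: characterise malicious *behaviour*, not specific known samples.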
Karnik says that part of Cylance's research and development effort is on thinking like a bad guy in order to preemptively recognise malicious software activity. They are looking for what bad guys could do, rather than just what they are doing.
"When WannaCry was a completely new attack, our model from two years ago was able to protect customers. Even customers that hadn't updated to our latest ML model were still protected against WannaCry. WannaCry was an SMB-based attack. It triggered off certain parameters that Cylance had built into their model two years before the attack happened," Karnik said.
In the case of WannaCry, a Cylance engineer had been studying the SMB vulnerability and thinking like an attacker, and had added the relevant detection to the model two years before the attack launched.
In the case of Spectre and Meltdown, the detection process has been more complex, said Karnik, as these are hardware-specific attacks. Karnik was working at Intel when some of the processor capabilities that led to the Spectre and Meltdown exploits were developed. He says those are "very difficult to work around or catch; those are things Intel has to fix themselves".
New attacks, such as crypto-mining attacks where processors are hijacked to mine cryptocurrency, all require a delivery mechanism, said Karnik.
"Those attacks typically end up being a payload with a delivery mechanism. That delivery mechanism is some non-standard way for a user to get to what's being delivered. We cover every possible delivery mechanism that you can think of. If you stop the attack from being delivered, we can stop the attack".
When it comes to the skills needed to build their machine learning models, Karnik says you need to "think evil to do good". That's why he works with people who have worked on "good and bad infiltration". That helps build the capability needed to defend against attacks that haven't yet been seen in the wild.