Over the last few weeks, something has been bothering me. One of the recurring themes I keep hearing about, directly or indirectly, is the intersection between technology and trust. While government access to encrypted communications has received plenty of airplay, the expanding use of machine learning, broad access to vast swathes of data and the increased use of social media have made trust the most valuable commodity in tech.
At the recent Twilio SIGNAL event, company CEO Jeff Lawson conducted a live demo on stage, coding an application that could tell whether or not a photo he showed was of skateboarding legend Tony Hawk. Hawk was on stage at the time and didn't know this was going to happen.
While programming the app was relatively straightforward, what was amazing was that the machine learning tool Lawson hooked into already had millions of photos of Hawk for reference. When Hawk was told this, he was simultaneously amazed and shocked.
Last week, at the McAfee MPower event in Sydney, I spoke with the company's CTO Steve Grobman about how the company is adopting machine learning. He described how McAfee's software uses a combination of traditional signature-based and heuristic tools to quickly categorise and deal with files that are known to be either good or bad, and applies machine learning and artificial intelligence to the more ambiguous files and actions.
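The tiered approach Grobman described can be sketched roughly as follows. This is a minimal illustrative mock-up of the general pattern, not McAfee's actual code: the hash sets, heuristic rules and thresholds are all hypothetical stand-ins, and the "ML" stage is a stub where a trained classifier would sit.

```python
# Hypothetical triage pipeline: cheap, definitive checks first (signatures),
# then heuristics, with only ambiguous samples escalated to an ML stage.
# All names, rules and thresholds below are illustrative assumptions.

KNOWN_BAD_HASHES = {"deadbeef"}    # stand-in for a malware signature database
KNOWN_GOOD_HASHES = {"cafef00d"}   # stand-in for an allow-list of known files

def heuristic_score(sample: dict) -> float:
    """Crude heuristic: packed executables and macro documents look riskier."""
    score = 0.0
    if sample.get("packed"):
        score += 0.4
    if sample.get("has_macros"):
        score += 0.3
    return score

def ml_classify(sample: dict) -> str:
    """Placeholder for a trained model that handles ambiguous samples."""
    return "bad" if heuristic_score(sample) >= 0.5 else "good"

def triage(sample: dict) -> str:
    # 1. Signatures give fast, definitive answers for known files.
    if sample["sha256"] in KNOWN_BAD_HASHES:
        return "bad"
    if sample["sha256"] in KNOWN_GOOD_HASHES:
        return "good"
    # 2. Heuristics: a clean score short-circuits the pipeline.
    if heuristic_score(sample) == 0.0:
        return "good"
    # 3. Everything still ambiguous goes to the ML stage.
    return ml_classify(sample)
```

The point of the ordering is economy: the expensive, fuzzier ML stage only ever sees the small fraction of traffic that the cheap, deterministic checks couldn't resolve.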
But that approach also relies on trust. What if a software designer were able to breach the trust we place in signatures and heuristics by slipping something malicious into software? Then the trust we place in how we define known good and known bad is also called into question.
Traditional tools, such as endpoint software, firewalls and VPNs, have been the basis of most security strategies for the last decade. But traditional firewalls are now of limited use, as application attacks come in over ports 80 and 443, which must remain open to allow regular web traffic, and third-party VPNs are a potential source of weakness. So we need to look at different tools. This is where Cylance CEO Stuart McClure, whom I met with, says we need to start making better use of machine learning and AI.
In his view, the question of whether we can know something is good or bad comes down to our understanding. If we rely on our own judgement of what is good, we are ultimately trusting our opinion. Machine learning systems, by contrast, are supposedly impartial and won't be swayed by our personal biases.
This is why we need transparency in machine learning models. The reality is that all programmers bring their own biases to the systems they develop. When someone decides to maintain a database of Tony Hawk photos, that decision means certain images can be used by AI or machine learning applications and not others. Similarly, in the case of machine learning for detecting malware, developers decide what malicious activity looks like.
In all these cases, we depend on trusting someone else to create a model we deem to be correct and fair. That makes trust perhaps the most important commodity of the current age of computing.