AI Hid Data From Researchers

Depending on your perspective, this is either pretty cool or quite scary. Research from Stanford and Google found that a machine learning agent tasked with turning aerial images into street maps was “hiding” information in “a nearly imperceptible, high-frequency signal”. The software was meant to recreate the aerial images from the maps it generated, but it aroused the researchers’ suspicions when it performed the task better than expected.

The purpose of the agent was to help Google improve its street maps. But as the agent translated the aerial images into maps and back, the researchers saw extra detail appearing that wasn’t in the data the agent was processing at the time. In one case, skylights on a roof that weren’t present on the street map reappeared when the agent recreated the aerial image from the map. Rather than doing what the researchers thought the program would do – create a photo from the map it was presented with – it was sneaking some data away and reintroducing it later.

The machine learning agent was a CycleGAN, a neural network that learns to translate images of one type into another as efficiently and accurately as possible. The program translating the aerial images into maps and back again was doing exactly that, but in an unexpected way: “hiding” data from one set of translations and then using it later. The researchers say the software wrote the extra information into the actual visual data of the street map through thousands of tiny changes in colour that the human eye couldn’t see but that were easily detectable by the program.
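For the technically curious, here’s a minimal sketch of the cycle-consistency idea at the heart of CycleGAN, written in PyTorch. The two generators below are toy stand-ins (single convolutional layers, not the deep networks from the actual research); the point is that the loss only scores the round trip, which is exactly the loophole the agent exploited.

```python
# A minimal sketch of CycleGAN's cycle-consistency loss in PyTorch.
# G and F are toy stand-in generators, not the researchers' architecture.
import torch
import torch.nn as nn

G = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # "aerial -> map" generator
F = nn.Conv2d(3, 3, kernel_size=3, padding=1)  # "map -> aerial" generator

aerial = torch.rand(1, 3, 256, 256)  # a stand-in aerial photo

fake_map = G(aerial)           # translate to a map
reconstructed = F(fake_map)    # translate back to an aerial photo

# The cycle-consistency loss only rewards F(G(x)) coming back close to x.
# Nothing stops the network from smuggling fine detail through fake_map
# in patterns a human can't see, as long as the round trip comes out right.
cycle_loss = nn.functional.l1_loss(reconstructed, aerial)
print(cycle_loss.item())
```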

Basically, the program found a way to watermark the images imperceptibly with data it thought might be helpful later on. What’s especially interesting is that the computer found its own way to do this. The practice is called steganography. It’s not new but the CycleGAN’s method of doing it is pretty neat.
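As an analogy, here’s a hand-rolled example of the classic form of the trick: least-significant-bit steganography in NumPy. The CycleGAN invented its own encoding (a high-frequency colour signal) rather than using this scheme, but the principle of hiding data in changes too small for the eye to notice is the same.

```python
# Classic least-significant-bit steganography: hide one secret bit in the
# lowest bit of each pixel. Each pixel changes by at most 1/255 -- far too
# small to see, but trivially recoverable by a program that knows to look.
import numpy as np

rng = np.random.default_rng(0)
cover = rng.integers(0, 256, size=(8, 8), dtype=np.uint8)      # "image"
secret_bits = rng.integers(0, 2, size=cover.shape, dtype=np.uint8)

# Hide: clear each pixel's lowest bit, then write a secret bit into it.
stego = (cover & 0xFE) | secret_bits

# Recover: read the lowest bits back out.
recovered = stego & 1
assert np.array_equal(recovered, secret_bits)
print("max pixel change:", np.abs(stego.astype(int) - cover.astype(int)).max())
```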

There’s an ongoing narrative currently being played out in machine learning and AI circles around the trustworthiness and transparency of these new and rapidly evolving systems. On one hand, the ability to automate complex tasks offers us great benefits. But there’s still some distrust and skepticism about whether we are creating a world where complex systems do things and we cannot explain or understand the results. In this case, the CycleGAN’s solution was clever but detected reasonably easily as the “extra data” being added was obvious. It was a bit like a child saying they’d tidied their room but really only shoved everything under the bed. But what if the software was a little smarter? Or an original programmer, either accidentally or intentionally, threw in a rogue element that created subtle changes that skew the results in an unwanted way?

As I’ve reported previously, machine learning systems are being used to sentence people convicted of crimes and allocate funds to schools in some jurisdictions in the United States. It’s easy to see how an unintended consequence could lead to significant personal and social impact. Understanding how these programs work is incredibly important as we depend on them more and more. What’s interesting about the CycleGAN’s “behaviour” is what it did and how we can learn from this to ensure we understand how the systems were are increasingly dependent on work.
