Conspiracy Bot Shows That Computers Can Be As Gullible As Humans 

Computers believe in conspiracy theories now. The New Inquiry’s Francis Tseng trained a bot to recognise patterns in photos and draw links between similar pictures, forming the kind of conspiracy-theory diagram seen in the last act of a Homeland episode or the front page of Reddit. It’s a cute trick that reminds us that humans are gullible (hey, maybe those photos do match!), and that the machines we train to think for us could end up just as gullible.

Photo by The New Inquiry’s conspiracy bot

Humans are exceptionally good at pattern recognition. That’s great for learning and dealing with diverse environments, but it can get us in trouble: Some studies link pattern recognition to belief in conspiracy theories. (Some don’t, but that’s what They want you to think.)

Until recently, computers weren’t especially good at pattern matching either. The rise of machine learning specifically targets this gap, teaching neural networks how to, say, recognise photos of birds or detect credit card fraud by feeding them vast quantities of data.
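
Here’s a minimal sketch of that idea in Python, using scikit-learn and synthetic data standing in for real fraud records (the dataset, features and model choice are all invented for illustration):

```python
# Toy version of "learning from vast quantities of data": the model is
# shown labelled examples and works out the pattern itself. Synthetic
# data stands in for real fraud records (invented for illustration).
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# 10,000 fake "transactions", 20 numeric features each; the label says
# whether the transaction is fraudulent (~3% of them are).
X, y = make_classification(n_samples=10_000, n_features=20,
                           weights=[0.97, 0.03], random_state=0)

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Nobody tells the model what fraud looks like; it infers a rule
# from the examples alone.
model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print(f"held-out accuracy: {model.score(X_test, y_test):.2%}")
```

(Accuracy is a flattering metric on data this imbalanced, which is itself a small lesson in machine gullibility.)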

This isn’t as easy as replicating a human brain, because we don’t know how to do that. Instead, programmers simulate brain-like behaviour by letting the neural network search for patterns on its own. As technologist David Weinberger writes, these neural networks, free of the baggage of human thought, build their own logic and find surprising and inscrutable patterns. For example, Google’s AlphaGo can beat a Go master, but its strategy can’t easily be explained in plain language.

But these machines don’t actually know what’s real, so they can just as easily find patterns that don’t exist or don’t matter. This also results in surprising “mistakes”, such as the funny paint colours (stummy beige, stanky bean) generated by scientist Janelle Shane’s neural network, or the horrifying mess of dog faces Google DeepDream finds hidden inside my selfie.

These mistakes can be far more serious. Weinberger highlights software that racially profiled accused criminals, and a CIA system that falsely identified an Al-Jazeera journalist as a terrorist threat.

The New Inquiry’s bot similarly overextends its analysis by finding fake patterns. “If two faces or objects appear sufficiently similar, the bot links them,” says Tseng, the bot’s creator. “These perceptual missteps are presented not as errors, but as significant discoveries, encouraging humans to read layers of meaning from randomness.”
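
That linking rule is simple enough to sketch. Assuming the bot reduces each image to a feature vector and compares pairs (the vectors, photo names and threshold below are invented for illustration; Tseng’s actual pipeline may differ):

```python
import numpy as np
from itertools import combinations

def cosine_similarity(a, b):
    """Cosine of the angle between two feature vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Pretend each photo has already been reduced to a feature vector by
# some image model (random vectors here, purely for illustration).
rng = np.random.default_rng(42)
photos = {name: rng.normal(size=128)
          for name in ["face_01", "face_02", "logo_01", "building_01"]}

THRESHOLD = 0.1  # "sufficiently similar" -- an arbitrary cut-off

# Link every pair of photos whose vectors clear the threshold. Nothing
# here asks whether a match *means* anything -- that's the bot's
# gullibility in miniature.
links = [(a, b) for a, b in combinations(photos, 2)
         if cosine_similarity(photos[a], photos[b]) > THRESHOLD]

print(links)  # edges of the conspiracy diagram (may be empty for random data)
```

The crucial bit is what isn’t there: no step checks whether a match is meaningful before it goes on the diagram.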

It’s tempting to think the bot is onto something. But chances are, you’re really just looking at dog faces and made-up paint colours. The more computer programs behave like humans, the less you should trust them before learning how they were made and trained. Hell, never trust a computer that behaves like a human, period. There’s your conspiracy theory.

