Neuroscientists at the University of Sydney have found that you may not be fooled by deepfakes after all – unless, of course, you trust your gut instead of your brain.
They’ve found that people’s brains can detect fake faces generated by artificial intelligence (AI), even when people cannot consciously report which faces are real and which are fake.
Firstly, let’s explain what a deepfake actually is. Put simply, the term is a portmanteau of “deep learning” and “fake”, and refers to a computer program with the ability to superimpose one person’s face onto another’s within a video. The difference between a deepfake and Snapchat’s face-swapping feature, which uses similar technology, is that deepfakes are starting to look far more realistic. So naturally, people are concerned about the implications.
The neuroscientists performed two experiments: one behavioural and one using neuroimaging. In the behavioural experiment, participants were shown 50 images of real faces and computer-generated deepfakes. They were asked to identify which were real and which were fake.
Then, a different group of participants was shown the same images while their brain activity was recorded using electroencephalography (EEG), without knowing that half the images were fakes.
The researchers then compared the results of the two experiments, finding that people’s brains were better at detecting deepfakes than their eyes were.
When looking at participants’ brain activity, the University of Sydney researchers found deepfakes could be identified 54 per cent of the time. However, when participants were asked to verbally identify the deepfakes, they could only do this 37 per cent of the time. Definitely a case of trusting your brain, not your gut.
“Although the brain accuracy rate in this study is low – 54 per cent – it is statistically reliable,” said senior researcher Associate Professor Thomas Carlson, from the University of Sydney’s School of Psychology.
“That tells us the brain can spot the difference between deepfakes and authentic images.”
What could the research lead to?
While the research is somewhat of a novelty project for now, it does lend itself to practical application – a more reliable way of detecting deepfakes.
Of course, more research must be done. The researchers caution that, given the novelty of this field of research, their study – published in Vision Research – is only an early step.
“What gives us hope is that deepfakes are created by computer programs, and these programs leave ‘fingerprints’ that can be detected,” Carlson added.
“Our finding about the brain’s deepfake-spotting power means we might have another tool to fight back against deepfakes and the spread of disinformation.”
As highlighted by Carlson, the fact that the brain can detect deepfakes means current deepfakes are flawed.
“If we can learn how the brain spots deepfakes, we could use this information to create algorithms to flag potential deepfakes on digital platforms like Facebook and Twitter,” he said.
By the way, the deepfake in the picture at the top of this article is the one on the right (the left is real). Did you trust your brain or your gut?
You can read more about the work in Vision Research.