Five Minutes of Training Can Boost Your Ability to Spot Deepfakes and Gen AI Faces, Researchers Say

A simple five-minute training exercise boosts the skills of both typical and "super" recognizers — but it's still a tricky task.

Researchers from the Universities of Reading, Greenwich, Leeds, and Lincoln have developed a simple five-minute training process that can dramatically boost your ability to spot fake faces made with generative artificial intelligence (gen AI), but warn that even the most skilled viewers are likely to get it wrong more than a third of the time, while for most people it's still a coin-toss.

"Our training procedure is brief and easy to implement," claims Katie Gray, lead researcher at the University of Reading and first author on the team's paper. "The results suggest that combining this training with the natural abilities of super-recognizers could help tackle real-world problems, such as verifying identities online."

Generative AI models capable of outputting photo-realistic imagery are making disinformation campaigns easier than ever, and as their capabilities improve it becomes increasingly difficult to tell their output apart from actual photos and videos. A test group of 664 participants was shown a mix of real photos and synthetic imagery created using the generative adversarial network StyleGAN3 and asked to pick out the fakes: so-called "super-recognizers," who score significantly above average on traditional facial recognition tests, were correct only 41% of the time, while those with typical facial recognition skills were right just 31% of the time. Both figures are well below the 50% you'd expect from closing your eyes and choosing at random, the researchers note.

The team's training process, carried out on a fresh test group, takes a mere five minutes and focuses on common artifacts in synthetic imagery: teeth that don't quite line up as they should, blurring at the hairline, and ears that begin to merge into the background. Hand-picked generative images, both "deepfakes" based on photos of real people and wholly synthetic faces, were used as examples during the training but excluded from the final test, so all test images were previously unseen.

The results are impressive: the super-recognizers in the second group were able to spot the fakes with 64% accuracy, the researchers found, while the typical-recognizers boosted their accuracy to 51%. While that's better than blind guessing, if only just in the case of the typical-recognizers, it still suggests that even those most attuned to facial recognition fail to distinguish gen AI fakes from real photos more than a third of the time, while the average viewer is likely to fail around half the time.

"We found that detecting and discriminating synthetic faces is an extremely difficult task," the researchers conclude, "for which typical-ability participants perceive synthetic faces as more real than real faces. SRs [Super-Recognizers] consistently outperformed typical-ability participants, where without training they were less susceptible to this AI hyper-realism effect.

"A training and feedback procedure was able to increase performance to a similar extent for both SRs and typical-ability participants. Our results suggest that SRs are using cues unrelated to rendering artifacts to detect synthetic faces. The performance of trained SRs could be harnessed for real-world applications, such as online identity verification," the team notes — suggesting that human verification will be required to help combat the flood of fakes.

The team's work has been published in the journal Royal Society Open Science under open-access terms.

Gareth Halfacree
Freelance journalist, technical author, hacker, tinkerer, erstwhile sysadmin. For hire: freelance@halfacree.co.uk.