With algorithmically generated fake videos, also known as deepfakes, on the rise, Facebook is teaming up with Microsoft and seven academic institutions in the US for a Deepfake Detection Challenge.
The contest — meant to develop technology for detecting deepfakes and preventing people from falling prey to misinformation — is expected to run from late 2019 until spring 2020.
But training an algorithm to single out doctored videos isn’t an easy task, as it requires massive datasets of deepfakes.
That's why the social media giant said it will use paid, consenting actors to create a library of deepfake videos in order to train and improve tools to combat the threat of such videos plaguing its platforms.
“The goal of the challenge is to produce technology that everyone can use to better detect when AI has been used to alter a video in order to mislead the viewer,” Facebook’s chief technology officer Mike Schroepfer said.
Although not all deepfakes are malicious, they're troubling for a reason: they take fake news to a whole new level of persuasion. It's one thing to read a fabricated story about a non-existent event, but it's quite another to watch real people, politicians for instance, do and say fictional things, leaving you to question the legitimacy of everything you see online.
The technology to manipulate images and videos is progressing at an unprecedented pace, outsmarting current capabilities to tell apart the real from the fake.
What's more, the explosion of AI and machine learning has made deepfakes cheaper and easier to produce, to the point where anyone can create their own fake videos. Conversely, they're also getting harder to detect.
"In case you haven't heard, #ZAO is a Chinese app which completely blew up since Friday. Best application of 'Deepfake'-style AI facial replacement I've ever seen. Here's an example of me as DiCaprio (generated in under 8 secs from that one photo in the thumbnail)," tweeted Allan Xia (@AllanXia) on September 1, 2019.
Last week, a Chinese app called Zao, which lets users convincingly morph their faces onto movie stars, shot to the top spot in the entertainment section of the App Store, though privacy concerns quickly landed it in hot water.
In the case of Zao — and FaceApp before it — some observers noted that the app's privacy policy was no more invasive than those of some of the world's most widely used mobile apps.
AI Foundation, an organization that aims to advance the responsible use of AI, launched a tool called Reality Defender last year that combines human moderation and machine learning to spot hyper-realistic fake videos.
Given the lack of a robust solution to the problem, however, the challenge is doubtless a promising step in the right direction.