Spotting fake videos with artificial intelligence

Matthias Nießner uses Face2Face to create a mask of his face so that he can replace it with the face of another person. Image: A. Eckert / TUM

Last year a video clip featuring Barack Obama created quite a stir. It seemed to show the ex-president calling his successor Donald Trump "a total and complete dipshit". Ultimately, the people behind the clip admitted that it was no more than a highly convincing fake. It is now possible to perform this kind of trickery even in real time. Videos can be generated to match pre-recorded audio files, or still photographs can be artificially animated - with potentially serious economic or social consequences.

"With our research, we are pursuing the objective of making it easier to spot fake videos online in the future." -- Matthias Nießner, Professor of Visual Computing

"Spotting fake videos has proved especially tricky in social media, where they are generally uploaded as compressed, low-resolution images," says Prof. Matthias Niessner. "The same methods used to manipulate video content are also capable of detecting fake content with a high degree of accuracy - even when the image resolution is poor."

Large pool of training data for neural networks

For artificial intelligence to decide whether a video has been manipulated, it must be able to recognize the patterns that occur in faked content. To learn the recurring elements of such content, neural networks need to be fed enormous volumes of fake videos. In the past, researchers had to manipulate video material manually with image or video editing software and therefore lacked the required volumes of training data. Using new deep learning and graphics methods, Prof. Nießner has succeeded for the first time in building an extensive data pool largely by automated means, among other tools with his own Face2Face software, which transfers facial expressions from one person to another in real time. With this new data pool, he was then able to train his FaceForensics++ algorithm on more than half a million frames from over a thousand faked videos.
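
To give a rough sense of what frame-level training on such a pool involves, here is a minimal sketch in PyTorch. The frames/ directory layout, the ResNet-18 backbone and the hyperparameters are illustrative assumptions, not the team's actual FaceForensics++ pipeline.

# Minimal sketch of frame-level training on a pool of real and fake face frames.
# Directory layout, model choice and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn
from torch.utils.data import DataLoader
from torchvision import datasets, models, transforms

# Hypothetical layout: frames/real/*.png and frames/fake/*.png
transform = transforms.Compose([
    transforms.Resize((224, 224)),
    transforms.ToTensor(),
])
dataset = datasets.ImageFolder("frames", transform=transform)
loader = DataLoader(dataset, batch_size=32, shuffle=True)

model = models.resnet18(weights=None)
model.fc = nn.Linear(model.fc.in_features, 2)  # two classes: real vs. fake

optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()

The point of the sketch is only the frame-level, real-versus-fake formulation the article describes; the published benchmark uses its own network and training setup.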

Focus on faces

"When we trained the neural networks with our data pool, we placed special emphasis on the facial regions in the still images," says Andreas Rössler, a researcher in the TUM Visual Computing Lab. "Thanks to that approach, FaceForensics (++) recognizes deep fakes and videos manipulated with Face2Face or FaceSwap better than any other currently available software." FaceForensics (++) also outperforms experienced experts. According to Prof. Niessner’s study, non-experts correctly identify highly compressed videos as fake only about 50% of the time - in other words, not much better than pure chance. FaceForensics (++), by contrast, accurately classifies 78% of the frames, and thus the video clips, as real or fake.

"With our research, we are pursuing the objective of making it easier to spot fake videos online in the future," says Prof. Niessner. His team is therefore making FaceForensics (++) available to communities in the fields of AI, graphics, computer vision and digital forensics. With the data created by the team, the reliability of other detection methods can also be tested and compared. FaceForensics (++) is already being used by around 200 institutions.



  • Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner: FaceForensics++: Learning to Detect Manipulated Facial Images.
  • Justus Thies, Michael Zollhöfer, Matthias Nießner: Deferred Neural Rendering: Image Synthesis using Neural Textures. In: ACM Transactions on Graphics (TOG), 2019.
  • Andreas Rössler, Davide Cozzolino, Luisa Verdoliva, Christian Riess, Justus Thies, Matthias Nießner: FaceForensics: A Large-scale Video Dataset for Forgery Detection in Human Faces.
  • Justus Thies, Michael Zollhöfer, Marc Stamminger, Christian Theobalt, Matthias Nießner: Face2Face: Real-time Face Capture and Reenactment of RGB Videos. In: Communications of the ACM, January 2019.
