Can convincing knock-offs be successfully sorted from the genuine article?
In the current social climate, the rise of technology like deepfakes is especially worrying. Any jerk with loose morals could, in theory, rig up a facsimile of an important political figure or famous celebrity and have them say they eat babies or something, and because of how convincing deepfakes are, a lot of people could take that misinformation at face value. Verifiable fact has never been more important, which is why it's vital that we have a reliable means of sussing out deepfake photos and videos.
Two researchers with Facebook, Xi Yin and Tal Hassner, working in conjunction with Michigan State University, have been developing exactly that: an AI program designed not only to detect and identify deepfake photos and video, but to trace them back to their source.
“Our method will facilitate deepfake detection and tracing in real-world settings, where the deepfake image itself is often the only information detectors have to work with,” the researchers said on Facebook’s blog.
As the researchers explain, digital photographs and videos carry a certain "fingerprint" that distinguishes them from all others. When someone uses those images and videos to create deepfakes, the process leaves "cracks" in the digital fingerprint, which their program can home in on to identify the fake and trace its origin.
“In digital photography, fingerprints are used to identify the digital camera used to produce an image,” the researchers explained. The unique fingerprints “can equally be used to identify the generative model that the image came from.”
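To make the fingerprint idea concrete, here's a toy sketch of how source attribution from noise residuals can work in principle. This is an illustration of the general technique, not the researchers' actual method: we pretend each "source" (a camera or a generative model) stamps a faint, consistent noise pattern onto its images, estimate that pattern by averaging the high-frequency residuals of several known images, and then attribute a new image to whichever source's fingerprint its residual correlates with most. All function names here are made up for the example.

```python
import numpy as np

def noise_residual(img, k=3):
    # High-frequency residual: the image minus a simple box blur.
    # Real systems use far more sophisticated denoising filters;
    # this is just the cheapest stand-in that isolates fine noise.
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    h, w = img.shape
    blurred = np.zeros((h, w), dtype=float)
    for dy in range(k):
        for dx in range(k):
            blurred += padded[dy:dy + h, dx:dx + w]
    blurred /= k * k
    return img - blurred

def estimate_fingerprint(images):
    # Averaging residuals over many images from one source cancels
    # out scene content and leaves the source's shared noise pattern.
    return np.mean([noise_residual(im) for im in images], axis=0)

def correlation(a, b):
    # Normalized cross-correlation between two residual maps.
    a = a - a.mean()
    b = b - b.mean()
    return float((a * b).sum() /
                 (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def attribute(img, fingerprints):
    # Pick the source whose fingerprint best matches the residual.
    res = noise_residual(img)
    return max(fingerprints, key=lambda name: correlation(res, fingerprints[name]))

# Demo: simulate two sources as fixed random noise patterns
# stamped onto otherwise unrelated images.
rng = np.random.default_rng(0)
pattern_a = rng.normal(0, 1, (32, 32))   # "camera A" fingerprint
pattern_b = rng.normal(0, 1, (32, 32))   # "generator B" fingerprint

imgs_a = [rng.normal(0, 1, (32, 32)) + pattern_a for _ in range(20)]
imgs_b = [rng.normal(0, 1, (32, 32)) + pattern_b for _ in range(20)]

fingerprints = {
    "A": estimate_fingerprint(imgs_a),
    "B": estimate_fingerprint(imgs_b),
}

# A fresh image from source A should be attributed back to A.
new_img = rng.normal(0, 1, (32, 32)) + pattern_a
print(attribute(new_img, fingerprints))
```

The same averaging trick is why the researchers can talk about identifying "the generative model that the image came from": a GAN's upsampling layers leave their own consistent artifacts, which play the role of `pattern_a` above.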
If they can get their AI to a stable state, the researchers hope it can provide people with "tools to better investigate incidents of coordinated disinformation using deepfakes, as well as open up new directions for future research."