Who better to spot computer-altered faces than computers?
With a presidential election coming up and a pandemic sickening people across the United States and beyond, we are living in a time when genuine, factual information is absolutely crucial. Unfortunately, as always, there’s a crowd of jerks out there who constantly seek to spread misinformation, whether to further a particular agenda or just because they want to watch the world burn. Falsification technologies like deepfakes pose a very real danger at a time when a clear distinction between fact and fiction can mean life or death. That’s why, in an effort to combat misinformation, Microsoft has stepped up its efforts to point out deepfakes as just that: fakes.
Microsoft launched a new software tool today called the Video Authenticator. When you run the tool over a video, it first seeks out a human face, then pays careful attention to that face as the video plays. As it scans, the tool picks up on subtle quirks that indicate the use of a deepfake: sudden changes in skin tone, mismatched facial features, and subtle blurring along the outline of the face. By compiling these quirks, the tool produces a “confidence score,” a real-time percentage indicating how likely it is that the scanned face is a deepfake.
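Microsoft hasn’t published the internals of the Video Authenticator, but the idea of blending several per-frame artifact signals into a running percentage can be sketched roughly like this. Everything here is hypothetical for illustration: the signal names, the weights, and the simple averaging are assumptions, not the tool’s actual method.

```python
# Hypothetical sketch of a deepfake "confidence score": each frame yields a few
# artifact signals (skin-tone shifts, feature mismatches, boundary blur), each
# normalized to 0..1, and the score is a weighted running average expressed as
# a percentage. This is NOT Microsoft's actual algorithm.
from dataclasses import dataclass

@dataclass
class FrameSignals:
    skin_tone_shift: float   # 0..1: sudden change in skin tone vs. prior frames
    feature_mismatch: float  # 0..1: facial features that don't line up
    boundary_blur: float     # 0..1: blurring along the outline of the face

def frame_score(s: FrameSignals, weights=(0.3, 0.4, 0.3)) -> float:
    """Blend one frame's artifact signals into a single 0..1 score.

    The weights are made up for this sketch."""
    vals = (s.skin_tone_shift, s.feature_mismatch, s.boundary_blur)
    return sum(w * v for w, v in zip(weights, vals))

def confidence_score(frames: list[FrameSignals]) -> float:
    """Running percentage: how likely the scanned face is a deepfake so far."""
    if not frames:
        return 0.0
    return 100.0 * sum(frame_score(f) for f in frames) / len(frames)

# A clean-looking frame followed by a heavily artifacted one.
frames = [FrameSignals(0.1, 0.2, 0.1), FrameSignals(0.8, 0.9, 0.7)]
print(round(confidence_score(frames), 1))  # → 47.5
```

A real detector would of course derive these signals from a trained model rather than hand-fed numbers, and would likely weight recent frames more heavily so the percentage updates smoothly as the video plays.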
The tool was developed in a tag-team effort between Microsoft Research, the company’s R&D division, and an advisory committee on AI, Ethics and Effects in Engineering and Research. The express goal of the project is to further the battle against misinformation in a very delicate time for society.
“We expect that methods for generating synthetic media will continue to grow in sophistication,” the research team said in a statement. “As all AI detection methods have rates of failure, we have to understand and be ready to respond to deepfakes that slip through detection methods. Thus, in the longer term, we must seek stronger methods for maintaining and certifying the authenticity of news articles and other media. There are few tools today to help assure readers that the media they’re seeing online came from a trusted source and that it wasn’t altered.”
The Video Authenticator will tie into the Trusted News Initiative, an effort led by the BBC to develop more concrete ways of identifying fabrications and verifying truthful information.