Microsoft Launches DeepFakes Detector ahead of US Presidential Elections
How Microsoft is leading the fight against US election disinformation with its commitment to making fair elections possible.
The 2020 US Presidential Elections are just around the corner, and attempts to "hack" these elections are at an all-time high. Right now, the Internet is flooded with disinformation campaigns of all sorts: bot accounts on social media platforms like Twitter and Facebook spreading false news about the presidential candidates, and various Facebook groups dedicated to creating and amplifying fake news.
These campaigns, aimed primarily at defaming electoral candidates and other notable figures, promoting racial hatred, and polarizing public opinion, are seriously hindering the normal course of the 2020 US Presidential Elections. And as Election Day, November 3rd, draws near, these deliberate, unethical disinformation campaigns are becoming more and more widespread.
Of course, up until now, most of these campaigns and the fake content they generate for their audiences on the Internet have been relatively easy to detect and debunk through fact-checking.
Recently, however, a new form of disinformation media has been creating quite a buzz on social media platforms. With no reliable means of telling real from fake, it is probably Artificial Intelligence's most infamous contribution to humanity yet.
Yes, we are talking about DeepFakes: a form of AI-generated synthetic media that replaces the face originally present in a photo or video with someone else's, with remarkable accuracy.
In fact, the resulting video or photo is sometimes so true to detail that it can be practically impossible for the human eye to tell whether it is fake. One could record a video containing, say, some truly controversial statements, impose someone else's face on it, and then circulate it on social media.
While obviously illegal, such a fabrication could tarnish someone's public image forever. Because of this, the threat DeepFakes pose to society is far greater than that of any traditional form of misinformation currently circulating on the Internet.
Therefore, to deal with this havoc spreading on the Internet and threatening the integrity of the United States' elections, the US-based tech giant Microsoft came up with an anti-disinformation initiative, the Defending Democracy Program, with the vision of fighting disinformation, securing election campaigns, and protecting those involved in democratic processes.
It was under this initiative that Microsoft released its state-of-the-art DeepFakes detection system, Microsoft Video Authenticator. A computer-vision-powered system, the Video Authenticator analyzes still images or the consecutive frames of a video and returns the probability that the media under inspection has been manipulated.
How Does It Work?
According to Microsoft's announcement, while analyzing an image or the individual frames of a video, the system checks for blending boundaries within each still. If you have ever come across a deepfake video, you may have noticed that such videos generally exhibit one of the following two characteristics:
- The video is heavily smoothed and slightly faded, to the point that the face in the deepfake loses almost all detail; or,
- The imposed face doesn’t sit well on the original one, resulting in uneven blending boundaries.
While an unobservant human eye might not notice any difference, a computer-vision system that analyzes an image at the pixel level can tell these differences apart with exceptional accuracy if trained to do so.
Apart from the subtle differences in blend patterns, the system can also detect whether a still image (or a video frame) is a deepfake by looking at certain greyscale elements within the image that the human eye cannot perceive.
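To make the blending-boundary idea concrete, here is a toy heuristic, not Microsoft's actual model, which is proprietary and learned from data. The sketch assumes a greyscale face crop and simply compares how "busy" the pixel gradients are along the edge of the crop versus its interior; a crudely pasted face tends to leave unusually sharp intensity transitions along the blend seam.

```python
import numpy as np

def blend_boundary_score(gray_face, band_width=4):
    """Toy heuristic (NOT the Video Authenticator's method): compare
    gradient strength in a thin band around the edge of a face crop
    against the interior. A sharp blend seam inflates the ratio."""
    gy, gx = np.gradient(gray_face.astype(float))
    grad = np.hypot(gx, gy)  # per-pixel gradient magnitude
    border = np.concatenate([
        grad[:band_width].ravel(), grad[-band_width:].ravel(),
        grad[:, :band_width].ravel(), grad[:, -band_width:].ravel(),
    ])
    interior = grad[band_width:-band_width, band_width:-band_width].ravel()
    # Ratio > 1 means the boundary band is "busier" than the interior,
    # which is weak evidence of an uneven blend.
    return border.mean() / (interior.mean() + 1e-8)
```

A real detector learns far subtler cues than raw gradients, but the principle is the same: pixel-level statistics that a casual viewer never sees.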
All this allows the Video Authenticator to produce a confidence score: the probability that an image has been manipulated. For videos, the system does the same in real time, analyzing the footage frame by frame and displaying a confidence score for each individual frame.
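The frame-by-frame behaviour described above amounts to a simple loop. In this sketch, `frame_score` is a hypothetical placeholder for the real per-frame model (which Microsoft has not published), mapping one frame to a manipulation probability in [0, 1]:

```python
def score_video(frames, frame_score):
    """Run a per-frame manipulation scorer over a video, returning a
    confidence score for every frame plus a headline number.
    `frame_score` stands in for the real detector model."""
    per_frame = [frame_score(frame) for frame in frames]
    # Surface the most suspicious frame as the summary: one convincingly
    # manipulated frame is enough reason to distrust the whole clip.
    return per_frame, max(per_frame)
```

Taking the maximum is one plausible way to summarize per-frame scores; averaging would be another reasonable choice.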
How Was It Created?
Microsoft developed this deepfakes-detection system at Microsoft Research, the company's R&D division, in collaboration with Microsoft's Responsible AI team and the AETHER Committee.
The model used within this system was trained on the publicly available FaceForensics++ dataset. For testing, the researchers used the Deepfake Detection Challenge dataset, originally hosted on Kaggle for a deepfake-detection competition organized by Facebook with a total prize pool of $1 million.
When Can We See the Product in the Markets?
Microsoft will soon begin rolling out the Video Authenticator under the Reality Defender 2020 (RD2020) program. The program will initially be open to electoral campaigns and journalists, and aims to close the gap these organizations currently face for lack of technology to counter the disinformation circulating on the Internet.
Apart from the deepfakes detector, Microsoft is also launching a hashing-based authentication mechanism under its Defending Democracy Program that will allow content creators to hash the content they produce. This tool will be built into the company's cloud computing platform, Microsoft Azure.
A complementary component of this hasher will be available to the general public as a browser extension, letting viewers check whether content they are watching online has been tampered with.
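The creator/viewer split described above can be illustrated with a bare cryptographic digest. This is a deliberate simplification: Microsoft's actual mechanism attaches certificates and signed hashes to media via Azure, whereas the sketch below uses only a plain SHA-256 digest to show the tamper-evidence idea.

```python
import hashlib

def publish_digest(media_bytes: bytes) -> str:
    """Creator side: publish a cryptographic digest of the original
    media alongside it. (Simplified stand-in for Microsoft's
    certificate-and-signed-hash scheme.)"""
    return hashlib.sha256(media_bytes).hexdigest()

def matches_published(media_bytes: bytes, digest: str) -> bool:
    """Viewer side (e.g. the browser-extension reader): recompute the
    digest and compare. Any change to the bytes yields a different hash."""
    return hashlib.sha256(media_bytes).hexdigest() == digest
```

Because even a one-byte edit produces a completely different digest, a mismatch is a reliable signal that the content is not what the creator published.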
Both these tools will hopefully help Microsoft curb the out-of-control spread of disinformation. While this is crucial for the 2020 US Elections, these developments matter a great deal to the rest of the world as well.
According to research from Princeton, some 96 foreign influence campaigns targeting the election processes of 30 countries were identified between 2013 and 2019.
DeepFakes technology, originally created as a fun project, has been under great scrutiny since the day of its inception. Episodes like these have repeatedly raised calls for enforcing ethical standards in the field of AI. Let's just hope Microsoft succeeds in its fight against disinformation and the unethical use of AI.