We recently saw the potential of deepfake video apps with Reface, which brought the technology to the general public. But when the goal is to destabilize a country, an electoral process or an adversary, it's a whole different story.
Widespread on social media, deepfake videos, images and audio files manipulated by artificial intelligence (AI) remain very difficult to detect.
That is the problem the software giant Microsoft has set out to tackle, using new technologies to separate the true from the false, especially during this American election period, when countries like Russia are trying to disrupt the electoral process.
Because, in this world of disinformation and fake news, deepfakes can make high-profile people say things they never said, or place them in locations they have never been.
Elements imperceptible to the human eye
Earlier this month, Microsoft unveiled on its blog a brand new system called Video Authenticator, which can analyze a photo or video and assign it a confidence score: the estimated percentage chance that the medium has been artificially manipulated.
In the case of video, the system can provide this percentage in real time, frame by frame, as the video plays.
How does Video Authenticator work? The system detects, at the pixel level, subtle fading or grayscale elements that are imperceptible to the human eye.
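To make the idea of a per-frame confidence score concrete, here is a minimal sketch. It is not Microsoft's model: the real Video Authenticator uses a trained detector, while this toy heuristic simply flags frames whose pixel-to-pixel contrast is suspiciously flat, a crude stand-in for the subtle fading mentioned above. All function names here are hypothetical.

```python
def frame_score(frame):
    """Return a 0-100 'manipulation likelihood' for one grayscale frame.

    `frame` is a list of rows of pixel intensities (0-255). Low
    neighbour-to-neighbour contrast raises the score (toy rule only).
    """
    diffs = []
    for row in frame:
        for a, b in zip(row, row[1:]):
            diffs.append(abs(a - b))
    avg_diff = sum(diffs) / len(diffs) if diffs else 0.0
    return round(max(0.0, 100.0 - avg_diff * 2), 1)


def score_video(frames):
    """Score every frame, mirroring the frame-by-frame output described above."""
    return [frame_score(f) for f in frames]


natural = [[0, 80, 20, 200], [255, 10, 140, 60]]        # high-contrast frame
smooth = [[100, 101, 102, 103], [100, 100, 101, 101]]   # suspiciously flat frame
print(score_video([natural, smooth]))
```

The high-contrast frame scores low and the flat frame scores high, illustrating how a per-frame percentage could be displayed while a video plays.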
Conversely, producers and publishers can label their media with digital certificates embedded in the file's metadata, providing a reference point that attests to its authenticity.
From there, a module integrated into web browsers can compare these certificates against the databases of Microsoft's authenticity system.
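The certificate idea can be sketched as follows. This is an assumption-laden illustration, not Microsoft's protocol: the publisher signs a hash of the media and stores it in the metadata, and the browser-side module re-hashes the file and checks the signature. An HMAC with a shared demo key stands in for a real public-key certificate chain.

```python
import hashlib
import hmac

PUBLISHER_KEY = b"demo-key"  # hypothetical; in reality, the publisher's signing key


def certify(media_bytes):
    """Publisher side: build the metadata 'certificate' for the media."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    signature = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"sha256": digest, "signature": signature}


def verify(media_bytes, certificate):
    """Browser side: re-hash the media and check it against the certificate."""
    digest = hashlib.sha256(media_bytes).hexdigest()
    expected = hmac.new(PUBLISHER_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return (digest == certificate["sha256"]
            and hmac.compare_digest(expected, certificate["signature"]))


original = b"frame data of the original video"
cert = certify(original)
print(verify(original, cert))          # authentic copy passes
print(verify(original + b"!", cert))   # tampered copy fails
```

Any change to the media bytes invalidates the stored hash, which is what lets a browser module flag a tampered copy even when the visual manipulation is imperceptible.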
The system developed by Microsoft will be offered through the Reality Defender 2020 program, which guides organizations through the limitations and considerations inherent in any fake-media detection technology.
Also read:
Reface, the deepfake application accessible to the general public
It's in English, but can you pass this little deepfake test?