Hackers of India

AI Gone Rogue: Exterminating Deep Fakes Before They Cause Menace

By Vijay Thaware and Niranjan Agnihotri on 06 Dec 2018 @ Blackhat


Presentation Material

Abstract

The face: a crucial means of identity. But what if this crucial means of identity is stolen from you? This is happening, and it is termed a ‘Deep fake.’ Deep fake technology is an artificial-intelligence-based human image synthesis method used in various ways, such as to create revenge porn, fake celebrity pornographic videos, or even cyber propaganda. Videos are altered using Generative Adversarial Networks, in which the face of the speaker is manipulated by a network that tailors it to someone else’s face. These videos can sometimes be identified as fake by the human eye; however, as neural networks are rigorously trained on more resources, it will become increasingly difficult to identify fake videos. Such videos can cause chaos and inflict economic and emotional damage on a person’s reputation. Videos targeting politicians in the form of cyber propaganda can prove catastrophic to a country’s government.

We will discuss the many tentacles of Deep fake and the dreadful damage it can cause. Most importantly, this talk will provide a demo of the proposed solution: identifying complex Deep fake videos using deep learning. This can be achieved using a pre-trained FaceNet model. The model is trained on image data of people of importance or concern; after training, the output of its final layer is stored in a database. A set of images sampled from a video is then passed through the neural network, and the final-layer output is compared with the values stored in the database. The mean squared difference confirms the authenticity of the video.
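As a rough structural sketch of the comparison step described above (not the authors' actual code), the pipeline reduces to: embed reference images, embed sampled video frames, and threshold the mean squared difference. The `embed` function here is a hypothetical stand-in for the final layer of a pre-trained FaceNet model:

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in for FaceNet's final-layer embedding.

    In the proposed pipeline this would be the 128-D output of a
    pre-trained FaceNet model for a single aligned face crop; here a
    fixed random linear projection is used purely for illustration.
    """
    rng = np.random.default_rng(0)                 # fixed projection matrix
    proj = rng.standard_normal((image.size, 128))
    return image.ravel() @ proj

def mean_squared_difference(frame_embeddings, reference_embeddings):
    # Average the squared distance of each sampled frame's embedding
    # from the mean of the stored reference embeddings.
    ref = np.mean(reference_embeddings, axis=0)
    diffs = [np.mean((e - ref) ** 2) for e in frame_embeddings]
    return float(np.mean(diffs))

def is_authentic(frame_embeddings, reference_embeddings, threshold):
    # Small distance -> the sampled frames match the stored identity.
    return mean_squared_difference(frame_embeddings,
                                   reference_embeddings) < threshold
```

The threshold would have to be calibrated on known-genuine footage of each person of concern; the choice of threshold governs the false-positive/false-negative trade-off.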

We believe that in 2018, Deep fake will progress to a different level. We will also discuss defensive measures against Deep fake.

AI Generated Summary (may contain errors)

Here is a summary of the content:

The speaker discusses the growing concern of deep fake videos, specifically those targeting politicians and the 2019 elections. They emphasize the importance of verifying the credibility of sources and of using human intelligence to identify potentially fake videos. The lack of robust laws protecting individuals victimized by deep fake videos is also highlighted.

The speaker mentions that AI-based techniques are used to create these videos, citing auto-encoders as an example. They predict that this technology will proliferate in the future but acknowledge that researchers are working on using it for good purposes, such as recreating deceased people.
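The auto-encoder technique mentioned here is typically set up as one shared encoder with a separate decoder per identity: encode a face of person A, then decode it with person B's decoder to render the swap. A toy linear sketch of that structure (untrained random weights, purely illustrative) looks like:

```python
import numpy as np

rng = np.random.default_rng(1)

# Classic deepfake autoencoder layout: one shared encoder compresses
# any face to a latent code; each identity gets its own decoder.
D, H = 64, 16                                  # pixel dim, latent dim
enc = rng.standard_normal((D, H)) * 0.1        # shared encoder weights
dec_a = rng.standard_normal((H, D)) * 0.1      # decoder for person A
dec_b = rng.standard_normal((H, D)) * 0.1      # decoder for person B

def encode(face):
    # Shared encoder: compress a face to its latent expression/pose code.
    return np.tanh(face @ enc)

def reconstruct(face, decoder):
    # Decode the latent code with a chosen identity's decoder.
    return encode(face) @ decoder

face_a = rng.standard_normal(D)
swapped = reconstruct(face_a, dec_b)  # A's expression, rendered as B
```

After training, the shared encoder learns identity-independent structure (expression, pose, lighting), which is why swapping decoders transfers one person's performance onto another's face.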

The Q&A session covers various topics, including:

  1. Techniques beyond face swapping: The speaker notes that all current techniques are AI-based and mentions expression transfer as an example.
  2. Efficacy of systems: They express uncertainty about the future development of systems to detect deep fake videos.
  3. Blockchain technology: The speaker is open to exploring blockchain solutions but hasn’t come across any research papers on the topic.
  4. Video watermarking: They mention YouTube’s Content ID system, which checks for similar content before allowing uploads, and note that Apple and Google are working on automating video watermarks.

The talk concludes with an invitation for the security community to raise awareness about deep fake videos and their potential to manipulate thoughts.