What Is A Deepfake?
Deepfakes use deep learning artificial intelligence to replace the likeness of one person with another in video and other digital media.
The simulation of reality by computers has become increasingly accurate. Modern movies now rely heavily on computer-generated sets, scenery, and characters in place of the practical props and locations that were once the norm. Often, these scenes cannot be distinguished from reality.
Deepfake technology has recently made headlines. When a deepfake is created, artificial intelligence (AI) is used to replace one person’s likeness with another’s in a recorded video.
How do Deepfakes work?
The name “deepfake” comes from the underlying technology: “deep learning,” a type of artificial intelligence. Deep learning algorithms, which learn to solve problems from large amounts of data, can be used to make fake media look convincingly realistic.
A deepfake can be created in a number of ways, but one of the most common involves deep neural networks and autoencoders that use a face-swapping method. The first thing you’ll need is a target video for the deepfake as well as a collection of clips of the person you want to insert in the target.
Videos can be completely unrelated; the target can be a clip from a Hollywood film, for example, and the videos of the subject chosen can be random clips from YouTube.
An autoencoder is a deep learning model trained on video clips to learn what a person looks like from different angles and under varying lighting conditions; it then maps that person onto the individual in the target video by finding common features.
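The core face-swap trick is a shared encoder with one decoder per identity: encode person A’s face into a compact “pose and expression” code, then decode it with person B’s decoder. The toy NumPy model below is a sketch of that structure only, with random untrained weights and faces flattened to vectors; all names and sizes here are illustrative assumptions, not any real tool’s API.

```python
import numpy as np

rng = np.random.default_rng(0)

def init(n_in, n_out):
    # small random weights for one dense layer (untrained, for illustration)
    return rng.normal(0.0, 0.1, (n_in, n_out))

class TinyAutoencoder:
    """Shared encoder + one decoder per identity: the face-swap idea."""
    def __init__(self, n_pixels=64, n_latent=8):
        self.enc = init(n_pixels, n_latent)    # shared encoder weights
        self.dec_a = init(n_latent, n_pixels)  # decoder for person A
        self.dec_b = init(n_latent, n_pixels)  # decoder for person B

    def encode(self, face):
        # compress a face into a small "pose/expression" code
        return np.tanh(face @ self.enc)

    def swap_a_to_b(self, face_a):
        # encode A's face, decode with B's decoder:
        # B's identity rendered in A's pose
        return self.encode(face_a) @ self.dec_b

model = TinyAutoencoder()
face_a = rng.normal(size=64)   # stand-in for a flattened face image
swapped = model.swap_a_to_b(face_a)
print(swapped.shape)  # (64,)
```

In real face-swap systems both decoders are trained against the same encoder, so the latent code ends up identity-agnostic; that training loop is omitted here.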
The deepfake is further improved by Generative Adversarial Networks (GANs), which detect and correct flaws over multiple rounds, making the result harder for deepfake detectors to flag.
Another application for GANs is the creation of deepfakes, which use a lot of data to “learn” how to create new examples that reproduce the real thing as accurately as possible.
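The adversarial “learning” loop a GAN uses can be shown at toy scale: a generator tries to imitate real data while a discriminator tries to tell real from fake, and each update makes the other’s job harder. The sketch below is a deliberately minimal one-parameter version on scalar data (not image synthesis), with hand-derived gradients; every value in it is an assumption chosen for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(t):
    return 1.0 / (1.0 + np.exp(-t))

# "Real" samples cluster around 3.0; the generator shifts standard-normal
# noise by a learned offset g to imitate them.
g = 0.0          # generator parameter (mean of the fake distribution)
w, b = 0.0, 0.0  # discriminator: D(x) = sigmoid(w * x + b)
lr, batch = 0.05, 64

for step in range(3000):
    real = rng.normal(3.0, 0.5, batch)
    fake = rng.normal(0.0, 1.0, batch) + g

    # --- discriminator step: push D(real) -> 1 and D(fake) -> 0 ---
    s_r = sigmoid(w * real + b)
    s_f = sigmoid(w * fake + b)
    grad_w = -np.mean((1 - s_r) * real) + np.mean(s_f * fake)
    grad_b = -np.mean(1 - s_r) + np.mean(s_f)
    w -= lr * grad_w
    b -= lr * grad_b

    # --- generator step: move g so D(fake) rises toward 1 ---
    s_f = sigmoid(w * fake + b)
    grad_g = -np.mean(1 - s_f) * w
    g -= lr * grad_g

print(round(g, 1))  # g drifts toward 3.0 as the two players compete
```

Real deepfake GANs play this same game with deep networks over images instead of a single scalar, which is why the fakes keep improving as the detector does.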
Deepfakes can be generated easily, even by beginners, using apps and software such as Zao, DeepFaceLab, FaceApp (a photo editing app with built-in AI capabilities), Face Swap, and the since-removed DeepNude, which generated nonconsensual nude images of women.
GitHub, an open-source community for software development, hosts many deepfake projects. Some of these apps are designed purely for entertainment – which is why deepfake creation is not itself illegal – while others are far more likely to be used maliciously.
Experts predict that deepfakes will become more sophisticated as the technology develops and may pose a greater threat to the public through election interference, heightened political tension, and additional criminal activity.
How are deepfakes used?
Automating the swapping of faces to produce credible and realistic-looking synthetic videos has some interesting benign applications (such as in cinema and games), but it is clearly a dangerous technology. Deepfakes were first applied to creating synthetic pornography.
According to Deeptrace, 96% of deepfake videos found online in 2019 were pornographic. The trend traces back to a Reddit user named “deepfakes,” who created a forum for sharing face-swapped celebrity pornography. Since then, deepfake porn (particularly revenge porn) has repeatedly made headlines, damaging the reputations of celebrities and prominent figures.
Humor and satire have no shortage of deepfakes either, like clips that answer questions such as: what would Nicolas Cage look like if he appeared in “Raiders of the Lost Ark”?
Are deepfakes only videos?
Deepfakes are not just restricted to videos. Deepfake audio is a rapidly growing field that has a huge range of applications.
With deep learning algorithms, realistic audio deepfakes can now be made from just a few hours (or, in some cases, a few minutes) of audio of the person whose voice is being cloned. Once a voice model has been built, that person can be made to say anything – as when fake audio of a CEO was used to commit fraud last year.
Medical professionals can use deepfake audio for voice replacement, and computer game designers can use it too – in-game characters can now say whatever is needed without relying on lines prerecorded before the game was released.
How to detect a deepfake
As deepfakes become more common, society will likely have to adapt to spotting them, just as online users have grown adept at spotting other kinds of fake news.
In cybersecurity, detecting and preventing deepfakes demands continual innovation, which in turn spurs better fakes – a vicious cycle in an ever-changing threat landscape.
There are a handful of indicators that give away deepfakes:
- Current deepfakes have difficulty animating faces realistically, resulting in videos in which the subject never blinks, or blinks too often or unnaturally. However, after University at Albany researchers published a study detecting this blinking abnormality, new deepfakes were released that no longer had the issue.
- Look for issues with skin or hair, or faces that appear blurrier than the surroundings in which they’re placed. The focus might appear unnaturally soft.
- Does the lighting seem unnatural to you? Deepfake algorithms typically retain the lighting of the clips that were used as models for the fake video, which doesn’t match the lighting in the target video.
- In some cases, the audio might not match the person, especially if the video was faked but the original audio was not carefully manipulated.
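The blinking cue in the list above can even be checked programmatically. A common heuristic for this (a standard computer-vision technique, not something the article specifies) is the eye aspect ratio (EAR) computed from six eye landmarks: it stays roughly constant while the eye is open and drops sharply during a blink. The landmark ordering below follows the widely used dlib 68-point convention, taken here as an assumption.

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks p1..p6 around one eye.
    EAR = (|p2-p6| + |p3-p5|) / (2 * |p1-p4|)."""
    eye = np.asarray(eye, dtype=float)
    v1 = np.linalg.norm(eye[1] - eye[5])  # vertical distance 1
    v2 = np.linalg.norm(eye[2] - eye[4])  # vertical distance 2
    h = np.linalg.norm(eye[0] - eye[3])   # horizontal eye width
    return (v1 + v2) / (2.0 * h)

def count_blinks(ear_series, threshold=0.2):
    """Count open -> closed transitions in a per-frame EAR series."""
    below = np.asarray(ear_series) < threshold
    starts = below[1:] & ~below[:-1]      # frames where a closure begins
    return int(np.sum(starts) + (1 if below[0] else 0))

# Simulated EAR trace: open eyes (~0.3) with two dips (two blinks)
trace = [0.31, 0.30, 0.12, 0.10, 0.29, 0.30, 0.32, 0.09, 0.28]
print(count_blinks(trace))  # 2
```

Counting blinks per minute of footage and comparing against the human norm (roughly 15–20) is essentially what the blink-based detectors described above do.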
Combatting deepfakes with technology
As techniques improve, deepfakes will only become more realistic, but we’re not completely defenseless against them. A number of companies – several of them startups – are developing methods to detect deepfakes.
Sensity, for example, has developed a detection platform that works like an antivirus for deepfakes: it alerts users via email when they’re watching something that bears the telltale fingerprints of AI-generated synthetic media. Sensity itself relies on deep learning – the same kind of technology used to create the fakes.
Operation Minerva identifies deepfakes in a more straightforward manner. Its algorithm compares potential deepfakes against known videos that have been “digitally fingerprinted.” For instance, it can identify revenge porn by recognizing that a video is simply a modified version of one Operation Minerva has already cataloged.
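Operation Minerva’s exact fingerprinting method isn’t public, but the general idea can be sketched with a simple perceptual “average hash”: downsample a frame, threshold each cell against the mean, and compare fingerprints by Hamming distance. A modified copy of a video stays close to the original’s fingerprint, while unrelated footage does not. Everything below is a minimal stand-in for the concept, not their actual algorithm.

```python
import numpy as np

def average_hash(frame, hash_size=8):
    """Perceptual fingerprint of a grayscale frame: block-average down to
    hash_size x hash_size, then threshold each cell against the mean."""
    frame = np.asarray(frame, dtype=float)
    h, w = frame.shape
    # crude block-mean downsampling (assumes dimensions divide evenly)
    blocks = frame.reshape(hash_size, h // hash_size,
                           hash_size, w // hash_size)
    small = blocks.mean(axis=(1, 3))
    return (small > small.mean()).flatten()

def hamming(a, b):
    # number of fingerprint bits that differ
    return int(np.sum(a != b))

rng = np.random.default_rng(2)
original = rng.random((64, 64))
tampered = original.copy()
tampered[:8, :8] += 0.5            # small localized edit (e.g. a swapped face)
unrelated = rng.random((64, 64))

fp = average_hash(original)
print(hamming(fp, average_hash(tampered)))   # small distance: modified copy
print(hamming(fp, average_hash(unrelated)))  # large distance: different video
```

A catalog of fingerprints makes lookup cheap: hashing a suspect video and scanning for a near-zero Hamming distance flags it as a doctored copy of known footage.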