
How to Spot a Deepfake: Tips for Combatting Disinformation


Introduction: How to Spot a Deepfake? 

Detecting deepfakes is crucial for a myriad of reasons. At its core, deepfake technology is a potent tool for spreading misinformation and disinformation, capable of distorting truth, manipulating public opinion, and fostering distrust in society. From a political perspective, deepfakes can be exploited to create fraudulent content that can influence election outcomes, damage reputations, and cause diplomatic tensions.

A deepfake video can spread misinformation and cause real harm to our delicate democratic process. Here's why, and how to protect yourself.


What is a deepfake?

The word “deepfake” combines “deep learning” and “fake.” Deepfakes are a form of artificial intelligence: doctored images and sounds compiled and built using machine-learning algorithms.

Deepfake technology can manipulate media by fabricating people who don’t exist, or by making it appear that real people said and did things they never actually said or did.

The term was coined in 2017 by an online user calling himself “deepfakes,” who posted doctored pornographic videos. How did he do it? Using Google’s open-source, deep-learning technology, he transferred the faces of celebrities onto other people’s bodies.

Audio deepfakes are another method of deceit. At a deeper level, deepfake machine-learning and voice-synthesis technology creates what have been called “voice skins” or “voice clones,” which let impersonators slip into the role of a prominent figure. The purpose of an audio deepfake scam is simple: make you believe the voice on the other end belongs to someone you know, such as a client or your boss, so that you take action, like sending money.


How do deepfakes work?

It is important to understand how deepfakes work in order to spot them.

One method involves a GAN, which stands for Generative Adversarial Network, and is used to generate faces. A GAN pits two algorithms against each other: one generates candidate images while the other learns to recognize which ones look real. Through this training, the generator learns the characteristics of real images well enough to fake them convincingly.
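To make the adversarial setup concrete, here is a minimal, illustrative sketch in Python using PyTorch. It trains a toy generator and discriminator on random vectors rather than real face images; the network sizes, data, and training settings are placeholder assumptions chosen for brevity, not a working deepfake pipeline.

```python
import torch
import torch.nn as nn

# Toy generator: turns random noise into a fake "image" vector.
generator = nn.Sequential(nn.Linear(16, 64), nn.ReLU(), nn.Linear(64, 32))
# Toy discriminator: scores how "real" a vector looks (1 = real, 0 = fake).
discriminator = nn.Sequential(nn.Linear(32, 64), nn.ReLU(), nn.Linear(64, 1), nn.Sigmoid())

opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)
loss_fn = nn.BCELoss()

for step in range(1000):
    real = torch.randn(32, 32) + 2.0          # stand-in for real face data
    noise = torch.randn(32, 16)
    fake = generator(noise)

    # 1) Train the discriminator to tell real from fake.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(32, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(32, 1))
    d_loss.backward()
    opt_d.step()

    # 2) Train the generator to fool the discriminator.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(32, 1))
    g_loss.backward()
    opt_g.step()
```

The tug-of-war in the loop is the key idea: as the discriminator gets better at spotting fakes, the generator is pushed to produce ever more realistic output.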

Face-swapping and face-replacement technologies are also based on artificial intelligence (AI) algorithms known as encoders. Thousands of pictures of two people’s faces are run through a shared encoder, which learns the similarities between them. A second algorithm, the decoder, then reconstructs the faces; by feeding one person’s encoded face into the other person’s decoder, the real face of one person can be superimposed onto the other.
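The same idea can be sketched in a few lines: one shared encoder plus one decoder per identity. This is again an illustrative toy in PyTorch, with made-up tensor shapes standing in for face crops, not any production face-swap tool.

```python
import torch
import torch.nn as nn

encoder = nn.Sequential(nn.Linear(1024, 128), nn.ReLU())   # shared by both identities
decoder_a = nn.Sequential(nn.Linear(128, 1024))             # reconstructs person A
decoder_b = nn.Sequential(nn.Linear(128, 1024))             # reconstructs person B

params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
opt = torch.optim.Adam(params, lr=1e-3)
mse = nn.MSELoss()

faces_a = torch.rand(64, 1024)   # stand-ins for flattened face crops of person A
faces_b = torch.rand(64, 1024)   # ...and of person B

for step in range(500):
    opt.zero_grad()
    # Each decoder learns to rebuild its own person from the shared encoding.
    loss = mse(decoder_a(encoder(faces_a)), faces_a) + \
           mse(decoder_b(encoder(faces_b)), faces_b)
    loss.backward()
    opt.step()

# The "swap": encode person A's face, then decode it with person B's decoder.
swapped = decoder_b(encoder(faces_a[:1]))
```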

Also Read: What Is A Deepfake?

What is the purpose of a deepfake?

The purpose of a deepfake is to use fabricated content to trick viewers and listeners into believing something that never happened.

Deepfakes are often used to spread misinformation and for other malicious purposes. Here’s a partial list:

Phishing scams

Data breaches

Hoaxes

Celebrity pornography

Reputation smearing

Election manipulation

Social engineering

Automated disinformation attacks

Identity theft

Financial fraud

Blackmail

Emerging Threats

Deepfake technology makes it possible to create convincing but false videos, which are often used to spread false information on the internet.

For instance, you might watch a deepfake video that appears to show a world leader saying things they never said. This can fuel the spread of “fake news” by stirring up emotions or swaying public opinion.

There is a growing concern that deepfake videos could have serious repercussions during the 2024 elections. Many people rely on the internet for information, and manipulated videos can affect what they think and how they vote.

Therefore, it is a good idea to know how to spot deepfake videos so that you can avoid falling for them. This is not always easy, but here’s what you need to know.


Also Read: How To Make a Deepfake?

Can you spot deepfake videos?

Is it possible to tell whether the video you’re watching or the audio you’re hearing is real or a deepfake?

There is no doubt that as detection technology progresses, so will the quality of deepfake technology. Still, there are ways to detect deepfakes, both on your own and with some assistance from artificial intelligence.


15 ways to spot deepfake videos

Deepfakes can be difficult to recognize with your eyes alone, so emerging technologies can help identify characteristics that are harder to see.

To help foster deepfake detection, researchers have been studying soft biometrics, such as a person’s voice and other characteristics visible in videos. Soft biometrics matter because you can also keep an eye out for these telltale signs on your own.

Unnatural eye movement.

Unnatural eye movements – such as an absence of blinking – are a red flag. It is challenging to replicate the act of blinking in a way that looks natural, and it is just as difficult to replicate a real person’s eye movements. A person’s eyes usually follow the person they are talking to.
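One way researchers quantify blinking is the eye aspect ratio (EAR), computed from six facial landmarks around each eye: the EAR dips sharply during a blink, so a near-constant EAR across a whole clip can indicate a subject who never blinks. A minimal sketch in Python, assuming you already have the landmark coordinates from a face-landmark detector (the coordinates below are made up for illustration):

```python
import numpy as np

def eye_aspect_ratio(eye):
    """eye: six (x, y) landmarks ordered around the eye.
    The ratio drops sharply during a blink and stays roughly constant otherwise."""
    eye = np.asarray(eye, dtype=float)
    vertical_1 = np.linalg.norm(eye[1] - eye[5])
    vertical_2 = np.linalg.norm(eye[2] - eye[4])
    horizontal = np.linalg.norm(eye[0] - eye[3])
    return (vertical_1 + vertical_2) / (2.0 * horizontal)

# Hypothetical landmarks for one frame; a real pipeline would supply these per frame.
open_eye = [(0, 2), (2, 4), (4, 4), (6, 2), (4, 0), (2, 0)]
print(eye_aspect_ratio(open_eye))  # track this value over time to count blinks
```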

Unnatural facial expressions.

Facial morphing can often be detected when something just doesn’t seem right about a face – for example, when one image has been crudely stitched over another.

Awkward facial-feature positioning.

If someone’s face seems to point one way while their nose points another, you should be suspicious about the video’s authenticity.

A lack of emotion.

If a person’s face does not show the emotion that should accompany what they are supposedly saying, this can also be a sign of facial morphing or image stitching.

Awkward-looking body or posture.

Another sign is a body shape that doesn’t look natural, or a head and body positioned in an unnatural or inconsistent way. This can be one of the easier inconsistencies to spot, because deepfake technology usually focuses on facial features rather than the whole body.

Unnatural body movement.

Suspect a fake if someone looks distorted or off when they turn to the side or move their head, or if their movements are jerky and disjointed from one frame to the next.

Unnatural coloring.

If you see abnormal skin tone, discoloration, weird lighting, or shadows that appear out of place, it is likely that you are looking at a fake.

Hair that doesn’t look real.

Fake images often fail to render frizzy or flyaway hair, because generating these fine, individual strands is difficult.

Teeth that don’t look real.

In some cases, the algorithm may not be able to generate each individual tooth, so the absence of distinct outlines for individual teeth can be a clue.

Blurring or misalignment.

Usually, you can tell something is wrong if the edges of images are blurry or visuals are misaligned – for example, where someone’s face and neck join their body.

Inconsistent noise or audio.

Deepfake creators tend to spend more time on the video than on the audio. The result can be poor lip-syncing, robotic-sounding voices, strange word pronunciation, digital background noise, or even missing audio.

Images that look unnatural when slowed down.

When you watch a video on a screen larger than your smartphone, or use video-editing software to slow down playback, you can zoom in and examine the images more closely. Zooming in on the lips, for example, lets you see whether someone is really talking or is lip-syncing badly.
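As a small illustration of slowing things down, here is a sketch in Python using OpenCV that pulls individual frames out of a clip so they can be inspected or zoomed into one by one. The file name is a placeholder, and the opencv-python package is assumed to be installed.

```python
import cv2  # pip install opencv-python

video = cv2.VideoCapture("suspect_clip.mp4")  # placeholder path
frame_index = 0

while True:
    ok, frame = video.read()
    if not ok:
        break  # end of clip
    # Save every 5th frame as a still image for close, frame-by-frame inspection.
    if frame_index % 5 == 0:
        cv2.imwrite(f"frame_{frame_index:05d}.png", frame)
    frame_index += 1

video.release()
```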

Hashtag discrepancies.

Some video creators use a cryptographic algorithm to prove their videos are authentic, inserting hashtags – in effect, cryptographic fingerprints – at particular points throughout a video. If those hashtags change, you should assume the video has been manipulated.
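The underlying idea is the same as any hash-based integrity check: fingerprint known-good segments, then recompute and compare. The following is a generic illustration in Python, not the specific verification scheme any platform uses; the file names and chunk size are arbitrary.

```python
import hashlib

def segment_hashes(path, chunk_size=1 << 20):
    """Return a SHA-256 fingerprint for each fixed-size chunk of a video file."""
    hashes = []
    with open(path, "rb") as f:
        while True:
            chunk = f.read(chunk_size)
            if not chunk:
                break
            hashes.append(hashlib.sha256(chunk).hexdigest())
    return hashes

# Compare freshly computed fingerprints against ones recorded when the video was published.
original = segment_hashes("published_clip.mp4")    # placeholder path
suspect = segment_hashes("downloaded_clip.mp4")    # placeholder path
tampered = [i for i, (a, b) in enumerate(zip(original, suspect)) if a != b]
print("modified segments:", tampered or "none")
```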

Digital fingerprints.

Videos can also be tagged with a digital fingerprint created with blockchain technology. Although this method is not foolproof, blockchain-based verification can help establish a video’s authenticity: when a video is created, its content is registered to a ledger that cannot be altered afterward, and that record can later be used to verify the video.

Reverse image searches.

Searching for the original image – or running a reverse image search with the help of a computer – can surface similar videos online and help identify whether an image, audio clip, or video has been altered in any way. Although reverse video search technology is not yet widely available, investing in a tool like this might be worthwhile.
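Reverse image search itself is a web service (Google Images, TinEye, and the like), but the underlying comparison can be illustrated locally with perceptual hashing: visually similar frames produce similar hashes, so a large distance between a suspect frame and a known original suggests alteration. A minimal sketch, assuming the third-party Pillow and ImageHash packages are installed and using placeholder file names:

```python
from PIL import Image       # pip install Pillow
import imagehash            # pip install ImageHash

original = imagehash.phash(Image.open("original_frame.png"))   # placeholder paths
suspect = imagehash.phash(Image.open("suspect_frame.png"))

# Hamming distance between perceptual hashes: 0 means visually identical,
# larger values mean the suspect frame diverges from the original.
distance = original - suspect
print("perceptual distance:", distance)
if distance > 10:   # arbitrary illustrative threshold
    print("frames differ substantially - possible manipulation or a different source")
```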

Using technology to spot deepfakes

Several groups are working on ways to promote greater AI transparency and protect people from deepfakes. A few of them are listed below.

Twitter and Facebook.

Social media platforms like Twitter and Facebook have banned malicious deepfakes. But deepfakes become harder to detect on these networks as the technology keeps improving, and the problem is compounded by the billions of people who use the platforms.

Google.

Google is working on text-to-speech conversion tools, built on neural networks, to verify speakers. Its artificial intelligence algorithms can help spot audio deepfakes.


Adobe.

Adobe has a system that lets you attach attribution details to your content. Adobe is also working on a tool that can identify whether a facial image has been manipulated.

Researchers at the University of Southern California and University of California, Berkeley.

University researchers are leading a notable push to develop new detection technologies. Using machine learning and public datasets to examine soft biometrics – such as facial quirks and how a person speaks – they have been able to detect deepfakes with 92 to 96 percent accuracy.
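As a loose illustration of that approach – and not the researchers’ actual system – here is a toy sketch in Python that trains a simple classifier on fabricated soft-biometric feature vectors (for example, blink rate, head-pose jitter, lip-sync offset) using scikit-learn:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Fabricated feature vectors: [blink rate, head-pose jitter, lip-sync offset].
real = rng.normal(loc=[0.3, 0.1, 0.05], scale=0.05, size=(200, 3))
fake = rng.normal(loc=[0.1, 0.3, 0.20], scale=0.05, size=(200, 3))
X = np.vstack([real, fake])
y = np.array([0] * 200 + [1] * 200)   # 0 = real, 1 = deepfake

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.25, random_state=0)
clf = LogisticRegression().fit(X_train, y_train)
print("toy accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```

Real systems extract these features from video with dedicated vision and speech models; the point of the sketch is only that detection reduces to classifying measurable behavioral signals.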

Deepfake Detection Challenge.

Organizations like the Deepfake Detection Challenge (DFDC) are incentivizing deepfake-detection solutions by fostering innovation through collaboration. The DFDC has shared a dataset of 124,000 videos created with eight facial-modification algorithms.

Deeptrace.

This Amsterdam-based startup firm is developing automated deepfake detection tools to perform background scans of audiovisual media — similar to a deepfake antivirus.

U.S. Defense Advanced Research Projects Agency.

DARPA is funding research to develop automated screening of deepfake technology through a program called MediFor, or Media Forensics.

Deepfake moves and countermoves.

Researchers are responding to the latest deepfakes with their own detection technologies, and the people behind the deepfakes are in turn reacting to the new detection methods. It is a game of whack-a-mole.

While the battle goes back and forth, it’s a good idea to know how to spot deepfakes – at least some of the time – and take steps not to be fooled. It helps to be skeptical of video you see on the internet. The average person can find it very difficult to tell a fake video from a real one, and with so much video being created and posted online, the sheer volume of content compounds the problem.

Conclusion

Deepfake content is a rapidly evolving challenge in the digital world, where technology enables the creation of hyper-realistic fake videos featuring public figures, often for malicious intent. The vast majority of these deepfakes circulate on social media platforms, making the detection and management of this deceptive digital content crucial for these companies. The convincing nature of deepfakes makes it increasingly difficult for the average viewer to distinguish between authentic content and manipulated visuals. This is where deepfake detection technologies come into play, serving as vital tools in the battle against disinformation.

Deepfake detection algorithms have been developed to tackle the increasing sophistication of deepfake content. These advanced technologies leverage machine learning and artificial intelligence to identify subtle discrepancies in the deepfake generation process that are typically overlooked by the human eye. They analyze various aspects of the video, including facial expressions, eye movement, lighting inconsistencies, and even the quality of the skin tone. The combination of these elements provides a comprehensive analysis that helps distinguish deepfakes from authentic content.

Social media companies are increasingly integrating these deepfake detection models into their platforms as part of their larger fake news detection initiatives. This is particularly important given the potential human rights implications and the influence of disinformation campaigns on public opinion, especially during critical events like elections. The misuse of deepfakes to create convincing political videos is one of the foreseeable threats that society must prepare for. Therefore, the development and integration of these technologies have become an epistemic necessity.

Despite the advancements in deepfake detection technologies, the fight against disinformation using deepfakes requires a multi-faceted approach. Education plays a crucial role in arming individuals with the knowledge to discern deepfake content. Organizations should also prioritize transparency and ensure that their consumers understand the distinction between real and deepfake content. Ultimately, combating the effects of deepfakes on society requires a concerted effort from individuals, corporations, and governments alike.
