Introduction

There is no longer any need to write code to make deepfakes; anyone can create them. Deepfakes use a neural network trained to reconstruct a video from a source frame and a latent representation of motion learned during training. Given a new source image and a driving video (a sequence of frames), the model estimates the motion in the driving video and predicts how the object in the source image would move accordingly.
The model tracks every movement in the driving video, including head movements, eye movements, and the mouth movements of speech.
Methodology and Approach

We will explore this approach in more detail before discussing how to create our own sequences with it. First, the algorithm requires a large amount of video data for training. During the training phase, pairs of frames are extracted from the same video and fed into the model. By learning the key points in each pair and the motion between them, the system learns to reconstruct the video.
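To make the training idea concrete, here is a deliberately tiny sketch in plain Python: a single bright pixel stands in for a learned key point, the motion between a frame pair is measured as that key point's displacement, and the driving frame is reconstructed by warping the source frame. This is an illustration of the principle, not the actual first-order-model implementation.

```python
# Toy sketch of the training idea: measure motion between a frame pair
# from the same "video" as a key-point displacement, then reconstruct
# the driving frame by warping the source frame.

def detect_keypoint(frame):
    """Find the brightest pixel, a stand-in for a learned key point."""
    best, pos = -1, (0, 0)
    for r, row in enumerate(frame):
        for c, v in enumerate(row):
            if v > best:
                best, pos = v, (r, c)
    return pos

def warp(frame, dr, dc):
    """Shift the frame by (dr, dc), filling uncovered pixels with 0."""
    h, w = len(frame), len(frame[0])
    out = [[0] * w for _ in range(h)]
    for r in range(h):
        for c in range(w):
            nr, nc = r + dr, c + dc
            if 0 <= nr < h and 0 <= nc < w:
                out[nr][nc] = frame[r][c]
    return out

def reconstruct(source, driving):
    """Estimate motion as key-point displacement, then warp the source."""
    (r0, c0), (r1, c1) = detect_keypoint(source), detect_keypoint(driving)
    return warp(source, r1 - r0, c1 - c0)

# A frame pair from the same video: a bright blob moved down and right.
source  = [[0, 0, 0, 0], [0, 9, 0, 0], [0, 0, 0, 0], [0, 0, 0, 0]]
driving = [[0, 0, 0, 0], [0, 0, 0, 0], [0, 0, 9, 0], [0, 0, 0, 0]]

print(reconstruct(source, driving) == driving)  # prints True
```

The real model, of course, learns its key points and warps rather than hard-coding them, and uses a reconstruction loss between the generated frame and the driving frame to drive that learning.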
The framework consists of a motion estimator and a video generator. The motion estimator first analyzes the video clips to compute a latent representation of the motion. This representation is encoded as motion-specific key point displacements (a key point might be the location of the eyes or the mouth, for example) together with local affine transformations.

This combination can represent a wider family of transformations than key point displacements alone. The model produces two outputs: a dense motion field and an occlusion mask. The mask indicates which parts of the driving video can be reconstructed from the source image by warping, and which parts must be inferred from context because they are not present in the source image.

The video generator then animates the source image using the motion estimator's output and the driving video: it warps the source image so that it resembles the driving video and inpaints the segments that were occluded.
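The estimator's two outputs can also be sketched in miniature. The toy below builds a dense motion field by giving each pixel the first-order motion (displacement plus local affine term) of its nearest key point, and derives an occlusion mask marking pixels that cannot be produced by warping. Real models learn soft masks and richer combinations; this is only a schematic.

```python
# Toy sketch of the motion estimator's two outputs: a dense motion field
# built from per-key-point displacements plus local affine (first-order)
# terms, and an occlusion mask flagging pixels the generator must
# inpaint rather than warp.
import math

def dense_motion(keypoints, grid_size):
    """
    keypoints: list of (position, displacement, affine) where
      position = (r, c) of the key point in the source frame,
      displacement = (dr, dc) of that key point in the driving frame,
      affine = 2x2 matrix of the local first-order motion.
    Each pixel follows the nearest key point's first-order model
    T(z) ~ d_k + A_k (z - p_k); a learned model would blend key points
    with soft masks instead of this hard nearest-neighbour choice.
    """
    field = []
    for r in range(grid_size):
        row = []
        for c in range(grid_size):
            (pr, pc), (dr, dc), a = min(
                keypoints, key=lambda k: math.dist((r, c), k[0]))
            zr, zc = r - pr, c - pc
            row.append((dr + a[0][0] * zr + a[0][1] * zc,
                        dc + a[1][0] * zr + a[1][1] * zc))
        field.append(row)
    return field

def occlusion_mask(field, grid_size):
    """Mark pixels whose motion carries them outside the frame: these
    regions cannot be produced by warping and must be inpainted."""
    mask = []
    for r in range(grid_size):
        row = []
        for c in range(grid_size):
            dr, dc = field[r][c]
            inside = 0 <= r + dr < grid_size and 0 <= c + dc < grid_size
            row.append(1 if inside else 0)  # 1 = warpable, 0 = inpaint
        mask.append(row)
    return mask

# One key point at (1, 1) moving by (0, 2); zero affine term, so the
# local motion is a pure translation for clarity.
zero_affine = [[0, 0], [0, 0]]
field = dense_motion([((1, 1), (0, 2), zero_affine)], grid_size=4)
mask = occlusion_mask(field, 4)
print(field[1][1])  # the key point itself moves by (0, 2)
print(mask[1][3])   # pixel (1, 3) would leave the frame: inpaint
```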
Are deepfakes legal?

Using deepfakes is perfectly legal as long as you do not abuse them. Although the technology is relatively new, laws regarding misrepresentation, slander, and the use of someone's likeness have been in place for many years, and they apply to deepfakes too; using a deepfake of someone to sell goods, for instance, is prohibited.

Using convincing deepfakes to slander someone is not only illegal but also highly unethical. A deepfake must never be used to ruin someone's reputation or to make a person appear to say things they never said, especially when viewers are likely to believe it is real.
Exploring Deepfake Software & Apps
There are many applications that offer deepfake capabilities; some are hard to use, while others are as easy as ordinary Snapchat filters. What follows is a brief overview of the main options, how they are used, and how you can try them yourself.
Deepfake Apps vs. Deepfake Software
You do not need any coding background to use deepfake apps, but they will not offer the most realistic results. Anyone with a smartphone can use them.

Deepfake software, as opposed to simpler apps, is more complex to use but provides far more flexibility. If you can overcome the technical hurdles, you can achieve better results and enjoy more creative freedom.
The Best DeepFake Software

To use deepfake software properly, it helps to learn the Python programming language. You also have to work with your videos while the AI trains, as the model learns on its own to produce smoother and more realistic results.

Deepfake software is complex and must be set up and maintained carefully to work well. You will also need a dedicated graphics card or a virtual graphics processor, along with a reasonably powerful system.
1. First-Order-Model
first-order-model is a library you can download from GitHub and run in a Google Colab notebook.

For this program to work, you need images (taken from the internet, for example) and a driving video: the motion is extracted from the video and used to animate the source image. The key is to use sources with clear movement, because the method relies on anchor points. It works with stickers, human movement, and robotic movement.

With the first-order model, you can film someone and transfer the movements of their eyes, lips, and head from the video onto a still image. A major advantage of this method is that you are not creating a fake image from scratch; you are manipulating and distorting an existing one. However, you are limited to a single static backdrop and scene. Once you understand the process, it takes no more than ten minutes.
Although this method is realistic and relatively simple, it can only manipulate a static image of the chosen individual. The camera is not flexible, which means the characters must stay in position for the scene they are in.
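In practice, the repository is driven through its demo.py script. The sketch below assembles the command line as shown in the project's README at the time of writing; the flag names and the vox-cpk.pth.tar checkpoint file are assumptions that may change, so check the first-order-model GitHub page for the current interface.

```python
# Sketch of invoking first-order-model's demo script. The flags and
# checkpoint name below follow the repository's README and should be
# treated as assumptions, not a guaranteed interface.

def build_demo_command(source_image, driving_video,
                       config="config/vox-256.yaml",
                       checkpoint="vox-cpk.pth.tar"):
    return [
        "python", "demo.py",
        "--config", config,                # model/dataset configuration
        "--checkpoint", checkpoint,        # pretrained weights
        "--source_image", source_image,    # still image to animate
        "--driving_video", driving_video,  # video supplying the motion
        "--relative",     # use relative rather than absolute key-point motion
        "--adapt_scale",  # adapt motion scale to the source face
    ]

cmd = build_demo_command("face.png", "driving.mp4")
print(" ".join(cmd))
```

In a Colab notebook you would run the assembled command in a cell after cloning the repository and downloading a checkpoint.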
2. Wav2Lip Model
Wav2Lip is also a GitHub library; it manipulates existing footage rather than creating a deepfake simulation from scratch.

Wav2Lip is a lip-syncing model that has already been trained on lip-syncing data. It combines a WAV audio file with an image or video, lip-syncing the footage to the audio.

Wav2Lip seems to work well only when the words in the audio are slow and deliberate. If all you need is to sync an audio track to an image or video, Wav2Lip might be the perfect solution for you.
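Broadly, lip-sync models such as Wav2Lip pair each video frame with the window of audio that plays during it (the real model works on mel-spectrogram windows rather than raw samples). The alignment arithmetic is simple:

```python
# Sketch of aligning audio to video frames for lip-syncing: each video
# frame is paired with the slice of audio samples playing during it.
# Wav2Lip itself pairs frames with mel-spectrogram windows, but the
# indexing is the same idea.

def audio_window(frame_index, fps, sample_rate):
    """Return the (start, end) sample indices playing during a frame."""
    start = round(frame_index * sample_rate / fps)
    end = round((frame_index + 1) * sample_rate / fps)
    return start, end

# At 25 fps and a 16 kHz sample rate, each frame covers 640 samples.
print(audio_window(0, fps=25, sample_rate=16000))   # (0, 640)
print(audio_window(10, fps=25, sample_rate=16000))  # (6400, 7040)
```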
3. DeepFaceLab
DeepFaceLab is the high-end software used for generating a real deepfake. It is not easy to install if you have no programming experience, but generally you can learn everything you need from its GitHub page.

A DeepFaceLab tutorial will show you how to extract faces, filter out unusable images, and run the software with the necessary commands.

Mastering DeepFaceLab requires some time and expertise. Regardless of your programming knowledge, training a model and labeling images demands a great deal of GPU time.
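One common way to filter out unusable images is a sharpness test: frames whose variance of the Laplacian falls below a threshold are too blurry to train on. DeepFaceLab ships its own sorting tools; the plain-Python sketch below only illustrates the principle on tiny grayscale grids.

```python
# Sketch of filtering blurry frames by variance of the Laplacian: sharp
# images have strong local intensity changes, blurry ones do not. This
# only illustrates the principle behind blur-based frame filtering.

def laplacian_variance(gray):
    """gray: 2D list of pixel intensities. Returns the variance of the
    4-neighbour Laplacian over the interior pixels."""
    h, w = len(gray), len(gray[0])
    vals = []
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            lap = (4 * gray[r][c] - gray[r - 1][c] - gray[r + 1][c]
                   - gray[r][c - 1] - gray[r][c + 1])
            vals.append(lap)
    mean = sum(vals) / len(vals)
    return sum((v - mean) ** 2 for v in vals) / len(vals)

def keep_sharp(frames, threshold):
    """Keep only frames sharp enough to be usable for training."""
    return [f for f in frames if laplacian_variance(f) >= threshold]

sharp = [[0, 255] * 2 for _ in range(4)]  # high-contrast stripes: sharp
flat  = [[128] * 4 for _ in range(4)]     # uniform grey: maximally blurry
print(laplacian_variance(flat))                     # 0.0
print(len(keep_sharp([sharp, flat], threshold=1)))  # 1: only the sharp frame
```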
DeepFaceLab can achieve the best professional results available today, but it can be quite challenging to use.
4. Zao
This Chinese app lets you make fake videos in a few seconds. If you want to have some fun with deepfakes and don’t want to spend a lot of time or effort on them, Zao might be a good option.
With the extensive library of clips from popular movies and TV shows, you can easily create a deepfake. The rest of the process is handled automatically by Zao.
The app is free to use and is available for Android and iOS.
5. Faceswap
Faceswap is an open-source, free deepfake app that anyone can use. The software is based on TensorFlow, Keras, and Python, and it can also serve as a way to learn about machine learning.

Faceswap has an active forum where you can ask questions about creating deepfake videos and find tutorials that show you how. If you are interested in the process of creating deepfake videos rather than just the results, this is a great option. There are also guides for complete newcomers.

Faceswap is available for Windows, Mac, and Linux, although the developers recommend pairing a powerful CPU with a graphics card, since face swapping on a CPU alone is incredibly slow.
6. Deep Art Effects
Deep Art Effects stands out on this list as a different kind of deepfake application. It works with images rather than videos and is designed to turn pictures into art. The algorithm is trained on the works of famous artists such as Van Gogh, Picasso, and Michelangelo.
Let the A.I. transform any picture in your gallery into a unique piece of art simply by uploading it, choosing one of the styles available, and letting it do the rest. Android and iOS mobile versions as well as Windows, Mac, and Linux desktop versions of Deep Art Effects are free to download.
7. REFACE
Do you like messaging your friends and family with GIFs and memes? Then the REFACE app is the one you want. It superimposes your face onto GIFs and images using a face-swapping algorithm called RefaceAI.

With REFACE, you can make deepfake images very easily: just take a picture of your face and select a popular GIF or meme, and you will receive a personalized image featuring your face.

The accuracy of the deepfake will vary with the symmetry of your face and the GIF you choose. Fortunately, REFACE offers plenty of options to try until you find the deepfake you are looking for.
The app is free and available for both Android and iOS.
8. Morphin
Another deepfake app built around internet memes, Morphin offers a wide range of popular high-resolution GIFs that you can share with your friends and family.

The app is very similar to REFACE in its overall design, but Morphin has a cartoonish look rather than a realistic one, and it lets you search the collection by tags so you can find exactly what you are looking for. You simply take a selfie, choose a GIF, and the app produces the deepfake.
The app is free and available for Android and iOS.
9. Jiggy
Jiggy is a deepfake app that can make anyone dance. It will not film you dancing; instead, it makes it look as if you are. Creating a dancing deepfake only requires selecting a face and some dance moves, and the app blends the two together to produce the result.

This is possible thanks to the motion-transfer technology used in the app, which turns a photo of a person into an animated character. Jiggy is free to use on both Android and iOS devices.
An exploration of state-of-the-art detection technology
Recent research has introduced several deepfake video-detection (DVD) methods. Some of these methods claim detection rates for deepfakes exceeding 99 percent, but such reports should be interpreted with caution: the difficulty of detecting video manipulation varies widely with factors such as the level of compression, the resolution of the images, and the composition of the test set.

An analysis of seven state-of-the-art detectors on five public datasets frequently used in the field showed wide variation in accuracy, ranging from 30 percent to 97 percent, and each detector's accuracy varied widely across the five datasets. When turned to an unfamiliar dataset, these detectors usually do not perform well, since each has been tuned to look for a particular type of manipulation. Much work is under way in this area, and some detectors are clearly better than others, but none performs uniformly well.
Advances in detection methods and advances in deepfake-generation methods alternate. If the defense is to succeed, DVD methods must improve continuously by anticipating the next generation of deepfake content.

As deepfake methods continue to develop, adversaries are likely to extend them to produce videos with a high degree of dynamic realism in the near future. Most existing deepfake techniques produce somewhat static videos: stationary subjects, constant lighting, and unmoving backgrounds. Future deepfakes, in contrast, will incorporate dynamic lighting, poses, and backgrounds, as well as dynamic camera angles. These dynamic attributes risk reducing the effectiveness of existing deepfake-detection algorithms.

Moreover, dynamism could make deepfakes more credible to human viewers. A video of a foreign leader talking while driving past on a golf cart would be more engaging, and more believable, than the same leader speaking directly to the camera in a static studio setting.
To combat this threat, both academics and companies are working on detection models based on deep neural networks that can recognize various types of deepfaked media.

Large organizations have also tackled the issue. Facebook, for example, offered US$1 million in cash prizes to the top five winners of the Deepfake Detection Challenge (DFDC) held in 2019. Participants were asked to build detector models trained and validated on a dataset of 100,000 deepfake videos. Initially, the dataset was available only to participants of the competition, but it has since been made public.

The dataset was created by Facebook and Microsoft with assistance from a number of academic institutions. More than 35,000 models were submitted; the winning model achieved 82 percent accuracy on the validation set used during training but only 65 percent on a held-out test set of 10,000 videos, which the participants could not access during training. The discrepancy between the validation and test results illustrates a familiar problem: DNN-based classification models tend to generalize poorly because of overfitting.

Creating a photorealistic deepfake involves many steps: capturing high-quality source footage of the correct length, matching the appearance of the source and destination, using the appropriate model for training, and skilled postproduction. A flaw in any of these steps leaves traces that make a deepfake easier to detect, and a model can learn to extract these traces if it is trained on enough deepfakes of varying quality, so that the full range of possible flaws is covered. To build such a detection program, it may be necessary to augment your own deepfake dataset with a publicly accessible one, such as the dataset from the Facebook DFDC.
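That recipe, training on deepfakes of varying quality so the full range of flaws is covered, can be sketched with a toy classifier. Real detectors are deep networks trained on video datasets such as the DFDC; the from-scratch logistic regression below uses a single made-up "artifact score" feature purely to illustrate the training loop.

```python
# Toy detector sketch: logistic regression from scratch on a single
# hypothetical "artifact score" feature (higher = more visible warping
# artifacts). Real detectors are DNNs trained on video datasets such as
# the DFDC; this only illustrates the train-on-varied-fakes idea.
import math, random

def train(samples, labels, lr=0.5, epochs=500):
    """Fit a 1-D logistic regression with plain stochastic gradient descent."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, y in zip(samples, labels):
            p = 1 / (1 + math.exp(-(w * x + b)))  # sigmoid
            w -= lr * (p - y) * x                 # gradient step on weight
            b -= lr * (p - y)                     # gradient step on bias
    return w, b

def predict(w, b, x):
    return 1 if 1 / (1 + math.exp(-(w * x + b))) > 0.5 else 0

random.seed(0)
# Fakes of varying quality: artifact scores from subtle to blatant.
fakes = [random.uniform(0.6, 3.0) for _ in range(50)]
reals = [random.uniform(-1.0, 0.4) for _ in range(50)]
w, b = train(fakes + reals, [1] * 50 + [0] * 50)

acc = sum(predict(w, b, x) == 1 for x in fakes) + \
      sum(predict(w, b, x) == 0 for x in reals)
print(acc / 100)  # separable toy data, so accuracy should be high
```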
With the increase in the use of deepfakes and their potential negative impact, many people are concerned about their use to misrepresent others. As machine learning and artificial intelligence improve, deepfakes can be used to propagate fake news: manipulated videos shared on social media platforms reach wide audiences and can cause serious problems.

This is especially dangerous for deepfake content of government leaders, which can provoke civil unrest, riots, or mistrust in the government. Adversarial networks combined with deep learning can turn target video into fake videos with serious consequences. As the line between real and deepfake video begins to blur, trust becomes a difficult choice, and with open-source tools the conversion process keeps getting easier every day.

As individuals, there are a couple of things we can do when we see a suspected deepfake video. Question the content: if you do not believe the person would say this, trust your instinct until you have irrefutable evidence otherwise. To identify deepfake videos, check for blurriness, look at the lighting conditions, and watch for unnatural bright or dark spots, odd head shapes or head angles, inconsistent dynamism in the lighting, background noise, differences in skin tone in face-swap videos, and the signatures of known tools such as Canny AI.
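One of these checks, a difference in skin tone in face-swap videos, is straightforward to automate: compare the average tone inside the swapped face region with the surrounding area and flag a large gap. The face-region coordinates below are hypothetical; a real pipeline would obtain them from a face detector.

```python
# Sketch of the skin-tone check for face-swap videos: compare the mean
# intensity of the (hypothetical) swapped face region against the rest
# of the frame and flag a suspicious mismatch. A real pipeline would
# locate the face box with a face detector.

def mean_region(gray, top, left, bottom, right):
    vals = [gray[r][c] for r in range(top, bottom) for c in range(left, right)]
    return sum(vals) / len(vals)

def tone_mismatch(gray, face_box, threshold=30):
    """Return True if the face region's mean tone differs from the
    surrounding area by more than the threshold."""
    top, left, bottom, right = face_box
    face = mean_region(gray, top, left, bottom, right)
    whole = mean_region(gray, 0, 0, len(gray), len(gray[0]))
    # Mean of everything outside the face, derived from the two totals.
    n_all = len(gray) * len(gray[0])
    n_face = (bottom - top) * (right - left)
    outside = (whole * n_all - face * n_face) / (n_all - n_face)
    return abs(face - outside) > threshold

# 6x6 frame whose central 2x2 "face" is much lighter than the rest.
frame = [[100] * 6 for _ in range(6)]
for r in range(2, 4):
    for c in range(2, 4):
        frame[r][c] = 180
print(tone_mismatch(frame, (2, 2, 4, 4)))  # True: tones differ by 80
```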
Realistic deepfakes built from images of well-known figures, such as former President Barack Obama, are very hard to spot and easy to create, since there is a treasure trove of public data on popular leaders.

Advances in deepfakes and deepfake methods are troubling in a day and age when democracy across the globe is in question. The art of deepfake video is growing alongside increasingly sophisticated technology, from deepfake software frameworks to the individual components of deepfake creation. At some point, even a well-trained eye will find them difficult to spot.

Improvements in deepfake technology and deepfake video creation with the help of encoder-decoder networks will lead to hyper-realistic videos. Even when these videos are built from disparate sources using generative adversarial networks, it will be very difficult to prove that a composite is fake. With more deepfake videos in circulation, increasingly viewed on smaller devices, it is harder and harder to find the nuances that help us decipher deepfake composites. It is imperative that we improve detection accuracy and build detection systems powered by better datasets.

Existing deepfake-detection models have a lot of catching up to do to identify the small, detailed nuances in deepfake videos powered by sophisticated face-swap technology. This will require extensive effort and time: many iterations of detection software, and substantial time per iteration, to detect videos built with face-swap technology.

We have a long road ahead of us before deepfake videos can be detected, and if necessary rooted out of the system, in real time.
Introduction
There is no need to write any code to make deepfakes now. Anyone can create them. Deepfakes use a neural network trained to reconstruct a video from a source frame and a latent representation of motion learned during training. A model which calculates the motion of an object in a new source image (e.g. a sequence of frames) and in accordance with the motion displayed in any video, it predicts how that object will move in its new source image.
This model tracks all movements and speech in an animation, including head movements and eye movements.
Methodology and Approach
This approach will be explored in more detail in the following section before we discuss how we can create our own sequences based on it. Firstly, the algorithm requires a large amount of video data to be trained. A frame pair is extracted from the same video during the training phase and fed into the model during that phase. As a result of its ability to learn what the key points are in each pair and the motion between each pair are, the system is able to reconstruct the video.
Video generator and motion estimator make up this framework. As a starting point, the motion estimator is analyzing the video clips to determine what the latent representation of the motion is. This framework begins by analyzing the video and then calculating the latent representation of the motion. Motion-specific key point displacements (for instance, in this case, the key point could be the location of the eyes, mouth, and so on) and local affine transformations would be examples of motion-specific key point displacements.
Instead of using only the key point displacements to model a larger family of transformations, this combination can represent a wider spectrum of transformations. In the end, you get two outputs from the model: a dense motion field and an occlusion mask. There are two parts to this mask: which parts of the driving video can be reconstructed from the source image by warping, and which parts should be inferred from the context because they aren’t present in the source image.
It then animates the source image based on the motion detector output and the driving video; it warps the source image in a way that resembles the driving video and inverts segments that were occluded.
Also Read: What Is A Deepfake?
Are deepfakes legal?
It is perfectly legal to use deepfakes as long as you do not abuse them. The use of deepfakes as a method of selling goods is prohibited, even though it is a relatively new technology. Laws have been in place for many years regarding misrepresentation, slander, and using someone’s likeness.
As well as being illegal, using convincing deepfakes for slandering someone is also highly unethical. Convincing deepfakes should never be used for any sort of slander. If you believe that a deepfake is real, you can’t use it to ruin someone’s reputation or to force them to say things that you wouldn’t normally say. Under the assumption that it is real, you can’t use deepfakes to ruin someone’s reputation.
Exploring Deepfake Software & Apps
There are many different applications that offer deepfake capabilities, some of them harder to use than other apps and some of them as easy to use as regular old Snapchat filters. What follows is a brief overview of what has been discovered, how it is being used, and how you can do it yourself.
Deepfake Apps vs. Deepfake Softwares
You do not need any background knowledge in coding to use deepfake apps, yet they will not offer the most realistic results. Anyone with a smartphone can use them.
As opposed to simpler apps, deepfake software is more complex to use, but provides a lot more flexibility in how you can use your videos. With deepfake software, you can achieve better results and enjoy more creative freedom if you overcome the technical hurdles.
Also Read: Journal – AI Powered Note Taking App.
The Best DeepFake Softwares
It is important for you to learn the Python programming language in order to use deepfake software properly. In order for AI to work properly, you have to work with your videos as they self-learn to become smoother and more realistic. As a result of this, the machine must discover how to become smoother and more realistic on its own.
Deepfake is an extremely complex program that needs to be maintained properly and without errors to be successful. It is also necessary to have a dedicated graphics card or a virtual graphics processor. To maintain it successfully and without errors, you will need a sophisticated system.
1. First-Order-Model
It is possible to run a Google Collab document using a GitHub library called first-order-model that can be downloaded from GitHub.
In order for this program to work, you will need to use some images taken from the internet. To generate the source image, you will need to extract motions from any videos and use them to create movement in the images. The key to using this method is to make sure that you use sources with movement input in order to make it work, which means that anchor points will be needed to make it work. This method works with stickers, human movement, and robotic movement.
It is possible to create a First-order model if you film someone, and then convert the movements of your eyes, lips, and head from the video to a still image. There are many advantages to using this method because you are not creating a fake image at all, rather you are manipulating and distorting an existing image. However, you are limited to using one static backdrop and scene for your images. Once you understand the process, it does not take more than ten minutes to complete.
Although this method is realistic and relatively simple, it can only be used to manipulate a static image of the chosen individual. The camera is not very flexible – which means the characters need to remain in position for the scene that they are in.
2. Wav2Lip Model
Also a GitHub library, it manipulates images rather than recreating a deepfake simulation from scratch.
Wave2Lip is a lip-syncing model that has already been trained with lip-syncing data. It combines a wav file with an image, which then lip syncs the wav file with the image.
It seems that Wav2Lip only works quite well when the words in the audio files are slow and deliberate. If all you are looking for is a wav to mp3 converter, then Wav2Lip might be the perfect solution for you.
3. DeepFaceLab
Generally, you can learn all the information you need from the Github page if you do not have any experience with programming. It is the high-end software used for generating a real DeepFake. This software is not easy to install if you don’t have any programming experience.
Using this DeepFaceLab tutorial, you can learn how to extract, filter out unusable images, and run the software using the necessary commands.
In order to master the Deepface Lab, some time and expertise are required. In order to train a model and label images with a GPU, regardless of whether or not you have programming knowledge, you will need a great deal of GPU time.
It is true that Deepfacelab is able to achieve the best professional results today, but it can be quite challenging to use.
4. Zao
This Chinese app lets you make fake videos in a few seconds. If you want to have some fun with deepfakes and don’t want to spend a lot of time or effort on them, Zao might be a good option.
With the extensive library of clips from popular movies and TV shows, you can easily create a deepfake. The rest of the process is handled automatically by Zao.
The app is free to use and is available for Android and iOS.
5. Faceswap
Faceswap is an open source and free deepfake app that can be used by anyone. The software is based on Tensorflow, Keras, and Python and can be used to train and learn machine learning.
There is an active forum on Faceswap where you can ask questions about how to create deep fake videos and see tutorials that show you how to do so. So, if you are interested in the process of creating deep fake videos rather than in the deep fake videos themselves, then this is a great option for you. You can also get guides on how to use the software if you are a complete newbie.
Faceswap is available for Windows, Mac, and Linux. Although according to the developer’s recommendations, it is advised to use a more powerful CPU in combination with a graphics card since the process of face swapping on a CPU is incredibly slow.
6. Deep Art Effects
As a unique deepfake application, Deep Art Effects stands out on this list. It is designed to work with images rather than videos, and can be used to make art from images. During the training process, famous artists’ works such as Van Gogh, Picasso, and Michelangelo’s are being used in order to train the algorithm.
Let the A.I. transform any picture in your gallery into a unique piece of art simply by uploading it, choosing one of the styles available, and letting it do the rest. Android and iOS mobile versions as well as Windows, Mac, and Linux desktop versions of Deep Art Effects are free to download.
7. REFACE
What are your thoughts on messaging your friends and family with a lot of GIFs and memes? In that case, the REFACE app is the one you want. In the software, your face is superimposed over GIFs and images using a facial swapping algorithm called RefaceAI.
With REFACE, you can make deep-fake images very easily. Just select a popular GIF or meme, and then take a picture of your face. You will then receive a personalized image showing your face.
There will be a variation in the degree of accuracy of the deepfake based on the symmetry of your face and the GIF that you use. Fortunately, REFACE has plenty of options that you can try out until you find the right deepfake that you are looking for.
The app is free and available for both Android and iOS.
8. Morphin
As another deepfake app for tracking internet memes, Morphin offers a wide range of popular high-resolution GIFs that you can share with your friends and family.
This app is very similar to REFACE in its overall design. Morphin, on the other hand, has a cartoonish look rather than a realistic one, and it allows you to search through the collection by tags, so you can get exactly what you are looking for. You can take a selfie with a GIF and you can take a deepfake by choosing a GIF based on the selfie you took.
The app is free and available for Android and iOS.
9. Jiggy
The Jiggy app is a deepfake app that can make anyone dance. It will not make you dance directly, but it will make you dance as if you are moving. A dancing deepfake only requires the selection of a face and some dance moves to be created. With the app, you will be able to blend these two together and produce a deepfake.
This is possible thanks to the motion transfer technology used in the app. An interactive animated character is created from a photo of a person that can be interacted with. You can use Jiggy free of charge on both Android and iOS devices.
Also Read: How to Spot a Deepfake: Tips for Combatting Disinformation
An exploration of state-of-the-art detection technology
The introduction of several deepfake video-detection (DVD) methods has taken place as a result of recent research. In some cases, some of these methods claim to be accurate in detecting viruses with a detection rate that exceeds 99 percent, but such reports should be interpreted with caution. Based on a number of different factors, including the level of compression, the resolution of the image, and the composition of the test set, there are wide variations in the amount of difficulty in detecting video manipulation.
An analysis of the performances of seven state-of-the-art detectors using five public datasets frequently used in the field showed that there was little difference between them in terms of accuracy, ranging from 30 percent to 97 percent. All five datasets that were tested exhibited wide ranges of accuracies in the detectors. When these detectors are turned to a unique set of data, these detectors usually do not perform well, since they have been configured to look for a certain type of manipulation. There are many efforts being undertaken in this area, and it is certainly true that there are certain detectors that are vastly better than others, but this does not mean that all of them are equal.
On one hand, advances in detection methods, on the other hand, advances in deepfake-generation methods alternate. It will be imperative to continuously improve on DVD methods by anticipating the next generation of deepfake content if the defense is to be successful.
The adversaries are likely to extend deepfake methods by creating videos with high degrees of dynamic realism in the near future, which will be a result of the ongoing development of deepfake methods. There are currently a number of existing deepfake techniques that aim to produce videos that are somewhat static in the sense that they show stationary subjects with constant lighting and unmoving backgrounds. In spite of this, the future deepfakes will incorporate dynamic lighting, poses, and backgrounds, as well as dynamic camera angles. There is a risk that the dynamic attributes of these videos may reduce the efficiency of existing deepfake detection algorithms.
Moreover, as far as human beings are concerned, the use of dynamism in deepfakes could make them more credible to them. The video of a foreign leader driving by on a golf cart and talking would be more engaging, as it would be more realistic and engaging than the exact same leader speaking directly to the camera in a static studio setting.
To combat this threat, both academics and companies are building detection models based on deep neural networks that can identify various types of deepfaked media.
Large organizations have also tackled the issue. Facebook, for example, provided US$1 million in cash prizes to the top five winners of the Deepfake Detection Challenge (DFDC) held in 2019. Participants were asked to build detector models trained and validated on a dataset of 100,000 deepfake videos created with various methods. Initially the dataset was available only to competition participants, but it has since been made public.
The dataset was created by Facebook and Microsoft with assistance from a number of academic institutions. More than 35,000 models were submitted. The winning model achieved 82 percent accuracy on the validation set used during training, but only 65 percent on a test set of 10,000 videos that participants never had access to. This discrepancy between validation and test performance illustrates a common weakness of DNN-based classification models: they tend to overfit and generalize poorly.
Consider the many steps involved in creating a photorealistic deepfake: capturing high-quality source footage of the correct length, matching the appearance of the source and destination, using an appropriate model for training, and skilled postproduction. Each step is an opportunity to introduce detectable flaws, which makes detection easier. A model can learn to exploit these flaws if it is trained on enough different deepfakes of varying quality to cover the range of possible defects. Building such a detector may require augmenting one's own deepfakes with a publicly accessible dataset, such as the Facebook DFDC dataset.
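Assembling that kind of mixed training pool can be sketched as follows. The source names and video identifiers are hypothetical; the assumption is simply that each source (your own renders, a public set such as the DFDC) provides a list of videos with fake/real labels, and that the same video may appear in more than one source.

```python
# Sketch: merging deepfakes from several sources into one labeled
# training pool. Source names and video ids are hypothetical.

def build_training_pool(sources):
    """Combine {source_name: [(video_id, is_fake), ...]} into one
    deduplicated list of (video_id, is_fake, source_name) records."""
    seen = set()
    pool = []
    for source_name, videos in sources.items():
        for video_id, is_fake in videos:
            if video_id in seen:
                continue  # skip videos that appear in multiple sources
            seen.add(video_id)
            pool.append((video_id, is_fake, source_name))
    return pool

sources = {
    "our_renders": [("vid_001", 1), ("vid_002", 1)],
    "dfdc_public": [("vid_100", 1), ("vid_101", 0), ("vid_001", 1)],
}
pool = build_training_pool(sources)
fakes = sum(is_fake for _, is_fake, _ in pool)
print(len(pool), "videos,", fakes, "fakes")  # 4 videos, 3 fakes
```

Tracking the source of each record also makes it easy to check that the detector is not just memorizing the artifacts of a single generation method.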
Given the rise of deepfakes and their potential for harm, many people are concerned about their use to misrepresent others. As machine learning and artificial intelligence improve, deepfakes can be used to propagate fake news: manipulated versions of original videos, shared on social media platforms, reach wide audiences and can cause serious problems.
Deepfake content depicting government leaders is especially dangerous, as it can provoke civil unrest, anarchy, riots, or mistrust in government. Adversarial networks can combine deepfake techniques with deep learning on a target video to create fakes with serious consequences. As the line between real and deepfake video blurs, deciding what to trust becomes difficult, and with open-source tools the conversion process keeps getting easier.
As individuals, when we see a suspicious video, we can start by questioning its content: if you do not believe the person would say this, trust your instinct until you have solid evidence otherwise. To identify deepfake videos, check for blurriness, examine the lighting conditions, look for unnatural bright or dark spots, inspect the head shape and angle, watch for inconsistent dynamism in the lighting and for background noise, and look for differences in skin tone in face-swap videos, such as those produced with tools like Canny AI.
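The first of those checks, blurriness, can be partly automated. A common sharpness heuristic (not specific to any one deepfake tool) scores an image by the variance of its Laplacian: blurry regions produce uniformly small responses and hence low variance. Real pipelines would use a library such as OpenCV (`cv2.Laplacian`) on video frames; the pure-Python sketch below shows the idea on a grayscale image stored as a list of lists.

```python
# Sketch: blur scoring via variance of the Laplacian, a standard
# sharpness heuristic. Low variance suggests a blurry region, such as
# the softened face boundary in a face-swap video.

def laplacian_variance(img):
    """img: 2-D list of grayscale values. Returns the variance of the
    4-neighbour Laplacian over interior pixels; low values = blurry."""
    h, w = len(img), len(img[0])
    responses = []
    for y in range(1, h - 1):
        for x in range(1, w - 1):
            lap = (img[y - 1][x] + img[y + 1][x] +
                   img[y][x - 1] + img[y][x + 1] - 4 * img[y][x])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)

# A sharp vertical edge versus a featureless (maximally blurry) patch.
sharp = [[0, 0, 255, 255] for _ in range(4)]
flat = [[128] * 4 for _ in range(4)]
print(laplacian_variance(sharp) > laplacian_variance(flat))  # True
```

In practice one would compute this score per region and flag frames where the face is markedly softer than its surroundings.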
Realistic deepfakes built from public imagery, like the one below of former President Barack Obama, are very hard to spot and easy to create, because a treasure trove of footage of popular leaders exists in the public domain.
These advances in deepfake methods are troubling at a time when democracy across the globe is in question. Deepfake video keeps improving as the supporting technology, from software frameworks to the individual components of the creation pipeline, grows more sophisticated. At some point even a well-trained eye will find fakes difficult to spot.
Improvements in encoder-decoder networks will lead to hyper-realistic deepfake videos. Even though these videos are assembled from disparate sources by generative adversarial networks, it will be very difficult to prove that a given composite is fake. With more deepfake videos in circulation, and more viewing happening on small screens, the nuances that help us identify composites become ever harder to find. It is imperative that we improve detection accuracy and build detection systems powered by better datasets.
Existing deepfake-detection models have a lot of catching up to do before they can identify the fine-grained flaws in videos produced by modern face-swap technology. This will require extensive effort: many iterations of detection software, and significant time per iteration.
We have a long road ahead of us before deepfake videos can be detected, and if necessary rooted out of the system, in real time.