Introduction: How to make a deepfake.
Anyone can make deepfakes without writing a single line of code. A deepfake uses a neural network trained to reconstruct a video from a source frame and a latent representation of motion learned during training. At inference time, the model takes a new source image and a driving video as input and predicts how the object in the source image moves in accordance with the motion displayed by the driving video.
In an animation, the model tracks everything that matters: head movements, speech, eye movements, and even body movements.
Methodology and Approach
Let us look at this approach in a little more detail before creating our own sequence. First, a large collection of video data is assembled to train the algorithm. During training, the authors extract pairs of frames from the same video and feed them into the model. To reconstruct the video, the system learns which key points appear in each pair and how to represent the motion between them.
To achieve this, the framework is divided into two parts: the motion estimator and the video generator. The motion estimator analyzes the driving video to infer a latent representation of its motion, expressed as motion-specific key point displacements (a key point being, for example, the position of the eyes or mouth) together with local affine transformations. Combining displacements with local affine transformations lets the model represent a wider family of transformations than key point displacements alone. The estimator produces two outputs: a dense motion field and an occlusion mask. The mask distinguishes which parts of the driving frame can be reconstructed by warping the source image, and which parts must be inferred from context because they aren’t visible in the source image.
The video generator then animates the source image based on the motion estimator’s output and the driving video: it warps the source image so that it follows the driving video and inpaints the regions that were occluded.
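The generator’s final compositing step can be sketched in a few lines of NumPy. This is an illustrative simplification, not the paper’s implementation: in the real model, neural networks predict the dense motion field, the occlusion mask, and the inpainted content, whereas here they are simply passed in as arrays.

```python
import numpy as np

def warp_nearest(source, flow):
    """Warp a (H, W) source image: each output pixel (y, x) samples the
    source at (y + flow[y, x, 0], x + flow[y, x, 1]), with nearest-neighbor
    rounding and edge clamping."""
    h, w = source.shape
    ys, xs = np.mgrid[0:h, 0:w]
    sy = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, h - 1)
    sx = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, w - 1)
    return source[sy, sx]

def compose(source, flow, occlusion, inpainted):
    """Blend the warped source with inpainted content using the occlusion
    mask (1 = recoverable by warping the source, 0 = must be inpainted)."""
    warped = warp_nearest(source, flow)
    return occlusion * warped + (1.0 - occlusion) * inpainted
```

With an all-ones mask and a zero flow field, `compose` returns the source unchanged; with an all-zeros mask, the output is entirely the inpainted content.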
Also Read: What Is A Deepfake?
The Best DeepFake Software – How To Make A Deepfake?
The most important thing is not to use deepfake technology unethically. The technology is still quite unreliable, and it is far from perfectly accurate for many faces, but it is convincing enough to get you into serious trouble and even cause harm to the person you are mimicking. Deepfakes can be a great way to get a laugh or parody a celebrity, provided you use the software in a lighthearted manner and avoid misusing it.
Exploring Deepfake Software & Apps
Many different applications offer deepfake capabilities, some of them harder to use than others and some as easy to use as regular old Snapchat filters. The following is a brief overview of what is out there, how it is being used, and how you can create your own.
Are deepfakes legal?
As long as you don’t abuse them, deepfakes are perfectly legal. Even though this is a relatively new technology, laws regarding misrepresentation, slander, and the use of someone’s likeness have been in place for many years. We strongly recommend that you not misuse this technology. Using deepfakes to sell goods is prohibited.
Deepfakes should never be used to slander anyone; doing so is not only illegal but also highly unethical. It is not acceptable to use deepfakes to try to ruin someone’s reputation or to make them appear to say something they never said, under the presumption that viewers will take it as real.
Also Read: Our Favorite AI from Video Games.
Deepfake Apps vs. Deepfake Software
Deepfake apps are easy to use, but you will not get the most realistic results from them. Anyone with a smartphone can use them, and no background in coding is required.
Deepfake software, on the other hand, is more difficult to use. It offers far more flexibility in how you work with your videos than the limited capabilities of simpler apps. If you overcome the technical hurdles, you can get better results and greater creative freedom.
Also Read: Journal – AI Powered Note Taking App.
To use deepfake software properly, you’ll need to learn some of the Python programming language. You will be working with your videos as the AI trains itself to produce smooth, realistic output; the machine must discover how to achieve this on its own.
Additionally, you will need a dedicated graphics card or a virtual GPU (Google Cloud is a popular option). Deepfake software is demanding on any computer, so a capable, well-maintained system is essential for running it without errors.
There is a GitHub library called first-order-model, and you can run it through a Google Colab notebook.
The program takes a source image (for example, one found on the internet) and a driving video. It extracts motion from the driving video and uses it to animate the source image. The method relies on anchor points to transfer the movement, and it works with stickers and human motion as well as robotic movements.
A first-order-model deepfake can be created by filming a video of yourself or someone you know, and then transferring the eye, lip, and head movements from the video onto a still image. The great thing about it is that rather than creating a “fake” image from scratch, you are just manipulating and distorting an actual image. The drawback is that you are limited to one static scene and backdrop. Once you have learned the process, a clip takes about ten minutes to create.
Although this method is realistic and relatively simple, it is restricted to manipulating a static image of the person you have selected. The framing is not flexible: the subject must remain in the same scene and relative position as in the picture.
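As a practical note, the pretrained first-order-model checkpoints generally expect source images and driving frames at 256×256 resolution (check the repo’s README for your checkpoint). Here is a minimal, dependency-light sketch of the usual preprocessing, center-cropping to a square and resizing with nearest-neighbor sampling; in practice you would use a library resizer for better quality:

```python
import numpy as np

def center_crop_square(img):
    """Crop an (H, W, C) image to its largest centered square."""
    h, w = img.shape[:2]
    side = min(h, w)
    top, left = (h - side) // 2, (w - side) // 2
    return img[top:top + side, left:left + side]

def resize_nearest(img, size=256):
    """Nearest-neighbor resize of a square (S, S, C) image to (size, size, C)."""
    s = img.shape[0]
    idx = (np.arange(size) * s / size).astype(int)  # source row/col per output pixel
    return img[idx][:, idx]

def to_model_input(img, size=256):
    """Crop and resize a frame before feeding it to the model."""
    return resize_nearest(center_crop_square(img), size)
```

Applied to every driving-video frame and to the source image, this yields inputs of the shape the pretrained networks expect.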
Wav2Lip is also a GitHub library, but instead of recreating a whole deepfake scene, it manipulates an existing image or video.
Wav2Lip differs from other lip-syncing models in that it comes pre-trained on dedicated lip-syncing data. To perform a lip sync, you only need to pair a .wav file with an image, and that image will then lip-sync to the audio. That’s where the name Wav2Lip comes from.
This seems to work quite well when the words in the audio file are slow and deliberate. If lip-syncing is all you need, Wav2Lip might be an ideal solution.
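Running Wav2Lip from its repository boils down to a single inference command. The flag names below follow the Wav2Lip README at the time of writing (a checkpoint path, a face video or image, and an audio file); verify them against your checkout, since the interface may change. A small helper that assembles the command:

```python
import subprocess  # used in the commented invocation below

def wav2lip_command(checkpoint, face, audio, out="results/result_voice.mp4"):
    """Build the Wav2Lip inference command as a list of arguments,
    matching the flags documented in the repository's README."""
    return [
        "python", "inference.py",
        "--checkpoint_path", checkpoint,  # pretrained Wav2Lip weights
        "--face", face,                   # video or still image of the speaker
        "--audio", audio,                 # the .wav file to lip-sync to
        "--outfile", out,
    ]

# To actually run it from inside the Wav2Lip repo directory:
# subprocess.run(wav2lip_command("checkpoints/wav2lip_gan.pth",
#                                "face.jpg", "speech.wav"), check=True)
```

Building the command as a list (rather than one shell string) avoids quoting problems with file paths that contain spaces.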
DeepFaceLab is high-end software for generating a real deepfake. Installing it and its dependencies can be a bit difficult if you have no programming experience, but in most cases you can find all the information you need on the GitHub page.
You can also use this DeepFaceLab tutorial, which walks you through extraction, filtering out unusable pictures or videos, and the commands needed to train and run the software.
DeepFaceLab creates incredible results and can be learned by a novice, but it takes time and practice. Whether or not you have programming knowledge, labeling images and training models demands a lot of your time and GPU hours to teach the algorithm how and what to modify.
The most professional results today can be achieved with DeepFaceLab, but it is difficult to use.
Zao is a Chinese app that lets you make fake videos within a few seconds. Zao is a good option if you want to have a bit of fun with your deepfakes and don’t want to put in too much effort or time into them.
Using the app is quick and simple. To create your first deepfake, all you need to do is choose a clip from the app’s extensive library of popular movies and TV shows; Zao handles the rest automatically.
The app is free to use and is available for Android and iOS.
Faceswap is a free, open-source deepfake app that anyone can use. It is based on TensorFlow, Keras, and Python and can be used as a learning and training tool.
Faceswap has an active forum where you can ask questions about creating deepfake videos and find tutorials that show you how. If you are interested in the process of creating deepfake videos rather than just the results, this is a great option, and there are guides to get complete newbies started with the software.
Faceswap is available for Windows, Mac, and Linux. The developers recommend pairing a powerful CPU with a dedicated graphics card, since face swapping on a CPU alone is incredibly slow.
Deep Art Effects
Deep Art Effects stands out on this list as a unique deepfake application. It works with images rather than videos, converting them into works of art. Its algorithm is trained on the works of famous artists like Van Gogh, Picasso, and Michelangelo.
Let the A.I. transform any picture in your gallery into a unique piece of art: simply upload it, choose one of the available styles, and let the app do the rest. Deep Art Effects is free to download in both its Android and iOS mobile versions and its Windows, Mac, and Linux desktop versions.
Do you like messaging your friends and family with lots of GIFs and memes? Then REFACE is the app you want. It uses a face-swapping algorithm called RefaceAI to superimpose your face on GIFs and images within the app.
Making a deepfake image with REFACE is very easy. Select a popular GIF or meme from the app’s gallery, then snap a picture of your face; the app creates a personalized version with your face in it.
The accuracy of the result will vary depending on the symmetry of your face and the GIF you use. Fortunately, REFACE offers plenty of options to try until you get the perfect deepfake you want.
The app is free and available for both Android and iOS.
Morphin is another deepfake app to consider if you want to stay on top of the latest internet memes. In addition to standard emojis, Morphin has a wide collection of popular high-resolution GIFs that you can send to your friends.
The app is very similar to REFACE in overall design. The GIFs on Morphin, however, look more cartoonish than realistic, and you can search the collection by tags. Take a selfie, choose a GIF, and the app builds a deepfake from your selfie.
The app is free and available for Android and iOS.
Jiggy is a deepfake app that can make anyone dance. It won’t make you dance directly, but it will make it look as though you are. Creating a dancing deepfake only requires selecting a face and some dance moves; the app blends the two together and produces a deepfake that is bound to brighten up anyone’s day.
This is made possible by the motion-transfer technology behind the app, which turns a photo of a person into an animated character. You can use Jiggy free of charge on both Android and iOS devices.
State of Detection Technology: A Game of Cat and Mouse
Recent research has produced several deepfake video-detection (DVD) methods. Some of these methods claim to detect manipulated video at rates exceeding 99 percent, but such reports should be interpreted with caution. The difficulty of detecting video manipulation varies widely with several factors, including the level of compression, the image resolution, and the composition of the test set.
An analysis of seven state-of-the-art detectors on five public datasets frequently used in the field showed wide variation in accuracy, ranging from 30 percent to 97 percent, and every dataset produced a wide spread of accuracies across detectors. When applied to unfamiliar data, these detectors usually do not perform well, since each has been tuned to look for a certain type of manipulation. Much work is under way in this area, and some detectors are clearly better than others, but none of them is reliable across the board.
Even where current detectors are accurate, DVD is a game of cat and mouse: advances in detection methods and advances in deepfake-generation methods alternate. For the defense to succeed, DVD methods must be continuously improved in anticipation of the next generation of deepfake content.
Adversaries will likely soon extend deepfake methods to videos with a high degree of dynamic realism. Existing methods tend to produce somewhat static videos: stationary subjects, constant lighting, unmoving backgrounds. Future deepfakes will add dynamism in lighting, poses, and backgrounds, and these dynamic attributes may reduce the effectiveness of existing detection algorithms. Dynamism could also make deepfakes more credible to human viewers: a video of a foreign leader talking while driving by on a golf cart would be more engaging and realistic than the same leader speaking directly to the camera in a static studio setting.
To combat this threat, both academics and companies are building detection models based on deep neural networks that can recognize various types of deepfaked media. Facebook played a major role with the Deepfake Detection Challenge (DFDC), held in 2019, which offered US$1 million in cash prizes to the top five winners.
Each participant was expected to build a detector trained and validated on a curated set of 100,000 deepfake videos created by Facebook and Microsoft with the assistance of several academic institutions. The dataset was originally available only to competition members but has since been made public. More than 35,000 models were submitted; the winning model achieved 65 percent accuracy on a test dataset of 10,000 videos that was withheld from training, and 82 percent on the validation set used during training. The discrepancy between validation and test performance indicates some overfitting and therefore a lack of generalizability, a problem that tends to plague DNN-based classification models.
Many things have to go right to create a photorealistic deepfake: capturing high-quality source footage of the correct length, matching the appearance of the source and destination, using the appropriate model for training, and skilled postproduction. A flaw at any of these stages leaves traces that make a deepfake easier to detect. The goal, then, is to train a detection model complex enough to extract this information, using enough deepfakes of varying quality that the range of possible flaws is covered. Building such a detector may require augmenting your own data with a publicly accessible deepfake dataset, such as the one from the Facebook DFDC.
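Whatever per-frame classifier such a detector uses, it still has to turn frame-level predictions into a single video-level verdict. A common, simple scheme, shown here as an illustrative sketch rather than any particular DFDC entry, is to average the per-frame fake probabilities in logit space, which tempers the influence of a few overconfident frames:

```python
import math

def video_fake_score(frame_probs, clip=0.01):
    """Aggregate per-frame fake probabilities into one video-level score
    by averaging in logit space, then mapping back through a sigmoid."""
    logits = []
    for p in frame_probs:
        p = min(max(p, clip), 1 - clip)          # avoid infinite logits at 0 or 1
        logits.append(math.log(p / (1 - p)))
    mean_logit = sum(logits) / len(logits)
    return 1 / (1 + math.exp(-mean_logit))

def is_fake(frame_probs, threshold=0.5):
    """Flag a video as fake when its aggregated score crosses the threshold."""
    return video_fake_score(frame_probs) >= threshold
```

The per-frame probabilities themselves would come from a trained classifier (any CNN-based frame detector); only the aggregation logic is sketched here.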
Also Read: How Video Games Use AI.
Should We Be Concerned About Deepfakes?
Given the rise of deepfakes and their potential negative impact, many people are concerned about how they can be used to misrepresent someone. For the moment, however, people seem to use deepfake technology mostly for fun: creating GIFs in deepfake apps to share on Instagram Stories, or making YouTube videos with them. As machine learning and artificial intelligence improve, though, deepfakes could be used to propagate fake news, manipulating original videos and spreading them on social media to reach a wide audience, and that poses serious challenges.
Adversarial networks and deep learning can turn a target video into a convincing fake with serious consequences. As the process improves, the line between real videos and deepfake videos will start to blur, and knowing what to trust becomes difficult. With open-source tools, this deepfake journey gets easier to take every day, and the conversion process will only keep getting simpler.