Dangers of Deep Fakes

Sep 21, 2020
3 min read

Privacy is a myth in the world of the internet! You have probably come across some version of this quote, and there is no denying its truth. With so many lawsuits filed against Google, Facebook, and other internet companies, the risk to our data has become a major concern.

What Is It?

Going by the name, this is something deeply faked! Well, we won't joke with you on this, so let's get into what exactly it is. "Deep fake" is a portmanteau of "deep learning" and "fake": a form of synthetic media in which a person in an existing image or video is replaced with someone else's likeness. Deep fakes rely on generative neural network architectures such as autoencoders and generative adversarial networks (GANs).

But deep fakes are not just about deep learning, machine learning, or technology! Faked footage existed for years before any of these technologies became part of our lives. What do you think made it possible for Furious 7 to be completed after the death of Paul Walker? Back then, it took days of painstaking visual-effects work to achieve something like that. With today's technology, it has become simpler than ever before.

How is it Done?

Deep fakes are made possible by machine learning. The creator first trains a neural network for many hours on real footage of the target, giving the model a realistic understanding of what that person looks like from different angles and under different lighting. The trained network is then combined with computer-graphics techniques to superimpose the target's likeness onto another person. Although many believe GANs are the engine behind every deep fake, most are actually built with a mix of other AI algorithms and conventional non-AI techniques.
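The training-then-superimposing idea above can be sketched in a few lines. The classic face-swap setup trains one shared encoder (which learns pose and lighting) with a separate decoder per identity; swapping means encoding a frame of person A and decoding it with person B's decoder. Everything below is an illustrative assumption: the plain linear "networks", the layer sizes, and the random arrays standing in for face frames. Real systems use deep convolutional autoencoders trained for hours on thousands of frames.

```python
import numpy as np

rng = np.random.default_rng(0)
DIM, LATENT = 64, 16          # flattened-frame size, bottleneck size (toy values)

# One shared encoder; one decoder per identity.
W_enc = rng.normal(scale=0.1, size=(LATENT, DIM))
W_dec_a = rng.normal(scale=0.1, size=(DIM, LATENT))
W_dec_b = rng.normal(scale=0.1, size=(DIM, LATENT))

def encode(x):                # (batch, DIM) -> (batch, LATENT)
    return x @ W_enc.T

def decode(z, W_dec):         # (batch, LATENT) -> (batch, DIM)
    return z @ W_dec.T

def train_step(x, W_dec, lr=0.01):
    """One reconstruction step: nudge weights to reduce ||decode(encode(x)) - x||^2."""
    global W_enc
    z = encode(x)
    err = decode(z, W_dec) - x            # (batch, DIM)
    grad_dec = err.T @ z / len(x)         # gradient w.r.t. this identity's decoder
    grad_enc = (err @ W_dec).T @ x / len(x)  # gradient w.r.t. the shared encoder
    W_dec -= lr * grad_dec                # in-place update of the passed decoder
    W_enc -= lr * grad_enc
    return float(np.mean(err ** 2))

faces_a = rng.normal(size=(32, DIM))      # stand-in for person A's frames
faces_b = rng.normal(size=(32, DIM))      # stand-in for person B's frames

for _ in range(200):                      # alternate identities each epoch
    loss_a = train_step(faces_a, W_dec_a)
    loss_b = train_step(faces_b, W_dec_b)

# The "swap": encode a frame of A, decode it with B's decoder.
swapped = decode(encode(faces_a[:1]), W_dec_b)
print(swapped.shape)
```

Because the encoder is shared across both identities, it is forced to capture identity-neutral structure (pose, expression, lighting), while each decoder re-renders that structure as its own face; that asymmetry is what makes the swap possible.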

The Real Danger


One of the industries hit hardest by deep fakes is pornography. With deep fakes, it has become easier than ever for revenge porn to circulate. The creator only needs a supply of photos of the victim, which are often readily available on their social media accounts. The technique has been used to bully and harass individuals, spread hate, and tarnish reputations.


If there is one thing we all know about politicians and politics, it is that lies are often woven through them. Deep fakes have magnified the threat to democracy. Politicians across the world are using deep fakes to win new voters. Recently, a video of an Indian politician surfaced on the internet in which he is seen criticizing the opposing party in English. The original was recorded in Bhojpuri, but it was deep faked into English to reach English-speaking audiences. The more significant risk is that people may reach the point where they stop trusting anything they see or hear. It is not hard to imagine the corrosive impact that could have on an already fragile political environment.

C-Level Fraud

There is a rise in different types of C-level fraud, from early spear-phishing to the more recent whaling attacks. A whaling attack is a phishing attack that targets a senior executive such as the CEO or CFO of an organization in order to steal information from them. It is usually carried out by impersonating someone the target trusts, often a person in an even higher position, over the phone. One recent incident targeted the anti-money laundering officers of US credit unions. Cybercriminals tend to spend significant money profiling these targets because of the potentially high returns.

The Measures

With deep fakes spreading rapidly, many countries have taken measures to control them. Although such laws are challenging to enforce, California recently made it illegal to create or distribute certain deep fakes. Governor Newsom also signed AB 602, which allows California residents to sue when their likeness is used in sexually explicit deep fake material without their consent. In February 2020, Twitter unveiled its plan to curb the spread of manipulated content, including "deep fake" videos, as part of a broader move against synthetic media that could cause harm.

Through DARPA, the Pentagon is working with leading research centers to get ahead of deep fakes. Computers are being fed real videos and trained to spot the inconsistencies that give fakes away. The major challenge, however, is detecting fake audio: most detection work so far has focused on teaching computers to recognize visual inconsistencies.
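The detection approach described above, at its core, is a binary classifier trained on features from real versus faked frames. A minimal sketch, with invented stand-ins for the data: the "features" below are just random clusters (real fakes are caught via artifacts such as blending seams or unnatural blinking), and the simple logistic-regression detector is an illustrative choice, not any lab's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)
N, D = 200, 8                       # frames per class, features per frame

real = rng.normal(loc=0.0, size=(N, D))   # stand-in features of real frames
fake = rng.normal(loc=1.0, size=(N, D))   # fakes drift slightly in feature space

X = np.vstack([real, fake])
y = np.concatenate([np.zeros(N), np.ones(N)])   # label 1 = deep fake

# Plain gradient-descent training of a logistic-regression detector.
w, b = np.zeros(D), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))      # predicted P(frame is fake)
    grad = p - y                                # gradient of the log loss
    w -= 0.1 * X.T @ grad / len(y)
    b -= 0.1 * grad.mean()

accuracy = np.mean((p > 0.5) == y)
print(round(float(accuracy), 2))
```

The hard part in practice is not the classifier but the features: a detector trained on one generation of fakes tends to miss the artifacts of the next, which is why this remains an arms race.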


We are already surrounded by fake news and deliberately created falsehoods, and deep fakes are worsening the situation. Many even predict a cyberwar, and there is no denying that possibility.

Deep fakes aren't going away, which raises the question of how to regulate them. With more advanced technology being developed every day, governments will have an increasingly hard time crafting laws and regulations to keep up.