The creators of these videos replace the original audio with dialogue that matches the characters’ lip movements, making us believe the audio is real. Of course, these videos are made absurdly hilarious, so we know they’re dubbed.

But what if they replaced the audio with something more believable? Or what if an artificial intelligence technology didn’t just alter the speaker’s voice but also their facial features, to make it look like they’re doing something completely different?

There is actually a technology that can do just that: make it look like a certain someone is doing something they never actually did.

Take a look at this video where Zuckerberg is talking about taking control of the stolen data of billions of people. Since Zuckerberg already has the image of a data stealer, many believed the video was real, and it soon went viral.

If you look closely, you’ll see that it’s not a real video, but its quality also shows that the deepfake industry is evolving, and fast.

Fortunately, Instagram has decided not to delete this video, so we can study it and remember that not everything we see on the internet is true.

Another example of this technology is an Obama video made by researchers from the University of Washington. Using audio of the former US president, they developed a method to create HD videos with accurate lip syncing.

The project used artificial intelligence to model his mouth shape at each instant of speech. The output is highly photorealistic and looks like a genuine Obama speech.

While this was done as an experiment, it revealed several risks. The US House of Representatives Permanent Select Committee on Intelligence held a hearing on the security issues of artificial intelligence and manipulated media. It was the first hearing held specifically to examine deepfakes.

So What is a Deepfake Really?

The word deepfake gained popularity in 2017. 

It’s a broad term for image and video synthesis using machine learning. More specifically, it usually involves a generative adversarial network (GAN): two neural networks, a generator that produces fakes and a discriminator that tries to spot them, trained against each other until the fakes become convincing.

In simple words, deepfakes are manipulated audio or video clips that fool us into believing something that isn’t real.
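To make the GAN idea concrete, here is a deliberately tiny sketch: a linear generator learns to mimic a simple 1-D “real” distribution by playing against a logistic discriminator. Real deepfake GANs use deep convolutional networks on images; everything here (the distributions, parameters, and learning rates) is a toy stand-in.

```python
# Toy GAN sketch: a linear generator learns to mimic a 1-D Gaussian.
# Illustrative only -- real deepfake GANs use deep networks on images.
import numpy as np

rng = np.random.default_rng(0)

def real_batch(n):
    """'Real data': samples from N(4, 1)."""
    return rng.normal(4.0, 1.0, n)

# Generator g(z) = w*z + b turns noise into fake samples.
w, b = 1.0, 0.0
# Discriminator d(x) = sigmoid(a*x + c) scores how "real" a sample looks.
a, c = 0.1, 0.0

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr, n = 0.01, 64
for _ in range(2000):
    z = rng.normal(0.0, 1.0, n)
    fake = w * z + b
    real = real_batch(n)

    # Discriminator step: push d(real) -> 1 and d(fake) -> 0.
    dr, df = sigmoid(a * real + c), sigmoid(a * fake + c)
    a -= lr * (np.mean((dr - 1.0) * real) + np.mean(df * fake))
    c -= lr * (np.mean(dr - 1.0) + np.mean(df))

    # Generator step: push d(fake) -> 1 (fool the discriminator).
    df = sigmoid(a * fake + c)
    g_common = (df - 1.0) * a          # gradient of -log d(g(z)) w.r.t. g(z)
    w -= lr * np.mean(g_common * z)
    b -= lr * np.mean(g_common)

# After training, generated samples cluster near the real mean of 4.
print(round(float(np.mean(w * rng.normal(0.0, 1.0, 10000) + b)), 1))
```

The adversarial loop is the whole trick: neither network is told what “real” looks like directly; each only improves by beating the other.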

The term caught attention about two years ago, when a Reddit user called Deepfakes used the technique to produce convincing video clips that swapped celebrities’ faces onto porn performers’ bodies.

Is Deepfake New?

While the AI technology is new, fakery itself goes back at least to the early twentieth century.

The Cottingley Fairies

The Cottingley Fairies became famous in 1917, when pictures taken by two cousins appeared to show fairies. For a long time, people believed the pictures captured a real phenomenon.

However, in 1983, the photographers admitted that the pictures were faked and were created using cardboard cutouts.

The Loch Ness Monster

You might have heard of the Loch Ness Monster. People affectionately call him Nessie, and if you’re a South Park fan, you might remember him asking for “tree fiddy.”

Several people have tried to photograph Nessie, but only one man, a doctor, supposedly captured him on film. Known as the Surgeon’s Photograph, the image was published in the Daily Mail in 1934 and was considered the first real photograph of Nessie.

It later turned out that a man had staged the photo using a toy submarine and wood putty, and had asked the doctor to hand it in because of the latter’s honest reputation.

Hollywood

Hollywood has always used different techniques to create fake images and videos. And since it didn’t have AI to help, these processes were difficult and labor-intensive.

For example, Paul Walker passed away while Furious 7 was in production, and production experts completed his performance using CGI and other advanced techniques.

Now with machine learning, such software is easily available and can be used on a regular computer.

How They Create Realistic Deepfake Videos

Creating a deepfake video is a bit like translating one language to another. 

Google Translate uses machine learning. The software analyzes tons of texts in several languages to detect the right language and to translate it to your chosen language.

Deepfake works in a similar way. 

The creator of the video uses a machine learning system called a deep neural network, which examines a person’s facial movements across many examples and learns to reproduce them.

How these Videos are Faked

This is how a deepfake video is made.

The creator collects tons of photos and videos of the target. This is why it helps to target a famous person: their photos are readily available. These files are fed into an application that uses artificial intelligence to combine the various photos and videos into the desired output.

This requires a good graphics processing unit and a lot of system memory, which is hardly out of reach. It is time-consuming, though: a video like the Obama one mentioned above could easily take 50-60 hours. In the coming years, however, it might take just a few minutes.
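Many early face-swap tools implemented this pipeline with a shared-encoder autoencoder: one encoder learns structure common to both faces, and a separate decoder per identity reconstructs each face, so decoding person A’s code with person B’s decoder performs the swap. The sketch below shows the idea on 16-dimensional toy vectors instead of real face crops; every size and name here is an illustrative assumption, not any real tool’s API.

```python
# Toy sketch of the shared-encoder face-swap autoencoder: one encoder shared
# by both identities, one decoder per identity. Decoding A's code with B's
# decoder performs the "swap". "Faces" here are 16-dim toy vectors.
import numpy as np

rng = np.random.default_rng(1)
dim, code = 16, 4

# Both identities share an underlying "facial structure" (basis) but have
# identity-specific appearance (style) -- why a *shared* encoder works.
basis = rng.normal(size=(code, dim))
style_a, style_b = rng.normal(size=(code, code)), rng.normal(size=(code, code))
faces_a = rng.normal(size=(200, code)) @ style_a @ basis
faces_b = rng.normal(size=(200, code)) @ style_b @ basis
faces_a /= faces_a.std()
faces_b /= faces_b.std()

enc = rng.normal(scale=0.3, size=(dim, code))    # shared encoder
dec_a = rng.normal(scale=0.3, size=(code, dim))  # decoder for identity A
dec_b = rng.normal(scale=0.3, size=(code, dim))  # decoder for identity B

lr = 0.002
for _ in range(3000):
    for faces, dec in ((faces_a, dec_a), (faces_b, dec_b)):
        z = faces @ enc                      # encode
        err = z @ dec - faces                # reconstruction error
        dec -= lr * z.T @ err / len(faces)   # gradient step on the decoder
        enc -= lr * faces.T @ (err @ dec.T) / len(faces)  # ...and encoder

# The "deepfake" step: encode an A face, decode it as B.
swapped = faces_a[:1] @ enc @ dec_b
print(swapped.shape)  # (1, 16)
```

The design choice worth noticing is the sharing: because the encoder must serve both identities, it is forced to learn what the faces have in common, which is exactly what makes the swapped output track A’s expressions with B’s appearance.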

According to tech expert Antonio Martinez, we will soon be able to use this technology to superimpose a face onto someone else, creating authentic-looking videos of anything.

How Voices are Faked

It’s the same principle. You take a number of recordings of a person and feed them into the AI program. The recordings are chopped up and stitched together to form the sentence you desire. This has to be done carefully so that it doesn’t sound made-up. 
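As a toy illustration of that chop-and-stitch step, here is a sketch that splices synthetic “word” snippets with a short crossfade so the joins don’t sound made-up. Real systems operate on recordings loaded with an audio library; the sine-tone snippets here just stand in for words clipped from real speech.

```python
# Sketch of concatenative splicing: join word-sized audio snippets with a
# short linear crossfade. Waveforms are plain numpy sample arrays here.
import numpy as np

rate = 16000  # samples per second

def snippet(freq, seconds):
    """Stand-in for a word clipped from a real recording (a sine tone)."""
    t = np.arange(int(rate * seconds)) / rate
    return np.sin(2 * np.pi * freq * t)

def splice(clips, fade=0.02):
    """Join clips, overlapping each join by `fade` seconds of crossfade."""
    n = int(rate * fade)
    ramp = np.linspace(0.0, 1.0, n)
    out = clips[0]
    for clip in clips[1:]:
        head, tail = out[:-n], out[-n:]
        mixed = tail * (1 - ramp) + clip[:n] * ramp  # fade out / fade in
        out = np.concatenate([head, mixed, clip[n:]])
    return out

# A "sentence" assembled from three stand-in word snippets.
sentence = splice([snippet(220, 0.5), snippet(330, 0.4), snippet(440, 0.6)])
print(len(sentence) / rate)  # total duration in seconds: 1.46
```

The crossfade is the “done carefully” part: a hard cut between snippets produces an audible click, while even a 20 ms overlap makes the seam much harder to hear.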

With this technology, you can make your subject say anything you want. A team of engineers created a convincing speech in John F. Kennedy’s voice by feeding 831 JFK speeches to machine learning software.

Can You Create Deepfakes?

Yes, you can. Or you can get someone else to make them for you for as little as $20.

So these are your options:

  • Use an app such as FakeApp to create a deepfake.
  • Pay someone on the dark web to create a deepfake for you.
  • Use the free code available on GitHub to create your own deepfake. Of course, this requires technical knowledge.

If you have no technical knowledge, you’ll find a number of deepfake discussion boards that take requests and can create deepfakes for interested clients. These artists generally receive requests for pornographic deepfakes.

To begin with, the deepfake creator will need a LOT of images of the person. These images are called facesets. The creator will also need donor bodies. The donor bodies are often porn stars whose bodies will be used in the video. 

Some video creators use videos of women and extract still frames from them, while others use voice cloning software to create convincing audio.

Since all of this is so easily available, governments around the world must regulate the situation before it turns into a severe form of cyberbullying.

Deepfakes were first reported on by Vice, and they have since been banned on all major social media websites. But you can still find the code to create them on GitHub.

Is it Just a Passing Trolling Trend or Something Serious?

Photoshop and airbrushing have long allowed easy manipulation of photos. Now videos can be morphed to look convincingly real. We see plenty of morphed videos and take them as entertainment.

What would you do if you could create a deepfake?

An average user might want to use a morphed video to show off on social media and appear to be more successful than they actually are. Or they might want to create a funny video of their friends.

Are deepfakes really a threat? Or are they just a way to have fun with technology? 

Let’s see.

Weaponization of deepfakes

It was all fun and games when this technology was used for research purposes or parody videos. With any new technology, there are always chances of misuse and deepfakes are no exception.

There are several fake porn videos online that are used to humiliate or harass women. Presently, a lot of deepfake applications are related to porn.

Deepfake artists can create disturbingly realistic porn videos with photos taken from the internet. While it started by targeting celebrities, even ordinary women have started to suffer from them.

Scarlett Johansson has said she is worried that, before long, anyone could be easily targeted with this technology.

Johansson’s face has been superimposed on several porn videos over the years. She worries that this technology could harm women and children, calling the internet a virtually lawless abyss.

One deepfake artist created an explicit video of a makeup blogger by stealing her videos. The video has since been removed, as Reddit and Pornhub don’t allow non-consensual pornographic videos. However, there are several other ways to circulate such material, which is worrying.

One shocking app called DeepNude let users “undress” a photo of any woman. After the story appeared on Vice, the DeepNude app was shut down within a couple of days.

The use of deepfakes in the porn industry has skyrocketed and it’s scary to see that there’s a lot of demand for this technology.

One deepfake artist said his 10-month-old website received more than 20,000 views per day. The site has few rules, except that visitors must be over 18 and that requests involving rape or graphic violence aren’t taken.

However, there are other websites that will fulfill such requests. 

And Then There’s Revenge Porn

It’s not a difficult scenario to imagine. A girl turns down a boy’s request for a date, so he edits her face onto an amateur porn video and tells others they had sex. That’s bad enough. Now imagine he uses the clip as a blackmail weapon. She has a deeply religious family and is forced to give in to his demands.

The situation can be very dark and depressing – and it probably is already.

There are people on Reddit asking deepfake artists to map their ex’s photos onto porn videos. Several fake porn videos of celebrities already exist.

This isn’t good news. If you think this doesn’t affect you, imagine living in a country where there are strict blasphemy laws and someone creates a video of you tearing up a holy book. The deepfake scenario can affect almost anyone.

And this technology, like any other, is evolving rapidly. Victims of revenge porn often have no recourse because current laws don’t cover the subject. And once a deepfake video is circulating, it can be hard to trace who created it.

Fortunately, governments are waking up to this and new laws are being created.

Virginia has expanded its revenge porn laws to cover deepfakes. The UK government is also reviewing its laws on non-consensual imagery as it witnesses a surge in cases of offensive and abusive digital communications.

While some regions are now aware of the rising threat of deepfakes, not all countries have the right laws in place. This is why lawyers have taken up different tactics to handle such cases.

In 2015, a California man superimposed his ex-wife’s face on porn images. Prosecutors handled the case by charging him with conspiracy and identity theft.

Google doesn’t allow non-consensual posting of nudes and allows people to get their photos removed by following a simple process. Pornhub and Reddit have also banned such videos but there are other websites where such content can be published.

Deepfakes have opened a new world of abuse, humiliation, and harassment. These fakes can be posted on porn websites and are very difficult to detect.

You’re Worried About Fake News? Deepfakes Take it to Another Level

In 2018, a video was released that showed the US President Donald Trump asking Belgian authorities to withdraw from the Paris climate agreement.

The mouth movement was a bit choppy if you looked closely, but the voice was pretty convincing. The video was made using After Effects, a video editing tool, and was intended to grab viewers’ attention.

This video, although not made using AI, shows the power of fakery and how it can lead to social or political unrest. It was a small demonstration of how deepfake technology can threaten the already vulnerable digital ecosystem and blur the line between authentic and fake content.

The Possibility of Political Unrest

Consider this scenario. You open a trusted news website and see a video that says your country’s leader is conspiring with the enemy state to import substandard medicines and pharmaceutical products to your country. 

You open the video and there it is – your president with the enemy state’s leader, discussing the shipping of possibly harmful medicines to the country.

You knew all politicians are bad but this was the limit! This can’t be fake – you saw it with your own eyes. 

But later it turns out that the video was fake. If you can’t trust a reliable news website, who can you trust?

In 2017, Qatar’s official news agency reported that the emir had praised Islamist groups such as Hamas and Hezbollah. The foreign ministry of Qatar later said that no such speech ever took place. While no deepfake technology was used, it shows how even reputable news sources can publish fabricated stories, leaving nobody who can really be trusted.

People generally accept whatever is published on news websites. Any news piece can be made even more trustworthy when it’s accompanied by a video.

And the way these videos are getting increasingly realistic is disturbing. Do you remember the movie, The Shining? What if it starred Jim Carrey instead of Jack Nicholson? Here’s a video on how it might have looked. It’s so accurate you’d get confused.

Creepy, right?

Danielle Citron, a law professor at the University of Maryland, says that while deepfakes are used to violate a woman’s right to privacy by creating fake porn videos, things can go beyond porn or trolling on Reddit. 

According to her, they can be fabricated to destabilize a democratic society. In 2015, Freddie Gray died in Baltimore, and the police were said to be responsible for his death. The city saw riots over the incident. If someone had created a deepfake of the police chief saying something racist at the time, it could have caused enormous distress in an already ignited situation.

People who propagate fake news would be happy with the idea of deepfakes. Anyone who can tinker with this technology – from trolls to propagandists – would be able to manipulate beliefs and use the online medium to push communities more into the subjective realities they have created.

According to a report, “The marketplace of ideas already suffers from truth decay as our networked information environment interacts in toxic ways with our cognitive biases.” The report continues, “Deep fakes will exacerbate this problem significantly.”

Bad for Journalism

If people can no longer trust seemingly reliable sources, political leaders will be free to dismiss any evidence of their wrongdoing as fabrication. As doctored photos and videos emerge more frequently, facts will lose their power.

And this has already started happening. 

In November, Press Secretary Sarah Sanders shared a video on Twitter that justified withdrawing the credentials of Jim Acosta, a CNN correspondent. Later, this video was found to be doctored to show the actions of Acosta as more aggressive than they actually were.

When shown the original clip, Sanders replied that the reporter did make contact, and that they stood by their statement.

A Threat to National Security

Deepfakes can be used to cause civil unrest. According to Clint Watts, a senior fellow at the Center for Cyber and Homeland Security, foreign adversaries such as China could use deepfakes to provoke fear and distort the reality of Western democracies.

Many of us have seen the manipulated video of Nancy Pelosi and the recent doctored clip of Bernie Sanders. If the general public can be made to believe such videos, this puts democracy in danger. There can be foreign meddling in our digital content without us knowing about it.

Fakery in the Justice System

Deepfakes can erode trust in video evidence. Since video evidence is very important in the criminal justice system, a doctored video can lead to wrong judgments. 

And because the technology is so readily accessible, it’s easy for anyone to create a fake surveillance video and use it in a criminal trial. If made carefully, deepfakes can look realistic and the jury might be unable to detect the fraud.

Effect on Economy

Big corporations play a huge role in the modern economy. A statement by the CEO of an influential organization can have a sudden impact on the stocks of that company. This can affect other stocks as well. If planned well, deepfakes can be a weapon for economic manipulation. They can also interfere in financial markets by targeting companies and industry leaders.

How to Spot a Deepfake

It’s not that easy.

While a choppily made video is easy to identify, a finely doctored one is not. And although there are several resources for creating deepfakes, there aren’t many reliable tools for identifying them, which makes things difficult.

Still, there are some steps you can take to check if a video is real or fake.

Weird or flickering faces

If it’s a choppily made video, a little careful observation will show that it’s not real. The people in the video will look weird and unnatural, their faces might flicker, or the faces might not match the bodies.

No sound

If a video is edited, it becomes easier for the faker if there’s no sound. This is why a lot of deepfakes don’t have any sound.

Color Change

As blood is pumped in and out, there are subtle changes of color in our faces. This change is so small that deepfake software generally fails to reproduce it, but its absence can be spotted if the video is monitored carefully.

Blinking

Last year, a paper discussed detecting deepfakes by the way people in the videos blink.
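A common way to quantify blinking in that line of work is the eye aspect ratio (EAR), computed from six landmarks around the eye: it stays roughly constant while the eye is open and collapses during a blink, so a video whose EAR never dips is suspicious. Here is a minimal sketch with synthetic landmarks; no real landmark detector is involved, and the coordinates are made up for illustration.

```python
# Eye-aspect-ratio (EAR) blink check on synthetic eye landmarks.
import numpy as np

def ear(eye):
    """eye: 6x2 array of landmarks p1..p6 around one eye."""
    p1, p2, p3, p4, p5, p6 = eye
    vertical = np.linalg.norm(p2 - p6) + np.linalg.norm(p3 - p5)
    horizontal = np.linalg.norm(p1 - p4)
    return vertical / (2.0 * horizontal)

def blink_count(ear_series, threshold=0.2):
    """Count frames where EAR first crosses below the threshold."""
    below = ear_series < threshold
    return int(np.sum(below[1:] & ~below[:-1]))

# Toy landmark sets: a wide-open eye and a nearly shut one.
open_eye = np.array([[0, 0], [1, 1], [2, 1], [3, 0], [2, -1], [1, -1]], float)
shut_eye = np.array([[0, 0], [1, 0.1], [2, 0.1], [3, 0], [2, -0.1], [1, -0.1]], float)

# A 10-frame toy "video": mostly open, one blink in the middle.
series = np.array([ear(open_eye)] * 5 + [ear(shut_eye)] + [ear(open_eye)] * 4)
print(blink_count(series))  # 1
```

On a real clip, the EAR series would come from per-frame landmarks, and a face that never blinks over a minute of footage would stand out immediately.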

But deepfake technology can get around eye-blink detection by collecting and mapping images in a way that shows the person is blinking regularly.

Since most deepfakes are $20 jobs, they don’t go into that sort of refinement. 

However, if a deepfake is designed to affect a country’s economy, it will be crafted very carefully and will be difficult to spot. To detect such videos, there’s a technique called Eulerian video magnification, which amplifies subtle color changes to reveal a person’s pulse.
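Here is a rough sketch of that pulse check, assuming a synthetic per-frame skin-brightness trace in place of real video frames: average a skin region per frame, then look for a dominant frequency in the plausible heart-rate band. Full Eulerian video magnification also amplifies these changes spatially, which this sketch leaves out.

```python
# Pulse estimation from a per-frame skin-brightness trace (synthetic here).
import numpy as np

fps = 30
t = np.arange(fps * 10) / fps  # 10 seconds of "video" timestamps
# Synthetic trace: a 72 bpm pulse (1.2 Hz) buried in sensor noise.
trace = (0.05 * np.sin(2 * np.pi * 1.2 * t)
         + np.random.default_rng(0).normal(0, 0.01, t.size))

# Look at the frequency content of the mean-removed trace.
spectrum = np.abs(np.fft.rfft(trace - trace.mean()))
freqs = np.fft.rfftfreq(trace.size, d=1 / fps)

# Keep only plausible heart rates (0.7-4 Hz, i.e. 42-240 bpm).
band = (freqs > 0.7) & (freqs < 4.0)
pulse_hz = freqs[band][np.argmax(spectrum[band])]
print(round(pulse_hz * 60))  # estimated pulse in beats per minute: 72
```

A genuine face yields a clear peak in this band; a synthesized face typically shows no consistent pulse, which is what the detector looks for.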

Evolving automatic forensics

While deepfake technology is developing rapidly, engineers are also working on creating technology to detect such videos. This technology will include:

  • Detection of two camera models in the same clip
  • Detection of fake pixels in images
  • Using fake images and videos as training data and feeding them into machine learning tools
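The last item in that list is essentially supervised classification: label known fakes and known-real media, extract features, and train a model to separate them. As a minimal sketch, with made-up 2-D features standing in for whatever statistics a real detector would extract from frames (real systems feed whole frames to deep networks):

```python
# Toy fake-vs-real classifier: logistic regression on made-up 2-D features.
import numpy as np

rng = np.random.default_rng(2)
# Imagine these as e.g. "noise-residual energy" and "edge sharpness".
real = rng.normal([1.0, 1.0], 0.3, (100, 2))  # features of real frames
fake = rng.normal([2.0, 2.0], 0.3, (100, 2))  # features of fake frames
X = np.vstack([real, fake])
y = np.array([0] * 100 + [1] * 100)           # 0 = real, 1 = fake

# Logistic regression trained by plain gradient descent.
w, b = np.zeros(2), 0.0
for _ in range(500):
    p = 1 / (1 + np.exp(-(X @ w + b)))        # predicted P(fake)
    w -= 0.1 * X.T @ (p - y) / len(y)
    b -= 0.1 * float(np.mean(p - y))

preds = (1 / (1 + np.exp(-(X @ w + b))) > 0.5).astype(int)
print(float(np.mean(preds == y)))             # training accuracy
```

The hard part in practice isn’t the classifier but the features: deepfake artifacts keep changing, so detectors trained on yesterday’s fakes can miss tomorrow’s.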

The problem is that detection technology isn’t evolving as rapidly as the deepfake technology.

Steps Being Taken to Curb the Menace

Deepfakes are dangerous, but the problem itself is nothing new. We have always had fake media and misinformation.

Artificial intelligence isn’t the problem so much as an accelerator of an existing one. Deepfake technology is a tool that has unfortunately found more negative applications than positive ones.

Banning deepfakes

That’s easier said than done. Firstly, this technology has positive applications as well. And secondly, even if deepfakes are banned, somebody will still make them.

Banning guides on deepfakes will not stop them from being used. The best way to handle deepfakes is to educate people about them.

There is a need for public awareness. Governments alone cannot be held responsible for such videos. A lot of these videos have visible clues that they are fake. The editing job is generally done in a rough way and carefully observing a video will tell you that it’s fake.

Role of social media in the detection of deepfakes

As discussed earlier, the House Intelligence Committee held an open hearing on deepfakes this year. In the hearing, Dr. David Doermann of the University at Buffalo suggested that social media sites should be required to moderate their content.

According to him, social media websites should check if an uploaded video is fabricated and label that video as such. 

However, this isn’t easy to implement. Social media is built on freedom and spontaneity: people post videos in an instant. Checking every video would delay posting and hinder people’s self-expression.

So it’s doubtful that social media organizations would want to implement such drastic changes in their platforms.

Blockchain to Detect Deepfakes

According to Lawfare, if there were a technology that could record every moment of your life, it could check whether the claim made by a video is true. For example, if a video claims you went out with someone on a particular Saturday, you could show the records and prove you were actually at home, enjoying a movie.

Of course, such a service could prove more harmful than beneficial. If someone got hold of all that data about your movements, it could be used to spy on you.

One solution could be to build this technology on a blockchain. A blockchain replicates each record across several computers and secures it with a public-private key pair. Only the owner of the data holds the private key, so while the system might record all of a person’s activities, only that person has access to them.

Blockchains are also less prone to hacking, which makes them a viable option. However, the cost of recording each and every step you take is another matter.
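As a minimal illustration of the tamper-evidence property (leaving aside replication across machines and the public-private key pairs), activity records can be chained with hashes so that editing any past record invalidates every later one. This is a toy sketch, not a production blockchain:

```python
# Toy hash chain: each record commits to the hash of the one before it.
import hashlib
import json

def record_hash(data, prev_hash):
    """Hash a record together with its predecessor's hash."""
    payload = json.dumps({"data": data, "prev": prev_hash}).encode()
    return hashlib.sha256(payload).hexdigest()

def add_record(chain, data):
    prev_hash = chain[-1]["hash"] if chain else "0" * 64
    chain.append({"data": data, "prev": prev_hash,
                  "hash": record_hash(data, prev_hash)})

def verify(chain):
    """Recompute every hash; any edited record breaks the chain."""
    for i, block in enumerate(chain):
        prev = chain[i - 1]["hash"] if i else "0" * 64
        if block["prev"] != prev or block["hash"] != record_hash(block["data"], prev):
            return False
    return True

chain = []
add_record(chain, "Sat 20:00 at home, watching a movie")
add_record(chain, "Sat 22:30 still at home")
print(verify(chain))   # True: the record is intact

chain[0]["data"] = "Sat 20:00 somewhere else"  # someone tampers with history
print(verify(chain))   # False: tampering breaks every later hash
```

This is why a blockchain-backed alibi record would be hard to forge after the fact: changing one entry would require recomputing, and redistributing, every hash that follows it on every replica.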

What Can the Governments Do?

For starters, governments all over the world should define deepfakes in their legal systems. When a deepfake is identified, it should be curbed before it spreads, something that will need participation from social media websites as well.

The governments should also fund the development of techniques that could identify deepfakes. Currently, the techniques that are used to detect deepfakes are far behind the techniques used to develop them.

Blurring the Sense of Reality

Using digital tools for malicious intent isn’t new. But using artificial intelligence to diminish reality can be dangerous. 

No, you’re not being paranoid. The threat is real.

We have seen glimpses of deepfakes and what they can do. What’s used today for revenge porn on the dark web could be used tomorrow to put you and your family in danger. It can be used to shake economies and create riots.

There are people gullible enough to fall for conspiracy theories. It’s important to wake up to reality and find a solid way to tell when a video is real and when it’s doctored. And it’s important for people to know that not everything they see on the internet is true.