Demonized but Useful: The Deepening Deepfake Problem

Photoshop used to be just a proper noun, the name of Adobe's raster graphics editing software. It has since become a verb meaning to manipulate, modify, or alter digital images with computer software. For a long time, scandalous, unflattering, or otherwise undesirable images shared online could be dismissed as “photoshopped” because most people knew that still images were easy to edit. Video was assumed to be different. That assumption no longer holds: videos can now be altered easily and convincingly enough to depict a scene as if it had really been captured by a camera.

What Are Deepfakes?

Deepfakes are essentially fake videos generated with artificial intelligence. Creating one involves superimposing a person's face or likeness onto preexisting footage to produce a new video that is usually meant to deceive. The process relies on a machine learning technique called a generative adversarial network (GAN), in which two neural networks are trained against each other: a generator produces fake frames while a discriminator tries to tell them apart from real ones, and the generator keeps improving until its fakes are hard to detect. The result is a video or moving image with little to no visible sign of alteration. Many viewers never doubt the authenticity of such a video, since few are aware that this kind of manipulation is already possible.
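To make the adversarial idea concrete, here is a minimal, hypothetical sketch of a GAN training loop written with PyTorch. It uses toy two-dimensional data in place of face images; real deepfake systems use far larger convolutional networks plus face-alignment pipelines, and every name and size below is an illustrative assumption rather than part of any actual deepfake tool.

```python
# Minimal GAN sketch (assumption: toy 2-D data standing in for face images).
# The generator learns to produce samples the discriminator cannot
# distinguish from real data; the two networks improve by competing.
import torch
import torch.nn as nn

latent_dim, data_dim = 8, 2

generator = nn.Sequential(
    nn.Linear(latent_dim, 32), nn.ReLU(),
    nn.Linear(32, data_dim),
)
discriminator = nn.Sequential(
    nn.Linear(data_dim, 32), nn.ReLU(),
    nn.Linear(32, 1), nn.Sigmoid(),
)

loss_fn = nn.BCELoss()
opt_g = torch.optim.Adam(generator.parameters(), lr=1e-3)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=1e-3)

for step in range(1000):
    real = torch.randn(64, data_dim) * 0.5 + 2.0   # stand-in "real" samples
    noise = torch.randn(64, latent_dim)
    fake = generator(noise)

    # Train the discriminator: label real samples 1 and generated samples 0.
    opt_d.zero_grad()
    d_loss = loss_fn(discriminator(real), torch.ones(64, 1)) + \
             loss_fn(discriminator(fake.detach()), torch.zeros(64, 1))
    d_loss.backward()
    opt_d.step()

    # Train the generator: push the discriminator to output 1 for fakes.
    opt_g.zero_grad()
    g_loss = loss_fn(discriminator(fake), torch.ones(64, 1))
    g_loss.backward()
    opt_g.step()
```

The same tug-of-war, scaled up to high-resolution face data, is what allows deepfake software to produce footage convincing enough to fool casual viewers.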

We have already written a post about deepfakes. This update presents real-life situations that demonstrate the serious problems this video-manipulation technology creates. It also covers an oft-ignored aspect of deepfakes: their usefulness. The “fake” in the name should not obscure the fact that the underlying technology has legitimate uses as well.

Image credit: Washington Post (Fair Use)

Real Deepfake Problems

Deepfakes are a problem primarily because they can be used to spread fake news and false information. They are highly deceptive and can easily convince people to accept a fabrication as truth.

The most visible use of deepfakes is arguably pornography. According to research by Deeptrace, deepfake videos are multiplying at a staggering pace, with around 15,000 already online, and approximately 96% of them are pornographic. The technology is often used to produce fake celebrity porn videos.

Fake pornographic videos are a serious problem not only for celebrities. They can also target ordinary individuals who become victims of revenge porn or plain bullying. It is difficult to refute what a video appears to show, especially when few people realize that videos can now be faked convincingly.

Far worse than the pornography problem is the ability of deepfakes to propagate fake news and sway public opinion. In the Philippines, for example, an opposition senator known as a critic of the country’s brutal drug war was made to appear to admit, on the Senate floor, to involvement in the illegal drug trade. To most people, the edited video showed no hint of being fake; only video experts were likely to spot the anomalies. The clip was shared on social media countless times and was only debunked recently, when the deepfake was placed side by side with the original footage.

Deepfakes are expected to become more prominent as politicians and other partisans grow more aggressive with the 2020 US presidential election drawing near. Anti-Trump players may disseminate deepfakes of the president supposedly admitting to crimes, or fake videos depicting his allies turning their backs on him. Conversely, those who want to defend the president may circulate deepfakes of Democratic politicians and supporters uttering contentious statements.

Things have already started, with Donald Trump himself posting on Twitter an altered video of Nancy Pelosi. Media organizations fact-checked the clip and confirmed that the video the president shared had been modified. The Pelosi video does not show her doing anything illegal or highly controversial; she was merely made to sound drunk and stammering. For some viewers, though, that reads as a weakness, as if she were unconvincing in the points she was trying to make. The alteration may seem benign, but it can still serve as a tool for influencing public opinion.

Image credit: Pixabay

Undermining the Usefulness of the Technology

Deepfake technology is not entirely malign; it has legitimate uses. It is employed, for example, in translating or localizing video advertisements for audiences who speak different languages. In filmmaking, it can spare productions from reshooting scenes, correct inconsistencies between takes, or change an actor’s appearance without relying heavily on prosthetics and costumes. It is a boon for the visual effects field. It can also help produce realistic reenactments for documentaries and investigative journalism.

In the movie “Justice League,” Henry Cavill appeared clean-shaven even though he was not, thanks to digital face manipulation used to work around his commitments to another film. Likewise, a widely shared fan-made deepfake placed a young Harrison Ford’s face onto Alden Ehrenreich in “Solo: A Star Wars Story,” and comparable de-aging techniques made Samuel L. Jackson look decades younger as Nick Fury in the Captain Marvel film.

Deepfakes have good applications. Unfortunately, negative media coverage and a less-than-desirable name emphasize the drawbacks that come with them. The technology does not introduce a new problem so much as amplify an existing one: the tendency of some people to use technology for malevolent purposes.