Until recently, photographs, video, and audio were considered a trusted form of communication and storytelling.
Then photo-editing software was developed, and photographs could no longer be fully trusted.
Some photographs have been altered so well it's almost impossible to tell they were changed.
But videos can't be faked, right? Wrong. New technology is proving otherwise.
Deep fake technology has been developed to superimpose someone’s face onto another’s.
According to the computer science department chair at Winona State University, Mingrui Zhang, the idea behind the technology has been around for more than ten years.
It has mostly been used for entertainment purposes, such as in the popular children's movie Toy Story.
"It uses [a] generative adversarial network (GAN), which is based on neural network algorithms," Zhang said. "It is like any unsupervised neural network; it learns from the subjects."
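The adversarial idea Zhang describes can be sketched in a few lines of code. The toy example below is an illustration only, not how deep fake software is actually built: a one-parameter "generator" tries to produce numbers that look like they came from the real data, while a "discriminator" tries to tell real from fake, and each learns from the other. All names, values, and the 1-D setup are invented for illustration.

```python
import math
import random

random.seed(0)

def sigmoid(x):
    # squash to (0, 1), clamped to avoid overflow
    return 1.0 / (1.0 + math.exp(-max(-60.0, min(60.0, x))))

# Toy "real data": numbers drawn from a Gaussian around 4.0 (made up for this sketch)
REAL_MEAN, REAL_STD = 4.0, 0.5

# Generator g(z) = w*z + b turns random noise z into a fake sample
# Discriminator d(x) = sigmoid(u*x + c) scores how "real" a sample looks
w, b = 1.0, 0.0
u, c = 0.0, 0.0
lr = 0.02  # learning rate

def gen_sample():
    return w * random.gauss(0, 1) + b

before = sum(gen_sample() for _ in range(200)) / 200  # fakes start near 0

for step in range(3000):
    # --- discriminator update: push d(real) toward 1, d(fake) toward 0 ---
    xr = random.gauss(REAL_MEAN, REAL_STD)
    z = random.gauss(0, 1)
    xf = w * z + b
    dr, df = sigmoid(u * xr + c), sigmoid(u * xf + c)
    # gradients of the loss -log d(xr) - log(1 - d(xf))
    gu = -(1 - dr) * xr + df * xf
    gc = -(1 - dr) + df
    u -= lr * gu
    c -= lr * gc

    # --- generator update: push d(fake) toward 1 (fool the discriminator) ---
    z = random.gauss(0, 1)
    xf = w * z + b
    df = sigmoid(u * xf + c)
    # gradients of -log d(xf) with respect to w and b (chain rule through d)
    gw = -(1 - df) * u * z
    gb = -(1 - df) * u
    w -= lr * gw
    b -= lr * gb

after = sum(gen_sample() for _ in range(200)) / 200
print("mean of fakes before:", round(before, 2), "after:", round(after, 2))
```

After training, the generator's output drifts toward the real data's mean: neither network is ever shown an explicit rule, each only learns by competing with the other. Real deep fake systems apply the same two-player game to images of faces rather than single numbers.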
According to a research paper by Robert Chesney, a professor at the University of Texas School of Law, and Danielle Citron, a professor at the University of Maryland Francis King Carey School of Law, “Harmful lies are nothing new. But the ability to distort reality has taken an exponential leap forward with ‘deep fake’ technology. This capability makes it possible to create audio and video of real people saying and doing things they never said or did.”
This technology could pose a threat to privacy and security, according to Zhang.
"It may bring up legal and ethical concerns," Zhang said. "Those are also what [the] computing education society is facing: the social implication of technology."
The social media app Snapchat has a similar feature: it maps out a user's face and can overlay a friend's face or other filters onto it.
In that sense, Snapchat is similar to deep fake technology, but its flaws can be detected rather quickly, and viewers can tell the face isn't someone else's.
Deep fake technology is more complex, but the results are far more convincing.
"For example, you want actor B to behave like actor A," Zhang said. "You take video of actor A, the software will analyze the video and construct the skeleton of A, and A's motion. In filming, wrapping the skeleton of A with the skin of actor B will make the audience think that B is in action. That's how AVATAR was made, but the process is too expensive for [the] average person. But with [the] help of a machine learning algorithm like GAN, faking is possible for everyone."
One arena where deep fake technology became an issue was the porn industry, where users of the technology were placing celebrities' faces onto performers in porn videos.
This raises questions of consent and of the well-being of those celebrities, who never gave permission for their faces to appear in those videos.
"Deep fakes make them available to [the] average person. It started for entertainment, [but] could be used to fake [an action] someone has never committed," Zhang said.
Chesney and Citron wrote more on the effects of deep fakes.
“Deep fakes have characteristics that ensure their spread beyond corporate or academic circles. For better or worse, deep-fake technology will diffuse and democratize rapidly,” wrote Chesney and Citron. “. . . technologies—even dangerous ones—tend to diffuse over time.”
With that in mind, the porn industry may not be the only industry affected, as this type of technology is hard to contain.
Chesney and Citron also wrote about how deep fake technology could affect journalism.
“Media entities may grow less willing to take risks in that environment, or at least less willing to do so in timely fashion,” wrote Chesney and Citron. “Without a quick and reliable way to authenticate video and audio, the press may find it difficult to fulfill its ethical and moral obligation to spread truth.”
Video posted on YouTube by: Bloomberg