Into The Deep End

The turn of the century brought new technology and advancements in Artificial Intelligence. Deepfakes, a new form of generative model, grow more advanced every day. As deepfakes become more realistic and easier to make, their potential societal impact grows with them.

A video of Mark Zuckerberg pops up on an Instagram feed; Zuckerberg speaks freely, sharing his ideas and the ‘truth’ about privacy on Facebook. “One man, with total control of billions of people’s stolen data, all their secrets, their lives, their futures,” Zuckerberg says, referring to himself. Except it’s not actually Zuckerberg. In fact, the person on the screen doesn’t even exist. Zuckerberg never spoke about Facebook’s far reach in this way, never filmed or released this video.

Deepfake technology continues to progress, each day getting more and more advanced.

Around 2012, deep learning, an approach that uses layered neural networks to loosely mimic the human brain's ability to learn from data, began to shift from analysis to content generation. Instead of just viewing and classifying images, computers became able to create and manipulate media. In the years since, Generative Adversarial Networks (GANs) have grown steadily more complex and powerful.

“Starting from GAN work, there’s more and more development now, more and more innovations, so the generated image can become more high-quality and more diverse,” said Dr. Xiaoming Liu, a professor at Michigan State University and a pioneer in deepfake identification and management. “For example, in the beginning of the GAN network, GAN typically generated very small, not very high quality images. It could only generate human faces. More recently, [models] generate not just faces but also objects. The content becomes more complicated, yet, with the innovation of different GAN models, the quality also becomes higher.”
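
For readers curious what sits under those terms, here is a minimal, illustrative sketch of a GAN's two-network setup, written in Python with TensorFlow. The layer sizes and the small 28-by-28 grayscale output are assumptions chosen for readability, not any specific model Liu describes; real face-generating GANs are far larger.

```python
# A toy GAN skeleton: a generator that turns random noise into an image,
# and a discriminator that guesses whether an image is real or generated.
# All sizes here are illustrative assumptions, not a production model.
import tensorflow as tf
from tensorflow.keras import layers

LATENT_DIM = 100  # length of the random "noise" vector the generator starts from

# The generator maps noise to a small 28x28 grayscale image.
generator = tf.keras.Sequential([
    layers.Input(shape=(LATENT_DIM,)),
    layers.Dense(256, activation="relu"),
    layers.Dense(28 * 28, activation="tanh"),
    layers.Reshape((28, 28, 1)),
])

# The discriminator scores an image: closer to 1 means "looks real."
discriminator = tf.keras.Sequential([
    layers.Input(shape=(28, 28, 1)),
    layers.Flatten(),
    layers.Dense(256, activation="relu"),
    layers.Dense(1, activation="sigmoid"),
])

# Training alternates between the two networks: the discriminator is rewarded
# for catching fakes, the generator for fooling it. Over many rounds the
# generated images become harder and harder to tell apart from real ones.
noise = tf.random.normal([16, LATENT_DIM])
fake_images = generator(noise)          # a batch of generated images
realness_scores = discriminator(fake_images)
```

That adversarial back-and-forth between the two networks is what drives the rising quality Liu describes.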

As quality improves, it becomes harder to tell whether an image is real or fake. The clearer and more accurate a deepfake becomes, the more compelling, and therefore the more dangerous, it is.

This is problematic in several different circumstances. Political content is already polarized and intense online; fake news seeps into personalized ads and is spewed in comment section battles.

We have seen the real-world impact of the unchecked dissemination of fake news online, including the Jan. 6 insurrection in 2021, which came after former President Trump spent months shouting from the rooftops and across Twitter that the 2020 election had been stolen. As ever more believable and convincing deepfakes enter this fraught ground, concern grows.

Hyper-realistic deepfakes have the potential to be used in political environments. Computer-generated videos of Trump, Biden, and Obama have already circulated on social media platforms. Watching a deepfake Trump in a “Breaking Bad” scene may be amusing, but this kind of content can have a very real societal impact.

“Imagine, two weeks before the presidential election, some misinformation about the candidate is being spread out on Facebook… a lot of people see it,” Liu said. “So people may not have the ability to tell whether it’s real or fake, or it’s much too close to the election date. There’s no time to explain those things.”
Misinformation shared by a computer-generated — but very realistic — political candidate, without time for the real person to remedy or clear up the controversy, could very well skew an election if people believe it. And deepfakes are getting to be that good.

Deepfakes also raise concerns about personal privacy. Images and videos can be manipulated to create inappropriate or false content. Pornography and other explicit media can be generated easily with rapidly progressing deepfake technology and used to tarnish a reputation or livelihood. This technology can place any human face into any scenario — and that includes your face.

“So anything with political, national security, and personal privacy—those should be top areas to somehow regulate,” Liu said.

AI generative technology also continues to become more accessible. Deepfake technology is used as a learning tool and as the basis for many entry-level coding and Artificial Intelligence courses. Rita Ionides took an introductory AI course at the University of Michigan during her junior year of high school.

“It was essentially a first course in artificial intelligence,” Ionides said. “The prerequisites were very low. The barrier to entry was not high. And I was just a high school junior who took the course for fun. If I wanted to, I could create some very convincing deepfakes.”

In the class, Ionides learned how to create photo and audio deepfakes using TensorFlow, an open-source software library for machine learning and artificial intelligence. Assignments included manipulating speech to create new audio and generating new images.
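
The details of those assignments aren't public, but as a rough illustration of the kind of audio manipulation such a course might cover, here is a short, hedged sketch using TensorFlow's built-in signal-processing tools. The file name and every parameter are hypothetical, and real voice deepfakes rely on far more sophisticated models.

```python
# A toy audio-manipulation sketch with TensorFlow: load speech, move it into
# the frequency domain, alter it, and write out a new recording.
# "speech_sample.wav" is a hypothetical 16-bit mono WAV file.
import tensorflow as tf

audio_bytes = tf.io.read_file("speech_sample.wav")
waveform, sample_rate = tf.audio.decode_wav(audio_bytes, desired_channels=1)
waveform = tf.squeeze(waveform, axis=-1)

# Short-time Fourier transform: the waveform becomes a grid of frames x frequencies.
frame_length, frame_step = 1024, 256
stft = tf.signal.stft(waveform, frame_length=frame_length, frame_step=frame_step)

# Crude manipulation: keep only every other frame, so the reconstructed
# speech plays back roughly twice as fast.
stft = stft[::2]

# Convert back to a waveform (with a corrective window) and save the result.
altered = tf.signal.inverse_stft(
    stft, frame_length, frame_step,
    window_fn=tf.signal.inverse_stft_window_fn(frame_step))
tf.io.write_file("speech_altered.wav",
                 tf.audio.encode_wav(tf.expand_dims(altered, -1), sample_rate))
```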

“Ultimately, I think the takeaway is that these skills are not hard and deepfakes are everywhere because anyone can make them,” Ionides said.

If anyone can learn to build a convincing deepfake, the creative possibilities are endless. Unfortunately, that includes content with potentially dangerous or harmful societal impact. The accessibility of deepfake creation makes regulation all the more important.

Deepfakes are largely unregulated on an international scale. Regulating them through government intervention is complicated on two levels: the adaptability of legislation and corporate influence. Deepfake technology is rapidly changing and adapting; as soon as one new GAN model appears and is classified, a different, more complex model takes its place. The US government is not set up to adapt quickly to new technology; change is slow and requires a concerted effort to update legislation. Because of this fast-paced technological growth, legislation regulating deepfakes would have to be constantly updated.

Deepfakes are most prevalent on social media platforms: Facebook, Twitter, Instagram. To impose regulation on deepfakes, the US government would need to regulate these corporate giants directly.

“[The government would have to have] big talks with those big tech companies, and try to arrive [at] some consensus to say what is allowed, what is not allowed, and [how] can we have regulation,” Liu said.
These “big tech” companies hold heavy influence in Washington, D.C. Add pre-existing laws and regulations on top of that, and regulating deepfakes becomes a long and difficult process.

Monitoring the interwebs for deepfake material is no small task, and it can take an environmental toll.

“Imagine the volume of how many photos, let’s say, Facebook or Google receive every single minute,” Liu said.
Running every photo through a model to check its legitimacy takes immense energy.

“If you can improve the efficiency of your model of your binary classification by 10%, it can save a lot of energy,” Liu said.

Not only would better screening models reduce the danger deepfakes pose, but they would also reduce the impact on the Earth.
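
To make the “binary classification” Liu mentions concrete, here is a hedged, minimal sketch of a real-versus-fake image detector in TensorFlow. The architecture, the 128-by-128 input size and the random stand-in batch are assumptions for illustration only; the detectors large platforms actually run are far larger and trained on curated deepfake datasets.

```python
# A toy deepfake screening model: a small convolutional network that outputs
# the probability that an uploaded photo is machine-generated.
# Every size and layer choice here is an illustrative assumption.
import tensorflow as tf
from tensorflow.keras import layers

detector = tf.keras.Sequential([
    layers.Input(shape=(128, 128, 3)),        # a downscaled RGB photo
    layers.Conv2D(16, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Conv2D(32, 3, activation="relu"),
    layers.MaxPooling2D(),
    layers.Flatten(),
    layers.Dense(64, activation="relu"),
    layers.Dense(1, activation="sigmoid"),    # probability the photo is fake
])
detector.compile(optimizer="adam", loss="binary_crossentropy", metrics=["accuracy"])

# A platform would run something like this over every upload, so even a small
# saving in compute per photo multiplies across billions of photos a day.
batch = tf.random.uniform([8, 128, 128, 3])   # stand-in for a batch of uploads
fake_probabilities = detector(batch)
```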

More recently, AI generation has advanced to text and literary production. ChatGPT — a new, very popular chatbot — can write essays, answer questions and produce nearly any style or format of writing. The development of text generation is just as concerning as video and audio AI. This style of deepfake technology calls academic integrity and legitimacy into question across institutions and fields; AI-generated text is difficult to trace or check for authenticity.
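
For a rough sense of how little code machine-written text now takes, here is a small sketch using the open-source Hugging Face transformers library and the freely downloadable GPT-2 model. GPT-2 is only a stand-in assumption here; ChatGPT itself runs on much larger, closed systems, but the workflow is similarly short.

```python
# Generating fluent text in a few lines with an open-source model.
# GPT-2 is a small, public stand-in; it is not the model behind ChatGPT.
from transformers import pipeline

generator = pipeline("text-generation", model="gpt2")
prompt = "The most important lesson of the industrial revolution was"
result = generator(prompt, max_length=60, num_return_sequences=1)
print(result[0]["generated_text"])  # several sentences of machine-written prose
```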

Deepfakes continue to advance incredibly quickly across every medium: text, audio, image and video. The more advanced the technology, the more dire the societal consequences.

Unchecked, deepfakes show no sign of slowing their progression, and their impact on technology and society will only grow. Solutions and preventative measures need to develop alongside them — falling behind could prove disastrous. Governments and major corporations must come together and fund a plan to create long-lasting, comprehensive systems that identify and flag AI-generated content on all media platforms.

Losing this race could compromise personal and national security. You may think deepfakes could never affect you, but as you scroll through TikTok or Instagram, do you ask yourself if each video is real? Who created it and who is it benefitting?

“[These deepfake creators are] playing with the limits of technology and civilization. They’re seeing what it can do and they’re seeing what makes it tick,” Ionides said. “What happens when that’s unleashed on the rest of us?”