TORONTO -- As AI-generated fake videos become more convincing, what was once a tool for sharing laughs on the internet has grown into a worrying corner of digital media.
Whether it's a viral video of "Tom Cruise" doing a magic trick or "Facebook's Mark Zuckerberg" delivering a boast he never actually made, deepfake videos have the capacity to cause real harm to people who fall for their deception.
A Pennsylvania woman was accused of creating deepfake videos of girls on a cheerleading team her daughter used to belong to, videos that showed the girls nude, smoking or partying, in an attempt to get them kicked off the team.
Graphic artist Chris Ume, the mastermind behind the Tom Cruise TikTok deepfake, told 麻豆影视 that when he started making deepfake videos it was just to "have good fun."
But now as manipulated media continues to make headlines, his views have changed.
"I'm concerned that it's getting easier to do it," Ume said. "Especially when people want to misuse the technology."
WHAT ARE DEEPFAKES?
An AI process known as "deep learning" is used to manipulate photos and videos to create deepfake media.
Many of them are used for pornography. One study found more than 14,500 deepfake videos online, 96 per cent of them pornographic in nature. Of the videos studied, 99 per cent involved swapping female celebrities' faces onto the bodies of porn stars without their consent.
Deepfake technology can also be used to create convincing but ultimately fake pictures from scratch, and audio can be faked as well in a process known as "voice skins," where someone's voice is cloned and then manipulated to "say" whatever the user wants.
Public figures such as politicians and CEOs are especially vulnerable to this process because of their frequent public addresses; in 2019, scammers reportedly used an AI-cloned voice to defraud a company's chief executive. The CEO thought he was speaking to his boss, a German executive.
A video of a politician insulting other politicians caused outrage in Italy before it was revealed to be a manipulated clip made for an Italian satirical show.
HOW ARE DEEPFAKE VIDEOS MADE?
There are a few ways to make a deepfake, and each involves several steps.
One approach begins by feeding thousands of photos of two people into an AI algorithm called an encoder, which finds the similarities between the two faces and reduces them to a set of shared features.
A second algorithm, known as a decoder, is then trained to recover each person's face from that compressed representation. To make a face swap, the user simply feeds the encoded images of one person into the decoder trained on the other, effectively swapping their faces. This has to be done for every frame of the video being created.
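For readers curious about what that looks like in practice, here is a minimal sketch of the shared-encoder, twin-decoder idea in Python using the PyTorch library. The layer sizes, the 64x64 face crops and the random stand-in data are illustrative assumptions, not the pipeline of any particular deepfake tool.

```python
import torch
import torch.nn as nn

def conv_block(c_in, c_out):
    # Downsampling step used by the shared encoder.
    return nn.Sequential(nn.Conv2d(c_in, c_out, 3, stride=2, padding=1), nn.ReLU())

def deconv_block(c_in, c_out):
    # Upsampling step used by each person's decoder.
    return nn.Sequential(nn.ConvTranspose2d(c_in, c_out, 4, stride=2, padding=1), nn.ReLU())

# One encoder learns the features common to both faces...
encoder = nn.Sequential(conv_block(3, 32), conv_block(32, 64), conv_block(64, 128))

# ...and each person gets their own decoder that rebuilds a face from those features.
def make_decoder():
    return nn.Sequential(
        deconv_block(128, 64),
        deconv_block(64, 32),
        nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1),
        nn.Sigmoid(),
    )

decoder_a, decoder_b = make_decoder(), make_decoder()

# Training: each decoder learns to reconstruct its own person's face from the
# shared encoding. Random tensors stand in for real face crops here.
faces_a = torch.rand(8, 3, 64, 64)
faces_b = torch.rand(8, 3, 64, 64)
params = list(encoder.parameters()) + list(decoder_a.parameters()) + list(decoder_b.parameters())
optimizer = torch.optim.Adam(params, lr=1e-4)

optimizer.zero_grad()
loss = (nn.functional.mse_loss(decoder_a(encoder(faces_a)), faces_a)
        + nn.functional.mse_loss(decoder_b(encoder(faces_b)), faces_b))
loss.backward()
optimizer.step()

# The swap: encode person A's face, then decode it with person B's decoder.
# The result is B's face wearing A's expression and pose, repeated for every frame.
swapped = decoder_b(encoder(faces_a))
```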
Generative Adversarial Networks, or GANs, are another method. GANs pit two AI algorithms against each other: one generates synthetic images from random noise, while the other compares them to real photos and tries to flag the forgeries.
By cycling the content through the networks hundreds of thousands of times, the two systems learn to produce believable manipulated media.
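That adversarial back-and-forth can be sketched in a few lines of PyTorch. The flattened toy images, the network sizes and the random stand-in data below are illustrative assumptions; real deepfake GANs are vastly larger and train for far longer.

```python
import torch
import torch.nn as nn

# Generator: turns random noise into a fake image (as a flat 28x28 vector here).
generator = nn.Sequential(
    nn.Linear(100, 256), nn.ReLU(),
    nn.Linear(256, 28 * 28), nn.Tanh())

# Discriminator: outputs a logit saying whether an image looks real or fake.
discriminator = nn.Sequential(
    nn.Linear(28 * 28, 256), nn.LeakyReLU(0.2),
    nn.Linear(256, 1))

opt_g = torch.optim.Adam(generator.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(discriminator.parameters(), lr=2e-4)
loss_fn = nn.BCEWithLogitsLoss()

real_images = torch.randn(16, 28 * 28)  # stand-in for a batch of real photos
for step in range(3):                   # hundreds of thousands of cycles in practice
    # 1. Train the discriminator to accept real images and flag the fakes.
    fake_images = generator(torch.randn(16, 100)).detach()
    d_loss = (loss_fn(discriminator(real_images), torch.ones(16, 1))
              + loss_fn(discriminator(fake_images), torch.zeros(16, 1)))
    opt_d.zero_grad()
    d_loss.backward()
    opt_d.step()

    # 2. Train the generator to fool the discriminator.
    fake_images = generator(torch.randn(16, 100))
    g_loss = loss_fn(discriminator(fake_images), torch.ones(16, 1))
    opt_g.zero_grad()
    g_loss.backward()
    opt_g.step()
```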
Creating convincing deepfakes still requires powerful desktop computers with high-end graphics cards and a working knowledge of video editing, which means the average internet user is not going to be churning out manipulated videos or photos anytime soon.
However, "shallow fakes," which are videos manipulated using regular editing tools, are still capable of fooling people.
Facebook has banned deepfake videos in a bid to stem misinformation, but its policy did not extend to "shallow fakes," which is why a manipulated video of House Speaker Nancy Pelosi "slurring" her way through a speech was allowed to stay on the site.
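To illustrate how little is needed, the sketch below, which assumes the OpenCV library and a hypothetical file named speech.mp4, simply re-writes a video's frames at a lower frame rate, the kind of trivial slow-down edit widely reported to be behind the Pelosi clip.

```python
import cv2

# Read an existing clip and write the identical frames back out at 75 per cent
# of the original frame rate, so the footage plays back more slowly.
reader = cv2.VideoCapture("speech.mp4")
fps = reader.get(cv2.CAP_PROP_FPS)
width = int(reader.get(cv2.CAP_PROP_FRAME_WIDTH))
height = int(reader.get(cv2.CAP_PROP_FRAME_HEIGHT))

fourcc = cv2.VideoWriter_fourcc(*"mp4v")
writer = cv2.VideoWriter("speech_slowed.mp4", fourcc, fps * 0.75, (width, height))

while True:
    ok, frame = reader.read()
    if not ok:
        break
    writer.write(frame)  # the pixels are untouched; only the playback speed changes

reader.release()
writer.release()
```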
HOW CAN YOU SPOT A DEEPFAKE?
As AI technology advances, spotting deepfake videos becomes much more difficult.
Poor lip-syncing or flickering between video frames can give away a lower-quality deepfake, but higher-quality tools have largely eliminated those telltale flaws.
In 2018, researchers announced their discovery that deepfake faces didn't blink, but no sooner had that weakness been made public than deepfakes with blinking faces began to appear.
Ironically, AI may be the best way to spot deepfakes, and several major technology companies have launched initiatives to detect and remove manipulated media.
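As a rough illustration of how such detection systems work, the sketch below defines a small convolutional network in PyTorch that learns to label face images as real or fake. The architecture, the 128x128 input size and the random stand-in batch are assumptions for demonstration, not any company's actual detection model.

```python
import torch
import torch.nn as nn

# A tiny binary classifier: given a face crop, predict real (0) or fake (1).
class DeepfakeDetector(nn.Module):
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(64, 128, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(128, 1)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))  # logit: >0 leans "fake"

model = DeepfakeDetector()
loss_fn = nn.BCEWithLogitsLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)

# One illustrative training step on random tensors standing in for labelled data.
faces = torch.randn(8, 3, 128, 128)
labels = torch.randint(0, 2, (8, 1)).float()  # 0 = real, 1 = fake
optimizer.zero_grad()
loss = loss_fn(model(faces), labels)
loss.backward()
optimizer.step()
```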
WHAT IS THE DANGER WITH DEEPFAKES?
While a deepfake is unlikely to trick the public into believing a large-scale event has occurred, since most countries have surveillance systems and intelligence communities to verify such claims, deepfake media can still erode trust in public institutions and individuals.
In 2019, professor Hany Farid of the University of California, Berkeley warned about the harm deepfakes inflict on women, especially when they are used to create and distribute revenge porn.
Farid told 麻豆影视 that the risks from deepfakes are tantamount to "massive fraud."
A bot on the messaging app Telegram was discovered in 2020 that had been used to "undress" more than 100,000 women, many of whom were under the age of 18.
The Telegram bot is thought to be powered by software that uses deep learning to generate what it thinks a person's body looks like.
"We can really wreak havoc, and the real concern here is the virality with which this content spreads online before anybody figures out that it's fake," Farid said.
------------
With files from 麻豆影视' Washington bureau correspondent Richard Madan