Facebook Is Banning Deepfake Videos Ahead of the 2020 Election

01/10/2020 by Editorial Team

It’s a step in the right direction—but will it make a difference?

  • In a blog post published on Monday, Facebook said it would ban all “misleading manipulated media,” including deepfakes, before the 2020 presidential election.
  • Deepfakes are a type of media, usually in video form, that uses artificial intelligence to make the subject appear to do or say something that, in reality, they did not.
  • During a House Energy and Commerce hearing on Wednesday that addressed manipulated media, Monika Bickert, vice president of global policy management for Facebook, fielded questions.

Remember that viral video of Nancy Pelosi that President Donald Trump reposted on Twitter last summer? The altered video showed the Democratic Speaker of the House slurring through what appeared to be a drunken soliloquy. It was widely described as a deepfake, though, as we’ll see, it was manipulated with ordinary editing software rather than with artificial intelligence.

While it may be too late for the folks who saw and shared that particular video, Facebook is trying to prevent the spread of this new-age form of misinformation through the rest of the 2020 presidential election cycle with a new internal policy on deepfakes.

In a blog post released Monday, Monika Bickert, vice president of global policy management for Facebook, wrote that the company is making new efforts to “remove misleading manipulated media.”

The social media giant’s new rules came just days before Wednesday morning’s marathon House Energy and Commerce hearing on manipulated media. Bickert, the author of the Facebook deepfake blog post, was one of the witnesses the committee questioned. She said again and again that the company wants to ensure people have control over their data and over what they see on the platform, but she was indirect in answering some specific questions.

It was apparent that the committee was looking toward legislation to address deepfakes, given their potential to disrupt the political process, elections, journalism, and even individual freedom.

“Technology is outpacing technology and the people,” said Lisa Blunt Rochester, a U.S. Representative from Delaware, during the hearing.

What Is a Deepfake?

A widely shared gif of former President Richard Nixon giving a speech on the Apollo moon mission is not drawn from real footage; it comes from a deepfake video in which he announces that the astronauts on board have died.

In 2014, Ian Goodfellow, then a Ph.D. student and now a researcher at Apple, invented generative adversarial networks, or GANs, the technology that underpins deepfakes.

GANs move algorithms beyond the simple task of classifying data and into the arena of creating it, in this case images. A GAN pits two neural networks against each other: a generator produces synthetic images and tries to fool a discriminator into judging them real, while the discriminator learns to catch the fakes. Starting from as little as one image, a well-trained GAN can create a video clip of, say, Richard Nixon saying something patently false that he never actually said. (Yes, this has already been done.)
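
To make that adversarial loop concrete, here is a minimal, illustrative GAN training sketch in PyTorch. It generates 2-D points rather than faces, and every name in it is a toy construction of ours, not code from any real deepfake system.

    import torch
    import torch.nn as nn

    # Toy generator: maps 16-dimensional noise to a fake 2-D "sample".
    G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))
    # Toy discriminator: scores how "real" a 2-D sample looks.
    D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))

    opt_g = torch.optim.Adam(G.parameters(), lr=1e-3)
    opt_d = torch.optim.Adam(D.parameters(), lr=1e-3)
    bce = nn.BCEWithLogitsLoss()

    for step in range(2000):
        real = torch.randn(64, 2) * 0.5 + 2.0   # stand-in "real" data cluster
        fake = G(torch.randn(64, 16))           # the generator's forgeries

        # Discriminator update: label real samples 1 and fakes 0.
        d_loss = bce(D(real), torch.ones(64, 1)) + \
                 bce(D(fake.detach()), torch.zeros(64, 1))
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Generator update: try to make the discriminator call fakes real.
        g_loss = bce(D(fake), torch.ones(64, 1))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

Scaled up from 2-D points to video frames, this same tug-of-war is what lets a deepfake pipeline produce footage that neither the discriminator nor, eventually, a human viewer can tell from the real thing.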

“Crucially, the system can initialize the parameters of both the generator and the discriminator in a person-specific way, so that training can be based on just a few images and done quickly, despite the need to tune tens of millions of parameters,” researchers wrote in a May 2019 paper describing how easily a talking-head deepfake can be created from even a single image. “We show that such an approach can learn highly realistic and personalized talking head models of new people and even portrait paintings.”
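
A rough sketch of that few-shot adaptation step, in PyTorch, might look like the following. The checkpoint name, module shapes, and loss here are hypothetical stand-ins of ours; the system described in the paper is far more elaborate.

    import torch
    import torch.nn as nn

    # Hypothetical generator; in the real system its starting weights would
    # come from meta-training across many identities, e.g.:
    # generator.load_state_dict(torch.load("meta_trained_generator.pt"))
    generator = nn.Sequential(nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, 64))

    # A handful of photos of one person (random tensors as stand-ins here).
    few_shot_images = [torch.randn(64) for _ in range(3)]
    opt = torch.optim.Adam(generator.parameters(), lr=1e-4)

    # Quick person-specific fine-tuning: a few passes over a few images,
    # rather than training tens of millions of parameters from scratch.
    for epoch in range(50):
        for img in few_shot_images:
            output = generator(torch.randn(64))  # real systems condition on pose/landmarks
            loss = nn.functional.mse_loss(output, img)
            opt.zero_grad(); loss.backward(); opt.step()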

In the Nixon example, a team at MIT used deep learning, a type of artificial intelligence, to edit the video footage and hired a voice actor to recreate Nixon’s voice. Working with Canny AI, an Israeli startup, the researchers applied video dialogue replacement techniques to replicate the movement of Nixon’s lips, matching his mouth to the fake speech. The final product is a thoroughly believable video of Nixon telling the U.S. public that the moon landing mission had failed.

Clearly, videos like this and the doctored Pelosi clip are a threat not only to credible news, but also to national security and, yes, even elections.

Facebook’s Strategy

Facebook is taking a specific, two-pronged approach to flagging and removing deepfakes. For a video to be taken down, it must meet both of the following criteria, according to the blog post (a toy sketch of the test follows the list):

  • It has been edited or synthesized – beyond adjustments for clarity or quality – in ways that aren’t apparent to an average person and would likely mislead someone into thinking that a subject of the video said words that they did not say.
  • It is the product of artificial intelligence or machine learning that merges, replaces, or superimposes content onto a video, making it appear to be authentic.

Satire and parody videos are still safe, though, as are videos that have been edited only to omit or change the order of words.
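
Read as logic, the policy is a two-pronged boolean test with carve-outs. The Python sketch below is our own toy model of that test; the field names and function are hypothetical, not any real Facebook API.

    from dataclasses import dataclass

    @dataclass
    class Video:
        misleadingly_edited: bool   # edits beyond clarity/quality that would mislead
        ai_synthesized: bool        # AI/ML merged or superimposed content onto the video
        satire_or_parody: bool      # exempt category
        only_omits_or_reorders_words: bool  # exempt category

    def should_remove(v: Video) -> bool:
        # Satire, parody, and simple word omission/reordering stay up.
        if v.satire_or_parody or v.only_omits_or_reorders_words:
            return False
        # Both prongs must hold for removal under the stated policy.
        return v.misleadingly_edited and v.ai_synthesized

    clip = Video(misleadingly_edited=True, ai_synthesized=True,
                 satire_or_parody=False, only_omits_or_reorders_words=False)
    print(should_remove(clip))  # True: meets both prongs, no exemption

Note that both prongs must hold, which is exactly the gap the next paragraph describes.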

The thing about the Pelosi video is that Facebook’s new ban doesn’t appear to cover it at all. After the video went viral last summer, it was widely viewed on Facebook, and according to The Verge, Facebook said at the time that it did not violate the company’s policies. Under the new rules, it still looks like the Pelosi video could slip through the cracks and remain on the site, because that specific video wasn’t created with artificial intelligence at all; it was most likely edited with readily available software that slowed Pelosi’s speech into a drunken slur.

Still, Facebook says videos that don’t meet its deepfake removal standards can be reviewed by its independent third-party fact-checkers, a network of more than 50 global partners working in over 40 languages.

“If a photo or video is rated false or partly false by a fact-checker, we significantly reduce its distribution in News Feed and reject it if it’s being run as an ad,” the blog post says. “And critically, people who see it, try to share it, or have already shared it, will see warnings alerting them that it’s false.”

Facebook says that if all manipulated videos were simply removed, they would still be available “elsewhere on the internet or social media ecosystem,” stripped of context. Instead, the company contends, it’s better to label such videos as false. That is likely what would happen to the Pelosi video if it surfaced today.
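
In other words, videos that fail the removal test fall through to a label-and-demote path. A hedged sketch of that fallback flow, with hypothetical rating values and action names of our own, might look like this:

    # Illustrative only: the strings below paraphrase the blog post and are
    # not Facebook's actual rating labels or enforcement API.
    def handle_fact_checked_video(rating: str) -> list[str]:
        if rating not in ("false", "partly false"):
            return []  # no intervention described for other ratings
        return [
            "significantly reduce distribution in News Feed",
            "reject the video if it is running as an ad",
            "warn users who see, share, or have already shared it",
        ]

    print(handle_fact_checked_video("false"))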

A House Hearing on Deepfakes

Dr. Joan Donovan, research director of the Technology and Social Change Project at Harvard’s Shorenstein Center on Media, Politics, and Public Policy, said during the House committee hearing that she believes decentralized technology, such as blockchain, could help identify the users who created the original deepfake videos that end up on social media.

Bickert, meanwhile, said that Facebook already does “verify the identity of those advertisers” that post ads to the social media site, seeming to imply that no further intervention is needed. Facebook also keeps a full library of those ads and records of who is responsible for them, she added.

Congressman Jerry McNerney, who represents California’s 9th district, asked Bickert whether Facebook’s fact-checking process happens quickly enough to prevent viral videos from spreading; she replied that user reporting of deepfakes allows the removal process to happen quickly. However, she skirted McNerney’s question about whether Facebook would be willing to submit, by June 1 of this year, to a third-party audit of its practices in spreading or preventing misinformation.

“We think transparency is important,” Bickert replied, adding that the company would be happy to follow up with the committee on specific concerns, but did not address the possibility of an audit.

Meanwhile, other members of the committee are working on draft legislation that could force Facebook to rethink the way it handles deepfakes and other forms of misinformation.

Representative Yvette Diane Clarke, of New York’s 9th congressional district, has proposed a deepfake bill that would force content creators to label deepfakes as such if they want to post them anywhere online.

Donovan said this kind of measure could help content platforms keep misinformation from going viral, an approach she calls “proactive” content moderation.
