Meta is facing a review of its policies on manipulated content and artificial intelligence-generated "deepfakes", after the company's moderators declined to remove a Facebook video that falsely described US president Joe Biden as a paedophile.
The Silicon Valley company’s Oversight Board, an independent Supreme Court-style body set up in 2020 and consisting of 20 journalists, academics and politicians, said on Tuesday it was opening a case to examine whether the social media giant’s guidelines on altered videos and images could “withstand current and future challenges”.
The investigation, the first of its kind into Meta's "manipulated media" policies, was prompted by an edited version of a video filmed during the 2022 US midterm elections. In the original clip, Biden places an "I Voted" sticker on his adult granddaughter's chest and kisses her on the cheek. In a Facebook post from May this year, a seven-second altered version of the clip loops the footage so that it repeats the moment when Biden's hand makes contact with her chest.
The accompanying caption calls Biden "a sick paedophile" and those who voted for him "mentally unwell". The clip is still on the Facebook site.

Although the Biden video was edited without the use of artificial intelligence, the board argues its review and rulings will also set a precedent for AI-generated and human-edited content.

"It touches on the much broader issue of how manipulated media might impact elections in every corner of the world," said Thomas Hughes, director of the Oversight Board administration.

"Free speech is vitally important, it's the cornerstone of democratic governance," Hughes said. "But there are complex questions concerning what Meta's human rights responsibilities should be regarding video content that has been altered to create a misleading impression of a public figure."

He added: "It's important that we look at what challenges and best practices Meta should adopt when it comes to authenticating video content at scale."

The board's investigation comes as AI-altered content, often described as deepfakes, is becoming increasingly sophisticated and widely used.
There are concerns that fake but realistic content of politicians, in particular, could influence voting in upcoming elections. The US goes to the polls in just over a year. The Biden case surfaced when a user reported the video to Meta, which did not remove the post and upheld its decision to leave it online following a Facebook appeals process. As of early September, the video had fewer than 30 views and had not been shared. The unidentified user then appealed against the decision to the Oversight Board.
Meta said its decision to leave the content on the platform was correct. The Biden case adds to the board's growing number of investigations into content moderation around elections and other civic events. This year the board overturned Meta's decision to leave up a Facebook video in which a Brazilian general, whom the board did not name, potentially incited street violence following elections. Previous assessments have focused on the decision to block former US president Donald Trump from Facebook, as well as a video in which Cambodian prime minister Hun Sen threatens his political opponents with violence.
Once the board has completed its review, it can issue non-binding policy recommendations to Meta, which must respond within two months. The board has invited submissions from the public, which can be provided anonymously. In a post on Tuesday, Meta reiterated that the video was “merely edited to remove certain portions” and therefore not a deepfake caught by its manipulated media policies. “We will implement the board’s decision once it has finished deliberating, and will update this post accordingly,” it said, adding that the video also did not breach its hate speech or bullying policies.
Source: Financial Times