Meta's policies on manipulated content and AI-generated "deepfakes" are under review after the company's moderators declined to remove a Facebook video that falsely branded US President Joe Biden a paedophile.
The Oversight Board, an independent body of journalists, academics, and politicians that functions as a kind of supreme court for Meta's content decisions, has opened an investigation to determine whether the company's guidelines on altered videos and images can withstand present and future challenges.
This inquiry, the first of its kind concerning Meta’s “manipulated media” policies, stems from a modified version of a video that circulated during the 2022 US midterm elections. In the original video, President Biden affixes an “I Voted” sticker to his adult granddaughter’s chest and kisses her on the cheek.
However, in a Facebook post from May 2023, a seven-second edited version of the clip loops the moment when Biden's hand touches her chest and includes a caption branding Biden "a sick paedophile" and his supporters "mentally unwell." The edited clip remains on Facebook.
Although the Biden video was edited without the use of artificial intelligence, the Oversight Board contends that its review and decisions will set a precedent for both AI-generated and human-edited content.
Thomas Hughes, director of the Oversight Board administration, emphasized the broader issue of how manipulated media might influence elections worldwide. He pointed out the importance of striking a balance between free speech and Meta’s human rights responsibilities regarding video content that misrepresents public figures.
Hughes also stressed the need to consider the challenges and best practices that Meta should adopt for authenticating video content on a large scale. This review comes at a time when AI-altered content, often referred to as deepfakes, is becoming more sophisticated and widespread, raising concerns about the potential influence of fake but convincing content on elections.
The Biden case came to light when a user reported the video to Meta, but the company chose not to remove it and upheld that decision after a Facebook appeals process. By early September, the video had received fewer than 30 views and had not been shared. An unidentified user then appealed to the Oversight Board, and Meta confirmed to the board that its decision to leave the content on the platform was correct.
The Oversight Board has been conducting various investigations into content moderation during elections and civic events. The board can issue non-binding policy recommendations to Meta after completing its review, and Meta must respond within two months. Public submissions are invited, and anonymity can be maintained.
Meta has clarified that the video was simply edited to remove certain portions and did not meet the criteria for being a deepfake according to its manipulated media policies. The company indicated that it would implement the board’s decision once the deliberation is complete, and it also mentioned that the video did not violate its hate speech or bullying policies.
from Firstpost Tech Latest News https://ift.tt/ZjMOQC1