Deepfakes are now part of the Israel-Gaza conflict. The conflict has seen a proliferation of manipulated images aimed at deceiving people and swaying public opinion. Social media platforms are filled with distressing images: a bloodied corpse, a photo of a crying baby seemingly placed amid rubble, and visuals that appear to show an entire Gaza neighbourhood razed to the ground.
However, a number of these images, often shared from the official pages of news outlets and government ministers, have turned out to be AI-generated.
Israel and Hamas are both using deepfakes
Deepfakes have been used by both Israel and Hamas. The extent to which these manipulations have proliferated on social media, persuaded the masses, shaped public opinion, and influenced decisions is a cause for concern, chiefly because of how easy it is for most people to fall for such images.
A significant proportion of the counterfeit images that have surfaced on various platforms, including X and Facebook, may be aptly described as ‘shock and awe’ images. This categorisation, put forth by Henry Ajder, an authority on deepfakes and generative artificial intelligence, underscores that these images are deliberately designed to elicit heightened emotions and immediate, sensational responses.
The capabilities of AI image generators have evolved significantly over time, which has made images more convincing and easier to create. In an era where misinformation on social media is so common in politics, conflicts, and other major events, the increasing number of hard-to-detect deepfake images poses a serious threat.
Liar’s Dividend
It’s worth noting that poorly contextualised and unreliable deepfake detection can be more harmful than having no detection at all. Such unreliable detection methods may provide a false sense of confidence in distinguishing between genuine and manipulated content, leading to misguided assessments. This feeds into a phenomenon known as the “liar’s dividend”, a term coined by legal scholars Bobby Chesney and Danielle Citron, whereby the mere existence of deepfakes lets bad actors dismiss real evidence as fake.
A recent analysis conducted by AI company Accrete for Bloomberg News revealed that five accounts associated with a Hamas-aligned network on platform X have consistently alleged that real footage from Israel was generated using AI technology, as a means to discredit the authenticity of the content. Similarly, content coming out of Gaza has been flagged as AI-generated when it was in fact proven to be real.
Side-stepping security provisions has become very easy
Andy Carvin, a senior fellow at the Atlantic Council’s Digital Forensic Research Lab, noted that it has become a common occurrence to encounter discussions on social platforms such as Facebook and X where individuals challenge the authenticity and context of images.
Carvin, representing the Atlantic Council, emphasized that deepfake images are disseminated online through accounts associated with both Israeli and Palestinian interests. It has been observed that some individuals create manipulated images to support specific causes, frequently incorporating images of children.
Although generative AI tools typically incorporate safeguards against the creation of violent content, Carvin explained that it is relatively straightforward to instruct these tools to generate images featuring children, such as the widely shared image of a group of children in the aftermath of an airstrike.
(With input from agencies)
from Firstpost Tech Latest News https://ift.tt/Ogyu5ts