The viral manipulated videos of Rashmika Mandanna and Katrina Kaif have turned the spotlight back on deepfakes, that is, fake and manipulated videos made using fairly rudimentary AI and ML techniques.
As problematic as the issue is, deepfakes aren't anything new. Morphed videos and images have been around for decades. But because AI is the buzzword of our zeitgeist, morphed videos and images, now rebranded as 'deepfakes', are seeing renewed interest.
However, the two deepfakes are just the tip of the iceberg. The underlying problem is something of a juggernaut.
Not a new problem
Although we are calling them deepfakes now, the issue is not exactly new. Morphed images of Indian women, celebrities or not, have been circulating on porn sites and on platforms like 4chan and Reddit for nearly as long as those platforms have existed. It's just that with photo and video editing tools becoming easier to use, and now with AI, manipulating an image or a video has become much simpler.
There is an entire category of Indian porn made up of fake, manipulated videos and images of Bollywood actors, and it has existed for decades. At times, politicians and journalists have also found themselves at the receiving end of it.
Furthermore, it is not just for smut that people manipulate images and videos. Morphed photos and deepfake videos have also been used as tools of intimidation. Several loan apps, since declared illegal, have manipulated the images of borrowers, especially women, after obtaining them illegally.
The modus operandi is almost the same — morph the face of the borrower onto a pornographic scene, and share it with everyone on the borrower’s contact list.
Not even sophisticated deepfakes
Anyone who has observed AI and deepfakes over the years will tell you that although the way that bad actors work has evolved, they still aren’t sophisticated. Says Satnam Narang, a threat researcher at Tenable, “So far, the advancement of generative AI has not yet had an impact on the world of deepfakes. We’re still seeing rudimentary deepfakes being used to scam victims out of money as part of cryptocurrency scams. However, once generative AI adoption occurs in this space, it will make it that much harder for users to distinguish between deep fake and non-deep fake-generated content.”
As per a report titled 2023 State of Deepfakes, published by the United States-based Home Security Heroes, there has been a 550 per cent increase in the number of deepfake videos this year compared to 2019.
This is mainly because 60-second deepfake videos are quicker and cheaper to make than ever: a single clear face image, imposed on an existing video, can produce one in under 25 minutes at little to no cost.
Proper deepfake videos made from scratch are where the real trouble lies. Although they are comparatively expensive to produce and require more skill, they are out there, and they are virtually impossible to distinguish from genuine footage.
Tip of the iceberg
As horrific as Rashmika Mandanna and Katrina Kaif’s deepfakes are, they are just the tip of the iceberg. There are countless women in India who have had their morphed images and deepfaked videos leaked online.
And even though the Government of India has reiterated that posting or sharing such images and videos online is punishable with three years in jail and a fine of ₹1 lakh, there are serious problems with enforcement.
First, getting the police to even register a complaint is a task in and of itself. And even when a complaint is registered, arrests are rarely made, so the uploader usually goes scot-free and is very rarely prosecuted.
Instead, authorities in India take the easy way out: they threaten social media platforms with penalties and jail time for their executives and get the posts taken down. This works at times, but it is a stop-gap at best. The perpetrator, more often than not, still walks free.
As for social media companies allowing such posts on their platforms, most have no way to filter out such content the moment it is posted. Content moderation is slow and expensive. Most companies are working on mechanisms to flag such content automatically, but these will largely depend on AI, which can itself be spoofed quite easily.
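To give a sense of one building block behind such flagging mechanisms, platforms often match uploads against a database of previously reported images using perceptual hashes, where visually similar images produce nearly identical fingerprints. The sketch below is purely illustrative, not any platform's actual system: a minimal "average hash" over a tiny grayscale image, showing both why re-uploads are easy to catch and why small adversarial changes can, in principle, be used to dodge such filters.

```python
# Illustrative sketch (not any platform's real pipeline): a perceptual
# "average hash" for matching re-uploads of known reported images.
# Input is a tiny 8x8 grayscale image given as a list of pixel rows.

def average_hash(pixels):
    """Return a 64-bit perceptual hash of an 8x8 grayscale image.

    Each bit is 1 if that pixel is brighter than the image's mean.
    Visually similar images yield hashes with a small Hamming distance.
    """
    flat = [p for row in pixels for p in row]
    mean = sum(flat) / len(flat)
    bits = 0
    for p in flat:
        bits = (bits << 1) | (1 if p > mean else 0)
    return bits

def hamming_distance(h1, h2):
    """Count differing bits between two hashes."""
    return bin(h1 ^ h2).count("1")

# A "reported" image and a slightly brightened re-upload of it.
original = [[(r * 8 + c) * 4 for c in range(8)] for r in range(8)]
reupload = [[min(255, v + 3) for v in row] for row in original]

d = hamming_distance(average_hash(original), average_hash(reupload))
# A small distance means "likely the same image"; unrelated images
# land far apart. Adversaries try to push the distance just past
# the match threshold, which is one way such filters get spoofed.
```

Real systems use far more robust fingerprints over larger images, but the trade-off is the same: the hash must tolerate harmless re-encoding while resisting deliberate perturbation.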
Need laws like Singapore, China
As horrific as it sounds, Rashmika Mandanna and Katrina Kaif's deepfakes aren't new in any sense. They are making so much noise now because AI has become a buzzword and trending topic, something people love to cry doom about.
What India needs is a set of laws that are actually enforceable. Unfortunately, we don’t have any deepfake-specific laws.
We can take cues from countries like Singapore and China, where people have been prosecuted for posting deepfakes. The Cyberspace Administration of China (CAC) has recently introduced comprehensive legislation designed to regulate the dissemination of deepfake content.
The legislation explicitly prohibits creating and disseminating deepfakes made without the consent of the individuals depicted, and requires that content produced using artificial intelligence carry specific identification markers.
In Singapore, the Protection from Online Falsehoods and Manipulation Act (POFMA) serves as a legal framework that forbids deepfake videos. Similarly, South Korea mandates that AI-generated content and manipulated media such as deepfakes be labelled as such on social media platforms.
Until such laws are introduced in India, we can rely on existing provisions in Sections 67 and 67A of the Information Technology Act, 2000, which may be invoked in analogous situations.
Notably, the parts of these sections dealing with the publication or transmission of obscene or sexually explicit material in electronic form can be applied to protect victims of deepfakes, including in cases of defamation and the dissemination of explicit content covered by the Act.
from Firstpost Tech Latest News https://ift.tt/5nvSN2A