Delhi police have arrested the individual responsible for creating a deepfake video featuring actor Rashmika Mandanna. The video, which went viral in November, triggered widespread calls for regulation of deepfake content on social media platforms.
The Deepfake Incident
The video originally featured British-Indian influencer Zara Patel, but deepfake technology was used to seamlessly morph Ms Patel's face into that of Rashmika Mandanna. The incident raised alarm and prompted Ms Mandanna to describe it as "extremely scary," underscoring how vulnerable individuals are to such misuse of technology.
Government Response and Advisory
In the aftermath of the viral video, the Centre issued an advisory to social media platforms outlining the legal provisions and potential penalties associated with the creation and circulation of deepfakes.
Government Review and Advisory Issuance
Union Minister Rajeev Chandrasekhar met with social media platforms in December to review their progress in tackling misinformation and deepfakes. He asserted that advisories would be issued to ensure 100% compliance by platforms.
Deepfake Technology Overview
Deepfakes are synthetic media in which artificial intelligence is used to manipulate visual and audio content, typically by superimposing one person's likeness onto another's. Since gaining prominence in 2017, the technology has evolved into a potential weapon for cybercriminals to spread disinformation and damage reputations.
Government’s Stance on Deepfakes
Union IT Minister Ashwini Vaishnaw recently highlighted the government's efforts to address deepfakes. Notices were sent to social media companies, urging them to identify and remove disinformation. The minister emphasized that the "safe harbour" clause would not apply to platforms that fail to take adequate steps to remove deepfakes.
Recent Deepfake Instances
Apart from Rashmika Mandanna, deepfake videos of various celebrities, including Katrina Kaif, Amitabh Bachchan, Priyanka Chopra, and Sachin Tendulkar, have circulated on the internet in recent weeks, raising concerns about the misuse of deepfake technology.