Advent and Impact of Deepfakes in Modern Society


AUTHOR: MR. NISHANT DATTA (Adv.)
CO-AUTHOR: ABHISHEK CHOUBEY (Law Centre-II, Faculty Of Law, University Of Delhi)

Human evolution took millennia. It was a slow process, but once humans learned to think, they invented tools and technologies to make tasks easier. It is an outcome of these efforts that we have the computer, the internet, electricity, modern science, space exploration, genetic modification and more. One of the greatest inventions of the past century is the computer, a machine built to perform work digitally. Computer programs have evolved through generations, and the most recent generation is ‘artificial intelligence’.

Deepfakes are a leading concern of the 21st century. Merriam-Webster defines a ‘deepfake’ as an image or recording that has been convincingly altered and manipulated to misrepresent someone as doing or saying something that was not actually done or said. Deepfaking includes morphing the victim’s face or tuning his or her voice so that the person appears to say something (s)he never said, the end product being a fabrication that is ‘not reality.’ It is mostly done to tarnish a person’s image on a public platform, using generative AI tools to create the morphed content.


OBJECTIVE OF THIS ARTICLE

This article seeks to inform readers about a dark side of the internet: deepfakes and connected issues, their potential misuse and the laws cited for holding the accused guilty. The real-life cases and analogies mentioned here will also keep readers informed about cyber-security threats. Both Indian and global perspectives are addressed in the article.


COMMON MISUSE OF DEEPFAKES

Deepfake Pornography

Misuse of deepfakes is most visible in pornography. Such videos exploit the identities of women and cause them severe distress. They also tarnish the image of women who have never been involved in pornography at all.

In 2023, ‘Sensity’, an identity verification company, found that “96% of deepfakes are sexually explicit and feature women who did not consent to the creation of the content.”


Deepfake Safeguard Methods

Victims of deepfake pornography have several tools available to contain and remove content: securing removal through a court order; intellectual-property mechanisms such as a DMCA takedown; reporting violations of the hosting platform’s terms and conditions; and reporting the content to search engines for removal from search results.

Several major online platforms have taken steps to ban deepfake pornography. As of 2018, ‘Gfycat’, ‘Reddit’, ‘Twitter’ (now ‘X’), ‘Discord’ and others had all prohibited the uploading and sharing of deepfake pornographic content on their platforms. In September 2018, Google also added ‘involuntary synthetic pornographic imagery’ to its ban list, allowing individuals to request the removal of such content from search results.


Pre-existing laws applicable to deepfakes:
  • (A) Section 67A, IT Act, 2000 (Punishment for publishing or transmitting material containing sexually explicit acts, etc., in electronic form): Anyone who publishes or transmits, or causes to be published or transmitted, material depicting sexually explicit acts or conduct can be punished as follows:
    On first conviction, imprisonment for a term which may extend to five years and a fine which may extend to ten lakh rupees. On second or subsequent conviction, the punishment is harsher: imprisonment for a term which may extend to seven years and a fine which may extend to ten lakh rupees.
  • (B) Section 67, IT Act, 2000 (Punishment for publishing or transmitting obscene material in electronic form): The Information Technology (Amendment) Act, 2008 supplemented this section by inserting Section 67A, explained above, and Section 67B, which specifically criminalizes the transmission of material depicting children in sexually explicit acts, creating stricter penalties for child-pornography offences.
    Section 67 itself is punishable with imprisonment for a term which may extend to three years and a fine which may extend to five lakh rupees; on subsequent conviction, imprisonment for a term which may extend to five years and a fine which may extend to ten lakh rupees.

Deepfakes’ Potential for Causing Public Riots

Deepfake videos pose a major threat in the political arena. In the run-up to the 2024 Lok Sabha elections, there was serious concern that deepfake videos or audio clips portraying political candidates making inflammatory remarks against a specific community could be circulated with the intention of stoking communal tensions or provoking riots. Such actions could disrupt the election process and threaten social harmony and public order.


Applicable laws:
  • Section 153A, IPC (Promoting enmity between different groups on grounds of religion, race, place of birth, residence, etc.): Introduced by the Indian Penal Code (Amendment) Act, 1898, in response to a rise in violence caused by breaches of public tranquillity.
    This section prohibits promoting enmity between groups based on religion, race, place of birth, residence or language, and penalizes acts prejudicial to the maintenance of harmony. The 1969 amendment enhanced the punishment, made the offence cognizable and placed a stronger check on offences arising out of hatred.
  • Section 505, IPC (Statements conducing to public mischief): It covers statements intended to cause fear or alarm to the public, or to incite one class or community to commit an offence against another, with an aggravated form for offences committed in places of worship or in assemblies engaged in religious worship. The section provides for punishment with imprisonment, a fine, or both.

Deepfakes and violation of Right to Privacy

The use of any person's image, voice or video content without their consent for deepfake creation is a violation of their right to privacy, as guaranteed under Article 21 of the Constitution of India (held under KS Puttaswamy v. Union of India, 2017).


Leading examples:
  • 1. Two artists and an advertising company created a deepfake video of Facebook founder and CEO Mark Zuckerberg saying things he never said and uploaded it to Instagram, as reported by journalist Samantha Cole.
  • 2. Deepfake video of Rashmika Mandanna: An FIR was lodged over a deepfake video uploaded on social media in which the actress’s face was morphed onto another woman’s body. The incident sparked a huge debate over the adequacy of laws safeguarding people against deepfakes. In the age of AI, it is vital to safeguard the interests of individuals, especially women, who are more exposed to such threats and the extortion that often follows.
    Applicable laws cited: The FIR was registered under Sections 465 (punishment for forgery) and 469 (forgery intending to harm the reputation of a party) of the IPC, 1860 and Section 66C (identity theft) of the IT Act, 2000.
    Following this event, India’s IT Minister Rajeev Chandrasekhar emphasised India’s IT Rules, under which social media platforms must ensure that “no misinformation is posted by any user.”
  • 3. In January 2024, AI-generated sexually explicit images of Taylor Swift were posted on ‘X’ (formerly Twitter) and spread to other platforms such as Instagram, Reddit and Facebook. One tweet was viewed over 45 million times before it was removed. A report by 404 Media found that the images originated in a Telegram group whose members used tools such as Microsoft Designer to generate them. Taylor Swift contended that the images were created without her knowledge or consent.
  • 4. The 2024 Telegram deepfake scandal emerged in South Korea in August 2024, where many teachers and female students were victims of deepfake images created by users with AI tools. Journalist Ko Narin of ‘The Hankyoreh’ uncovered the images through Telegram chats.
  • 5. Scarlett Johansson took legal action against OpenAI for using her voice without permission in its AI models. She had declined to license her voice to OpenAI, but the company used a voice that sounded like hers in its AI model. OpenAI claimed that the voice belonged to another actress, but Johansson and many others maintained that it sounded like her.

PERSONALITY RIGHTS IN INDIA

Indian statutes contain no explicit recognition of personality rights. However, certain aspects of a person’s identity and work are protected against commercial misappropriation through the Copyright Act, 1957 and the Trade Marks Act, 1999. Personality rights in India have been recognized and gradually developed by the courts through reliance on common law and on jurisprudence from the United States. Even so, the case law remains ambiguous and inconsistent in parts.

Titan Industries Ltd. v. Ramkumar Jewellers (the Titan Industries case): The Hon’ble Delhi High Court found a violation of the publicity rights of Amitabh and Jaya Bachchan through unauthorized use of their images. Titan’s Tanishq (the plaintiff) had signed an ‘Agreement of Services’ with Amitabh and Jaya Bachchan for endorsement of its brand, which gave Titan exclusive ownership of the copyright in material produced under the agreement. When third parties used the advertisements, the plaintiff brought an action for violation of the Bachchans’ publicity rights.


DIFFERENCE BETWEEN DEEPFAKES/ GROK AI AND ADOBE PHOTOSHOPPED IMAGES

FEATURE | DEEPFAKES / GROK AI | ADOBE PHOTOSHOP IMAGES
Technology used | Uses AI and deep learning (GANs: Generative Adversarial Networks) to synthesize realistic videos or images. | Uses manual photo-editing tools to manipulate images by altering pixels, layers and effects.
Primary output | Primarily video and realistic face swaps, though it can also generate images. | Primarily still images with manual enhancements.
Level of automation | Highly automated; AI generates content with minimal human intervention. | Requires manual effort and artistic skill for precise editing.
Realism | Often more realistic and dynamic, especially in video. | Can be highly realistic, but depends on the skill of the editor.
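The GAN approach named in the first row of the table can be sketched numerically. Below is a minimal, purely illustrative Python example of the standard adversarial losses (the function names are our own, not any tool’s API): the discriminator is trained to score real content near 1 and fakes near 0, while the generator improves by making the discriminator mis-score its fakes.

```python
import numpy as np

def discriminator_loss(d_real, d_fake):
    """Binary cross-entropy: reward scoring real samples near 1, fakes near 0."""
    d_real, d_fake = np.asarray(d_real, float), np.asarray(d_fake, float)
    return float(-np.mean(np.log(d_real)) - np.mean(np.log(1.0 - d_fake)))

def generator_loss(d_fake):
    """Non-saturating generator loss: reward fakes the discriminator scores near 1."""
    return float(-np.mean(np.log(np.asarray(d_fake, float))))

# A confident, correct discriminator incurs a small loss ...
print(discriminator_loss(d_real=[0.99], d_fake=[0.01]))  # ~0.02
# ... while an unconvincing generator (fakes scored 0.01) incurs a large one.
print(generator_loss(d_fake=[0.01]))                     # ~4.6
```

Training alternates between minimizing these two losses; a deepfake generator has effectively “won” when the discriminator can no longer tell its output from real footage.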

*Not all AI tools can be used to create or modify deepfake images, videos or audio. AI tools like ‘ChatGPT’, ‘Google Gemini’ and ‘Perplexity’ have usage policies that prohibit the creation of deepfakes or morphed images.


SELF-REGULATION V. GOVERNMENT REGULATION ON EDITED AND PHOTOSHOPPED IMAGES

SELF-REGULATION (INDUSTRY LED APPROACH)-
  • PROS:
    • Encourages creative freedom by allowing brands, influencers and content creators to edit images without strict Government interference.
    • Organizations like the Advertising Standards Council of India (ASCI) or international bodies like ASA (UK) can create flexible, sector-specific rules or Industry-specific guidelines.
    • Self-regulation evolves quickly with technology, unlike government laws which can be slow and rigid.
    • Maintains business interests by avoiding excessive compliance costs and legal hurdles.

  • CONS:
    • Lack of accountability and reduced fear of legal consequences can lead to misleading edits that go unchecked.
    • Industry players may prioritize profits over ethics, leading to weak enforcement.
    • Consumers might not trust self-imposed guidelines as they can be easily bypassed.

GOVERNMENT REGULATION (LEGALLY BINDING RULES)-
  • PROS:
    • Government regulations ensure transparency in advertising and social media to prevent unrealistic beauty standards and misinformation.
    • Penalties deter excessive manipulation in ads and media.
    • Government regulations ensure that all companies follow a uniform legal framework.
    • Countries like France, Norway and the UK already mandate disclosure of retouched images to combat body dysmorphia and eating disorders.

  • CONS:
    • Government regulations are difficult to enforce as policing millions of edited images online is impractical.
    • Broad regulations might limit harmless creative expression.
    • Companies may find loopholes to bypass regulations that are forced upon them.

Adopting a HYBRID Model can act as a middle ground in the above debate:
  • Self-regulation sets industry standards, while Government oversight ensures compliance.
  • Mandatory disclosures like ‘This image has been altered’ balance transparency with creative freedom.
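As a purely illustrative sketch of how such a mandatory disclosure could travel with an image, the Python snippet below attaches a hypothetical disclosure record (the field names and helper functions are our own assumptions, not any mandated standard) together with a content hash, so a platform could later check that the labelled bytes were not swapped after labelling:

```python
import hashlib

def label_image(image_bytes: bytes, altered: bool, editor: str) -> dict:
    """Attach a hypothetical disclosure record to image content."""
    return {
        "disclosure": "This image has been altered" if altered else "Unaltered",
        "altered": altered,
        "editor": editor,
        # The hash binds the label to the exact bytes that were labelled.
        "sha256": hashlib.sha256(image_bytes).hexdigest(),
    }

def verify_label(image_bytes: bytes, record: dict) -> bool:
    """True only if the record still matches the image bytes."""
    return record["sha256"] == hashlib.sha256(image_bytes).hexdigest()

img = b"\x89PNG demo bytes"  # stand-in for real image data
rec = label_image(img, altered=True, editor="retouch-tool")
print(rec["disclosure"])              # This image has been altered
print(verify_label(img, rec))         # True
print(verify_label(img + b"x", rec))  # False: content changed after labelling
```

Real-world provenance schemes rely on cryptographically signed metadata and are far more elaborate, but the core idea of binding a disclosure to specific content is the same.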

KEY HIGHLIGHTS OF THE INDIA AI IMPACT SUMMIT 2026

  • Shri Ashwini Vaishnaw, Hon’ble Minister for Electronics & IT: “The deepfake problem is growing rapidly and requires stronger, more responsive regulation.”
  • Shri Narendra Modi, Hon’ble Prime Minister of India: Emphasised the need for clear labelling of AI-generated content, akin to disclosure standards in regulated sectors.
  • Mr. Sundar Pichai, Chief Executive Officer, Google: Warned that the digital divide must not evolve into an ‘AI divide’.
  • Mr. Sam Altman, Chief Executive Officer, OpenAI: Advocated for global governance frameworks to manage AI risks.
  • Policy experts at the Summit: Highlighted that deepfakes remain difficult to reliably detect, making regulation and safeguards essential.
  • Industry stakeholders: Noted that the primary risk lies in the speed and scale of deepfake dissemination, rather than mere creation.

EXISTING LAWS CITED FOR PROTECTION AGAINST DEEPFAKES

While existing Indian laws do not explicitly punish the use of deepfake technology, unethical practices involving it can be prosecuted under the following legal provisions:

  • Deepfakes containing sexually explicit images of women amount to a form of sexual harassment punishable u/s 67 and 67A of the IT Act and Section 354A, IPC.
  • A complaint can also be filed u/s 500 of the IPC for defamation, which is punishable with simple imprisonment for a term which may extend to two years, or fine, or both.
  • Deepfake images fostering hatred towards women of a particular religion may attract charges u/s 295A, 153A and 153B of the IPC, which provide for imprisonment of up to 3 years, or fine, or both.
  • Where the images malign a woman belonging to a Scheduled Caste or Scheduled Tribe, the offence may be punished with imprisonment extending up to 5 years and a fine.
  • Deepfake images that sexualize children are punishable under the Protection of Children from Sexual Offences (POCSO) Act, 2012, which provides for a jail sentence of 5 years and a fine.

CENSORSHIP & GLOBAL MEASURES

Can we censor a Computer Programme that is behind deepfake generation?
Governments cannot outright censor or ban a computer program solely because it is prone to misuse for creating deepfakes: such tools are often protected as free speech or treated as neutral technology, with regulation instead targeting harmful outputs or specific uses.


Legal Perspectives:
  • United States: Deepfake-generating programs receive First Amendment protection unless used for crimes such as fraud, extortion or obscenity. No federal law bans the tools themselves. Proposed bills like the ‘NO FAKES Act’ target non-consensual replicas, not the software providers. Cases against AI generators focus on infringement during training, not on prohibiting the programs.
  • India: India’s 2025 amendments to the IT Rules regulate platforms hosting deepfakes, mandating labels, takedowns within 36 hours and user declarations for AI content, but they do not ban creation tools. Misuse violates existing laws on defamation, impersonation or privacy under the DPDP Act, with fines of up to ₹250 crore for biometric data abuse.
  • EU and Global: The EU AI Act prohibits harmful manipulation systems and requires deepfake labelling from August 2025, classifying some AI as high-risk while allowing legitimate tools with safeguards. Developers implement filters and monitoring, but outright bans are rare due to innovation concerns.

Global Measures Against Deepfakes:
  • United States: Several states have enacted laws specifically targeting deepfakes, particularly concerning elections and non-consensual pornography, establishing penalties for misuse. The ‘Deepfake Accountability Act’ was introduced in the United States Congress in 2019 but died in 2020. It aimed to criminalize the production and distribution of digitally altered visual media that was not disclosed as such, and provided that making sexual, non-consensual altered media with the intent of humiliating or otherwise harming the subjects could be punished with a fine, imprisonment for up to 5 years, or both. A newer version of the Bill was introduced in 2021 but lapsed in 2023.
  • European Union: The EU is developing regulations focused on transparency and accountability for digital content, aiming to address misinformation and digital manipulation, including deepfakes. The Digital Services Act became applicable on 17 February 2024 to online platforms and search engines whose services are classified as Very Large Online Platforms or Very Large Online Search Engines. In parallel, the AI Act was adopted in the EU on 1 August 2024, making it the first global framework on the regulation and use of AI technology. The Act regulates providers of AI systems and entities using AI in a professional context. Classification is based on the risk of harm posed by non-exempt AI applications: unacceptable-risk systems are banned; high-risk systems must comply with security, transparency and quality obligations and undergo conformity assessments; limited-risk systems carry only transparency obligations; and minimal-risk systems are not regulated. Although the Act covers all types of AI across sectors, it provides exceptions for AI systems used for military, national security, research and non-professional purposes.
  • United Kingdom: The UK government has proposed measures under the Online Safety Bill to tackle harmful content, including deepfakes, holding platforms accountable for the spread of misleading media. In 2022, the Law Commission for England and Wales recommended reform to criminalize the sharing of deepfake pornography, and in 2023 the government announced amendments to the Online Safety Bill to that end. The Online Safety Act, 2023 amends the Sexual Offences Act, 2003 to criminalize sharing intimate images that show, or ‘appear to show’, another person without consent. In 2024, the Government announced that an offence criminalizing the production of deepfake pornographic images would be included in the Criminal Justice Bill, 2024.
  • Australia: Australia has introduced legislation targeting the malicious use of deepfakes, particularly concerning non-consensual pornography, establishing penalties for offenders.

In Canada, the penalty for publishing non-consensual intimate images is up to 5 years in prison, whereas in Malta it is a fine of up to €5,000.


TECHNICAL SAFEGUARDS AGAINST DEEPFAKES

  • ‘BioID’ Deepfake Detection Software: a widely used detector that employs sophisticated, AI-driven algorithms to verify the authenticity of visual content, offering a strong defence against deceptive media in identity-verification processes. The technology typically uses machine-learning models to analyse facial features, gestures and other elements in videos, discerning whether a face in an image or video is a deepfake (AI-generated or AI-manipulated) or an original photo. This capability helps prevent criminals from defeating digital identity verification by impersonating someone else with a deepfake.
  • Other safeguards:
    • Through flagging and detection
    • Through Policy Mandates
    • Through Immediate Removal of Content
    • Through Public Awareness Campaigns
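Detection systems of the kind described above rely on trained machine-learning models. As a loose, purely illustrative stand-in (not BioID’s actual method), the toy heuristic below flags a frame whose high-frequency detail is abnormally low, a crude proxy for the over-smoothed regions some generators produce; the threshold and function names are our own assumptions.

```python
import numpy as np

def high_freq_energy(gray: np.ndarray) -> float:
    """Mean absolute Laplacian response: a crude sharpness/detail measure."""
    g = gray.astype(float)
    lap = (-4 * g[1:-1, 1:-1] + g[:-2, 1:-1] + g[2:, 1:-1]
           + g[1:-1, :-2] + g[1:-1, 2:])
    return float(np.mean(np.abs(lap)))

def flag_suspicious(gray: np.ndarray, threshold: float = 1.0) -> bool:
    """Flag frames whose fine detail falls below an (assumed) threshold."""
    return high_freq_energy(gray) < threshold

rng = np.random.default_rng(0)
noisy = rng.integers(0, 256, (64, 64))   # detail-rich frame
smooth = np.full((64, 64), 128)          # unnaturally smooth frame
print(flag_suspicious(noisy))   # False: plenty of high-frequency detail
print(flag_suspicious(smooth))  # True: flagged as suspicious
```

Production detectors combine many such signals with learned classifiers over facial landmarks, blending artifacts and temporal consistency; a single hand-tuned threshold like this would be far too easy to evade on its own.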

CONCLUSION

The absence of robust legal frameworks on deepfakes opens the door to widespread harassment, hurting men and women alike in ways that scar reputations and erode trust. Crimes like these don’t discriminate by gender: they prey on anyone vulnerable, and any delay in strong regulation only emboldens scammers while innocent victims pay the price.

Take for example, firearms. Guns aren't banned outright because they have legitimate uses, but strict licensing, background checks and targeted penalties for misuse (not blanket liability on every dealer) keep society safe without killing innovation, just as we need for deepfakes.

The rise of deepfakes exposes a stark imbalance in legal protection. While celebrities and public figures can invoke recognised personality and publicity rights, as seen in Anil Kapoor v. Simply Life India & Ors., to restrain misuse of their identity, ordinary individuals are left to navigate a fragmented framework of privacy, defamation and IT-law remedies, such as those under the Information Technology Act, 2000. This creates an unjust hierarchy in which protection against identity exploitation depends on fame rather than dignity, underscoring the urgent need for a universal, status-neutral legal recognition of personality rights in the age of synthetic media.

A smart framework must pair tough prohibitions and penalties for abuse with rules that nurture beneficial tech development, letting India keep pace globally. While no one-size-fits-all fix works for AI, we can draw from powerhouses like the EU's AI Act and U.S. precedents (think PLCAA shielding gun makers from user crimes), adapting them to our IT Rules and context for a balanced, homegrown shield against harm. This way, we protect people without stifling progress.