Science & Technology | Posted on March 6, 2026

Deepfake Technology and Artificial Intelligence Regulation

Introduction

The Viral Video That Changed Everything

In January 2026, actress Rashmika Mandanna woke up to a nightmare: a video going viral on WhatsApp and Instagram showed her face seamlessly transferred onto someone else's body. It spread widely, and within 48 hours Parliament was arguing over it.

This wasn't the first time. No one is safe from deepfake technology, not even Katrina Kaif, Alia Bhatt, or prominent politicians. But this time, things were different: the government acted.

In February 2026, India dropped a bombshell. The Ministry of Electronics and Information Technology (MeitY) put forward draft amendments to the IT Rules 2021 requiring that all AI-generated content carry digital watermarks. Not just deepfakes: everything made by AI must now carry an invisible signature.

The big shift: instead of playing whack-a-mole with viral deepfakes, the rules aim to stop them at the source.

In a country with 560 million social media users, where WhatsApp forwards can change elections and video evidence can win court cases, the question is not whether to govern AI but how to do it without stifling innovation.

What is Deepfake Technology, and Why Does It Matter?

Understanding the Tech Behind the Chaos

Deepfakes typically employ deep learning models called Generative Adversarial Networks (GANs), in which a generator and a discriminator are trained against each other until the fabricated faces look natural. Given roughly 500 images of two people, such a system learns to map one face onto the other's body almost seamlessly.

The technology itself isn't bad. It gives power to:

  • Effects that make actors look younger in movies (Robert De Niro in "The Irishman")

  • Dubbing in multiple languages with excellent lip-sync

  • Medical training simulations

  • Assistive tools for people with speech difficulties

When Good Technology Becomes a Weapon

The grim statistic driving AI regulation: an estimated 96% of deepfakes are non-consensual pornography, overwhelmingly targeting women. Actresses, journalists, and ordinary women have had their faces placed on pornographic content without their consent.

Political manipulation: a fake video of a prime minister declaring war, released 48 hours before an election. By the time it is debunked, the votes are already cast.

Financial fraud: In March 2025, scammers in Hong Kong used deepfake video calls to impersonate a CFO, stealing $25 million.

Judicial crisis: How do courts work when deepfakes make video evidence unreliable?

Current Laws on Artificial Intelligence

India has no dedicated AI law. Instead, AI rules are spread across many acts. These laws struggle to keep up.

What Exists Today

IT Act 2000:

  • Section 66E (Privacy violation): up to 3 years in prison

  • Section 67 (Obscene content): up to 5 years in prison

  • Section 66D (Cheating by personation): up to 3 years in prison

Indian Penal Code:

  • Section 500 (Defamation): criminal offense, up to 2 years in prison

  • Section 509 (Insulting the modesty of a woman): 1 to 3 years in prison

  • Section 469 (Forgery to harm reputation): applies to fraudulent deepfakes

Why These Laws Are Failing

Speed issue: Court cases take three to five years. Deepfakes spread in three to five hours.

Attribution problem: hosting is cross-border, platforms are encrypted, and creators stay anonymous. Good luck catching them.

The enforcement gap, as of February 2026:

  • FIRs for deepfakes totaled more than 150.

  • No convictions were obtained.

  • There are still millions of deepfakes in circulation.

  • There is a law, but no enforcement.

IT Rules 2021 Amendments

The Digital Watermark Revolution

The main proposal: every AI tool operating in India must embed a digital watermark in its generated output. This covers:

  • AI-generated images (e.g., Midjourney, DALL-E)

  • AI-generated videos (e.g., Synthesia, DeepFaceLab)

  • Voice clones and synthesized audio

  • AI-written text (in certain cases)

How It Works:

  • Invisible to the naked eye.

  • Detectable by verification tools.

  • Tamper-resistant (cryptographic signatures).

  • Includes metadata (author information, timestamp, AI model).
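
To make those properties concrete, here is a minimal Python sketch of a tamper-evident watermark payload: metadata is bound to a hash of the content and signed with HMAC. All names here (the key, the model identifier, the helper functions) are hypothetical illustrations; real provenance standards such as C2PA embed asymmetric signatures in the media itself rather than using a sidecar payload like this.

```python
import hashlib
import hmac
import json

# Hypothetical symmetric signing key. A real scheme would use asymmetric
# keys so third-party verifiers cannot forge watermarks.
SIGNING_KEY = b"provider-secret-key"

def make_watermark(content: bytes, model: str, timestamp: str) -> dict:
    """Bind model metadata and a timestamp to a hash of the content."""
    payload = {
        "model": model,
        "timestamp": timestamp,
        "content_sha256": hashlib.sha256(content).hexdigest(),
    }
    msg = json.dumps(payload, sort_keys=True).encode()
    payload["signature"] = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    return payload

def verify_watermark(content: bytes, watermark: dict) -> bool:
    """Return True only if the signature is valid and the content unchanged."""
    claimed = watermark.get("signature", "")
    payload = {k: v for k, v in watermark.items() if k != "signature"}
    msg = json.dumps(payload, sort_keys=True).encode()
    expected = hmac.new(SIGNING_KEY, msg, hashlib.sha256).hexdigest()
    if not hmac.compare_digest(claimed, expected):
        return False  # metadata was forged or edited
    return payload["content_sha256"] == hashlib.sha256(content).hexdigest()

video = b"...AI-generated video bytes..."
wm = make_watermark(video, model="example-model-v1",
                    timestamp="2026-02-01T00:00:00Z")
print(verify_watermark(video, wm))         # True: genuine and untampered
print(verify_watermark(video + b"x", wm))  # False: content edited after signing
```

A verifier holding the key can confirm both the origin metadata and that the bytes were not altered after signing; editing either the content or the metadata invalidates the check.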

Platform and Creator Obligations

Social media platforms must:

  • Use AI detection systems

  • Label AI content that is not marked

  • Remove non-compliant content within 24 hours

  • Publish monthly transparency reports

AI tool providers must:

  • Use watermarking by default

  • Stop users from removing watermarks

  • Register with MeitY within 90 days

  • Appoint an Indian Grievance Officer

Penalties:

  • First violation: ₹50 lakh fine

  • Repeat violations: ₹2 crore fine or a ban in India

  • Individual creators: ₹10 lakh fine + 3 years in prison

The Paradigm Shift

Before: Create a deepfake → Upload → Go viral → Legal battle for years → Damage is done.

After: Create a deepfake → No watermark → Auto-detected → Blocked before it goes viral → Creator identified → Fast action.

This shift from reactive to proactive is a major change.
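
The proactive flow can be sketched as a toy moderation decision. All names here are hypothetical; a real platform pipeline would combine ML detectors, cryptographic watermark verification, and human review.

```python
from dataclasses import dataclass
from enum import Enum

class Action(Enum):
    ALLOW = "allow"
    LABEL = "label as AI-generated"
    REMOVE = "remove within 24 hours"

@dataclass
class Upload:
    is_ai_generated: bool      # verdict from a detection model (assumed)
    has_valid_watermark: bool  # result of a cryptographic watermark check

def moderate(upload: Upload) -> Action:
    """Decide what the platform does before the content can go viral."""
    if not upload.is_ai_generated:
        return Action.ALLOW    # ordinary content passes through
    if upload.has_valid_watermark:
        return Action.LABEL    # disclosed AI content stays up, labeled
    return Action.REMOVE       # undisclosed AI content is non-compliant

print(moderate(Upload(is_ai_generated=True, has_valid_watermark=False)))
# Action.REMOVE
```

The key design point is that unwatermarked AI content is blocked by default, rather than investigated after it spreads.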

How the World Handles AI Regulation

European Union: The Strictest Regime

The EU AI Act of 2024 sets the global gold standard:

  • Risk-based approach (high-risk AI faces the closest scrutiny)

  • Banned applications (social scoring, emotion recognition in the workplace)

  • Mandatory transparency for all AI-generated content

  • Penalties of up to €35 million or 7% of global annual turnover

Deepfake rules: all deepfakes must be clearly disclosed as synthetic. Platforms that knowingly host undisclosed deepfakes are held liable.

United States: Fragmented Freedom

No federal AI law. State-by-state approach:

  • California: deepfake pornography prohibited (SB 602)

  • Texas: political deepfakes banned within 30 days of an election

  • New York: Deepfake Disclosure Act pending

Why the hesitation? First Amendment concerns: US courts have treated speech, including synthetic speech, as protected.

China: The Control Model

Deep Synthesis Regulations (2023):

  • Visible watermarks mandatory on all deepfakes

  • Identity verification required to create AI content

  • Government review of AI content before publication

  • Algorithm access on demand

The outcome: a reported 70% drop in deepfake incidents, but at a huge cost to privacy.

India's Middle Path

India borrows from all three models:

  • EU-style comprehensiveness

  • US-style platform liability

  • China-style rapid enforcement

The open question: can India regulate effectively without going too far?

The Great Debate: For and Against Strict Artificial Intelligence Regulation

The Case FOR Regulation

Women's Safety Is Non-Negotiable

When 96% of deepfakes target women with non-consensual pornography, this is not free speech. It is a safety crisis.

Advocate Karuna Nundy says, "Every day we delay, hundreds of women wake up to see their faces in porn. This is digital sexual assault. We need strong laws now."

Democracy Depends on Truth

With over 900 million voters, deepfakes can do real harm: they can spread communal hatred, swing election results, and create national security crises.

Former Chief Election Commissioner SY Quraishi says, "By 2029, unchecked deepfakes could decide who governs India.

Tech Companies Will Not Self-Regulate

History shows that platforms favor engagement over safety. Left to themselves, they will not adopt real rules for AI.

We Regulate Everything Else

We require safety standards for cars, nutrition labels on food, and content ratings for films. Why should AI get a free pass?

The Case AGAINST Over-Regulation

Innovation Will Suffer

Mandatory watermarking adds costs. Indian AI startups cannot compete worldwide when rivals face no such burden.

Rohan Verma, DesiGPT CEO: "We are building India's ChatGPT. Now we hire lawyers before engineers. How can we compete?"

Watermarks Can Be Removed

Skilled bad actors will strip watermarks. The result: legitimate users bear extra burdens while criminals ignore the rules anyway. Regulation ends up targeting the law-abiding, not the lawbreakers.

Surveillance Concerns

Mandatory AI tool registration lets the government know who creates what. This can lead to:

  • Content monitoring

  • A chilling effect on speech

  • Mission creep: deepfakes today, political satire tomorrow

Digital rights activist Apar Gupta: "Every rule expands. Where does it stop?"

Definition Problems

What counts as "AI-generated"?

  • A photo edited in Photoshop using AI tools?

  • An essay written with ChatGPT's help?

  • AI-made memes?

Vague definitions invite overreach.

Real Cases That Shaped the Debate

The Politician Who Never Said It

Karnataka Elections: April 2024

A deepfake video showed the candidate making inflammatory remarks. It racked up 2 million shares in 48 hours, and the candidate lost by a slim margin. The video was debunked three days after polling; the creator was never caught.

Lesson: Current rules are too slow, causing damage before the truth is revealed.

The $25 Million CEO Scam

March 2025, Hong Kong

Fraudsters used a deepfake video call to impersonate the company's CFO, and an employee transferred $25 million. The money was never recovered.

Lesson: Deepfakes aren't just reputation attacks. They're financial weapons.

Rashmika's Nightmare

January 2026, India

A face was swapped into explicit material and went viral. It caused emotional trauma, disrupted Parliament, and sparked a national debate. 

Lesson: A celebrity victim can sometimes speed up policy reform.

What Should India Do? The Practical Path Forward

Short-Term Actions (2026)

  1. Tiered watermarking. Mandatory for financial communications, private photos, and political content; suggested for news and commercial advertisements; optional for art, education, and entertainment.

  2. Fast-track digital courts. Deepfake cases resolved in weeks rather than years, with tech-savvy judges and 24-hour takedown powers.

  3. Public awareness campaign. Most Indians do not know deepfakes exist. Use PSAs, school programs, and media literacy drives.

  4. National detection system. A free government tool to check whether content is AI-made, with deepfake verification through DigiLocker.

Medium-Term (2026-2027)

  1. Regulatory sandbox. Let businesses test AI products in a controlled environment before full compliance kicks in.

  2. Global collaboration. Deepfakes ignore borders; participate in international forums to help reconcile US and EU approaches.

  3. Victim Support Helpline. Legal support, mental health care, and reputation rehabilitation.

Long-Term (2028+)

  1. Comprehensive AI law. A dedicated statute that goes beyond band-aid amendments to the IT Rules.

  2. Counter-Deepfake Research Fund. Support IITs in developing tamper-proof watermarks, blockchain verification, and better detection tools.

  3. AI literacy curriculum. By 2030, every student should understand AI, deepfake detection, and digital citizenship.

FAQs

Q1: What is deepfake technology, and why is it a threat?

Deepfake technology uses AI to create realistic fake videos in which someone appears to say or do things they never did. The most serious problem: 96% of deepfakes are non-consensual sexual images of women. Deepfakes are also used to steal money, destroy reputations, and sway politics. Because the technology is so advanced, detection requires specialized tools.

Q2: Does India have any legislation about AI?

India has no dedicated AI law yet. The current framework rests on the IT Act 2000 (privacy and obscenity offenses) and the IPC (defamation and fraud). The February 2026 amendments to the IT Rules would require digital watermarks on AI content, but enforcement remains weak: more than 150 FIRs filed so far, zero convictions.

Q3: What are the changes to the IT Rules 2021?

The amendments require digital watermarks on all AI-generated output: every AI tool must embed invisible signatures. Platforms must detect and label AI content, take down non-compliant content within 24 hours, and publish transparency reports. Penalties range from ₹50 lakh to ₹2 crore in fines, up to a ban in India.

Q4: Will adding a watermark stop deepfakes?

Watermarking won't stop determined criminals, but it will prevent people from unknowingly spreading deepfakes. Most people who share deepfakes don't realize they're fake. Watermarks allow content to be checked before sharing, which, going by China's experience, can cut rapidly spreading misinformation by 60-70%. Criminalizing watermark removal adds legal consequences.

Q5: How do rules for AI work in the world?

The EU has the strictest rules (the 2024 AI Act, with fines of up to €35 million). The US has no federal law, only state laws (California, Texas). China requires visible watermarks, identity verification, and government review of content. India's proposed model combines elements of all three approaches.

Q6: Is it against the law to make deepfakes?

Yes, when used for unlawful purposes. Non-consensual pornography carries 3 to 5 years in prison, and political or financial deception via deepfakes is prohibited. Under the proposed amendments, creating deepfakes without watermarks would carry a ₹10 lakh fine and up to three years in prison. Limited exceptions exist for artistic or satirical content.

Q7: How can I tell if anything is a deepfake?

Look for unusual shadows, robotic-sounding voices, irregular lighting, unnatural blinking, lip-sync mismatches, and distorted face edges. Verification tools such as Intel's FakeCatcher or Microsoft's Video Authenticator can help. India's proposed government portal would offer free checks. If you're still unsure, verify with fact-checkers like Alt News or Boom.

Conclusion

India is at a crossroads. Do nothing and watch women suffer, democracy fall apart, and chaos take over. Or control everything, stifle new ideas, build surveillance systems, and kill new businesses. The middle path is to have smart, fair rules for AI that protect victims without hurting innovation.

The February 2026 IT Rules amendments are India's first real attempt. They are not perfect: watermarking is hard to implement, enforcement will strain our systems, and bad actors will find workarounds. But doing nothing is no longer an option.

Regulation is not just necessary but urgent when a woman's face can be weaponized without her consent, when fake videos can swing elections, and when a cloned voice can authorize fraudulent transactions. The question is not whether to regulate deepfake technology, but how to regulate it intelligently.

The changes made in February 2026 aren't the end goal; they're only the first step toward making AI help people instead of scare them. India has only just begun its journey toward responsible AI governance.
