
Indian Govt proposes new rules to label and trace AI/deepfake content

The Indian government has proposed new rules to label deepfake content, aiming to strengthen transparency and accountability in the age of generative AI. Through draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021, the Ministry of Electronics and Information Technology (MeitY) sets out fresh obligations for platforms, AI tools and content creators.


These rules seek to make it easier for users to differentiate between human-made and AI-generated media, a step that reflects global efforts to regulate synthetic content.


What do the proposed new rules to label deepfake content contain?

Definition & scope

The draft rules define synthetically generated information as content “artificially or algorithmically created, generated, modified or altered using a computer resource in a manner that appears reasonably authentic or true”.
This covers audio, video, images and other media where users might be misled.

Labelling and visibility

Platforms and content creators must ensure that such synthetic media is clearly labelled or embedded with a unique metadata identifier.
For visual media, the label or marker must cover at least 10% of the surface area of the display. For audio content, the identifier must appear in the first 10% of the playback duration.
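
To make the 10% geometry concrete, here is a minimal, illustrative sketch of how a platform might stamp a banner covering at least 10% of an image's surface area using the Pillow library. This is an assumption for illustration only: the label wording, file names and banner placement are hypothetical and not taken from the draft rules.

```python
# Illustrative only: one way to satisfy a "label covers at least 10% of the
# surface area" requirement for images, using Pillow. The label wording and
# file names are hypothetical assumptions, not part of the draft rules.
from PIL import Image, ImageDraw

LABEL_TEXT = "SYNTHETICALLY GENERATED CONTENT"  # hypothetical wording

def stamp_synthetic_label(in_path: str, out_path: str) -> None:
    img = Image.open(in_path).convert("RGB")
    w, h = img.size

    # A full-width banner whose height is 10% of the image height covers
    # exactly 10% of the total surface area (w * 0.1h == 0.1 * w * h).
    banner_h = max(1, round(h * 0.10))

    draw = ImageDraw.Draw(img)
    draw.rectangle([(0, h - banner_h), (w, h)], fill=(0, 0, 0))
    # Default bitmap font; a real system would scale the text to the banner.
    draw.text((10, h - banner_h + banner_h // 3), LABEL_TEXT, fill=(255, 255, 255))

    img.save(out_path)

# Hypothetical usage:
# stamp_synthetic_label("generated.png", "generated_labelled.png")
```

The same proportional idea would apply to audio, where the identifier has to occur within the first 10% of the playback duration.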

Platform and user obligations

  • Significant social media intermediaries (SSMIs), i.e. platforms with more than 50 lakh (5 million) registered users, are required to obtain a user declaration indicating whether uploaded content is synthetically generated.
  • Platforms must deploy technical and verification measures to identify AI-generated content and ensure compliance (a simplified decision flow is sketched below).
  • Platforms cannot modify, suppress or remove the label or identifier once applied.
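
The combination of a user declaration and platform-side verification can be pictured as a simple decision flow. The sketch below is only one possible reading of that obligation; the field names and the detector stub are hypothetical and not prescribed by the draft.

```python
# A hedged sketch of the declaration-plus-verification flow described above.
# Field names and the detector stub are hypothetical, not from the draft rules.
from dataclasses import dataclass

@dataclass
class Upload:
    media_bytes: bytes
    declared_synthetic: bool  # the user's declaration at upload time

def looks_ai_generated(media: bytes) -> bool:
    # Placeholder for the platform's "technical and verification measures";
    # a real system would run a classifier or provenance check here.
    return False

def process_upload(upload: Upload) -> str:
    if upload.declared_synthetic or looks_ai_generated(upload.media_bytes):
        # Once applied, the label/identifier must not be modified,
        # suppressed or removed.
        return "publish_with_synthetic_label"
    return "publish_without_label"

# Hypothetical usage:
# decision = process_upload(Upload(media_bytes=b"...", declared_synthetic=True))
```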

Traceability & metadata

The draft mandates embedding non-removable metadata or identifiers that enable traceability of synthetic content.
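
As a rough illustration of what an embedded identifier could look like for a PNG image, the sketch below writes a unique ID into the file's metadata with Pillow. This is an assumption for illustration only: a plain text chunk like this is trivially removable, so meeting a "non-removable" standard would in practice require robust watermarking or a provenance framework such as C2PA, which this sketch does not implement.

```python
# Illustrative only: embedding a traceability identifier into PNG metadata.
# A text chunk is easy to strip, so this alone would not satisfy a
# "non-removable" requirement; robust watermarking is out of scope here.
import uuid
from PIL import Image
from PIL.PngImagePlugin import PngInfo

def embed_identifier(in_path: str, out_path: str) -> str:
    img = Image.open(in_path)
    identifier = str(uuid.uuid4())  # hypothetical identifier scheme
    meta = PngInfo()
    meta.add_text("synthetic-content", "true")
    meta.add_text("synthetic-content-id", identifier)
    img.save(out_path, pnginfo=meta)
    return identifier

# Hypothetical usage:
# content_id = embed_identifier("generated.png", "generated_tagged.png")
```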

Potential consequences

If platforms fail to comply, they risk loss of safe-harbour protections under the IT Act — meaning higher liability for third-party content.

Consultation period

Stakeholders (industry, civil society) are invited to give feedback on the draft until 6 November 2025.


Why these new rules to label deepfake content matter

  • Misinformation risk: AI-generated videos or audio can impersonate individuals, distort events or influence public opinion — in a country like India with nearly a billion internet users, such risks are heightened.
  • Election & social stability concerns: Synthetic media can be weaponised in the context of elections, communal tensions or fraud. The government explicitly mentions this.
  • Global alignment: Other countries and jurisdictions are also moving toward labelling AI-generated content (e.g., the EU and Spain). India’s effort places it among the early adopters of quantifiable visibility standards.
  • Innovation vs regulation: While the focus is safeguarding users, regulators also emphasise not stifling AI innovation—balancing oversight with growth.

Challenges and criticism of the new rules to label deepfake content

  • Technical feasibility: Ensuring that all synthetic content is detected, labelled and traceable through metadata may be technically and operationally challenging for platforms and creators.
  • Effectiveness of labels: Some academic research suggests that simply labelling content as “AI-generated” may not reduce its persuasive power or probability of being shared.
  • Definitional ambiguity: What exactly counts as “synthetically generated” can be blurry—e.g., minor edits, filters, or human-AI hybrid content.
  • Enforcement and liability: Platforms may need to make significant investments to comply; the burden could be high especially for smaller firms. The timing and penalty structures are still to be clarified.
  • Free speech & creativity: Some worry that over-broad regulation could inadvertently chill creative uses of AI or hamper legitimate transformations.

What this means for industry, creators and users

  • For AI developers and tool-makers: If your tools generate or modify media, you may need to embed identifiers or label the output. Platforms that host such media will need systems to check for compliance.
  • For social media platforms & intermediaries: Significant platforms will face new due-diligence obligations — gathering user declarations, deploying detection mechanisms, labelling output, and embedding metadata.
  • For content creators: When uploading any content that may be AI-generated or edited via AI, you’ll likely need to declare that fact and ensure proper markings.
  • For users and the public: The rules aim to increase transparency — you should see markers indicating synthetic content. But you still need to stay vigilant — labels are no guarantee of truth.
  • For regulators & policymakers: Monitoring the implementation, clarifying enforcement mechanisms and defining penalties will be crucial next steps.

Next steps and implementation timeline

  • Stakeholder feedback – until 6 November 2025.
  • Finalisation of the draft and formal release of amendments to the IT Rules, 2021.
  • Platforms will need to adapt technical systems, metadata standards and labelling practices.
  • Monitoring & enforcement mechanisms will need to be clarified, including how failure to label will be penalised or safe-harbour lost.
  • Industry and civil society may challenge aspects of the rules (e.g., technical feasibility, free-speech implications).

Conclusion

India’s proposal for new rules to label deepfake content marks a significant regulatory step in the governance of synthetic media. These measures emphasise transparency, traceability and platform accountability, with clearly defined labelling requirements for AI-generated content. While the goals of countering misinformation and protecting users are laudable, the success of the rules will depend on effective implementation, fair enforcement and a workable balance with innovation. The policy shift is likely to have wide-ranging impacts for creators, platforms and users alike.
