India Mandates AI Content Labelling: New IT Rules to Combat Deepfakes and Protect Digital Users
- From: India News Bull

The IT Ministry has introduced draft amendments to IT rules that would require AI-generated content to be clearly labelled, enabling users to distinguish between synthetic and authentic information. This initiative comes as part of broader efforts to address growing concerns about deepfakes and synthetic content.
In New Delhi, officials have recognized the increasing threats posed by generative AI technologies, which can be misused to spread misinformation, manipulate electoral processes, or impersonate individuals. These concerns have prompted action following extensive public discussions and parliamentary deliberations.
The proposed amendments to the IT Rules, 2021 focus on strengthening due diligence requirements for intermediaries, particularly significant social media intermediaries (SSMIs), i.e. platforms with 50 lakh (five million) or more registered users, such as Meta's platforms. The rules would also apply to platforms that enable the creation or modification of synthetically generated content.
Under the new provisions, major social media platforms would be required to obtain declarations from users regarding whether uploaded content is synthetically generated. These platforms must implement reasonable technical measures to verify such declarations and ensure appropriate labelling of synthetic information.
The draft notification introduces a clear definition of 'synthetically generated information' as content that is artificially created, generated, modified, or altered using computer resources in a manner that appears reasonably authentic or true.
Visibility standards in the proposed amendments require synthetic content to be prominently marked, with visual indicators covering at least 10 percent of the display area or, for audio content, notification during the initial 10 percent of playback duration.
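The thresholds above reduce to simple arithmetic. The sketch below illustrates them; the function names and the pixel-area interpretation of "display area" are illustrative assumptions, not terminology from the draft rules.

```python
def min_visual_label_area(width_px: int, height_px: int) -> int:
    """Smallest label area (in pixels) covering at least 10% of the display,
    per the draft's visual-indicator threshold."""
    return (width_px * height_px) // 10  # 10 percent of the total frame area

def audio_notice_window_seconds(duration_s: float) -> float:
    """Length of the initial playback window (first 10% of the clip) during
    which the draft requires the audio notification to play."""
    return duration_s * 0.10

# Example: a 1920x1080 frame needs a 207,360 px label area;
# a 60-second clip must carry the notice within its first 6 seconds.
print(min_visual_label_area(1920, 1080))   # 207360
print(audio_notice_window_seconds(60.0))   # 6.0
```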
For SSMIs, the draft also sets out enhanced verification and declaration requirements: reasonable technical measures to confirm whether uploaded content is synthetically generated and to label it accordingly.
These amendments aim to promote user awareness, enhance traceability, and ensure accountability while maintaining an environment conducive to innovation in AI technologies. The IT Ministry is accepting feedback and comments on the draft amendment until November 6, 2025.
An explanatory note on the IT Ministry website highlights recent incidents where deepfake audio, videos, and synthetic media have gone viral on social platforms, demonstrating the potential of generative AI to create convincing falsehoods that can be weaponized for various harmful purposes.
Both global and domestic policymakers have expressed increasing concern about fabricated or synthetic images, videos, and audio clips that are virtually indistinguishable from authentic content. These deepfakes are being used to produce non-consensual intimate imagery, mislead the public with fabricated content, commit fraud, or impersonate individuals for financial gain.
The IT Rule changes provide statutory protection to intermediaries that remove or disable access to synthetically generated information based on reasonable efforts or user grievances.
Additionally, the amendments mandate that intermediaries offering computer resources for creating or modifying synthetic content must ensure such information is labelled or embedded with permanent, unique metadata or an identifier.
The rules specify that these identifiers must be visibly displayed or made audible in a prominent manner, covering at least 10 percent of the visual display area or playing during the initial 10 percent of audio content duration. These measures are designed to enable immediate identification of synthetically-generated information.
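One way such a permanent identifier could be generated and attached is sketched below. The record layout, field names, and the SHA-256 derivation are purely illustrative assumptions; the draft rules do not prescribe any particular format or algorithm.

```python
import hashlib

def embed_synthetic_label(content_bytes: bytes, tool_name: str) -> dict:
    """Return a metadata record marking content as synthetically generated.

    A hash over the content plus the generating tool's name serves as a
    unique identifier here; this scheme is hypothetical, not from the draft.
    """
    identifier = hashlib.sha256(content_bytes + tool_name.encode()).hexdigest()
    return {
        "synthetically_generated": True,
        "generator": tool_name,
        "identifier": identifier,
    }

record = embed_synthetic_label(b"<rendered frame bytes>", "example-gen-tool")
print(record["synthetically_generated"])  # True
print(len(record["identifier"]))          # 64 (SHA-256 hex digest)
```

Because the identifier is derived from the content itself, stripping or altering it (which the rules prohibit intermediaries from doing) would be detectable by recomputing the hash.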
Furthermore, intermediaries are prohibited from modifying, suppressing, or removing such labels or identifiers, ensuring transparency for users encountering AI-generated content.
Source: https://www.ndtv.com/india-news/centre-proposes-labelling-ai-content-to-protect-users-from-deepfakes-9498105