Post by: Saif Nasser
The Indian government has proposed new rules that would require artificial intelligence (AI) and social media companies to clearly label all AI-generated content. Officials say the move is needed to protect people from deepfakes and false information spreading quickly online.
With nearly one billion internet users, India has one of the world’s largest digital populations. But in such a diverse country—home to many religions, languages, and cultures—fake news or manipulated videos can easily create social tension or even violence. Recent cases of AI deepfake videos have worried government officials, especially during election periods.
According to India’s Information Technology Ministry, the new rules will require companies like Google, Meta, X (formerly Twitter), and OpenAI to mark any AI-generated content clearly. The rule states that visual or video content made by AI must carry a label covering at least 10% of the screen area. For AI-generated audio clips, the label must appear during the first 10% of the clip’s duration.
The goal, the ministry said, is to make it immediately clear to viewers when something has been created or changed using AI tools. This will “ensure visible labelling, metadata traceability, and transparency for all public-facing AI-generated media.” The ministry has invited public and industry comments on the draft rules until November 6 before final approval.
Under the proposal, social media platforms will also need to ask users to declare whether their uploaded content has been created with AI. The companies must also use technical tools to detect and stop the spread of false or misleading AI-generated content. These steps will increase the responsibility of large technology companies operating in India. The government believes that these companies must do more to prevent harmful or misleading material from reaching users.
The Indian government said it was becoming increasingly worried about how AI tools could be misused. It warned that AI could be used “to cause user harm, spread misinformation, manipulate elections, or impersonate individuals.” Globally, countries like the European Union and China have already started introducing similar laws. India’s move shows how governments around the world are trying to balance innovation and safety as AI technology becomes more powerful.
Experts say India’s proposed rule is one of the first in the world to set a quantifiable visibility standard—meaning that it clearly defines how big and visible AI labels must be. “This is among the first global attempts to create measurable visibility for AI warnings,” said Dhruv Garg, a public policy expert and founding partner of the Indian Governance and Policy Project. He added that if the rule is implemented, AI companies will need to build automated labelling systems that can identify and tag AI-generated content as soon as it is created.
In recent months, Indian courts have been hearing several lawsuits involving deepfakes. Bollywood actors Abhishek Bachchan and Aishwarya Rai Bachchan filed a case in New Delhi to stop the creation and sharing of fake AI videos that misuse their images and voices. They also challenged YouTube’s policy on using publicly available videos to train AI models. Deepfakes—videos or photos created using AI that make people appear to say or do things they never did—have become a growing global problem. They have been used to spread political propaganda, defame celebrities, and create fake news stories.
India has been pushing for more responsible AI development. The government believes AI can support healthcare, education, and agriculture but says it must not be used to deceive people. Officials said these new labelling requirements will help create a safer online environment, where users can easily identify what is real and what is AI-made. “We want to promote technology that helps people, not harms them,” an IT ministry official said.
Global tech giants have not yet commented on the proposal. But the new rules, once approved, could set an example for other countries trying to regulate AI content responsibly. For now, India’s move marks one of the strongest efforts yet to fight misinformation and protect digital trust in the age of artificial intelligence.