New Delhi: Platforms and intermediaries equipped with artificial intelligence (AI) and generative AI (GenAI) capabilities, such as Google and OpenAI, must now obtain government approval before offering users services that enable the creation of deepfakes, according to an advisory by the ministry of electronics and information technology.
These platforms are required to label themselves as ‘under testing’ and must also secure explicit consent from users, making them aware of the potential errors inherent in the technology, said minister of state for electronics and information technology Rajeev Chandrasekhar.
Issued late Friday, the ministry’s advisory targets all platforms and intermediaries that utilize large language models (LLMs) and foundational models.
It mandates explicit government permission for the use of any “under-testing/unreliable artificial intelligence model(s)/LLM/Generative AI, software(s), or algorithm(s)” and insists on clear labelling of the potential unreliability of the generated outputs. A ‘consent popup’ mechanism has also been recommended to inform users explicitly about the technology’s fallibility.
Additionally, these services must not produce content that compromises the integrity of the electoral process or violates Indian law, highlighting concerns over misinformation and deepfakes influencing election outcomes.
“(W)e have added the word electoral process only in the context of misinformation,” the minister noted.
This advisory, the first such action by the Indian government against GenAI platforms, comes ahead of the general elections and follows recent controversies, including an instance where Google’s AI platform Gemini came under fire for answers generated by the platform on a question about Prime Minister Modi.
Instances of ‘hallucinations’ by GenAI models have also come to the fore, including on Ola’s beta GenAI platform Krutrim.
“In a lot of ways, this advisory is signalling the framework of the future of our regulatory framework as a legislative framework that is aimed at creating a safe and trusted Internet,” the minister said.
The move introduces a ‘sandbox’ approach, requiring platforms to comply and take responsibility for any unlawful content generated on their platforms.
“It’s not a constant legal obligation yet, but it’s certainly something that is an advisory that they should figure out a way of embedding metadata or some sort of permanent unique identifier for everything that is synthetically created by their platform,” he said, adding that the advisory was a measure to prevent prosecution against the platforms, if they were to comply, but should not be read as a safe harbour.
Once the platforms seek permission from the government, they may be asked to demo their service or put it through a stress test. This, Chandrasekhar said, will bring a lot more ‘rigour’ to a service before it reaches consumers, rather than it being pushed to users straight from the lab. Intermediaries have been given 15 days to submit a compliance report.
Published: 02 Mar 2024, 04:55 PM IST