
NEW DELHI:

The government is likely to seek robust technology and AI-based solutions from social media platforms and significant intermediaries to proactively scan for, and flag, deepfake content on their platforms, two senior officials said, seeking anonymity.

Platforms will be discussing voluntary mechanisms to identify and block deepfake content that may be potentially harmful during high-level meetings with officials from the ministry of electronics and information technology on Thursday and Friday.

“The issue of deepfake content, who put it up, and on which platform, and then its virality, all elements will have to be addressed. This has to be done while allowing space for innovation. While there have been instances of deepfake content being flagged and taken down by the platform, there has to be onus on intermediaries to do this on their own,” said one of the officials.

Meta, which runs the social media platforms Facebook and Instagram, along with Google, YouTube, Telegram, Snap and other significant intermediaries, as well as technology experts and other stakeholders, will be part of the consultations with the minister of information technology and telecom, Ashwini Vaishnaw, on Thursday, the officials said. Minister of state for IT and electronics Rajeev Chandrasekhar will hold discussions with stakeholders on Friday, they said.

The second official said the absence of rules or regulations governing deepfakes in the IT intermediary rules has been a concern of late.

“So far, any policy linked with how deepfakes and artificial intelligence (AI)-manipulated content are regulated on social media platforms is based on voluntary reporting of content. After a recent surge of deepfake content that has showcased how manipulated content can lead to a range of issues, the IT ministry may look at formulating rules to govern a proactive approach to scanning for and spotting such content using AI,” said an industry executive, seeking anonymity.

Queries to Meta, Google and MeitY did not elicit any response till press time.

A senior lawyer, who consults leading social media intermediaries on technology policy, requesting anonymity, said the new rules are likely to be introduced as an amendment to the existing IT rules.

Chandrasekhar had said regulations addressing the harmful impact of AI will be taken up in the upcoming Digital India Act.

The government’s push to regulate deepfakes should not pose a significant challenge for most firms, which will rely on AI tools to scan for misinformation, the lawyer said. “However, it is important that the regulations leave scope for understanding where technologies may fail, and also offer impetus on the users to report misinformation more extensively,” he added.

Concerns around deepfake videos have escalated after multiple high-profile public figures were targeted in such videos. The individuals included Prime Minister Narendra Modi and actor Katrina Kaif. Since then, ministry officials have raised concerns around deepfakes and the need to regulate them.

Prime Minister Narendra Modi had also raised the issue of deepfakes in his address to the Leaders of G20 at the virtual summit on Wednesday. “We must understand dangers posed by deepfakes to society and individuals.”

On Tuesday, Chandrasekhar said the Centre is considering a law to regulate deepfakes and misinformation, details of which will be revealed this Friday.

On 1 November, India had joined 27 nations to be a part of the first cross-border policy on AI. Addressing the media two days after the Bletchley Declaration was signed, Chandrasekhar said: “We spoke about the need to have safe and trusted AI platforms, and distinguishing them from unsafe, untrusted platforms. It represents a massive opportunity, and should not be demonized or regulated out of existence and innovation. We spoke about who will determine safety and trust, and discussed four harms of AI, such as privacy impact on individuals, workforce disruption, non-criminal harm and weaponization and criminalization of AI.”

India’s AI regulations are anticipated to advance significantly at the Global Partnership on Artificial Intelligence 2023 summit in December.
