The Indian government may grant social media platforms additional technical preparation time before enforcing new IT rules requiring them to detect and label AI-generated content. The amended rules mandate that platforms make users declare synthetic content and deploy automated tools to verify these declarations, with a focus on significant social media intermediaries.
Major global tech companies are already part of coalitions working on content authenticity standards, but their systems need tweaking to comply with India's specific regulations. The rules also shorten the deadline for removing unlawful content, including AI deepfakes, from 36 hours to three hours, which platforms have called operationally difficult.
The regulations extend beyond social media to encompass a wide range of AI software and services, requiring them to embed disclaimers on AI-generated content.
The government may give social media platforms time to prepare technically before enforcing the new information technology regulations that require them to detect and label artificial intelligence (AI)-generated content submitted by users, officials told ET.
In effect, these platforms will have to incorporate audit-ready technical measures to comply with the new norms, they said.
The amended Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules require social media platforms to make users declare whether posted content is "synthetically generated information" and to deploy automated tools to verify such declarations. The rules were amended last month.
The officials said these companies must ensure their technology works and be ready to prove its effectiveness to the government whenever requested.
The new rules, notified on February 10, came into force 10 days later, prompting platforms and industry body Nasscom to say the compliance deadline was untenable.
"Major platforms and tech companies are already working on this issue globally, and their systems are already being deployed here. However, these systems will have to be tweaked to work in line with the latest amendments which have created a comprehensive regime to weed out deepfake and harmful AI content, with detailed provisions for reporting these," an official said.
Google, Meta, Microsoft, Amazon, Intel, OpenAI and other tech giants are already part of global efforts to validate the authenticity of digital data.
These firms are steering committee members of the Coalition for Content Provenance and Authenticity (C2PA), which provides Content Credentials, an open technical standard for establishing the origin and edit history of digital content.
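For illustration only, the sketch below shows one way a platform could cross-check a user's "synthetically generated information" declaration against an embedded Content Credentials manifest. It assumes the open-source c2patool CLI is installed and that AI generation is recorded in the manifest via the IPTC "trainedAlgorithmicMedia" digital source type; the helper names and field checks are assumptions made for this example, not a prescribed compliance implementation or any platform's actual method.

    # Illustrative sketch: cross-check a user's "synthetic content" declaration
    # against C2PA Content Credentials embedded in an uploaded file.
    # Assumes the open-source `c2patool` CLI is installed; the manifest fields
    # inspected below are assumptions about how AI generation may be recorded.
    import json
    import subprocess

    # IPTC digital source type value commonly used to mark AI-generated media
    TRAINED_ALGORITHMIC_MEDIA = "trainedAlgorithmicMedia"

    def read_content_credentials(path: str) -> dict | None:
        """Return the file's C2PA manifest store as a dict, or None if absent."""
        result = subprocess.run(
            ["c2patool", path],  # default output is the manifest store as JSON
            capture_output=True, text=True,
        )
        if result.returncode != 0 or not result.stdout.strip():
            return None  # no Content Credentials found in the file
        return json.loads(result.stdout)

    def looks_ai_generated(manifest_store: dict) -> bool:
        """Heuristic: scan manifest assertions for an AI-generation marker."""
        for manifest in manifest_store.get("manifests", {}).values():
            for assertion in manifest.get("assertions", []):
                if TRAINED_ALGORITHMIC_MEDIA in json.dumps(assertion):
                    return True
        return False

    def verify_declaration(path: str, user_declared_synthetic: bool) -> str:
        """Compare the user's declaration with embedded provenance data."""
        store = read_content_credentials(path)
        if store is None:
            return "no provenance data; rely on declaration and other detectors"
        if looks_ai_generated(store) and not user_declared_synthetic:
            return "mismatch: provenance indicates AI generation, flag for labelling"
        return "declaration consistent with provenance data"

    if __name__ == "__main__":
        print(verify_declaration("upload.jpg", user_declared_synthetic=False))

In practice, platforms would likely combine such provenance checks with watermark detection and classifier-based tools, since many files carry no Content Credentials at all.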
Any move by the government to provide time for the new systems to be put in place will also allow all technology intermediaries, beyond social media platforms, to implement similar systems to detect AI content, he added.
While the new AI provisions in the IT rules focus only on significant social media intermediaries (SSMIs), meaning those with 5 million or more registered users in India, all technology intermediaries have to be part of the efforts, officials had said earlier.
All technology firms would have to embed the required disclaimer in AI-generated content, whether they are social media intermediaries or simply provide software or services that leverage AI. This opens up a long list of popular AI-based software, apps and services to scrutiny, including OpenAI's ChatGPT, DALL-E and Sora; Google's Gemini, NotebookLM and Google Cloud; Microsoft's Copilot, Office 365 and Azure; and Meta AI.
At a meeting with the ministry of electronics and information technology last month, some intermediaries had complained about the operational difficulty of the shorter deadlines, especially the requirement to remove unlawful content, including AI-generated material, within three hours of it being posted, down from the earlier 36 hours, officials had told ET.