New Delhi, Nov. 8 -- A grand plan to label all AI-generated content on social media has run into resistance, with Big Tech, policy think tanks and industry body Nasscom calling the government's proposals expensive and impractical, and suggesting that the new regulations focus instead on preventing harm from AI.

Social media platforms said compliance would require enormous scrutiny and escalate operating costs. In their submissions to the ministry of electronics and IT (Meity), they called the proposals too broad-based and requested that definitions of disinformation and deepfakes be added.

The rules will "make it impossible for content to be posted, shared and consumed the way it is on social media in India today," said an executive close to Meta Platforms, which runs Facebook and Instagram. "Tagging every piece of content with watermarks and metadata, which too are not fail-proof processes and can be meddled with, is a massive commercial exercise. Through these rules, India will not only risk becoming the most expensive nation to run social media platforms in; it will also be near-impossible to make social media platforms work the way they do currently," the executive said on the condition of anonymity.

Meity released draft rules on 22 October that would require social media platforms to watermark and tag algorithmically modified content. It also proposed to hold these platforms, as well as AI generators, accountable for any unidentified deepfakes. Given the volume of responses, Meity has extended the last date for filing feedback to 13 November.

A senior Nasscom official said the right approach would be to assess AI content based on its impact and harm, rather than on the process through which it is generated. Nasscom is set to submit its official response.

Google, which has aired its views in industry conversations, is yet to make a formal submission, an executive aware of the company's plans said. The executive also pointed to YouTube's new content policy, which outlines what qualifies as significantly modified synthetic content. YouTube has also expanded early-stage trials of its AI likeness detection tool to curb deepfakes.

The first executive, the one close to Meta, added that the company has recommended in its submission a voluntary, reporting-based assessment of deepfakes that lead to misinformation. "We've always seen users actively report content that could be misleading. This is the right way for us to regulate AI, instead of including all content in the same bucket," the executive added.

Questions sent to Meta, Google and Meity remained unanswered.

The industry believes the rules seek to simplify "a largely complex subject", said Deepro Guha, associate director for public policy at consultancy firm The Quantum Hub. "Companies do have a point in stating that algorithmically modified content could include absolutely anything, and that makes regulations difficult to implement. The definition is too wide," Guha said.

However, others said that while some modification may be needed, the regulation is a necessity today. "The ideal form of regulation would be for the Centre to lay down outcome-based rules, rather than focus on the mechanism and process. For instance, AI regulations should focus on content that gets reported as disinformation, and offer stringent steps for platforms to actively scan for such content and curb it," said Anushree Verma, senior director analyst at Gartner.