New Delhi, Nov. 7 -- The government has extended the deadline for public consultation on draft rules for labelling AI-generated content by one week to November 13, responding to requests from industry stakeholders seeking more time to assess the proposals' technical and operational implications.

The Ministry of Electronics and Information Technology (Meity) released the draft amendments on October 22, requiring all artificial intelligence (AI) tools and major social media platforms to embed permanent watermarks on synthetic content.

"In response to the representations received from several stakeholders, the ministry has decided to extend the last date for receipt of feedback/comments till 13th November, 2025," the IT ministry said in a notice on Thursday.

A number of think tanks, industry bodies and at least two Big Tech companies have pressed for more time to formulate detailed responses to rules they say could impose significant operational burdens, according to documents seen by HT.

The draft framework requires companies offering AI generation tools to embed permanent watermarks or metadata identifiers on all synthetic content, with visible labels covering at least 10% of images and videos, and audio identifiers during the first 10% of playback.

Industry executives and legal experts have flagged several concerns about the draft framework, particularly Rule 3(3), which places labelling obligations on intermediaries offering AI tools. The provision could make platforms responsible for proving whether every piece of content is "authentic", a burden industry representatives described as heavy and unclear.

Companies warned the rule's language is too broad and could capture routine editing activities.

"Everyday editing performed through widely used tools such as Adobe Photoshop, Lightroom, Premiere, Canva, or smartphone photo editors would fall within the proposed definition because they 'modify or alter' information using a computer resource and are intended to look natural and realistic to the viewer," a large technology company said in its written inputs circulated among industry stakeholders and seen by HT.

Benign activities such as colour correction, adjusting brightness and contrast, or applying filters risk being classified as synthetically generated information, requiring labelling under the rules, the company contended.

A senior executive at another large technology company, speaking on condition of anonymity, told HT that compliance costs would be "phenomenal".

"It is something that will be incredibly expensive to do, both in the labelling obligations and the verification of self-disclosure by a user as they upload content to a platform," this person said, adding that implementing the changes across platforms might take a year.

Asked whether the government planned to support companies through these challenges, Meity secretary S Krishnan said: "We will look into all the feedback we receive and then take a call."

Industry representatives also highlighted technical limitations. AI detection tools remain unreliable, they said, and over-relying on them could lead to false positives. Platforms might label all content as "AI-made" to avoid penalties, defeating the purpose of the labelling requirement.

Rule 4(1)(a), which establishes a compliance framework for significant social media intermediaries, has drawn particular scrutiny.
A representative of a tech policy group, which compiled a list of grievances and concerns from Big Tech and other stakeholders after consultations, said the provision transforms platforms from passive processors into gatekeeping authorities, since "reasonable verification" is not clearly defined.

If platforms fail to verify or label AI content, even due to technical limitations, there is apprehension they might lose immunity from liability under Section 79 of the IT Act, as per these compiled comments reviewed by HT. They also cautioned that smaller platforms could struggle more with these requirements, potentially hurting competition and innovation.

Apar Gupta, advocate and founder director of the Internet Freedom Foundation (IFF), said: "The proposed labelling requirements in the IT Rules also potentially plug into a censorship power of takedowns for non-compliance. Hence, when many social media companies are finding them vague, they have the potential for arbitrary enforcement."

One of the executives cited earlier questioned whether Rule 2(1A), which adds "synthetically generated information" to the definition of "information", is necessary. They pointed out that the existing IT Rules, 2021, already address deepfake-related harms, including fake or misleading content, impersonation, privacy violations and non-consensual intimate material, under Rule 3(1)(b)...