India, Dec. 20 -- The department for promotion of industry and internal trade (DPIIT)'s working paper on generative Artificial Intelligence (AI) and copyright is framed as a careful balancing act - between innovation and creator rights, access and compensation. But its starting premise merits a closer look. The paper assumes that the rapid and ubiquitous deployment of AI systems is not only inevitable, but inherently desirable. Such an approach could end up determining whose interests are treated as foundational and whose as negotiable.

Much of today's responsible AI discourse begins with a more basic question: Is AI actually necessary to transform every sector and ecosystem? While AI can certainly deliver efficiencies, the DPIIT paper does not engage with this threshold inquiry. Instead, it treats large-scale AI deployment as a given and defines regulation's role primarily as smoothing that path. This is a policy choice, not an objective reality. When regulation is premised on the inherent desirability of a technology, safeguards tend to emerge as afterthoughts rather than first principles.

This orientation is especially visible in the paper's treatment of copyright. The analysis relies heavily on economic logic: authors resist sharing their works because they are insufficiently compensated, and remuneration is therefore the solution. Compensation matters, but copyright law has never been only about money. Authors also enjoy moral rights, including the right to control how their works are used. Creators frequently withhold content not because of inadequate payment, but because they object to particular forms of reuse - especially in political, cultural, or deeply personal contexts. Reducing copyright to a royalty pipeline sidelines authorial autonomy altogether.

A second questionable assumption is technical: that more data automatically leads to better AI systems.
Its emphasis on maximising access to "all data" suggests that problems such as bias and hallucination are merely matters of scale. They are not. Data divorced from context or purpose - no matter how voluminous - can amplify error and reinforce bias. The experience with LLMs such as ChatGPT, trained on vast swathes of the internet, illustrates this clearly. By contrast, narrower, curated datasets can often deliver more reliable and contextual outputs. Copyright policy should, therefore, focus not just on data quantity, but on data quality and relevance.

The proposed Copyright Rights Collective for AI Training (CRCAT) also raises concerns. It leaves individual authors with little bargaining power, binding them to collectively negotiated rates without a meaningful opt-out. Unlike traditional collective licensing systems, creators here cannot withdraw and pursue independent negotiations. Rate-setting under a government-controlled process risks entrenching a single institutional perspective, while judicial review offers limited practical comfort given the cost and delay involved.

None of this is an argument against AI innovation. The real question is how India balances efficiency with legitimacy, and scale with consent, at a time when the downstream consequences of generative AI remain uncertain. Legitimacy in emerging technology ecosystems is built not only through centralisation, but through trust. A framework that accommodates consent, differentiation, and dialogue may move more slowly, but it is better equipped to adapt as AI capabilities and social expectations evolve.