India, Oct. 25 -- My cheque bounced at a Mumbai bank a few weeks ago. Not for lack of funds, but because the signature on file no longer matched the one scrawled on paper. The old signature was careful and deliberate. The new one was a blur. It was an erosion of muscle memory. If a person cannot reliably reproduce their own mark any more, what makes policymakers believe machines can reliably tell who or what created something?

Which brings us to the most recent proposal from India's Ministry of Electronics and Information Technology. It insists that every piece of AI-generated content, whether video, image, or podcast, must carry a visible label covering at least ten per cent of its frame or duration. Nearly three out of four Indians have seen deepfakes in the past year. One in three has been targeted by a scam using one. Something must be done.

When I reached out to Shrinath V, a Google Startups mentor and technology policy analyst, to ask what he made of the proposal, he texted back: "Right intent, wrong plans. We're reacting to potential dangers with an old mindset." Here is a system reaching for yesterday's tools to confront tomorrow's threats.

And why might that be? Because detection is unreliable. AI detectors throw up false positives often, especially against writing by non-native English speakers. In a country with twenty-two official languages and hundreds of dialects, the problem multiplies. Most detection systems are trained on English data. Their accuracy drops sharply when they encounter Hindi, Tamil, Bengali, or even Indian English, where grammar and phrasing follow different rhythms. Human writing by Indians will be flagged as synthetic. Synthetic text tuned to Indian usage will often slip through. The bias is not merely technical. It is cultural.

Watermarks are easy to remove. Visible markers can be cropped or edited out with basic smartphone tools. Invisible metadata, while more resilient, degrades under compression, format conversion and deliberate tampering. The draft prohibits removal, but prohibition is not prevention.

Even if the technology worked as advertised, behaviour would not change. Studies show warning labels on synthetic content barely slow its spread. People share what they want to believe, not what is verified. A deepfake that confirms bias travels further than a real clip that corrects it. Regulation cannot legislate psychology.

Enforcement stumbles hardest. Our digital policy swings between overreach and inaction. WhatsApp alone hosts hundreds of millions of Indian users, many chatting in encrypted groups. How will the government verify labels inside messages it cannot read? The draft is silent.

Elsewhere, the story is not much better. China already requires AI content to carry watermarks and threatens violators with prison. The European Union's AI Act imposes fines running into tens of millions of euros. Spain treats unlabelled AI media as a serious offence. None of these regimes can stop a determined manipulator. The difference is that those governments have institutional muscle. India has intent but not capacity.

The draft also asks users to self-declare whether their uploads are AI-generated. The actors causing harm will not comply. Scammers, propagandists and political operatives will ignore it. Honest creators will comply, adding friction to legitimate creativity. Worse, labels can backfire. People may assume that if something is not marked as AI, it must be authentic.
In a country where WhatsApp forwards travel faster than facts, that false confidence can deepen the very problem the rule is meant to solve.

Look at what actually causes harm. Fraud. Defamation. Manipulation. Deepfake scams are already illegal. They need faster investigation and prosecution, not new bureaucracy. Non-consensual intimate images need quick takedowns and victim support, not universal watermarks. Election misinformation needs fact-checking capacity and digital literacy, not metadata that can be stripped away. Regulation should focus on outcomes, not optics.

Platforms will do the minimum to look compliant. Detection systems will keep missing what matters. When the next scandal erupts, the response will be predictable: tighten the screws further. Do more of what does not work.

When I pressed Shrinath on what a new mindset would look like, he explained, "We still don't know how to deal with this well. We are trying to do it with tools of the past. Labelling and banning. We need a more fluid approach."

A more fluid approach begins by admitting what we do not know, what we cannot control, and what we should not try to regulate before we understand the terrain. It means investing in digital literacy in regional languages. It means supporting voluntary detection tools that platforms deploy based on risk.

The bounced cheque happened because the hand forgot what mattered. Precision gave way to speed. Policy is doing the same. Right intent. Wrong plan. Old tools trying to make sense of new machines.

The question is not what to do next. It is whether we are ready to admit the answer might be: stop, think harder, try something else.