India, May 10 -- Most people don't care about laws made in Europe. And who can blame them? They're long, they're boring, and they seem to have nothing to do with life here in India. But one new law from Brussels just landed. It's about Artificial Intelligence. And whether you're a college student learning to code, or someone still figuring out what AI even is, this one quietly changes the future of work for an entire generation.

It's called the EU AI Act. Simply put, it tells companies what kinds of AI are okay, what kinds are risky, and what rules they need to follow before launching anything new in the European market. The Act has already been in force for six months, and the next part of it starts to apply next month.

That may still sound like someone else's problem. But here's the twist. A lot of the AI used by companies in Europe is built in India. By Indian engineers. Working in Indian offices. Writing code that now needs to pass Europe's toughest rules.

Take companies like Infosys, Wipro and TCS. They build software that helps European banks decide who gets a loan. Or helps companies shortlist job applicants. Or powers systems for public services. These are high-risk uses. And under the new law, they must now be transparent, fair and safe. If the AI makes a decision, it must be able to explain why. If it gets something wrong, there has to be a way to fix it.

It sounds fair. But this is where the tension begins. Most Indian IT companies weren't built for this kind of scrutiny. They were built for speed and scale. Systems were designed to work, not to explain themselves. That approach now needs to change.

And it's not just theory. Some European clients have already slowed down deals. Legal teams in Europe don't yet know how to vet compliance from vendors based outside their region. So they wait.

What's striking is how many in India still think this is a side issue. As if it's a niche concern for lawyers or compliance officers. But that's a blind spot.
Because the global conversation around AI is shifting, and this time, the rules are being written elsewhere.

Sharad Sharma from the iSPIRT Foundation helps map the terrain. He explains that the world is approaching AI regulation from three very different angles.

Europe's approach is strict. Everything released into the market must be proven safe. Think of a car. Nobody lets it on the road unless it passes every crash test. That's how the EU sees AI. If a system could impact someone's life, it has to be tested thoroughly before launch.

America's approach is looser. Not every engine goes into a car. Some power lawn mowers. Others run kitchen mixers. You don't need to test everything the same way. Just focus on the high-stakes stuff. Let the rest evolve without friction.

India's approach, Sharma says, is different again. The focus is on building systems that are visible, not just pre-approved. Take GST. The invoices aren't checked upfront, but the entire trail is trackable. With AI, the view is that every link in the chain, especially when data involves children or sensitive use cases, must be visible to those who need to know. That's the accountability. Not a central stamp of approval.

It's a pragmatic stance. India is large. It's young. There are 23 million children aged 15 to 18 alone. The country needs systems that can adapt, not just systems that are locked down. So it treats AI as a complex adaptive system, one that must be watched in motion, not just certified at the gate.

But that won't always cut it globally. Indian companies working with Europe will now have to comply with the EU's more cautious, high-friction model. And that means serious changes in how code is written and deployed.

Across the board, the shift feels slow. Almost reluctant. As if AI is still being treated as an engineering problem. Not yet as a social one. For years, India built its tech reputation on low cost, high quality, and execution speed.
But in this new world, that's not enough. Trust is becoming the currency. And trust has to be earned.

There's an upside, though. If Indian companies meet these higher standards, they won't just survive. They'll lead. They'll become the gold standard for trustworthy AI. Not just good enough for Europe, but good enough to be the benchmark elsewhere.

It's already happened once before. India built the software factories that ran the world. Now it has a shot at building the ethical factories too. But only if it chooses to.