AI must aid human thought, not become its replacement
India, July 24 -- Watching the recent resurgence of violence in Kashmir, I find myself grappling with questions about the role of technology, particularly Generative Artificial Intelligence (GenAI), in warfare. India is built upon the philosophy of live and let live, yet that doesn't mean passively accepting aggression. As someone deeply invested in responsibly applying AI in critical industries like financial services, aerospace, semiconductors, and manufacturing, I am acutely aware of the unsettling dual-use potential of the tools we develop: The same technology driving efficiency and innovation can also be weaponised for harm.
We stand at a critical juncture. GenAI is rapidly shifting from mere technological advancement to a profound geopolitical tool. The stark division between nations possessing advanced GenAI capabilities and those dependent on externally developed systems poses serious strategic risks. Predominantly shaped by the interests and biases of major AI-developing nations, primarily the US and China, these models inevitably propagate their creators' narratives, often undermining global objectivity.
Consider the inherent biases documented in AI models like OpenAI's GPT series or China's DeepSeek, which subtly yet powerfully reflect geopolitical views. Research indicates these models minimise criticism of their home nations, embedding biases that can exacerbate international tensions. China's AI approach, for instance, often reinforces national policy stances, inadvertently legitimising territorial disputes or delegitimising sovereign entities, and complicating fragile diplomatic relationships, notably in sensitive regions like Kashmir.
Historically, mutually assured destruction (MAD) relied on nuclear deterrence. Today's arms race, however, is digital and equally significant in its potential to reshape global stability. We must urgently reconsider this outdated framework. Instead of mutually assured destruction, I advocate for a new kind of MAD: mutual advancement through digitisation. This paradigm shifts the emphasis from destructive competition to collaborative development and technological self-reliance.
This evolved MAD requires nations, particularly technologically vulnerable developing countries, to establish independent, culturally informed AI stacks. Such autonomy would reflect local histories, cultures, and political nuances, making these nations less susceptible to external manipulation. Robust, culturally informed AI not only protects against misinformation but also fosters genuine global dialogue, contributing to a balanced, multipolar AI landscape.
At the core of geopolitical tensions lies a profound challenge of mutual understanding. The world's dominant AI models, primarily trained in English and Chinese, leave multilingual and culturally diverse nations like India, with its 22 official languages and hundreds of dialects, in a precarious position. A simplistic AI incapable of capturing nuanced linguistic subtleties risks generating misunderstandings with severe diplomatic repercussions. To prevent this, developing sophisticated, culturally aware AI models is paramount. Multilingual AI systems must leverage similarities among related languages such as Marathi and Gujarati or Tamil and Kannada to rapidly scale without losing depth or nuance. Such culturally adept systems, sensitive to idiomatic expressions and contextual subtleties, significantly enhance cross-cultural understanding, reducing the risk of conflict driven by miscommunication.
As GenAI becomes integrated into societal infrastructure and decision-making processes, it will inevitably reshape human roles. While automation holds tremendous promise for efficiency, delegating judgment, especially in life-and-death contexts like warfare, to AI systems raises profound concerns. I am reminded of the Cold War incident in 1983 when Soviet Lieutenant Colonel Stanislav Petrov trusted human intuition over technological alarms, averting nuclear disaster, a poignant reminder of why critical human judgment must never be relinquished entirely to machines.
My greatest fear remains starkly clear: A future where humans willingly delegate judgment and thought to algorithms. We should not accept this future. As innovators, technologists, and global citizens, we share a collective responsibility to demand and ensure that AI serves human wisdom rather than replaces it. Let's commit today: never allow technology to automate away our humanity.