New Delhi, Feb. 6 --

You've likely encountered this moment: crafting a college application, updating a professional bio, or composing a quick message on Teams, and pausing to think, "Is there a better way to phrase this? Should it sound more polished or professional?" That hesitation is no accident. In the back of our minds, we know we're not just speaking to people anymore. There's an algorithm, silently parsing tone, optimising keywords, and sorting outcomes.

We've begun adjusting, not for each other, but for the machine.

From self-expression to self-optimisation

The first time I heard someone say their resume was rejected for not being 'ATS-optimised', tuned to pass an applicant tracking system, it stopped me. They hadn't failed the interview. They hadn't even reached a person. A line of code had already made the decision.

Artificial intelligence (AI) didn't just change how we apply. It changed how we think about ourselves. A study published in PNAS backs this up: participants who were told they were being evaluated by AI, rather than a human, altered their behaviour. Their tone became more formal, less emotional. They didn't try to sound authentic. They tried to sound machine-readable.

The shift is subtle but seismic.

We're now editing our voice, our tone, even our identity, not for managers, professors, or colleagues, but for a system that never blinks. This is the invisible influence of AI. Not what it says but what we think it sees.

Meanwhile, AI is learning to think

On the other side of this equation lies a fascinating paradox: as we become more mechanical in front of AI, it is learning to be more human. A study in the Journal of Management Information Systems asked whether large language models (LLMs) like ChatGPT could conduct Grounded Theory, a sophisticated qualitative method typically carried out by trained researchers.

With the right prompts and human guidance, ChatGPT didn't just summarise findings. It generated new insights, recognised patterns, and built theory from complex data, something previously thought to be uniquely human. It wasn't just recapitulating. It was discovering.

So what's really happening here?

We're witnessing a radical transformation in the human-AI dynamic. One study shows how people shrink themselves when they know AI is watching. Another shows how AI now stretches into traditionally human domains of creativity and insight. Both point to the same truth: AI isn't just operating in our systems; it's operating in our psyches.

The illusion of intelligence - and the cost of belief

Why are we so willing to trust AI with our voice, our behaviour, even our thinking? Professor Neil Dodgson offers a compelling answer in his 2023 paper, 'Artificial Intelligence, ChatGPT, and Human Gullibility'. He argues that we're wired to anthropomorphise, to attribute human characteristics to non-human entities. When AI sounds intelligent, we assume it is intelligent. When it sounds confident, we assume it's right.

But LLMs like ChatGPT don't think. They predict. They generate what sounds most plausible based on massive amounts of prior data. And yet we conflate fluency with truth, a tendency that leaves us vulnerable to automation bias and over-reliance.

In China, AI is a teammate

Interestingly, not all cultures view this transformation in the same way. In a 2024 paper presented at an international conference, researcher Yixin Gao explored how ChatGPT is being integrated into everyday work and learning in China. Rather than fearing it, users embraced AI as a universal intelligent system - a supportive collaborator across medicine, education, and communication.

In this framing, AI isn't a threat. It's a teammate. That perception - useful, humanised, even comforting - may be the most powerful illusion yet.

So where does that leave us?

We now know AI shapes how we speak. It even helps shape what we discover. But the deepest shift may be emotional. We tailor our tone. We delegate our thinking. We fall for fluency. Not because machines are necessarily better but because they're faster, easier, and never judgemental.

The real question: Are we still thinking?

Most of us have already run a message past ChatGPT before sending it to a friend. We've asked it how to say something, how to frame something, and even how to feel about something. But that convenience has a cost. If AI becomes the ghostwriter of our resumes, our research, our imagination - and now, astonishingly, even our therapist - what happens to the slow, uncertain, deeply human work of thinking and feeling for ourselves?

Maybe the most unsettling part of AI isn't what it can do. It's how instinctively we've begun trying to please it.

One last prompt

The next time you find yourself rewriting a sentence to be more 'AI-friendly', pause. Ask yourself: Is this better? Or just faster? Because the real test of intelligence in the age of AI may not be whether machines can think. It may be whether we still choose to.

No Techcircle journalist was involved in the creation/production of this content.

Published by HT Digital Content Services with permission from TechCircle.