
New Delhi, Sept. 17 -- With generative AI (GenAI) transforming business models, enterprises are increasingly investing in large language models (LLMs) that go beyond raw performance. Scalability, data privacy, security, and seamless integration with existing systems are becoming key considerations alongside accuracy and speed. While established models such as OpenAI's GPT-4o and GPT-4.5, Google's Gemini 2.0 Pro, Anthropic's Claude 3 series, and Meta's Llama 3 offer advanced capabilities, companies are evaluating factors such as pricing, infrastructure compatibility, and governance frameworks before adoption. This has spurred a growing trend toward building and scaling tailor-made LLMs, often homegrown, to meet specific operational, linguistic, and regulatory requirements, enabling enterprises to innovate while maintaining control and trust.
"The question for us was how to scale this technology and allow our teams to safely experiment and rapidly build applications that drive business value," said Natarajan Ramamurthy, VP of Data Engineering at Target in India.
Target's response to this challenge is ThinkTank, a recently launched developer-friendly platform built to accelerate the creation of GenAI solutions at scale. Designed to empower engineers, data scientists, and product teams, ThinkTank simplifies access to both proprietary and open-source models while embedding safety and compliance controls.
"With the emergence of GenAI, we saw a wide range of opportunities across guest interactions, vendor operations, and internal workflows," Ramamurthy added. "But scaling these solutions across teams and ensuring responsible usage required a robust framework." ThinkTank offers shared capabilities, including context-grounding services, prompt optimisation, and governance tools, all integrated into a unified AI Studio environment. By enabling teams to quickly access vetted models and securely experiment, the platform fosters innovation while minimising risk.
AI-driven use cases at Target range from a digital shopping assistant and guided product search to asynchronous tools that streamline vendor operations. "It's about giving teams the freedom to innovate while ensuring governance is baked into the system," said Ramamurthy.
AI across sectors
Target's approach reflects a broader enterprise shift. U.S.-based e-commerce company Wayfair, which operates a technology development centre in Bengaluru, has been experimenting with AI-powered visual discovery tools. The company's homegrown AI tool Muse, powered by Google's Gemini, enables customers to visualise furniture in specific spaces and automates product categorisation, cutting curation time by 67%. "We experiment with multiple LLMs: Gemini for catalogue enrichment, ChatGPT for customer support, and Claude for coding tasks," said Rohit Kaila, Wayfair's Head of Technology.
AI's role at Wayfair extends to personalising recommendations and optimising supply chains, helping the company cut task completion times from days to hours. Similarly, IKEA's Kreativ, Walmart's Wallaby, and Amazon's Rufus are leveraging AI to enhance customer engagement and streamline operations.
Indian firms drive local innovation
In India, the LLM wave is gaining momentum with a focus on regional languages and local context. Reliance Industries, in partnership with NVIDIA, is developing a foundation model tuned for Indian languages, aiming to enhance user engagement across sectors. Infosys, in collaboration with IIT Madras's AI4Bharat, is working on foundational models for speech recognition, translation, and language understanding, helping enterprises build inclusive, accessible solutions.
Tech Mahindra's Project Indus is building an LLM for Hindi and 37 dialects, using community-driven data and ethical AI practices. Its open-source 'GenAI in a box' framework allows businesses to deploy conversational AI seamlessly.
There are many more such examples among startups. Sarvam AI's sovereign LLM supports governments and enterprises with secure, multilingual AI tools, driving strategic autonomy and innovation. CoRover.ai's BharatGPT, developed with the Ministry of Electronics' Bhashini initiative, supports over 12 languages, including Tamil and Kannada, expanding AI's reach to diverse user bases. As AI adoption deepens, firms are hiring linguistic experts to fine-tune models, calibrate tone, and ensure cultural sensitivity. Industry specialists say nuanced feedback from native speakers is increasingly vital, as reliance on translations can dilute context and accuracy.
Looking ahead
As enterprises scale LLMs across sectors, the focus is on delivering tailored solutions that are secure, compliant, and linguistically diverse. Whether it's enhancing customer interactions, automating backend operations, or building governance-first frameworks, businesses are leveraging AI to future-proof operations and accelerate growth.
Sumit Agarwal, VP Analyst at Gartner, noted that the growing complexity of business workflows and the demand for higher accuracy are pushing enterprises toward specialised models fine-tuned for specific tasks or domain-specific data. "These smaller, purpose-built models offer faster response times and require less computational power, helping organisations cut operational and maintenance costs," he explained.
Against this backdrop, the shift toward homegrown, enterprise-grade AI models signals a transformation in which language, culture, and trust are as critical as the algorithms powering the solutions. With sustained investment and thoughtful scaling, Indian enterprises are poised to lead the next phase of AI-driven innovation.
Published by HT Digital Content Services with permission from TechCircle.