India, Feb. 18 -- Bengaluru-based AI startup Sarvam AI unveiled two new large language models (LLMs), Sarvam-30B and Sarvam-105B, on the third day of the ongoing 'India AI Impact Summit 2026'.
The Sarvam-30B model supports a context length of up to 32,000 tokens, enabling lighter usage and price efficiency through lower power consumption. The model has been trained on 16 trillion tokens, allowing for more efficient thinking and logical reasoning at lower token usage.
The startup tested and demonstrated that the model's performance across general reasoning and coding benchmarks was on par with other popular models such as Gemma 27B, Mistral-32-24B, OLMo 31.32B, Nemotron-30B, Qwen-30B and GPT-OSS-20B.
Meanwhile, Sarvam-105B supports context length o...