Tech Mahindra is developing an indigenous sovereign Large Language Model (LLM) with 1 trillion parameters under the IndiaAI Mission.
Think of 1 trillion parameters as 1 trillion tiny switches inside the AI’s brain that learn patterns from data. More switches generally means the model can pick up more patterns, which usually translates into smarter, more accurate answers.
The deets: the homegrown model is part of a government-backed project to build foundational AI systems in India. Alongside Tech Mahindra, seven other players have been picked to develop these massive LLMs, including Fractal Analytics and BharatGen, an IIT Bombay-led consortium.
The numbers: a 1-trillion-parameter model puts it in the same league as global heavyweights like GPT-4 or Gemini 1.5. Parameters are the internal variables of an AI model, roughly the “neurons” and the connections between them, that help it understand and generate human-like text. The more parameters, the more nuanced the understanding tends to be.
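If you want to see what “counting parameters” actually looks like, here’s a toy sketch. It assumes PyTorch, and the layer sizes are made up purely for illustration; a frontier LLM is the same idea scaled up by many orders of magnitude.

```python
import torch.nn as nn

# A toy two-layer network; the layer sizes are made up for illustration.
model = nn.Sequential(
    nn.Linear(512, 1024),  # 512*1024 weights + 1024 biases
    nn.ReLU(),
    nn.Linear(1024, 256),  # 1024*256 weights + 256 biases
)

# Every weight and bias is one "parameter" the model tunes during training.
total = sum(p.numel() for p in model.parameters())
print(f"{total:,} parameters")  # ~0.8 million; a 1-trillion-parameter LLM is ~1,000,000x bigger
```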
Why you should care: for the common man, this means smarter AI tools made for India, in India. Think chatbots that understand Hinglish, local dialects, or government forms explained in your language. It could power better customer support, healthcare helplines, and education apps, all tailored to how Indians actually speak and search.
Zoom out: the global LLM market is set to explode from $5.7 billion in 2024 to a whopping $123 billion by 2034. That’s a 35.9% compound annual growth rate, driven by AI adoption across industries.
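For readers who want to check the arithmetic: a compound annual growth rate is (end value / start value)^(1/years) - 1. A quick back-of-the-envelope sketch using the forecast figures above:

```python
start, end, years = 5.7, 123.0, 10  # market size in $ billions, 2024 to 2034

# CAGR: the constant yearly growth rate that turns `start` into `end` over `years`.
cagr = (end / start) ** (1 / years) - 1
print(f"CAGR: {cagr:.1%}")  # ~36.0%, in line with the 35.9% cited above
```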
As LLMs power everything from chatbots to copilots, demand for smarter, faster models is surging.
