New Delhi, April 24 -- This month, Meta unveiled its new AI model, Llama-3, and Microsoft introduced Phi-3. The base version of each is tiny in comparison to large language models (LLMs). It turns out that shrinking AI models is how Big Tech plans to reach every user in the world.
Small language models (SLMs) can be as small as just 0.1% the size of LLMs. For instance, Google's Gemini Nano-1 uses 1.8 billion parameters, compared with the 1.75 trillion parameters reportedly used by OpenAI's GPT-4. This has many advantages: companies building such models need less data, but more task-specific data, which is easier to obtain. Further, small models need less powerful computers to train, so costs come down. For users, an average smartphone would be able to run and process such...
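As a quick sanity check of the "0.1%" figure, the ratio of the two parameter counts cited above can be computed directly (the 1.75 trillion figure for GPT-4 is an unconfirmed estimate, as noted):

```python
# Parameter counts cited in the article (GPT-4's is a reported estimate)
gemini_nano_1_params = 1.8e9    # 1.8 billion
gpt4_params = 1.75e12           # 1.75 trillion (unconfirmed)

# Ratio of the small model to the large one, as a percentage
ratio_pct = gemini_nano_1_params / gpt4_params * 100
print(f"{ratio_pct:.2f}%")  # roughly 0.10%
```

The ratio comes out to about 0.1%, matching the claim that SLMs can be three orders of magnitude smaller than the largest LLMs.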