Snowflake has taken the wraps off Arctic, a new large language model (LLM) it hopes will revolutionize enterprise AI by outperforming rival models on complex enterprise workloads.
Arctic, with its Mixture-of-Experts (MoE) architecture, promises to tackle tedious business tasks effectively and at scale.
Snowflake says the launch demonstrates its commitment to openness and collaboration: the LLM's weights have been released under an Apache 2.0 license, along with details of the research behind its training.
“By delivering industry-leading intelligence and efficiency in a truly open way to the AI community, we are furthering the frontiers of what open source AI can do,” Snowflake CEO Sridhar Ramaswamy commented.
The launch of Arctic comes as many enterprises are now actively seeking open-source LLMs, with Snowflake citing a Forrester report revealing nearly half (46%) of global enterprise AI decision-makers are leveraging such models to adopt generative AI.
As of January 2024, Snowflake claims to work with more than 9,400 companies around the world. The company also touts Arctic's performance and efficiency, claiming the model activates roughly 50% fewer parameters than DBRX, and 75% fewer than Llama 3 70B, during inference or training.
Snowflake describes Arctic as a “truly open model… that permits ungated personal, research, and commercial use.” The LLM is available for serverless inference in Snowflake Cortex, and it will also be available through AWS, Hugging Face, Lamini, Microsoft Azure, the Nvidia API catalog, Perplexity, and Together AI.