- The collaboration will empower sovereign AI entities to fine-tune models using private data while leveraging SambaNova’s advanced technology.
Saudi Arabia’s leading digital enabler, stc Group, through its AI arm, stc.AI, has launched a Large Language Model (LLM) sovereign cloud platform that will run the world’s largest open-source frontier model in collaboration with California-based SambaNova.
Powered by the fastest inference speeds for Llama 405B, one of the most powerful AI large language models in the world, the stc Group sovereign cloud platform will drive innovation across sectors.
A significant milestone
Key features of the platform include stc Enterprise GPT, a state-of-the-art generative AI solution that enables enterprises to create new content. Built on the fastest inference speeds for Llama 405B, the platform will ensure seamless integration and scalability for enterprises.
The open-source model will allow users within Saudi Arabia to use, modify, and improve the software according to their specific needs, contributing to stc Group’s own Enterprise GPT.
This initiative empowers Saudi enterprises and developers to harness cutting-edge AI technology, fostering innovation and positioning the Kingdom as a leader in AI adoption and development.
“The collaboration with SambaNova marks a significant milestone in our journey to empower Saudi enterprises with sovereign AI capabilities. By offering a secure and scalable inferencing-as-a-service platform, we are enabling organizations to unlock the full potential of their data while maintaining complete control,” said Saud Alsheraihi, Vice President of Digital Solutions at stc Group.
Rodrigo Liang, CEO of SambaNova Systems, commented: “SambaNova is pleased to partner with stc to introduce KSA’s premier sovereign inferencing-as-a-service cloud, running the world’s largest open-source frontier models at one-tenth the power compared to other solutions.
“This partnership showcases cutting-edge research and innovation from both companies and the fastest inference speeds.”
The platform is scheduled to become available later this year.