VMware enlists Intel, IBM watsonx for private AI initiative

by Contributor

VMware has two new corporate partners, in the form of Intel and IBM, for its private AI initiative. The partnerships will provide new architectures for users looking to keep their generative AI training data in-house, as well as new capabilities and features.

Private AI, as VMware describes it, is essentially a generative AI framework designed to let customers use the data in their own enterprise environments — whether public cloud, private cloud or on-premises — for large language model (LLM) training purposes, without having to hand off data to third parties. VMware broke ground on the project publicly in August, when it rolled out VMware Private AI Foundation with NVIDIA. That platform combines the VMware Cloud Foundation multicloud architecture with Nvidia AI Enterprise software, offering systems built by Dell, HPE and Lenovo to provide on-premises AI capabilities.

Earlier this week, at the VMware Explore event in Barcelona, the company announced that it had expanded its partnership with Intel to offer that company’s Max Series GPUs, Xeon processors, and AI software tools as an alternative to Nvidia. The Max Series GPU packs up to 128 of the company’s most advanced AI-focused cores onto a single chip, while its latest Xeon CPUs support Intel Advanced Matrix Extensions (AMX), additions to the x86 instruction set likewise designed to maximize performance for AI and machine learning workloads.

“VMware and Intel are designing a VMware Private AI reference architecture that will enable customers to build and deploy Private AI models and reduce TCO by harnessing the Intel AI software kit, processors, and hardware accelerators with VMware Cloud Foundation,” the company said in an official blog post.

VMware also announced a partnership with IBM that will bring that company’s watsonx AI framework to VMware customers, combining Cloud Foundation with Red Hat’s OpenShift containerization platform to provide a full-stack architecture for data management, governance, and operational machine learning tasks.

“Additionally, organizations will be able to access IBM-selected open-source models from Hugging Face, as well as other third-party models and a family of IBM-trained foundation models to support GenAI use cases,” the blog post said.
