Ingram Micro stitches AI with infrastructure, powered by NetApp and Nvidia

Designed to help partners unify AI workloads and simplify customer deployments.

Ingram Micro is offering pre-configured artificial intelligence (AI) infrastructure solutions via the channel through an enhanced alliance with NetApp and Nvidia.

The move is designed to arm partners with the capabilities to support businesses in unifying AI workloads and simplifying deployments to enhance investment returns, utilising ONTAP AI -- a cloud-connected all-flash storage solution from NetApp -- and DGX systems powered by Nvidia.

“With the increasing demands of AI, we realised that more organisations are looking at how to transform their businesses in their management -- like streamlining processes and aggregating data,” said Eunice Lau, executive managing director of Singapore at Ingram Micro.

“In addition to achieving faster time to deployment with preconfigured solutions, NetApp ONTAP AI removes deployment complexity with multiple points of support, with configurations ranging from starter to large with different capacity options with available expansion, to cater to every business need.”

Combined, the offerings aim to help customers “realise the promise” of AI and deep learning technologies, underpinned by strengthened networking capabilities.

“Enterprises are making significant investments in AI but often lack the infrastructure required to optimise data and scale AI applications and workflows for impactful business outcomes,” said Wendy Koh, vice president of Pathways, Alliances and Strategy across Asia Pacific at NetApp.

“NetApp ONTAP AI and Nvidia DGX systems create a single data environment for AI. By simplifying operations, customers are empowered to ensure the right data is available to the right teams to harness AI as a force-multiplier.”

According to Dennis Ang -- senior director of Enterprise Business across ASEAN and Australia and New Zealand at Nvidia -- enterprises adopting AI require infrastructure designed to provide data rapidly to accelerated computing platforms.

“This provides an ideal solution for enterprises to scale the AI workloads supporting new services and products that boost safety, grow operational efficiency and increase customer satisfaction,” Ang said.