Organisations exploring the use of data processing units (DPUs) and infrastructure processing units (IPUs) got a boost this week as the Linux Foundation announced a project to make them integral to future data centre and cloud-based infrastructures.
DPUs, IPUs, and smartNICs are programmable networking devices designed to free up CPUs for better performance in software-defined cloud, compute, networking, storage, and security services.
The new plan, called the Open Programmable Infrastructure (OPI) Project, calls for creating a community that develops standards for building DPU/IPU-based architectures.
OPI will develop technologies designed to simplify network, storage, and security APIs within applications to enable more portable and efficient applications in the cloud and data centre across DevOps, SecOps and NetOps, the Linux Foundation stated.
Founding members of OPI include Dell Technologies, F5, Intel, Keysight Technologies, Marvell, NVIDIA, and Red Hat. OPI joins others such as AWS and AMD working to build smartNICs and DPUs for deployment in edge, colocation, or service-provider networks.
“DPUs and IPUs are great examples of some of the most promising technologies emerging today for cloud and data centre, and OPI is poised to accelerate adoption and opportunity by supporting an ecosystem for DPU and IPU technologies,” said Mike Dolan, senior vice president of Projects at the Linux Foundation.
OPI goals include delineating vendor-agnostic frameworks and architectures for DPU- and IPU-based software stacks applicable to any hardware solution. This is in addition to enabling the creation of a rich open-source application ecosystem and integrating with existing open-source projects aligned to the same vision, such as the Linux kernel.
Other goals include creating new APIs for interaction with, and between, the elements of the DPU and IPU ecosystem, including hardware, hosted applications, the host node, and the remote provisioning and orchestration of software.
According to Dolan, DPUs and IPUs are increasingly being used to support high-speed network capabilities and packet processing for applications like 5G, AI/ML, Web3, crypto, and more because of their flexibility in managing resources across networking, compute, security and storage domains.
Instead of servers being the infrastructure unit for cloud, edge, and the data centre, operators could create pools of disaggregated networking, compute, and storage resources supported by DPUs, IPUs, GPUs, and CPUs to meet their customers’ application workloads and scaling requirements.
As part of the OPI announcement, NVIDIA contributed its DOCA networking software APIs to the project. DOCA includes drivers, libraries, services, documentation, sample applications, and management tools to speed up and simplify the development and performance of applications, NVIDIA stated.
DOCA allows for flexibility and portability for BlueField applications written using accelerated drivers or low-level libraries, such as DPDK, SPDK, Open vSwitch, or OpenSSL. BlueField is NVIDIA's data centre services accelerator package.