New HPE offerings aim to turbocharge machine learning implementation
Machine Learning Development System and Swarm Learning address implementation pain points for enterprise users.


Hewlett Packard Enterprise (HPE) has released a pair of systems designed to broaden the uptake and speed deployment of machine learning among enterprises. 

Swarm Learning is aimed at bringing the wisdom of crowds to machine learning modelling without sacrificing security, while the Machine Learning Development System is meant to offer a one-box training solution for companies that would otherwise have had to design and build their own machine learning infrastructure.

The Machine Learning Development System is available in several physical footprints. The company says a "small configuration" uses an Apollo 6500 Gen10 compute server to provide the horsepower for machine learning training, HPE ProLiant DL325 servers and Aruba CX 6300 switches to manage system components, and NVIDIA's Quantum InfiniBand networking platform, along with HPE's specialist Machine Learning Development Environment and Performance Cluster management software suites.

New system brings HPC computing to machine learning

According to IDC research vice president Peter Rutten, it's essentially bringing HPC (high performance computing) capabilities to enterprise machine learning, something that would usually require enterprises to architect their own systems.

"It is the kind of system that businesses are really looking for, now that AI is more mature," he said. "The biggest hurdle with bringing AI into your business is that you have to build the system." 

Using cloud resources could be an option for some companies, but the data required for AI models tends to be sensitive and business-critical. Some businesses will therefore shy away from the cloud, while regulatory restrictions on certain industries make it outright impossible for others.

Swarm Learning decentralises machine learning

The sensitive nature of machine learning data is the pain point HPE is trying to address with its other new product, Swarm Learning. This is a decentralised framework that uses containerisation to accomplish two ends. First, it allows machine learning to take place on edge systems, without the need for a round trip to a central data centre, letting companies gain accurate insights faster than they otherwise could. 

Second, it allows peer companies to actually share the results of AI model learning among themselves, potentially creating industrywide benefits without requiring businesses to share the underlying data with one another.

"So if you have seven hospitals that are all trying to solve problems with AI model training, but they can't share data, then you get limited AI training," said Rutten. This makes for low-accuracy models with potential bias built in, depending on the demographics of the hospitals patients and a host of other factors. 

"In order to solve this … swarm learning doesn't share the data, but it shares the results of the model training in each location and combines those into a trained model."

Rutten noted that swarm learning is a relatively novel technique, meaning that widespread uptake might be slow, but that HPE's Machine Learning Development System directly targets a present-day pain point, making it the more interesting announcement of the two.

"It's almost an aaS [as-a-service] offering in your data centre," he said. "This is what people are looking for in enabling AI model training in their enterprise."
