Containers and Kubernetes: 3 transformational success stories

Powerful combo of workload portability and orchestration has become an invaluable business asset in multi-cloud and hybrid cloud environments

“This environment allows us to continue to see value out of applications that we have developed over several decades in a new and modern way,” Pellas says. “It also plays a key role in our API [application programming interface] and microservices strategy” by facilitating the delivery of new capabilities to business applications, he says.

The container and Kubernetes combination enables Primerica to not only host applications but also to monitor them and recover quickly should anything happen to the containers.
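That recovery model can be sketched in a minimal Kubernetes Deployment. The manifest below is a generic illustration, not Primerica's actual configuration; the application name, image, and probe endpoint are hypothetical:

```yaml
# Hypothetical Deployment: Kubernetes restarts any container
# whose liveness probe fails, giving automatic recovery.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: example-app              # hypothetical application name
spec:
  replicas: 3                    # three copies for availability
  selector:
    matchLabels:
      app: example-app
  template:
    metadata:
      labels:
        app: example-app
    spec:
      containers:
      - name: example-app
        image: registry.example.com/example-app:1.0   # hypothetical image
        ports:
        - containerPort: 8080
        livenessProbe:           # kubelet restarts the container if this check fails
          httpGet:
            path: /healthz
            port: 8080
          initialDelaySeconds: 10
          periodSeconds: 15
```

Because the manifest is declarative, Kubernetes continuously reconciles the running state with it: if a container crashes or its health check fails, the platform replaces it without operator intervention.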

“We provide end-to-end infrastructure-as-code, which enables us to consistently build environments that are predictable, and eliminates the possibility of human error,” Pellas says.

“Our continuous integration and delivery capabilities ensure that product teams always have the latest changes at their fingertips, and they can feel confident that verification of the environment and application has been done as part of the deployment process itself.”

The main drivers for using containers and Kubernetes are to provide teams with an opportunity to deliver applications faster and with better quality, Pellas says. They also provide a secure, stable environment in which to work, and scalability during high-use times.
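One common way Kubernetes handles that scaling during high-use times is a HorizontalPodAutoscaler, which adds or removes pod replicas as load changes. This manifest is a generic sketch, not Primerica's configuration, and the names and thresholds are hypothetical:

```yaml
# Hypothetical autoscaling policy: Kubernetes scales the target
# Deployment between 3 and 12 replicas, aiming for ~70% average CPU use.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: example-app-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: example-app            # hypothetical Deployment to scale
  minReplicas: 3
  maxReplicas: 12
  metrics:
  - type: Resource
    resource:
      name: cpu
      target:
        type: Utilization
        averageUtilization: 70
```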

“We also wanted the predictability and consistency between our various environments, to help with debugging and problem resolution,” Pellas says.

While Primerica has only recently begun to take advantage of containers and Kubernetes, it is seeing increased productivity of teams as well as development of new features for its users that can be delivered in an incremental, agile way.

“We have also been able to provide more secure and predictable applications by catching issues early in the development process,” Pellas says. “We hope this will lead to increased quality of our applications and a consistent development experience throughout our product teams, as we migrate more and more applications to the platform.”

As with any technology, there is a learning curve for organisations looking to adopt containers and Kubernetes. “Enabling teams with the right skill sets to properly develop within the environment can be challenging,” Pellas says.

Primerica is addressing those challenges by providing education to its product teams and best practices for business leaders to get their projects into the pipelines.

Clemson University: Wrangling massive computational resources

The Feltus lab in the Department of Genetics and Biochemistry at Clemson University is an interdisciplinary team of geneticists, computer scientists, computer engineers, and bioengineers who blend software engineering and computational biology techniques to make useful molecular discoveries in human and plant biological systems.

The lab uses bioinformatics, statistical and data science approaches to discover patterns, says Alex Feltus, a professor in the department. “The biological data sets we analyse are in the tera- to peta-scale range, and we engineer optimised data-intensive computational workflows that fit data to a myriad of computational platforms,” including those of several commercial cloud providers.

In recent years, the lab has focused development efforts on workflows that run on Kubernetes systems. “We believe that Kubernetes will be a common standard platform for data-intensive computing for many years, which allows us to focus our software engineering efforts on one architecture,” Feltus says.

Biological databases are growing geometrically, Feltus says, and data sets can be mined for biological insight into some of the largest medical and food security challenges. “Even small biology labs are in constant need of massive computational resources,” he says.

“Researchers will soon want to ask biological questions at the petascale [level], which will require access to massive computers that are currently only available in commercial clouds. Kubernetes clusters are an excellent platform for large-scale computing.”

Before moving to the cloud, “biological researchers need democratised, credit-free cloud sandboxes where one can engineer and test workflows at scale,” Feltus says. “These sandboxes are critical, since 90 per cent of scientific experiments lead to dead ends, which would burn cloud credit budgets before discovery occurs.”

The Feltus lab is working with many other research groups to pilot scalable resources that blend on-premises and cloud services, and Kubernetes and containers will play a huge role in that effort.

“Kubernetes and containers are a go-to platform for computational biology workflow engineering,” Feltus says. “These systems have allowed my students to bypass many of the vagaries of HPC [high-performance computing] environment configuration.”

The lab has deployed Cisco Container Platform, which helps the team manage multiple clusters from various cloud providers under one platform. Once the lab has tested its workflows in Kubernetes clusters, it can run the containerised workflows in multiple commercial clouds. “This streamlines end-user training and allows the user to focus on the science,” Feltus says.
