The term 'cloud' has been evolving ever since it was first used in the early 1990s. One could argue that Cloud 1.0 was really nothing more than a euphemism for hosted services.
Hosted services gave companies the ability to run critical apps off their premises in a highly secure, predictable environment. This value proposition continued with the later rise of services like Amazon Web Services (AWS) and Microsoft Azure, where businesses would 'lift and shift' legacy apps into the cloud.
Cloud 2.0 gave rise to web-optimised apps. In Cloud 2.0 the apps were truly built for the cloud and spawned companies that made the cloud their primary compute platform. However, this cloud strategy revolved around a single cloud provider and traditional monolithic app architectures.
Even companies that used multiple clouds built app A on one cloud, app B on another, etc. In this case, multi-cloud was actually multiple clouds being used as discrete, independent infrastructure entities.
We have now entered the Cloud 3.0 era, which can be thought of as multi-cloud on steroids. The rise of microservices and containers has allowed app developers to build apps by accessing services from multiple cloud providers.
Most modern, cloud-native apps are being built this way. Edge computing is on the horizon, which will create more locations for app developers to extend access to data and app services. This is the concept of the distributed cloud, where the cloud is no longer a single location, but a set of distributed resources.
Distributed cloud changes app delivery
The evolution of the cloud – and cloud-native apps as a result – has had a profound impact on the networking and security services required to connect app components to other app components and to connect apps to users.
With Cloud 1.0, IT professionals used physical appliances such as load balancers or application delivery controllers and web app firewalls. These were installed in the same data centres that hosted the app infrastructure.
Meanwhile with Cloud 2.0, these functions were virtualised and installed as a cloud resource. The network and app architectures were largely the same, but the infrastructure shifted to cloud-resident virtual appliances.
With distributed cloud (Cloud 3.0), app components (e.g. microservices) are modular and reside in containers across multiple clusters. This creates significant deployment and operational challenges for DevOps teams and IT professionals.
The dynamic and distributed nature of containerised workloads and microservices can't be supported by the physical or even virtual infrastructure traditionally used for monolithic apps, which relies on centralised control and visibility rather than the dynamic operational model needed for highly distributed clusters and workloads.
A containerised workload can be spun up and torn down in a matter of minutes, even seconds. This means the supporting network and security infrastructure like load balancers, web app firewalls, and API gateways needs to be spun up and down just as quickly.
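The pace mismatch described above is essentially a reconciliation problem: the load balancer's backend pool must continuously track whichever workloads happen to be alive. A minimal sketch of that idea, with hypothetical workload names (in a real platform this loop would be driven by orchestrator events such as Kubernetes endpoint updates, not a plain function call):

```python
def reconcile_backends(live_workloads, lb_pool):
    """Return the add/remove actions needed so the load balancer's
    backend pool matches the set of currently running workloads."""
    to_add = live_workloads - lb_pool      # new instances that need traffic
    to_remove = lb_pool - live_workloads   # terminated instances to drain
    return to_add, to_remove

# A workload can appear and vanish within seconds, so reconciliation
# must run continuously rather than as a one-off appliance install.
running = {"payments-7f9c", "payments-2b1d", "checkout-9a3e"}
pool = {"payments-7f9c", "checkout-0ld1"}  # stale entry from a dead instance

add, remove = reconcile_backends(running, pool)
```

The point of the sketch is that with ephemeral workloads, delivery infrastructure becomes a control loop over a moving target, which is exactly what static physical or virtual appliances were never designed to be.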
Meeting the operational challenges of distributed cloud
This week, cloud-native app infrastructure provider Volterra announced the latest release of its VoltMesh service, which addresses many of the operational challenges associated with deploying and operating modern apps in a distributed cloud model.
The company offers a wide range of what would typically be known as 'application delivery services' from the cloud, made available as a single SaaS-based offering.
Instead of having to deploy multiple virtual appliances per cluster, across multiple clusters, VoltMesh offers an integrated service stack that can be easily deployed across a distributed cluster with centralised management, end-to-end visibility, and policy control.
By leveraging the speed and ubiquity of a SaaS-based service, DevOps can speed up the deployment of distributed cloud-native applications and simplify ongoing operations, while developers can achieve better code integration and agility as they have greater freedom over where and how they develop.
VoltMesh can be deployed in clusters in every major cloud provider as well as private clouds and edge sites. Volterra also offers its own application delivery network (ADN) to improve the performance and security of cloud-native apps – hosting them directly on Volterra’s private network and closer to end users.
The cloud-native infrastructure approach Volterra is taking is about more than just speed, though; there are major security implications. As app developers shift to building cloud-native apps on microservices, new threats are emerging from embedded and unseen APIs.
The number of APIs per app has exploded, creating more intra-application traffic. Security approaches like segmentation, even micro-segmentation, operate at the network layer – so they’re blind at the API layer. This means DevOps and DevSecOps teams must shift the focus of their zero-trust model from the network to the API layer, including the ability to 'see' all APIs and then enforce policies on them.
As part of this release, VoltMesh has unveiled its new API auto-discovery capability that can automatically find all APIs in an application through the use of its machine learning engine.
It then automatically applies policies to whitelist only the APIs that are required and validated, blocking any that are unneeded or unsafe. This shrinks the attack surface significantly, improving protection and compliance without delaying app release cycles.
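The discover-then-whitelist pattern can be sketched in a few lines. This is an illustrative toy, not Volterra's implementation: the function names, the traffic-log shape, and the exact-match policy are all assumptions made for the example.

```python
def discover_apis(traffic_log):
    """Collect the distinct (method, path) pairs observed in traffic."""
    return {(req["method"], req["path"]) for req in traffic_log}

def make_policy(allowlist):
    """Return an enforcement check that admits only allowlisted APIs."""
    def allow(method, path):
        return (method, path) in allowlist
    return allow

# Hypothetical observed traffic, including an API nobody knew was exposed.
traffic = [
    {"method": "GET", "path": "/api/v1/orders"},
    {"method": "POST", "path": "/api/v1/orders"},
    {"method": "GET", "path": "/internal/debug"},  # unneeded, unsafe
]

discovered = discover_apis(traffic)
# After review, only the validated order endpoints are whitelisted:
policy = make_policy({("GET", "/api/v1/orders"),
                      ("POST", "/api/v1/orders")})
```

Note the zero-trust framing: enforcement happens per API call, not per network segment, so the stray `/internal/debug` endpoint is denied even though it lives on the same network as the allowed traffic.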
The concept of multi-cloud (or multiple clouds) plus monolithic apps is rapidly giving way to distributed cloud plus distributed apps, and this will change the way we provision networking and security infrastructure for them.
Traditional application delivery infrastructure needs to evolve to be cloud-native and distributed if DevOps and NetOps teams hope to keep up with the agility of developers building modern apps today.