Running microservices on Google Cloud Platform

From roll-your-own Kubernetes and PaaS to serverless containers and functions, Google Cloud provides many options for building microservices applications in the cloud. Here’s a guide.

Microservices architecture moves complexity out of the internal design of a single application and into an external architecture of network-connected services.

Cloud providers offer a variety of approaches for managing this complexity. This article gives you an overview of the options available in the Google Cloud Platform (GCP).

GCP microservices tools overview

Users can get a general introduction to microservices here. There are a number of approaches to dealing with such an architecture of smaller, interrelated services. Below is a list of options available in GCP.

  • Roll-your-own Kubernetes with Google Compute Engine
  • Managed Kubernetes with Google Kubernetes Engine
  • Serverless container architecture with Google Cloud Run
  • Platform-as-a-service on Google App Engine
  • Serverless functions with Google Cloud Functions

This overview proceeds, in a general way, from the most hands-on, developer-driven approach towards the more hands-off, platform-managed options. These options are not mutually exclusive: some teams standardise on a single approach, while others blend several. Cloud Functions, in particular, are often used in conjunction with other approaches to handle smaller requirements.

Also note that this article deals with application architecture specifically, and does not consider concerns like datastore solutions.

Roll-your-own Kubernetes with Google Compute Engine

Kubernetes is a cross-platform, open source system (originally developed at Google) for managing containerised application clusters.

The most hands-on approach to building microservices applications is to define your virtual machines and networking in Google Compute Engine, then install Kubernetes into this infrastructure. You are then in charge of configuring and running the Kubernetes cluster on top of this infrastructure.

The general process is to create a control-plane (master) VM and one or more worker VMs, with Kubernetes installed to control the containerised applications deployed therein. An overview of running on Google Compute Engine from the Kubernetes docs is here, and guides to installing Kubernetes with deployment tools are here.

Manually defining the infrastructure gives the developer the greatest degree of control. The flip side of that coin is that it also requires the most intervention. Infrastructure setup like VM provisioning and network configuration can be managed with tools like Ansible and Terraform, and autoscaling can be supported by services like Google Cloud Monitoring.

Managed Kubernetes with GKE

Google Kubernetes Engine (GKE) is a higher-level abstraction built atop Kubernetes. It is designed to automate certain aspects of cluster management. These include:

  • Automated load balancing
  • Node pools (configurable subsets of a cluster’s nodes)
  • Automatic scaling of your node instance count
  • Automatic upgrades for your cluster’s node software
  • Node auto-repair to maintain node health and availability
  • Logging and monitoring with Google Cloud Operations

In general, GKE strives to bundle together the common needs faced by developers when managing Kubernetes clusters, from setup and provisioning to monitoring and autoscaling, and offer simplified means for addressing them. Moreover, GKE allows for managing many of these options via its web GUI.

GKE includes logging at both the container and host level. GKE also supports integration with GCP's CI/CD tooling like Cloud Build. You can publish your container images to Google's Container Registry.

Of course, these conveniences come at a cost. GKE clusters incur a management fee over and above the underlying compute resources on which they run. You’ll find a pricing guide and calculator here, and an overview of GKE here.

Serverless container architecture with Google Cloud Run

Google Cloud Run is a serverless abstraction layer built atop Knative, which is an open source project for creating serverless applications atop Kubernetes.

In general, Google Cloud Run is a higher-order abstraction over and above GKE. Cloud Run abstracts away from the developer almost all of the provisioning, configuration, and management of the Kubernetes cluster. It is designed to run simple microservices applications that require little customised infrastructure management.

Google Cloud Run can also bring its management facilities to an existing GKE cluster via Cloud Run for Anthos, thereby opening up a greater degree of developer control.

When choosing between GKE and Google Cloud Run, Google recommends that you “understand your functional and non-functional service requirements like ability to scale to zero or ability to control detailed configuration.” This is sound advice in any case, but specifically here the question is whether Cloud Run offers you the control you need for your application architecture. If not, you need to use GKE.

Like PaaS solutions, Google Cloud Run requires you to employ a stateless application architecture.
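To make that stateless requirement concrete, here is a minimal sketch of the kind of service Cloud Run hosts, using only the Python standard library. One assumption drawn from Cloud Run's container contract is that the platform injects the listening port via the PORT environment variable; the handler name and response body are purely illustrative:

```python
import os
from http.server import BaseHTTPRequestHandler, HTTPServer

class Handler(BaseHTTPRequestHandler):
    """Stateless handler: no instance-local state is kept, so any
    container replica can serve any request."""

    def do_GET(self):
        body = b"Hello from Cloud Run\n"
        self.send_response(200)
        self.send_header("Content-Type", "text/plain")
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

def serve():
    # Cloud Run tells the container which port to listen on via $PORT.
    port = int(os.environ.get("PORT", "8080"))
    HTTPServer(("0.0.0.0", port), Handler).serve_forever()

# In the container image, the entry point simply calls serve().
```

Packaged into a container image and deployed with `gcloud run deploy`, a service shaped like this can scale to zero precisely because no request depends on instance-local state.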

Platform-as-a-service on Google App Engine

As an abstraction of application infrastructure, platform as a service (PaaS) stands somewhere between IaaS and serverless. Although you will see Google App Engine referred to as serverless, it is fundamentally a PaaS. Google App Engine also runs your code in containers under the hood, but this infrastructure is largely hidden from you as the developer.

As with other PaaS offerings like Cloud Foundry, Google App Engine applications must be stateless. This is because the PaaS itself is responsible for scaling up and down and routing requests. The developer does not have control over how app resources are added or removed. An app node that handles a given client request may not exist for the next request.
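The pitfall the statelessness rule guards against can be shown with a short hypothetical sketch. The function names are illustrative, and the `store` parameter stands in for an external shared service such as Memorystore or Firestore:

```python
# Anti-pattern on a PaaS: module-level state lives only in one
# instance's memory. The platform may route the next request to a
# different instance, or scale this one away entirely, so the
# counter silently resets or diverges between instances.
visit_count = 0

def handle_request_stateful():
    global visit_count
    visit_count += 1  # lost whenever this instance is recycled
    return visit_count

def handle_request_stateless(store):
    # Stateless alternative: shared state lives in an external
    # service ('store' is a hypothetical stand-in for something
    # like Memorystore or Firestore), so any instance can serve
    # any request and get a consistent answer.
    return store.increment("visit_count")
```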

Serverless functions with Google Cloud Functions

Google Cloud Functions fall into the FaaS (functions as a service) category. This is the most abstracted sort of cloud computing. The unit of deployment is the function, and the infrastructure to deliver the processing is highly managed.

Google Cloud Functions are triggered by events and perform simple, function-scoped actions. Triggers at the time of writing include HTTP, Cloud Storage, and Pub/Sub. Data from the trigger is passed into the function as parameters.
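As a sketch of the two most common trigger shapes on the Python runtime, an HTTP-triggered function receives a request object, while a Pub/Sub-triggered function receives an event payload (with the message data base64-encoded) plus a context argument. Function names and payload contents here are illustrative:

```python
import base64
import json

def hello_http(request):
    # HTTP trigger: 'request' carries the incoming request; the
    # return value becomes the HTTP response body.
    name = "world"
    if request is not None and getattr(request, "args", None):
        name = request.args.get("name", "world")
    return json.dumps({"message": "Hello, %s" % name})

def on_pubsub_message(event, context):
    # Pub/Sub trigger: the message payload arrives base64-encoded
    # in event["data"]; 'context' carries event metadata.
    payload = base64.b64decode(event["data"]).decode("utf-8")
    print("Received message: %s" % payload)
    return payload
```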

At the moment, Google Cloud Functions supports Go, Java, .NET, Node.js, Python, and Ruby as runtime languages. Each runtime allows for idiomatic use of its related technology: for example, you can use the Java Servlet API to handle HTTP triggers, or adopt more advanced approaches such as the Spring Cloud Function framework or Express on Node.js.

Google Cloud Functions represent a very powerful and simple approach to deploying functionality. However, they are limited in their ability to handle complex use cases and they limit the ability of developers to control infrastructure. Cloud Functions are often used to handle smaller chunks of functionality in conjunction with the other approaches described in this article.

Google suggests Cloud Functions for these types of use cases:

  • Lightweight data processing and ETL: Run data- or file-based triggers to handle tasks like image processing or compression
  • Webhooks: Respond to HTTP-based requests from systems like GitHub or Stripe
  • Lightweight APIs: Handle individual requests or events that can be interrelated to compose larger applications
  • Mobile back end: Act as an intermediary between cloud-based services like Firebase and other services
  • IoT: Leverage Pub/Sub triggers to handle IoT-scale eventing
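The first use case above can be sketched as a Cloud Storage-triggered function. The `event` dict carries object metadata such as `bucket` and `name`; the function name, the extension filter, and the return value are hypothetical, and a real function would go on to fetch and transform the object with a storage client:

```python
import os

# Illustrative extension filter for the "lightweight ETL" use case.
IMAGE_EXTENSIONS = {".jpg", ".jpeg", ".png", ".gif"}

def on_file_upload(event, context):
    # Cloud Storage trigger: fires once per uploaded object, with
    # object metadata in the 'event' dict.
    bucket = event["bucket"]
    name = event["name"]
    _, ext = os.path.splitext(name.lower())
    if ext not in IMAGE_EXTENSIONS:
        # Not an image: nothing to do in this sketch.
        return None
    # A real function would fetch gs://{bucket}/{name} with the
    # google-cloud-storage client and resize or compress it here.
    return "gs://%s/%s" % (bucket, name)
```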

Many microservices options

The landscape of cloud services in GCP offers many options for application architectures supporting microservices. By understanding the microservices offerings and tools available, you can find the right architecture and approach to successfully meet your requirements.
