Traditionally, highly available, resilient applications had to be deployed across a large number of servers or virtual machines. Running one application per virtual machine makes it hard to make the most of your resources, and creating and deleting virtual machines can take a long time. Containers (such as those run with Docker) are designed to solve these issues: they start and stop quickly, and they improve your resource utilisation because you can run many containers on a single machine.
Kubernetes was created to make it easy to manage containerised applications across many – potentially distributed – machines. Some of the high-level features you get with Kubernetes are: automated, zero-downtime deployments; horizontal and vertical autoscaling; load balancing across multiple containers on multiple servers; and container-aware service discovery through DNS.
All of these work with a declarative API. Users define the desired state of the application (say, two instances with load balancing) in their preferred configuration language (JSON or YAML). Kubernetes performs the necessary steps to bring the application into the desired state. This enables a self-healing system. For example, when one instance of the application dies, Kubernetes will automatically start a new instance to maintain the desired state.
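As a sketch of what that desired state looks like, here is a minimal Kubernetes Deployment manifest requesting two instances of an application. The names `my-app`, the image `example/my-app:1.0` and port `8080` are placeholders, not taken from any real deployment:

```yaml
# Hypothetical manifest: declares the desired state
# of two replicas of an application container.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                  # illustrative name
spec:
  replicas: 2                   # desired state: two running instances
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: example/my-app:1.0   # placeholder image
          ports:
            - containerPort: 8080
```

Kubernetes continuously reconciles reality against this specification: if one of the two Pods dies, the Deployment controller starts a replacement to restore the declared replica count.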
The configuration can be stored in files and added to source control repositories next to the application’s source code. This neatly ties in with the infrastructure as code and immutable infrastructure patterns.
Kubernetes is clearly winning as the new standard for cluster orchestration. Several providers offer cloud-hosted or packaged Kubernetes distributions, making it an obvious strategic choice as an application platform. That said, Kubernetes is a complex toolset and, if you’re looking to adopt it within your organisation, you’ll need a strategy. In our opinion, Kubernetes makes it possible for small teams to fully own their infrastructure for the first time. We’ve given that strategy a name – microplatforms.