Container Orchestration

The Cloud Native Computing Foundation (CNCF) focuses on integrating the orchestration layer of the container ecosystem. The CNCF selected Google's Kubernetes container orchestration tool as its first hosted containerization technology, supporting its goal to create and drive adoption of a common set of container technologies.

Containers are changing how enterprises build and run their applications and infrastructure. Containers have become popular and are being used in increasingly dynamic infrastructure thanks to various supporting technologies. As more companies migrate workloads to containers, orchestration platforms like Kubernetes and Amazon Elastic Container Service (ECS) have grown in popularity; if the trend continues, the majority of enterprises running containers will soon use Kubernetes to some extent. The trend is also borne out by the increased adoption of managed Kubernetes services such as GKE (on Google Cloud Platform) and EKS (on Amazon Web Services).

Container orchestration platforms exist to solve the scalability of day-to-day operations in large-scale container environments. For an enterprise running production containers at scale, orchestration becomes essential: deployment automation, management, hybrid or multi-cloud scaling, networking, and availability of containers all have to be handled consistently.

Container orchestration is about managing the lifecycles of containers, especially in large, dynamic environments. It is normally used to automate tasks such as:

- Provisioning and deploying containers
- Scaling containers up or down to spread application load across the host infrastructure
- Load balancing and service discovery between containers
- Monitoring container health and replacing failed containers
- Allocating resources between containers

How Container Orchestration Works

Container orchestration tools typically describe the application configuration in a YAML or JSON file, depending on the tool. These configuration files tell the orchestrator where to obtain container images, how to establish networking between containers, how to mount storage volumes, and where to store logs for that container. Typically, the configuration files are branched and version controlled so the same application can be deployed across different development, staging, and testing environments before being deployed to production clusters.
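As a concrete sketch, a Kubernetes YAML manifest covers exactly these concerns: the image to pull, the networking to expose, and where logs land. The names, registry, and image tag below are purely illustrative.

```yaml
# Illustrative Kubernetes Deployment manifest; names and image are hypothetical.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web-frontend
spec:
  replicas: 3                    # run three identical copies of the container
  selector:
    matchLabels:
      app: web-frontend
  template:
    metadata:
      labels:
        app: web-frontend
    spec:
      containers:
        - name: web
          image: registry.example.com/web-frontend:1.4.2  # where to obtain the image
          ports:
            - containerPort: 8080                         # networking between containers
          volumeMounts:
            - name: app-logs
              mountPath: /var/log/app                     # where the app writes its logs
      volumes:
        - name: app-logs
          emptyDir: {}
```

Because this is a plain text file, it can be committed to version control and promoted through development, staging, and testing branches before reaching production, as described above.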

Containers are deployed to hosts, usually in replicated groups. When a new container needs to be deployed into a cluster, the orchestrator schedules the deployment and looks for the most appropriate host to place the container on, based on predefined constraints such as available CPU or memory. Once the container is running on the host, the orchestrator manages its lifecycle according to the specifications laid out in the configuration file (a Dockerfile, by contrast, only defines how the container image itself is built).
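In Kubernetes, for example, those CPU and memory constraints are expressed as resource requests and limits on each container; the scheduler uses the requests to pick a host with enough free capacity. The fragment below is a hedged sketch with hypothetical values.

```yaml
# Illustrative container spec fragment; the scheduler uses the CPU and
# memory *requests* to choose a host with sufficient free capacity.
containers:
  - name: web
    image: registry.example.com/web-frontend:1.4.2  # hypothetical image
    resources:
      requests:
        cpu: "250m"        # a quarter of one CPU core, reserved for scheduling
        memory: "256Mi"
      limits:
        cpu: "500m"        # hard ceiling enforced at runtime
        memory: "512Mi"
```

Requests influence placement decisions, while limits cap what a running container may consume on its host.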

Containers are supported in many kinds of environments, from traditional on-premises hosts to public cloud instances.

Kubernetes as the de Facto Standard

Originally developed by Google, Kubernetes has established itself as the de facto standard for container orchestration. It is the flagship project of the CNCF, which is backed by key players such as Google, Amazon Web Services (AWS), Microsoft, IBM, Intel, Cisco, and Red Hat.

Kubernetes continues to gain popularity with DevOps practitioners because it allows them to deliver a self-service Platform-as-a-Service (PaaS) that creates a hardware abstraction layer for development teams. Kubernetes is also extremely portable: it runs on public clouds as well as in on-premises installations, and application workloads can be moved without redesigning the application's architecture or rethinking the infrastructure. This helps enterprises standardize on a platform and avoid vendor lock-in.
