Containers encapsulate discrete components of application logic provisioned only with the minimal resources needed to do their job.
- Unlike virtual machines (VMs), containers have no need for an embedded operating system (OS); calls for OS resources are made via an application programming interface (API).
- Containerisation is, in effect, OS-level virtualisation (as opposed to VMs, which run on hypervisors, each with a full embedded OS).
- Containers are easily packaged, lightweight and designed to run anywhere. Multiple containers can be deployed in a single VM.
- A microservice is an application with a single function, such as routing network traffic, making an online payment or analysing a medical result.
- The concept is not new; it has evolved from web services, and stringing microservices together into functional applications is an evolution of the service-oriented architecture (SOA), which was all the rage a few years ago.
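To make the idea concrete, a microservice can be as small as a single function behind an HTTP endpoint. The following is a minimal sketch using only Python's standard library; the service and function names (such as classify_reading) are illustrative, not taken from any particular product:

```python
import json
from http.server import BaseHTTPRequestHandler, HTTPServer
from urllib.parse import urlparse, parse_qs

def classify_reading(value):
    """The service's single business function: flag a (hypothetical)
    medical reading as normal or abnormal."""
    return {"value": value, "status": "abnormal" if value > 140 else "normal"}

class Handler(BaseHTTPRequestHandler):
    def do_GET(self):
        # e.g. GET /classify?value=120
        query = parse_qs(urlparse(self.path).query)
        value = float(query.get("value", ["0"])[0])
        body = json.dumps(classify_reading(value)).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

def main():
    # The whole service does one thing, so it can be deployed, scaled
    # and replaced independently of any other component.
    HTTPServer(("", 8080), Handler).serve_forever()

# Call main() to start serving on port 8080.
```

Because the service exposes one narrow interface, it can be packaged into a container on its own and composed with other such services into a larger application.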
Containers and microservices are not the same thing.
A microservice may run in a container, but it could also run as a fully provisioned VM. A container need not be used for a microservice. However, containers are a good way to develop and deploy microservices, and the tools and platforms for running containers are a good way to manage microservice-based applications. In many cases, the terms can be interchanged.
Containers have been integral to Unix and Linux for years. A recent change has been the ease with which they can be used by all developers, and an entire supporting ecosystem has grown up around them. Containerisation is not something happening on the fringes of IT; it is core to the way many web-scale services operate and is increasingly being adopted by more conservative organisations. The suppliers mentioned in this article cite customers ranging from the NHS to large banks.
There are many suppliers involved, but no one disputes that Docker has led the charge and sits at the heart of the market. Docker says millions of developers and tens of thousands of organisations are now using its technology. However, another statistic indicates the novelty of containers in production.
Docker’s dominance does not mean it holds a monopoly; far from it. Across the whole container ecosystem, there is plenty of choice. There are many startups and the great and good of the IT industry are all on board, as a glance at the sponsors of the February 2016 Container World event shows. The top sponsor is IBM, which is one of Docker’s three main global go-to-market partners, along with Microsoft and Amazon Web Services (AWS).
Open source componentisation
Multiple containers are deployed in clusters and managed using a range of tools. Many of these containers will be pre-built components that can be layered together to build up application images. A prime benefit is that individual components can be updated with less scheduled downtime, which means better business continuity. This has led to the rise of the DevOps concept, which allows faster deployment of new software capabilities directly into an operating environment.
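The layering described above is visible in a Dockerfile, where each instruction adds a layer on top of a pre-built base image pulled from a registry. A minimal sketch (image names and commands are illustrative):

```dockerfile
# Start from a pre-built, publicly shared base image
FROM python:3-slim

# Each instruction below adds a new layer on top of the base
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt   # dependency layer, cached between builds
COPY . .                              # application code layer

# Updating the application means rebuilding only the top layers,
# not the whole stack beneath them
CMD ["python", "service.py"]
```

Because unchanged layers are cached and shared, rebuilding and redeploying a component is fast, which is what enables the short DevOps release cycles mentioned above.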
Much of the core containerisation technology is open source, and suppliers that have previously eschewed it, such as VMware, are being drawn in. At its heart is the Open Container Initiative (OCI) launched in 2015. This operates under the auspices of the Linux Foundation to create open industry standards around container formats and their runtime environment. Docker has donated its own format and runtime to the OCI to serve as the cornerstone.
Many containerised components are downloadable from open collaboration projects such as GitHub and Docker Hub. As with all open source technologies, the suppliers that operate in the market must earn their money by providing stable versions with associated support services.
The container stack
There are four technology layers that need consideration:
1. Container operating systems
Even though containers do not have an embedded OS, one is still needed. Any standard OS will do, including Linux or Windows. However, the actual OS resources required are usually limited, so the OS can be too. This has led to the development of specialist container operating systems such as Rancher OS, CoreOS, VMware Photon, Ubuntu Snappy, the Red Hat-backed Project Atomic and Microsoft Nano Server. The benefit here is that the VMs provisioned to run containers are lightweight (some run in about 25MB) and when it comes to security, the attack surface is minimised. Cloud platform providers are embedding their own support.
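As an illustration of how small the OS layer can be, many teams build on a slim base image; Alpine Linux, for example, is a commonly used base of only a few megabytes (the example below is a sketch, not drawn from any supplier cited here):

```dockerfile
# A slim base keeps both the image and its attack surface small:
# the alpine base layer is a few megabytes, against hundreds of
# megabytes for a general-purpose distribution image
FROM alpine:3.3
RUN apk add --no-cache python3
CMD ["python3", "-m", "http.server", "8080"]
```

The same logic drives the specialist container operating systems listed above: ship only what the containers actually call for.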
2. The container engine
This is where Docker dominates, but there are competitors, such as CoreOS Rocket (Rkt). AWS says Docker is by far the most popular engine with its customers, and therefore the focus of its support plans. Engines come with supporting tools, for example the Docker Toolbox, which simplifies the setup of Docker for developers, and the Docker Trusted Registry for image management. There are also third-party tools, such as Cloud66.
3. Container orchestration
Containers need to be intelligently clustered to form functioning applications, and this requires orchestration. Orchestration is where much of the differentiation lies in the containerisation ecosystem and it is where the competition is hotting up most.
The engines provide basic support for defining simple multi-container applications, for example Docker Compose. However, full orchestration involves scheduling of how and when containers should run, cluster management and the provision of extra resources, often across multiple hosts. Tools include Docker Swarm, the Google-backed Kubernetes and Apache Mesos. You could use general-purpose configuration tools, such as Chef and Puppet (both open source) or commercial offerings such as HashiCorp Atlas or Electric Cloud ElectricFlow. None of these is container specific, however.
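Docker Compose, mentioned above as the basic option, defines a multi-container application declaratively in a single file. A minimal sketch (service names and images are illustrative):

```yaml
# docker-compose.yml - two cooperating containers defined as one application
version: '2'
services:
  web:
    build: .            # image built from the local Dockerfile
    ports:
      - "8080:8080"
    depends_on:
      - cache           # start order: cache comes up before web
  cache:
    image: redis        # pre-built image pulled from a registry
```

Full orchestrators such as Kubernetes or Mesos take a similar declarative description but add the scheduling, cluster management and multi-host resource provisioning that Compose alone does not handle.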
4. Application support services
Many additional tools are emerging to support containerised applications; some examples follow. There is a danger of containerisation ending up like herding cats, which is a problem for application portability. An organisation may want to move an app from one cloud platform to another. Software suppliers will need to consistently recreate their applications for user deployments. How do you ensure all the dependencies and necessary containers are copied and recreated? Rancher Labs’ core product (it also has an OS) enables applications to be built up from containers so that the full operating environment can be recreated, including the containers themselves, load balancers, networking and so on.
Networking is an issue, especially across platforms. In 2015, Docker released Docker Networking to enable virtual connections between containers. UK-based Weaveworks also focuses on networking with WeaveNet, a micro-software-defined network (SDN). Metaswitch’s Project Calico is all about making container networking more secure through the dynamic construction of firewalls, taking policy from orchestrators.
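Policy-driven firewalling of the kind Project Calico performs can itself be expressed declaratively. As a hedged illustration, a Kubernetes NetworkPolicy (using the beta API of the period; the labels and ports are assumed, not from any cited deployment) that admits traffic to a payments service only from a designated front end might look like this:

```yaml
# Restrict inbound traffic to the payments service: only pods
# labelled app=frontend may connect, and only on TCP port 8080
kind: NetworkPolicy
apiVersion: extensions/v1beta1
metadata:
  name: payments-allow-frontend
spec:
  podSelector:
    matchLabels:
      app: payments
  ingress:
    - from:
        - podSelector:
            matchLabels:
              app: frontend
      ports:
        - port: 8080
          protocol: TCP
```

The orchestrator holds the policy; a tool such as Calico translates it into dynamically constructed firewall rules on each host.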
Docker, too, is developing new tools to support the lifecycle of containerised applications. Last year, it acquired a company called Tutum, an on-demand service for building, deploying and managing applications. The Docker Universal Control Plane (UCP) provides similar on-premises capability. Both products are yet to go into production. Weaveworks also has a tool called WeaveScope for production monitoring of containerised applications.
All the big industry players are joining the containerisation bandwagon, with a range of initiatives under way.
Google is an old hand with containers; it has been developing and deploying billions of them internally for years. The company has been a major contributor to various container-related open source projects, including the Kubernetes orchestrator, which it donated.
Google has now opened up this expertise to its customers and added the Google Container Engine to the Google Cloud Platform.
Microsoft has added container support with Windows Server Containers enabling the sharing of the OS kernel between a host and the containers it runs. Hyper-V Containers expands on this by running each container in an optimised virtual machine. For the cloud, there is the Azure Container Service (ACS), developed in conjunction with Docker, which can manage clusters of containers with “master machines” for orchestration. ACS also supports other orchestration tools, such as Kubernetes.
AWS customers were quick off the mark to deploy containers on its EC2 platform, so AWS has followed up by providing a cluster management and scheduling engine for Docker called the EC2 Container Service (ECS). This is supported by the EC2 Container Registry (ECR) to support the storing, management and deployment of container images. ECS is widely available, whereas ECR is currently available only in the eastern US.
VMware has not taken the move to containers lying down. Later this year, it will release vSphere Integrated Containers, using VMware’s Photon OS to turn VMs into Docker-like containers based on the OCI. This will allow users to take advantage of existing vSphere support tools. In a first for VMware, it has open-sourced both Photon OS and the associated Photon Controller.
Other examples include IBM Containers for Bluemix and Rackspace Carina (based on OpenStack Magnum, which embeds support for containers and orchestration in OpenStack). Another open source initiative is Deis, a platform-as-a-service (PaaS) based on CoreOS.
For developers, the open source nature of the container marketplace makes it easy to access the technology and supporting tools and crack on with building agile applications through a DevOps-style process.
This offers many benefits to businesses, but they must consider the supporting platforms and technologies that are endorsed to ensure longer term stability and support. Making such decisions will not be easy as the containerisation market changes.