
Containerization Growth Positions Kubernetes for Wider Deployment

Charles Green
September 11, 2017 | Topic: Cloud Computing

Three years ago Google, along with Microsoft, IBM, and Red Hat, introduced the business technology world to Kubernetes, an open-source system for managing containers. The central benefit of Kubernetes lies in its ability to spread out workloads among a network’s computers, even when those networks span public and private clouds.

Containerization effectively partitions off portions of servers for applications, and it breaks programming code into standard components so that workloads can be automated. Containers can be used, reused, and optimized without waiting for a system-wide upgrade. The transition to containers puts enterprise systems well on the path toward deploying Kubernetes. Using Kubernetes for container management lets applications be scaled up or down with simple commands, through the UI, or automatically based on usage metrics such as CPU utilization.
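As a sketch of the automatic option, the manifest below shows what a Kubernetes HorizontalPodAutoscaler might look like. The deployment name `web` and the replica and CPU thresholds are illustrative assumptions, not details from this article:

```yaml
# Illustrative only: autoscale a hypothetical "web" Deployment between
# 2 and 10 replicas, targeting 80% average CPU utilization per pod.
apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: web-autoscaler
spec:
  scaleTargetRef:
    apiVersion: apps/v1beta1   # Deployments were still in beta as of Kubernetes 1.7
    kind: Deployment
    name: web
  minReplicas: 2
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80
```

Applied with `kubectl apply -f`, a manifest like this delegates scaling decisions to the cluster; the manual "simple command" route would be something like `kubectl scale deployment web --replicas=5`.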

However, the announcement generated surprisingly little reaction in 2014. The muted response was notable given the prominence of the backers, and Google’s disclosure that it had used the underlying Borg technology to run some two billion containers a week for its own services.

In part that was because Kubernetes was introduced amid an industry-wide rethink of IT architecture. After years of struggling with systems overburdened by demand, the software industry had just settled on virtualization as a way of realizing efficiency by re-allocating work to servers with spare capacity. The virtualization model was still showing efficiency gains, and fundamental questions related to security often took precedence over deployment of new models. Moreover, early adopters at times struggled to automate their infrastructures, running into trouble with inter-container management, auto-scaling, and other features that should have delivered gains.

In this environment, adoption of Kubernetes has occurred within the broader trend of microservices. Enterprises typically encounter microservices at a time when they are already looking to rewrite code for the cloud. As explained in Belatrix’s A Software Executive’s Guide to Microservices, available as a free download: “Often it makes sense to use this opportunity to implement microservices, as you can then gradually transition the application to the cloud, rather than undertake a more risky and complex migration.”

Since then, Google has updated Kubernetes, now in version 1.7, on a steady basis. One particularly significant update in 2016, Kubernetes 1.4, not only simplified installation and bootstrapping but also offered a series of security enhancements. (For a detailed analysis of the individual security features in Kubernetes 1.4, see Swapnil Bhartiya’s article on CIO.) Further upgrades have been shaped by feedback from developer conferences.

Soon, though, the pace of deployment may accelerate. For one, containerization has demonstrated impressive gains in enterprises’ ability to synchronize data, align testing phases with deployments, and rapidly introduce updates. As Mike Johnson noted in a post on Supergiant, containerization is now bypassing the server-level engineering friction that has slowed developers’ ability to automate. Increasingly, developers are able to build and test apps within a container, and because the engineering team doesn’t need a fully integrated view of how the app in the container functions, only that it runs, teams are accelerating their speed to market.

Also, while there are a number of container orchestration and management systems to choose from, over the past 12-18 months it has become clear that enterprises are picking between two principal options: Kubernetes and Docker Swarm. Each has its merits, and firms like IBM increasingly appear inclined to use a Kubernetes/Docker stack as the basis for hybrid cloud deployments. Indeed, according to one survey conducted earlier this year by 451 Research and CoreOS, among companies already using containers, 71% were deploying Kubernetes to increase the efficiency and productivity of their cloud deployments.

All this suggests containerization is reaching the point where deployments are becoming sufficiently frequent that major efficiency gains are at hand. This, along with the winnowing of options for container management, positions Kubernetes for wider deployment.

Related articles

A Software Executive’s Guide to Microservices

The top challenges with microservices

Moving to a serverless architecture in 2017

The chatbot revolution
