How Docker brought containers mainstream

If you’re in data center or cloud IT circles, you’ve been hearing about containers in general and Docker in particular non-stop for a few years now. With the release of Docker 1.0 in June 2014, the buzz became a roar.

All the noise is happening because companies are adopting Docker at a remarkable rate. At OSCon in July 2014, I ran into numerous businesses that were already moving their server applications from virtual machines (VMs) to containers. Indeed, James Turnbull, Docker’s VP of services and support, told me at the conference that three of the largest banks that had been using Docker in beta were moving it into production. That’s a heck of a confident move for any 1.0 technology, but it’s almost unheard of in the safety-first financial world.

Three years later, Docker is bigger than ever. Forrester analyst Dave Bartoletti thinks only 10 percent of enterprises currently use containers in production, but up to a third are testing them. 451 Research agrees. By 451’s count, container technologies, most of them Docker, generated $762 million in revenue in 2016. By 2020, 451 forecasts, revenue will reach $2.7 billion, a 40 percent compound annual growth rate (CAGR).

Docker, an open-source technology, isn’t just the darling of Linux powers such as Red Hat and Canonical. Proprietary software companies such as Microsoft have also embraced Docker.

So why does everyone love containers and Docker? James Bottomley, formerly Parallels’ CTO of server virtualization and a leading Linux kernel developer, explained that VM hypervisors, such as Hyper-V, KVM, and Xen, are all “based on emulating virtual hardware. That means they’re fat in terms of system requirements.”

Containers, however, use shared operating systems. That means they are much more efficient than hypervisors in terms of system resources. Instead of virtualizing hardware, containers rest on top of a single Linux instance. This in turn means you can “leave behind the useless 99.9 percent VM junk, leaving you with a small, neat capsule containing your application,” said Bottomley.

Therefore, according to Bottomley, with a perfectly tuned container system you can run as many as four to six times the number of server application instances as you can using Xen or KVM VMs on the same hardware.
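To make Bottomley’s “small, neat capsule” concrete, here’s a minimal, hypothetical Dockerfile sketch. The base image and the “myapp” binary are placeholders, not from any real project; the point is how little the capsule has to carry beyond the application itself:

    # Hypothetical capsule: one pre-built binary on a minimal base image.
    # The base image supplies only a small userland; the kernel comes from
    # the host, so there is no OS installation inside the capsule.
    FROM alpine
    COPY myapp /usr/local/bin/myapp
    # The one process this container exists to run:
    CMD ["myapp"]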

Sounds great, right? You get a lot more application bang for your server buck. So why hasn’t anyone done it before? Well, actually, they have. Containers are an old idea.

Containers date back to at least the year 2000 and FreeBSD Jails. Oracle Solaris has a similar concept called Zones, and companies such as Parallels, Google, and Docker have been working on open-source projects such as OpenVZ and LXC (Linux Containers) to make containers work well and securely.

Indeed, though few of you know it, most of you have been using containers for years. Google has its own open-source container technology, lmctfy (Let Me Contain That For You). Any time you use some piece of Google functionality (Search, Gmail, Google Docs, whatever), you’re issued a new container.

Docker, however, is built on top of LXC. As with any container technology, as far as the program is concerned, it has its own file system, storage, CPU, RAM, and so on. The key difference between containers and VMs is that while the hypervisor abstracts an entire device, containers abstract just the operating system kernel.

This, in turn, means there is one thing hypervisors can do that containers can’t: use different operating systems or kernels. So, for example, you can use Microsoft Azure to run instances of both Windows Server 2012 and SUSE Linux Enterprise Server at the same time. With Docker, all containers must use the same operating system and kernel.
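You can see that shared kernel for yourself. Here’s an illustrative shell session on a Linux Docker host (the kernel version shown is an example, and the images are just two stock distributions):

    # The host's kernel:
    $ uname -r
    3.13.0-24-generic
    # Containers built from entirely different distributions report the same
    # kernel, because they share it with the host rather than booting their own:
    $ docker run --rm ubuntu uname -r
    3.13.0-24-generic
    $ docker run --rm centos uname -r
    3.13.0-24-generic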

On the other hand, if all you want to do is get the most server application instances running on the least amount of hardware, you couldn’t care less about running VMs with multiple operating systems. If multiple copies of the same application are what you want, then you’ll love containers.

This move can save a data center or cloud provider tens of millions of dollars annually in power and hardware costs. It’s no wonder that they’re rushing to adopt Docker as fast as possible.

Docker brings several new things to the table that the earlier technologies didn’t. The first is that it has made containers easier and safer to deploy and use than previous approaches. In addition, because Docker is partnering with the other container powers, including Canonical, Google, Red Hat, and Parallels, on its key open-source component libcontainer, it has brought much-needed standardization to containers.

Since then, Docker has donated “its software container format and its runtime, as well as the associated specifications,” to The Linux Foundation’s Open Container Project. Specifically, “Docker has taken the entire contents of the libcontainer project, including nsinit, and all modifications needed to make it run independently of Docker, and donated it to this effort.” Docker has continued to work on other container standardization efforts.

At the same time, developers can use Docker to pack, ship, and run any application as a lightweight, portable, self-sufficient LXC container that can run virtually anywhere. As Bottomley told me, “Containers give you instant application portability.”
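In practice, that pack-ship-run cycle is three Docker commands. Here’s a sketch, with placeholder repository and tag names:

    # Pack: build an image from the Dockerfile in the current directory.
    $ docker build -t myrepo/myapp:1.0 .
    # Ship: push the image to a registry such as Docker Hub.
    $ docker push myrepo/myapp:1.0
    # Run: any other Docker host can pull and start the identical image.
    $ docker pull myrepo/myapp:1.0
    $ docker run -d myrepo/myapp:1.0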

Jay Lyman, senior analyst at 451 Research, added:

Enterprise organizations are seeking and sometimes struggling to make applications and workloads more portable and distributed in an effective, standardized, and repeatable way. Just as GitHub stimulated collaboration and innovation by making source code shareable, Docker Hub, Official Repos, and commercial support are helping enterprises answer this challenge by improving the way they package, deploy, and manage applications.

Last, but by no means least, Docker containers are easy to deploy in a cloud. As Ben Lloyd Pearson wrote on Opensource.com:

Docker has been designed in a way that it can be incorporated into most DevOps applications, including Puppet, Chef, Vagrant, and Ansible, or it can be used on its own to manage development environments. The primary selling point is that it simplifies many of the tasks typically done by these other applications.

Specifically, Docker makes it possible to set up local development environments that are exactly like a live server; run multiple development environments from the same host that each have unique software, operating systems, and configurations; test projects on new or different servers; and allow anyone to work on the same project with the exact same settings, regardless of the local host environment.
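As one concrete illustration of that first point, here’s a common pattern (the image tag and paths are examples, not prescriptions): drop into a disposable shell whose userland matches the production server, with your project directory mounted inside it.

    # Start an interactive, throwaway container that mirrors production:
    $ docker run -it --rm -v "$PWD":/src -w /src ubuntu:14.04 bash
    # -it   attach an interactive terminal
    # --rm  delete the container when the shell exits
    # -v    mount the current project directory at /src in the container
    # -w    start the shell in /src

Every developer who runs that same command gets an identical environment, no matter what is installed on their own machine.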

In a nutshell, here’s what Docker can do for you: It can get more applications running on the same hardware than other technologies; it makes it easy for developers to quickly create ready-to-run containerized applications; and it makes managing and deploying applications much easier. Put it all together, and I can see why Docker is riding the hype cycle about as fast as I’ve ever seen an enterprise technology go.

Moreover, for once the reality is living up to the hype. Frankly, I can’t think of a single company of any size that’s not at least looking into moving its server applications to containers in general and Docker in particular.
