Container vs Package Deployments

Ever run into this scenario?

Your team has completed its development, reviewed and tested the code on its dev systems, and checked it into the code repository.  The build system has compiled the application, run the unit tests, and done whatever else it needs to validate the build.  But the application fails after it is deployed to some downstream server.  You dig and dig, only to find that there is a configuration difference between the development systems and the downstream systems.

Or this one?

You have completed the development of your application, and suddenly your customer's management decides to change hosting providers.

Root Cause

For a traditional, non-cloud application running on a server, the delivery artifact is typically a software package tailored for the target OS (an RPM, for example).

The typical package deployment strategy involves copying the package archives (containing binary executables, JARs, scripts, etc.) to the host VM, making any necessary configuration changes to that VM, and updating the DB if needed.  All of this work is done with tools available on the target OS, which install the application and manage its dependencies on other packages.  The target OS package management tool installs the application, gives file ownership to the proper users, and ensures the application starts on OS boot and shuts down on OS stop.
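
To make that concrete, a traditional rollout might look something like this (the package, host, and service names are hypothetical):

    # Copy the package to the target VM and install it with the OS tooling;
    # yum resolves the package's dependencies against the OS repositories.
    scp myapp-1.2.3.el7.x86_64.rpm deploy@target-host:/tmp/
    ssh deploy@target-host 'sudo yum install -y /tmp/myapp-1.2.3.el7.x86_64.rpm'

    # Hook the app into the OS init system so it starts on boot and stops on shutdown.
    ssh deploy@target-host 'sudo systemctl enable myapp && sudo systemctl start myapp'

Every one of those steps is tied to the target OS and its package manager; a Debian-based host would need a different package and different commands.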

All of this means that when your customer decides to deploy your application in a new IT data center, you face a development cycle of several months just to update startup scripts and installation procedures.  It also introduces support risk, since there is probably no longer a one-to-one mapping between your dev/QA systems and the production system.  This is even harder if the customer cannot provide a replicated environment for testing, or instructions to configure a VM with their customized installation.  So the production installation may fail, even though it functions correctly in QA.

To get this work done with traditional VMs, your application development team would need to deliver an RPM containing their software and startup scripts, and communicate all of the information about network dependencies, such as firewall rules.  With that, a VM containing all of the dependent software could be created, and each application deployed into its own VM.  These VMs would be large, since VM instances share no common components, and so may be too heavy to deploy to a developer's laptop.

What is the solution?

What is needed in this situation is a deployment unit that is smaller than a VM but still provides app isolation: something that reduces the application's dependencies on OS packaging.

Development and QA will want to work directly with the deployment unit when testing.  This ensures that the same test results are delivered at each step on the release path.

You will also want to automate the construction of this deployment unit in your continuous integration tool, so that your build process creates a single unit that encapsulates all of the application's dependencies, operating environment needs (such as port mappings), and startup requirements.

Enter: Docker

Docker is an open-source project (Apache 2.0 License) that provides a software container in which Linux applications run.  This container provides all of the dependencies that the application needs but avoids the weight of a full VM by sharing the kernel with other containers.  It also provides resource isolation (CPU, memory, I/O, network) and shares resources between running apps where possible (OS, bins/libs).  Containers have much faster start times and far smaller disk storage requirements than VMs, which can translate to higher densities per node.
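
A quick taste (the image name is just an example): starting a container is closer to launching a process than booting a machine, because the host kernel is shared.

    # Start an interactive shell in an isolated container; no guest OS boots,
    # so this takes seconds. --rm removes the container when the shell exits.
    docker run --rm -it ubuntu:16.04 /bin/bash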

The construction of Docker images can be integrated into your current build system, allowing each application to be built and delivered as a Docker container.  This means that developers and testers will run the exact same image that is going to be deployed to production.  Testers should never again hear “it works on my system” from developers.
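
As a sketch of what that CI step might look like (the registry name, build-number variable, and test script are assumptions, not prescriptions):

    # Build the image from the Dockerfile checked in alongside the code,
    # tagging it with the CI build number so every build is traceable.
    docker build -t registry.example.com/myapp:${BUILD_NUMBER} .

    # Run the test suite inside the image that was just built.
    docker run --rm registry.example.com/myapp:${BUILD_NUMBER} ./run-tests.sh

    # Push the image so QA and production pull the identical artifact.
    docker push registry.example.com/myapp:${BUILD_NUMBER}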

Here is a list of some of Docker's advantages:

  • Isolation
      • Filesystem: each container has a completely separate root filesystem; shared files can be “mounted” in from the host OS
      • Resources: CPU and memory can be allocated differently to each container
      • Network: each container has its own network namespace, with its own virtual interface and IP
  • Common deployment unit
      • No worries about supporting different package managers or init mechanisms
      • Images can be stacked / chained together
      • The same container runs on a developer’s laptop, in the CI environment, and in the production environment (see the sketch after this list)
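
To show several of these in one place, here is a minimal sketch.  First, a hypothetical Dockerfile (image stacking in action: the application image layers on a shared Java base image):

    FROM openjdk:8-jre
    COPY app.jar /opt/myapp/app.jar
    CMD ["java", "-jar", "/opt/myapp/app.jar"]

Then build and run it with the isolation controls from the list (the names, limits, and ports are all hypothetical):

    # Build the image from the Dockerfile above.
    docker build -t myapp:1.0 .

    # Separate root filesystem with host config mounted in read-only,
    # per-container memory and CPU limits, and a private network namespace
    # with container port 8080 mapped to host port 80.
    docker run -d \
      -v /srv/myapp-config:/etc/myapp:ro \
      --memory=512m \
      --cpus=1.0 \
      -p 80:8080 \
      myapp:1.0

The image built here is the same unit, unchanged, that runs on a developer’s laptop, in CI, and in production.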

There are many places where Docker images can be put to use.  I would love to hear how you are using them.

Stay safe out there,

Dwain
