Container vs Package Deployments

Ever run into this scenario?

Your team has completed its development, reviewed and tested the code on their dev systems, and checked it into the code repository.  The build system has compiled the application, run the unit tests, and done whatever else it needs to validate the build.  But the application fails after it is deployed to some downstream server.  You dig and dig only to find out that there is a configuration difference between the development systems and the downstream systems.

Or this one?

You have completed the development of your application and suddenly your customer's management decides to change the hosting provider.

Root Cause

For a traditional, non-cloud application running on a server, the delivery artifact is typically a software package tailored for the target OS.

The typical deployment strategy for packages involves copying the package archives (containing binary executables, JARs, scripts, etc.) to the host VM, making any necessary configuration changes to the host VM, and updating the DB if needed.  All of this work is done using tools available on the target OS to install the package and manage its dependencies on other packages.  The target OS package management tool installs the application, gives file ownership to the proper users, and ensures application startup on OS boot and shutdown on OS stop.
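
For example, a traditional RPM-style install on the target VM might look something like this (the package and service name "myapp" is just a placeholder):

$ sudo yum install myapp-1.2.3-1.el6.x86_64.rpm   # install the package; yum pulls in dependent packages
$ sudo chkconfig myapp on                         # register the init script so the app starts on OS boot
$ sudo service myapp start                        # start the application now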

All of this means that when your customer decides they would like to deploy your application in a new IT data center, you will have to go into a development cycle of several months just to update startup scripts and installation procedures.  This also introduces support risk, since there is probably no longer a one-to-one mapping between your dev/QA systems and the production system.  This is even harder if the customer cannot provide a replicated environment for testing or instructions to configure a VM with their customized installation.  So the production installation may fail, even though it functions correctly in QA.

To get this work done using traditional VMs, you would need to have your application development team deliver an RPM containing their software and startup scripts.  They would also need to communicate all of the information regarding network dependencies, such as firewall rules.  This would allow a VM to be created that contains any dependent software.  Each application would be deployed into its own VM.  These would be large in size since VM instances do not share any common components, and so may be too heavy to deploy to a developer's laptop.

What is the solution?

What is needed in this situation is a deployment unit that is smaller than a VM but still provides application isolation.  Something that will reduce the application's dependencies on OS packaging.

Development & QA will want to work directly with the deployment unit when doing their testing.  This will ensure that the same test results are delivered at each step on the release path.

You will also want to automate the construction of this unit of deployment in your continuous integration tool, so that your build process creates a single unit that encapsulates all of the application dependencies, operating environment needs (port mapping), and startup requirements.

Enter: Docker

Docker is an open-source project (Apache 2.0 License) that provides a software container in which Linux applications run.  This container provides all of the dependencies that the application needs but avoids the weight of a full VM by sharing the kernel with other containers.  It also provides resource isolation (CPU, memory, I/O, network) and shares resources between running apps where possible (OS, bins/libs).  Containers have much faster start times and far smaller disk storage requirements, which can translate to higher densities per node.

The construction of Docker images can be integrated into your current build system, allowing each application to be built and delivered as a Docker container.  This means that developers and testers will run the exact same image that is going to be deployed to production.  Testers should never again hear “It is working on my system” from developers.
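
As a rough sketch of what that CI integration could look like (the registry host, image name, and test script below are placeholders, not part of any particular build system):

$ docker build -t registry.example.com/myapp:$BUILD_NUMBER .                         # build the image from the project’s Dockerfile
$ docker run --rm registry.example.com/myapp:$BUILD_NUMBER /opt/myapp/run-tests.sh   # run the test suite inside the freshly built image
$ docker push registry.example.com/myapp:$BUILD_NUMBER                               # publish the exact image that was tested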

Here is a list of some of Docker's advantages:

  • Isolation (illustrated in the sketch after this list)
      • Filesystem: each container has a completely separate root filesystem; shared files can be “mounted” in from the Host OS
      • Resources: CPU and memory can be allocated differently to each container
      • Network: each container has its own network namespace, each with a virtual interface and IP
  • Common deployment unit
      • No worries about supporting different package managers or init mechanisms
      • Images can be stacked / chained together
      • Same container runs on the developer’s laptop, in the CI environment, and in the production environment
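
As a concrete illustration of the isolation points above, here is a sketch of a docker run invocation; the image name, mount path, and resource limits are hypothetical examples:

$ docker run -d --name myapp \
      -m 512m --cpu-shares 512 \
      -v /data/www:/var/www:ro \
      -p 8080:8080 \
      myapp:latest

Here -m and --cpu-shares constrain the container’s memory and CPU weight, -v mounts a host directory into the container’s otherwise separate root filesystem (read-only in this case), and -p maps a container port onto the host.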

There are many opportunities where Docker images can be used.  I would love to hear how you are using them.

Stay safe out there,

Dwain


boot2docker Cheat Sheet

boot2docker – Remote Docker daemon

boot2docker is a lightweight Linux distribution based on Tiny Core Linux made specifically to run Docker containers. It runs completely from RAM, weighs ~27MB and boots in ~5s.

boot2docker is required if you want to do any work with docker images on a Macintosh.  This includes building images and running containers.

Installing Boot2Docker on Mac using homebrew

$ brew install boot2docker

If you are not a user of Homebrew for package management, I highly recommend it.  You can get more information on it and how to install it at: Homebrew

Start boot2docker

$ boot2docker init
$ boot2docker start
$ $(boot2docker shellinit)

“boot2docker init” creates a new VM.  This only needs to be run once unless you delete your VM.

The last line, “$(boot2docker shellinit)”, sets the DOCKER_HOST (and related TLS) environment variables for this shell.
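
The output of shellinit looks roughly like the following (the IP address and paths will differ on your machine); wrapping it in $( ) simply evaluates those export statements in your current shell:

$ boot2docker shellinit
export DOCKER_HOST=tcp://192.168.59.103:2376
export DOCKER_CERT_PATH=/Users/you/.boot2docker/certs/boot2docker-vm
export DOCKER_TLS_VERIFY=1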

SSH into the boot2docker VM

$ boot2docker ssh

Inside the boot2docker VM, the Docker daemon's init script is located at /etc/init.d/docker.

Managing your Boot2Docker VM

There is a limited set of commands that can be used to manage your boot2docker VM, but by using the VirtualBox CLI (VBoxManage) you can fine-tune its configuration.  If you prefer to use a graphical interface to configure the VM, you can use VirtualBox: once boot2docker is up, start VirtualBox and you will see the boot2docker-vm listed there.  The VirtualBox download also includes the documentation for the CLI.
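
For example, to give the VM more memory and an extra CPU with the VirtualBox CLI (the VM must be stopped first; the values here are just examples):

$ boot2docker stop
$ VBoxManage modifyvm "boot2docker-vm" --memory 4096 --cpus 2
$ boot2docker start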

Handling the insecure registry error

Error: Invalid registry endpoint : Get : EOF. If this private registry supports only HTTP or HTTPS with an unknown CA certificate, please add --insecure-registry 168.84.250.205:5000 to the daemon’s arguments. In the case of HTTPS, if you have access to the registry’s CA certificate, no need for the flag; simply place the CA certificate at /etc/docker/certs.d/168.84.250.205:5000/ca.crt

Insecure connections to registries are not allowed by default starting with version 1.3.1 of Docker.  You may receive the error above when attempting to pull from an insecure private registry.  To fix this issue, add your registry to the daemon's EXTRA_ARGS inside the boot2docker VM (replace <registry-host:port> below with your registry's address, for example 168.84.250.205:5000, and repeat the flag for multiple registries):

$ boot2docker init
$ boot2docker up
$ boot2docker ssh
$ echo 'EXTRA_ARGS="--insecure-registry <registry-host:port>"' | sudo tee -a /var/lib/boot2docker/profile
$ sudo /etc/init.d/docker restart
$ exit
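
To confirm the change took effect, inspect the profile and retry the pull (the image name here is a placeholder):

$ boot2docker ssh cat /var/lib/boot2docker/profile
$ docker pull 168.84.250.205:5000/myimage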

Sync boot2docker

The boot2docker VM suffers from clock drift while your OS is asleep.  This issue manifests itself on MacOS; I am not sure whether Windows is affected.  I ran into this issue while compiling code in an image as it was being built.  The build date of the application lagged further and further behind until I restarted boot2docker, at which point the clock would re-sync.  What I needed was the ability to sync boot2docker with a time server every time a new image was built.

To resync the boot2docker VM with a time server:

$ /usr/local/bin/boot2docker ssh sudo ntpclient -s -h pool.ntp.org
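
One simple way to make the resync happen on every build is to chain it onto the build command; a minimal sketch, assuming an image tag of myapp:latest:

$ /usr/local/bin/boot2docker ssh sudo ntpclient -s -h pool.ntp.org && docker build -t myapp:latest .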

Exposing your containers to the network

If you want to share container ports with other computers on your LAN, you will need to set up NAT-adapter-based port forwarding.

On a running instance of boot2docker that is hosting a Tomcat server on port 8080, forward all incoming requests on port 8080 from the host OS to boot2docker:

$ VBoxManage controlvm "boot2docker-vm" natpf1 "tcp-port8080,tcp,,8080,,8080";
$ VBoxManage controlvm "boot2docker-vm" natpf1 "udp-port8080,udp,,8080,,8080";
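
You can then verify the forwarding from another machine on your LAN, and delete the rule once you no longer need it (replace <your-mac-ip> with your host’s address):

$ curl http://<your-mac-ip>:8080/
$ VBoxManage controlvm "boot2docker-vm" natpf1 delete "tcp-port8080"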

As I mentioned above in the section “Managing your Boot2Docker VM”, this can also be configured using VirtualBox.


Docker Cheat Sheet

My last post took you through setting up boot2docker on your system and provided a quick cheat sheet on how to configure and interact with it.

In this post I will be laying out a running list of the commands I use with docker to get the most out of my containers.

What is Docker? (From docker.com)

Docker allows you to package an application with all of its dependencies into a standardized unit for software development.

Docker containers wrap up a piece of software in a complete filesystem that contains everything it needs to run: code, runtime, system tools, system libraries – anything you can install on a server. This guarantees that it will always run the same, regardless of the environment it is running in.

Docker Images vs Docker Containers

Docker images are the basis from which Docker containers are created.  When you start up a Docker image, a Docker container is created.  I liken it to classes and objects: the image (class) represents all of the capabilities of the container (object) once it is instantiated, but an image cannot do anything on its own.  Once a container is created, it can be started and stopped freely, and it saves its state.  You can create multiple instances of a particular image as long as you give them different names.
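
For example, using the public nginx image, you can create two independently named containers from the same image and see them both running:

$ docker run -d --name web1 nginx     # first container created from the nginx image
$ docker run -d --name web2 nginx     # second, independent container from the same image
$ docker ps                           # both containers are listed separately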

Working with Docker Images

Installing Docker using homebrew

$ brew install docker

If you are not a user of Homebrew for package management, I highly recommend it.  You can get more information on it and how to install it at: Homebrew

Open bash prompt in a container

$ docker images
$ CONTAINERID=$(docker run --rm -t -i $image_id /bin/bash)

-i = Keep STDIN open even if not attached

--rm = Automatically remove the container when it exits

-t = Allocate a pseudo-TTY

This command creates a container from the specified image ($image_id), opens a bash shell into it and returns the container id (CONTAINERID)

List all images in your local repository

$ docker images

Remove all untagged images (local image cleanup)

$ docker rmi $(docker images | grep "^<none>" | awk '{print $3}')

Working with containers

Open bash shell in a container

To launch a container, simply use the command docker run + the image name you would like to run + the command to run within the container. If the image doesn’t exist on your local machine, docker will attempt to fetch it from the public image registry.

$ docker run -t -i ubuntu /bin/bash

-i = Keep STDIN open even if not attached

-t = Allocate a pseudo-TTY

List all of the running containers

$ docker ps

List all containers

$ docker ps -a

Get the full container id

$ docker inspect -f '{{.Id}}' $NSM_CONTAINER_NAME

Stop a container and remove it

$ docker rm $(docker stop $CONTAINERID)

Stop all containers

$ docker stop $(docker ps -a -q)

Remove all containers that are not running

$ docker rm $(docker ps -a -q)

Execute a command in a running container

(Requires docker v1.3 or later)

$ docker exec $CONTAINERID <command>

Tail the log on a running container

(Requires docker v1.3 or later)

$ docker exec $CONTAINERID tail -f <path to log file>

Fetch the logs of a running container

$ docker logs $CONTAINERID
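
If you want behavior closer to tail -f without exec’ing into the container, newer Docker releases also let you follow and limit the log output:

$ docker logs -f --tail 50 $CONTAINERID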

Copy files from a container

Start the container

$ CONTAINERID=$(docker run -d $DOCKER_TAG /usr/local/bin/ncm)

Copy file(s) from the container to the destination path

$ docker cp $CONTAINERID:$source_path $destination_path

Shut down the container

$ docker rm $(docker stop $CONTAINERID)

Copy files to a running container

$ FULL_CONTAINER_ID=$(docker inspect -f '{{.Id}}' $NSM_CONTAINER_NAME)
$ sudo cp file.txt /var/lib/docker/aufs/mnt/$(docker inspect -f '{{.Id}}' $NSM_CONTAINER_NAME)/root/file.txt

or

$ sudo cp file.txt /var/lib/docker/aufs/mnt/$FULL_CONTAINER_ID/root/file.txt
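
The aufs path above only exists if your Docker host uses the aufs storage driver.  As an alternative sketch that avoids that dependency (requires docker v1.3 or later and a shell inside the container), you can stream the file in through docker exec:

$ cat file.txt | docker exec -i $NSM_CONTAINER_NAME sh -c 'cat > /root/file.txt'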

Enjoy,

Dwain


Setting the $JAVA_HOME environment variable on OSX 10.5 or later

Setting the $JAVA_HOME environment variable on the Mac OS can be a little confusing. I hope this clears things up.

Open either ~/.bash_profile or ~/.profile in your favorite editor. I am using TextMate and I prefer editing ~/.profile.

Add “export JAVA_HOME=$(/usr/libexec/java_home)” to the file and save it.

You can reload the current environment without logging out by typing “. ~/.profile”

What is going on here?  The java_home man page lays it out pretty clearly.

“The java_home command returns a path suitable for setting the JAVA_HOME environment variable. It determines this path from the user’s enabled and preferred JVMs in the Java Preferences application. Additional constraints may be provided to filter the list of JVMs available. By default, if no constraints match the available list of JVMs, the default order is used. The path is printed to standard output.”

There are several options called out in the man page, but the most important to you are probably “-v” and “-V”.

“/usr/libexec/java_home -V” – Prints the matching list of JVMs and architectures to stderr.

“/usr/libexec/java_home -v” – Filters the returned JVMs by the major platform version in “JVMVersion” form. Example versions: “1.5+”, or “1.6*”.

So, if you need to change JAVA_HOME to an earlier version of Java (maybe you have a program that requires Java 5), add “export JAVA_HOME=$(/usr/libexec/java_home -v 1.5)” to your ~/.profile file.

This assumes that version 1.5 was returned by “/usr/libexec/java_home -V”
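
A quick way to confirm which JVMs are installed and that the change took effect after reloading your profile:

$ /usr/libexec/java_home -V      # lists the installed JVMs and their versions
$ echo $JAVA_HOME                # should now point at the Java 5 home
$ java -version                  # the OS X java stub honors JAVA_HOME, so this should report the matching version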

Dwain


Enable Apache on Mountain Lion

Mac OS X Mountain Lion comes with Apache installed but not enabled. Here are the steps to re-enable Apache:

1. Open the OS X Terminal (/Applications/Utilities/)

2. Create and open an Apache user configuration file named for your account in your favorite editor. I am using TextMate so I used the following command

sudo mate /etc/apache2/users/[username].conf

3. Copy the following text into the file that opens, but be sure to change the [username] text to the short name of your user account:

<Directory "/Users/[username]/Sites/">
Options Indexes MultiViews
AllowOverride None
Order allow,deny
Allow from all
</Directory>

4. Save the file and close the editor.

5. Enable apache by typing

sudo apachectl start

6. Verify Apache is up by typing the following URL into your browser

http://localhost/~[username]

7. To enable the server even after subsequent reboots

sudo defaults write /System/Library/LaunchDaemons/org.apache.httpd Disabled -bool false

8. To disable the server even after subsequent reboots

sudo defaults write /System/Library/LaunchDaemons/org.apache.httpd Disabled -bool true

9. To be sure the files (and any others you may have configured) are properly accessible

sudo chown root:wheel /etc/apache2/users/*
sudo chmod 644 /etc/apache2/users/*
sudo apachectl restart
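
If Apache refuses to start or restart, a syntax check of the configuration usually points at the offending file:

apachectl configtest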

Dwain


A PostMortem Template

In my last two posts, I discussed the importance of engaging in a postmortem at the end of your projects and promised to provide a template that can be followed when gathering feedback prior to the meeting and when consolidating feedback during the meeting.

Templates like this have been created and posted all over the web, so this is really just a collection of what I think are some of the “best of” details that should be gathered together to make a postmortem successful.  I have customized it for my uses.  Feel free to grab it and customize it for yours.

  1. Project
    1. Description
      1. Project Name:
      2. Client:
      3. Project Manager:
      4. Solutions Architect:
      5. Start Date:
      6. Completion Date:
    2. Project Overview [Describe the project in detail.]
      1. Discuss the project charter
      2. What was the project success criterion?
      3. etc.
  2. Performance
    1. Key Accomplishments [List and describe key project accomplishments in the space provided below. Explain elements that worked well and why. Consider listing them in order of importance. Be specific.]
      1. What went right?
      2. What worked well?
      3. What was found to be particularly useful?
      4.  Project highlights
    2. Key Problem Areas [List problem areas experienced throughout the project. Be specific.]
      1. What went wrong?
      2. What project processes didn’t work well?
      3. What specific processes caused problems?
      4. What were the effects of the key problem areas (i.e. on budget, schedule, etc.)?
      5. Technical challenges
    3. Risk Management [List project risks that have been mitigated and those that are still outstanding and need to be managed.]
      1. Project risks that have been mitigated:
      2. Outstanding project risks that need to be managed:
    4. Overall Project Assessment [Score/rank the overall project assessment according to the measures provided. A 10 indicates excellent, whereas a 1 indicates very poor.]

      Criteria                                           Score
      Performance against project goals/objectives       1   2   3   4   5   6   7   8   9  10
      Performance against planned schedule                1   2   3   4   5   6   7   8   9  10
      Performance against quality goals                   1   2   3   4   5   6   7   8   9  10
      Performance against planned budget                  1   2   3   4   5   6   7   8   9  10
      Adherence to scope                                  1   2   3   4   5   6   7   8   9  10
      Project planning                                    1   2   3   4   5   6   7   8   9  10
      Resource management                                 1   2   3   4   5   6   7   8   9  10
      Project management                                  1   2   3   4   5   6   7   8   9  10
      Development                                         1   2   3   4   5   6   7   8   9  10
      Communication                                       1   2   3   4   5   6   7   8   9  10
      Team cooperation                                    1   2   3   4   5   6   7   8   9  10
      Project deliverable(s)                              1   2   3   4   5   6   7   8   9  10
    5. Additional Comments:
      1. Other general comments about the project, project progress, etc.
  3. Key Lessons Learned
    1. Lessons Learned [Summarize and describe the key lessons and takeaways from the project. Be sure to include new processes or best practices that may have been developed as a result of this project and to discuss areas that could have been improved, as well as how (i.e. describe the problem and suggested solution for improvement).]
    2. Post Project Tasks/Future Considerations [List and describe, in detail, all future considerations and work that needs to be done with respect to the project.]
      1. Ongoing development and maintenance considerations
      2. What actions have yet to be completed and who is responsible for them?
      3. Is there anything still outstanding or that will take time to realize? (i.e. in some instances the full project deliverables will not be realized immediately)

Enjoy

Dwain


Wanted: Post-Mortem – Those without Courage and Optimism need not apply (Part 2)

Part 2: A Post-Mortem takes Optimism

In my last post, I wrote about post-mortems and how they require courage to perform well. In this post, I will focus on the need for optimism.

The most important aspect of the post-mortem is the final result. If nothing changes as a result of the meeting, it has been a waste of time. In fact, if the project didn't go well, I would say it was a painful waste of time. Why spend the time rehashing the mistakes if you are not going to put any new processes in place to prevent them from happening next time?

With this in mind, you should go into this meeting with a great sense of optimism. Optimism for the future. There is no reason to have a post-mortem if you don’t think things can or will get better.

To be optimistic we have to make sure that we cover all of the right bases. This means going over the successes as well as the failures. Everyone should come out of this meeting feeling good about themselves and having a plan for their areas of improvement. Covering the bases also means making sure that this is not an opportunity to punish the team members. I am talking to management here. No one is going to open up in the meeting and give their honest opinions if they think they will be punished later. Finally, you need to create a plan of action. This is the frosting on the cake. This is what helps everyone to leave the meeting feeling good and looking forward to the next project.

Can this be done? In my next post, I will lay out a template for a postmortem meeting that you can use to achieve courageous postmortem meetings that are attended by optimistic (maybe even excited) individuals.

Next up: A PostMortem Template

Dwain
