

What is Docker? An absolute beginner’s guide

To the uninitiated: no, we are not talking about the Dockers clothing brand that makes the popular men's khakis. We are talking about the Docker that has changed the way software applications are built, shipped and run. You have probably heard about Docker and how cool it is, but never fully understood it. And you are itching to finally sit down and read about it. You have come to the right place. In this blog post, I'm going to demystify Docker for you. By reading this post fully, you will understand:

  1. What the heck is Docker?
  2. What makes Docker so invaluable and indispensable?
  3. How to install Docker on your PC or MAC?
  4. Most commonly used Docker commands
  5. A full working example


Ready? Let’s begin.

What the heck is Docker?

Docker provides a way for applications to be built and run in a container, with all the required software packaged in it.

But you ask, what in the world is a container?

A container is a Docker process that can run on any Linux or Windows based system. It includes everything the application needs to run: system libraries, application code, application dependencies and configuration files. (Throw a Docker container in a car's bumper and it will still work. Just kidding.) Containers running on a system share the operating system kernel with other processes (including other Docker containers). A Docker container is a mini-machine in its own right.

Tip: You can list all the Docker containers running on your system with the command

docker ps


To contrast this with how software applications are traditionally run, look at the image below.


At this point, you may be wondering, 'Wait a minute. I've seen this before. Are you not talking about virtualization? VMware and stuff?'

Not really. Simply put, VMware virtualizes the underlying hardware resources so multiple operating systems can share one physical machine. Docker, on the other hand, virtualizes at the operating system level: containers share the host OS kernel instead of each carrying a full guest OS (and that host OS could itself be running on a VMware virtual machine).


At this point, if you have been doing application support for a while, one striking advantage should be obvious to you: consistency of environments. Think about how many times you have been told by the development team: 'Oh, but it works in my local dev environment. Something must be wrong in the production servers. Maybe a jar file is missing from the classpath in prod?' Painful. Docker puts an end to all these environment-specific mysteries.

So, to summarize: Docker is a container solution that enables building, shipping and running applications with all the required software in a single unit. The benefits include consistency across deployments, fast startup, and a flexible, developer-friendly build process.

Enough fluff. Let’s get our hands moving by actually running a Docker container.

Installing Docker

Before you can actually run your first Docker container, you have to install the Docker software.

What do you need to begin?

A Linux, Windows or Mac system (can be a desktop or a server).

There are two flavors of Docker: Community Edition and Enterprise Edition. Community Edition is free and mainly geared towards learning and testing purposes. Enterprise Edition comes with a few extra bells and whistles, and also comes with an invoice in the mail. We will be using Community Edition to learn Docker. (Note: you cannot install Docker Enterprise Edition on desktops.) For the full compatibility matrix, check this link.

Note: On older Mac and Windows systems, Docker was installed using Docker Toolbox, a legacy desktop solution that included the docker-machine, docker and docker-compose commands (more on these commands later). If you have a newer Mac (OS X El Capitan 10.11 or later) or Windows system (Windows 10 or later), it is recommended to install Docker for Mac or Docker for Windows instead of Docker Toolbox.


Mac

To install Docker for Mac, download the dmg from Get Docker for Mac (Stable).

Double-click Docker.dmg and drag the Docker icon to the Applications folder.


Note: You must have admin privileges to perform the install; a pop-up asking for them appears when you run the installer.


Once installed, the Docker icon (a whale) should appear in your top status bar.


Windows

Download the installer from Get Docker for Windows (Stable).

Simply launch the exe file and follow the prompts. It should be a super straightforward install.


Ubuntu/Debian

You can use apt-get to install from the Docker repository, or install a deb package using dpkg.

Follow the instructions here.


CentOS/RHEL

You can use yum to install from the Docker repository, or download the rpm from here.

For detailed instructions, go here.

Great, now that you’ve installed Docker, let’s start playing with it using some basic commands.

Using Docker for the first time

We will be using Docker for Mac to illustrate. The commands are exactly the same in other flavors.

The first thing you want to do is make sure you have the latest version of Docker (at least not some ancient version). As of this writing, Docker CE 17.12.0 is the stable version. Click on the Docker icon (the whale) in the top status bar and click About Docker.


Open a terminal and type the following command

~$ docker version
Client:
 Version: 17.12.0-ce
 API version: 1.35
 Go version: go1.9.2
 Git commit: c97c6d6
 Built: Wed Dec 27 20:03:51 2017
 OS/Arch: darwin/amd64

Server:
 Version: 17.12.0-ce
 API version: 1.35 (minimum version 1.12)
 Go version: go1.9.2
 Git commit: c97c6d6
 Built: Wed Dec 27 20:12:29 2017
 OS/Arch: linux/amd64
 Experimental: false

Notice the Go version. Docker is built using the Go programming language.

If you simply type the command docker and press Enter, you will see a nice, concise command list.


If you need help with any of the commands, simply type the following command:

docker <command name> --help

~$docker ps --help

Usage: docker ps [OPTIONS]

List containers

 -a, --all Show all containers (default shows just running)
 -f, --filter filter Filter output based on conditions provided
 --format string Pretty-print containers using a Go template
 -n, --last int Show n last created containers (includes all states) (default -1)
 -l, --latest Show the latest created container (includes all states)
 --no-trunc Don't truncate output
 -q, --quiet Only display numeric IDs
 -s, --size Display total file sizes

Okay, let’s get down to business. Let’s actually run a docker container.

With Docker, there is a concept called images. First, you create (or pull) an image, and then you run the image as a container.

Image ——–> Run ——–> Container.
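That flow can be sketched as a short terminal session (the image and container names here are just the ones used later in this post):

```shell
# Pull an image from Docker Hub (docker run does this automatically if the image is missing)
docker pull ubuntu

# Run a container from the image, with an interactive terminal
docker run -i -t ubuntu /bin/bash

# In another terminal: list the running containers spawned from that image
docker ps
```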

Running your first container

From the shell prompt, execute the following command.

~$docker run -i -t ubuntu /bin/bash
Unable to find image 'ubuntu:latest' locally
latest: Pulling from library/ubuntu
d3938036b19c: Pull complete
a9b30c108bda: Pull complete
67de21feec18: Pull complete
817da545be2b: Pull complete
d967c497ce23: Pull complete
Digest: sha256:9ee3b83bcaa383e5e3b657f042f4034c92cdd50c03f73166c145c9ceaea9ba7c
Status: Downloaded newer image for ubuntu:latest

Let’s dissect the above command.

docker: This is the docker executable

run: This tells docker to run a container

-i: This tells docker to keep STDIN open (i.e. to be interactive)

-t: This tells docker to allocate a tty (so that you get a nice terminal to type commands in)

ubuntu: This is the all-important parameter – it tells docker which image to run. If the image is not available locally, docker will try to pull it from the Docker Hub registry (hub.docker.com)

/bin/bash: This is the command to be run when the container begins running.

Open another terminal window and type the following command:

~$ docker ps
CONTAINER ID   IMAGE    COMMAND       CREATED         STATUS         PORTS   NAMES
a5dfc81d2340   ubuntu   "/bin/bash"   3 seconds ago   Up 5 seconds           practical_kepler

docker ps shows the containers that are currently running. Notice the container ID. Also notice how the docker command borrows ps, the Unix command for showing running processes.

Note: you can use the command docker ps -a to show containers that ran in the past but are no longer running.
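For example (a sketch; the echo command is just an arbitrary short-lived workload):

```shell
# Run a container that prints a message and exits immediately
docker run ubuntu echo "hello"

# It no longer shows up in docker ps...
docker ps

# ...but docker ps -a lists it, with an "Exited" status
docker ps -a
```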

~$ docker images
REPOSITORY   TAG      IMAGE ID       CREATED      SIZE
ubuntu       latest   c9d990395902   3 days ago   113MB

docker images shows the available images. When I ran the docker run command, it pulled the ubuntu image from hub.docker.com. Note the tag latest. Tags are how you keep various versions of an image. If no tag is specified, the default tag is latest.

Digging deep

So, what is the nature of the container I just started? Let’s run a few commands in the container’s bash shell that was started earlier.

root@a5dfc81d2340:/# cat /etc/os-release
VERSION="16.04.4 LTS (Xenial Xerus)"
PRETTY_NAME="Ubuntu 16.04.4 LTS"

So, what you got was Ubuntu 16.04.4 (from the version field).

Note: What if you want a specific version of Ubuntu? You need to go to hub.docker.com and check the available Ubuntu versions.


When you search for ‘ubuntu’, you get a list of official and community images.


Notice the ‘Supported tags and respective Dockerfile links’ section on the image page. Note how 16.04 is tagged as latest. If you want Ubuntu 14.04, for example, your docker run command should look like the following:

~$docker run -i -t ubuntu:14.04 /bin/bash
Unable to find image 'ubuntu:14.04' locally
14.04: Pulling from library/ubuntu
c2c80a08aa8c: Pull complete
6ace04d7a4a2: Pull complete
f03114bcfb25: Pull complete
99df43987812: Pull complete
9c646cd4d155: Pull complete
Digest: sha256:b92dc7814b2656da61a52a50020443223445fdc2caf1ea0c51fa38381d5608ad
Status: Downloaded newer image for ubuntu:14.04
root@2cbd08aaa731:/# cat /etc/os-release
VERSION="14.04.5 LTS, Trusty Tahr"
PRETTY_NAME="Ubuntu 14.04.5 LTS"

docker ps will show you that you are running 14.04:

~$ docker ps
CONTAINER ID   IMAGE          COMMAND       CREATED         STATUS         PORTS   NAMES
2cbd08aaa731   ubuntu:14.04   "/bin/bash"   4 minutes ago   Up 4 minutes           fervent_kepler

Yes, it’s that simple.
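One bit of housekeeping worth knowing at this point: stopped containers and unused images stick around until you remove them. A sketch, using the container ID from the docker ps output above:

```shell
# Stop a running container (sends SIGTERM, then SIGKILL after a grace period)
docker stop 2cbd08aaa731

# Remove the stopped container
docker rm 2cbd08aaa731

# Remove an image you no longer need
docker rmi ubuntu:14.04
```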

Coming back to the Ubuntu 16.04 container we started earlier, let’s run a couple of commands in the container.

root@ff8b7428adf6:/# df -m
Filesystem 1M-blocks Used Available Use% Mounted on
overlay 64251 1447 59512 3% /
tmpfs 64 0 64 0% /dev
tmpfs 1000 0 1000 0% /sys/fs/cgroup
/dev/sda1 64251 1447 59512 3% /etc/hosts
shm 64 0 64 0% /dev/shm
tmpfs 1000 0 1000 0% /proc/scsi
tmpfs 1000 0 1000 0% /sys/firmware

df -m shows the file systems that have been mounted, with their sizes in megabytes.

The beauty of Docker is that all these file systems are completely isolated from the host operating system. It’s as if you are sitting in an OS inside an OS. Whatever you do to this container stays inside the container; the host, and any other containers, are unaffected.
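A quick way to see this isolation for yourself (a sketch; the file path is arbitrary):

```shell
# Inside the container: create a file at the root of its file system
touch /hello-from-container.txt
ls /hello-from-container.txt   # the file exists inside the container

# Back on the host (exit the container first), the same path is empty:
ls /hello-from-container.txt   # fails: No such file or directory
```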

Splunk vs ELK

If you are in IT Operations in any role, you have probably come across either Splunk or ELK, or both. These are two heavyweights in the field of Operational Data Analytics. In this blog post, I’m going to share with you what I feel about these two excellent products based on my years of experience with them.

The problem Splunk and ELK are trying to solve: Log Management

While there are fancier terms such as Operational Data Intelligence, Operational Big Data Analytics and Log data analytics platform, the problem both Splunk and ELK are trying to solve is Log Management. So, what’s the challenge with Log management?

Logs, logs, logs and more logs


The single most important piece of troubleshooting data in any software program is the log generated by the program. If you have ever worked with vendor support for any software product, you have inevitably been asked to provide – you guessed it – log files. Without the log files, they really can’t see what’s going on.

Logs not only contain information about how the software program runs; they may contain data that is valuable to the business as well. Yep, that’s right. For instance, you can retrieve a wealth of data from your web server access logs to find out things like the geographical dispersion of your customer base, the most visited pages on your website, etc.

If you are running only a couple of servers with a few applications on them, accessing and managing your logs is not a problem. But in an enterprise with hundreds or even thousands of servers and applications, this becomes an issue. Specifically,

  1. There are thousands of log files.
  2. The sizes of these log files run into gigabytes or even terabytes.
  3. The data in these log files may not be readily readable or searchable (unstructured data)


Both Splunk and ELK attempt to solve the problem of managing ever growing Log data. In essence, they supply a scalable way to collect and index log files and provide a search interface to interact with the data. In addition, they provide a way to secure the data being collected and enable users to create visualizations such as reports, dashboards and even Alerts.

Now that you know the problem Splunk and ELK are attempting to solve, let’s compare them and see how each achieves it. I’m going to compare them in 4 areas, including:




Learning Curve for the operations team

Got it? I can’t wait to share. Let’s dive in.




Read More

How to enable colors in shell and vi in Mac?

When working with shells, if your Mac does not show colors automatically, you can enable them in two easy steps.

First, add the line shown below to your .bash_profile. This file should be in your home directory.

export CLICOLOR=1
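For reference, here is what the relevant part of a .bash_profile might look like (the LSCOLORS line is optional; it customizes the palette rather than just switching colors on):

```shell
# ~/.bash_profile
export CLICOLOR=1                       # turn on colored output for ls and friends
export LSCOLORS=GxFxCxDxBxegedabagaced  # optional: customize the BSD/macOS color palette
```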

Second, add the following line to your .vimrc file. This file should also be in your home directory; if it is not there, create it.

Read More

What is Virtualization?

Virtualization is a technique by which you can run multiple operating systems (guests) on a physical server (the host) by abstracting (or virtualizing) CPU, memory, disk and network resources. The core component of any virtualization solution is the hypervisor: the software that performs the abstraction of the bare-metal resources.

Here are the primary benefits of using Virtualization:

  1. Save cost on hardware
  2. Centrally manage the infrastructure
  3. Add effective fault tolerance and high availability
  4. Dynamically update the infrastructure

The diagram below shows virtualization at a high level.

Read More

5 reasons why you can’t afford NOT to Virtualize

The verdict is in. Virtualization is the future. If you are still running your applications on bare metal, you are missing out on tons of benefits, maybe even hurting your business. Virtualization is a software technology that lets you run multiple operating systems and applications on one physical server by abstracting the hardware underneath. Among the several virtualization software makers, the following are considered leaders:

VMware (ESXi)

Citrix (XenServer)

Microsoft (Hyper-V)

Let’s dive in to 5 reasons why you can’t afford NOT to virtualize (not necessarily in any order)

Read More

How to use AppDynamics to monitor Server health?

Yes, AppDynamics is awesome for application monitoring – Java heap, deep transaction tracing, tons of out-of-the-box framework monitoring (JDBC, web services, etc.), and the list goes on. But did you know AppDynamics can be used to effectively monitor servers too, whether virtual or physical? When I say server, I mean the host operating system, such as Red Hat Enterprise Linux, Windows 2012, Solaris, etc. Let me show you how you can do this.

Enter AppDynamics Machine Agent

While a Java application can be monitored using a Java agent, a server can be monitored using a special type of agent called the Machine Agent. You have to have a license to run these agents (when you purchase application agents, AppDynamics typically throws in the same number of machine agents, so you should be good in terms of additional cost). If you are not sure about your present licensing situation, click on ‘License’ in your Controller UI.

Unlike application agents, which run inside the JVM/CLR, the machine agent is a standalone Java program that runs on the host operating system. It collects hardware metrics and sends them to the Controller once a minute. A user can view these metrics via the Controller UI. Pretty simple, huh?
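As a rough sketch of what starting it looks like (the install path here is an assumption; adjust to wherever you unzipped the agent, and note that the controller connection details live in the agent's conf/controller-info.xml):

```shell
# Start the AppDynamics machine agent in the background
cd /opt/appdynamics/machine-agent       # hypothetical install location
nohup java -jar machineagent.jar > machine-agent.log 2>&1 &
```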

Read More

How to install Apache Web Server using Yum?

Software installation got a whole lot more fulfilling when yum came along.

Yum is the most popular rpm-based interactive package manager. It is super powerful and reliable.

In this quick article, I show how to install the Apache web server, the world’s most popular web server, on your Linux server.

You need root access to do this. You also need internet access on the server on which you are installing Apache.

Simply run the command

sudo yum install httpd

That’s it. Yum does the rest.

Once done, which takes about 10 seconds, start the httpd server:

sudo service httpd start

Once the service starts, simply use a browser to access the server (use the default server name or the ip address).

Or you can choose the geeky way and use curl to test it out. I created a basic HTML file. Here is how to access it:

curl http://localhost/index.html
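In case you want to reproduce that test, the basic HTML file can be created like this (assuming the default DocumentRoot of /var/www/html):

```shell
# Create a minimal index page in Apache's default document root
echo "<h1>It works</h1>" | sudo tee /var/www/html/index.html

# Fetch it back through the web server
curl http://localhost/index.html
```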

You can check the access log to see how you did. access_log is present under /var/log/httpd/.

The configuration file for Apache (httpd.conf) is under /etc/httpd (this depends on your platform).

That’s it. Your own enterprise grade Web Server, up and running in about 10 seconds.

Way to go, yum!

Buckle up! You can get your own AWS server in the cloud. You can run a variety of operating systems on it, connect to any popular database you want, and even get your hands on some of the coolest products from AWS. I don’t know about you, but I’m psyched about all this.


Yes, the Amazon Web Services Free Tier allows you to have your own server in EC2 for 12 months. If you are new to AWS, this is a great way to get your feet wet, or maybe drenched.

This article shows exactly how you sign up and crank up your own server in cloud. It takes about 15 to 20 minutes to get your hands on a brand new Amazon Linux instance (or Windows or Suse or RHEL…..)

Without further ado, here are the actual steps to follow.

Read More

Introduction to APM: Benefits of APM

So, what can an APM tool buy you? Setting aside the hypothetical ‘peace of mind’ marketing pitch, let me show you exactly how an APM tool can help you support your application effectively.

1. Historic Monitoring of Key Metrics

An APM tool can record monitoring metrics that are invaluable in troubleshooting. For example, take a look at the ‘response time’ graph of a particular application. You can readily see that the application suffers during business hours.


Read More

Introduction to APM (Application Performance Management)

Back in the 90s, when I was working as a Solaris/HP-UX administrator, all I needed was two or three commands to figure out what was wrong with a particular server or application. I would just glance at ‘vmstat’, ‘iostat’ and ‘top’ for a minute or two, and the problem would reveal itself clearly. While those commands still prove valuable at a certain level, to answer ‘Why is the application slow?’ you need much more than just a few OS commands.

Read More