Developers want to build an app once and make it available on multiple platforms. While developing, they shouldn't have to worry about the environment the app will eventually run in. It's really all about the app, and all an app needs is a secure, isolated environment with minimal OS services to run.
We've been using VMs for a long time to run applications on different platforms. They are indeed a major advancement over physical machines. But each VM comes with a full-blown OS that consumes a lot of resources. So wouldn't it be better if we could run our apps without that underlying OS overhead?
Let's find out how to achieve this as we go ahead.
Container
Containers are an application runtime environment, similar to virtual machines. A runtime environment contains everything an application needs in order to execute, but containers are much more lightweight than virtual machines. As we all know, the operating system (Linux, in our case) is installed on top of the physical machine, and the Linux kernel manages the hardware underneath it. Before containers and VMs, every app was installed directly into user space.
Containers let us create multiple isolated instances of user space; each such instance is a container. This type of virtualization is called OS-level virtualization. Containers are lightweight because they all share the single Linux kernel on the host, which makes them faster to start and easier to port.
Each container has an independent, isolated instance of user space. For an isolated user space we need an isolated root file system, process hierarchy and networking stack, so each instance gets its own view of all three. An app running inside a container can therefore make changes anywhere within its own view of the file system without affecting the host or other containers.
This kind of isolation is provided by a Linux kernel feature called namespaces. Namespaces partition system resources (for example, the process ID namespace) and assign a partition to each container.
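You can see a namespace in action yourself, assuming the unshare tool from util-linux is available (it is on most modern distributions). As root, run:
unshare --pid --fork --mount-proc bash
ps aux
The ps output inside that shell shows only a couple of processes, because bash was started in a fresh PID namespace with its own view of the process tree. Type exit to get back to the normal namespace.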
Next, cgroups (control groups) are a Linux kernel feature that limits, accounts for and isolates the resource usage (CPU, memory, disk I/O, etc.) of groups of processes. In the case of containers, cgroups map one-to-one to containers, so we can control how much CPU, memory and so on each container has access to.
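As a rough illustration of what Docker does under the hood, here is how you could cap a shell's memory by hand on a system where the cgroup v1 memory controller is mounted at /sys/fs/cgroup/memory (the group name "demo" and the 256 MB limit are just examples):
mkdir /sys/fs/cgroup/memory/demo
echo 268435456 > /sys/fs/cgroup/memory/demo/memory.limit_in_bytes
echo $$ > /sys/fs/cgroup/memory/demo/tasks
From that point on, the current shell and everything it spawns can use at most 256 MB of memory. Docker sets up equivalent cgroups automatically for every container it starts.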
Docker
Docker provides exactly this kind of isolated runtime environment. It is an open-source platform for developers and system admins to build, ship and run distributed apps.
In my case, I had to ship one of my applications from Ubuntu to CentOS, and that's when I started learning Docker. Porting your application stack together with all its dependencies is always a challenge, and this is where Docker helps: apps can be quickly assembled from components and packaged once. As a result, you can ship faster and run the same app, unchanged, on laptops, data center VMs and any cloud.
Docker brings namespaces, cgroups and related kernel features together in its Docker Engine. It provides a standard runtime that lets developers build an app inside a Docker container, package it and ship it to any host running the Docker daemon, whether that's a data center, AWS, Azure or anywhere else. Docker seems to be evolving into a platform rather than just a runtime.
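Once the Docker daemon is running (we'll install it below), starting an app in a container is a single command; the ubuntu image here is just an example:
docker run -it ubuntu /bin/bash
This pulls the ubuntu image if it isn't already present, creates a container from it, and drops you into a shell that is isolated by exactly the namespaces and cgroups described above.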
Remember I said earlier that Docker containers rely on Linux kernel features such as namespaces and cgroups. Docker uses libcontainer as its default execution driver to access these kernel features, though you can configure it to use LXC or libvirt instead.
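On the Docker 1.x releases used in this post, the execution driver could be switched with a daemon flag; the exact syntax can differ between versions, so check docker --help on your installation before relying on it:
docker -d --exec-driver=lxc
This starts the daemon with the LXC driver instead of the default native (libcontainer) one.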
Installing Docker
We'll install Docker on Ubuntu 14.04. A Linux kernel of at least 3.8 is recommended, both for namespace support and for stability. To check your kernel version, run the following command in a terminal:
uname -a
As you can see, the kernel version on my system is 3.13, which is more than enough. Next, run the following command to update the package lists:
apt-get update
It isn't necessary to log in as root, but I don't like prefixing every command with "sudo".
Next, we'll install Docker using the following command:
apt-get install -y docker.io
Once Docker is successfully installed, check whether the docker.io service is running with the following command:
service docker.io status
Woah! Docker is installed and running, which means the Docker daemon is up. Docker ships with both a client and a daemon.
Now let's use the client to get the version of Docker installed on our system:
docker -v
Then run the following command to get more detailed version information:
docker version
This tells us that the client and the server are the same version (1.0.1 in my case). Good for us!
Next, run the following command to get internal information about the Docker installation:
docker info
As you can see, this gives us information about the containers and images managed by Docker. Docker is using the aufs union file system as its storage driver, and the "native" execution driver indicates that it is using libcontainer. Remember I said earlier that Docker uses libcontainer (or LXC) to talk to the Linux kernel.
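If the output is long, a plain grep (nothing Docker-specific) pulls out just the driver lines:
docker info | grep -i driver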
That's it for this post. By now you should have an idea of what Docker is, what it does, how to install it and how to run some basic commands.
Keep learning! I hope this post gets you started with Docker. If you have any doubts, comment below and I'll look into it.
Happy Reading! :)