Docker and its containers: a new way of deploying your environments

"In my computer it works, but not in the client’s!"  


How many times has this happened to us? We develop a system, test it and make sure it works, but then when we deploy it, everything crashes. What do we do? Redeploy? Debug?

This article describes a tool that helps avoid this uncomfortable moment every developer has experienced.

Below is an introduction to the use of containers, their advantages and disadvantages, and some simple commands to try out.

What is a container?

Software containers are a set of elements that allow a given application to run on any operating system, ensuring that the application keeps working correctly when the environment changes.

The overall idea is that a system developed in a specific environment can be run in other environments with different characteristics, without having to change frameworks or settings.

Thanks to how easily they let you migrate a development from one platform to another, the use of software containers has grown in recent years.

 

Container vs. Virtual Machine


Given that each tool has its own advantages and disadvantages, it is common to wonder which option is more convenient. The answer is that it depends on what you want to achieve. If you will be running different types of applications with varied and heavy resource needs, a VM is probably the better choice. On the other hand, if the idea is to run several copies of the same application, a container may be the best option. Another solution is to use both VMs and containers, and in this way get the best of both worlds; the challenge in that case is how to manage and administer both architectures at the same time.
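As a minimal sketch of the "several copies of the same application" case, assuming a hypothetical web image called mywebapp that listens on port 80:

# three copies of the same image, each mapped to a different host port
docker run -d --name web1 -p 8081:80 mywebapp
docker run -d --name web2 -p 8082:80 mywebapp
docker run -d --name web3 -p 8083:80 mywebapp

The -d flag runs each container in the background, and -p maps a host port to the container's port.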

What is Docker?

Docker is an open-source project that automates the deployment of applications inside software containers, providing an additional layer of abstraction and automation of operating-system-level virtualization.

With this tool, one can bundle an application and its dependencies into a single container that can be run on any server. This gives flexibility and portability in where the application runs: on-premises, in the public cloud, in the private cloud, and so on.
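As a minimal sketch of what bundling an application and its dependencies looks like, assume a small Python application made up of a hypothetical app.py and requirements.txt; the base image and names below are just examples:

# Dockerfile
FROM python:3.11-slim
WORKDIR /app
COPY requirements.txt .
RUN pip install -r requirements.txt
COPY . .
CMD ["python", "app.py"]

# build the image once, then run it on any machine that has Docker installed
docker build -t myapp .
docker run myapp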

Docker uses kernel resource isolation features, such as cgroups and namespaces, to allow independent "containers" to run within a single Linux instance, avoiding the overhead of starting up and maintaining virtual machines.

Linux kernel support for namespaces isolates an application's view of its operating environment, including process trees, networking, user IDs and mounted file systems. Kernel cgroups, in turn, provide resource isolation, including CPU, memory, block I/O and the network.
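Those cgroup limits are exposed directly through docker run. A minimal sketch, where the ubuntu image is just an example and --cpus and --memory map to the CPU and memory cgroups:

docker run -it --cpus="1.0" --memory="256m" ubuntu /bin/bash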

Since version 0.9, Docker includes the libcontainer library as its own way of directly using the virtualization facilities provided by the Linux kernel, in addition to the abstracted virtualization interfaces it already supported.

Basic Docker Commands

To list our active containers we use the following command:

docker ps 

To create a container we use the following command: 

docker run -it <image> (the -it flag runs the container in interactive mode)
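For example, to get an interactive shell inside a container (the ubuntu image is just an example; any image works):

docker run -it ubuntu /bin/bash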

To list containers that have been created but are not currently running, we use the command:

docker ps -a

To start an already created container:

docker start <id>

To remove a container, we must first stop it with the stop command:

docker stop <id>

Then we can delete it with the command: 

docker rm <container_name>

A running container can be removed directly by using the -f flag:

docker rm -f <container_name>

Other Docker commands are:

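A few commonly used examples, for reference:

docker images                      # list locally stored images
docker pull <image>                # download an image from a registry
docker build -t <tag> .            # build an image from a Dockerfile
docker logs <id>                   # show a container's output
docker exec -it <id> /bin/bash     # open a shell inside a running container
docker rmi <image>                 # delete a local image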

Useful links:

https://docs.docker.com/get-started/

https://www.codeschool.com/courses/try-docker

Authors:

Javier Horacio Campa - .NET Developer

Gregorio Michalopulos - .NET Developer

José María Virgili - .NET Developer

 
