Containerization: A Brief Introduction

Containerization is the process of bundling an application together with all of its required libraries, frameworks, configuration files and dependencies so that it can run efficiently across different computing environments. Containers are isolated from one another but run on a single host and share that host's operating system kernel. Because each container carries the application's complete runtime environment with it, developers can work with identical environments from laptop to production.
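
To make this concrete, here is a minimal sketch of starting a container programmatically using the Docker SDK for Python (the docker package). The image name and command are purely illustrative, and it assumes Docker is installed with its daemon running.

```python
# Minimal sketch using the Docker SDK for Python (pip install docker).
# Assumes a local Docker daemon is running; "alpine" is just an example image.
import docker

client = docker.from_env()

# The image bundles the application with its libraries and configuration,
# so this command behaves the same on any host with a compatible kernel.
output = client.containers.run(
    "alpine",                            # image to run
    ["echo", "hello from a container"],  # command executed inside it
    remove=True,                         # clean up the container on exit
)
print(output.decode())
```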

How it started

Containerization was born in 1979 with the chroot system call in Version 7 Unix, then lay dormant for twenty years. In the early 2000s, FreeBSD introduced “jails”, which partition a single system into several isolated mini-systems. The idea was developed further over the following years: Linux-VServer added resource partitioning in 2001, Linux namespaces arrived in 2002, and control groups (cgroups) followed in 2006. At first, cgroups were used to isolate the usage of resources such as CPU and memory; in 2008 they were combined with namespaces in Linux Containers (LXC), which became the most stable container technology of its time running on the Linux kernel. Its reliability encouraged other technologies to build on top of it, one of them being Docker in 2013.

The Golden Age

Docker introduced an easy way to create and maintain containers, which was a great success and the start of a golden age. Soon after, in 2014, rkt (pronounced “Rocket”) rode Docker’s success onto the market, offering an alternative engine. It tried to solve some of Docker’s problems by imposing stricter security requirements when creating containers. Docker has since adopted containerd as its runtime, addressing much of what rkt set out to do. In 2016, both Docker and rkt gained the ability to run on Windows machines using the native Hyper-V hypervisor. The arrival of Kubernetes, a highly effective orchestration technology, and its adoption by the major cloud providers made orchestration an industry-standard companion to container technology. The momentum continues to this day with further improvements introduced by the community.

Containerization vs Virtualization 

Virtualization is the process of emulating a computer on top of a real computer using hypervisor software. A hypervisor, also known as a virtual machine monitor, manages the execution of guest operating systems (OS) on the physical hardware. Thus, on a single powerful machine, there can be multiple virtual machines, each with its own processing space, network interface, kernel, file systems and everything else a fully functional operating system entails.

Both technologies have their advantages and disadvantages. If you require an OS’s full functionality, a long life cycle, or want to deploy multiple applications on one server, then virtual machines are better suited to your needs. However, if you want to use fewer resources and your application has a short life cycle, containers are the better choice. They are designed to contain what an application needs and nothing more, which makes them well suited to a single task, i.e. a microservice. In terms of development, virtual machines tend to have a more complex life cycle, because both the number of virtual copies and the resources they require grow, so more thought must go into design and implementation. Although containers are small, fast to start in most cases and use fewer resources, they are more vulnerable to threats because they all run on the same operating system kernel. This also means that every container on a particular host machine must be designed to run on the same kind of operating system, whereas a virtual machine can run a different operating system than the underlying host.

Development life cycle with containers

From my experience, working with containers and microservices on a day-to-day basis is a lot faster than working with monolithic systems. The code in a microservice running inside a container is much easier to understand because it is typically designed to perform a single task. A specific task requires less implementation, which means the code can be debugged rapidly. This significantly shortens the feature request, development and bug-fixing cycles.

Containers are lightweight and can be moved between different systems, which makes them very versatile. The time to create one is minimal, and an image can be used immediately after being pushed to a repository. It is easy to set up your own repository similar to Docker’s default one, Docker Hub, where millions of images are freely available to download and build upon. Used in conjunction with an orchestrator like Kubernetes, containers carrying a given piece of functionality can be spun up very quickly.
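
As a sketch of that workflow, the snippet below builds an image from a local Dockerfile and pushes it to a repository, again using the Docker SDK for Python. The registry address and tag are hypothetical placeholders, and it assumes you are already authenticated against the registry.

```python
# Sketch: build an image and push it to a registry (pip install docker).
# Assumes a Dockerfile in the current directory and an authenticated
# registry session; "registry.example.com" is a placeholder address.
import docker

client = docker.from_env()

# Build the image from ./Dockerfile and tag it for the target registry.
image, build_logs = client.images.build(
    path=".",
    tag="registry.example.com/myteam/myapp:1.0",
)

# Push the tagged image; any host (or an orchestrator such as Kubernetes)
# can now pull and run exactly the same artifact.
for line in client.images.push(
    "registry.example.com/myteam/myapp",
    tag="1.0",
    stream=True,
    decode=True,
):
    print(line)
```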

Conclusion

In conclusion, containers are revolutionizing the way we develop and deliver applications. They are very portable thanks to their write once, run anywhere structure. The same container can be used locally, in the cloud or in a development environment, eliminating the “works on my machine” problem heard so often in the industry. There are still some security concerns, but the technology is constantly evolving and these will likely be addressed over time. Orchestrators such as Kubernetes help greatly by providing install, upgrade and rollback functionality.


Author
Damian Kluziak
Software Engineer | Ammeon
