
Drivers for Containerizing Applications and Container Architecture Overview

As highlighted in the intro blog, this blog series defines and analyzes the different design and architecture components of a Google Cloud based, Kubernetes containerized solution architecture (Google Kubernetes Engine, aka GKE).

Before we dive into Kubernetes architecture components and design considerations, let's start by defining and describing container architecture, the characteristics of containerized applications, and the approaches to using containers.

Today, with virtualization and virtual machines, the common way system or application admins deploy applications is by spinning up a VM with the required operating system plus all the libraries and dependencies the applications need.

The question here is: is there any issue with this proven approach? In general, the answer is yes, there are a few.

First of all, VMs are loaded with a full OS and application libraries, so they tend to take longer to restart or start up.

The other key issues are flexibility and reliability when running different applications. For instance, if a VM is running two or more applications, X, Y, and so on, and application X requires a dependency or library update, what will the impact be on the other applications running on the same VM?

This means we not only have to test application X with the new library update, but also test application Y against it to see the impact. If the system or application admin misses testing the other applications, there could be a negative impact on application Y and any other application running on the same VM.

Similarly, an OS update almost always includes dependency updates, including libraries, and this could introduce compatibility issues between the running applications and the newly updated runtime libraries.

In other words, there is tight coupling (interdependencies) between the applications, the underlying VM OS, and the runtime libraries.

That's why modern system and application admins and developers realized there is a need for a new approach capable of isolating applications, so that each application carries only its own specific dependencies without impacting or interfering with other applications.

Moreover, what if we could shed the weight of the entire VM host operating system, so that system and application admins and developers only need to ensure they have exactly what the application(s) require to run? In turn, this could reduce deployment time to a few minutes or even seconds! Not to mention, microservices application architectures need a model that provides a fast and easy way to provision infrastructure focused on applications' functions or capabilities with loose coupling, yielding a more modular and reusable application architecture.

This is where containers come into the picture.

At a high level, containers are a sort of isolated partition within an OS. According to Docker, "A container image is a lightweight, stand-alone, executable package of a piece of software that includes everything needed to run it: code, runtime, system tools, system libraries, settings."

The following figure illustrates the classical virtual machine deployment model vs. the containers approach (containers can run on a physical machine or on a virtual machine).

With the containers approach, it is obvious we can run each application with the minimum dependencies it needs. Applications X and Y are each loaded into their own container, and each container holds only the minimum libraries, code, settings, etc. required to run that application. It also isolates the libraries and the runtime resources (such as CPU and storage) consumed by an application, reducing the effect of any operating system update.

As a result, this offers faster application testing and deployment (containerized applications can be provisioned within seconds), while the running applications and their dependencies are completely isolated from the host operating system (more portable, "can run anywhere"). In other words, by using a container image you combine the application with its required libraries, runtime, and dependencies to build an isolated executable environment, a container, which can be deployed on the platform of your choice, including desktops, VMs, on-premises, or cloud.

“Containers isolate software from its surroundings, for example differences between development and staging environments and help reduce conflicts between teams running different software on the same infrastructure.”
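To make this more tangible, here is a minimal sketch using the Docker SDK for Python (docker-py). The ./app-x directory, the Dockerfile it is assumed to contain, and the app-x:1.0 tag are illustrative placeholders, not part of the original post.

```python
import docker

# Connect to the local Docker daemon.
client = docker.from_env()

# Build an image that packages the application code plus its dependencies.
# Assumes ./app-x contains the application and a Dockerfile describing it (hypothetical path).
image, build_logs = client.images.build(path="./app-x", tag="app-x:1.0")

# The same image can now run unchanged on a laptop, a VM, or a cloud host.
container = client.containers.run("app-x:1.0", detach=True, name="app-x")
print(container.short_id, container.status)
```

The key point is that the image, not the host, carries the dependencies, which is what makes the result portable.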

If we look at Docker containers, technically Docker uses a client-server type of architecture. The client is where the command-line tool runs when we interact with Docker. Another key component in the Docker architecture is the Docker service, which can run on the same or a different host from the client (though not the optimal way to do it in a production environment). The Docker service contains a daemon that handles all of the building, running, and downloading of images. Technically, the Docker client only delivers command instructions, acting as an HTTP API wrapper.

On the other hand, the Docker daemon is the brain behind the entire process. For instance, when the system admin uses the 'docker run' command to spin up a container, the Docker client typically converts this CLI command into an HTTP API call and pushes it to the Docker daemon; the daemon in turn translates the request, communicates with the underlying host elements (e.g., the OS), and then provisions the container.
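As a rough sketch of that flow, the Docker SDK for Python (docker-py) exposes both a high-level client and a low-level client that speaks the daemon's HTTP API directly; the nginx image below is just an example workload.

```python
import docker

# High-level client: roughly what happens behind a `docker run -d nginx`.
client = docker.from_env()  # connects to the local Docker daemon over its socket
container = client.containers.run("nginx:latest", detach=True)

# Low-level client: talks to the daemon's HTTP API more directly,
# the same layer CLI commands are translated into.
api = docker.APIClient(base_url="unix://var/run/docker.sock")
print(api.inspect_container(container.id)["State"]["Status"])  # e.g. "running"

# Clean up the demo container.
container.stop()
container.remove()
```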

With containers we need images; they are templates for containers and a powerful tool for building and rebuilding them. In fact, from one template we can create several containers (as many as your system capacity allows), with support for versioning, packaging of the libraries an application requires, and so on. These images are located in registries, which can be public, like Google Container Registry or Docker Hub, or private, hosted in a data center or even on your laptop.
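For example, assuming a host with Docker and the Docker SDK for Python installed, one image pulled from a public registry can serve as the template for several containers (the redis image and the cache-N names are purely illustrative):

```python
import docker

client = docker.from_env()

# Pull the image (the template) once from a public registry such as Docker Hub.
image = client.images.pull("redis", tag="7")

# Create several containers from that single template; each runs in isolation.
replicas = [
    client.containers.run(image.id, detach=True, name=f"cache-{i}")
    for i in range(3)
]
print([c.name for c in replicas])
```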

Docker is one of the early implementations of containers (but not the only one). It relies on Linux kernel features such as namespaces, control groups (cgroups), and SELinux. Namespaces > isolate processes, giving each container its own view of system resources. cgroups > limit and account for the host resources (CPU, memory, I/O) each container can consume. SELinux > protects access between containers, and between containers and the host.
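As a hedged illustration of cgroups at work, the Docker SDK for Python lets you attach resource limits when a container is created; the image, command, and limit values below are arbitrary examples, not recommendations.

```python
import docker

client = docker.from_env()

# cgroups in action: cap the memory and CPU this container may consume on the
# shared host (image and limits are illustrative values only).
output = client.containers.run(
    "python:3.11-slim",
    ["python", "-c", "print('hello from an isolated process')"],
    mem_limit="256m",        # cgroup memory limit
    nano_cpus=500_000_000,   # roughly half a CPU core
    remove=True,             # delete the container once the process exits
)
print(output.decode())
```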

Furthermore, containers can be characterized as 'reusable' entities, because the same container image can be reused by different applications without the need to build and install a full operating system. For instance, consider a container based on a MySQL DB image used as the backend for an application: when this container is removed, the system or application admin can easily and quickly recreate the MySQL container without going through any OS setup tasks, as sketched below.
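Here is a minimal sketch of that MySQL example, again using the Docker SDK for Python; the container name, image tag, and password are placeholder values.

```python
import docker

client = docker.from_env()

# Recreate a MySQL backend in seconds from the same image, with no OS install steps.
# Name, tag, and password below are illustrative placeholders.
db = client.containers.run(
    "mysql:8.0",
    detach=True,
    name="app-mysql",
    environment={"MYSQL_ROOT_PASSWORD": "change-me"},
)
print(db.name, db.status)

# Removing and recreating it later is just as quick:
db.stop()
db.remove()
```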

In addition, containers boost the microservices development approach because they provide a lightweight and reliable environment for creating and running services that can be deployed to a production or development environment without the complexity of a multi-machine setup.

Last but not least, not every application can be containerized today. Some applications may require access to lower-level hardware information, such as file systems or memory, and for those a containerized approach may not be the optimal solution due to container constraints. That is fine, because in reality you will always have a mixture of bare metal, VMs, on-premises, cloud, etc.; the goal is always to optimize wherever possible using the most suitable architecture.

In summary, with container images we confine the application code, its runtime, and all the required library dependencies in a pre-defined format, so we can create and provision one or more containers. Single or multiple containers running on a single host do not need a dedicated full-blown operating system; they share the host's. Typically, this optimizes system resource utilization (storage, memory, and CPU) and helps system admins and application developers avoid OS-related maintenance, upgrade, and patching impacts. Also, containers are far lighter than VMs (a VM's size is measured in GBs while a container's is in MBs), which makes them easier and faster to scale (out and in) and improves their portability between cloud environments (hybrid and multi-cloud), as they have fewer cloud platform dependencies compared to IaaS and PaaS solutions.

That being said, so far we have focused on running containers on a single host. In practice, production environments deal with cluster(s) of hosts, containers, and applications that we want to keep fault-tolerant and scalable, as well as easy to manage and control through a single controller/management unit once multiple nodes are connected together in a cluster, in a similar fashion to how VMs are managed and controlled from a single interface. Part 2 of this blog series will discuss container cluster orchestration with a focus on the GKE architecture.

