
Google Cloud Kubernetes Engine – Pod Design Considerations

The previous post in this blog series highlighted that "technically, a Pod on a worker node can run one or more containers".

Based on this, someone might ask: why don't we run a containerized web application, such as a WordPress container, and its backend database container, such as PostgreSQL or MySQL, in the same Pod? Technically, this is doable. However, from a containerized systems and application architecture perspective with Kubernetes, this can be seen as an anti-pattern for Pod construction.

This is because the front-end web application (WordPress) and the back-end DB in this architecture are not tightly coupled (symbiotic). In other words, if the front-end container and the back-end container are hosted on two different Nodes or Pods, nothing impacts their operational model as 'web and DB' systems; they can still work and communicate over a network, unless there is a networking issue between them, such as a capacity limitation or latency.

This leads us to an important design aspect with regard to how systems or application teams deal with the different tiers of an application. With this simple multi-tiered application, you almost never need to scale the two tiers (front-end and back-end) up or out at the same rate. For instance, the front-end here (WordPress) might be deployed as stateless, and during certain peak hours you may need to scale out the web front-end containers in response to the load at that time. With Kubernetes, this means adding more Pods, and with them more web containers.

However, because a Pod is an 'atomic' unit, all the containers in a Pod must be up and running for the Pod to be operational. With this scaling strategy, we would therefore have to scale out Pods carrying both the WordPress container and its DB container.

As a result, each Pod would require more resources, as it hosts both the web and DB containers. Also, from an application and DB admin point of view, scaling the DB with multiple instances/replicas is not as straightforward as scaling the stateless front-end web server.

Therefore, when designing containerized applications to take advantage of the agility and elasticity of Kubernetes, the question that needs to be asked is: "can these containerized applications be loosely coupled, running on different machines?" If "yes", then a Pod per container is the way to go (e.g. a microservices architecture). As illustrated in the figure below, the blue container represents WordPress and can now be scaled out independently from the DB.
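As a minimal sketch of this model (the names, image, and replica count here are illustrative, not from the original post), the stateless WordPress tier can be wrapped in its own Deployment and scaled out independently of the database:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: wordpress
spec:
  replicas: 3                      # scale the web tier alone; the DB runs in its own Pods
  selector:
    matchLabels:
      app: wordpress
  template:
    metadata:
      labels:
        app: wordpress
    spec:
      containers:
      - name: wordpress            # one container per Pod
        image: wordpress:latest
        ports:
        - containerPort: 80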

If, on the other hand, the two containers must reside in the same Pod for whatever technical reason, then multiple containers in a Pod is the way to host these tightly coupled containers, taking into consideration the points discussed above. In addition, containers running in the same Pod share all the Pod's resources (storage volumes, hardware resources, virtual network, etc.), as the sketch below shows.
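A minimal sketch of the multi-container model (the container names, images, and shared volume are illustrative): both containers mount the same volume, and because they share the Pod's network namespace they can reach each other over localhost:

apiVersion: v1
kind: Pod
metadata:
  name: tightly-coupled-app
spec:
  containers:
  - name: web
    image: wordpress:latest
    volumeMounts:
    - mountPath: /shared           # same volume visible to both containers
      name: shared-data
  - name: helper
    image: busybox
    command: ["sh", "-c", "sleep 3600"]
    volumeMounts:
    - mountPath: /shared
      name: shared-data
  volumes:
  - name: shared-data
    emptyDir: {}                   # Pod-scoped volume shared by all its containers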

After deciding which model to use for each of the application's tiers, it's time to look at what should be considered inside a Pod.

Both Pods and containers are ephemeral, and the files inside a container are ephemeral as well, which means they are lost when the container is moved or terminated; this can be even worse if you are running stateful applications. In addition, the IP address allocated to a Pod is ephemeral, which means that if the Pod is restarted, its internal IP will be different. So we have two key issues here, due to the fact that a Pod is ephemeral: data loss, and connectivity loss due to the possibility of a changing IP.

Let’s first see how Kubernetes Persistent Volumes address the storage issue.

Persistent Volumes

Keeping data in a containerized environment with multiple hosts acting as a container cluster is not an easy job: volumes have to be managed across the containers and the underlying hosts, provisioned dynamically, and shared among containers. Kubernetes volumes introduce several features and capabilities to make this job easier and more reliable.

Kubernetes supports a wide range of persistent network-attached storage types that all offer the ability for a storage volume to live outside the life of a Pod, which means the lifecycle of a volume can be completely independent of the lifecycle of a Pod. This could be NFS, AWS EBS, GCE Persistent Disk, Azure Data Disk, iSCSI, Fibre Channel, vSphere volumes, etc.

With persistent volumes, a pre-defined volume is automatically attached to a Pod and its container(s) and provisioned. It is then detached from the Pod/container on termination, but the data stays intact (configurable), and this is key. If another Pod comes up and references the same instance of the volume, it will get the exact same data.

Below is a sample Pod YAML configuration file in which the Pod references an existing Google Compute Engine (GCE) Persistent Disk volume called "disk-106" for use by any container hosted by this Pod.
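A minimal reconstruction of such a manifest, using the in-tree gcePersistentDisk volume type (the Pod name, image, and mount path are illustrative; only the disk name "disk-106" comes from the original example):

apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  containers:
  - name: wordpress
    image: wordpress:latest
    volumeMounts:
    - mountPath: /var/www/html   # where the container sees the disk
      name: wordpress-volume
  volumes:
  - name: wordpress-volume
    gcePersistentDisk:
      pdName: disk-106           # pre-provisioned GCE Persistent Disk
      fsType: ext4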

What is interesting here is that, when the Pod is created, Kubernetes calls out to the GCP APIs, attaches the specified existing disk volume to the worker node that the Pod is scheduled to, and mounts it to the container in that Pod. Also, when the Pod is moved to another worker node in the cluster, Kubernetes again calls out to the GCP APIs to detach the disk and attach it to the node the Pod was moved to, and again mounts the volume to the container(s) in the Pod.

The limitation with this approach is that each volume needs to be pre-provisioned and explicitly referenced in the Pod configuration, which limits the flexibility and simplicity of dealing with K8s clusters at scale.

That is why K8s provides an abstraction layer over persistent volume provisioning, called a "PersistentVolumeClaim", which is like a storage request and helps make the cluster independent of the underlying storage volume provider (which could be GCP, AWS, or even on-premises). There are two approaches to deploying this persistent volume model:

Static Volume Provisioning:

With this approach, the cluster or system admin provisions the specification of the persistent storage disk provider as "PersistentVolumes". Then the storage volume consumer (the Pod layer) requests one using a "PersistentVolumeClaim", and the Pod mounts the storage volume requested by the "PersistentVolumeClaim".

As illustrated in the figure above, the Kubernetes "PersistentVolumeClaim" acts as an abstraction layer to decouple Pod-requested volumes from the actual physical storage volume. With the static approach, the "PersistentVolumes" (PVs) have to be pre-provisioned (the provisioning is environment specific); in GCP, this means the Persistent Disks have to be provisioned. Different "PersistentVolumeClaims" can then be used to request the required storage capacity, access mode, etc., and Kubernetes will take each request and find a match among the available PVs to bind to the PVC. With this approach, the system admin or application developer can reference the PVC instead of a specific volume, which makes the Pod definition and overall architecture more portable and deployable across different environments, while the PV has to use the specific environment's APIs to provision the actual physical disks.
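A minimal sketch of the static model (all object names are illustrative, and the disk "disk-106" is assumed to be pre-provisioned in GCE): the admin defines the PV, the consumer files a PVC, and the Pod references only the claim:

# Admin side: a pre-provisioned GCE disk exposed as a PersistentVolume
apiVersion: v1
kind: PersistentVolume
metadata:
  name: pv-disk-106
spec:
  capacity:
    storage: 100Gi
  accessModes:
  - ReadWriteOnce
  gcePersistentDisk:
    pdName: disk-106
    fsType: ext4
---
# Consumer side: a claim that Kubernetes binds to a matching PV
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi
---
# The Pod references the claim, not the underlying disk
apiVersion: v1
kind: Pod
metadata:
  name: wordpress
spec:
  containers:
  - name: wordpress
    image: wordpress:latest
    volumeMounts:
    - mountPath: /var/www/html
      name: web-storage
  volumes:
  - name: web-storage
    persistentVolumeClaim:
      claimName: wordpress-claim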

Dynamic Volume Provisioning:

The portability provided by "PersistentVolumes" and "PersistentVolumeClaims" is great; however, as you may have noticed, there is still a manual step that always needs to be done by the system/cluster admin: pre-provisioning all the volumes (PVs), which is a limiting factor in scalable and dynamic environments.

This is where the unique Kubernetes capability of dynamically creating storage volumes can be used. With this approach, Kubernetes automatically provisions a storage volume to fulfil a PVC request. This is a very effective approach, especially in cloud environments like GCP, where an API call can provision a storage volume in seconds.

To use this dynamic provisioning approach, a StorageClass object needs to be configured in addition to the PVC. A StorageClass's role is to dynamically provision a PV and then associate it with a PVC. In GCP, it also offers the ability to specify any storage type supported by GCE, e.g. pd-ssd, as well as the desired zone.
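A minimal sketch of the dynamic model (the class name, claim name, and zone are illustrative): the StorageClass uses the GCE Persistent Disk provisioner, and any PVC that names that class triggers the automatic creation of a matching disk and PV:

apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: fast-ssd
provisioner: kubernetes.io/gce-pd   # GCE Persistent Disk provisioner
parameters:
  type: pd-ssd                      # GCE storage type
  zone: us-central1-a               # desired zone (illustrative)
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: wordpress-dynamic-claim
spec:
  storageClassName: fast-ssd        # request dynamic provisioning via the class
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 100Gi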

For further details, refer to:

https://cloud.google.com/kubernetes-engine/docs/concepts/persistent-volumes

The next blog in this series will discuss the ephemeral nature of a Pod, its impact on IP communications, and how Kubernetes/GKE addresses this issue.
