Kubernetes: An Ammeon Overview

The strange-sounding name Kubernetes comes from the Greek word for helmsman and shares an etymological root with the word cybernetics. The image is apt: Kubernetes is the platform that steers the ship of container-based, cloud-native applications. The project was originally designed at Google and announced in 2014, heavily influenced by Google's internal Borg system; where Borg was written in C++, Kubernetes was written in Go. Google open-sourced the project and later donated it to the Cloud Native Computing Foundation, which now stewards its development.

What is Kubernetes?

Kubernetes is widely configurable in terms of its deployment model; however, it is always made up of Master and Minion nodes in a primary/replica architecture. The role of the Master node is to provide, amongst many other components, etcd, a key-value data store holding the cluster state; an API server, which allows the passing of JSON over HTTP; and a Scheduler, which acts like an OS process manager by slotting Kubernetes Pod objects into available capacity on the Minion nodes.

The Minion nodes in the Kubernetes deployment run the containerized software in 'Pod' objects. These pods can consist of one or more containers which are co-located on the same node and can share resources. Each pod is given a unique identifier and communicates across the Kubernetes cluster through a Service object, with service discovery handled via environment variables or Kubernetes DNS.
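As a minimal sketch of how these pieces fit together (the names demo-pod, demo-app, demo-service, and the nginx image are illustrative, not drawn from any real deployment), a Pod and the Service that exposes it might look like this:

```yaml
# demo-pod.yaml -- a hypothetical single-container Pod
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
  labels:
    app: demo-app          # the Service below selects on this label
spec:
  containers:
    - name: web
      image: nginx:1.25    # any container image would do here
      ports:
        - containerPort: 80
---
# A Service giving the pod a stable, discoverable address
apiVersion: v1
kind: Service
metadata:
  name: demo-service       # resolvable in-cluster via Kubernetes DNS
spec:
  selector:
    app: demo-app
  ports:
    - port: 80
      targetPort: 80
```

Inside the cluster, other pods can then reach the workload at demo-service (or its fully qualified DNS name) without knowing which node the pod landed on.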

These objects are just the tip of the iceberg of the functionality available within the Kubernetes ecosystem. Users can also define their own Custom Resource Definition (CRD) objects for Kubernetes to act upon, allowing complete flexibility of the platform.
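For instance, a minimal CRD, with the hypothetical group example.ammeon.com and kind Widget chosen purely for illustration, could be declared like so:

```yaml
# widget-crd.yaml -- a hypothetical Custom Resource Definition
apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: widgets.example.ammeon.com   # must be <plural>.<group>
spec:
  group: example.ammeon.com
  scope: Namespaced
  names:
    plural: widgets
    singular: widget
    kind: Widget
  versions:
    - name: v1
      served: true
      storage: true
      schema:
        openAPIV3Schema:
          type: object
          properties:
            spec:
              type: object
              properties:
                size:
                  type: integer    # an example field a custom controller could act on
```

Once applied, kubectl can list Widget objects like any built-in resource, and a custom controller can reconcile them however the team sees fit.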

K8s versus Swarm and Mesos

So why Kubernetes and not Docker Swarm or Apache Mesos? To explain why Ammeon chose to lead with Kubernetes over other container orchestration platforms, we need to look at the main differences between these competitors. While all three can fulfil the basic needs of orchestration, each is a better fit for a different kind of project. Mesos is the best fit for data centre management; it typically provides container orchestration by having a container-management framework such as Kubernetes itself run on top of it.

For those looking to build a local deployment for a personal project, Swarm would be a good fit. It is quick and easy to set up, but in its simplicity it lacks some of the industry-grade features provided by Kubernetes, including auto-scaling, a wide array of storage options, and external load balancing.

Advantages of K8s

The first major advantage of adding Kubernetes to a project is scalability. Kubernetes makes it easy to scale services horizontally across the cluster, either with a DaemonSet, which deploys a pod on each node within the cluster, or with a Deployment object, which runs a chosen number of pod replicas in a namespace. Incoming requests to the cluster are automatically load-balanced amongst these pods. Kubernetes also allows for vertical scaling via the Vertical Pod Autoscaler add-on, which adjusts resource requests automatically and frees users from having to hand-tune specific CPU and memory values for the running containers.
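As a sketch of the horizontal case (the names and image are again hypothetical), a Deployment that keeps three replicas of a service running might look like this:

```yaml
# demo-deployment.yaml -- a hypothetical horizontally scaled service
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-deployment
spec:
  replicas: 3                # Kubernetes keeps three pods running at all times
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
        - name: web
          image: nginx:1.25
          ports:
            - containerPort: 80
```

Swapping kind: Deployment for kind: DaemonSet (and dropping the replicas field, which a DaemonSet does not use) yields the one-pod-per-node pattern instead, and scaling up is a single command: kubectl scale deployment demo-deployment --replicas=10.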

Advantage 2 – Availability. This is provided to running services by use of liveness and readiness probes. These probes detect when a pod has shifted from a healthy state into a broken one, for example by executing a command inside the service container or by checking an HTTP endpoint it exposes.
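A hedged sketch of both probe types, with a hypothetical /healthz endpoint and readiness file:

```yaml
# probe-pod.yaml -- a hypothetical pod carrying both probe types
apiVersion: v1
kind: Pod
metadata:
  name: probe-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      livenessProbe:               # restart the container when this starts failing
        httpGet:
          path: /healthz           # hypothetical health endpoint
          port: 80
        initialDelaySeconds: 5
        periodSeconds: 10
      readinessProbe:              # withhold Service traffic until this succeeds
        exec:
          command: ["cat", "/tmp/ready"]   # e.g. a file the app creates once warmed up
        periodSeconds: 5
```

A failing liveness probe triggers a container restart, while a failing readiness probe simply removes the pod from Service endpoints until it recovers.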

Advantage 3 – Container configuration, by use of ConfigMap objects. A ConfigMap supplies a range of environment variables, as well as other configuration options, to the underlying pod. These configuration changes can even be made dynamic by arranging for the Deployment object to roll its pods when the ConfigMap changes. Kubernetes also provides declarative management of its objects using configuration files: each Kubernetes object can be stored as a YAML file, which allows for easy input to and output from the system. It also stores a 'last-applied-configuration' annotation as part of the object description, which is helpful for seeing what has changed between states.
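A minimal sketch, with key names invented for illustration:

```yaml
# demo-config.yaml -- a hypothetical ConfigMap
apiVersion: v1
kind: ConfigMap
metadata:
  name: demo-config
data:
  LOG_LEVEL: "info"        # illustrative keys; any string data works
  FEATURE_FLAG: "true"
---
# A pod that imports every key above as an environment variable
apiVersion: v1
kind: Pod
metadata:
  name: demo-config-pod
spec:
  containers:
    - name: web
      image: nginx:1.25
      envFrom:
        - configMapRef:
            name: demo-config
```

Because both objects are plain YAML they can live in version control, and applying them with kubectl apply is what records the last-applied-configuration annotation mentioned above.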

Monolith to Microservice – slimline containers

I came to my current work with Kubernetes from a monolithic-style project, and the difference in my day-to-day work is immense. My work is focused on creating slimmed-down containers with microservices running inside. These containers are then hosted in Kubernetes namespaces, where they can communicate and process the traffic coming into the cluster. This brings a level of flexibility to the development process I hadn't experienced before: the lead time to change a service, build a new container, and test it on a live system is reduced to a minimum.

This, as a developer, speeds up the feedback loop, ensuring quicker feature-request and bug-fixing lifecycles. These services are also easily versioned when deployed on the cluster, which allows easy traceability back to the underlying code, a vital tool when debugging issues that occur in production.

When it comes to testing new versions of containers, Kubernetes offers a handy pattern: the canary deployment. This involves deploying an in-test version of a service alongside the stable release. Using the built-in ingress and service routing of Kubernetes, we can then monitor whether the new version handles traffic correctly without disrupting the running system. Based on this test, the canary version can either be rejected from the cluster or promoted to become the new stable release.
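One common way to sketch this (names, labels, and image tags all hypothetical) is two Deployments whose pods share an app label but differ in a track label, sitting behind one shared Service:

```yaml
# Stable release: most replicas, so it carries ~90% of the traffic
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-stable
spec:
  replicas: 9
  selector:
    matchLabels: {app: demo-app, track: stable}
  template:
    metadata:
      labels: {app: demo-app, track: stable}
    spec:
      containers:
        - name: web
          image: registry.example.com/demo:1.0      # hypothetical image tags
---
# Canary: a single replica receives roughly one request in ten
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-canary
spec:
  replicas: 1
  selector:
    matchLabels: {app: demo-app, track: canary}
  template:
    metadata:
      labels: {app: demo-app, track: canary}
    spec:
      containers:
        - name: web
          image: registry.example.com/demo:1.1-rc
```

A Service selecting only on app: demo-app spreads traffic across both Deployments, so promotion is simply updating the stable Deployment's image and deleting the canary.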

Another handy mechanism, similar to canary deployments, is the rolling update. Rolling updates allow Deployment objects to be updated with zero downtime: Kubernetes incrementally replaces old Pods with new ones as the new pods pass their readiness probes. This means that when I change a container image, an environment variable, or a port of the container, the change is rolled out across all the pods of the deployment on the cluster.
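The pace of a rollout is tunable on the Deployment itself; a hedged sketch of the relevant fragment, which would slot into the demo-deployment spec shown earlier:

```yaml
# Fragment of a Deployment spec controlling rollout behaviour
spec:
  replicas: 5
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1   # at most one pod may be down at any moment
      maxSurge: 1         # at most one extra pod is created during the rollout
```

kubectl rollout status deployment/demo-deployment then reports progress, and kubectl rollout undo reverts the change if the new pods never become ready.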

Eliminating the “works on my machine” problem

I can also mount my local workspace onto the cluster using one of the provided Kubernetes storage options. This allows me, as a developer, to use the same container as someone else while still having my local code available. This eliminates the "works on my machine" problem: all the service code is containerized, and those containers are no longer run on local machines via a Docker daemon but are centralized on the Kubernetes nodes. This ensures that all users of the system have access to the same services and pods running within the deployment, which greatly reduces spurious bugs being opened due to environmental issues or differences in OS architecture.
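One simple way to sketch this on a single-node development cluster (the paths are hypothetical, and a shared cluster would use a PersistentVolume instead) is a hostPath mount:

```yaml
# dev-pod.yaml -- mounting a local workspace into a container (dev clusters only)
apiVersion: v1
kind: Pod
metadata:
  name: dev-pod
spec:
  containers:
    - name: web
      image: registry.example.com/demo:dev   # the same image everyone else runs
      volumeMounts:
        - name: workspace
          mountPath: /app/src                # the code appears here inside the container
  volumes:
    - name: workspace
      hostPath:
        path: /home/dev/workspace            # hypothetical path on the node
        type: Directory
```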

Metrics server Deployment

Kubernetes provides the Metrics Server deployment as a standard add-on. It lets us retrieve data about the health of the entire cluster, including cluster-wide CPU and memory utilization and the resources used by the containers running on the cluster. The horizontal and vertical pod autoscalers take these metrics into consideration when scaling. When more in-depth metrics are needed, Kubernetes provides a pain-free integration with monitoring solutions such as Prometheus.

As a platform, Kubernetes has boosted the development lifecycle in all the projects I've seen it used in, thanks to its tie-ins with CI/CD. It integrates easily with Jenkins, Spinnaker, Drone, and many other existing applications. This provides a reliable backing to existing CI applications, as well as natural scalability to the CI world, which in turn provides better infrastructure on which to develop and deliver code.
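As a sketch of how Metrics Server data feeds the autoscaler (the target values are hypothetical), a HorizontalPodAutoscaler for the earlier demo-deployment could look like this:

```yaml
# demo-hpa.yaml -- scale on CPU utilization reported by the Metrics Server
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: demo-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: demo-deployment
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70   # add pods once average CPU passes 70%
```

The same Metrics Server data can be inspected interactively with kubectl top nodes and kubectl top pods.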

Network Function Virtualisation

OSM Release SEVEN, from the European Telecommunications Standards Institute's Open Source MANO project, provides support for deploying and operating cloud-native applications in Network Function Virtualization (NFV) deployments, and with this comes access to over 20,000 production-ready Kubernetes applications. This includes support for Kubernetes-based Virtual Network Functions (VNFs). To facilitate these Kubernetes-based VNFs (KNFs), Kubernetes preferably needs to be located in a virtual infrastructure manager as opposed to a bare-metal installation. After the initial deployment of the cluster, KNFs can be onboarded using Helm charts or Juju bundles.

This drives the delivery model on from VNFs to Container-Native Network Functions (CNFs), giving these applications a new level of portability regardless of the underlying infrastructure: the same containers can run atop public cloud, private cloud, bare metal, VMware, or even a local deployment. CNFs also bring a lightweight footprint thanks to small container images, rapid deployment times, and lower resource overhead, and, in unison with Kubernetes autoscaling, better resilience. The prevailing view of the NFV space is that the shift towards this new CNF model will increase year on year, with a projection that 70% of all NFV deployments will follow it by the end of 2021.
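For onboarding, a KNF package is at its simplest an ordinary Helm chart; a minimal, entirely hypothetical Chart.yaml might be:

```yaml
# Chart.yaml -- minimal metadata for a hypothetical KNF packaged as a Helm chart
apiVersion: v2
name: example-knf
description: A hypothetical Kubernetes-based network function
version: 0.1.0        # version of the chart packaging
appVersion: "1.0.0"   # version of the network function itself
```

The chart's templates directory would then hold ordinary Deployment, Service, and ConfigMap manifests like those shown earlier, which the orchestrator installs onto the target cluster.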

In summary, Kubernetes is a useful technology that provides benefits right across the software industry. It is a constantly evolving technology, with more than 500 companies contributing to Kubernetes-related projects, so the benefits are constantly increasing. It provides stability and uniformity to companies, projects, and developers alike, and it is a strong fit for almost any project moving towards cloud-native applications.


Author
Conor Murphy
Lead Cloud Engineer | Ammeon
