Monitoring With Elastic


In this Medium article, we are going to deploy the Elastic Stack (Elasticsearch, Kibana, and Metricbeat) to monitor a Kubernetes cluster. Metricbeat will collect metrics from the Kubernetes cluster and forward the data to Elasticsearch for analytics, and Kibana will let us visualize that data in dashboards. The best part is that we will deploy the whole stack on Kubernetes itself. This article covers the deployment of every component in the simplest way possible, so do not worry/panic if you see a password in plain YAML 😬 😐

What is Elastic Stack?

The Elastic Stack is a collection of open-source products: Elasticsearch, Kibana, Logstash, and Beats…


Introduction to Kube-State-Metrics

kube-state-metrics is an open-source project that generates metrics about the state of Kubernetes cluster objects. It is a service that listens to the Kubernetes API server. It does not modify anything through the Kubernetes API; it only reads the data required to produce metrics.

Where can we use it?

kube-state-metrics exposes raw data unmodified from the Kubernetes API. This allows users to have all the data they require and to perform heuristics as they see fit. These metrics are designed to be consumed either by Prometheus itself or by any scraper that is compatible with a Prometheus client endpoint, such as the Kubernetes dashboard or Elastic Metricbeat.
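As a sketch of how Metricbeat can consume that endpoint, the `kubernetes` module ships with `state_*` metricsets that scrape kube-state-metrics. The host below assumes kube-state-metrics is deployed in-cluster as a Service named `kube-state-metrics` on port 8080; adjust to your deployment.

```yaml
# metricbeat.yml fragment (illustrative; service name/port are assumptions)
metricbeat.modules:
  - module: kubernetes
    metricsets:
      - state_node
      - state_deployment
      - state_pod
      - state_container
    period: 10s
    # kube-state-metrics Service, reachable from the Metricbeat pod
    hosts: ["kube-state-metrics:8080"]
```

Metricbeat polls the Prometheus-format endpoint every `period` and ships the parsed documents to Elasticsearch.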

How can we use it?




Microsoft announced in late March 2020 that Office 365 would officially become Microsoft 365, effective April 21. This is a kind of re-branding strategy that organisations typically undertake from time to time. It reflects their innovation strategy, their commitment to product features, and their design values.

Behind this product, Microsoft has enabled powerful AI-driven features for the Microsoft Office apps and strengthened the security stack around most of its products. There are other features too, and I would love to explore them.

I will add more details once I get hands-on with these new features. Thanks.

Microsoft Graph

Microsoft Graph is the gateway to…


Storage in Kubernetes

Kubernetes is an orchestrator for container workloads. Containers are ephemeral, so in any use case where your application requires data persistence, the Kubernetes volume abstraction comes into the picture. In its simplest form, a Kubernetes volume is a directory that is attached to a Pod and mounted into one or more containers running inside the Pod.

Kubernetes supports many types of volumes, and they can be mounted as directories in different ways. In this post, I will try to mount files and blobs in a Kubernetes Pod using the symlink and blobfuse mount approaches. …
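To make the volume abstraction concrete, here is a minimal sketch of a Pod that mounts a directory from the node into a container via a `hostPath` volume. All names and paths (`demo-pod`, `/var/demo-data`, `/mnt/data`) are illustrative, not from the original post.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: demo-pod
spec:
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      volumeMounts:
        - name: data
          mountPath: /mnt/data   # where the container sees the volume
  volumes:
    - name: data
      hostPath:
        path: /var/demo-data     # directory on the node
        type: DirectoryOrCreate
```

Swapping the `hostPath` entry for another volume type (NFS, cloud disk, blobfuse-backed storage, etc.) changes where the data lives without changing how the container consumes it.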


Kubernetes has become the de facto standard for container deployment in recent times, which makes container security a critical component of the Kubernetes realm. Each container running on a Kubernetes cluster may have a different attack surface and vulnerabilities that attackers can exploit. To handle that, Kubernetes comes with several solutions to harden the surface. In this Medium article, I will try to explain the basics of the Kubernetes Pod security context and Pod security policies.

What is Kubernetes Pod?

Pods are the basic unit of workload in Kubernetes. A Pod can have one or more containers running inside it, and it has its own IP address.
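The Pod security context mentioned above attaches directly to this Pod object. A minimal sketch, with illustrative names and UIDs, might look like this: the Pod-level `securityContext` forces every container to run as a non-root user, and the container-level one blocks privilege escalation.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: secure-pod
spec:
  securityContext:
    runAsUser: 1000      # run container processes as UID 1000, not root
    runAsGroup: 3000
    fsGroup: 2000        # group ownership applied to mounted volumes
  containers:
    - name: app
      image: busybox
      command: ["sleep", "3600"]
      securityContext:
        allowPrivilegeEscalation: false
```

Container-level settings override Pod-level ones where both are present.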

What is Admission Controller?


Part-1: Linux Container as a Process

Modern DevOps cycles are constantly evolving, and we should thank containers for that. The concept of a container was born in the Linux operating system. A Linux container is a process that is isolated from the rest of the system. Since it is a process, it has a process ID (PID) and is associated with a particular user and group account. This is the basic concept of a Linux OS process.
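You can see these same attributes (PID, user, group) for any ordinary process; a container process simply carries them inside extra isolation. A quick illustration against the current shell itself:

```shell
# Every Linux process — containerized or not — has a PID and an
# owning user and group. Show them for the current shell ($$):
ps -o pid=,user=,group= -p $$
```

Running `ps` the same way against a containerized process's PID (as seen from the host) shows that it is just another entry in the host's process table.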

Part-2: Running a Docker Container

By default, containers run as root in Docker. If you want to start your container process as a non-root user, then you must specify…
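One common way to do this (a sketch, not necessarily the approach the original post takes) is to create a dedicated user in the image and switch to it with the `USER` instruction; the `app` user/group name is illustrative.

```dockerfile
FROM alpine:3.18
# Create an unprivileged system user and group named "app"
RUN addgroup -S app && adduser -S app -G app
USER app
# Print the effective uid/gid at container start
CMD ["id"]
```

Alternatively, the user can be overridden at run time with `docker run --user <uid>:<gid> <image>` without changing the image.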


I have been using Docker for container image builds and deployments for almost 2–3 years, and I have sometimes struggled to get my storage back. By default, Docker does not clear unused objects such as images, containers, networks, and volumes, which causes Docker to consume a large amount of disk space.

In this post, I will try to cover quick steps to analyze and clean up this data.

You can list the statistics of Docker file system usage with the built-in command:

arun@controller:~$ sudo docker system df
TYPE      TOTAL   ACTIVE   SIZE      RECLAIMABLE
Images    15      0        2.674GB   2.674GB …
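Once you have identified the reclaimable space, Docker's built-in prune command can clean it up. A sketch of the typical invocations (these are destructive, so review what they remove before running them):

```shell
# Remove stopped containers, unused networks, dangling images,
# and build cache:
sudo docker system prune

# Also remove ALL unused images (not just dangling ones) and
# unused volumes:
sudo docker system prune -a --volumes
```

Both commands prompt for confirmation and print the total space reclaimed.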


Container Storage Interface

Pods mount volumes inside containers, and the containers access the storage as if it were their local file system. You can even share data between containers in the same Pod using a shared volume.
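A minimal sketch of that sharing pattern: two containers in one Pod mounting the same `emptyDir` volume, one writing and one reading. All names are illustrative.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
    - name: shared
      emptyDir: {}               # scratch volume, lives as long as the Pod
  containers:
    - name: writer
      image: busybox
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data
    - name: reader
      image: busybox
      command: ["sh", "-c", "sleep 5; cat /data/msg; sleep 3600"]
      volumeMounts:
        - name: shared
          mountPath: /data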

Kubernetes Pods are immutable!

The Kubernetes container orchestrator manages Pods (containers). If you are familiar with the concept of containers or Pods, then you know that Pods are ephemeral: if a Pod dies, it can never be resurrected, and all the data it has generated dies with it.

So if you want to store or save state or data, the Kubernetes volume abstraction comes into the picture. This volume is similar to a Docker volume…


When you deploy components or systems in Kubernetes, you have to work with YAML files. You create a set of Kubernetes object files and deploy them by running the kubectl create or kubectl apply command. This approach is fine when you are dealing with one environment and a limited set of objects, but it gets difficult to manage with multiple environments and a growing number of customization objects. Every environment may be unique and may have a different set of requirements. There are a lot of solutions that can help you in this situation: Helm, Kustomize, Kapitan, Ksonnet .. the…
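To show the flavor of one of those tools, here is a sketch of a Kustomize overlay: a `kustomization.yaml` that reuses a shared base and patches it for one environment. File and path names are illustrative.

```yaml
# overlays/prod/kustomization.yaml (illustrative layout)
resources:
  - ../../base            # the shared, environment-agnostic manifests
patches:
  - path: replica-patch.yaml   # e.g. bump replicas for production
namePrefix: prod-         # prefix every object name for this environment
```

Rendering and applying the overlay is a single command: `kubectl apply -k overlays/prod`.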


__name__ variable

In Python, you may write a script specifically to keep all your functions in one place and import that script as a module in another script. __name__ is a built-in variable that evaluates to the name of the current module.

This variable lets you decide whether the script is being run directly or whether its functions are being imported from another script.

There is no main() function in Python. If the source file is executed as the main program, the interpreter sets the __name__ variable to the value “__main__”. …
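A small sketch of the pattern (the module name `my_utils` and the function are illustrative): the guarded block runs only when the file is executed directly, not when it is imported.

```python
# my_utils.py — a hypothetical module holding reusable functions
def shout(text):
    """Return the text upper-cased with an exclamation mark."""
    return text.upper() + "!"

if __name__ == "__main__":
    # Runs only for `python my_utils.py`; skipped on `import my_utils`,
    # because then __name__ is "my_utils", not "__main__".
    print(shout("hello from the main module"))
```

When another script does `import my_utils`, it gets `shout()` without triggering the `print` call.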

Arun Kumar Singh

In quest of understanding How Systems Work !
