Kubernetes in 33 points — Bird’s eye view

Quick Summary of Kubernetes (K8S) in Action

Apurav Chauhan
Apr 4, 2021
Kubernetes — Feature Summary

TL;DR

This blog is a very short personal summary of the book Kubernetes in Action. Feel free to skip it if you have already read the book.

History

First there were monoliths, deployed as single big components; then the world moved to microservices (MS). As companies grew, the number of microservices built and deployed became enormous, and managing their life cycle demanded a special role and skill set for the infrastructure. That's where the DevOps role came in: managing the infra, hardware failures, migration of failed components to healthy ones, and so forth. Microservice deployment was solved with Docker containers, but automating the containers' life cycle and ops tasks like resource and computation management was still unsolved. That's where K8S came in, to ease life for both dev and DevOps.

What is Kubernetes -K8S?

Kubernetes enables developers to deploy their applications themselves and as often as they want, without requiring any assistance from the operations (ops) team. But Kubernetes doesn’t benefit only developers. It also helps the ops team by automatically monitoring and rescheduling those apps in the event of a hardware failure. The focus for system administrators (sysadmins) shifts from supervising individual apps to mostly supervising and managing Kubernetes and the rest of the infrastructure, while Kubernetes itself takes care of the apps.

Kubernetes abstracts away the hardware infrastructure and exposes your whole data-centre as a single enormous computational resource. It allows you to deploy and run your software components without having to know about the actual servers underneath. When deploying a multi-component application through Kubernetes, it selects a server for each component, deploys it, and enables it to easily find and communicate with all the other components of your application.

K8S Architecture — Bird’s eye view

2 types of nodes:

Master Node [Control Plane]: consists of the following components:

>>a key-value store (etcd) to persist cluster state and configs,

>>an API server to query and modify those configs,

>>a Scheduler to assign pods to worker nodes,

>>a Controller Manager to handle node failures, replication of components, and other cluster-level functions

Worker Nodes: run your apps with the following components:

>>a container runtime like Docker or rkt

>>Kubelet: a component that talks to the API server and manages the containers on its node

>>kube-proxy: a load balancer that distributes network traffic between app components

Nodes?

A node is a single machine, physical or virtual. Multiple pods are deployed on each node.

Pods vs containers?

The basic building block in Kubernetes is the pod. Containerised apps run in pods, and when scaling we create more pods, each running the containerised apps. A pod is an isolated unit of capacity and resources. Each pod has its own private IP and hostname. If multiple containers are deployed in a single pod, they share that IP, hostname, and network interfaces (via shared Linux namespaces).

Why need a ReplicationController/ReplicaSet?

Whenever you deploy an app, do it via a ReplicationController or ReplicaSet. The RC's responsibility is to track pod health and node failures and to scale pods up and down based on the rules you define (see point 16 for a sample manifest).

K8S Concepts, commands and more:

1. Run a Docker image from a Docker registry

docker run <imagename>

2. Create a Docker image with the following Dockerfile

docker build -t apurav .

The above command builds an image named/tagged apurav from the current directory (the trailing dot means the current directory). Docker will search for a Dockerfile in the current directory to build the image. See the Dockerfile below:

FROM node:7
ADD app.js /app.js
ENTRYPOINT ["node", "app.js"]

FROM: sets the base image for this app

ADD: adds the app code into the Docker container's file system

ENTRYPOINT: defines the command and arguments to run on startup of the container

3. Check all running containers, or inspect one

docker ps

docker inspect <container-name>

4. Local K8S setup

>brew install minikube

>minikube start

Minikube ships a bundled kubectl client that you can use to interact with the k8s cluster:

>> minikube kubectl -- get po -A

5. Run an image in the k8s cluster using the command below

kubectl run <any-name> --image=<dockerhub-image-name> --generator=run/v1

The above internally creates a ReplicationController, which creates a pod and runs the container in it. (Note: the --generator flag has since been deprecated and removed; in newer kubectl versions, kubectl run simply creates a standalone pod.)

6. Display all POD details OR a specific POD

kubectl get pods -o wide OR kubectl get pods <pod-name> -o wide

kubectl get pods -o yaml OR kubectl get pods <pod-name> -o yaml

kubectl get pods -o json OR kubectl get pods <pod-name> -o json

kubectl describe pod <pod-name>

7. See cluster dashboard

minikube dashboard

8. A pod, service, or any other object in k8s can be created via a JSON or YAML file as well. For the pod we created above, you can explore its object definition in YAML using:

kubectl get po <pod-name> -o yaml

9. To create a pod using a YAML file

kubectl create -f <pod-manual.yaml>
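For reference, a minimal pod-manual.yaml for the image built earlier could look like this (the names here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: kubia-manual
spec:
  containers:
  - image: apuravchauhan/kubia
    name: kubia
    ports:
    - containerPort: 8080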

10. Check logs of a pod, or of a specific container in a pod

kubectl logs <pod-name>

kubectl logs <pod-name> -c <container-name>

11. Send requests to / connect to a specific pod via port forwarding

kubectl port-forward <pod-name> 8888:8080
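With the forward in place, requests to local port 8888 reach port 8080 of the pod, e.g.:

curl localhost:8888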

12. You can add/edit multiple labels on pods, which later helps in listing only certain pods based on label selectors

Create label>>kubectl label po <pod-name> env=staging

Label Selector>>kubectl get po -l env=staging

13. Node selectors can be used to select the machine (node) on which a pod should preferably be deployed

kubectl get nodes (to fetch node name and other details)

kubectl label node <node-name> gpu=true

Now, while creating a pod via a YAML file, you can define a node selector to pick the right set of nodes for the pod.

Sample YAML showing labels and a node selector:

apiVersion: v1
kind: Pod
metadata:
  name: apu-pod
  labels:
    env: staging
spec:
  nodeSelector:
    gpu: "true"
  containers:
  - image: apuravchauhan/kubia
    name: apurav

14. You can logically segregate all resources into non-overlapping groups by creating namespaces
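For example (the namespace name is illustrative):

kubectl create namespace staging
kubectl get po --namespace staging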

15. Liveness probes can be configured to help k8s restart a container when the app inside it crashes or stops responding. Readiness probes can be used to tell k8s when the app is ready to serve traffic after startup.
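A sketch of how both probes can be declared on a container, assuming the app exposes HTTP health endpoints (the paths here are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: probed-pod
spec:
  containers:
  - image: apuravchauhan/kubia
    name: kubia
    livenessProbe:        # if this fails, k8s restarts the container
      httpGet:
        path: /healthz
        port: 8080
    readinessProbe:       # traffic is sent only once this succeeds
      httpGet:
        path: /ready
        port: 8080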

16. Create pods indirectly via a ReplicationController/ReplicaSet YAML file. As seen below, we can define the number of replicas.

kubectl create -f rc.yaml

Below is a sample rc.yaml


apiVersion: v1
kind: ReplicationController
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: apuravchauhan/kubia
        ports:
        - containerPort: 8080

17. Scale up the ReplicationController

kubectl scale rc kubia --replicas=10

18. You can run exactly one pod on each node with a DaemonSet
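A minimal DaemonSet sketch using the apps/v1 API (the monitoring image name is illustrative):

apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: node-monitor
spec:
  selector:
    matchLabels:
      app: node-monitor
  template:          # one copy of this pod runs on every node
    metadata:
      labels:
        app: node-monitor
    spec:
      containers:
      - name: monitor
        image: apuravchauhan/node-monitor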

19. You can run pods that perform a single completable task using the Job resource
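A minimal Job sketch; the pod runs to completion and is not restarted once it succeeds (the image name is illustrative):

apiVersion: batch/v1
kind: Job
metadata:
  name: batch-job
spec:
  template:
    spec:
      restartPolicy: OnFailure   # retry on failure, stop on success
      containers:
      - name: main
        image: apuravchauhan/batch-job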

20. As a client, you communicate with a group of pods of a specific type via the Service resource. Below is an example YAML that creates a service using

kubectl create -f orderloadbalancer.yaml

We are defining a service called orderloadbalancer, which will accept connections on port 80 and route each connection to port 8080 of one of the pods matching the app=orderservice label selector.

apiVersion: v1
kind: Service
metadata:
  name: orderloadbalancer
spec:
  ports:
  - port: 80
    targetPort: 8080
  selector:
    app: orderservice

21. When exposing multiple services/load balancers to the external world, you can use an Ingress resource to define multiple rules and mappings based on hosts and paths, as sketched below.
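A sketch of an Ingress routing one path to the service defined above, using the current networking.k8s.io/v1 API (the host name is illustrative):

apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: example-ingress
spec:
  rules:
  - host: shop.example.com      # requests for this host...
    http:
      paths:
      - path: /orders           # ...and this path prefix...
        pathType: Prefix
        backend:
          service:              # ...go to this service
            name: orderloadbalancer
            port:
              number: 80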

22. Pods can be decoupled from the underlying storage technology by mounting different storage volumes in k8s, like a git repo, AWS EBS, GCE persistent disks, NFS, the worker node's filesystem, etc.

23. Special types of volumes exist to store configurations (ConfigMaps) and secrets that can be mounted into pods.

Example of a volume-mounting YAML:

apiVersion: v1
kind: Pod
metadata:
  name: gitrepo-volume-pod
spec:
  containers:
  - image: nginx:alpine
    name: web-server
    volumeMounts:
    - name: html
      mountPath: /usr/share/nginx/html
  volumes:
  - name: html
    gitRepo:
      repository: https://github.com/luksa/kubia-website-example.git
      revision: master
      directory: .

24. To further decouple pods from the underlying storage, we can use an abstraction called a PersistentVolumeClaim.
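A minimal claim sketch (the name and size are illustrative); a pod then mounts it through a persistentVolumeClaim volume without knowing what storage backs it:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: data-pvc
spec:
  accessModes:
  - ReadWriteOnce         # mountable read-write by a single node
  resources:
    requests:
      storage: 1Gi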

25. For managing production build updates and patches, use the Deployment resource to handle rolling updates gracefully. Other deployment strategies are also available. Deployment resources maintain a history of rollouts and can even roll back the last one.

You can also pause and resume deployments, and even block rollouts of bad versions, using Deployments.

Example deployment yaml to be created:

apiVersion: apps/v1
kind: Deployment
metadata:
  name: kubia
spec:
  replicas: 3
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - image: luksa/kubia:v1
        name: nodejs

After this, you can roll out a new version like:

$ kubectl set image deployment kubia nodejs=luksa/kubia:v2
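The rollout can then be observed and controlled with the kubectl rollout subcommands:

kubectl rollout status deployment kubia   # watch the rollout progress
kubectl rollout pause deployment kubia    # pause a rollout midway
kubectl rollout resume deployment kubia   # resume a paused rollout
kubectl rollout undo deployment kubia     # roll back to the previous version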

26. Use the StatefulSet resource for stateful apps. When a pod goes down, the StatefulSet replaces it with a new pod that keeps the same identity (name, hostname) and re-attaches the same disk state.
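A StatefulSet sketch (apps/v1 API); each replica gets a stable name (kubia-0, kubia-1) and its own claim from the volumeClaimTemplates (names and size are illustrative):

apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: kubia
spec:
  serviceName: kubia
  replicas: 2
  selector:
    matchLabels:
      app: kubia
  template:
    metadata:
      labels:
        app: kubia
    spec:
      containers:
      - name: kubia
        image: apuravchauhan/kubia
        volumeMounts:
        - name: data
          mountPath: /var/data
  volumeClaimTemplates:   # one PersistentVolumeClaim per replica
  - metadata:
      name: data
    spec:
      accessModes:
      - ReadWriteOnce
      resources:
        requests:
          storage: 1Gi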

27. Make sure your k8s cluster is secured, including the API server, using proper authentication and role-based access control. The same goes for the cluster network.

28. Post-start and pre-stop hooks can be registered with Pods
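A sketch of both hooks on a container (the commands and path are illustrative):

apiVersion: v1
kind: Pod
metadata:
  name: hooked-pod
spec:
  containers:
  - image: apuravchauhan/kubia
    name: kubia
    lifecycle:
      postStart:                # runs right after the container starts
        exec:
          command: ["/bin/sh", "-c", "echo container started"]
      preStop:                  # runs just before the container is stopped
        httpGet:
          path: /shutdown
          port: 8080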

29. LimitRange resources can be used to define computational limits for PODs

apiVersion: v1
kind: LimitRange
metadata:
  name: example
spec:
  limits:
  - type: Pod
    min:
      cpu: 50m
      memory: 5Mi
    max:
      cpu: 1
      memory: 1Gi
  - type: Container
    defaultRequest:
      cpu: 100m
      memory: 10Mi
  ...

30. You can use ResourceQuota objects to limit the amount of resources available to all the pods in a namespace.
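A sketch of such a quota object (the numbers are illustrative); it caps the total resources and object counts across its namespace:

apiVersion: v1
kind: ResourceQuota
metadata:
  name: cpu-and-mem
spec:
  hard:
    requests.cpu: "4"       # sum of CPU requests in the namespace
    requests.memory: 5Gi
    limits.cpu: "8"         # sum of CPU limits in the namespace
    limits.memory: 10Gi
    pods: "10"              # max number of pods in the namespace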

31. Use HorizontalPodAutoscaler resource for autoscaling. Just point it to a Deployment, ReplicaSet, or ReplicationController and specify the target CPU utilisation for the pods
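The quickest way is the imperative command (the numbers are illustrative):

kubectl autoscale deployment kubia --cpu-percent=30 --min=1 --max=5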

32. A package manager called Helm lets you deploy existing apps without requiring you to build resource manifests for them yourself.
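A sketch of the typical flow with Helm 3 and the public Bitnami chart repository:

helm repo add bitnami https://charts.bitnami.com/bitnami
helm install my-nginx bitnami/nginx   # deploys a packaged app, no hand-written manifests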

33. You can extend the K8S API server with your own custom objects and APIs.
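One way is a CustomResourceDefinition, which registers a new object kind with the API server. A sketch using the apiextensions.k8s.io/v1 API (the Website kind and its field are illustrative):

apiVersion: apiextensions.k8s.io/v1
kind: CustomResourceDefinition
metadata:
  name: websites.example.com
spec:
  group: example.com
  scope: Namespaced
  names:
    kind: Website
    singular: website
    plural: websites
  versions:
  - name: v1
    served: true
    storage: true
    schema:
      openAPIV3Schema:      # schema for the custom object's fields
        type: object
        properties:
          spec:
            type: object
            properties:
              gitRepo:
                type: string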

Summarising the K8S resources

Core resources of K8S
