K8s Café: Kubernetes Cluster on Azure Free Account

In this story, we will talk about Kubernetes and its simple deployment steps on Azure. We will build a 2-node K8s cluster from scratch. Azure also provides a managed way to deploy Kubernetes, called Azure Kubernetes Service (AKS), but in this story we are not going to use AKS. I have taken reference from the following GitHub repositories for bootstrapping the cluster:
kelseyhightower/kubernetes-the-hard-way
mmumshad/kubernetes-the-hard-way

To practice Kubernetes there are alternatives available in the market, like Katacoda or other Kubernetes labs, but to deploy it from scratch I feel a free account from Azure can be a great help. I have tried to keep things as simple as possible. We are going to create a 2-node cluster on Azure VMs from scratch. This article assumes that you already have a good understanding of Kubernetes architecture and its components.

So let's begin now...

Please create a free account in Azure first. This account comes with significant benefits and we are going to use them aggressively. Once the account is created, log in to the Azure portal and take a look at all the services.

Home page for resource creation in Azure

We will deploy a 2-node cluster on Azure. One node will work as the master node and the other as the worker node in Kubernetes.

VM List

We will add VMs for this activity one by one. While adding the first VM, you may need to add extra attributes specific to Azure Cloud. Please refer to the Azure documentation for more details.

VM Creation

You need to define in which region you are going to deploy the VMs and what image you are going to use for the servers. We will use Ubuntu 18.04 LTS. I have used an average-sized VM to keep some free credit for other usage. Please note that the Azure free account provides some credits, which are deducted as per the subscription plan.

VM Creation Details — Image/Size of VM
VM Creation Details — Ports

I need to access the machine via the command line from my laptop, so I am allowing inbound traffic on the SSH port.

VM Creation Details — Disk

You can have multiple disk options; I preferred to use Standard SSD. The next step is to create a network. As per RFC1918 there are three ranges of addresses that can be used in a private network: 10.0.0.0–10.255.255.255, 172.16.0.0–172.31.255.255, and 192.168.0.0–192.168.255.255. I am going with the 192.168.5.0/24 subnet.

VM Creation Details — Network

Apart from this, the VMs will require a public IP as well, so we need to associate one. Interesting fact: the first five static IPs are not charged for reservation, whether associated with a running virtual machine's network interface or an Azure Load Balancer.

VM Creation Details — Network

Apart from the above details we need to provide a few more final settings, like monitoring for the VM. I will go with the defaults this time.

VM Creation Details — Diagnostics

All set. Finally, Azure will run a validation test on the information you have provided. Once it passes, you can click to create the VM.

VM Creation Details — Validation Test

It will take a minute or two to create the VM.

VM Creation Details — Process

Once done, you can list the VM.

VM Created

Check the public IP of the VM and try to connect to it.
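For example, using the admin username chosen during VM creation (azure here is just the username used later in this article; replace the placeholder with your VM's public IP):

ssh azure@<vm-public-ip>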

In the same way as above, create one more VM, which will be your worker node.

Once done, list the VMs.

Once both nodes are ready, log in and validate the configuration on both. Now we have 2 VMs ready for the Kubernetes deployment: the master-1 VM will be used for the Kubernetes master node and the worker-1 VM for the worker node.

Environment Details
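If name resolution between the nodes is not set up, a quick /etc/hosts entry on both VMs helps later when copying files between them. This is a minimal sketch; the private IPs assume master-1 got 192.168.5.4 and worker-1 got 192.168.5.5 from the 192.168.5.0/24 subnet used throughout this article:

cat <<EOF | sudo tee -a /etc/hosts
192.168.5.4 master-1
192.168.5.5 worker-1
EOF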

Now the next step is to install Docker from the Docker-provided repositories. A Kubernetes setup needs a CRE (Container Runtime Engine) for running containers. We will use Docker for that.

# Following is the set of shell commands to deploy Docker on Ubuntu
> sudo apt update
> sudo apt install apt-transport-https ca-certificates curl software-properties-common
> curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
> sudo add-apt-repository "deb [arch=amd64] https://download.docker.com/linux/ubuntu bionic stable"
> sudo apt update
> apt-cache policy docker-ce
> sudo apt install docker-ce
> sudo systemctl status docker
> sudo usermod -aG docker azure #for user azure

Validate the docker service.
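A minimal check, after logging out and back in so the docker group membership takes effect:

> docker version
> docker run hello-world   # pulls a tiny test image and prints a confirmation message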

Before proceeding further we need to install kubectl on the master node. The kubectl command line utility is used to interact with the Kubernetes API Server.

Download and install it.

{
wget https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl
chmod +x kubectl
sudo mv kubectl /usr/local/bin/
}
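A quick sanity check that the binary is on the PATH (this only verifies the client, since no cluster exists yet):

kubectl version --client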

Kubernetes cluster components are supposed to interact over TLS, which secures the communication between cluster services. We will generate self-signed certificates via OpenSSL to achieve this. First we need to provision a Certificate Authority that can be used to generate additional TLS certificates.

Generate the CA configuration file, certificate, and private key:

# First we need to create private key for CA
openssl genrsa -out ca.key 2048

# Second Generate CSR using the private key generated above
openssl req -new -key ca.key -subj "/CN=KUBERNETES" -out ca.csr

# Self sign the csr using its own private key
openssl x509 -req -in ca.csr -signkey ca.key -CAcreateserial -out ca.crt -days 1000
Self signed Certificate K8 Environment CA

In case you face an issue with the random number generator, like the one below:

azure@master-1:~/kubernetes$ openssl req -new -key ca.key -subj “/CN=KUBERNETES” -out ca.csr
Can’t load /home/azure/.rnd into RNG

To fix the above issue, open openssl.cnf (on Ubuntu it is typically at /etc/ssl/openssl.cnf) and comment out this line:
#RANDFILE = $ENV::HOME/.rnd
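If you prefer a one-liner, something like the following should work, assuming the config file lives at /etc/ssl/openssl.cnf on Ubuntu 18.04:

sudo sed -i 's/^RANDFILE/#&/' /etc/ssl/openssl.cnf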

Now we will generate client and server certificates for each Kubernetes component and a client certificate for the Kubernetes admin user. Note that the admin user is part of the system:masters group.

# Generate private key for admin user
openssl genrsa -out admin.key 2048

# Generate CSR for admin user. Note the O field (the group).
openssl req -new -key admin.key -subj "/CN=admin/O=system:masters" -out admin.csr

# Sign the certificate for the admin user using the CA's private key
openssl x509 -req -in admin.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out admin.crt -days 1000

For controller-manager -

openssl genrsa -out kube-controller-manager.key 2048
openssl req -new -key kube-controller-manager.key -subj "/CN=system:kube-controller-manager" -out kube-controller-manager.csr
openssl x509 -req -in kube-controller-manager.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-controller-manager.crt -days 1000

For Kubernetes Scheduler -

openssl genrsa -out kube-scheduler.key 2048
openssl req -new -key kube-scheduler.key -subj "/CN=system:kube-scheduler" -out kube-scheduler.csr
openssl x509 -req -in kube-scheduler.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-scheduler.crt -days 1000
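For kube-proxy (not shown in a screenshot, but referenced later when building kube-proxy.kubeconfig) the commands look similar. This is a sketch, assuming CN=system:kube-proxy as in the reference repositories -

openssl genrsa -out kube-proxy.key 2048
openssl req -new -key kube-proxy.key -subj "/CN=system:kube-proxy" -out kube-proxy.csr
openssl x509 -req -in kube-proxy.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-proxy.crt -days 1000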

For Kubernetes API -

CSR for Kube-API component

The Kubernetes API server is automatically assigned the kubernetes internal DNS name, which will be linked to the first IP address (10.96.0.1) from the address range (10.96.0.0/24) reserved for internal cluster services during control plane bootstrapping.
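The screenshot above contains the actual commands. For reference, here is a sketch of what the API server certificate generation typically looks like in the reference repository; the CN, the SANs and the master IP 192.168.5.4 below are assumptions based on mmumshad/kubernetes-the-hard-way and the values used elsewhere in this article:

cat > openssl-kube-apiserver.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = kubernetes
DNS.2 = kubernetes.default
DNS.3 = kubernetes.default.svc
DNS.4 = kubernetes.default.svc.cluster.local
IP.1 = 10.96.0.1
IP.2 = 192.168.5.4
IP.3 = 127.0.0.1
EOF

openssl genrsa -out kube-apiserver.key 2048
openssl req -new -key kube-apiserver.key -subj "/CN=kube-apiserver" -out kube-apiserver.csr -config openssl-kube-apiserver.cnf
openssl x509 -req -in kube-apiserver.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out kube-apiserver.crt -extensions v3_req -extfile openssl-kube-apiserver.cnf -days 1000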

The Kubernetes Controller Manager leverages a key pair to generate and sign service account tokens as described in the managing service accounts documentation.

Generate the service-account certificate and private key:

openssl genrsa -out service-account.key 2048
openssl req -new -key service-account.key -subj "/CN=service-accounts" -out service-account.csr
openssl x509 -req -in service-account.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out service-account.crt -days 1000

Now we will generate the certificate for etcd. The etcd server certificate must include the addresses of all the servers that are part of the etcd cluster; in this setup that covers the master and the worker node.

cat > openssl-etcd.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
IP.1 = 192.168.5.4
IP.2 = 192.168.5.5
IP.3 = 127.0.0.1
EOF

Run OpenSSL

openssl genrsa -out etcd-server.key 2048
openssl req -new -key etcd-server.key -subj "/CN=etcd-server" -out etcd-server.csr -config openssl-etcd.cnf
openssl x509 -req -in etcd-server.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out etcd-server.crt -extensions v3_req -extfile openssl-etcd.cnf -days 1000

Now we are done with all the certificates. Let's start configuring the K8s components one by one. In this section you will generate kubeconfig files for the controller manager, kubelet, kube-proxy, and scheduler clients, and for the admin user.

The kube-proxy Kubernetes Configuration File

Generate a kubeconfig file for the kube-proxy service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://192.168.5.4:6443 \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-credentials system:kube-proxy \
--client-certificate=kube-proxy.crt \
--client-key=kube-proxy.key \
--embed-certs=true \
--kubeconfig=kube-proxy.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-proxy \
--kubeconfig=kube-proxy.kubeconfig

kubectl config use-context default --kubeconfig=kube-proxy.kubeconfig
}

The kube-controller-manager Kubernetes Configuration File

Generate a kubeconfig file for the kube-controller-manager service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-credentials system:kube-controller-manager \
--client-certificate=kube-controller-manager.crt \
--client-key=kube-controller-manager.key \
--embed-certs=true \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-controller-manager \
--kubeconfig=kube-controller-manager.kubeconfig

kubectl config use-context default --kubeconfig=kube-controller-manager.kubeconfig
}

The kube-scheduler Kubernetes Configuration File

Generate a kubeconfig file for the kube-scheduler service:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-credentials system:kube-scheduler \
--client-certificate=kube-scheduler.crt \
--client-key=kube-scheduler.key \
--embed-certs=true \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:kube-scheduler \
--kubeconfig=kube-scheduler.kubeconfig

kubectl config use-context default --kubeconfig=kube-scheduler.kubeconfig
}

The admin Kubernetes Configuration File

Generate a kubeconfig file for the admin user:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://127.0.0.1:6443 \
--kubeconfig=admin.kubeconfig

kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key \
--embed-certs=true \
--kubeconfig=admin.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=admin \
--kubeconfig=admin.kubeconfig

kubectl config use-context default --kubeconfig=admin.kubeconfig
}

Kubernetes stores a variety of data including cluster state, application configurations, and secrets. Kubernetes supports the ability to encrypt cluster data at rest.

We will reference this YAML file later in the API server configuration (via the --encryption-provider-config flag).
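For reference, a sketch of what that encryption config looks like in the reference repositories, assuming a freshly generated 32-byte key:

ENCRYPTION_KEY=$(head -c 32 /dev/urandom | base64)

cat > encryption-config.yaml <<EOF
kind: EncryptionConfig
apiVersion: v1
resources:
  - resources:
      - secrets
    providers:
      - aescbc:
          keys:
            - name: key1
              secret: ${ENCRYPTION_KEY}
      - identity: {}
EOF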

Kubernetes components are stateless and store cluster state in etcd.

Bootstrapping the etcd Cluster

In this section we will bootstrap the etcd cluster from scratch. First we will download the binaries and copy them into the bin folder.

{
wget -q --show-progress --https-only --timestamping "https://github.com/coreos/etcd/releases/download/v3.3.9/etcd-v3.3.9-linux-amd64.tar.gz"
tar -xvf etcd-v3.3.9-linux-amd64.tar.gz
sudo mv etcd-v3.3.9-linux-amd64/etcd* /usr/local/bin/
sudo mkdir -p /etc/etcd /var/lib/etcd
sudo cp ca.crt etcd-server.key etcd-server.crt /etc/etcd/
}

Once done generate the etcd.service systemd unit file:

ETCD Cluster variables for service file generation
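The screenshot above sets the variables referenced in the unit file. A sketch of equivalent commands, assuming the private interface is eth0 and the hostname matches the node name (on master-1 this yields 192.168.5.4 and master-1):

INTERNAL_IP=$(ip addr show eth0 | grep "inet " | awk '{print $2}' | cut -d/ -f1)
ETCD_NAME=$(hostname -s)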
cat <<EOF | sudo tee /etc/systemd/system/etcd.service
[Unit]
Description=etcd
Documentation=https://github.com/coreos

[Service]
ExecStart=/usr/local/bin/etcd \\
--name ${ETCD_NAME} \\
--cert-file=/etc/etcd/etcd-server.crt \\
--key-file=/etc/etcd/etcd-server.key \\
--peer-cert-file=/etc/etcd/etcd-server.crt \\
--peer-key-file=/etc/etcd/etcd-server.key \\
--trusted-ca-file=/etc/etcd/ca.crt \\
--peer-trusted-ca-file=/etc/etcd/ca.crt \\
--peer-client-cert-auth \\
--client-cert-auth \\
--initial-advertise-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-peer-urls https://${INTERNAL_IP}:2380 \\
--listen-client-urls https://${INTERNAL_IP}:2379,https://127.0.0.1:2379 \\
--advertise-client-urls https://${INTERNAL_IP}:2379 \\
--initial-cluster-token etcd-cluster-0 \\
--initial-cluster master-1=https://192.168.5.4:2380 \\
--initial-cluster-state new \\
--data-dir=/var/lib/etcd
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Make sure to provide the right values as per your cluster architecture.

Let's start the etcd service and validate it.

{
sudo systemctl daemon-reload
sudo systemctl enable etcd
sudo systemctl start etcd
}
sudo ETCDCTL_API=3 etcdctl member list \
--endpoints=https://127.0.0.1:2379 \
--cacert=/etc/etcd/ca.crt \
--cert=/etc/etcd/etcd-server.crt \
--key=/etc/etcd/etcd-server.key

Bootstrapping the Controller Manager, Scheduler and API Server

Create a directory first for Kubernetes configuration -

sudo mkdir -p /etc/kubernetes/config

Download the binaries and set the executable permission on them —

wget -q --show-progress --https-only --timestamping \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-apiserver" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-controller-manager" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-scheduler" \
"https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl"
{
# Update permission
chmod +x kube-apiserver kube-controller-manager kube-scheduler kubectl
# Move it in bin folder
sudo mv kube-apiserver kube-controller-manager kube-scheduler kubectl /usr/local/bin/
}
Operation

Configure the Kubernetes API Server
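Before creating the unit file, the certificates, keys and encryption config generated earlier need to be in the locations the flags below point to, and INTERNAL_IP must be set to the master's private IP. A sketch, assuming the file names generated in the certificate section:

{
INTERNAL_IP=192.168.5.4
sudo mkdir -p /var/lib/kubernetes/
sudo cp ca.crt ca.key kube-apiserver.crt kube-apiserver.key \
  service-account.crt service-account.key \
  etcd-server.crt etcd-server.key encryption-config.yaml /var/lib/kubernetes/
}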

cat <<EOF | sudo tee /etc/systemd/system/kube-apiserver.service
[Unit]
Description=Kubernetes API Server
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-apiserver \\
--advertise-address=${INTERNAL_IP} \\
--allow-privileged=true \\
--apiserver-count=3 \\
--audit-log-maxage=30 \\
--audit-log-maxbackup=3 \\
--audit-log-maxsize=100 \\
--audit-log-path=/var/log/audit.log \\
--authorization-mode=Node,RBAC \\
--bind-address=0.0.0.0 \\
--client-ca-file=/var/lib/kubernetes/ca.crt \\
--enable-admission-plugins=NodeRestriction,ServiceAccount \\
--enable-swagger-ui=true \\
--enable-bootstrap-token-auth=true \\
--etcd-cafile=/var/lib/kubernetes/ca.crt \\
--etcd-certfile=/var/lib/kubernetes/etcd-server.crt \\
--etcd-keyfile=/var/lib/kubernetes/etcd-server.key \\
--etcd-servers=https://192.168.5.4:2379 \\
--event-ttl=1h \\
--encryption-provider-config=/var/lib/kubernetes/encryption-config.yaml \\
--kubelet-certificate-authority=/var/lib/kubernetes/ca.crt \\
--kubelet-client-certificate=/var/lib/kubernetes/kube-apiserver.crt \\
--kubelet-client-key=/var/lib/kubernetes/kube-apiserver.key \\
--kubelet-https=true \\
--runtime-config=api/all \\
--service-account-key-file=/var/lib/kubernetes/service-account.crt \\
--service-cluster-ip-range=10.96.0.0/24 \\
--service-node-port-range=30000-32767 \\
--tls-cert-file=/var/lib/kubernetes/kube-apiserver.crt \\
--tls-private-key-file=/var/lib/kubernetes/kube-apiserver.key \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF

Move the kube-controller-manager kubeconfig into place:

sudo mv kube-controller-manager.kubeconfig /var/lib/kubernetes/

Create the kube-controller-manager.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-controller-manager.service
[Unit]
Description=Kubernetes Controller Manager
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-controller-manager \\
--address=0.0.0.0 \\
--cluster-cidr=192.168.5.0/24 \\
--cluster-name=kubernetes \\
--cluster-signing-cert-file=/var/lib/kubernetes/ca.crt \\
--cluster-signing-key-file=/var/lib/kubernetes/ca.key \\
--kubeconfig=/var/lib/kubernetes/kube-controller-manager.kubeconfig \\
--leader-elect=true \\
--root-ca-file=/var/lib/kubernetes/ca.crt \\
--service-account-private-key-file=/var/lib/kubernetes/service-account.key \\
--service-cluster-ip-range=10.96.0.0/24 \\
--use-service-account-credentials=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Move the kube-scheduler kubeconfig into place:

sudo mv kube-scheduler.kubeconfig /var/lib/kubernetes/

Create the kube-scheduler.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kube-scheduler.service
[Unit]
Description=Kubernetes Scheduler
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-scheduler \\
--kubeconfig=/var/lib/kubernetes/kube-scheduler.kubeconfig \\
--address=127.0.0.1 \\
--leader-elect=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
{
sudo systemctl daemon-reload
sudo systemctl enable kube-apiserver kube-controller-manager kube-scheduler
sudo systemctl start kube-apiserver kube-controller-manager kube-scheduler
}
Verify that the control plane components are healthy:

kubectl get componentstatuses --kubeconfig admin.kubeconfig

Next, generate a client certificate and key for the worker node. Kubelet client certificates must carry the node name in the CN and belong to the system:nodes group. Run these commands on the master node, where the CA key lives:
cat > openssl-worker-1.cnf <<EOF
[req]
req_extensions = v3_req
distinguished_name = req_distinguished_name
[req_distinguished_name]
[ v3_req ]
basicConstraints = CA:FALSE
keyUsage = nonRepudiation, digitalSignature, keyEncipherment
subjectAltName = @alt_names
[alt_names]
DNS.1 = worker-1
IP.1 = 192.168.5.5
EOF
openssl genrsa -out worker-1.key 2048
openssl req -new -key worker-1.key -subj "/CN=system:node:worker-1/O=system:nodes" -out worker-1.csr -config openssl-worker-1.cnf
openssl x509 -req -in worker-1.csr -CA ca.crt -CAkey ca.key -CAcreateserial -out worker-1.crt -extensions v3_req -extfile openssl-worker-1.cnf -days 1000

In this section you will configure RBAC permissions to allow the Kubernetes API Server to access the Kubelet API on each worker node. Access to the Kubelet API is required for retrieving metrics, logs, and exec.

Create the system:kube-apiserver-to-kubelet ClusterRole with permissions to access the Kubelet API and perform most common tasks associated with managing pods:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  annotations:
    rbac.authorization.kubernetes.io/autoupdate: "true"
  labels:
    kubernetes.io/bootstrapping: rbac-defaults
  name: system:kube-apiserver-to-kubelet
rules:
  - apiGroups:
      - ""
    resources:
      - nodes/proxy
      - nodes/stats
      - nodes/log
      - nodes/spec
      - nodes/metrics
    verbs:
      - "*"
EOF

The Kubernetes API Server authenticates to the Kubelet as the kubernetes user using the client certificate as defined by the --kubelet-client-certificate flag.

Bind the system:kube-apiserver-to-kubelet ClusterRole to the kubernetes user:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kubernetes
EOF
Generate a kubeconfig file for the worker-1 node, pointing it at the master's API server address:

{
kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://192.168.5.4:6443 \
--kubeconfig=worker-1.kubeconfig

kubectl config set-credentials system:node:worker-1 \
--client-certificate=worker-1.crt \
--client-key=worker-1.key \
--embed-certs=true \
--kubeconfig=worker-1.kubeconfig

kubectl config set-context default \
--cluster=kubernetes-the-hard-way \
--user=system:node:worker-1 \
--kubeconfig=worker-1.kubeconfig

kubectl config use-context default --kubeconfig=worker-1.kubeconfig
}
Copy the certificates and kubeconfig to the worker node (this assumes worker-1 resolves from the master, e.g. via an /etc/hosts entry):

scp ca.crt worker-1.crt worker-1.key worker-1.kubeconfig worker-1:~/

On Worker Node -

{
sudo apt-get update
sudo apt-get -y install socat conntrack ipset
}

Make sure swap is disabled on the node.
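A minimal sketch; the fstab edit only matters if the image ships with a swap entry:

sudo swapoff -a
# Keep swap off across reboots by commenting out any swap line in /etc/fstab
sudo sed -i '/ swap / s/^/#/' /etc/fstab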

wget -q --show-progress --https-only --timestamping \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubectl \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kube-proxy \
https://storage.googleapis.com/kubernetes-release/release/v1.13.0/bin/linux/amd64/kubelet
sudo mkdir -p \
/etc/cni/net.d \
/opt/cni/bin \
/var/lib/kubelet \
/var/lib/kube-proxy \
/var/lib/kubernetes \
/var/run/kubernetes
{
chmod +x kubectl kube-proxy kubelet
sudo mv kubectl kube-proxy kubelet /usr/local/bin/
}
{
sudo mv ${HOSTNAME}.key ${HOSTNAME}.crt /var/lib/kubelet/
sudo mv ${HOSTNAME}.kubeconfig /var/lib/kubelet/kubeconfig
sudo mv ca.crt /var/lib/kubernetes/
}

Create the kubelet-config.yaml configuration file:

cat <<EOF | sudo tee /var/lib/kubelet/kubelet-config.yaml
kind: KubeletConfiguration
apiVersion: kubelet.config.k8s.io/v1beta1
authentication:
  anonymous:
    enabled: false
  webhook:
    enabled: true
  x509:
    clientCAFile: "/var/lib/kubernetes/ca.crt"
authorization:
  mode: Webhook
clusterDomain: "cluster.local"
clusterDNS:
  - "10.96.0.10"
resolvConf: "/run/systemd/resolve/resolv.conf"
runtimeRequestTimeout: "15m"
EOF

Create the kubelet.service systemd unit file:

cat <<EOF | sudo tee /etc/systemd/system/kubelet.service
[Unit]
Description=Kubernetes Kubelet
Documentation=https://github.com/kubernetes/kubernetes
After=docker.service
Requires=docker.service

[Service]
ExecStart=/usr/local/bin/kubelet \\
--config=/var/lib/kubelet/kubelet-config.yaml \\
--image-pull-progress-deadline=2m \\
--kubeconfig=/var/lib/kubelet/kubeconfig \\
--tls-cert-file=/var/lib/kubelet/${HOSTNAME}.crt \\
--tls-private-key-file=/var/lib/kubelet/${HOSTNAME}.key \\
--network-plugin=cni \\
--register-node=true \\
--v=2
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
Move the kube-proxy kubeconfig into place and create the kube-proxy-config.yaml configuration file:

sudo mv kube-proxy.kubeconfig /var/lib/kube-proxy/kubeconfig

cat <<EOF | sudo tee /var/lib/kube-proxy/kube-proxy-config.yaml
kind: KubeProxyConfiguration
apiVersion: kubeproxy.config.k8s.io/v1alpha1
clientConnection:
  kubeconfig: "/var/lib/kube-proxy/kubeconfig"
mode: "iptables"
clusterCIDR: "192.168.5.0/24"
EOF
cat <<EOF | sudo tee /etc/systemd/system/kube-proxy.service
[Unit]
Description=Kubernetes Kube Proxy
Documentation=https://github.com/kubernetes/kubernetes

[Service]
ExecStart=/usr/local/bin/kube-proxy \\
--config=/var/lib/kube-proxy/kube-proxy-config.yaml
Restart=on-failure
RestartSec=5

[Install]
WantedBy=multi-user.target
EOF
{
sudo systemctl daemon-reload
sudo systemctl enable kubelet kube-proxy
sudo systemctl start kubelet kube-proxy
}

In the previous step we configured a worker node by:

  • Creating a set of key pairs for the worker node ourselves
  • Getting them signed by the CA ourselves
  • Creating a kubeconfig file using this certificate ourselves
  • Every time the certificate expires, we must repeat the same process to renew it ourselves

Master node

Each kubeconfig requires a Kubernetes API Server to connect to. In a highly available setup this would be the address of an external load balancer fronting the API servers; here we will use the master node's IP directly.

Generate a kubeconfig file suitable for authenticating as the admin user:

{
KUBERNETES_ADDRESS=192.168.5.4

kubectl config set-cluster kubernetes-the-hard-way \
--certificate-authority=ca.crt \
--embed-certs=true \
--server=https://${KUBERNETES_ADDRESS}:6443

kubectl config set-credentials admin \
--client-certificate=admin.crt \
--client-key=admin.key

kubectl config set-context kubernetes-the-hard-way \
--cluster=kubernetes-the-hard-way \
--user=admin

kubectl config use-context kubernetes-the-hard-way
}

Download the CNI plugins required for Weave on each of the worker nodes — worker-1:

wget https://github.com/containernetworking/plugins/releases/download/v0.7.5/cni-plugins-amd64-v0.7.5.tgz

Extract it to /opt/cni/bin directory

sudo tar -xzvf cni-plugins-amd64-v0.7.5.tgz --directory /opt/cni/bin/

Deploy the Weave network. Run this only once, on the master node.

kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"

Weave uses POD CIDR of 10.32.0.0/12 by default.
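A quick check from the master node that everything came together (the weave-net pod may take a minute to become Ready):

kubectl get nodes
kubectl get pods -n kube-system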

Your cluster is ready now!

One last step: the Kubernetes API Server authenticates to the Kubelet using the client certificate defined by the --kubelet-client-certificate flag.

Bind the system:kube-apiserver-to-kubelet ClusterRole to the kube-apiserver user:

cat <<EOF | kubectl apply --kubeconfig admin.kubeconfig -f -
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: system:kube-apiserver
  namespace: ""
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: system:kube-apiserver-to-kubelet
subjects:
  - apiGroup: rbac.authorization.k8s.io
    kind: User
    name: kube-apiserver
EOF

Execute a DNS lookup for the kubernetes service inside the busybox pod:
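A sketch of those commands, assuming a cluster DNS add-on (e.g. CoreDNS or kube-dns) has been deployed and is serving the clusterDNS address 10.96.0.10 configured in kubelet-config.yaml:

kubectl run busybox --image=busybox:1.28 --command -- sleep 3600
POD_NAME=$(kubectl get pods -l run=busybox -o jsonpath="{.items[0].metadata.name}")
kubectl exec -ti $POD_NAME -- nslookup kubernetes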

That's all for this story.

Keep learning and stay focused!

In quest of understanding How Systems Work !
