Tuesday, November 6, 2018

How to install a Kubernetes Cluster on CentOS 7

Goal:

How to install a Kubernetes Cluster on CentOS 7

Env:

CentOS 7.4
4 nodes (v1 to v4; v1 will be the master node of the Kubernetes cluster):
  • xx.xx.xx.41 v1.poc.com v1
  • xx.xx.xx.42 v2.poc.com v2
  • xx.xx.xx.43 v3.poc.com v3
  • xx.xx.xx.44 v4.poc.com v4
Kubernetes v1.12.2
Docker 18.06.1-ce

Solution:

1. Node preparation on all nodes

1.1 Disable SELinux

Change it for the current runtime session:
setenforce 0
Change it at the system level by modifying /etc/selinux/config:
SELINUX=disabled
After rebooting, use the command below to confirm SELinux is disabled:
sestatus
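Both changes can also be scripted; a small sketch, assuming the stock /etc/selinux/config layout. Here it runs against a throwaway copy so the edit can be inspected first (point CFG at the real file to apply it):

```shell
# Work on a copy first; set CFG=/etc/selinux/config to apply for real.
CFG=$(mktemp)
printf 'SELINUX=enforcing\nSELINUXTYPE=targeted\n' > "$CFG"

# Flip enforcing (or permissive) to disabled at the system level.
sed -i 's/^SELINUX=\(enforcing\|permissive\)$/SELINUX=disabled/' "$CFG"

grep '^SELINUX=' "$CFG"
```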

1.2 Disable Swap

swapoff -a
Then edit /etc/fstab to comment out the swap entry, for example:
#/dev/mapper/centos-swap swap                    swap    defaults        0 0
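Commenting out the swap line can also be done with sed; a sketch run against a throwaway copy of the file (use /etc/fstab directly to apply it for real):

```shell
# Work on a copy first; use /etc/fstab directly to apply for real.
FSTAB=$(mktemp)
printf '/dev/mapper/centos-root /    xfs  defaults 0 0\n' >  "$FSTAB"
printf '/dev/mapper/centos-swap swap swap defaults 0 0\n' >> "$FSTAB"

# Prefix any active swap entry with '#' so the change survives reboots.
sed -ri 's|^([^#].*[[:space:]]swap[[:space:]])|#\1|' "$FSTAB"

cat "$FSTAB"
```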

1.3 Enable br_netfilter

The br_netfilter module is required to enable transparent masquerading and to facilitate VXLAN traffic for communication between Kubernetes pods across the cluster.
modprobe br_netfilter
echo '1' > /proc/sys/net/bridge/bridge-nf-call-iptables
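Note that the echo above does not survive a reboot. A sketch of making both the module load and the sysctl setting persistent (the k8s.conf file names are a common convention, not mandated):

```shell
# Load br_netfilter automatically on every boot.
cat <<EOF > /etc/modules-load.d/k8s.conf
br_netfilter
EOF

# Keep bridged traffic visible to iptables across reboots.
cat <<EOF > /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
```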

2. Install Docker

Follow the documentation below:
https://docs.docker.com/install/linux/docker-ce/centos/
Quick commands are:
yum install -y yum-utils device-mapper-persistent-data lvm2
yum-config-manager --add-repo https://download.docker.com/linux/centos/docker-ce.repo
yum install -y docker-ce
systemctl start docker
systemctl enable docker
After that, Docker should be started and enabled.
$ ps -ef|grep -i docker
root      2468     1  0 16:41 ?        00:00:01 /usr/bin/dockerd
root      2476  2468  0 16:41 ?        00:00:00 docker-containerd --config /var/run/docker/containerd/containerd.toml
And verify the installation by running:
docker run hello-world

3. Install Kubernetes tools: kubectl, kubelet and kubeadm

Follow the documentation below:
https://kubernetes.io/docs/tasks/tools/install-kubectl/
https://kubernetes.io/docs/setup/independent/install-kubeadm/
Quick commands are:
cat <<EOF > /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-x86_64
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
EOF
yum install -y kubectl kubelet kubeadm 
Enable kubelet:
systemctl enable kubelet
Then reboot all the nodes.
Note: When using Docker, kubeadm automatically detects the cgroup driver for the kubelet and sets it in the /var/lib/kubelet/kubeadm-flags.env file at runtime. For example:
$ cat /var/lib/kubelet/kubeadm-flags.env
KUBELET_KUBEADM_ARGS=--cgroup-driver=cgroupfs --network-plugin=cni

4. Initialize Kubernetes Cluster on master node

Follow the documentation below:
https://kubernetes.io/docs/setup/independent/create-cluster-kubeadm/ 
Quick commands are:
kubeadm init --pod-network-cidr 10.244.0.0/16
Here we choose flannel as the pod network. Flannel runs a small, single-binary agent called flanneld on each host, which is responsible for allocating a subnet lease to each host out of a larger, preconfigured address space.
That address space defaults to 10.244.0.0/16 in flannel's manifest, which is why we specify "--pod-network-cidr 10.244.0.0/16" here.
The steps to deploy flannel are in step #5.3.

If it completes successfully, save the "kubeadm join" command below, which will be used to join the worker nodes to this Kubernetes cluster.
For example:
kubeadm join xx.xx.xx.41:6443 --token 65l31r.cc43l28kcyx4xefp --discovery-token-ca-cert-hash sha256:9c6ec245668161a61203776a0621911463df72b80a32590d9fa2bb16da2a46ac
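The bootstrap token in that join command expires (after 24 hours by default). If it has expired by the time you add a node, a fresh join command can be printed on the master with stock kubeadm subcommands; a sketch:

```shell
# Generate a new token and print the full join command to run on workers.
kubeadm token create --print-join-command

# Or inspect the existing tokens and their TTLs.
kubeadm token list
```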

5. Configure Kubernetes Cluster on master node

5.1 Create a user named "testuser" with sudo privileges

5.2 Create the Kubernetes cluster config for testuser

As testuser:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
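Alternatively, when working as the root user, kubectl can be pointed at the admin kubeconfig directly instead of copying it; a sketch for the current shell session:

```shell
# Point kubectl at the cluster-admin kubeconfig for this session only.
export KUBECONFIG=/etc/kubernetes/admin.conf
echo "$KUBECONFIG"
```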

After that, run simple commands to verify:
$ kubectl get nodes
NAME         STATUS     ROLES    AGE   VERSION
v1.poc.com   NotReady   master   21m   v1.12.2

5.3 Deploy the flannel network

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

5.4 Master Isolation

Here I want to be able to schedule pods on the master node as well:
kubectl taint nodes --all node-role.kubernetes.io/master-

5.5 Join other nodes into Kubernetes Cluster

As the root user, on all other nodes:
kubeadm join xx.xx.xx.41:6443 --token 65l31r.cc43l28kcyx4xefp --discovery-token-ca-cert-hash sha256:9c6ec245668161a61203776a0621911463df72b80a32590d9fa2bb16da2a46ac
Verify on master node as testuser:
$ kubectl get nodes
NAME         STATUS   ROLES    AGE   VERSION
v1.poc.com   Ready    master   14m   v1.12.2
v2.poc.com   Ready    <none>   28s   v1.12.2
v3.poc.com   Ready    <none>   25s   v1.12.2
v4.poc.com   Ready    <none>   23s   v1.12.2

6. Test by creating an nginx pod and service

6.1 Create an nginx deployment

kubectl create deployment nginx --image=nginx
Verify the nginx pod is running:
$ kubectl get pods
NAME                    READY   STATUS    RESTARTS   AGE
nginx-55bd7c9fd-7v9kz   1/1     Running   0          14s

6.2 Expose nginx service

kubectl create service nodeport nginx --tcp=80:80
Verify the service:
$ kubectl get svc
NAME         TYPE        CLUSTER-IP     EXTERNAL-IP   PORT(S)        AGE
kubernetes   ClusterIP   10.96.0.1      <none>        443/TCP        18m
nginx        NodePort    10.103.4.141   <none>        80:31707/TCP   24s
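The assigned NodePort is picked at random from the 30000-32767 range, so it will differ on your cluster. A sketch of reading it programmatically instead of parsing the table by eye (assumes the service is named nginx, as above):

```shell
# Extract the NodePort that Kubernetes assigned to the nginx service.
NODE_PORT=$(kubectl get svc nginx -o jsonpath='{.spec.ports[0].nodePort}')
echo "nginx NodePort: $NODE_PORT"
```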

6.3 Test nginx service

From the output above, we know nginx is exposed on port 31707 of every node.
Test by running the commands below, or open the addresses in a browser:
curl v1.poc.com:31707
curl v2.poc.com:31707
curl v3.poc.com:31707
curl v4.poc.com:31707
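The four checks can be looped over the node names; a sketch, assuming the hostnames resolve and substituting the NodePort shown by `kubectl get svc` (31707 here):

```shell
# Hit the NodePort on every node; each one should serve the nginx page.
PORT=31707
for node in v1 v2 v3 v4; do
  echo "--- $node.poc.com:$PORT"
  curl -s --max-time 5 "$node.poc.com:$PORT" | head -n 4
done
```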

Common Issues

1. "kubeadm init" fails if swap is not turned off.
$ kubeadm init
[init] using Kubernetes version: v1.12.2
[preflight] running pre-flight checks
[preflight] Some fatal errors occurred:
 [ERROR Swap]: running with swap on is not supported. Please disable swap
[preflight] If you know what you are doing, you can make a check non-fatal with `--ignore-preflight-errors=...`
The solution is step #1.2 above: disable swap.
2. "kubectl" fails if there is no config for the Kubernetes cluster
$ kubectl get pods
The connection to the server localhost:8080 was refused - did you specify the right host or port?
The reason is that there is no Kubernetes cluster config for this specific user, so kubectl falls back to the default API server address, localhost:8080.
The solution is step #5.2 above.
After that, kubectl will connect to the API server specified in the config file:
[testuser@v1 .kube]$ pwd
/home/testuser/.kube
[testuser@v1 .kube]$ cat config  |grep server
    server: https://xx.xx.xx.41:6443
And the API server should be listening on this port 6443:
# netstat -anp|grep 6443|grep LISTEN
tcp6       0      0 :::6443                 :::*                    LISTEN      9658/kube-apiserver


