Set up Docker
Step 1: Install Docker
Kubernetes requires an existing Docker installation. If you already have Docker installed, skip ahead to Step 2.
If you do not have Docker, install it by following these steps:
1. Update the package list with the command:
sudo apt-get update
2. Next, install Docker with the command:
sudo apt-get install docker.io
3. Repeat the process on each server that will act as a node.
4. Check the installation (and version) by entering the following:
docker --version
Step 2: Start and Enable Docker
1. Set Docker to launch at boot by entering the following:
sudo systemctl enable docker
2. Verify Docker is running:
sudo systemctl status docker
To start Docker if it’s not running:
sudo systemctl start docker
3. Repeat on all the other nodes.
Install Kubernetes
Step 1: Install Kubernetes
In this step, we will install Kubernetes. Just as you did with Docker in the prerequisites, you must run these commands on both nodes. Use ssh to log into each node and proceed. Start by installing the apt-transport-https package, which enables working with http and https in Ubuntu's repositories, and curl, which will be necessary for the next steps. Execute the following command:
sudo apt install apt-transport-https curl
Then, add the Kubernetes signing key to both nodes by executing the command:
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
Next, we add the Kubernetes repository as a package source on both nodes using the following command:
| echo "deb https://apt.kubernetes.io/ kubernetes-xenial main" >> ~/kubernetes.list sudo mv ~/kubernetes.list /etc/apt/sources.list.d |
After that, update the package lists on both nodes:
sudo apt update
Once the update completes, we will install Kubernetes. This involves installing the various tools that make up Kubernetes: kubeadm (which bootstraps and manages the cluster), kubelet (the agent that runs on each node and starts pods), kubectl (the command-line tool used to talk to the cluster), and kubernetes-cni (the container network interface plugins). These tools are installed on both nodes. You can install each tool individually, for example:
sudo apt-get install -y kubernetes-cni
Optionally, you can install all four in a single command:
sudo apt-get install -y kubelet kubeadm kubectl kubernetes-cni
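Optionally, you can also pin these packages so that routine apt upgrades do not move cluster components unexpectedly; cluster upgrades should then be done deliberately (releasing the pin with apt-mark unhold):
sudo apt-mark hold kubelet kubeadm kubectl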
Step 2: Disabling Swap Memory
Kubernetes fails to function on a system that is using swap memory. Hence, it must be disabled on the master node and all worker nodes. Execute the following command to disable swap memory:
sudo swapoff -a
This command disables swap memory until the system is rebooted. We have to ensure that it remains off even after reboots, on the master and all worker nodes. We can do this by editing the fstab file and commenting out the /swapfile line with a #. Open the file with the nano text editor by entering the following command:
sudo nano /etc/fstab
Inside the file, comment out the swapfile line by adding a # in front of it, so it looks like this:
#/swapfile none swap sw 0 0
If you do not see the swapfile line, just ignore it. Save and close the file when you are done editing. Follow the same process for both nodes. Now, swap memory settings will remain off, even after your server reboots.
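To verify that swap is indeed off, you can check with either of the following commands; swapon prints nothing when no swap is active, and free should show zeros in the Swap row:
swapon --show
free -h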
Step 3: Setting Unique Hostnames
Your nodes must have unique hostnames for easier identification. If you are deploying a cluster with many nodes, you can set descriptive names for your worker nodes, such as node-1, node-2, etc. As mentioned earlier, we named our nodes kubernetes-master and kubernetes-worker, and we set the names at the time of creating the servers. However, you can adjust or set yours from the command line if you have not already done so. To adjust the hostname on the master node, run the following command:
sudo hostnamectl set-hostname kubernetes-master
On the worker node, run the following command:
sudo hostnamectl set-hostname kubernetes-worker
You may close the current terminal session and ssh back into the server to see the changes.
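You can also confirm the change without reconnecting by running hostnamectl with no arguments. Optionally, mapping each node's IP to its hostname in /etc/hosts on every node lets the nodes resolve each other by name; a small sketch, with placeholder addresses you would replace with your own:
hostnamectl
# Optional: add hostname-to-IP mappings on every node (example addresses)
echo "192.0.2.10 kubernetes-master" | sudo tee -a /etc/hosts
echo "192.0.2.11 kubernetes-worker" | sudo tee -a /etc/hosts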
Step 4: Letting Iptables See Bridged Traffic
For the master and worker nodes to correctly see bridged traffic, you should ensure net.bridge.bridge-nf-call-iptables is set to 1 in your config. First, ensure the br_netfilter module is loaded. You can confirm this by issuing the command:
lsmod | grep br_netfilter
If it is not listed, you can explicitly load it with the command:
sudo modprobe br_netfilter
Now, you can run this command to set the value to 1:
sudo sysctl net.bridge.bridge-nf-call-iptables=1
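Note that a value set this way lasts only until reboot. To persist both the module and the setting across reboots, a minimal sketch (the filenames are our choice):
# Load br_netfilter at boot
echo "br_netfilter" | sudo tee /etc/modules-load.d/k8s.conf
# Persist the bridge sysctl setting and reload all sysctl files
echo "net.bridge.bridge-nf-call-iptables = 1" | sudo tee /etc/sysctl.d/k8s.conf
sudo sysctl --system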
Step 5: Changing Docker Cgroup Driver
By default, Docker installs with "cgroupfs" as the cgroup driver. Kubernetes recommends that Docker should run with "systemd" as the driver. If you skip this step and try to initialize kubeadm in the next step, you will get the following warning in your terminal:
[preflight] Running pre-flight checks
[WARNING IsDockerSystemdCheck]: detected "cgroupfs" as the Docker cgroup driver. The recommended driver is "systemd". Please follow the guide at https://kubernetes.io/docs/setup/cri/
On both master and worker nodes, update the cgroup driver with the following commands:
sudo mkdir -p /etc/docker
cat <<EOF | sudo tee /etc/docker/daemon.json
{
  "exec-opts": ["native.cgroupdriver=systemd"],
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m"
  },
  "storage-driver": "overlay2"
}
EOF
Then, execute the following commands to restart and enable Docker on system boot-up:
sudo systemctl enable docker
sudo systemctl daemon-reload
sudo systemctl restart docker
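You can verify that the new driver is in effect by inspecting Docker's info output, which should now report systemd as the cgroup driver:
sudo docker info | grep -i "cgroup driver"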
Once that is set, we can proceed to the fun stuff, deploying the Kubernetes cluster!
Step 6: Initializing the Kubernetes Master Node
The first step in deploying a Kubernetes cluster is to fire up the master node. While on the terminal of your master node, execute the following command to initialize the kubernetes-master:
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
If you execute the above command and your system does not match the expected requirements, such as the minimum RAM or CPU explained in the Prerequisites section, you will get a warning and the cluster will not start.
Note: If you are building for production, it's a good idea to always meet the minimum requirements for Kubernetes to run smoothly. However, if you are doing this tutorial for learning purposes, you can add the following flag to the kubeadm init command to ignore the error warnings:
sudo kubeadm init --ignore-preflight-errors=NumCPU,Mem --pod-network-cidr=10.244.0.0/16
If the initialization succeeds, the command output confirms it. We have also added a flag to specify the pod network with the CIDR 10.244.0.0/16, which is the default range that kube-flannel uses. We will discuss the pod network further in the next step.
In the output, you can see the kubeadm join command (we've hidden our IP address) with a unique token that you will run on the worker node, and on any other worker nodes you want to join to this cluster. Copy this command, as you will use it later on the worker node.
In the output, Kubernetes also displays some additional commands that you should run as a regular user on the master node before you start to use the cluster. Let's run these commands:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
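As a quick sanity check that kubectl can now reach the cluster, you can list the nodes; expect the master to report a NotReady status until the pod network is deployed in the next step:
kubectl get nodes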
We have now initialized the master node. However, we also have to set up the pod network on the master node before we join the worker nodes.
Step 7: Deploying a Pod Network
A pod network facilitates communication between servers, and it's necessary for the proper functioning of the Kubernetes cluster. You can read more about Kubernetes Cluster Networking in the official docs. We will be using the Flannel pod network for this tutorial. Flannel is a simple overlay network that satisfies the Kubernetes requirements.
Before we deploy the pod network, we need to check the firewall status. If you have enabled the firewall after following step 5 of the tutorial on setting up your Ubuntu server, you must first add a firewall rule to create exceptions for port 6443 (the default port for Kubernetes). Run the following ufw commands on both master and worker nodes:
sudo ufw allow 6443
sudo ufw allow 6443/tcp
After that, you can run the following two commands to deploy the pod network on the master node:
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/k8s-manifests/kube-flannel-rbac.yml
This may take from a couple of seconds to a minute, depending on your environment, for the flannel network to come up. Run the following command to confirm that everything is fired up:
kubectl get pods --all-namespaces
If everything was successful, the output of the command should show all pods with a Running status.
You can also view the health of the cluster components using the componentstatus command:
kubectl get componentstatus
This command also has a short form, kubectl get cs.
If you see an unhealthy status, modify the following two files and delete the line (under spec > containers > command) containing the phrase --port=0:
sudo nano /etc/kubernetes/manifests/kube-scheduler.yaml
Do the same for this file:
sudo nano /etc/kubernetes/manifests/kube-controller-manager.yaml
Finally, restart the kubelet service:
sudo systemctl restart kubelet.service
Step 8: Joining Worker Nodes to the Kubernetes Cluster
With the kubernetes-master node up and the pod network ready, we can join our worker nodes to the cluster. In this tutorial, we only have one worker node, so we will be working with that. If you have more worker nodes, you can always follow the same steps as explained below to join them to the cluster.
First, log into your worker node in a separate terminal session. You will use the kubeadm join command that was shown in your terminal when we initialized the master node in Step 6. Execute the command:
sudo kubeadm join 127.0.0.188:6443 --token u81y02.91gqwkxx6rnhnnly --discovery-token-ca-cert-hash sha256:4482ab1c66bf17992ea02c1ba580f4af9f3ad4cc37b24f189db34d6e3fe95c2d
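If you no longer have this command, or the token has expired (tokens are valid for 24 hours by default), you can print a fresh join command from the master node:
sudo kubeadm token create --print-join-command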
When it completes joining the cluster, you should see output confirming that the worker node has been added.
Once the joining process completes, switch to the master node terminal and execute the following command to confirm that your worker node has joined the cluster:
kubectl get nodes
In the output of the command, you can see that the worker node has joined the cluster.
Step 9: Deploying an Application to the Kubernetes Cluster
At this point, you have successfully set up a Kubernetes cluster. Let's make the cluster usable by deploying a service to it. Nginx is a popular web server boasting incredible speeds even with thousands of connections. We will deploy the Nginx web server to the cluster to prove that you can use this setup in a real-life application.
Execute the following command on the master node to create a Kubernetes deployment for Nginx:
kubectl create deployment nginx --image=nginx
You can view the created deployment by using the describe deployment command:
kubectl describe deployment nginx
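To confirm that the rollout has finished, you can also list the deployment and its pods with the standard kubectl commands:
kubectl get deployments
kubectl get pods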
To make the nginx service accessible via the internet, run the following command:
kubectl create service nodeport nginx --tcp=80:80
The command above will create a public-facing service for the Nginx deployment. Since this is a NodePort service, Kubernetes assigns the service a port from the 30000-32767 range.
You can list the current services by issuing the command:
kubectl get service
In our case, the assigned port is 32264. Take note of the port displayed in your terminal to use in the next step.
To verify that the Nginx service deployment is successful, issue a curl call from the master to the worker node. Replace the IP with your worker node's IP, along with the port you got from the above command:
curl your-kubernetes-worker-ip:32264
You should see the output of the default Nginx index.html. Optionally, you can visit the worker node IP address and port combination in your browser and view the default Nginx index page there.
You can delete a deployment by specifying the name of the deployment. For example, this command will delete our deployment:
kubectl delete deployment nginx
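Note that the NodePort service we created in this step is a separate object and is not removed with the deployment; to clean up fully, delete it as well:
kubectl delete service nginx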
We have now successfully tested our cluster!
Conclusion
In this tutorial, you have learned how to install a Kubernetes cluster on Ubuntu 20.04. You set up a cluster consisting of a master node and a worker node, installed the Kubernetes toolset, created a pod network, and joined the worker node to the master node. We also tested our setup with a basic deployment of an Nginx web server to the cluster. This should serve as a foundation for working with Kubernetes clusters on Ubuntu.
While we only used one worker node, you can extend your cluster with as many nodes as you wish. If you would like to go deeper into DevOps with automation tools like Ansible, we have a tutorial that delves into provisioning Kubernetes cluster deployments with Ansible and Kubeadm; check it out. If you want to learn how to deploy a PHP application on a Kubernetes cluster, check this tutorial.
Happy Computing!