Create a multi-cloud Kubernetes cluster

Urvishtalaviya
4 min read · Mar 30, 2021

Hey there!!!

In this article, we’ll create a multi-cloud Kubernetes cluster. You first need accounts with the cloud providers involved; for this practical, I have used AWS, Azure, and a local virtual machine. So, let’s get started by creating the master node on AWS.

Master node or Control plane

First, create an instance on AWS; I have used an Amazon Linux image. Kubernetes requires a container runtime, so we’ll install the Docker engine on the instance and then start the Docker service.

yum install docker -y
systemctl enable docker --now

Now, let’s install kubeadm, kubelet, and kubectl, but yum has to be configured first. Here is the command for doing it; just copy and paste it into the terminal.

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

The command for installing kubeadm, kubelet, and kubectl and enabling the kubelet service:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

The Kubernetes control plane runs as a set of pods that maintain and configure the entire cluster. To fetch their container images in advance, use this command:

kubeadm config images pull
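If you are curious which images kubeadm is about to fetch, it can print the list first (a quick check, assuming kubeadm is already installed):

```shell
# Print the control-plane images (apiserver, controller-manager,
# scheduler, etcd, coredns, pause) that `kubeadm config images pull`
# will download for the installed kubeadm version.
kubeadm config images list
```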

By default, Docker uses the cgroupfs cgroup driver, but kubeadm expects the systemd driver, so we must change it. To change the driver, write this configuration:

cat <<EOF > /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF

Then restart the Docker service with systemctl restart docker. To verify the change, run docker info | grep -i cgroup.

Kubernetes also requires one more package for managing traffic on Linux systems: iproute-tc.

yum install iproute-tc

Now, almost everything is in place; let’s initialize the Kubernetes control plane.

kubeadm init --control-plane-endpoint "PUBLIC_IP:PORT" --pod-network-cidr=10.244.0.0/16 --ignore-preflight-errors=NumCPU --ignore-preflight-errors=Mem

Use the public IP of your instance and port 6443. The --ignore-preflight-errors flags let kubeadm proceed on small instances with fewer than two CPUs or less than 1700 MB of RAM. This command will initialize the Kubernetes control plane.

Then run these commands so that kubectl can manage the Kubernetes cluster:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
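As a quick sanity check (assuming the init finished successfully), kubectl should now be able to reach the API server:

```shell
# Confirm the API server is reachable and the control-plane node is registered.
kubectl cluster-info
kubectl get nodes
```

The node will report NotReady at this point; that is expected until the overlay network from the next step is applied.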

The final step for the control plane is to deploy Flannel. Flannel creates an overlay network between the different nodes. The command below downloads the Flannel manifest and applies it:

kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/master/Documentation/kube-flannel.yml

Note: Weave Net serves the same purpose as Flannel but has some extra features, such as support for Kubernetes network policies.
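For reference, at the time of writing Weave Net could be deployed with a one-liner much like Flannel’s (taken from its then-current install docs; the manifest URL may have changed since):

```shell
# Apply the Weave Net manifest matched to the running Kubernetes version.
kubectl apply -f "https://cloud.weave.works/k8s/net?k8s-version=$(kubectl version | base64 | tr -d '\n')"
```

Install only one pod network per cluster: either Flannel or Weave Net, not both.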

Nodes

For this practical, we will launch three worker instances on different platforms: the first on AWS, the second on Azure, and the third on a local Oracle VirtualBox machine.

Note: Disable swap on all the instances; kubelet refuses to run while swap is enabled. To disable swap, use the swapoff -a command.
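swapoff -a only lasts until the next reboot. A common pattern (sketched here; check your fstab before editing) is to also comment out the swap entry so the setting survives a reboot:

```shell
# Turn swap off immediately...
swapoff -a
# ...and comment out any swap lines in /etc/fstab (keeping a .bak backup)
# so swap stays off after a reboot.
sed -i.bak '/ swap / s/^/#/' /etc/fstab
```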

Azure has no Amazon Linux image, so I will use CentOS 8 on Azure and RHEL 8 on the VirtualBox VM. For CentOS and RHEL, we have to configure yum for the Docker engine. Copy this step to do it:

cat <<EOF > /etc/yum.repos.d/docker-ce.repo
[docker-ce]
name=Docker CE Stable
baseurl=https://download.docker.com/linux/centos/7/x86_64/stable/
gpgcheck=0
EOF

On RHEL 8 and CentOS 8:

yum install docker-ce -y --nobest

On Amazon Linux:

yum install docker -y

And then start the service:

systemctl enable docker --now

The Kubernetes control plane and worker nodes share several setup steps: the Kubernetes packages, the Docker cgroup driver, and the iproute-tc package. The commands are given below.

yum configuration for kubeadm, kubelet, and kubectl:

cat <<EOF | sudo tee /etc/yum.repos.d/kubernetes.repo
[kubernetes]
name=Kubernetes
baseurl=https://packages.cloud.google.com/yum/repos/kubernetes-el7-\$basearch
enabled=1
gpgcheck=1
repo_gpgcheck=1
gpgkey=https://packages.cloud.google.com/yum/doc/yum-key.gpg https://packages.cloud.google.com/yum/doc/rpm-package-key.gpg
exclude=kubelet kubeadm kubectl
EOF

Installation and starting the kubelet service:

yum install -y kubelet kubeadm kubectl --disableexcludes=kubernetes
systemctl enable --now kubelet

Docker cgroup driver and restarting the service

cat <<EOF > /etc/docker/daemon.json
{ "exec-opts": ["native.cgroupdriver=systemd"] }
EOF
systemctl restart docker

iproute-tc

yum install iproute-tc

Besides these, the worker nodes need a few more networking-related settings, which let iptables see bridged traffic:

cat <<EOF | sudo tee /etc/sysctl.d/k8s.conf
net.bridge.bridge-nf-call-ip6tables = 1
net.bridge.bridge-nf-call-iptables = 1
EOF
sysctl --system
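If sysctl --system does not show those two keys, the br_netfilter kernel module may not be loaded yet; you can load it and verify like this:

```shell
# Load the bridge netfilter module (required for the bridge-nf-call sysctls).
modprobe br_netfilter
# Both keys should print "= 1".
sysctl net.bridge.bridge-nf-call-iptables net.bridge.bridge-nf-call-ip6tables
```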

With that, the worker node has been configured; we just need to join it to the Kubernetes control plane.

On the Kubernetes control plane, run this command; it will print the join command, including a token, for connecting to the control plane.

kubeadm token create --print-join-command

The join command would look something like this:

kubeadm join <IP>:<Port> --token <token> --discovery-token-ca-cert-hash sha256:<hash>

Paste the command into the terminal of each worker node, and we are done.

Run the kubectl get nodes command on the control plane to check the status of all the nodes.
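With nodes spread across clouds, the wide output is handy for telling the machines apart:

```shell
# -o wide adds each node's internal IP, OS image, and container runtime,
# which makes it easy to distinguish the AWS, Azure, and VirtualBox machines.
kubectl get nodes -o wide
```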

Thanks for reading…

Feel free to give suggestions.
