Hcrypt

Automating Kubernetes cluster installation


I had taken a look at Kubernetes a few months ago, and it seemed a lot more complicated than I expected; I also had quite a bit on my plate at the time. So I thought, why not start learning it again? I had never installed it locally before, and it took me some time to figure things out before I got it up and running.

Well, that’s just my short story.

Enough with the words, let’s get into the real meat.

Ansible

I am pretty sure most of you have already figured out what I’ll be using to automate the installation.

Ansible is a very powerful tool for automating just about anything that deals with infrastructure management. Long gone are the days of writing hacky bash scripts.

I won’t go over Ansible itself in much depth. There are tons of great tutorials out there, but I would recommend checking out Jeff Geerling’s Ansible series and his book Ansible for DevOps.

Configuration

We need to set up a few things first. I’ll be using one control plane and two worker nodes for this post, all running Ubuntu 22.04.

You will also need to assign static IPs to your VMs if you haven’t already done so. I am using Proxmox to spin up the 3 VMs; choose whichever hypervisor you are comfortable with.

A control plane usually wants a minimum of 4 GB of RAM and 2 cores. The worker nodes can get by with 2 GB of RAM and 1 core each.

After installation, you will have to set up SSH keys.

  • Generate SSH keys (for all your hosts)
ssh-keygen -a 100 -t ed25519 -f ~/.ssh/kubecontrol
  • Copy it (same as above)
ssh-copy-id -i ~/.ssh/kubecontrol root@host
  • It would be a good idea to edit your hosts file so that you do not have to remember the IPs.
vim /etc/hosts
  • Your IP address(es) go on the left and the hostname(s) on the right
192.168.10.10       host
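
For this setup, the file could look something like this (the IPs below are placeholders; use the static IPs you assigned to your VMs):

192.168.10.10       kubecontrol
192.168.10.11       kubew1
192.168.10.12       kubew2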

Now, we can proceed to the installation.

Installing Kubernetes

Kubernetes worker nodes and the control plane have a few requirements. Some of them are:

  • disable swap
  • enable IPv4 forwarding
  • load the br_netfilter kernel module
  • a container runtime such as containerd

Let’s write the playbook for it.

Ansible Playbook

First, we have to create an ansible.cfg that points to our inventory, and an inventory file that lists our hosts along with variables such as the SSH key and user for each one.

  • ansible.cfg
[defaults]
inventory = inventory
  • inventory
kubecontrol ansible_ssh_private_key_file=~/.ssh/kubecontrol ansible_user=kubecontrol

kubew1 ansible_ssh_private_key_file=~/.ssh/kubew1 ansible_user=kubew1

kubew2 ansible_ssh_private_key_file=~/.ssh/kubew2 ansible_user=kubew2

[control]
kubecontrol

[workers]
kubew1
kubew2

Let’s go over it

Variable                        What it does
kubecontrol, kubew1, kubew2     hostnames
ansible_ssh_private_key_file    path to your SSH private key
ansible_user                    the user Ansible logs in as
[control] and [workers]         group names
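
Before moving on, it’s worth confirming that Ansible can actually reach all three hosts. From the directory containing ansible.cfg, a quick ping does the trick:

ansible all -m ping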

Onto the playbook now!

It’s a big one, so we’ll break it into small parts to make it easier to follow.

The full playbook is available here.

  • We first specify the hosts and set become: yes so that all the tasks run with sudo
- hosts: all
  become: yes

  tasks:
    - name: update package cache
      apt:
        update_cache: yes

    - name: install required packages
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - containerd # container run time for kubernetes
        state: present
  • Next, we load the br_netfilter module by running modprobe br_netfilter. Then, we create an empty directory for storing the containerd config file.
    - name: load br_netfilter
      command: modprobe br_netfilter

    - name: create a containerd directory
      file:
        path: /etc/containerd
        state: directory

    - name: Generate containerd default configuration
      command: containerd config default # this will generate the default config
      register: containerd_output  # register basically copies the output to a variable
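
Note that modprobe only loads the module until the next reboot. To make it persistent, you could add a task along these lines (a sketch; the file name k8s.conf is my own choice):

    - name: persist br_netfilter across reboots
      copy:
        content: "br_netfilter\n"
        dest: /etc/modules-load.d/k8s.conf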

  • Now, we have to save the registered config to /etc/containerd/. I had to do it in two steps because I was getting permission denied when writing directly to /etc/containerd/.

  • The last block is about cgroups, which manage system resources such as CPU and RAM for the containers. I used a regex to find the SystemdCgroup line and set it to true.

    - name: Save configuration to $HOME/config.toml
      copy:
        content: "{{ containerd_output.stdout }}" # the output we registered earlier
        dest: $HOME/config.toml

    - name: move it (can't directly create on /etc/containerd for some reason)
      command: mv $HOME/config.toml /etc/containerd/config.toml


    - name: modify config.toml to use the systemd cgroup driver
      lineinfile:
        path: /etc/containerd/config.toml
        regexp: '^(\s*)SystemdCgroup ='   # the line is indented in the generated config
        line: '\1SystemdCgroup = true'
        backrefs: yes
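
For reference, in containerd’s generated config this line sits under the runc options section and should look like this after the change:

[plugins."io.containerd.grpc.v1.cri".containerd.runtimes.runc.options]
  SystemdCgroup = true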
  • We are almost done! The part after this is just installing kubelet, kubectl, and kubeadm.

  • Turn off swap with swapoff -a. After that, we have to set net.ipv4.ip_forward to 1 in /etc/sysctl.conf so that the nodes can route pod traffic properly.

    - name: turn off swap
      command: swapoff -a

    - name: uncomment net.ipv4.ip_forward in sysctl.conf
      lineinfile:
        path: /etc/sysctl.conf
        regexp: '^#?net.ipv4.ip_forward='
        line: 'net.ipv4.ip_forward=1'
        state: present
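
Two caveats here: editing /etc/sysctl.conf does not apply the setting to the running kernel, and swapoff -a does not survive a reboot. You could cover both with tasks like these (a sketch):

    - name: apply sysctl settings
      command: sysctl -p

    - name: comment out swap entries in /etc/fstab
      replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'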

Installing the Kubernetes packages

We’ve taken a look at the bulk of the playbook. Now all that’s left is installing the necessary packages. I am not going to go over this part in detail because it follows the official docs almost verbatim.

    - name: check if GPG key file exists
      stat:
        path: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      register: gpg_key_file
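
    # Note: on Ubuntu 22.04 the /etc/apt/keyrings directory may not exist by
    # default, so it is worth creating it first (an extra task I added, per the docs):
    - name: ensure the apt keyrings directory exists
      file:
        path: /etc/apt/keyrings
        state: directory
        mode: '0755'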


    - name: download Kubernetes GPG key
      shell: curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg # become: yes already runs this as root
      when: not gpg_key_file.stat.exists


    - name: add Kubernetes repository
      shell: echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | tee /etc/apt/sources.list.d/kubernetes.list

    - name: update package cache after adding repository
      apt:
        update_cache: yes

    - name: install Kubernetes components
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
        state: present

    - name: hold Kubernetes packages to prevent automatic updates
      command: apt-mark hold kubelet kubeadm kubectl
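
With that, the playbook is complete. Save it (I’ll assume playbook.yml, sitting next to ansible.cfg and the inventory) and run it against all hosts:

ansible-playbook playbook.yml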

Phew! That was a lot to take in. Now you can initialize the cluster from the control plane by running

sudo kubeadm init --pod-network-cidr=10.244.0.0/16
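
Once it finishes, kubeadm prints a few commands for making kubectl usable as your regular user; per the official docs, they are:

mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

The 10.244.0.0/16 pod CIDR used above matches Flannel’s default, which makes Flannel an easy choice for the pod network add-on.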

After that, kubeadm will print a join command with a token that you can use to join the worker nodes to the control plane.
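
Run it on each worker; it looks something like this (the token and hash are placeholders, use the ones kubeadm prints for you):

sudo kubeadm join 192.168.10.10:6443 --token <token> --discovery-token-ca-cert-hash sha256:<hash>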

That’s it for this one. Keep experimenting!