
Automating Kubernetes cluster installation


I had looked into Kubernetes a few months ago, and it turned out to be a lot more complicated than I expected. I also had quite a lot on my plate at the time. Recently I thought, why not start learning it again? I had never installed it locally before, and it took me some time to figure things out before I got a cluster up and running.

Well, that’s just my short story.

Enough with the words, let’s get into the real meat.

Ansible

I am pretty sure most of you have already figured out what I'll be using to automate the installation.

Ansible is a very powerful tool for automating just about anything that deals with infrastructure management. Long gone are the days of writing hacky bash scripts.

I won't go over Ansible itself in much detail. There are tons of great tutorials out there, but I would particularly recommend Jeff Geerling's Ansible series and his book Ansible for DevOps.

Configuration

We need to set up a few things first. For this post, I'll be using 1 control plane node and 2 worker nodes, all running Ubuntu 22.04.

You will also need to assign static IPs to your VMs if you haven't already done so. I am using Proxmox to spin up the 3 VMs, but choose whichever hypervisor you are comfortable with.

The control plane node usually needs a minimum of 4 GB of RAM and 2 cores; worker nodes need at least 2 GB of RAM and 1 core.

After installing the OS, you will have to set up SSH keys so Ansible can reach the nodes.

# generate a dedicated key pair for the control plane node
ssh-keygen -a 100 -t ed25519 -f ~/.ssh/kubecontrol

# copy the public key to the account Ansible will log in as (ansible_user in the inventory)
ssh-copy-id -i ~/.ssh/kubecontrol kubecontrol@host

# each inventory hostname needs to resolve, so add entries to /etc/hosts on your workstation
vim /etc/hosts
192.168.10.10       host
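The inventory later in this post references a separate key per node (~/.ssh/kubecontrol, ~/.ssh/kubew1, ~/.ssh/kubew2), so repeat the same steps for the two workers. The hostnames below are placeholders; use whatever you put in /etc/hosts, and copy each key to the account you'll set as ansible_user:

ssh-keygen -a 100 -t ed25519 -f ~/.ssh/kubew1
ssh-copy-id -i ~/.ssh/kubew1 kubew1@kubew1

ssh-keygen -a 100 -t ed25519 -f ~/.ssh/kubew2
ssh-copy-id -i ~/.ssh/kubew2 kubew2@kubew2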

Now, we can proceed to the installation.

Installing Kubernetes

The control plane and worker nodes have a few prerequisites before kubeadm can run. The ones this playbook takes care of are:

- a container runtime (we'll use containerd)
- the br_netfilter kernel module loaded
- swap turned off
- IP forwarding (net.ipv4.ip_forward) enabled

Let's write the playbook for it.

Ansible Playbook

First, we have to create two files: an ansible.cfg that points at our inventory, and the inventory file itself, which lists our hosts along with variables such as the SSH keys and users to use.

# ansible.cfg
[defaults]
inventory = inventory

# inventory
kubecontrol ansible_ssh_private_key_file=~/.ssh/kubecontrol ansible_user=kubecontrol

kubew1 ansible_ssh_private_key_file=~/.ssh/kubew1 ansible_user=kubew1

kubew2 ansible_ssh_private_key_file=~/.ssh/kubew2 ansible_user=kubew2

[control]
kubecontrol

[workers]
kubew1
kubew2

Let's go over it:

Variable                         What it does
kubecontrol, kubew1, kubew2      hostnames of the nodes
ansible_ssh_private_key_file     path to the SSH private key for that host
[control] and [workers]          group names
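Before moving on, it's worth checking that Ansible can actually reach all three nodes with the keys and users from the inventory. From the directory containing ansible.cfg, a quick connectivity test looks like this:

ansible all -m ping

Each host should answer with "pong". If one of them fails, recheck its ansible_user and key path in the inventory.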

Onto the playbook now!

It's a big one, so we will break it into smaller parts to make it easier to follow.

The full playbook is available here.

- hosts: all
  become: yes

  tasks:
    - name: update package cache
      apt:
        update_cache: yes

    - name: install required packages
      apt:
        name:
          - apt-transport-https
          - ca-certificates
          - curl
          - gnupg
          - containerd # container runtime for Kubernetes
        state: present

    - name: load the br_netfilter kernel module
      command: modprobe br_netfilter

    - name: create a containerd directory
      file:
        path: /etc/containerd
        state: directory

    - name: Generate containerd default configuration
      command: containerd config default # this will generate the default config
      register: containerd_output  # register basically copies the output to a variable

    - name: Save configuration to $HOME/config.toml
      copy:
        content: "{{ containerd_output.stdout }}" # the output we registered above
        dest: $HOME/config.toml

    - name: move it (can't directly create on /etc/containerd for some reason)
      shell: mv $HOME/config.toml /etc/containerd/config.toml # shell so that $HOME gets expanded


    - name: enable SystemdCgroup in containerd's config.toml
      lineinfile:
        path: /etc/containerd/config.toml
        regexp: '^\s*SystemdCgroup =' # the generated config indents this line
        line: '            SystemdCgroup = true'
    - name: turn off swap
      command: swapoff -a

    - name: uncomment net.ipv4.ip_forward in sysctl.conf
      lineinfile:
        path: /etc/sysctl.conf
        regexp: '^#?net.ipv4.ip_forward='
        line: 'net.ipv4.ip_forward=1'
        state: present
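One caveat before we move on: modprobe and swapoff only last until the next reboot, and the official kubeadm prerequisites also call for the net.bridge.bridge-nf-call-iptables sysctl. The playbook above doesn't persist any of that, so here is an optional sketch of extra tasks you could append to it. The file names under /etc/modules-load.d and /etc/sysctl.d follow the upstream kubeadm docs; adapt them to taste.

    - name: persist the br_netfilter module load across reboots
      copy:
        content: "br_netfilter\n"
        dest: /etc/modules-load.d/k8s.conf

    - name: persist the required sysctls across reboots
      copy:
        content: |
          net.bridge.bridge-nf-call-iptables  = 1
          net.bridge.bridge-nf-call-ip6tables = 1
          net.ipv4.ip_forward                 = 1
        dest: /etc/sysctl.d/k8s.conf

    - name: apply the sysctl settings without a reboot
      command: sysctl --system

    - name: comment out swap entries in /etc/fstab so swap stays off
      replace:
        path: /etc/fstab
        regexp: '^([^#].*\sswap\s.*)$'
        replace: '# \1'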

Installing the Kubernetes packages

That covers the prerequisites. Now all that's left is to install the Kubernetes packages themselves. I am not going to go over this part in detail because it follows the official docs almost verbatim.

    - name: check if GPG key file exists
      stat:
        path: /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      register: gpg_key_file


    - name: download Kubernetes GPG key
      shell: curl -fsSL https://pkgs.k8s.io/core:/stable:/v1.28/deb/Release.key | sudo gpg --dearmor -o /etc/apt/keyrings/kubernetes-apt-keyring.gpg
      when: not gpg_key_file.stat.exists


    - name: add Kubernetes repository
      shell: echo 'deb [signed-by=/etc/apt/keyrings/kubernetes-apt-keyring.gpg] https://pkgs.k8s.io/core:/stable:/v1.28/deb/ /' | sudo tee /etc/apt/sources.list.d/kubernetes.list

    - name: update package cache after adding repository
      apt:
        update_cache: yes

    - name: install Kubernetes components
      apt:
        name:
          - kubelet
          - kubeadm
          - kubectl
        state: present

    - name: hold Kubernetes packages to prevent automatic updates
      command: sudo apt-mark hold kubelet kubeadm kubectl
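With all of the tasks in place, run the playbook from the directory that holds ansible.cfg and the inventory. I'm assuming here that the playbook was saved as kubernetes.yml; use whatever name you picked:

ansible-playbook kubernetes.yml -K

The -K flag prompts for the sudo password of your ansible_user; drop it if passwordless sudo is configured on the nodes.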

Phew! That was a lot to take in. Now, you can initialize the cluster from the control plane by running:

sudo kubeadm init --pod-network-cidr=10.244.0.0/16

After that, kubeadm prints a join command containing a token, which you can run on each worker node to join it to the cluster.
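For reference, kubeadm init also prints the commands to set up kubectl for your user on the control plane, and the join command always follows the same shape. The values below are placeholders matching kubeadm's own output format, not real token or hash values:

# on the control plane, so kubectl can talk to the new cluster
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# on each worker node, copied from the kubeadm init output
sudo kubeadm join <control-plane-ip>:6443 --token <token> \
    --discovery-token-ca-cert-hash sha256:<hash>

Since we passed --pod-network-cidr=10.244.0.0/16 (Flannel's default), you will also want to install a pod network add-on such as Flannel before the nodes show up as Ready.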

That’s it for this one. Keep experimenting!