
Run production-ready lightweight Kubernetes using K3s on a Q Blocks instance

K3s is a production-ready, lightweight Kubernetes distribution that enables easy and scalable container orchestration. Read more on the official K3s GitHub repo.

Prerequisites:

  • You need a pro / business Q Blocks instance

  • Ask Q Blocks support to enable K3s support on your instance

Once these prerequisites are fulfilled, we can proceed with the K3s setup.

Steps to bring up a K3s cluster inside a Q Blocks GPU instance:

  1. Make sure nvidia-smi is running inside the container
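For example, a quick check (this assumes the NVIDIA driver is already installed on the instance):

```shell
# Should print the driver version, CUDA version and the list of GPUs.
nvidia-smi
```

If this command fails, fix the driver before continuing.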

  2. Install Docker

sudo apt-get update 
sudo apt-get install -y docker.io
  3. Install nvidia-container-toolkit

distribution=$(. /etc/os-release;echo $ID$VERSION_ID)
curl -s -L https://nvidia.github.io/libnvidia-container/gpgkey | sudo apt-key add -
curl -s -L https://nvidia.github.io/libnvidia-container/$distribution/libnvidia-container.list | sudo tee /etc/apt/sources.list.d/libnvidia-container.list

sudo apt-get update && sudo apt-get install -y nvidia-container-toolkit
  4. Set the nvidia runtime as the default container runtime:

By default, K3s prefers the containerd runtime, but for GPUs to work the default runtime must be nvidia. So we set up the nvidia runtime in the Docker daemon file as follows:
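A minimal /etc/docker/daemon.json for this might look like the following sketch; the runtime path assumes a standard nvidia-container-toolkit installation:

```json
{
    "default-runtime": "nvidia",
    "runtimes": {
        "nvidia": {
            "path": "nvidia-container-runtime",
            "runtimeArgs": []
        }
    }
}
```

Restart Docker afterwards (sudo systemctl restart docker) so the new default runtime takes effect.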

  5. Now, we will set up the K3s cluster using the Docker runtime:

First, we install K3s:
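One common way to do this is the official K3s install script with the --docker flag, which tells K3s to use Docker instead of the bundled containerd (assumes internet access from the instance):

```shell
# Install K3s and configure it to use the Docker runtime.
curl -sfL https://get.k3s.io | sh -s - --docker
```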

  6. Make sure the K3s cluster is up and running

Wait 5-10 seconds for the cluster to come up, then run this command:
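For example, using the kubectl bundled with K3s:

```shell
# The node should report STATUS "Ready".
sudo k3s kubectl get nodes
```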

  7. Install the NVIDIA device plugin DaemonSet for K3s:

This makes the instance's GPU available to the K3s cluster.
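A common way to install it is to apply the upstream NVIDIA device plugin DaemonSet manifest; the version tag below is an example — check the NVIDIA/k8s-device-plugin repo for the current release:

```shell
# Deploy the NVIDIA device plugin DaemonSet (version tag is an example).
kubectl apply -f https://raw.githubusercontent.com/NVIDIA/k8s-device-plugin/v0.14.1/deployments/static/nvidia-device-plugin.yml
```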

  8. Check the logs of nvidia-device-plugin to confirm the GPUs are detected:

Get the name of the nvidia pod launched in step 7 from this command's output:
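Assuming the plugin was deployed into the kube-system namespace, as the upstream manifest does:

```shell
# List the device plugin pods; note the full pod name.
kubectl get pods -n kube-system | grep nvidia-device-plugin
```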

Add the pod name to the command below:
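For example (the pod name suffix here is a placeholder — use the name from the previous command):

```shell
# Replace the suffix with your actual pod name.
kubectl logs -n kube-system nvidia-device-plugin-daemonset-xxxxx
```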

This should return an output like this:
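The exact lines vary by plugin version, but a successful run typically ends with the plugin registering the GPU resource, along the lines of:

```
Starting GRPC server for 'nvidia.com/gpu'
Registered device plugin for 'nvidia.com/gpu' with Kubelet
```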

  9. Validate that GPUs are detected by the K3s cluster node:
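For example, the node's capacity and allocatable resources should list nvidia.com/gpu:

```shell
# A non-zero nvidia.com/gpu count confirms the plugin registered the GPU.
kubectl describe node | grep -i "nvidia.com/gpu"
```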

  10. If the GPU is recognised and the DaemonSet is not throwing errors, it's time to do a test run and make sure a pod can access the GPU. Make sure to run this container only on a node with a GPU.

Make sure the Docker image used for testing has the same or a lower CUDA version than the one supported by the NVIDIA driver in the instance.

  11. Create a .yaml file k3sgputest.yaml:
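A minimal manifest might look like the following sketch; the image is the well-known CUDA vector-add sample — swap it for one whose CUDA version matches your driver:

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: k3s-gpu-test
spec:
  restartPolicy: Never
  containers:
    - name: cuda-vector-add
      image: "k8s.gcr.io/cuda-vector-add:v0.1"
      resources:
        limits:
          nvidia.com/gpu: 1   # request one GPU from the device plugin
```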

  12. Run the GPU pod:
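For example, using the file created in the previous step:

```shell
# Create the GPU test pod on the cluster.
kubectl apply -f k3sgputest.yaml
```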

  13. Wait 5-10 seconds for the pod to load and run. If it ran successfully, it will display a log like this:
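For example, with the commonly used CUDA vector-add test image, a successful run ends with a Test PASSED line (the pod name here assumes the test pod was named k3s-gpu-test):

```
$ kubectl logs k3s-gpu-test
[Vector addition of 50000 elements]
...
Test PASSED
```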

If you face any difficulty setting up K3s, please reach out to us at [email protected].
