Journey to Deep Learning: Cuda GPU passthrough to a LXC container

Part 2: Preparing Proxmox for GPU/CUDA passthrough

/offtopic mode on
Now, if you came to this article through Google, you probably saw claims online
that you have to use the same OS for the container as for the host (i.e. Debian), or that the permissions on
/dev/nvidia0, /dev/nvidiactl and /dev/nvidia-uvm should be nobody:nogroup.

This is wrong, and I lost my Sunday on it, but at least I can use Archlinux
instead of Debian in my container and get that time back in terms of customization,
compilation and maintenance ease.

Linux is Linux because of the kernel; everything else is flavor (a.k.a. userland).
nobody:nogroup is what is displayed in the container when the user:group
of a shared folder/file exists on the host but not inside the container.
I'm pretty sure forcing 123:456 as user:group on the host (shown as nobody:nogroup in the container)
won't help with GPU passthrough.
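You can see exactly what the container will map by inspecting the device nodes on the host. A quick check, assuming the NVIDIA driver is already loaded on the host:

```shell
# On the Proxmox host: list the NVIDIA device nodes and their owners.
# These nodes only exist once the nvidia kernel modules are loaded.
ls -l /dev/nvidia*
# Show the numeric uid:gid, which is what actually matters for access:
stat -c '%n %u:%g' /dev/nvidiactl
# An unprivileged container shifts host IDs (by 100000 by default), so a
# host uid/gid with no matching name inside the container shows up as
# nobody:nogroup -- that display is cosmetic, not a blocker.
```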

/ontopic mode on
Now let’s get down to business. We will need the command line and a container.

Step 1 – Creating a container

Go to your local storage.

Click on Templates, and you will be able to download a container template for your favorite distro (from Gentoo to CentOS).
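The same template download can be done from the shell with pveam; the exact template filename below is only an example and will differ on your system:

```shell
# On the Proxmox host: refresh the template index and list what is available.
pveam update
pveam available --section system
# Download a template into the 'local' storage.
# The filename is an example -- pick one from the list above.
pveam download local archlinux-base_20170704-1_x86_64.tar.gz
```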

I will use Archlinux for myself.
The only thing you need to remember is: you do not need to install the NVIDIA and CUDA kernel drivers/modules in the container.
They are installed on the host and passed through to the container.
In fact, everything kernel-related must be done at host level (including the required linux-headers).
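On the host, that typically means installing the kernel headers and the NVIDIA driver there, never in the container. One possible sequence (package names depend on your Proxmox/Debian version):

```shell
# On the Proxmox host (NOT in the container):
apt update
apt install -y pve-headers-$(uname -r) build-essential
# Install the NVIDIA driver on the host (Debian packages or the .run
# installer, whichever you prefer), then verify it loaded:
nvidia-smi          # should list your GPU
ls -l /dev/nvidia*  # the device nodes the container will need
```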

Create your container via the Create CT button at the top right.

Follow the steps, and don’t forget to raise the CPU and RAM to the maximum possible. You can overprovision your RAM provided you have the swap to back it.
You can check online for further details if needed.
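If you prefer the CLI, pct can create the container too. The VMID, hostname, storage names and template filename below are all placeholders; adjust them to your setup:

```shell
# On the Proxmox host: CLI equivalent of the "Create CT" wizard.
# 100 is a placeholder VMID; sizes are examples -- match your hardware.
pct create 100 local:vztmpl/archlinux-base_20170704-1_x86_64.tar.gz \
    --hostname deeplearning \
    --cores 8 --memory 32768 --swap 32768 \
    --rootfs local-lvm:32 \
    --net0 name=eth0,bridge=vmbr0,ip=dhcp
pct start 100
```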

Once your container is ready you can start it to check if everything is working.
If yes, congrats, you virtualized an OS through containerization. You are now Docker without being Docker.

Please note that GPU passthrough also works for unprivileged containers,
and multiple containers can access the GPU seamlessly.
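For reference, the passthrough itself boils down to a few lines in the container's config. A sketch for /etc/pve/lxc/100.conf (100 being a placeholder CTID; 195 is the major number NVIDIA uses for its character devices, while nvidia-uvm's major is assigned dynamically, so check yours):

```shell
# /etc/pve/lxc/100.conf -- sketch of the GPU passthrough lines.
# 243 is a common major for nvidia-uvm but not guaranteed:
# verify with `ls -l /dev/nvidia-uvm` on the host.
lxc.cgroup.devices.allow: c 195:* rwm
lxc.cgroup.devices.allow: c 243:* rwm
lxc.mount.entry: /dev/nvidia0 dev/nvidia0 none bind,optional,create=file
lxc.mount.entry: /dev/nvidiactl dev/nvidiactl none bind,optional,create=file
lxc.mount.entry: /dev/nvidia-uvm dev/nvidia-uvm none bind,optional,create=file
```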
