
How to Enable GPU Passthrough to LXC Containers in Proxmox

Learn how to enable Proxmox LXC GPU passthrough for AI workloads and configure GPU access in LXC containers step by step.

I have been playing around lately with the GPU passthrough functionality in Proxmox and looking at the best ways to run AI workloads on the platform. Virtual machine passthrough is a great way to run something like Ollama. However, I wanted to experiment with GPU passthrough in an LXC container. Let's look at Proxmox LXC container GPU passthrough and see how it can be configured to run AI workloads like Ollama instead of using a virtual machine.

Why Use GPU Passthrough in LXC Containers?

GPU passthrough is the way in Proxmox that a virtual machine or LXC container can access the GPU that is physically installed in the host. This provides direct access to the GPU from these virtualized resources and allows nearly native GPU performance for workloads like the following:

  • AI workloads like Ollama
  • Running a physical card in a workstation that is virtualized
  • Video decoding or encoding for workloads like Jellyfin
  • Remote desktop acceleration

LXC containers are more lightweight than full VMs, start up much faster, and are generally easier to work with. Like Proxmox VMs, they also support GPU passthrough.

The passthrough process with LXC containers is a little trickier, but it is not terribly challenging if you have the right steps to work with, which is why I am providing this guide!

You must choose between VM passthrough and LXC passthrough

In my tinkering with GPU passthrough in Proxmox, I found that you must choose between either VM passthrough or LXC passthrough. The reason for this is that the steps needed to set up each method conflict with one another, and implementing VM passthrough would break your LXC passthrough.

Why is this? Well, VM passthrough requires several changes on the host, including blacklisting the NVIDIA drivers. This breaks LXC container passthrough, since the container needs the host to load the driver correctly and know about the GPU.
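
If you are not sure which state your host is currently in, a quick check is to see which kernel driver is bound to the GPU (the device details will differ on your hardware):

lspci -nnk | grep -iA3 nvidia

If the "Kernel driver in use" line shows vfio-pci, the card is still claimed for VM passthrough; for LXC passthrough you want the host's nvidia driver to own the card after following the steps below.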

Cleaning up and reverting if you have already implemented VM passthrough in Proxmox

You will need to backtrack a bit if you have already implemented VM passthrough and “undo” some of those changes.

1. Undo the VFIO binding

Edit /etc/modprobe.d/vfio.conf and remove or comment out the line defining your VFIO bindings (below is an example):

# options vfio-pci ids=10de:1c03,10de:10f1 disable_vga=1

Then regenerate initramfs:

update-initramfs -u -k all

2. Remove the NVIDIA driver blacklist

Edit or delete /etc/modprobe.d/blacklist-nvidia.conf

Alternatively, comment out all the blacklist lines:

# blacklist nouveau
# blacklist nvidia
# blacklist nvidiafb
# blacklist rivafb

Then run:

update-initramfs -u -k all

3. Reboot

You must reboot your Proxmox host for this to take effect.

Prerequisites

There are a few prerequisites that you need to be aware of for getting Proxmox GPU passthrough working with LXC containers. Note the following:

  • A Proxmox VE host (preferably 7.x or 8.x)
  • An NVIDIA GPU or compatible AMD GPU installed and visible in the host
  • GPU drivers from NVIDIA for the Proxmox host
  • An unprivileged or privileged LXC container created

โš ๏ธ Important Note: LXC containers, especially unprivileged ones, have more restricted access to host devices than VMs. Privileged containers offer easier device passthrough but are less secure.

Step 1: Disable secure boot

I found that if you have secure boot enabled, you will see a message about signing the driver:

Message about signing the driver with secure boot enabled

If you select the option to sign the kernel module, it will ask you for a key pair. So, the easiest way around this is to disable secure boot:

Disabling secure boot in the motherboard bios

Step 2: Install Proxmox prerequisites

You will need to install some build prereqs for the driver install to work:

apt install build-essential software-properties-common make -y
update-initramfs -u
Installing build prerequisites

Also, install the Proxmox PVE headers:
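
The exact headers package depends on your running kernel. A common approach is to match the running kernel as shown below (on newer Proxmox 8.x releases the package may instead be named proxmox-headers-$(uname -r), or you can install the proxmox-default-headers meta-package):

apt install -y pve-headers-$(uname -r)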

Installing pve headers for the nvidia driver installation

Step 3: Install GPU drivers on the Proxmox host

In contrast to getting passthrough to work in a VM, an LXC container needs the GPU driver properly installed on the Proxmox host.

For NVIDIA you can pull this from the apt repos like below (keep in mind this may not be the latest driver):

apt update
apt install -y nvidia-driver

For AMD:

apt update
apt install -y firmware-amd-graphics
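
For AMD (and Intel) GPUs, the devices you will pass through later are typically the DRI render nodes rather than the /dev/nvidia* devices used in the rest of this guide. You can list them with:

ls -al /dev/dri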

Or, what I did was visit the NVIDIA driver download page and download the latest driver available for the card I am testing with.

Downloading the latest nvidia driver

Once you have the driver downloaded, copy it to your Proxmox host (the root directory is fine). We then need to set the execute bit on the file:

chmod +x NVI*

Then you just need to run the installer file on your Proxmox host (type ./NVIDIA and press TAB to complete the file name):

./NVIDIA*
Changing the execute permissions and running the nvidia driver file

Next, follow the wizard prompts:

  • Building the kernel modules
  • Message about the X path
  • 32-bit compatibility message
  • Installing the NVIDIA graphics driver on the Proxmox host
  • Automatic X config update
  • Installation complete

When the installer finishes, you are dropped back to the root bash prompt on the Proxmox host.

Checking that the driver is installed

Once installed, reboot the host and verify the driver is working by running the following command. You should see your card listed along with metrics pulled from the card.

nvidia-smi
Checking that the nvidia driver is installed on the proxmox host

Step 4: Identify the GPU devices

Use the following command to identify the devices that we need to pass through:

ls -al /dev/nvidia*

These will look something like the following:

  • /dev/nvidia0
  • /dev/nvidiactl
  • /dev/nvidia-uvm
  • /dev/nvidia-uvm-tools
  • /dev/nvidia-caps/nvidia-cap1
  • /dev/nvidia-caps/nvidia-cap2
Viewing nvidia devices on the proxmox ve server host

Step 5: Add the passthrough devices

Now, we can add the passthrough devices to our LXC container. On the Resources screen of the container, navigate to Add > Device Passthrough. Then add these devices one by one.

Adding device passthrough on the resources screen for the proxmox lxc container

Enter the device name/path as you see below.

Adding a device to the passthrough settings

One-by-one these have been added.

Gpu passthrough devices added to proxmox lxc container
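
If you prefer to work from the command line, the GUI entries correspond to device entries in the container's config file. As a sketch (the container ID is a placeholder, and the exact device list should match what you identified in the previous step), /etc/pve/lxc/<CTID>.conf ends up with lines something like this:

dev0: /dev/nvidia0
dev1: /dev/nvidiactl
dev2: /dev/nvidia-uvm
dev3: /dev/nvidia-uvm-tools
dev4: /dev/nvidia-caps/nvidia-cap1
dev5: /dev/nvidia-caps/nvidia-cap2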

Step 6: Install the NVIDIA drivers inside the Proxmox LXC container

Now, we can install the same file that we installed on our Proxmox VE host inside the LXC container (make sure the LXC container is powered on before doing this):

pct push <your LXC container ID> <nvidia file name> /root/<name of the file in your container>
Pushing the device driver to the proxmox lxc container
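
As a concrete (hypothetical) example, if the container ID were 105, pushing the driver, opening a shell in the container, and making the file executable would look something like this:

pct push 105 /root/NVIDIA-Linux-x86_64-<version>.run /root/NVIDIA-Linux-x86_64-<version>.run
pct enter 105
chmod +x /root/NVIDIA-Linux-x86_64-<version>.run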

Run this file now in the LXC container:

./NVI*
Run the driver inside the lxc container
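
One note here: the LXC container shares the host's kernel, and the NVIDIA kernel module is already loaded by the host. If the installer inside the container tries (and fails) to build kernel modules, a common approach is to skip that step with the installer's --no-kernel-module option (verify the option name against your driver version):

./NVIDIA* --no-kernel-module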

Go through the same wizard as you did on your Proxmox VE Server host:

Installing the nvidia driver inside the proxmox lxc container

Step 7: Do you need the NVIDIA container toolkit?

This is a good question, and the answer depends on what you plan to run inside the LXC container. If you are just going to run your app natively inside the LXC environment without Docker, you don't need to install it.

However, if you plan on running Docker inside the LXC container and spinning up your applications that way, like Ollama as a Docker container, then yes, you will need to install it. You can install it with the following commands:

apt update
apt install -y nvidia-cuda-toolkit nvidia-container-toolkit

To test the GPU inside the container:

nvidia-smi
Viewing the nvidia card info from the command line in the proxmox lxc container

If everything is configured correctly, you should see the GPU info.
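
Note that nvidia-container-toolkit typically comes from NVIDIA's own apt repository rather than the default Debian repos, so if apt cannot find the package, you may need to add that repository first (see NVIDIA's container toolkit documentation). Once it is installed, Docker is usually wired up to the NVIDIA runtime and tested along these lines (the CUDA image tag here is just an example):

nvidia-ctk runtime configure --runtime=docker
systemctl restart docker
docker run --rm --gpus all nvidia/cuda:12.4.1-base-ubuntu22.04 nvidia-smi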

Step 8: Test your GPU application

For testing GPU passthrough in Proxmox with LXC containers, you can simply point an OpenWebUI environment to your new LXC container that has GPU passthrough enabled.

Install Ollama in your LXC container that has the GPU passed through:

You can now install Ollama in the LXC container with the simple install script below:

curl -fsSL https://ollama.com/install.sh | sh
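
Once Ollama is installed, a quick way to confirm it is actually using the GPU is to run a small model (the model name here is just an example) and then check what Ollama has loaded:

ollama run llama3.2 "Hello"
ollama ps

The output of ollama ps should show the model running on the GPU, and nvidia-smi in the container should show the ollama process using GPU memory.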

Then, just point your OpenWebUI container to your Ollama instance (the IP in the string below for OLLAMA_BASE_URL is the address of my LXC container):

docker run -d -p 3000:8080 --add-host=host.docker.internal:host-gateway -e OLLAMA_BASE_URL=http://10.3.33.251:11434 -v open-webui:/app/backend/data --name open-webui --restart always ghcr.io/open-webui/open-webui:main
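
If OpenWebUI cannot reach Ollama, keep in mind that Ollama listens on 127.0.0.1 by default, so you may need to set OLLAMA_HOST=0.0.0.0 for the Ollama service in the LXC container. You can verify that the API is reachable from the Docker host with a quick curl (using my container's IP as the example):

curl http://10.3.33.251:11434/api/tags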

Additional notes about container security with LXC privileged containers

Privileged containers can access hardware more easily. However, this comes at the cost of weaker isolation from the host. Unprivileged containers are more secure but can be more complex to configure when dealing with /dev device files.

For production or shared environments, it is best to consider keeping sensitive workloads in VMs or Kubernetes pods with GPU device plugins for better isolation.

Pros and Cons of LXC GPU Passthrough

Pros:
  • Lightweight with faster boot times
  • Lower overhead and near-native speed
  • Better suited for specific workloads
  • Ideal for single-purpose GPU apps

Cons:
  • More complex to configure than VM passthrough
  • Less secure if using privileged containers
  • Driver compatibility issues can happen
  • Troubleshooting device access can be trickier

When to use VMs instead of LXC containers

If you need the most isolation you can get for your application, virtual machines still make the most sense. Proxmox configuration for virtual machine GPU passthrough is very straightforward (well, relatively speaking) and can be done with a few steps.

Wrapping Up

It has been quite an adventure the last few days looking at AI workloads in Proxmox and seeing how best to spin these up. Proxmox is a great platform for running your AI workloads. And, thankfully, we can spin up GPU passthrough in LXC containers as well as virtual machines. Keep in mind that you won't be able to do both. You will need to either pass the GPU through to a VM or configure it for passthrough to LXC containers. If there is a trick I am missing here to have both, please let me know in the comments. Also, let me know what you are mostly running in your home lab environments for GPU passthrough – virtual machines, or LXC containers?





