
I wonder how well they will support some more advanced virtualisation, e.g. booting a virtual machine with PCI GPU passthrough: a Windows guest on a Linux host for gaming, or a Linux guest on a Linux host for some deep learning with CUDA.


This should work already, hypothetically: https://en.wikipedia.org/wiki/List_of_IOMMU-supporting_hardw...

But given how flaky this can be on Intel configurations, I'd expect the AMD ones to be even flakier. If Ryzen proves popular, that may improve the situation.


Yes, that's my thinking as well. On Intel it was already a bit of a hassle; I'm afraid that, for at least the first several months, it will be an even bigger hassle on AMD.

Then again, I think this is just a very particular use case. I unfortunately have it, but for most people it shouldn't matter.


That's something I'm also interested in for my next desktop computer. I've been holding off until Zen comes out to see how it compares with Intel's parts, but I really feel like having this exact setup.

Guess we'll have to wait until they come out to see how they behave.


Why would you want to do Linux Guest with Linux host for CUDA?


Because it's (well, at least six months ago it was; it could have improved since) the only solution for reusing the GPU without restarting X.

With a practical example:

1 - You boot up your Linux base system.

2 - You fire up KVM with PCI GPU passthrough and a Windows guest to play some games.

3 - You shut down Windows.

Now you can't use the GPU anymore: the device was assigned to KVM's passthrough driver, and you can't, for instance, run a CUDA simulation on the Linux host. You need to restart X (you don't need to reboot the system).

The workaround/solution is to fire up KVM with PCI GPU passthrough to another guest (this time a Linux guest); in there you will have full access to the GPU for CUDA computations (or whatever else you want).
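For step 2, a minimal sketch of the kind of invocation I mean (the PCI addresses, CPU/memory sizes, and disk image name here are hypothetical examples, and the GPU must already be bound to vfio-pci):

```shell
# Sketch: start a Windows guest with the GPU (function 01:00.0) and its
# HDMI audio function (01:00.1) passed through via vfio-pci.
# All addresses and sizes below are examples, not a tested config.
qemu-system-x86_64 \
  -enable-kvm \
  -cpu host -smp 4 -m 8G \
  -device vfio-pci,host=01:00.0 \
  -device vfio-pci,host=01:00.1 \
  -drive file=windows.img,format=raw
```

The Linux-guest workaround is the same invocation pointed at a Linux disk image instead.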


>Now you can't use the GPU anymore, the module was assigned by KVM and you can't - for instance - run a CUDA simulation in the Linux host.

Maybe we had different setups, so this wouldn’t apply – I used to unbind the device with the following script, and then load the `nvidia` module; the device was then available on the host:

  # Unbind the GPU (function .0) and its HDMI audio device (function .1)
  # from whatever driver currently owns them (e.g. vfio-pci):
  for dev in "0000:01:00.0" "0000:01:00.1"; do
          if [ -e "/sys/bus/pci/devices/${dev}/driver" ]; then
                  echo "${dev}" > "/sys/bus/pci/devices/${dev}/driver/unbind"
          fi
  done
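After the unbind, the follow-up step looked roughly like this (same device address as above; the explicit bind is only needed when the driver doesn't claim the freed device by itself):

```shell
# Load the NVIDIA driver; it normally claims the freed GPU on its own.
modprobe nvidia
# If it doesn't, bind the device to the driver explicitly:
echo "0000:01:00.0" > /sys/bus/pci/drivers/nvidia/bind
```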


Ah, I see, that makes more sense. If you had two NVIDIA GPUs and one of them was passed through to a guest, then I think that problem may still exist.

However, if you only have one NVIDIA GPU (and a different kind of GPU for display), then I think simply running "sudo rmmod vfio" followed by "sudo modprobe nvidia" would work.
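Spelled out as a sequence (a sketch; depending on the setup, the module holding the device may be vfio-pci rather than plain vfio, and the GPU may need an explicit sysfs unbind first):

```shell
# Hand the GPU back to the host: remove the vfio modules,
# then load the NVIDIA driver so it claims the device.
sudo rmmod vfio-pci vfio
sudo modprobe nvidia
```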


One reason is to mix AMD and NVIDIA GPUs – I experimented with running X on the integrated Radeon device while running CUDA on a Linux guest with GPU passthrough. Trying to have CUDA on the host system led to library conflicts.





