1 GPU : 3 Displays : 3 Localised VMs | Is it possible?



However...

 

I have not tested this, so I don't know for sure if it will work, but theoretically you could use hardware passthrough with one VM controlling all 3 screens, then open RDP sessions to two other VMs from inside the primary VM and full-screen them on the other 2 monitors. Only the first VM would have the benefit of full GPU acceleration, but that might work if you don't use apps that need it.
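
For illustration, here's a rough sketch of what that could look like from inside the primary (passthrough) Windows VM: a small Python script that launches full-screen mstsc sessions to the other two VMs. The hostnames are purely hypothetical, and this hasn't been tested on unRAID.

```python
# Rough sketch (untested): from the primary passthrough VM, open full-screen
# RDP sessions to the two "guest" VMs. Hostnames below are hypothetical.
import subprocess

GUEST_VMS = ["vm-guest1.local", "vm-guest2.local"]  # assumed hostnames/IPs

for host in GUEST_VMS:
    # mstsc /v:<host> connects to that VM, /f starts the session full screen.
    # You would still drag each session to the monitor you want it on
    # (or save per-monitor .rdp files with the desired window position).
    subprocess.Popen(["mstsc", f"/v:{host}", "/f"])
```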

 

If, on the other hand, you expect to allow 3 people to access the single machine at once locally, I can't think of a way to do that.

 

It wasn't clear to me from your question what you expected, so the answer is both no and maybe, depending on your performance and usage requirements. I personally would find it useful to have a "host" VM with one OS and two full-screen "guest" VMs with different OSes available. If that is not your requirement, then no, you can't do it, unless someone has a way to assign a local keyboard and mouse to exclusively control a VNC or RDP session without accessing the host VM.


Thanks for your prompt replies and suggestions.

 

Sorry for not giving more detail in my inquiry; however, from your responses it seems clear that I cannot use unRAID as the OS for my configuration.

 

Maybe support for running 3 localised VMs simultaneously through one GPU could be incorporated at some point in the future.


Thanks for your prompt replies and suggestions.

 

Sorry for not giving more detail in my inquiry; however, from your responses it seems clear that I cannot use unRAID as the OS for my configuration.

 

Maybe support for running 3 localised VMs simultaneously through one GPU could be incorporated at some point in the future.

Unfortunately that's not under unRAID's control. The virtualization technology in use is not proprietary; it's open-source KVM with a very nice custom frontend. Splitting a single card across 3 different OSes isn't something I've even seen discussed, so it's unlikely to happen anytime soon.
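
To illustrate the "it's just KVM underneath" point: if libvirt's Python bindings are installed, something like the sketch below lists the VMs the same way it would on any other KVM host. This is a generic libvirt example, not an unRAID-specific API.

```python
# Minimal sketch: enumerate KVM domains via libvirt's Python bindings.
# Requires the libvirt-python package; the connection URI assumes a local
# system-level libvirt daemon, which is the usual KVM setup.
import libvirt

conn = libvirt.open("qemu:///system")
for dom in conn.listAllDomains():
    state = "running" if dom.isActive() else "stopped"
    print(f"{dom.name()}: {state}")
conn.close()
```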

At the very least, it works with those Tesla GPUs because they are multiple independent GPUs on a single card.
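
That's also easy to check on a Linux host: each independent GPU on such a card shows up as its own PCI device in its own IOMMU group, which is what lets each one be passed through to a different VM. Below is a generic sketch of inspecting that via sysfs; paths assume a typical Linux host with the IOMMU enabled, and it has nothing unRAID-specific in it.

```python
# Sketch: list PCI display devices and the IOMMU group each one sits in.
# A multi-GPU card (e.g. some Tesla boards) shows up as several separate
# devices here, each of which could be passed through to a different VM.
import glob
import os

for group_path in sorted(glob.glob("/sys/kernel/iommu_groups/*/devices/*")):
    group = group_path.split("/")[4]      # the IOMMU group number
    dev = os.path.basename(group_path)    # PCI address, e.g. 0000:03:00.0
    with open(f"/sys/bus/pci/devices/{dev}/class") as f:
        pci_class = f.read().strip()
    if pci_class.startswith("0x03"):      # 0x03xxxx = display controller
        print(f"IOMMU group {group}: {dev} (class {pci_class})")
```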

 

The only way to do this with a single dedicated GPU is to virtualize the GPU the way a desktop VM application does: present a virtual GPU with its own acceleration capability to the VM and forward everything to the host. This also requires the hypervisor to have full GPU acceleration drivers. It would not require a full desktop environment, only full-screen presentation on the attached monitors, but it would still put a heavy burden on the hypervisor.
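
For what it's worth, KVM/QEMU does have a paravirtualized-GPU path along those lines (virtio-gpu with virgl OpenGL forwarding), though it's nowhere near full passthrough performance. The sketch below just assembles the relevant QEMU options; treat the exact flags as an assumption to verify against your QEMU version, and the disk image name is made up.

```python
# Sketch: assemble a QEMU command line that gives a VM a paravirtualized
# GPU (virtio + virgl), with OpenGL calls forwarded to the host's driver.
# Flags are from memory; verify against your QEMU version before relying on them.
import subprocess

cmd = [
    "qemu-system-x86_64",
    "-enable-kvm",
    "-m", "4096",
    "-vga", "virtio",            # virtio-based virtual GPU
    "-display", "gtk,gl=on",     # host-side OpenGL acceleration (virgl)
    "-drive", "file=guest.qcow2,if=virtio",  # hypothetical disk image
]
print(" ".join(cmd))             # or subprocess.run(cmd) to actually boot it
```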


However...

 

I have not tested this, so I don't know for sure if it will work, but theoretically you could use hardware passthrough with one VM controlling all 3 screens, then open RDP sessions to two other VMs from inside the primary VM and full-screen them on the other 2 monitors. Only the first VM would have the benefit of full GPU acceleration, but that might work if you don't use apps that need it.

 

I do this all the time -- it works fine, but none of the stuff I'm using the other RDP VMs for is overly demanding.

 

If, on the other hand, you expect to allow 3 people to access the single machine at once locally, I can't think of a way to do that.

 

In theory this might be possible. Years back (pre-Windows 8 release) I was looking for the ability to have one computer with multiple independent sets of input devices. I was able to find some buggy software that would do it, but it was too unstable to use. There were a few people working on software for that, and rumors that it would be part of Windows 8 -- which it wasn't. I gave up, but now that several years have passed, it possibly exists.

Though it may only work with the Tesla line of add-in cards ($$$$$), and probably not consumer-grade cards.

 

So far as I am aware this is the case. Nvidia's Grid only works with certain datacenter GPUs and requires special software + licences to use (I think).

 

At the very least, it works with those Tesla GPUs because they are multiple independent GPUs on a single card.

 

The only way to do this with a single dedicated GPU is to virtualize the GPU the way a desktop VM application does: present a virtual GPU with its own acceleration capability to the VM and forward everything to the host. This also requires the hypervisor to have full GPU acceleration drivers. It would not require a full desktop environment, only full-screen presentation on the attached monitors, but it would still put a heavy burden on the hypervisor.

 

I believe Nvidia Grid technology actually presents hardware-accelerated virtual GPUs to VMs. I think they claim 16 users per physical GPU.
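
On Linux hosts, vGPU-capable cards expose this through the kernel's mediated-device (mdev) framework, so you can at least check what (if anything) a given card advertises. The sketch below just reads the relevant sysfs entries; it assumes the vendor's host vGPU driver is installed and says nothing about the licensing side.

```python
# Sketch: list the vGPU (mediated device) types a GPU advertises, if any.
# Requires a vGPU-capable card plus the vendor's host driver; on a plain
# consumer card this directory simply won't exist.
import glob
import os

for type_dir in glob.glob("/sys/bus/pci/devices/*/mdev_supported_types/*"):
    pci_addr = type_dir.split("/")[5]   # PCI address of the physical GPU

    def read(name):
        path = os.path.join(type_dir, name)
        return open(path).read().strip() if os.path.exists(path) else "?"

    print(f"{pci_addr}: {read('name')} "
          f"(available instances: {read('available_instances')})")
```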
