Multiple Processors, PCIe Lanes, Web GUI lag, and My Stupidity



I'm posting this for others with multiple-processor servers (about 1.3% of the forum) who might have web GUI lag or degraded VM performance.

 

All of my multiple-processor servers have had issues with web GUI lag (currently running 6.2.4; 6.3.5 causes problems, so I'm waiting for 6.4 to go stable). This was most noticeable when loading the main page, which at times took up to 60 seconds to show all unassigned devices, system stats, or even the log (and sometimes never finished loading). Also, VMs using GPU passthrough worked "well" enough, but GPU benchmarks were sometimes 20-30% lower than bare metal.

 

On my original dual-processor rig, I did not think to take into account which PCIe lanes ran to which processor, so when setting up GPU passthrough I essentially had other constraints that took priority, such as card size and fitment. It was also just one big motherboard.

 

Fast forward a year to the new-to-me dual/quad-processor servers I have. After purchasing an add-in PCIe expansion board, it (finally) occurred to me that if I'm isolating a CPU from unRAID for VMs, then I should probably be placing the RAID controllers, HBAs, networking cards, etc. that are used exclusively by unRAID on the board that runs to CPU 0, and all the GPUs, USB cards, etc. that are for the VMs on the expansion board that runs to the second processor (CPU 1), when possible. So I did some quick rearranging last night, and everything has been super responsive compared to before: unassigned devices load in 4 seconds or less, as do the logs and every other page, and VM GPU benchmarks are within bare-metal performance levels.
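If you want to verify which socket a given card actually hangs off before shuffling hardware, Linux (which unRAID runs on) exposes this through sysfs. A minimal sketch, assuming the usual /sys layout; the helper name `list_pci_numa` is mine, and a node of `-1` or `?` means the platform reports no locality info for that device:

```shell
# Sketch: print which NUMA node (i.e. which physical CPU) each PCI
# device is attached to, so unRAID-owned cards (HBAs, NICs) can stay
# on CPU 0 and passthrough GPUs on CPU 1. The helper name and the
# optional root argument (useful only for testing) are my own.
list_pci_numa() {
    root="${1:-/sys}"
    for dev in "$root"/bus/pci/devices/*; do
        [ -e "$dev" ] || continue
        # numa_node reads -1 when the firmware exposes no NUMA info
        node=$(cat "$dev/numa_node" 2>/dev/null || echo "?")
        printf '%s -> NUMA node %s\n' "${dev##*/}" "$node"
    done
}
```

Running `list_pci_numa` on the host and cross-checking the addresses against `lspci` output shows which slots route to which processor.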

 

So there you go. My stupidity. Your gain. Carry on.


I've isolated all my CPUs going to VMs, and mapped them to the PCI slots that I think they go to ... is there a map of PCI slots to CPU sockets for the 580 G7? What mapping did you use? So far I've got unRAID mapped to CPU 2 along with the NVMes ... core 1 and its hyperthread on each CPU to unRAID (which I'm hoping will help with PCI passthrough) ... using a GTX 1060 with 14 cores (using 4 in DX11 testing) with about 100-130 fps @ 1080p ultra settings, 4x AA and high tessellation on that Unigine test.

20 hours ago, burningstarIV said:

I've isolated all my CPUs going to VMs, and mapped them to the PCI slots that I think they go to ... is there a map of PCI slots to CPU sockets for the 580 G7? What mapping did you use? So far I've got unRAID mapped to CPU 2 along with the NVMes ... core 1 and its hyperthread on each CPU to unRAID (which I'm hoping will help with PCI passthrough) ... using a GTX 1060 with 14 cores (using 4 in DX11 testing) with about 100-130 fps @ 1080p ultra settings, 4x AA and high tessellation on that Unigine test.

 

On my 2-processor DL580 G7 server, I'm using an expansion board which requires a processor in socket 3 (CPU 2) to manage the PCI devices on it. I put my RAID cards and Mellanox 10GbE card on the primary board and moved the GPU to the expansion board. The VM uses the second CPU exclusively.
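For keeping a VM on one socket like this, a libvirt domain definition can pin both vCPUs and memory to that NUMA node. A minimal sketch only; the vCPU count and host core numbers below are illustrative and depend on your topology (check `lscpu` for which core IDs belong to which node):

```xml
<!-- Sketch: pin a VM's vCPUs and memory to NUMA node 1 (the second
     socket). Core numbers are placeholders, not a real mapping. -->
<domain type='kvm'>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <vcpupin vcpu='2' cpuset='10'/>
    <vcpupin vcpu='3' cpuset='11'/>
  </cputune>
  <numatune>
    <memory mode='strict' nodeset='1'/>
  </numatune>
</domain>
```

The `numatune` element matters as much as the pinning: without it, guest memory can be allocated on node 0 while the GPU and vCPUs sit on node 1, forcing every access across the interconnect.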

 

On my 4-processor DL, I run all the cores in a single VM for 4K video crunching (it's the only thing that server runs; 80 threads), so PCIe lanes don't matter as much since it all goes to the same place.

I don’t have a lane map but will look to see if I can find one.
