jordanmw Posted December 12, 2018

Looking for some advice on my storage setup. Here is the situation:

X399 Taichi
64GB RAM
4x GTX 960, with 4 gaming VMs configured and 1 game server VM

For drives I currently have:

2x 512GB Plextor (RAID 0), currently set up as cache
2x 3TB WD Purple set up as disk 1 and parity (a couple of Dockers, nothing else)
2x 1TB Samsung 850 SSDs passed through to 2 of the gaming VMs as game storage drives
2x 1TB WD Black M.2 drives, not yet configured

After getting really bad load times for games, I am looking for the best way to optimize all 4 machines. The 2 that currently have the 1TB SSDs passed through have no loading-time issues, so passthrough might just be what I do with the other 2 machines, but I'm looking for direction. Anyone have advice on the best way to set this all up to optimize speed for all 4 machines? Should I shift my VMs to one of the M.2 drives, or use both for cache? I have 2 of the VMs mounting a VHDX file over SMB, and those have really slow load times. If you had this hardware, and my goal of 4 fast gaming machines and one game server, what would you do?
jordanmw Posted December 13, 2018

Anyone have some experience with drive config with this SSD combo?
ufopinball Posted December 13, 2018

22 hours ago, jordanmw said: Looking for some advice for storage setup. [...]

To begin, are your M.2 drives the SATA variety or the PCIe x4 variety? The former will run at roughly the speed of your other SATA SSDs; the latter should run much, much faster. If you have PCIe x4 M.2 drives, you could try a mirrored cache and run all four gaming VMs off it. Samsung's SATA SSDs advertise "Up to 540 MB/s", whereas the PCIe x4 M.2 SSDs offer "Up to 3,500 MB/s". Even with four VMs running at a time, you should still have a lot of speed headroom. It may depend on what else (if anything) you're using your cache drive for, though.

The alternative: you have 4 VMs and 4 NVMe-type SSDs. Pass through 1 drive to each VM and you should enjoy dedicated performance for each VM from its assigned drive. If performance is an absolute must, maybe this is the way to go?
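If you're not sure which variety your M.2 drives are, a quick way to check from the unRAID console is to look at the transport each block device reports. This is just a sketch; device names and models will differ on your system:

```shell
# Show each disk's transport type: "sata" for SATA M.2 / 2.5" drives,
# "nvme" for PCIe NVMe drives. NVMe disks also show up with device
# names like /dev/nvme0n1 rather than /dev/sdX.
lsblk -d -o NAME,TRAN,MODEL,SIZE
```

A WD Black 2280 that reports `nvme` here is the PCIe variety and should deliver the multi-GB/s class of speeds mentioned above.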
jordanmw Posted December 13, 2018

I appreciate the feedback; it looks like you have a similar setup. The M.2 drives are WD Black 2280s and sit in the M.2 slots on the same board you have. The only thing I'm worried about now is that other people have had issues with M.2 drives other than Samsung. I'm really hoping those drives work; then I may just pass them through to the other 2 machines and be done with it, but I can't help thinking that I should move all the machines to those drives for the speed increase.
Taddeusz Posted December 13, 2018

I got a Mushkin Pilot 500GB PCIe x4 NVMe drive to replace my SATA SSD cache drive and it works great. No issues that I've seen. I can't believe how fast it performs.
jordanmw Posted December 13, 2018

3 minutes ago, Taddeusz said: I got a Mushkin Pilot 500GB x4 NVMe to replace my SATA SSD cache drive and it works great. [...]

Do you know what controller chip it uses?
Taddeusz Posted December 13, 2018

Just now, jordanmw said: Do you know what controller chip it uses?

Looks like a Silicon Motion SM2262.
jordanmw Posted December 13, 2018

Well, that is a good sign...
Taddeusz Posted December 13, 2018

That Mushkin was the lowest-cost 500GB NVMe I could find. It really didn't disappoint. Eventually I'm going to get a second so that my cache has redundancy. Going to spring for another 16GB of RAM before that, though.
ufopinball Posted December 14, 2018

11 hours ago, jordanmw said: I appreciate the feedback- looks like you have a similar setup. [...]

If I understand the proposed setup, the SSDs are passed through to the VMs and are not governed by unRAID. The OS to worry about would be the target OS on each VM. Is that Windows 10?
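For reference, one common way to hand a whole drive to a KVM guest like this is a raw block-device entry in the VM's libvirt XML, pointing at the drive's stable by-id path instead of an sdX name. A sketch only; the device id below is a placeholder, not a real serial:

```xml
<!-- Whole-disk passthrough of a SATA SSD to the guest.
     Replace the by-id path with your actual drive's symlink. -->
<disk type='block' device='disk'>
  <driver name='qemu' type='raw' cache='none' io='native'/>
  <source dev='/dev/disk/by-id/ata-Samsung_SSD_850_EVO_1TB_EXAMPLESERIAL'/>
  <target dev='vdb' bus='virtio'/>
</disk>
```

Using the by-id path matters here: it keeps the VM attached to the same physical drive even if the kernel reshuffles sdX names between boots.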
jordanmw Posted December 17, 2018

Yeah, passed through to Windows 10. I did try to add the drives this weekend and they were recognized, but my array wouldn't start and it lost the cache drives. I'm thinking it was related to drive assignment within Linux; maybe it swapped my sdd and sde drives, which are set as cache. Anyone know how I can prevent that? I also noticed that my array would lock up completely if I tried to do any operations on them, like a format. Going to try one at a time and see what that gets me. It's really a PITA having to take all the video cards out when swapping them in and out; I wish there were BIOS options to disable each slot individually.
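On the sdd/sde shuffling: Linux hands out sdX names in probe order, so adding new drives can shift them between boots. unRAID tracks array and cache assignments by drive serial number, but anything scripted or configured against /dev/sdX can break. The stable identifiers live under /dev/disk/by-id; a quick sketch to see them:

```shell
# Each symlink under /dev/disk/by-id encodes the drive's model and
# serial number and always points at the same physical drive,
# regardless of the probe order that assigns sdX names.
# The grep filters out per-partition symlinks to keep the list short.
ls -l /dev/disk/by-id/ | grep -v -- '-part' || true
```

If a cache assignment does get lost after a hardware change, re-selecting the drives by serial in the unRAID device dropdowns (rather than by sdX name) is the usual recovery path.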