reload
Members · 7 posts

  1. Quoting gridrunner: "I don't think you need 12 cores for the server side. What I would do is isolate 8 cores / 16 threads from Unraid using isolcpus, then pin them to the gaming VM. Those cores then can't be used by anything unless you specify it, so if Plex is transcoding etc. it won't touch the VM's cores. Then pin the emulator calls for that VM to 2 of the other (non-isolated) cores. I would then pin a further 4 of the remaining cores to the Plex docker so it can use those for transcoding. This would then leave you with 4 cores for the other dockers, Unraid, and the emulation calls from the VM pinned earlier."

     Useful info, thanks gridrunner - although I'm now looking at 2 x E5-2670 v3s, which are 12-core/24-thread beasts. So, of the 24 cores:
       8 cores (16 threads) pinned to the gaming VM
       2 other cores (4 threads) pinned to the VM's emulation calls
       8 cores (16 threads) to the Plex docker
       6 cores (12 threads) for other dockers, Unraid and/or other VMs as I spin them up

     Cooling-wise... I'll admit to having second thoughts about water. I think I'm going to go JustJoshin's route, at least at first, with regular old air as testdasi suggested. If the noise and temperatures get too much, I'll go down the watercooling route.
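     A minimal sketch of the pinning scheme described above, assuming a hypothetical layout where cores 0-7 (hyperthread siblings 24-31) go to the VM - the core numbers and counts are illustrative, not taken from the thread. Isolation is done by appending isolcpus=0-7,24-31 to the kernel line in /boot/syslinux/syslinux.cfg; the pinning itself goes in the VM's libvirt XML:

     ```xml
     <!-- VM XML sketch: pin guest vcpus to the isolated cores, and the
          emulator threads to two non-isolated cores (8 and 9 here,
          plus their hyperthread siblings 32 and 33) -->
     <vcpu placement='static'>16</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='0'/>
       <vcpupin vcpu='1' cpuset='24'/>
       <!-- ...remaining vcpus pinned pairwise to cores 1-7 and 25-31... -->
       <emulatorpin cpuset='8,9,32,33'/>
     </cputune>
     ```

     Pairing each core with its hyperthread sibling (0 with 24, and so on) keeps a guest vcpu pair on the same physical core, which is the usual recommendation for gaming VMs.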
  2. I so should have paid attention to this topic! Keen to hear how you get on with CPU pinning and performance - judging by gridrunner's experience, it all seems promising! On your motherboard beeps, JustJoshin - what motherboard is it? You may get some mileage out of Googling beep codes for that model.
  3. Cheers testdasi - good response.

     Yeah, I'm still pondering different board/chip combos; the latest is in fact an EEB board with 2 of the E5-2680 v3s - the board has an ASMedia controller exactly as you mention. The case I'm looking at (the Corsair 900D) is compatible, so no worries there.

     Mixing Unraid and RAID - I was planning on having the drives in a RAID-0 pair outside of the array, and yeah, I'm well aware of the dangers of RAID-0 - but I don't store any data I want to keep on my gaming desktop; it's only stuff that's re-downloadable elsewhere. In any case, I'm probably just going to go for one drive, but pick up a 1 TB from somewhere instead of the 2 x 512 GB.

     Watercooling-wise... having been the victim of a leaky all-in-one loop, and having had all-in-one pumps fail on me in the past too, I would normally agree with you. However, I get the feeling this box will generate a lot of noise and heat under regular air cooling. I am planning on using a reservoir with a dual-pump configuration, so even if one pump fails I'll still have flow. I dunno, I might give air a go first and see if it's tolerable - we'll see.

     And yeah, PSU-wise - definitely not skimping here.
  4. Thanks for that, HellDiverUk. It was indeed reviews of them that first got me thinking about this - http://www.guru3d.com/articles-pages/samsung-950-pro-m-2-ssd-review,6.html as an example. Yeah, I understand they're under proper thrash-testing scenarios, but some of the temperatures I was seeing were somewhat alarming. I think I will more than likely use an Angelbird in any case - but I may hold out for a 1 TB PCIe SSD to plonk in it for the VM OS outside of the array (https://www.overclockers.co.uk/angelbird-wings-px1-high-performance-pcie-x4-to-m.2-adapter-card-for-ahci-nvme-hd-001-ab.html). Unnecessary maybe, but it certainly can't hurt system temperatures.
  5. Got a question related to this (and apologies for the thread resurrection). I'm looking to use Unraid in a very similar scenario - Plex serving (with associated webapp services: PlexPy, PlexRequests, Sonarr and such) and a gaming VM.

     Drive configuration I think I've got a handle on - I'll be using a PCIe SSD outside of the array/cache pool for the VM. I'm looking at a dual E5-2670 v3 setup: 2 physical CPUs, 24 cores, 48 threads. Ideally I'd run Plex in a docker, and transcode-wise I can be doing anywhere up to 8 concurrent streams.

     What I'd like to configure, CPU-wise, is Plex taking priority but the gaming VM able to utilise the maximum number of cores available when Plex isn't hammering them. I understand application pinning and CPU provisioning to some extent, but that doesn't seem to achieve what I'm after - it splits the resources rather than making everything available to both the VM and Unraid itself?

     Edit: And apologies, as this thread isn't entirely relevant to the question - I didn't see that this was about Unraid as a VM itself.
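     On the docker side of a split like this, per-container core restriction is just a cpuset flag. A hedged sketch - the container name, image and core numbers are illustrative assumptions (in Unraid's docker template the flag would go in "Extra Parameters"):

     ```shell
     # Confine Plex to cores 8-15 and their hyperthread siblings,
     # leaving the VM's pinned cores untouched
     docker run -d --name plex --cpuset-cpus="8-15,32-39" plexinc/pms-docker
     ```

     Worth noting that a cpuset is a hard partition, not a priority: it won't let the VM borrow Plex's cores when they're idle. Letting containers share cores and weighting them with `--cpu-shares` is closer to "Plex takes priority", at the cost of occasional contention with anything else on those cores.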
  6. Thanks Jonathan - interesting to know. How exactly does this work - essentially remoting in from another machine?

     I'm now looking at an ASUS Z10PE-D8 with Xeon E5-2670 v3 chips instead - that board seems to have an onboard VGA adapter, which should make life easier. A bit pricier, but if I'm going large I might as well go the whole hog on something half-modern.

     Yeah, I had a feeling that'd be the case. Is there another way to present two drives like this as one storage unit outside the array, for the purposes of storing a VM? I've read somewhere about partitioning a cache pool too, which makes me wonder...
  7. Hi all,

     So I've been lurking for quite a while now, musing over possible build scenarios. Like many, I've become starry-eyed over the GPU passthrough possibilities with Unraid - along with everything else it can do, such as spinning up the occasional VM for work testbed purposes - and am looking to take the plunge on a new build in the next month or so. The build is subject to change, as I'm still weighing 1 vs 2 physical CPUs and such.

     Existing background: I currently have two machines, again like many - a Windows gaming desktop, and a server box running Plex and a few associated webapps etc., torrents and whatnot. Short specs on my existing kit, pretty much what you'd expect:

       Gaming machine: i7-4770K, 8 GB 2400 MHz RAM, 500 GB SSD, GTX 780
       Server machine: i7-2600K, 16 GB 1600 MHz RAM, 250 GB OS SSD, 5 x 2 TB storage drives presented as one drive to the OS with Drivepool, in what is basically a JBOD array

     With this new build I'm looking at combining the two sets of functions into one monster of a machine. There are a couple of points I'd like to clear up, and I'm looking for some advice. So far I'm musing over:

       CPUs: 2 x Xeon E5-2670 v3 for 24 cores / 48 threads between them
       Motherboard: Probably an ASUS Z10PE-D8 - dual socket, dual LAN, plenty of x16 PCIe slots with VT-d support, onboard VGA, 12 SATA ports of mixed flavour, among other goodies
       RAM: Probably 32 GB of 2133 MHz ECC RAM, going by the above board
       Drives: This is where it gets interesting:
         6 x 4 TB spinning rust of one description or another for storage
         2 x 250 GB SSDs on SATA3 (cache array)
         2 x 512 GB PCIe NVMe SSDs on PCIe x4-to-M.2 adapter cards with heatsinks - the plan is to use Samsung 950 Pros, and these things run hot! These will sit outside the array, ideally in RAID-0 for speed as the VM host drive
       Graphics: 1 x GeForce GTX 1080 for the VM
       PSU: Probably a ~1000 W unit
       Case: Something huge - probably a Corsair Obsidian 900D
       Cooling: Oh yes. Think this'll need watercooling.

     So yeah, a couple of questions initially:

     Is it possible to use Unraid to set up a RAID-0 array of 2 PCIe drives like that? I'm fairly certain the board won't be able to create a RAID array on PCIe disks natively (although wiser heads than me can feel free to tell me otherwise!).

     CPU overprovisioning/pinning - I've been trying to read up on this and get my head around the best setup. Would it be best to give both the VM and the Docker applications their own cores/threads directly? Or should I just pin the VM and leave Docker to eat the rest as it needs/wants? As an example: if Plex is transcoding for 7 users while I'm running an intensive gaming session, would unpinned Plex attempt to use the cores assigned to my VM?

     LAN port splitting - is it possible to provision one LAN port directly to a VM, with the other used for the server functions?

     Any help much appreciated. Looking forward to getting this beast set up.
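     On the RAID-0 question above: the motherboard's firmware RAID indeed won't usually cover NVMe devices, but btrfs can stripe devices itself, so one route is a manual btrfs RAID-0 volume mounted outside the array. A hedged sketch - the device names and mount point are illustrative assumptions, and this wipes both drives:

     ```shell
     # Stripe the two NVMe drives into one btrfs volume
     # (data striped for speed, metadata mirrored), then mount it
     mkfs.btrfs -f -d raid0 -m raid1 /dev/nvme0n1 /dev/nvme1n1
     mkdir -p /mnt/disks/vmstore
     mount /dev/nvme0n1 /mnt/disks/vmstore
     ```

     A mount like this won't persist across an Unraid reboot by itself; the Unassigned Devices plugin is the usual way to manage drives outside the array.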