DarkZenith

Everything posted by DarkZenith

  1. I have the parts to throw together a machine for my kids and for my NAS/Plex. It is a dual Xeon E5-2670 on a Supermicro mainboard with IPMI (remote network desktop/BIOS for unRAID control and NAS access), so the machine will have 16 cores and 32 threads. It will also have 128GB of RAM. I also have x2 Nvidia GTX 1060s and a GTX 980 Ti for the passthroughs for my kids' machines. Here are the specific questions I want to get ironed out so I can finish my plans.

1) Cores on a typical quad-core i7 are paired as (0,4) (1,5) (2,6) (3,7) for the physical and hyperthreaded cores. In a dual-CPU configuration, does it number the physical cores 0-15 and then the HT cores 16-31? Or are they split by processor, e.g. CPU 0 is (0-7, 8-15) and CPU 1 is (16-23, 24-31)? (See the sketch below for one way to check this.)

2) The second question is about how I am planning to set up the kids' processor assignments. I plan to give each machine x4 physical cores and x4 HT cores. I was wondering about a funky core assignment like the following (assuming cores 0-15 are the physical ones):

System 1: cores 0-3, 20-21, 24-25
System 2: cores 4-7, 16-17, 26-27
System 3: cores 8-11, 18-19, 22-23
System 4: cores 12-15, 28-31

System 4 is for unRAID/NAS functions, Plex and such. The reasoning behind my core assignments is this: each kid gets x4 physical cores, plus x2 HT cores borrowed from each of the other two machines. The idea is that when those other machines are idle or under light load, the matching HT core would have near 100% of that core's performance available. The end result: instead of 4 physical cores and their 4 matching HT cores effectively splitting the CPU resource 50/50, each could potentially see full or near-full per-core performance (as I said, IF the matching physical core isn't under load). Would this idea/concept work, or is it just a paper theory that doesn't pan out?

3) Would there be any issue with assigning each machine 2 cores off each physical processor (to split the load), or would it perform better keeping them on one die?

I've been sitting on these parts for nearly a year now and it is time to finally put them together. Any advice would be appreciated.

PS - a quick question: has USB port passthrough gotten easier to configure yet? I have x3 Cloud 2 headsets that would be in use. Also, can the PS/2 ports be hardware-passed-through, or are those different because of the way they use direct bus access? (I have an X-Arcade I was hoping to use, as there is zero latency on PS/2 with it.)
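For question 1, rather than guessing the enumeration order (it varies by board/BIOS/kernel), the kernel's sysfs topology files give the definitive answer. A minimal Python sketch, assuming a standard Linux host such as the unRAID console:

```python
# Minimal sketch: map each logical CPU to its socket and its HT
# sibling set by reading the standard Linux sysfs topology files.
import glob
import os

for cpu_dir in sorted(glob.glob("/sys/devices/system/cpu/cpu[0-9]*"),
                      key=lambda p: int(p.rsplit("cpu", 1)[1])):
    cpu = os.path.basename(cpu_dir)
    topo = os.path.join(cpu_dir, "topology")
    with open(os.path.join(topo, "physical_package_id")) as f:
        socket = f.read().strip()       # which physical CPU package
    with open(os.path.join(topo, "thread_siblings_list")) as f:
        siblings = f.read().strip()     # this logical CPU's HT sibling set
    print(f"{cpu}: socket {socket}, siblings {siblings}")
```

Whatever numbering this prints on your specific X9 board is the one to use for pinning; don't assume the i7-style interleaving carries over.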
  2. Sorry for the necro on this thread, but I had an idea. If you are using a processor capable of hyperthreading, what you can do (using an 8-core example) is assign physical cores 0-3 to system 1 and 4-7 to system 2, then flip the hyperthreaded cores, so the HT siblings of cores 4-7 go to system 1 and the HT siblings of cores 0-3 go to system 2. My thinking is that, the way HT works, if the physical core isn't tied up, that performance (if called on) ends up available to the virtual core for use in the alternate OS. What do you think? (A sketch of the layout is below.)
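To make the crossed layout concrete, here is a hypothetical sketch for an 8-core/16-thread part, assuming logical CPUs 0-7 are the physical cores and 8-15 are their HT siblings (pair i with i+8; verify that mapping on your own box first, e.g. with the sysfs listing above):

```python
# Hypothetical crossed-HT pinning layout for an 8C/16T CPU.
# Assumes logical 0-7 are physical cores and 8-15 their HT
# siblings (pair i <-> i+8); check sysfs before relying on this.
PHYS = list(range(8))
HT = [c + 8 for c in PHYS]

vm1 = PHYS[0:4] + HT[4:8]   # physical 0-3 plus the siblings of VM2's cores
vm2 = PHYS[4:8] + HT[0:4]   # physical 4-7 plus the siblings of VM1's cores

print("VM1 cpuset:", ",".join(map(str, vm1)))   # -> 0,1,2,3,12,13,14,15
print("VM2 cpuset:", ",".join(map(str, vm2)))   # -> 4,5,6,7,8,9,10,11
```

The printed sets are what you would enter as each VM's CPU pinning in the unRAID VM settings (or as <vcpupin> entries in the libvirt XML).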
  3. I have several questions about how to set up my unRAID. I did some searching but didn't find everything I wanted to know. The parts en route for my unRAID box, which will be my file server and my kids' gaming machines, are as follows:

  • Dual Intel Xeon E5-2670 (8 cores / 16 threads each with HT, 2.6GHz, 3.3GHz turbo)
  • Dual Corsair H60 Rev2 closed-loop water coolers
  • Supermicro X9DRI-F motherboard (C602 chipset, IPMI)
  • 128GB (16x8GB) DDR3 10800 ECC memory (delivery was lost, so it is still en route/being replaced)
  • x3 Nvidia cards (GTX 580, GTX 760, and one new one, likely a 1060 or 1070)
  • x2 SSDs (likely a pair of Samsung 850 EVOs, 500GB)
  • x5 Western Digital Red 5TB NAS drives (2 I already have, 3 are new orders)

The plan I have in mind is to run unRAID off the IPMI internal/remote system access, set up with x3 VMs for my 3 kids as replacement gaming rigs, and also run my file server off this machine. Planned config is 4 or 5 physical cores per kid's machine; the remaining physical cores and all hyperthreaded cores will be reserved for file server/unRAID use. The SSDs are going to be the cache drives for a storage array set up over the WD Reds. 24GB RAM per VM, with the remaining 32GB reserved for drive caching/unRAID use.

Questions I have, regarding CPU and hyperthreading configuration:

Has anyone tried setting up the VMs with only physical cores and assigning all hyperthreaded cores for filesystem use? How well does this function?

Has anyone tried swapping virtual cores between 2 VMs? I.e., VM 1 has physical cores 1-4 and virtual cores 5-8, while VM 2 has physical cores 5-8 and virtual cores 1-4. I am wondering if this will essentially allow the systems to cross-balance processing power. The thinking is that if VM 1 isn't in use, then 100% of the processing power for that core is available to the hyperthreaded cores for use on VM 2, and vice versa; when both systems are under load, it'll still balance out. With x3 systems, run x2 virtual cores from each of the other 2 systems to balance it over all three.

Would it be better to just assign x4 cores plus the matching virtual cores to each system, leaving the last 4 cores plus virtual cores in reserve for the system?

With a dual-CPU configuration, would it be better to assign x2 cores from each processor to each VM to balance the load?

Lastly, if I understand the processor architecture properly, each processor has x8 cores arranged internally as pairs of cores, each pair having 2 memory channels. Is there a way to determine and assign physical blocks of memory to each machine based on the processor it is attached to? For example, processor cores 2 and 3 have memory channels 2 and 3 bound to the internal memory controller; in turn, have the system assign the memory range (with x8GB memory modules) 16.0GB to 31.99GB, and then for cores 4/5 use the 32GB to 47.99GB range. Would there be any benefit to this? Is there any real performance loss if memory that is physically attached to the second processor is accessed by the first? Or is the internal bus fast enough to make this negligible? (See the sketch below for how to see the CPU/memory split per socket.)

For the file storage system, I am planning to run the x5 WD Red drives in a redundant RAID setup through unRAID, use the x2 SSDs for caching access to the array, and leave 32-64GB of memory available as another level of caching. Is this a good setup/plan to maximize the performance of the system? Can the memory be set up as a full-blown write cache, queuing writes at lower priority than reads, to really maximize the throughput between the machines?
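On the memory question: the kernel already exposes which logical CPUs and how much RAM belong to each socket as NUMA nodes (one node per socket on a dual E5-2670 board). A minimal sketch, assuming the standard Linux sysfs layout:

```python
# Minimal sketch: show which logical CPUs and how much memory sit
# on each NUMA node. Standard Linux sysfs paths; run on the host.
import glob
import os
import re

for node in sorted(glob.glob("/sys/devices/system/node/node[0-9]*")):
    name = os.path.basename(node)
    with open(os.path.join(node, "cpulist")) as f:
        cpus = f.read().strip()        # logical CPUs attached to this socket
    with open(os.path.join(node, "meminfo")) as f:
        total = next(line for line in f if "MemTotal" in line)
    mem_kb = int(re.search(r"(\d+)\s*kB", total).group(1))
    print(f"{name}: cpus {cpus}, {mem_kb // 1024**2} GiB")
```

Remote access over QPI to the other socket's memory does carry a latency penalty, so keeping each VM's cores and memory on the same node (libvirt's <numatune> element handles the memory side) is generally worth doing, though for gaming workloads the difference is modest rather than dramatic.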
  4. Small problem with that: I have no slots available... I have x3 double-slot GPUs and those cover the other 3 PCIe slots... Sigh.
  5. I have been doing some searching and reading, and I understand that the issue with duplicate USB devices is that the system doesn't necessarily assign them in the same order every startup. I am planning on running x3 gaming systems for my kids off a dual Xeon E5-2670 rig with Nvidia GTX 1060s. I have most of the components already and will be assembling in the next couple of weeks.

I was wondering: if I were to invest in x3 different quality powered USB 3.0 hubs, can those hubs be separately passed through to the VMs, and in turn have the devices attached to them work the same way? With my kids everything has to be the same to be fair, so they will all have the same USB keyboard and mouse along with Cloud 2 headsets (already have all of these).

As I said, I did a few searches and didn't find an answer to this specific question. I ask to confirm before making the investment in the hubs, because with them still being USB devices I am unsure how the system/unRAID will respond to them. I *think* the USB chips for hubs may be standardized and don't necessarily have different device IDs... (a sketch for checking this is below). Thanx, any advice would be appreciated.
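You can check the duplicate-ID concern directly: identical hubs and headsets do share the same vendor:product IDs, but each one also has a physical bus/port path (e.g. 1-2.4) that stays tied to the port it is plugged into. A minimal sketch using the standard Linux sysfs layout:

```python
# Minimal sketch: list USB devices with their vendor:product IDs and
# their physical bus/port paths. Identical devices share IDs, but the
# port path is stable per physical port, which is how three identical
# hubs can be told apart. Standard Linux sysfs; run on the host.
import glob
import os

for dev in sorted(glob.glob("/sys/bus/usb/devices/*")):
    vid_path = os.path.join(dev, "idVendor")
    pid_path = os.path.join(dev, "idProduct")
    if not (os.path.exists(vid_path) and os.path.exists(pid_path)):
        continue  # skip interface entries, which have no device IDs
    with open(vid_path) as f:
        vid = f.read().strip()
    with open(pid_path) as f:
        pid = f.read().strip()
    print(f"{os.path.basename(dev)}: {vid}:{pid}")
```

Libvirt can also address a USB passthrough device by bus/device number instead of vendor:product ID, though device numbers can change across replug or reboot, so the port path above is the stable thing to key off when deciding which hub goes to which VM.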