Everything posted by 1812

  1. hope this helps.... a few more coming in next post.
  2. You can stretch how far from the server a physical vm display goes by using an HDMI over cat 5e/6 extender, and the same for USB 2.0 as well. This is not over an IP network but standalone cat cables. I've read that the HDMI-over-IP style extenders can saturate and overwhelm a gigabit network, but I have no personal experience with them. I run a few of these in my home to physically extend the desktop to locations about 50 feet from my server; different extenders can go further as well. They run about 30-50 dollars for a kit on Amazon, and I can detect no latency using them. The USB 2.0 over cat 5e extender costs about 50 bucks. You can get a complete "KVM" over cat 5e/6 extender that puts the HDMI signal and the USB signal in one standalone cable, but the cheapest I've seen is 150 for a kit.
  3. I use smb. Did you try to log in via network discovery using your new credentials to verify they work? Even if the login/pw for the Synology is wrong, it will still show the names of the shares, or at least mine does.
  4. like this?

     Fresco Logic FL1100 USB 3.0 Host Controller (sold as Inateck KT4006)
     Usage: USB disk access / USB3-to-ethernet adapter / keyboard-mouse / bluetooth dongles
     Rating: Good+, good bandwidth on the card, no aux power cable needed for the card; 1 out of 5 I ordered died within 2 weeks
     Instructions: for OS X, plug and play through Sierra 10.12.2; Windows not tested, but the manufacturer claims Windows 7+ compatibility (comes with a disc of drivers for Windows)

     666172-001 MNPA19-XTR HP 10GB Ethernet Network Card (Mellanox ConnectX-2 EN)
     Usage: moderate to heavy capable
     Rating: Excellent/Good if you need to cheaply move data faster than gigabit
     Instructions: no OS X support; Win10 plug and play but works better with the Mellanox drivers (downloadable from their website); native unRaid support for the server to use the card in 6.2.4 (possibly earlier). I currently use a pair to back up one server to the other.

     GeForce GT 710
     Usage: good cheap general desktop card, light gaming, watching movies
     Rating: Good
     Instructions: plug and play in OS X through Sierra 10.12.2 as long as smbios is set to 14,1 or 14,2 before assigning the card (otherwise a black screen occurs); did not try to give it Nvidia drivers

     GeForce GT 730
     Usage: good cheap desktop card, light gaming, watching movies
     Rating: Good
     Instructions: plug and play in OS X through Sierra 10.12.2 as long as smbios is set to 14,1 or 14,2 before assigning the card (otherwise a black screen occurs); did not try to give it Nvidia drivers

     GTX 760 SC
     Usage: mild video editing in FCPX, 3 simultaneous screens utilized
     Rating: Good+
     Instructions: plug and play in OS X through Sierra 10.12.2 as long as smbios is set to 14,1 or 14,2 before assigning the card (otherwise a black screen occurs); using Nvidia web drivers can be a challenge, as incremental updates to OS X can cause the drivers to no longer work with the card.
  5. Did you click "load shares" after entering your credentials? Let's assume you did; that means something is set up incorrectly on your Synology. I don't use admin to access my Synology for videos. As a safety precaution, you should use something else. On your Synology, create a new user (I use "video"), then only give that user permission to read or read/write the folder(s) your videos are stored in. (I only allow read access since Plex doesn't need to write, and I don't want to accidentally delete a video using Plex.) Then double check that it works by using your computer to mount the share via smb or nfs (whichever you have set up) in network discovery or whatever you use to dig around your network. Once you know that works, go back to your unRaid server and enter those credentials, click load shares, and your video folder(s) will be there. If your videos are in more than one overall source location, then each one will have to be added to the Plex container settings.
  6. If the share won't mount in unassigned devices when you click the mount button, then the docker can't use it either, because the login, password, or path is wrong. Also, the path looks funny to me. I have all my media on a Synology and it maps by /mnt/disks/SynologyNasName/folder. If your Synology is in the same "workgroup", then search for servers in the workgroup instead of using the IP, and select it from the toggle/dropdown. Then ensure your name/password are correctly entered. Once the window closes, if you've done it correctly, clicking mount will mount it; you can then take that path and add it to Plex by clicking "add another path". Once that is set, open Plex, add a library, and find the path in there. ---edit to add an image of what the screen looks like when you add the mounted smb connection in the Plex docker settings. You can set unassigned devices to auto-mount the smb connection in the future if you have to reboot your server.
  7. Mount the nfs share via unassigned devices
     Assign the path to the docker
     Find the path in the docker and set the destination there
  8. Try switching your vm network connection from virbr0 to br0. It will then solicit an IP from your network's DHCP provider. See if the spamming continues.
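     For reference, the change ends up in the vm xml roughly like this (a sketch only; the unRaid vm template writes this for you if you just pick br0 from the network dropdown, and your model/mac lines stay whatever they already are):

     <!-- virbr0: libvirt's NAT bridge, vm sits behind the host -->
     <interface type='bridge'>
       <source bridge='virbr0'/>
       <model type='virtio'/>
     </interface>

     <!-- br0: bridged straight onto your LAN, vm gets its ip from your router's dhcp -->
     <interface type='bridge'>
       <source bridge='br0'/>
       <model type='virtio'/>
     </interface>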
  9. New test for audio/video stability and core pinning in OS X (for those who care)

     For this one, I used one of my other servers with dual e5520 processors. Thread pairings as follows:

     Proc 1
     cpu 0 <===> cpu 8
     cpu 1 <===> cpu 9
     cpu 2 <===> cpu 10
     cpu 3 <===> cpu 11
     Proc 2
     cpu 4 <===> cpu 12
     cpu 5 <===> cpu 13
     cpu 6 <===> cpu 14
     cpu 7 <===> cpu 15

     Proc 2 isolated from unRaid.
     vm1 with a GT 730 gpu, assigned cores 4-7
     vm2 no gpu, Apple screen share, assigned cores 12-15
     emulator pin 0-3 for both vm's
     vm disk images located on the same SSD

     This cpu assignment was selected because it is accepted to cause audio/video issues when placing two vm's onto the same physical cores (HT pairs).

     Result: when vm2 was running Cinebench with its cores at 100%, vm1 had zero audio/video issues streaming YouTube, Netflix, or blu-ray movies in VLC from a network server. No matter what activities I did, I could detect no slowdown and no lag.

     Just for fun, I put both vm's on the same threads/cores:
     vm1 with a GT 730 gpu, assigned cores 4-7
     vm2 no gpu, Apple screen share, assigned cores 4-7

     YouTube videos and local blu-ray streamed with few video quality issues, but audio was at times dirty and/or non-functional (to be expected), even when vm2 was only idling. Interestingly, when Cinebench pushed the shared cores to 100% during blu-ray playback the video quality did not suffer at all, but as before, audio was less than desirable when it actually worked.

     So in my case, it is still fine (if not better performance-wise) to put OS X vm's on non HT-paired cores, since it does not cause audio/video issues while retaining the benefit of higher performance when the other vm isn't 100% active.

     Why is it like this? I don't know if it is OS X specifically not showing the audio/video issues that are said to occur in Windows 10 with this core assignment, if this particular gpu is less susceptible to audio noise with HT pinning, if it is because I am using older enterprise equipment, or because only a single vm has a gpu (though people have reported issues with gpu-less dockers causing problems on the HT siblings of vm cores). OK, no more tests for a while...
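     For reference, the vm1 assignment above looks roughly like this in the xml (a sketch only; the cpuset numbers are my pairings on this box, swap in your own):

     <vcpu placement='static'>4</vcpu>
     <cputune>
       <vcpupin vcpu='0' cpuset='4'/>
       <vcpupin vcpu='1' cpuset='5'/>
       <vcpupin vcpu='2' cpuset='6'/>
       <vcpupin vcpu='3' cpuset='7'/>
       <emulatorpin cpuset='0-3'/>
     </cputune>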
  10. And the results are in....

      Details:
      HP DL380 G6 dual processor server
      Xeon x5670, 6-core HT processor, 12 logical cores per processor
      2nd processor used in testing, isolated from unRaid/dockers/etc
      vm's on separate unassigned SSD drives
      Cinebench R15 (higher scores are better)
      no emulator pin set
      8gb ram on each vm
      clean, non-updated OS X Sierra (matching disk images)
      no gpu, vm's accessed via Apple screen share
      unless stated otherwise, cpu cores were run at 100% for the benchmark, run 3x back to back
      no topology defined (I don't think OS X cares, based on some previous tests I ran which showed almost no difference)

      Thread pairings
      Proc 1
      cpu 0 <===> cpu 12
      cpu 1 <===> cpu 13
      cpu 2 <===> cpu 14
      cpu 3 <===> cpu 15
      cpu 4 <===> cpu 16
      cpu 5 <===> cpu 17
      Proc 2
      cpu 6 <===> cpu 18
      cpu 7 <===> cpu 19
      cpu 8 <===> cpu 20
      cpu 9 <===> cpu 21
      cpu 10 <===> cpu 22
      cpu 11 <===> cpu 23

      -----------------------------
      Section 1: all testing in this section on the following cores: vm1 6-11, vm2 18-23

      vm1 on 6 non HT-paired vcpus, vm2 off (vm1 baseline score for non HT pairs): 521, 527, 526
      vm1 while vm2 idle on the HT siblings: 517, 522, 520
      vm1 at 100% with vm2 at 20-50% use on the HT siblings: 464, 441, 463
      vm2 while vm1 idle on the HT siblings: 497, 514, 515
      vm1 & vm2 both at max 100% usage (simultaneous benchmark from here on down):
      vm1 314 / vm2 309
      vm1 315 / vm2 311
      vm1 317 / vm2 315

      -------------------------
      Section 2: core configurations listed with each test

      vm1 on 6-8, 18-20 (vm1 baseline score for HT-paired cores)
      vm2 on 9-11, 21-23
      HT core pairs for each vm:
      vm1 347 / vm2 346
      vm1 345 / vm2 345
      vm1 349 / vm2 346

      vm1 on 6, 18, 7, 19, 8, 20
      vm2 on 9, 21, 10, 22, 11, 23
      mixed vcpu ordering of HT-paired cores:
      vm1 346 / vm2 345
      vm1 346 / vm2 344
      vm1 348 / vm2 346

      vm1 & vm2 sharing 2 cores (8, 20), utilizing only 10 total cores but 6 on each vm
      vm1 on 6-8, 18-20
      vm2 on 9, 10, 8, 21, 22, 20 (funny pairing to keep vcpu0 off the shared cores)
      vm1 286 / vm2 286
      vm1 286 / vm2 283
      vm1 288 / vm2 285

      and just for fun, vm1 & vm2 on the same cores (6-11) at 100% utilization:
      vm1 254 / vm2 255
      vm1 254 / vm2 257
      vm1 250 / vm2 255

      --------------------------
      What does it mean? Using HT-paired cores scores a max of 349, vs a non-HT max of 522 with vm2 idle, meaning non-HT cores have about 50% more power in this setting. Even with 50% usage on vm2, vm1 only drops to 441, which is still about 27% more power than HT-paired cores (which makes sense, actually). Even if vm2 is off, the max score of vm1 does not change when using HT pairs, so there is no possible performance gain regardless of what any other vm is doing or not doing, unlike with non HT-paired assignments. Now, if both vm's are using 100% of their cpu resources, then HT pairs are the clear winner, by about 10% over the non-HT core assignment, which at best hit 317 (compared to 349). No video/audio testing was done in this situation because I only have 1 video card in this particular server.

      For many people none of this may apply. But for someone like me who needs cpu power and not graphics/audio on all vm's, this has some merit and interest. I may drop in a card from one of my other servers and test for stability in the future. This actually explains quite a bit for me about why I thought my machines were wonky; it's more how I'm using them than them operating differently.
  11. The Cinebench scores I posted earlier show the opposite in OS X. Could you try some benchmarking of paired HT cores vs non-paired vcpus in OS X? Maybe this will shed some light on whether my machines are just wonky or if OS X is different from Windows in how it handles paired cpu cores.

      Quote: Yes, that is to be expected if the situation is like this. My cpu pairing is like this:
      cpu 0 <===> cpu 14
      cpu 1 <===> cpu 15
      cpu 2 <===> cpu 16
      cpu 3 <===> cpu 17
      cpu 4 <===> cpu 18
      cpu 5 <===> cpu 19
      cpu 6 <===> cpu 20
      cpu 7 <===> cpu 21
      cpu 8 <===> cpu 22
      cpu 9 <===> cpu 23
      cpu 10 <===> cpu 24
      cpu 11 <===> cpu 25
      cpu 12 <===> cpu 26
      cpu 13 <===> cpu 27
      So if I pinned 8 vcpus to 0,1,2,3,4,5,6,7 they would be over 8 separate cores, so they would get good performance as those cores would not be doing anything else. But then if I set up another vm with 8 vcpus on 14,15,16,17,18,19,20,21, it would be sharing the same 8 cores, and when both machines run at once performance would be bad. So over those 8 cores the 2 vms should be:
      vm1: 0,1,2,3,14,15,16,17
      vm2: 4,5,6,7,18,19,20,21
      That way there is no overlap. Hope that makes sense.

      Yes and no. If you have a bunch of cores, then you can run them non-HT and get good performance. If you don't, you get reduced performance. Just for kicks, I'm going to set up a couple of 4-core vm's tonight on each other's pairs and run some simultaneous tests... I do expect degraded benchmarks, but I'm curious to know whether it's better to have 1 vm on its own HT pairs, or to let it share cores with another vm which may not be using those siblings at the same time, therefore lessening the performance hit vs using HT pairs... (sorry if that's confusing, still on a bunch of medicine... results in a few hours though...)
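      In xml terms, the no-overlap assignment quoted above would look roughly like this for vm1 (a sketch only; vm2 would mirror it with 4,5,6,7,18,19,20,21, and in my tests the ordering of the pairs didn't seem to matter much):

      <cputune>
        <vcpupin vcpu='0' cpuset='0'/>
        <vcpupin vcpu='1' cpuset='1'/>
        <vcpupin vcpu='2' cpuset='2'/>
        <vcpupin vcpu='3' cpuset='3'/>
        <vcpupin vcpu='4' cpuset='14'/>
        <vcpupin vcpu='5' cpuset='15'/>
        <vcpupin vcpu='6' cpuset='16'/>
        <vcpupin vcpu='7' cpuset='17'/>
      </cputune>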
  12. The Cinebench scores I posted earlier show the opposite in OS X. Could you try some benchmarking of paired HT cores vs non-paired vcpus in OS X? Maybe this will shed some light on whether my machines are just wonky or if OS X is different from Windows in how it handles paired cpu cores.
  13. Quote (my earlier post): Good that you got it running. Want it faster? Where is the vm mounted? I see it's on a cache drive, but is that an SSD or a spinning disk? A spinning disk is much more sluggish. Even if it's an SSD cache, some believe that it is better for disk i/o to have the vm on an SSD mounted via unassigned devices. That's how I do it, but for other reasons.

      Also, cpu pinning.... What I've found is that the common idea of using HT cores with OS X vm's doesn't give the best cpu performance. For example, I ran the following Cinebench test this morning on 12 of my 24 cores (on 2 processors).

      Non-paired threads spanning 2 processors:
      <vcpupin vcpu='0' cpuset='12'/>
      <vcpupin vcpu='1' cpuset='13'/>
      <vcpupin vcpu='2' cpuset='14'/>
      <vcpupin vcpu='3' cpuset='15'/>
      <vcpupin vcpu='4' cpuset='16'/>
      <vcpupin vcpu='5' cpuset='17'/>
      <vcpupin vcpu='6' cpuset='18'/>
      <vcpupin vcpu='7' cpuset='19'/>
      <vcpupin vcpu='8' cpuset='20'/>
      <vcpupin vcpu='9' cpuset='21'/>
      <vcpupin vcpu='10' cpuset='22'/>
      <vcpupin vcpu='11' cpuset='23'/>
      Cinebench scores: 1081, 1104, 1121

      Paired HT threads on a single processor:
      <vcpupin vcpu='0' cpuset='6'/>
      <vcpupin vcpu='1' cpuset='7'/>
      <vcpupin vcpu='2' cpuset='8'/>
      <vcpupin vcpu='3' cpuset='9'/>
      <vcpupin vcpu='4' cpuset='10'/>
      <vcpupin vcpu='5' cpuset='11'/>
      <vcpupin vcpu='6' cpuset='18'/>
      <vcpupin vcpu='7' cpuset='19'/>
      <vcpupin vcpu='8' cpuset='20'/>
      <vcpupin vcpu='9' cpuset='21'/>
      <vcpupin vcpu='10' cpuset='22'/>
      <vcpupin vcpu='11' cpuset='23'/>
      Cinebench scores: 759, 757, 749

      Topology: in previous tests I've done with OS X, I didn't see any improvement or degradation from specifying the topology to the vm.

      Quite a bit of difference. The unRaid dashboard shows cpu usage at 100% on those cores when either test is run, but the result is about a 45% increase in cpu benchmark score when not using HT cores for vm assignments. I'm not saying the way I do it is right for your setup, but you could at least experiment and run your own tests. ALSO, isolate your vm cores from unRaid if you haven't already, and then set the emulator pin to one of the cores you left for unRaid. This makes sure that only the vm uses the cores assigned to it. Splashtop/TeamViewer may still use cpu to render even if you have a video card; I don't remember, someone else will have to chime in on that. But OS X screen share will use the gpu, sadly with no sound... There may be some other gains from adjusting Clover settings. I am about to start a new topic on that to compare notes with others and not clutter this one up so much.

      Quote: Thanks for the tips, I'm definitely going to do some playing around with it later tonight. I use my vm's for building and developing C++ applications, so most of my build environments are on a spinning 2tb array disk with 2x250gb and 2x256gb cache drives in btrfs raid. My main OS (win10), which I'm typing from now, has gpu passthrough and is stored directly on the raided cache drive. But I've never isolated any cores for the vm's or used an unmounted drive, as my belief was that the vm would perform better with paired threads and a cache drive, or on the raided cache drive itself. The only thing I have done is opted to leave cores 0/6 free for unRaid. 1. Is it possible my other vm's could benefit from core isolation as well? Also, you said that most recommend running the vm on an isolated drive which is in an unmounted state. 2. Wouldn't the vm benefit more from having an SSD array drive with raided SSD cache IF that SSD array drive was only being used for vm's? Or like I'm currently doing for performance on my main machine, which is storing on the raided cache drive?

      Maybe my machines are just weird with OS X, I don't know, but what I do know is that the scores are pretty solid. I use mine as a transcoding cluster for edited/produced video, and can confirm that a vm using non HT-paired cores transcodes faster than one with HT cores. I haven't done the test with win10 (I have a vm but don't use it often), but my guess is that Windows, optimized for virtual environments, will perform better with HT pairs as the common advice dictates.

      1. All vm's can benefit from not having to share cores with unRaid services and dockers. If you've used isolcpus in your syslinux.cfg and isolated everything but 0,6, then 0,6 are the only cores unRaid gets. The rest is clear for whatever vm's you run. The drives aren't unmounted; they are mounted using the unassigned devices plugin. Essentially, if you put the disk image there, it doesn't have to contend with any other action/file transfers/etc. that the cache drive may end up doing.

      2. An SSD cache will be faster than a spinning cache, which is way faster than a spinning array. If the SSD were in the array, it would still be limited by the speed of the parity drive reads/writes. With a spinning parity drive, that's 30-50ish MB/s writes and probably less than normal SSD reads. Using turbo write you can hit parity-protected write speeds of 80-125MB/s. I don't know of anyone who has a wholly SSD-only array (someone around here does, I bet), but they'd be the ones to ask about SSD array performance for specific numbers. Many people just leave the vm's on the cache drive, and it works fine. You also get the benefit of having a redundant copy if one of the paired drives fails (depending on the raid configuration of the cache pool); otherwise, you have to back it up yourself or use one of the backup programs/scripts (which I don't use, I just move them manually). But since I have a few heavy Plex/docker app users in the house, it's better for me to move mine off the cache drive and onto an unassigned disk where it gets the full bandwidth available. (If this is rambling, sorry, I'm under quite a bit of medication for seasonal allergies and things...)
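      To make the isolation part concrete, this is roughly what I mean (a sketch only, assuming 0/6 is the pair you leave for unRaid; adjust to your own core count). In syslinux.cfg the append line becomes something like:

      append isolcpus=1,2,3,4,5,7,8,9,10,11 initrd=/bzroot

      and in each vm's xml, inside <cputune>, the emulator gets pinned to the unRaid cores:

      <cputune>
        <!-- your existing vcpupin lines stay as they are -->
        <emulatorpin cpuset='0,6'/>
      </cputune>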
  14. Has anyone played with the Clover settings to see how they affect vm performance? I did a few changes last night, and what I found is below. Some of the changes I did not expect to make a difference, but I documented them as an incremental part of the process. As changes were made, the system was rebooted and the benchmark retested. Some of the parameters I entered are specific to my hardware and should not be used by others, because they may cause system instability.

      Hardware:
      HP DL380 G6, dual x5670 processors
      8GB ram assigned to the vm
      12 cores (non-paired) assigned to the vm
      vm on an unassigned SSD
      OS X 10.12.1 fresh install, no updates
      Testing using Cinebench R15
      Clover version: 2k3 rev 3579
      New vm created using the 1st version by gridrunner (non-vmware install version)

      Results:
      no settings (fresh install): 1021, 1021, 1020
      smbios set to 14,1: 1026, 1027, 1026
      qpi and bus speed 3,200: 1031, 1028, 1021
      cpu frequency 2930 (base frequency of my processor): 1032, 1020, 1019
      cpu frequency 3333 (max turbo frequency of my processor; caused audio problems in some instances): 1171, 1167, 1171
      passed through GTX 760: 1165, 1165, 1162

      The biggest gain, about 10%, seems to come from telling OS X that the cpu frequency is higher. I have not set it above the max turbo frequency of the processor. Anyone else?
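      For anyone wanting to poke at the same things, these are roughly the config.plist entries involved (a sketch from memory, assuming the standard Clover CPU and SMBIOS sections; verify the key names against your Clover version, and only use frequencies your own processor actually supports):

      <key>SMBIOS</key>
      <dict>
          <key>ProductName</key>
          <string>iMac14,1</string>
      </dict>
      <key>CPU</key>
      <dict>
          <key>QPI</key>
          <integer>3200</integer>
          <key>FrequencyMHz</key>
          <integer>3333</integer>
          <!-- bus speed has its own BusSpeedkHz key if you want to set it as well -->
      </dict>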
  15. Good that you got it running. Want it faster? Where is the vm mounted? I see it's on a cache drive, but is that an SSD or a spinning disk? A spinning disk is much more sluggish. Even if it's an SSD cache, some believe that it is better for disk i/o to have the vm on an SSD mounted via unassigned devices. That's how I do it, but for other reasons.

      Also, cpu pinning.... What I've found is that the common idea of using HT cores with OS X vm's doesn't give the best cpu performance. For example, I ran the following Cinebench test this morning on 12 of my 24 cores (on 2 processors).

      Non-paired threads spanning 2 processors:
      <vcpupin vcpu='0' cpuset='12'/>
      <vcpupin vcpu='1' cpuset='13'/>
      <vcpupin vcpu='2' cpuset='14'/>
      <vcpupin vcpu='3' cpuset='15'/>
      <vcpupin vcpu='4' cpuset='16'/>
      <vcpupin vcpu='5' cpuset='17'/>
      <vcpupin vcpu='6' cpuset='18'/>
      <vcpupin vcpu='7' cpuset='19'/>
      <vcpupin vcpu='8' cpuset='20'/>
      <vcpupin vcpu='9' cpuset='21'/>
      <vcpupin vcpu='10' cpuset='22'/>
      <vcpupin vcpu='11' cpuset='23'/>
      Cinebench scores: 1081, 1104, 1121

      Paired HT threads on a single processor:
      <vcpupin vcpu='0' cpuset='6'/>
      <vcpupin vcpu='1' cpuset='7'/>
      <vcpupin vcpu='2' cpuset='8'/>
      <vcpupin vcpu='3' cpuset='9'/>
      <vcpupin vcpu='4' cpuset='10'/>
      <vcpupin vcpu='5' cpuset='11'/>
      <vcpupin vcpu='6' cpuset='18'/>
      <vcpupin vcpu='7' cpuset='19'/>
      <vcpupin vcpu='8' cpuset='20'/>
      <vcpupin vcpu='9' cpuset='21'/>
      <vcpupin vcpu='10' cpuset='22'/>
      <vcpupin vcpu='11' cpuset='23'/>
      Cinebench scores: 759, 757, 749

      Topology: in previous tests I've done with OS X, I didn't see any improvement or degradation from specifying the topology to the vm.

      Quite a bit of difference. The unRaid dashboard shows cpu usage at 100% on those cores when either test is run, but the result is about a 45% increase in cpu benchmark score when not using HT cores for vm assignments. I'm not saying the way I do it is right for your setup, but you could at least experiment and run your own tests. ALSO, isolate your vm cores from unRaid if you haven't already, and then set the emulator pin to one of the cores you left for unRaid. This makes sure that only the vm uses the cores assigned to it. Splashtop/TeamViewer may still use cpu to render even if you have a video card; I don't remember, someone else will have to chime in on that. But OS X screen share will use the gpu, sadly with no sound... There may be some other gains from adjusting Clover settings. I am about to start a new topic on that to compare notes with others and not clutter this one up so much.
  16. http://lime-technology.com/forum/index.php?topic=50528.0
  17. ok.... grasping at straws, but: in my logs the pcpu-alloc on a dual 6-core processor board looks like this:

      pcpu-alloc: s91480 r8192 d31400 u131072 alloc=1*2097152
      Jan 22 08:46:11 Brahms1 kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 08 09 10 11 12 13 14 15
      Jan 22 08:46:11 Brahms1 kernel: pcpu-alloc: [0] 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31
      Jan 22 08:46:11 Brahms1 kernel: Built 1 zonelists in Node order, mobility grouping on. Total pages: 18576862

      yours shows:

      Jan 21 14:33:28 Tower kernel: pcpu-alloc: s91480 r8192 d31400 u131072 alloc=1*2097152
      Jan 21 14:33:28 Tower kernel: pcpu-alloc: [0] 00 01 02 03 04 05 06 07 16 17 18 19 20 21 22 23
      Jan 21 14:33:28 Tower kernel: pcpu-alloc: [1] 08 09 10 11 12 13 14 15 24 25 26 27 28 29 30 31
      Jan 21 14:33:28 Tower kernel: Built 2 zonelists in Node order, mobility grouping on. Total pages: 8248817

      Is it a real problem? I don't know. But how is it managing the cpu cores you've isolated? Is it by the dashboard or by this set of numbers, and which are sharing a hyperthread? (Also, maybe mine is the one that is messed up; we'd need more info from others with dual processors to verify.)

      ----edit: to make it more interesting, here is my dual quad-core machine:

      Jan 12 17:54:45 Brahms3 kernel: PERCPU: Embedded 32 pages/cpu @ffff880533c00000 s91480 r8192 d31400 u131072
      Jan 12 17:54:45 Brahms3 kernel: pcpu-alloc: s91480 r8192 d31400 u131072 alloc=1*2097152
      Jan 12 17:54:45 Brahms3 kernel: pcpu-alloc: [0] 00 02 04 06 08 10 12 14 16 18 20 22 24 26 28 30
      Jan 12 17:54:45 Brahms3 kernel: pcpu-alloc: [1] 01 03 05 07 09 11 13 15 17 19 21 23 25 27 29 31
      Jan 12 17:54:45 Brahms3 kernel: Built 2 zonelists in Node order, mobility grouping on. Total pages: 10319326

      so maybe this has no bearing. In fact, just ignore this unless someone wants to come along and explain it to you and me both....

      ------------

      I would also try changing isolcpus=12-15,28-31 in your syslinux to actually typing out the numbers, like:

      isolcpus=4,5,6,7,8,9,10,11,12,13,14,15,16,17,18,19,20,21,22,23 initrd=/bzroot

      (that's mine as an example). I've had wonky things happen when I've used the shortened range form. I don't know why, but it shouldn't hurt. I think the suggestion was made earlier? If you try this, post a full copy of your syslinux.cfg please.

      Also, this error exists in your logs:

      Jan 21 14:33:28 Tower kernel: x86: Booting SMP configuration:
      Jan 21 14:33:28 Tower kernel: .... node #0, CPUs: #1 #2 #3 #4 #5 #6 #7
      Jan 21 14:33:28 Tower kernel: .... node #1, CPUs: #8
      Jan 21 14:33:28 Tower kernel: mce: [Hardware Error]: Machine check events logged
      Jan 21 14:33:28 Tower kernel: mce: [Hardware Error]: Machine check events logged
      Jan 21 14:33:28 Tower kernel: CMCI storm detected: switching to poll mode
      Jan 21 14:33:28 Tower kernel: #9 #10 #11 #12 #13 #14 #15
      Jan 21 14:33:28 Tower kernel: .... node #0, CPUs: #16 #17 #18 #19 #20 #21 #22 #23
      Jan 21 14:33:28 Tower kernel: .... node #1, CPUs: #24 #25 #26 #27 #28 #29 #30 #31
      Jan 21 14:33:28 Tower kernel: x86: Booted up 2 nodes, 32 CPUs

      Errors that probably need investigating. It lists some of the cores you're putting your Windows vm on. Coincidence? A quick test might be to flip your pinning around: put plex/dockers on these cores and run the vm on unaffected/not-listed ones. If you run a search on the board for mce: [Hardware Error] there are a couple of threads about this.

      You also have the following on every cpu:

      Jan 21 14:52:44 Tower root: ACPI group processor / action LNXCPU:00 is not defined
      (repeated for LNXCPU:01 through LNXCPU:0f and LNXCPU:1e through LNXCPU:2d)

      Might be just something relevant to your processor/board. More info here: https://lime-technology.com/forum/index.php?topic=53037.0
  18. I have a couple of 710 cards and a 730. I just feed them straight to OS X with no issues. I'll try to post ROMs soon.
  19. the topology would then be <topology sockets='1' cores='8' threads='1'/> You may know that, you may not. You never know who you're talking to on here!
  20. Try putting your vm on cores 8-15, and make sure the topology in your xml doesn't try to make them hyperthreaded pairs, but rather 8 plain cores. For some reason my dual processor system doesn't like HT pairs, and prefers cores in straight numbers instead. It's counter to the advice in the cpu pinning thread, but experimentation is often better across disparate hardware.
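      Roughly what I mean in the xml (a sketch only; adjust the cpuset numbers to your own core numbering, and keep whatever else your <cpu> block already has):

      <vcpu placement='static'>8</vcpu>
      <cputune>
        <vcpupin vcpu='0' cpuset='8'/>
        <vcpupin vcpu='1' cpuset='9'/>
        <vcpupin vcpu='2' cpuset='10'/>
        <vcpupin vcpu='3' cpuset='11'/>
        <vcpupin vcpu='4' cpuset='12'/>
        <vcpupin vcpu='5' cpuset='13'/>
        <vcpupin vcpu='6' cpuset='14'/>
        <vcpupin vcpu='7' cpuset='15'/>
      </cputune>
      <cpu mode='host-passthrough'>
        <topology sockets='1' cores='8' threads='1'/>
      </cpu>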
  21. -bump for sas support!- When my sas drives are xfs they spin down, but not when they are btrfs. Also, temp display? Also funny: disks show as spun down but are still usable....