Everything posted by 1812

  1. I'm on 14,2. When you're changing core assignments, are you also changing the vcpu static placement to match the number of cores assigned?
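     For context, a minimal sketch of what "matching" looks like in the VM's XML; the core numbers below are placeholders, not a recommendation. The idea is that the count in <vcpu placement='static'> matches the number of <vcpupin> entries:

         <vcpu placement='static'>4</vcpu>
         <cputune>
           <vcpupin vcpu='0' cpuset='8'/>
           <vcpupin vcpu='1' cpuset='9'/>
           <vcpupin vcpu='2' cpuset='10'/>
           <vcpupin vcpu='3' cpuset='11'/>
         </cputune>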
  2. Then you should ask the question if you would like an answer.
  3. Remove your topology and you can use anything you want, up to 64 cores.
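     In XML terms, the line in question is the <topology> element inside <cpu>; a sketch with placeholder counts:

         <cpu mode='host-passthrough'>
           <topology sockets='1' cores='4' threads='2'/>  <!-- remove this line to drop the fixed topology -->
         </cpu>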
  4. It's that simple: do that and move the Unraid USB. For example, just three days ago I removed 8 drives from an external enclosure, put 4 into another server and 4 into a different external enclosure, plugged in the Unraid USB that was running them from the other server, and it cranked right up with no issues. For safety, though, screenshot your Main page showing which drive(s) are parity and where the disks are assigned. That way, if there is a problem, you can re-assign them properly and retain parity.
  5. https://h20195.www2.hpe.com/v2/GetPDF.aspx/c04284193.pdf I would also Google around and see what others have put in as max procs. This includes looking at eBay ads for complete working servers of the same model.
  6. You can try following this (some of which you have already done), as gridrunner talks about breaking up the bridge.
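     As a rough sketch of the end state (the bridge name 'br1' here is hypothetical; use whatever bridge you end up creating), the VM's NIC stanza would point at the dedicated bridge instead of the shared one:

         <interface type='bridge'>
           <source bridge='br1'/>  <!-- 'br1' is a hypothetical bridge name -->
           <model type='virtio'/>
         </interface>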
  7. That's what I get for being on the forums with only 2 hours of sleep...! Take the VM down to basics, just 4-8 cores and 1 emulator pin, and see if the problem can be duplicated. If so, post your diagnostics zip file and I'll see if anything jumps out. You might also compare C-states settings between the 2 servers; I haven't fiddled with those much. As far as a BIOS reset, maybe, but I haven't played with G8s that much, so I don't know their settings as closely as the G7/G6s.
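     A sketch of what "down to basics" might look like in the XML; the specific cores chosen here are arbitrary:

         <vcpu placement='static'>4</vcpu>
         <cputune>
           <vcpupin vcpu='0' cpuset='1'/>
           <vcpupin vcpu='1' cpuset='2'/>
           <vcpupin vcpu='2' cpuset='3'/>
           <vcpupin vcpu='3' cpuset='4'/>
           <emulatorpin cpuset='0'/>  <!-- the single emulator pin -->
         </cputune>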
  8. I have an HP Z400 with no onboard graphics and 2 GPUs. One PCIe GPU is what it boots Unraid with. Then, when it autostarts the VM, the screen goes black until the OS loads. I don't know if it's normal, but it does happen on my system and does not affect usability. The other GPU acts as one would expect.
  9. Not really any hard and fast rules. I've run all sorts of combinations and it's all pretty close. An emulator pin can help with some minor latency and audio hiccups. Many think that the more CPUs you add, the more emulator pins you should have; I just watch, and if the pin(s) I'm using are maxing out, I add more. It seems really dependent on the type of workload. I don't know if Unraid allocates the RAM with a "take all from one side (CPU) first" or a "take from all sides (CPUs) equally" allocation method. I've never run into any memory speed issues, though, using 2-4 procs with RAM on 1-4 processors, so I've never looked into it. This isn't solid info, just my experience. The equipment I run was used for running way more VMs than I do, and it managed that fine, so I imagine that running a single VM doesn't stress it in terms of RAM access. But I wouldn't mind being proven wrong and being shown better optimization that actually makes more than a 1% difference.

     Your emulator pin assignments of 2, 24, 46, 68 in the XML are boxed/identified as such in your Unraid grouping, with the emulator pin boxes 3, 25, 47, 69 having nothing assigned to them. So that's why you are seeing the activity you described.
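     In the XML, that grouping corresponds to a single emulatorpin entry listing those threads; a sketch based on the numbers mentioned above, with the vcpupin lines omitted:

         <cputune>
           <!-- vcpupin entries omitted -->
           <emulatorpin cpuset='2,24,46,68'/>
         </cputune>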
  10. https://lime-technology.com/forums/topic/47345-performance-improvements-in-vms-by-adjusting-cpu-pinning-and-assignment/ Scroll down to the section about emulator pinning: "The 'emulatorpin' entry puts the emulator tasks on other CPUs and off the VM CPUs."
  11. I run a 64-core OS X VM for video editing. It seems to work fine, minus a little performance loss due to virtualization. I end up using the remaining unused cores as emulator pins, which never max out. I've run as few as 2-3 emulator pins with a single 64-core VM and not hit any limits in that regard either. If you're going to run high core counts in a VM, you need to be on the 6.5.3-rc series, as they updated a setting to allow MUCH faster booting vs. previous high-core-count VMs.
  12. Apple's implementation of vmxnet3 is less than desirable; I was only able to get it up to about 150-200 MB/s about a year ago. AFAIK there is no working OS X virtual NIC capable of 10GbE. My tests are here (from before I basically stopped using the virtual NIC): https://lime-technology.com/forums/topic/54641-increase-os-x-networking-performance-by-80-or-your-money-back/
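     For anyone wanting to repeat the test, the virtual NIC model is set in the interface stanza of the VM's XML; a sketch (the bridge name is whatever your setup uses, and 'e1000-82545em' is the usual OS X-friendly model to compare against):

         <interface type='bridge'>
           <source bridge='br0'/>
           <model type='vmxnet3'/>  <!-- swap for 'e1000-82545em' to compare -->
         </interface>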
  13. Then see https://lime-technology.com/forums/topic/47345-performance-improvements-in-vms-by-adjusting-cpu-pinning-and-assignment/
  14. If I were you, I'd investigate the call traces in your logs....
  15. Plug in a monitor and go to the IP address it shows after booting.
  16. Very perplexing. I was just reading about some issues with SATA disk recognition on this gen/model, and some were resolved by a firmware update... have you looked into that? I'll keep digging, but it's very weird. Once the "damage is done," can you switch the onboard RAID controller back on and see if it can see all the disks in the array creation menu? (Obviously don't create one, just see if they show up.)
  17. The fix for this log spam is on the first page.
  18. OK. Replicate the problem again, making the drives disappear. Then, immediately after they are gone, grab the diagnostics zip (Tools > Diagnostics) and post that complete zip file. That might help us figure out what is going on, and maybe why.
  19. What HBA are you using, and is the onboard RAID controller still enabled? Or are you using onboard RAID, and if so, what model?
  20. You might have them disabled; the link is: