real-time (Members, 2 posts)
  1. Is it possible to have multiple users with this Docker container? MineOS itself seems to be set up for this, but I can't figure out how it works here. I'm guessing it uses system users, and I assume that adding system users inside the container wouldn't be persistent.
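     That assumption is usually right: anything written to a container's own filesystem, including the /etc/passwd entries that useradd creates, lives in the container's writable layer and is discarded when the container is recreated. A minimal sketch of the effect, assuming the image ships useradd and noting that the image name and paths here are illustrative placeholders, not the actual MineOS container's documented interface:

       # Users created inside a running container exist only in that
       # container's writable layer:
       docker exec mineos useradd -m newuser       # works for now...
       docker rm -f mineos
       docker run -d --name mineos \
         -v /mnt/user/appdata/mineos:/var/games/minecraft \
         example/mineos                            # ...but 'newuser' is gone

     Only paths covered by a -v bind mount (here, the hypothetical server-data directory) survive that remove-and-recreate cycle, so persistent users would need to be recreated at container start or kept on a mounted path.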
  2. Hi all,

     This is my first post on this forum, as I'm new to unRAID (a recent convert from ESXi). I've got my new system up and running with unRAID 6.2 (latest RC) and have been experiencing stability problems with both unRAID and the VMs. Here is my hardware config:

       Motherboard: SuperMicro X8DTH-iF (latest BIOS)
       CPU: Xeon X5687 (dual)
       Memory: 48GB Hynix DDR3 1333MHz ECC (4GB sticks)
       GPU1: Radeon 6870
       GPU2: Nvidia GTX670 (Asus)
       PSU: 850W EVGA (brand new)
       Hard drives: 3x 1TB Western Digital Blue 7200RPM

     Both VMs run Windows 10. One is assigned the Radeon and the other the GTX970. The audio devices are also passed through from the GPUs, and each VM has a USB 3.0 card passed through. Both use Q35 2.5 on SeaBIOS; OVMF does not seem to work (no output). (Do I need a UEFI system to emulate UEFI?)

     I've ruled out problems with the HDDs/CPUs/memory by swapping in other components: the CPUs and the memory were pulled from a working system, and the GPUs were working the last time they were used. (Note that I'll just be walking through the AMD-card VM here, but I get similar errors for the VM with the Nvidia card.)

     The first problem I ran into was this:

       internal error: process exited while connecting to monitor:
       2016-08-29T02:55:46.854315Z qemu-system-x86_64: -device vfio-pci,host=85:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x4: vfio: failed to set iommu for container: Operation not permitted
       2016-08-29T02:55:46.854357Z qemu-system-x86_64: -device vfio-pci,host=85:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x4: vfio: failed to setup container for group 26
       2016-08-29T02:55:46.854369Z qemu-system-x86_64: -device vfio-pci,host=85:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x4: vfio: failed to get group 26
       2016-08-29T02:55:46.854388Z qemu-system-x86_64: -device vfio-pci,host=85:00.0,id=hostdev0,x-vga=on,bus=pci.2,addr=0x4: Device initialization failed

     Checking the IOMMU groups, the only other device in group 26 is that card's associated audio device:

       85:00.0 VGA compatible controller: Advanced Micro Devices, Inc. [AMD/ATI] Barts XT [Radeon HD 6870]
       85:00.1 Audio device: Advanced Micro Devices, Inc. [AMD/ATI] Barts HDMI Audio [Radeon HD 6800 Series]
       /sys/kernel/iommu_groups/26/devices/0000:85:00.0
       /sys/kernel/iommu_groups/26/devices/0000:85:00.1

     To resolve the issue, I tried the ACS Override, which did not help, so I disabled it again. What did help was adding "vfio_iommu_type1.allow_unsafe_interrupts=1" to syslinux.cfg. The VMs could then be booted and worked fine.

     The problem: a few hours later, I noticed that the Windows 10 VMs would hang, with 100% CPU usage on one of the VM's cores. This usually happens after a couple of minutes of doing nothing on the VM, with all power options in Windows 10 set to "Never". It happens on both VMs, and there is nothing useful in any of the logs.

     Troubleshooting: I had initially suspected the Nvidia driver, but just ruled it out by replicating the behavior on my AMD VM; the hang happens no matter which VM is booted, or if both are booted. I removed the GPU passthrough from one VM and left it running for 48 hours with no problems, even while I was experimenting with the other one. I also altered the CPU pinning after I discovered this in the logs:

       [ 0.199331] .... node #0, CPUs: #1 #2 #3
       [ 0.215326] .... node #1, CPUs: #4 #5 #6 #7
       [ 0.314313] .... node #0, CPUs: #8 #9 #10 #11
       [ 0.334317] .... node #1, CPUs: #12 #13 #14 #15

     I assume that this is accurate? Maybe not.
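     One way to sanity-check that node mapping is to read the topology the kernel itself exposes. A minimal sketch using standard Linux tools (lscpu and sysfs, both available from the unRAID console); the example outputs in the comments are assumptions matching the boot log above, not captured from this machine:

       # Print the NUMA node summary, including the CPU list per node.
       lscpu | grep -i numa

       # The same information straight from sysfs:
       cat /sys/devices/system/node/node0/cpulist   # e.g. 0-3,8-11
       cat /sys/devices/system/node/node1/cpulist   # e.g. 4-7,12-15

     If the pinning keeps each VM's vCPUs inside one of those lists, the guest stays on a single socket and avoids cross-node memory traffic.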
     I installed Linux Mint 17.3 Cinnamon on a VM with the Nvidia GPU passed through to it and encountered no problems other than stuttering audio, which I didn't try to fix. That machine wasn't run for very long, so it may still hang if left alone longer; I'll leave it on overnight tonight and update this thread. As soon as I post this, I'll also be running each VM with a single CPU to see whether the hang happens then.

     As part of my troubleshooting, I ran the Heaven benchmark and a CPU burn-in test on both VMs at the same time to rule out overheating or voltage drops from the PSU. I was able to keep the VMs running all the way to the point where one of the GPUs overheated and set off the motherboard's thermal alarm. Completely pegged, the system draws 720W, which is well within what my PSU can handle. The north bridge was a little hot, but otherwise everything was fine and the system did not crash: I was able to shut down the benchmarks and bring everything back down to safe temperatures.

     Can anyone offer me advice? Is my old socket-1366 motherboard too old? I've noticed another user on here with a very similar motherboard (an X8DTH-6F, which is the same as mine but with a SAS controller instead of a SATA controller) who had almost identical problems, and his thread was never resolved. Is anyone running a dual-Westmere setup with GPU passthrough successfully? Should I upgrade to a dual Sandy Bridge setup?

     I feel like my problem may have to do with unsafe interrupts. Does this affect guest stability? Thanks for reading this wall of text. I'm willing to try anything at this point, but if I don't get this resolved by the end of the week, I'm going to return the motherboard and upgrade a couple of generations.
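     On the unsafe-interrupts question: that module parameter is needed precisely when the platform lacks (or has disabled) IOMMU interrupt remapping, and the kernel says so at boot and at vfio setup time. A quick way to confirm from the kernel log; these are stock DMAR/vfio messages, though the exact wording can vary between kernel versions:

       # Was interrupt remapping enabled at boot? (Intel DMAR messages)
       dmesg | grep -i -e dmar -e "interrupt remapping"

       # vfio logs a "No interrupt remapping support" message suggesting
       # allow_unsafe_interrupts when it would otherwise refuse the device:
       dmesg | grep -i allow_unsafe_interrupts

       # The syslinux.cfg change mentioned above goes on the append line, e.g.:
       #   append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot

     As far as the upstream vfio documentation goes, the override is described as a security trade-off (a guest could potentially spoof interrupts on the host) rather than a known cause of guest hangs, though the missing remapping does point at the platform's age.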