Everything posted by brando56894

  1. Thanks, but I don't currently have unRAID installed; I'm using CentOS 7.4.
  2. What is the "default" command string for QEMU that is created after one creates a Windows 10 VM and passes through a GPU and its HDMI audio device? unRAID is the only platform where I could pass through my Nvidia GTX 1070 to a Windows 10 VM without getting Error 43 at all. I was anticipating a mess, but it was literally one of the easiest setups ever (create the VM, select the GPU and sound card from the drop-downs, and that's all it took)! I've tried to replicate this in Proxmox and oVirt, but the error occurs in both: no matter what tweaks I use, they both let Nvidia and Windows detect that the card is being virtualized. unRAID's VM management is great, I'm just not a fan of JBOD. I like ZFS, which I know there is a plugin for, in addition to JBOD, but it would be great if it were available as a supported replacement.
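For anyone comparing configs: the piece of unRAID's generated domain XML that usually matters for Error 43 is the Hyper-V `vendor_id` override (it appears in the full XML dump later in this thread); on stock libvirt setups it is commonly paired with the `kvm hidden` flag. A hedged sketch of the relevant fragment, not the exact Proxmox/oVirt syntax:

```xml
<!-- Hedged sketch: hides the hypervisor from the Nvidia guest driver. -->
<features>
  <hyperv>
    <vendor_id state='on' value='none'/>  <!-- present in unRAID's generated XML -->
  </hyperv>
  <kvm>
    <hidden state='on'/>  <!-- commonly added on other libvirt platforms -->
  </kvm>
</features>
```

With both set, the guest driver no longer sees a recognizable hypervisor signature, which is what typically triggers Error 43.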
  3. I have my Nvidia GeForce 1070 passed through to OpenELEC and the VM boots, but then it won't start X because it's unable to load the nvidia kernel module. When I try to load it manually with modprobe, it says "modprobe: FATAL: Module nvidia not found in directory /lib/modules/4.4.7", but it is there, under the nvidia directory (/lib/modules/4.4.7/nvidia/nvidia.ko). Why can't it find the module?
  4. Ah, that sucks :-/ You may be on your own with this one for now, because I just upgraded my CPU and motherboard, so I doubt this will happen again. Then again, the new hardware has only been on for less than 48 hours (Windows has only been up for about 8 hours), so who knows.
  5. No problem buddy, hope it helps. No responses on either of the Reddit threads, so this may be it.
  6. Most of the time over SSH, sometimes locally. This isn't specific to unRAID but happens on all flavors of Linux; I think I've experienced it in FreeBSD also.
  7. I've noticed multiple times that tail stops following the log after a few hundred lines and has to be restarted.
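A hedged aside on the tail behavior above: if the log in question is being rotated or truncated, plain tail -f keeps following the original file descriptor and simply goes quiet. GNU tail's -F flag re-opens the file by name, which matches the "stops following and has to be restarted" symptom; the path below is just an example:

```shell
# tail -f follows the open file descriptor, so after a rotation the old
# inode stops growing and the output goes silent. tail -F (shorthand for
# --follow=name --retry) re-opens the file by name and keeps following.
tail -F /var/log/syslog
```

If the logger truncates in place rather than rotating, GNU tail also prints "file truncated" and continues, so -F covers both cases.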
  8. These are pretty hard to track down. I've experienced them on my system while running both unRAID and FreeNAS, so it doesn't seem to be specific to Linux. I never really got an answer from anyone. All I know is that it's hardware related and may be caused by the hardware being overloaded. I think it has to do with a timer in the CPU being faulty, since I always see "skew is too large" or something about decreasing the timeout.
  9. After doing a little more research, it may be as simple as killing the qemu process that is hanging onto the device. It crashed for me last night, but I hadn't found this yet, so I haven't had a chance to test it. My hung device is /dev/vfio/25, and I don't know why I didn't think of this before, but lsof will show the process that is using the device, which in this case is qemu:

root@unRAID:~# lsof /dev/vfio/25
COMMAND   PID  USER FD  TYPE DEVICE SIZE/OFF NODE  NAME
qemu-syst 5388 root 24u CHR  251,0  0t0      97425 /dev/vfio/25

So if that process still exists after the VM crashes and is shut down, a simple kill -9 5388 should release the device and allow the VM to be restarted, since theoretically nothing will be using that device node. Give it a try the next time you experience a crash and let me know what happens. I posted a thread about this on Reddit since we're not getting any help here. I also found a similar thread there relating to this, but not specific to Windows VMs. (Now that I see there is a VFIO subreddit, I'm going to cross-post it for more visibility.)
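The lsof-then-kill steps above can be collapsed into one line; a hedged sketch (the device path is just the example from this thread):

```shell
# lsof -t prints only the PIDs holding the device node open, and
# xargs -r (GNU: no-run-if-empty) skips the kill when nothing is found.
lsof -t /dev/vfio/25 | xargs -r kill -9
```

This avoids copying the PID out of the table by hand, at the cost of killing every process holding the node, so check the lsof output first if anything besides the crashed qemu might have it open.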
  10. Nope, buying a new card won't help, this is a software issue with either Linux or qemu/libvirt.
  11. It was too confusing for me; I couldn't get any of the proxies to work, so I just stuck with my Arch VM that has Nginx running.
  12. Yep, apparently it will dynamically create reverse proxies for you, or you can set them up yourself. It appears that way, but I can't get Traefik to connect to the Docker socket, so it doesn't show anything. Edit: I forgot to map /var/run/docker.sock inside the container, so that's why it won't connect! D'oh! I'll give it another try when I get home and it should work as expected. Edit 2: Yep, just map /var/run/docker.sock to the same path inside the container and it works as expected.
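Putting the fix above together, the container setup looks roughly like this. A hedged sketch: the image tag, ports, and appdata path are assumptions, not taken from the original post:

```yaml
# Minimal docker-compose sketch for Traefik with the socket mapping applied.
version: '2'
services:
  traefik:
    image: traefik
    ports:
      - "80:80"       # proxied traffic
      - "8080:8080"   # management UI, as mapped later in this thread
    volumes:
      # The mapping that was missing: lets Traefik watch the Docker daemon.
      - /var/run/docker.sock:/var/run/docker.sock
      # Config directory mapping, matching the traefik.toml location below.
      - /mnt/user/appdata/traefik:/etc/traefik
```

Without the socket volume the container starts fine but the Docker provider sees nothing, which matches the symptom described.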
  13. I managed to get the management GUI to work, but I can't figure out how to get it to connect to the Docker daemon so that it will list and watch the containers. I had to drop the default config file in /mnt/user/appdata/traefik/traefik.toml and enable a few things. I added a port mapping for 8080:8080 and a folder mapping for /mnt/user/appdata/traefik:/etc/traefik/. Here's what I enabled in the default file (the full file is about 1,000 lines):

/etc/traefik/traefik.toml
debug = false
traefikLogsFile = "/etc/traefik/traefik.log"
logLevel = "INFO"

[web]
address = ":8080"

[docker]
endpoint = "unix:///var/run/docker.sock"
domain = "docker.localhost"
watch = true
exposedbydefault = true
  14. You have folders mapped incorrectly somewhere and data is being written to the Docker image instead of to your array. Try du -xh --max-depth=1 /var/lib/docker | sort -hr, which should tell you what folders are consuming the most space in your image. Mine looks like this; removing the -x flag lets du cross filesystem boundaries, which will also count the files in your Docker subvolumes:

root@unRAID:~# du -xh --max-depth=1 /var/lib/docker | sort -hr
17M   /var/lib/docker
12M   /var/lib/docker/image
2.4M  /var/lib/docker/unraid
2.4M  /var/lib/docker/containers
104K  /var/lib/docker/volumes
104K  /var/lib/docker/network
0     /var/lib/docker/trust
0     /var/lib/docker/tmp-old
0     /var/lib/docker/tmp
0     /var/lib/docker/swarm
0     /var/lib/docker/plugins
0     /var/lib/docker/btrfs

root@unRAID:~# du -h --max-depth=1 /var/lib/docker | sort -hr
22G   /var/lib/docker/btrfs
22G   /var/lib/docker
12M   /var/lib/docker/image
2.4M  /var/lib/docker/unraid
2.4M  /var/lib/docker/containers
104K  /var/lib/docker/volumes
104K  /var/lib/docker/network
0     /var/lib/docker/trust
0     /var/lib/docker/tmp-old
0     /var/lib/docker/tmp
0     /var/lib/docker/swarm
0     /var/lib/docker/plugins
  15. I had the same issues with Linux/EFI and just decided to go with SeaBIOS and an MBR GRUB install, since EFI isn't really needed for VMs.
  16. Samba is a notorious pain in the butt when going from Linux to Windows. How I usually solve it is by adding a user on the Samba server with the same name and password as my Windows user, or you can allow unprotected access to the shares. If you get an authentication box on Windows when accessing the share, try logging in as root.
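The two approaches above look roughly like this. A hedged sketch: the share name, path, and username are placeholders, not taken from the original post:

```ini
; Option 1 (matching credentials): create the same user on the Samba host,
; then give it a Samba password matching the Windows one:
;   useradd -M winuser
;   smbpasswd -a winuser
;
; Option 2 (unprotected access): mark the share guest-accessible in smb.conf:
[media]
   path = /mnt/user/media
   guest ok = yes     ; no credential prompt from Windows
   read only = no
```

Either way, reload Samba afterwards so the config change takes effect.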
  17. Looking at the config file (I also tried to set it up in Arch, but couldn't connect to the management UI even though it was serving on port 80), it looks like additional configuration needs to be done, so it's not as easy as some of the other containers. Bummer.
  18. Very interesting... this may replace Nginx for me. Edit: I just tried to set it up and I can't really seem to get it to work either. It starts, but I can't connect to it.
  19. Are you able to connect to it from any other host? It may be a routing issue.
  20. Server: SuperMicro X10SDV-F-0 w/ Xeon D-1540 (16x 2 GHz), 1.2 kW EVGA PSU, 2x 32 GB DDR4 ECC RAM
Pool: 5x HGST 4 TB HDDs
Cache: 1x 512 GB Samsung 840 Pro SATA SSD

<domain type='kvm' id='2'>
  <name>Windows 10</name>
  <uuid>f4914b40-ce13-7c85-09cf-1bbe740f2d41</uuid>
  <metadata>
    <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
  </metadata>
  <memory unit='KiB'>16777216</memory>
  <currentMemory unit='KiB'>16777216</currentMemory>
  <memoryBacking>
    <nosharepages/>
  </memoryBacking>
  <vcpu placement='static'>4</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
    <vcpupin vcpu='2' cpuset='2'/>
    <vcpupin vcpu='3' cpuset='3'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.9'>hvm</type>
    <loader readonly='yes' type='pflash'>/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd</loader>
    <nvram>/etc/libvirt/qemu/nvram/f4914b40-ce13-7c85-09cf-1bbe740f2d41_VARS-pure-efi.fd</nvram>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
      <vendor_id state='on' value='none'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough' check='none'>
    <topology sockets='1' cores='2' threads='2'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/local/sbin/qemu</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/domains/Windows 10/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </disk>
    <controller type='usb' index='0' model='nec-xhci'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x07' function='0x0'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:20:41:f6'/>
      <source bridge='br0'/>
      <target dev='vnet1'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/domain-2-Windows 10/org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='mouse' bus='ps2'>
      <alias name='input0'/>
    </input>
    <input type='keyboard' bus='ps2'>
      <alias name='input1'/>
    </input>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x0'/>
      </source>
      <alias name='hostdev0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='pci' managed='yes'>
      <driver name='vfio'/>
      <source>
        <address domain='0x0000' bus='0x04' slot='0x00' function='0x1'/>
      </source>
      <alias name='hostdev1'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc22a'/>
        <address bus='3' device='6'/>
      </source>
      <alias name='hostdev2'/>
      <address type='usb' bus='0' port='1'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc22b'/>
        <address bus='3' device='4'/>
      </source>
      <alias name='hostdev3'/>
      <address type='usb' bus='0' port='2'/>
    </hostdev>
    <hostdev mode='subsystem' type='usb' managed='no'>
      <source>
        <vendor id='0x046d'/>
        <product id='0xc52b'/>
        <address bus='3' device='8'/>
      </source>
      <alias name='hostdev4'/>
      <address type='usb' bus='0' port='3'/>
    </hostdev>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x08' function='0x0'/>
    </memballoon>
  </devices>
  <seclabel type='none' model='none'/>
  <seclabel type='dynamic' model='dac' relabel='yes'>
    <label>+0:+100</label>
    <imagelabel>+0:+100</imagelabel>
  </seclabel>
</domain>

The BMC has a built-in ASPEED 2400 GPU, which isn't used for anything other than IPMI/console access, so I have the GTX passed through to the Windows VM, which is my HTPC.
  21. If there isn't one made for unRAID, there definitely is a regular Docker container available.
  22. My VMs crash when they're under load because my CPU sucks; I'm dealing with it until my new motherboard comes in a few weeks. Whenever my Windows 10 VM crashes, it locks up the Nvidia GTX 1070 that I have passed through to it and won't let me boot the VM back up, citing this error:

root@unRAID:~# virsh start Windows\ 10
error: Failed to start domain Windows 10
error: internal error: process exited while connecting to monitor: 2017-08-19T09:14:20.728766Z qemu-system-x86_64: -chardev pty,id=charserial0: char device redirected to /dev/pts/1 (label charserial0)
2017-08-19T09:14:20.810864Z qemu-system-x86_64: -device vfio-pci,host=04:00.0,id=hostdev0,bus=pci.0,addr=0x5: vfio error: 0000:04:00.0: failed to open /dev/vfio/25: Device or resource busy

If I choose VNC as my video output, it starts fine. A reboot of unRAID also fixes the issue, but I would rather not have to reboot my server whenever the VM crashes, since everything else works well. I found this string of commands relating to the same thing over on the Red Hat forums, but the last one won't work for me and just fails with "-bash: echo: write error: No such device":

"That was exactly what was going wrong. efifb had attached to some of the nvidia device's memory. Since efifb can't be compiled as a module, and I'd rather not turn it off, here's what I did:

echo 0 > /sys/class/vtconsole/vtcon0/bind
echo 0 > /sys/class/vtconsole/vtcon1/bind
echo efi-framebuffer.0 > /sys/bus/platform/drivers/efi-framebuffer/unbind

This completely solves the problem and all is well doing passthrough on my Skylake system. Hopefully now that there's a solution with the right magic words in it on the Internet, others will find their answer here. Thanks again!"
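A hedged note on the efifb workaround quoted above: instead of unbinding the framebuffer at runtime, the kernel can be told never to load efifb at all via the `video=efifb:off` boot parameter. On unRAID that means editing the append line in the boot config; the exact line below is a sketch, since the rest of the append contents vary per system:

```
# /boot/syslinux/syslinux.cfg (unRAID) -- illustrative append line only.
# video=efifb:off keeps efifb from ever claiming the GPU's memory, so the
# vtconsole/efi-framebuffer unbind dance is no longer needed after crashes.
append video=efifb:off initrd=/bzroot
```

The trade-off is losing the local EFI console output, which matters less on a board like this one with IPMI access through the BMC.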
  23. As the other guys said, there's no reason to cache torrents, and I have to seed mine; I currently have 2 TB of seeding torrents. So Usenet will be cache-only and torrents will be no-cache, since they will be written directly to downloads and then copied to either movies or shows.
  24. All I meant was that it's faster to transfer data locally on the SSD than it is to go from SSD to HDD. I understand what you're saying, and I guess I'll give it a try; I just have to reconfigure my shares, since downloads is one share and I would have to split it into usenet and torrents.