brando56894

Members
  • Posts: 115
  • Joined
  • Last visited

About brando56894

  • Birthday: 10/17/1985

Converted

  • Gender: Male
  • Location: NJ
  • Personal Text: Unix/Linux SysAdmin

brando56894's Achievements

Apprentice (3/14)

Reputation: 5

  1. Thanks, but I don't currently have unRAID installed; I'm using CentOS 7.4.
  2. What is the "default" command string for Qemu that is generated after one creates a Windows 10 VM and passes through a GPU and its HDMI sound card? unRAID is the only platform where I could pass my Nvidia GTX 1070 through to a Windows 10 VM without getting Error 43 at all. I was anticipating a mess, but it was literally one of the easiest setups ever (create the VM, select the GPU and sound card from the drop-downs, and that's all it took)! I've tried to replicate this in Proxmox and oVirt, but Error 43 shows up in both: no matter what tweaks I use, they both let Nvidia and Windows detect that the machine is virtualized (see the sketch after this list). unRAID's VM management is great, I'm just not a fan of JBOD. I like ZFS, and I know there is a plugin for it that can be used alongside JBOD, but it would be great if it were available as a supported replacement.
  3. I have my Nvidia GeForce 1070 passed through to OpenELEC and the VM boots, but then it won't start X because it's unable to load the nvidia kernel module. When I try to load it manually with modprobe it says "modprobe: FATAL: Module nvidia not found in directory /lib/modules/4.4.7", but it is there, under the nvidia directory (/lib/modules/4.4.7/nvidia/nvidia.ko). Why can't it find the module? (See the sketch after this list.)
  4. Ah, that sucks :-/ You may be on your own with this one for now: I just upgraded my CPU and motherboard, so I doubt this will happen again. Then again, the new build has only been on for less than 48 hours (Windows has only been up for about 8 hours), so who knows.
  5. No problem buddy, hope it helps. No responses on either of the Reddit threads, so this may be it.
  6. Most of the time over SSH, sometimes locally. This isn't specific to unRAID; it happens on all flavors of Linux, and I think I've experienced it in FreeBSD as well.
  7. I've noticed multiple times that tail stops following the log after a few hundred lines and has to be restarted (see the sketch after this list).
  8. These are pretty hard to track down. I've experienced them on my system while running both unRAID and FreeNAS, so it doesn't seem to be specific to Linux, and I never really got an answer from anyone. All I know is that it's hardware related and may be triggered by the hardware being overloaded. I think it has to do with the timer in the CPU being faulty, since I always see "skew is too large" or something about decreasing the timeout (see the sketch after this list).
  9. After doing a little more research, it may be as simple as killing the qemu process that is hanging onto the device. It crashed for me last night, but I hadn't found this yet, so I haven't had a chance to test it. My hung device is /dev/vfio/25, and IDK why I didn't think of this before, but lsof will show the process that is using the device, which in this case is qemu:

        root@unRAID:~# lsof /dev/vfio/25
        COMMAND   PID  USER FD  TYPE DEVICE SIZE/OFF NODE  NAME
        qemu-syst 5388 root 24u CHR  251,0  0t0      97425 /dev/vfio/25

     So if that process still exists after the VM crashes and is shut down, a simple kill -9 5388 should release the device and allow the VM to be restarted, since theoretically nothing will be using that device node (see the one-liner sketch after this list). Give it a try the next time you experience a crash and let me know what happens. I posted a thread about this on Reddit since we're not getting any help here. I also found a similar thread there relating to this, but it's not Windows VM specific: https://www.reddit.com/r/VFIO/comments/44f1oc/primary_gpu_hotplug/ (now that I see there is a VFIO subreddit I'm gonna cross-post it for more visibility)
  10. Nope, buying a new card won't help; this is a software issue with either Linux or qemu/libvirt.
  11. It was too confusing for me; I couldn't get any of the proxies to work, so I just stuck with my Arch VM that has Nginx running.
  12. Yep, apparently it will dynamically create reverse proxies for you, or you can set them up yourself. It appears that way, but I can't get Traefik to connect to the Docker socket, so it doesn't show anything. Edit: I forgot to map /var/run/docker.sock inside the container, so that's why it won't connect! D'oh! I'll give it another try when I get home and it should work as expected. Edit 2: yep, just map /var/run/docker.sock to the same path inside the container and it works as expected (see the sketch after this list).
  13. I managed to get the management GUI to work, but I can't figure out how to get it to connect to the Docker daemon so that it will list and watch the containers. I had to drop the default config file in /mnt/user/appdata/traefik/traefik.toml and enable a few things. I added a port mapping for 8080:8080 and a folder mapping for /mnt/usr/share/appdata/traefik:/etc/traefik/. Here's the default one, which is about 1000 lines: https://raw.githubusercontent.com/containous/traefik/master/traefik.sample.toml And here's my /etc/traefik/traefik.toml:

        debug = false
        traefikLogsFile = "/etc/traefik/traefik.log"
        logLevel = "INFO"

        [web]
        address = ":8080"

        [docker]
        endpoint = "unix:///var/run/docker.sock"
        domain = "docker.localhost"
        watch = true
        exposedbydefault = true
  14. You have folders mapped incorrectly somewhere and data is being written to the docker image instead of to your array. You can try du -xh --max-depth=1 /var/lib/docker|sort -hr and that should tell you which folders are consuming the most space in your image. Mine looks like this; removing the -x flag lets du cross filesystem boundaries, so it will also show the files in your Docker subvolumes (see the sketch after this list for another way to narrow down the offending container):

        root@unRAID:~# du -xh --max-depth=1 /var/lib/docker|sort -hr
        17M     /var/lib/docker
        12M     /var/lib/docker/image
        2.4M    /var/lib/docker/unraid
        2.4M    /var/lib/docker/containers
        104K    /var/lib/docker/volumes
        104K    /var/lib/docker/network
        0       /var/lib/docker/trust
        0       /var/lib/docker/tmp-old
        0       /var/lib/docker/tmp
        0       /var/lib/docker/swarm
        0       /var/lib/docker/plugins
        0       /var/lib/docker/btrfs

        root@unRAID:~# du -h --max-depth=1 /var/lib/docker|sort -hr
        22G     /var/lib/docker/btrfs
        22G     /var/lib/docker
        12M     /var/lib/docker/image
        2.4M    /var/lib/docker/unraid
        2.4M    /var/lib/docker/containers
        104K    /var/lib/docker/volumes
        104K    /var/lib/docker/network
        0       /var/lib/docker/trust
        0       /var/lib/docker/tmp-old
        0       /var/lib/docker/tmp
        0       /var/lib/docker/swarm
        0       /var/lib/docker/plugins
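
A sketch for post 2's Error 43 question: the usual way qemu hides the hypervisor from the Nvidia driver is kvm=off plus a spoofed Hyper-V vendor ID on the -cpu line. The invocation below is only an illustration, not unRAID's generated command; the PCI addresses, memory/CPU counts, and disk path are placeholders.

    # Hypothetical qemu invocation (placeholder addresses, RAM, and disk path);
    # the -cpu flags are the part that matters: kvm=off hides the KVM signature
    # and hv_vendor_id replaces the Hyper-V vendor string, so the Nvidia driver
    # does not detect the VM (avoids Error 43).
    # 01:00.0 = GPU, 01:00.1 = its HDMI audio function (example addresses).
    qemu-system-x86_64 \
      -machine q35,accel=kvm \
      -cpu host,kvm=off,hv_vendor_id=null,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff \
      -m 8192 -smp 4 \
      -device vfio-pci,host=01:00.0,multifunction=on \
      -device vfio-pci,host=01:00.1 \
      -drive file=/path/to/win10.img,format=raw,if=virtio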
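
For post 3: modprobe only loads modules listed in /lib/modules/<kernel>/modules.dep, so a nvidia.ko that was dropped into the tree by hand stays invisible until the dependency map is regenerated. A sketch using the paths from the post:

    # Rebuild modules.dep for kernel 4.4.7 so modprobe can find the module
    depmod -a 4.4.7
    modprobe nvidia

    # Or load the file directly by path, bypassing modules.dep entirely
    insmod /lib/modules/4.4.7/nvidia/nvidia.ko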
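
On posts 6-7 (tail stops following): if the log gets rotated, tail -f keeps the old file descriptor and goes silent. tail -F follows the file by name and reopens it after rotation; /var/log/syslog below is just an example path.

    # --follow=name --retry: reopen the log if it is rotated or recreated
    tail -F /var/log/syslog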
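
For the clock-skew messages in post 8, it can help to see which clocksource the kernel is using and whether it has marked the TSC unstable; a quick check (standard Linux sysfs and dmesg, nothing unRAID specific):

    # Current clocksource in use (e.g. tsc, hpet, acpi_pm)
    cat /sys/devices/system/clocksource/clocksource0/current_clocksource
    # Kernel messages about unstable clocksources / skew
    dmesg | grep -iE 'clocksource|tsc'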
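
The lsof-then-kill steps from post 9 can be collapsed into a single line; a sketch assuming the stuck node is /dev/vfio/25 as in the post:

    # lsof -t prints only the PID(s), which feed straight into kill
    kill -9 $(lsof -t /dev/vfio/25)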
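
For the Traefik setup in posts 12-13, the container needs the Docker socket and the config directory mapped in, plus the web UI port. A minimal docker run sketch; the image tag and the host-side paths/ports are assumptions based on the posts, not unRAID's Docker template:

    # Map the Docker socket so the [docker] provider can watch containers,
    # and the config directory so /etc/traefik/traefik.toml is picked up.
    docker run -d --name traefik \
      -p 80:80 -p 8080:8080 \
      -v /var/run/docker.sock:/var/run/docker.sock \
      -v /mnt/user/appdata/traefik:/etc/traefik \
      traefik

With the socket mapped, the dashboard on :8080 should start listing containers as described in post 12.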
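
To complement the du output in post 14, Docker can report each container's writable-layer size directly, which usually points at the container that is filling the image; a sketch (both are standard Docker CLI commands, docker system df needs Docker 1.13 or newer):

    # Show every container with the size of its writable layer
    docker ps -a --size
    # Summary of space used by images, containers, and volumes
    docker system df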