ncandy

Everything posted by ncandy

  1. Sounds like you may have selected virbr0 instead of br0 as the Network Bridge for the VM. If you want your guest VMs on the same network as the host, change the bridge to br0. If you want a guest VM isolated from the host network, leave it on virbr0 and let KVM/QEMU handle NAT.
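     For reference, this is roughly what the bridge selection looks like in the VM's XML view (the MAC address below is a placeholder):

         <interface type='bridge'>
           <mac address='52:54:00:xx:xx:xx'/>
           <source bridge='br0'/>
           <model type='virtio'/>
         </interface>

     With virbr0 as the source bridge instead, the guest sits behind libvirt's NAT network.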
  2. On the docker settings page, toggle to Advanced View in the upper right. Do you have --runtime=nvidia set in Extra Parameters?
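     If it isn't there, add it. In plain docker terms, the template is roughly building the equivalent of the following command (the image name and GPU UUID are placeholders; Unraid assembles the real command for you):

         docker run --runtime=nvidia -e NVIDIA_VISIBLE_DEVICES=<gpu-uuid> <image>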
  3. I have my server in the basement about 100 ft. away from my office, and I've been using a 4kex70-h2 and a u2ex50 for a few years now for a Win10 VM. I use 2 cat5e cables terminated to keystone jacks in my office and a patch panel in the basement, then regular patch cables to connect the transmitters and receivers. I have the 4kex70-h2 transmitter connected via a 6 ft. HDMI cable to a 3060 (recently upgraded from a 1050) at a resolution of 3440x1440 @60Hz. The receiving unit does get quite hot, as many of the reviewers have noted. I plan to run a new cat6 or cat7 cable soon for the 4kex70-h2, since I occasionally lose the signal, which I attribute to the length of the run on cat5e. If your run is shorter, you can save some $$ with the 4kex60-h2.
  4. I had similar issues with Win10 after upgrading to 6.9.2. For me, disabling Hyper-V in Windows did the trick. Might be worth a shot.
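     If you'd rather do it from an elevated command prompt than the Windows Features dialog, this should be the quickest toggle (reboot required; run "bcdedit /set hypervisorlaunchtype auto" to undo):

         bcdedit /set hypervisorlaunchtype off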
  5. Perhaps due to SSD 1 MiB Partition Alignment?
  6. I see you're using a Samsung 860 EVO SSD for your cache pool. Perhaps you're suffering from the excessive-write issue caused by the first partition offset not being aligned to the 1MiB boundary (see the pre-release bug report for more info). Since you're storing the disk image for your VM on the cache pool, it may help to follow the instructions in the bug report to repartition the SSD so the first partition aligns properly.
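     You can check the current alignment from the Unraid console before repartitioning; on a 512-byte-sector drive, a start sector of 2048 for partition 1 means it sits on the 1MiB boundary (replace sdX with your cache device):

         fdisk -l /dev/sdX
         parted /dev/sdX align-check optimal 1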
  7. So it seems that nested virtualization was causing problems in my case. I re-enabled WSL but left Hyper-V and Windows Hypervisor Platform off, and everything is running normally. Interestingly, turning off Hyper-V in the Unraid VM settings did nothing to resolve the issue; I needed to disable the virtualization features inside Windows itself.
  8. I spent the weekend trying to diagnose the problems and I may have found something. These are the things I've tried with mixed results:
     • Moved everything off my cache pool (2 Samsung EVO SSDs), repartitioned to 1MiB alignment, moved everything back (my vdisks are on the cache pool)
     • Updated virtio Windows drivers to latest (0.1.190-1)
     • Updated VM Machine to i440fx-5.1 (was i440fx-4.2)
     • Changed VM network model to virtio-net (was virtio)
     • Reduced RAM to 16384 MB (was 32768 MB)
     • Moved Windows 10 VM vdisk to a separate SSD in unassigned devices
     • Changed VM to use /dev/urandom for RNG
     I have two Windows 10 VMs and only one is having problems. The last change I made this morning looks to have resolved the issue for me. In the Windows 10 VM that is slow, I went into Windows Features and disabled Hyper-V, Windows Hypervisor Platform and Windows Subsystem for Linux. After the required reboot, I no longer saw all cores hitting 100% with 50-80% from system processes. I also saw a dramatic change in the interrupt processing on the cores dedicated to that VM from Unraid's perspective. My next step is to turn Windows Subsystem for Linux back on since I do need it for this VM. Hope this info helps some of you experiencing slowness.
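     For anyone who would rather script it than click through Windows Features, these should be the equivalent commands from an elevated PowerShell prompt (feature names can be confirmed with Get-WindowsOptionalFeature -Online):

         Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Hyper-V-All
         Disable-WindowsOptionalFeature -Online -FeatureName HypervisorPlatform
         Disable-WindowsOptionalFeature -Online -FeatureName Microsoft-Windows-Subsystem-Linux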
  9. I might be experiencing the same symptoms. I upgraded from 6.8.3 to 6.9.0 and now to 6.9.1, and my Win10 VMs seem to be running much slower. I see long periods of high CPU utilization on a fresh boot with no applications started. Out of curiosity, do you store your VM disk images on an SSD cache pool? I just moved all of my VM disk images off the cache and onto the array, and so far, it seems to help. Going to do some more testing, but I was planning to move everything off cache and repartition the SSDs to 1MiB alignment anyway. Once aligned, I'll move the VM disk images back to cache to see if it helps.
  10. I'm running a Windows 10 VM passing through a GPU, a PCIe USB controller and an SSD. My server is in the basement and my keyboard, mouse and monitor are on the second floor on the opposite side of my house (probably about 100 ft. away), with Cat5e cabling between them. This is what I'm using:
      • HDMI 4K60Hz HDBaseT Extender: https://smile.amazon.com/gp/product/B073QL6YT3/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
      • USB Extender: https://smile.amazon.com/gp/product/B01EV33R8S/ref=ppx_yo_dt_b_search_asin_title?ie=UTF8&psc=1
      • USB sound card: https://smile.amazon.com/gp/product/B06XKC2DNQ/ref=ppx_yo_dt_b_asin_title_o00_s00?ie=UTF8&psc=1
      If you only need 1080p video, I've used https://smile.amazon.com/gp/product/B07H58829J/ref=ppx_yo_dt_b_asin_title_o00_s01?ie=UTF8&psc=1 successfully.
  11. I think the problem was the addition of StorageID to hdhomerun.conf. It needs to match the ID of the existing record engine. In my setup, the record engine adds the StorageID automatically to /etc/hdhomerun.conf since it's a writable file. Ideally, hdhomerun.conf should be moved to an external directory (e.g., /hdhomerun/hdhomerun.conf) and added as an argument to hdhomerun_record_x64 (e.g., /opt/hdhomerun/hdhomerun_record_x64 --conf=/hdhomerun/hdhomerun.conf).
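      For illustration only, a minimal hdhomerun.conf might look something like this (the path and port are made-up examples; the StorageID value is whatever the record engine generated for itself):

          RecordPath=/dvr/recordings
          Port=59090
          StorageID=<id-generated-by-record-engine>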
  12. This applies "fruit:time machine max size" to all shares, but this setting is only meaningful to Time Machine. From https://www.samba.org/samba/docs/current/man-html/vfs_fruit.8.html:

          fruit:time machine max size = SIZE [K|M|G|T|P]

      Useful for Time Machine: limits the reported disksize, thus preventing Time Machine from using the whole real disk space for backup. The option takes a number plus an optional unit. IMPORTANT: This is an approximated calculation that only takes into account the contents of Time Machine sparsebundle images. Therefor you MUST NOT use this volume to store other content when using this option, because it would NOT be accounted. The calculation works by reading the band size from the Info.plist XML file of the sparsebundle, reading the bands/ directory counting the number of band files, and then multiplying one with the other.
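      To make that calculation concrete with made-up numbers: a sparsebundle whose Info.plist reports an 8 MiB band size and whose bands/ directory holds 25,600 band files is counted as 8 MiB x 25,600 = 200 GiB, regardless of anything else stored on the share.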
  13. Anything in SMB Extras applies to all shares (/etc/samba/smb.conf includes /boot/config/smb-extra.conf in the [global] section). Ideally, these settings should be applied to the shares themselves, but this was the easiest way to add these options persistently. You still need to enable "Enhanced OS X interoperability" in the SMB Security Settings for the share you want to use. Hopefully, the next version of Unraid will include options to add these Time Machine options to the SMB shares themselves. It's my understanding that setting a max size in the [global] section will apply the specified max size to each share, i.e., each share used for TM will be allowed 1TB in my example.
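      If you do want it scoped to a single share, the equivalent smb.conf fragment would look roughly like this (the share name is an example; it assumes vfs_fruit is already loaded for that share, which is what "Enhanced OS X interoperability" takes care of):

          [timemachine]
              fruit:time machine = yes
              fruit:time machine max size = 1T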
  14. Time Machine backups over AFP stopped working for me a while ago and I just now got around to moving it over to SMB. Thought I'd share how I got it to work. I created a share for the backup and enabled "Enhanced OS X Interoperability" along with Private security. I added the following to SMB Extras (under Settings->SMB):

          fruit:time machine = yes
          fruit:time machine max size = 1T

      The only way I could get Time Machine on High Sierra to use the share was via the CLI:

          $ sudo tmutil setdestination -a smb://<username>:<password>@<unraid-server>/<timemachine-share>

      The share should then show up in the Time Machine UI. I'm averaging about 50 MB/s from my MBP with a gigabit ethernet connection.
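      You can confirm macOS registered the destination afterwards with:

          $ tmutil destinationinfo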
  15. Just started using unRAID and installed this container for the HDHR DVR. I noticed that hdhomerun_record_x64 runs as root in the container. Should this run as "nobody"? A simple edit to supervisord.conf changing "user=root" to "user=nobody" should do the trick, yes? Is there a reason to run the record engine as root?
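      Something along these lines in supervisord.conf is what I mean (the program section name and command path are from my container and may differ in yours):

          [program:hdhomerun_record]
          command=/opt/hdhomerun/hdhomerun_record_x64
          ; was user=root
          user=nobody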