Leaderboard

Popular Content

Showing content with the highest reputation since 02/07/21 in Reports

  1. After updating to 6.9 Final, the HDDs (SATA) no longer go into standby (after 30 minutes); there is no spin-down. I also set the delay to 15 minutes, but the HDDs still don't go into standby. I had not changed any system settings before the update, only afterwards while trying to solve the problem (uninstalled plugins, etc.). Before the update, on 6.8.3, spin-down worked fine. I hope you can help.
    2 points
  2. I don't search often, so I can't say for certain it's the beta, but it used to work consistently and now it doesn't, even in safe mode. No results are returned no matter how long I wait. I'm running the latest macOS Catalina. I also tested in macOS Mojave (Unraid VM), with the same result. I have a Raspberry Pi shared over SMB where search from the same two clients works fine. Diagnostics from safe mode attached. nas-diagnostics-20201027-1319.zip
    1 point
  3. For reference, 6.9 rc2, admittedly on different server
    1 point
  4. Create a test directory in /mnt/user/Downloads:

     root@MediaStore:/mnt/user/Downloads# ls -al test
     total 0
     drwx------ 1 root   root        0 Jan 20 23:33 ./
     drwxrws--- 1 nobody users  205274 Jan 20 23:33 ../
     root@MediaStore:/mnt/user/Downloads# ls -ld /mnt/{cache,user}/Downloads
     drwxrws--- 1 nobody users 205274 Jan 20 23:33 /mnt/cache/Downloads/
     drwxrws--- 1 nobody users 205274 Jan 20 23:33 /mnt/user/Downloads/

     When this directory is mounted in a container like so:

     root@MediaStore:~# docker run --rm --name box -d -v /mnt/cache/Downloads:/media alpine sleep 3600
     131ed3b6357ba8253513
    1 point
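The listing in the report above shows a parent share with the setgid bit set (drwxrws---) while the freshly created test directory got drwx------ root root, i.e. the group was not inherited. On a stock Linux filesystem, setgid inheritance itself can be sanity-checked outside of Docker and Unraid with a quick sketch like this (the paths and modes here are illustrative, not the poster's actual share):

```shell
# Minimal setgid-inheritance check, assuming GNU coreutils (stat -c).
tmp=$(mktemp -d)
mkdir "$tmp/Downloads"
chmod 2775 "$tmp/Downloads"          # rwxrwsr-x: setgid bit on the parent
mkdir "$tmp/Downloads/test"          # child should inherit group + setgid
stat -c '%A %G' "$tmp/Downloads/test"   # group triple should contain 's'
rm -rf "$tmp"
```

If the 's' survives here but not on the share, the difference is coming from the mount or from how the process inside the container creates the directory, not from the kernel's setgid semantics.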
  5. Hi, while I was trying to test the new WSL2 feature, it seems the nested VM feature is no longer working. Unraid 6.8.3, Windows 10 Pro VM. What I have done:
     - disabled all VMs and disabled the VM Manager
     - in the Unraid shell:
       modprobe -r kvm_intel
       modprobe kvm_intel nested=1
     - edited the VM XML template with the following entry under the <cpu> section:
       <feature policy='require' name='vmx'/>
     - tried starting the VM Manager; no luck, it always ended in "libvirt service failed to start", and after some digging the only results said to check paths
    1 point
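For readers retracing the steps in the previous report, the feature line goes inside the <cpu> element of the libvirt domain XML, roughly like this (a sketch only; the cpu mode/check attributes are an assumption, not the poster's actual template):

```xml
<cpu mode='host-passthrough' check='none'>
  <!-- expose Intel VT-x (vmx) to the guest so it can run nested
       hypervisors such as Hyper-V / WSL2 -->
  <feature policy='require' name='vmx'/>
</cpu>
```

Note that `modprobe -r kvm_intel && modprobe kvm_intel nested=1` only holds until the next reboot; persisting it normally means a modprobe options line such as `options kvm_intel nested=1` in a modprobe.d config file.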
  6. No issues in 6.7.2. Existing VMs (Server 2016) with Hyper-V enabled won't boot after the update; they get stuck at the TianoCore logo. Booting into recovery mode works, and booting from an install DVD to load the virtio drivers and modify the BCD works. Removing "hypervisorlaunchtype auto" from the BCD makes the VM boot (but disables Hyper-V). How to reproduce (in my case; I hope it's not just me...):
     1) Create a new VM with the Server 2016 template.
     2) Install Server 2016/2019.
     3) Enable Hyper-V and reboot.
     It should either not reboot, boot into recovery, or come back with a non-working "VMbus".
    1 point
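The BCD change described in the report above corresponds to the standard `hypervisorlaunchtype` boot-configuration element; inside the guest it can be toggled from an elevated prompt (or from the recovery environment against the offline BCD store). A sketch of the two directions:

```
rem Disable the Hyper-V hypervisor launch so the VM boots again
bcdedit /set hypervisorlaunchtype off

rem Re-enable Hyper-V once the underlying issue is resolved
bcdedit /set hypervisorlaunchtype auto
```

Setting it to `off` is equivalent in effect to deleting the "hypervisorlaunchtype auto" entry as the poster did, but is easier to revert.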