• Unraid OS version 6.9.0-rc2 available


    limetech

    As always, prior to updating, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".

     

    Hopefully spin-up/down is now sorted:

    • External code (docker containers) using 'smartctl -n standby' should work OK with SATA drives (see the sketch after this list).  This will remain problematic for SAS until/unless smartmontools v7.2 is released with support for '-n standby' on SAS.
    • SMART is unconditionally enabled on devices upon boot.  This solves a problem where some newly installed devices may not have SMART enabled.
    • Unassigned devices will get spun-down according to 'Settings/Disk Settings/Default spin down delay'.
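
    For instance, a container health script can poll SMART data without waking a sleeping SATA drive with something along these lines (the device name is just an example; with '-n standby', smartctl by default exits with status 2 and does nothing further when the drive is in standby):

        # Only read SMART attributes if the drive is already spun up
        smartctl -n standby -A /dev/sdb
        if [ $? -eq 2 ]; then
            echo "drive is in standby - skipping SMART poll"
        fi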

     

    Updated to 5.10 Linux kernel (5.10.1).

     

    Updated docker.

     

    Fixed a bug when joining AD domains.

     


     

    Version 6.9.0-rc2 2020-12-18 (vs -rc1)

    Base distro:

    • bind: version 9.16.8
    • docker: version 19.03.14
    • krb5: version 1.18.2

    Linux kernel:

    • version 5.10.1

    Management:

    • emhttpd: fix external 'smartctl -n standby' causing device spinup
    • emhttpd: enable SMART on devices upon startup
    • emhttpd: unassigned devices spin-down according to global default
    • emhttpd: restore 'poll_attributes' event callout
    • smb: fixed AD join issue
    • webgui: do not try to display SMART info that causes spin-up for devices that are spun-down
    • webgui: avoid php syntax error if autov() source file does not exist

    Edited by limetech

    • Like 8
    • Thanks 4



    User Feedback

    Recommended Comments



    7 hours ago, Moka said:

    enter 5 digits

    Hi Moka,

     

    Have you tried to do a manual edit on /boot/config/rsyslog.cfg to specify the port as a workaround?

    Link to comment

    I'm seeing an issue when using ESXi and setting hypervisor.cpuid.v0 = FALSE in the vmx file.

    The system will not boot if you assign more than one CPU to the virtual machine.
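
    For reference, the relevant .vmx entries look roughly like this (the CPU counts are just illustrative values; the hypervisor.cpuid.v0 line combined with more than one vCPU is what triggers it for me):

        numvcpus = "2"
        cpuid.coresPerSocket = "1"
        hypervisor.cpuid.v0 = "FALSE"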

     

    I'm using ESXi 6.7 with the latest patches and UnRAID 6.9.0-rc2.

    At first I thought it was an issue with passing through my Nvidia P400, but it happens even without any hardware passed through.

     

    My diagnostic zip file is attached.

    Here is a thread from another user discussing the same issue - 

     

     

     

     

    darkstor-diagnostics-20210111-1047.zip

    Link to comment
    18 hours ago, trurl said:

    Virtualizing Unraid isn't supported

    I rolled back to UnRAID version 6.9.0-beta35, and this issue is resolved.

    I'm not sure what changed exactly, but it might be something to do with CPU microcode

    Edited by zer0zer0
    Link to comment
    22 hours ago, zer0zer0 said:

    I rolled back to UnRAID version 6.9.0-beta35, and this issue is resolved.

    I'm not sure what changed exactly, but it might be something to do with CPU microcode

    The plugin "Disable Mitigation Settings" might help you troubleshoot that.

    Link to comment
    2 hours ago, jbartlett said:

    The plugin "Disable Mitigation Settings" might help you troubleshoot that.

    I have been running that for a long time. Maybe that's why I haven't seen this issue.

     

    I don't have a video card to pass through on my test unRAID VM, so I can't do any additional testing without taking down my main unRAID box.

    Link to comment
    19 minutes ago, StevenD said:

    I have been running that for a long time. Maybe that's why I haven't seen this issue.

     

    I don't have a video card to pass through on my test unRAID VM, so I can't do any additional testing without taking down my main unRAID box.


    I saw the boot freeze with just the hypervisor.cpuid.v0 = FALSE in the vmx file and no hardware pass through at all. 

    So it should be independent of video cards etc.  

    Link to comment
    16 hours ago, zer0zer0 said:


    I saw the boot freeze with just the hypervisor.cpuid.v0 = FALSE in the vmx file and no hardware pass through at all. 

    So it should be independent of video cards etc.  

    I finally got a chance to test this.

     

    I have a VM on ESXi 7.0u1. It boots directly from USB and it's on v6.9.0-rc2.  It boots just fine, with or without hypervisor.cpuid.v0 = FALSE.

     

    I am not passing through any hardware on this test VM.

     

    Also, this VM is NOT running "Disable Mitigation Settings"

    Edited by StevenD
    Link to comment

    When not doing Plex stuff, the GPU is doing a bit of folding. The GPU is throttled; is this hardware specific, or can it be switched off? The temp is only 37-42 C (liquid cooled), but the power is about max.

    Screenshot 2021-01-13 at 19.28.05.png

    Edited by frodr
    Link to comment

    Can the GPU be used for dockers then switched to VMs as needed, or is that not supported? I understand it would be difficult.

    If that's not possible, is it possible to reserve one GPU for dockers and another for VMs?

    Link to comment
    48 minutes ago, bfh said:

    Can the GPU be used for dockers then switched to VMs as needed, or is that not supported? I understand it would be difficult.

    If that's not possible, is it possible to reserve one GPU for dockers and another for VMs?

    This is largely case-by-case and depends on the motherboard, its physical PCIe slot to CPU/NUMA/device connections, and even the BIOS version.

     

    If you populate a video card in a given PCIe slot and it shows up in its own IOMMU group, you should be able to pass it to different VM/Dockers. If that is true for multiple PCIe slots, you should be able to populate all of those PCIe slots with a different graphics card and pass each one to a different VM/Docker.

     

    In theory.

     

    You can have many VM/Dockers set up to use a given video card but only one of those can be active at any given time.
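
    If you want to check how your slots are actually grouped, one quick way from the console (assuming the usual sysfs layout) is something like:

        #!/bin/bash
        # Print every IOMMU group and the PCI devices it contains
        for g in /sys/kernel/iommu_groups/*; do
            echo "IOMMU group ${g##*/}:"
            for d in "$g"/devices/*; do
                echo -e "\t$(lspci -nns "${d##*/}")"
            done
        done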

    Edited by jbartlett
    Link to comment
    15 minutes ago, jbartlett said:

    This is largely case-by-case and depends on the motherboard, its physical PCIe slot to CPU/NUMA/device connections, and even the BIOS version.

     

    If you populate a video card in a given PCIe slot and it shows up in its own IOMMU group, you should be able to pass it to different VM/Dockers. If that is true for multiple PCIe slots, you should be able to populate all of those PCIe slots with a different graphics card and pass each one to a different VM/Docker.

     

    In theory.

     

    You can have many VM/Dockers set up to use a given video card but only one of those can be active at any given time.

    Not 100% correct. You can have multiple containers share the GPU, but you can't have a VM and containers use the GPU at the same time.

    To be able to use the GPU for a container, you have to unbind it from vfio so the drivers can be loaded.
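
    One common way to do the vfio binding at boot is via the append line in 'Main/Flash/Syslinux Configuration' (the vendor:device IDs below are only examples; get yours from 'lspci -nn'). Remove the entry and reboot to hand the card back to the host driver for container use:

        label Unraid OS
          menu default
          kernel /bzimage
          append vfio-pci.ids=10de:1c30,10de:10f1 initrd=/bzroot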

    Link to comment

    I've never done GPU passthrough before, so I don't know if this happens all the time, or if it's an issue with my setup, or if it's an rc2 bug (leaning towards the latter, but I have no idea). I have an old GeForce 210 I threw in to pass through to a VM, and I noticed that when I stopped the VM, I still had video.

     

    To my knowledge, monitors only display what they're actively told to, so if there's nothing utilizing the GPU, there should be no signal to the monitor, as it shouldn't be telling the monitor to do anything. Anyway, after stopping, and even completely nuking the VM and disks, it still persisted. Was using the VGA output on the card, and the monitor constantly displayed the last thing that was on screen when I shut it down (setup screen of a Windows ISO) as if it was actively being told to display it still.

     

    I did not check if stopping the VM properly avoids this. This was using the force stop option.

    Link to comment
    59 minutes ago, TechGeek01 said:

    did not check if stopping the VM properly avoids this. This was using the force stop option.

    Haven't tried this myself but doesn't seem surprising. Force stop is sort of a hard poweroff as far as the VM is concerned but not as far as the hardware is concerned.

     

    59 minutes ago, TechGeek01 said:

    an rc2 bug

      Does it behave any differently with another release?

     

     

    • Like 1
    Link to comment
    On 1/11/2021 at 4:04 AM, SimonF said:

    Hi Moka,

     

    Have you tried to do a manual edit on /boot/config/rsyslog.cfg to specify the port as a workaround?

    I tried, but it is not working.

    Link to comment
    8 hours ago, trurl said:

    Haven't tried this myself but doesn't seem surprising. Force stop is sort of a hard poweroff as far as the VM is concerned but not as far as the hardware is concerned.

    Yeah, I know it's a hard power off to the VM. But displays require an active signal, correct? That is, the monitor won't continue to display when there's no signal, and has to be constantly told what to output. So how on earth, after killing and even nuking the VM, should the card still be telling it to display the same last frame of video the VM was working with?

     

    Actually, I wasn't able to get USB passthrough of keyboard and mouse working. I force stopped the VM once I realized I needed keyboard and mouse, and then edited the VM, and it wasn't responding to input. I'm not sure if that's a USB passthrough thing (either a bug, or something to do with IOMMU grouping, I have no idea), or if that was just the GPU still stuck on the last frame of video even after restarting it. I'll give that a new test in the morning to rule that out.

     

    8 hours ago, trurl said:

      Does it behave any differently with another release?

    My main Unraid server on 6.8.3 uses low profile cards. I unfortunately don't have anything that'll fit in there to test. I was *planning* on getting a low profile Quadro, but I haven't bought one yet since I wanna make sure this'll work. Side note, would I be able to use said Quadro for HW acceleration in a Plex Docker?

    Link to comment
    5 hours ago, Moka said:

    I tried, but it is not working.

    Hi, sorry, forgot to mention you will need to run /usr/local/emhttp/plugins/dynamix/scripts/rsyslog_config to update the running config.
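
    A rough sketch of the whole workaround (the key name in the config edit is hypothetical - check your existing rsyslog.cfg for the actual one):

        # Edit the syslog settings stored on the flash drive
        nano /boot/config/rsyslog.cfg
        # e.g. set the desired port with something like (hypothetical key name):
        #   remote_port="5141"

        # Then apply the change to the running config
        /usr/local/emhttp/plugins/dynamix/scripts/rsyslog_config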

    Link to comment

    After the update to RC2 everything seemed fine, but then I started to notice that Unraid would wake up disks even when data was copied to the *cache*, not the array.

     

    I also noticed that throughout the day some disks get spun up for no apparent reason, at times when everyone is asleep. There are no VMs or dockers; Unraid is used as a plain NAS.
     

     

     

    Edited by cholzer
    Link to comment
    17 hours ago, bfh said:

    Can the GPU be used for dockers then switched to VMs as needed, or is that not supported? I understand it would be difficult.

    If that's not possible, is it possible to reserve one GPU for dockers and another for VMs?

    Not sure if this is typical for all setups, but certainly for me, even with no dockers running and no processes utilising a GPU, it would hard lock the server if I tried to start a VM with the GPU passed through (with drivers loaded).

     

    Binding the GPU to the VFIO driver avoided this situation. With one GPU bound to VFIO it can be used for VMs... Another GPU remains for the host (GUI) and Docker.

    Link to comment
    13 hours ago, TechGeek01 said:

    I'll give that a new test in the morning to rule that out.

    Okay, so gracefully stopping the VM did not work. The only thing that would actually kill it was a force stop. Usually a clean stop seems to send an ACPI shutdown command, and will force kill if it's still running after a minute or so. Anyway, force stopping this time also led to the same stuck frame on the monitor, even after the VM was powered off (instead of the expected "no signal"). What I did notice is that unplugging the VGA cable from the card and plugging it back in does not reset this, so the detection of a monitor here makes no difference, and it seems to be the GPU that's frozen. This is fixed by restarting the VM, so the card isn't stuck; it's just being told to keep displaying an image after the VM is stopped, which might not be the desired effect.

     

    Also, I notice USB passthrough isn't working properly on my system. I have a Dell R510 with a Logitech K120 and an M325 plugged in. Both work in Unraid itself, but they seem to control Unraid, and aren't passed through to the VM. This was with them on the front USBs of the server, so I don't know if that's an Unraid thing, or if it's just a grouping thing not being able to pass through. FWIW, the Unraid USB is on the same controller board as those front USBs (the internal USB on the R510 is on that front control board, so presumably the same USB controller). Rear USBs directly on the motherboard I/O had the same effect, controlling the host and not being passed into the VM.

     

    Additionally, editing a VM to change the USB device leads to an error "unable to find device" on power on. The ID it gives is for the new device, so it's letting go of the old device config properly, though it seems like it's unable to find and bind to a new USB device, despite it being in the USB passthrough list, and being checked. Deleting and recreating the VM fixes this, as it works fine when creating, just not editing.

    Link to comment
    On 1/13/2021 at 4:21 PM, saarg said:

    Not 100% correct. You can have multiple containers share the GPU, but you can't have a VM and containers use the GPU at the same time.

    To be able to use the GPU for a container, you have to unbind it from vfio so the drivers can be loaded.

    Just to clarify, does a GPU need to be unbound from vfio as well to work in a VM? Is binding to vfio the same as editing the 'Syslinux Configuration' to add a GPU's IOMMU group?

     

    Link to comment
    16 hours ago, TechGeek01 said:

    displays require an active signal

    Yes, the signal from the video card that wasn't shut down.

    • Like 1
    Link to comment
    On 1/13/2021 at 5:18 PM, TechGeek01 said:

    Was using the VGA output on the card, and the monitor constantly displayed the last thing that was on screen when I shut it down (setup screen of a Windows ISO) as if it was actively being told to display it still.

    That's actually fairly typical; the video card just renders whatever is in the framebuffer. Until something flushes the buffer or otherwise resets the card, it's just going to show whatever was stuffed into those memory addresses last. If you kill the VM and nothing else takes control of the card, it will just stay in its last state. Normal hardware machines typically take back control of the video card after the OS shuts down.

    • Like 1
    Link to comment

    I'm having random system freezes every few days. The first time this occurred I was away from home, so I was unable to physically access the box for a few days. When I did return home and reboot, I found my VM settings were completely wiped. I thought to restore the RC1 backup, but then discovered that the backup doesn't back up VM settings. In any case, I restored and then recreated the VM and upgraded again to RC2. The system freezing returned.

     

    I can't get anything useful from the logs but have attached diagnostics. I am going to shut down my VM when not in use as a precautionary measure in case the gpu or usb passthrough is causing the instability.

     

    Prior to RC2, I had been using the Unraid 6.9.0 betas (I've been a user since September 2020) and only had the very rare occasional issue, so it definitely seems like something in the latest release. I'll advise again after a few days whether shutting down the VM improves stability.

    fatboy-diagnostics-20210116-1001.zip

    Link to comment
    On 1/13/2021 at 7:39 PM, frodr said:

    When not doing Plex stuff, the GPU is doing a bit of folding. The GPU is throttled; is this hardware specific, or can it be switched off? The temp is only 37-42 C (liquid cooled), but the power is about max.

    Screenshot 2021-01-13 at 19.28.05.png

    Well, it's been the normal behavior of the Nvidia drivers for a while. A "power limit" is enforced for the card by the vbios and drivers, and when the power drawn approaches this limit, the clocks are throttled.

    If you want to see what the power limits are and to what extent you can adjust them, just have a look at the output of 'nvidia-smi -q'.

    Mine looks like this on a P2000 (which is only powered by the PCIe slot, hence the 75 W min & max):

        Power Readings
            Power Management                  : Supported
            Power Draw                        : 65.82 W
            Power Limit                       : 75.00 W
            Default Power Limit               : 75.00 W
            Enforced Power Limit              : 75.00 W
            Min Power Limit                   : 75.00 W
            Max Power Limit                   : 75.00 W

    On this one, no adjustment is possible, as Min and Max Power Limits are the same. And it's almost constantly throttled due to the power cap when folding.

     

    Same output for an RTX 3060 Ti on another rig:

     

    [...]
    Clocks Throttle Reasons
            Idle                              : Not Active
            Applications Clocks Setting       : Not Active
            SW Power Cap                      : Active
            HW Slowdown                       : Not Active
                HW Thermal Slowdown           : Not Active
                HW Power Brake Slowdown       : Not Active
            Sync Boost                        : Not Active
            SW Thermal Slowdown               : Not Active
            Display Clock Setting             : Not Active
     [...]
        Power Readings
            Power Management                  : Supported
            Power Draw                        : 194.23 W
            Power Limit                       : 200.00 W
            Default Power Limit               : 200.00 W
            Enforced Power Limit              : 200.00 W
            Min Power Limit                   : 100.00 W
            Max Power Limit                   : 220.00 W
    [...]

    For this one, you can see it is throttled because 194W are drawn out of a 200W limit. But this power limit can be adjusted between 100W and 220W through the command 'nvidia-smi -pl XXX', where XXX is the desired limit in watts.
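
    For example, something like this (the GPU index and wattage are placeholders; the value has to stay within the Min/Max limits shown above):

        nvidia-smi -i 0 -pm 1         # enable persistence mode so the setting isn't lost when the driver idles
        nvidia-smi -i 0 -pl 220       # set the enforced power limit to 220 W
        nvidia-smi -i 0 -q -d POWER   # verify the new enforced limit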

    That's the way it works, and it makes overclocking/undervolting more complicated. The way to go for efficient folding is to lower the power limit while overclocking the GPU (same performance with less power). But that's impossible, afaik, on an Unraid server, as you need an X server to launch the required 'nvidia-settings' overclocking utility...

     

    To summarize: nothing worrying in what you see, and not much you can do. The only thing you can try under Unraid is to raise the power limit to the max and see if you get better results for your folding. From my personal experience: minimal impact on performance, and a bit more power drawn 😞

    • Like 1
    • Thanks 1
    Link to comment




