• Unraid OS version 6.9.0-beta30 available


    limetech

    Changes vs. 6.9.0-beta29 include:

     

    Added workaround for mpt3sas not recognizing devices with certain LSI chipsets. We created this file:

    /etc/modprobe.d/mpt3sas-workaround.conf

    which contains this line:

    options mpt3sas max_queue_depth=10000

    When the mpt3sas module is loaded at boot, that option will be specified. If you previously added "mpt3sas.max_queue_depth=10000" to your syslinux kernel append line, you can now remove it. Likewise, if you manually load the module via the 'go' file, you can remove that as well. When/if the mpt3sas maintainer fixes the core issue in the driver, we'll get rid of this workaround.
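
    To confirm the option took effect after a reboot, you can read the module's current parameter value from sysfs. A minimal check, assuming the mpt3sas module is loaded and exports the parameter read-only (current kernels do):

    cat /sys/module/mpt3sas/parameters/max_queue_depth
    # expected to print 10000 once the conf file above is applied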

     

    Reverted libvirt to v6.5.0 in order to restore storage device passthrough to VMs.

     

    A handful of other bug fixes, including 'unblacklisting' the ast driver (Aspeed GPU driver). For those using these on-board graphics chips, primarily on Supermicro boards, this should increase the speed and resolution of the local console webGUI.
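
    If you want to verify the ast driver is actually in use after upgrading, a quick check from a console or SSH session looks like this (a minimal sketch; output will vary by board):

    lsmod | grep '^ast'              # the ast module should now appear since it is no longer blacklisted
    dmesg | grep -iE 'ast|aspeed'    # look for the driver binding to the on-board Aspeed GPU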

     


     

    Version 6.9.0-beta30 2020-10-05 (vs -beta29)

    Base distro:

    • libvirt: version 6.5.0 [revert from version 6.6.0]
    • php: version 7.4.11 (CVE-2020-7070, CVE-2020-7069)

    Linux kernel:

    • version 5.8.13
    • ast: removed blacklisting from /etc/modprobe.d
    • mpt3sas: added /etc/modprobe.d/mpt3sas-workaround.conf to set "max_queue_depth=10000"

    Management:

    • at: suppress session open/close syslog messages
    • emhttpd: correct 'Erase' logic for unRAID array devices
    • emhttpd: wipefs encrypted device removed from multi-device pool
    • emhttpd: yet another btrfs 'free/used' calculation method
    • webGUI: Update statuscheck
    • webGUI: Fix dockerupdate.php warnings

     




    User Feedback




    New issue: I can't unpin CPUs from Docker containers. When I do so in CPU Pinning and press Apply, everything seems fine, but when I re-open the pinning settings, everything is back to how it was.

    If I go into the container settings for the Docker container and change the pinning there, it persists.

    Edited by Jaster
    8 hours ago, Jaster said:

    New issue: I can't unpin CPUs from Docker containers. When I do so in CPU Pinning and press Apply, everything seems fine, but when I re-open the pinning settings, everything is back to how it was.

    If I go into the container settings for the Docker container and change the pinning there, it persists.

    There is already a defect for this. You may want to review and add any new details you have.

     

     


    After installing 6.9 and formatting my SSD with the new 1MiB partition alignment, can I go back to 6.8.3?

     

    I was getting massive amounts of writes to my SSD on 6.8.3. I'm still seeing a lot of writes, but the total appears to be lower with 6.9. I want to revert to 6.8.3 to try something, but if 6.8.3 doesn't accept the current 1MiB-aligned format, knowing that beforehand would save me a lot of trial and error before going back and having to rebuild my SSD again. Lol


    Hmm, I went back to 6.8.3 and Unraid didn't see my SSD or my Docker containers. It's formatted as btrfs and I'm using the cache pool, named simply "Cache".

     

    I wonder if I set something up incorrectly from the start so it can't go back. I used 1MiB-aligned btrfs.

    11 hours ago, kizer said:

    Hmm, I went back to 6.8.3 and Unraid didn't see my SSD or my Docker containers. It's formatted as btrfs and I'm using the cache pool, named simply "Cache".

    As mentioned, it only works for a pool. It looks like the partition layout isn't checked for pools, probably an oversight, or maybe on purpose. In any case, you can have a single-device btrfs "pool" as long as the pool slots are set to more than 1.
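
    If you're unsure which partition layout a cache/pool device currently has before attempting a downgrade, you can check the partition's starting sector (a hedged sketch; replace sdX with your device; on 512-byte-sector drives a start of 2048 indicates the new 1MiB alignment, while the older Unraid layout starts much lower):

    cat /sys/block/sdX/sdX1/start    # starting sector of the first partition
    fdisk -l /dev/sdX                # alternative: shows the partition table with start sectors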

     

     

    On 10/26/2020 at 8:55 PM, Jaster said:

    knowlage-diagnostics-20201026-2051.zip

     

    Hi guys,

    I see tons of

    
    unexpected GSO type: 0x0, gso_size 35, hdr_len 89

    messages. I attached the diagnostics.

    Anything I can/should do about it?

    Starting VMs takes way longer than it did on 6.8.3.

    It takes about 7 minutes to get my VMs with passthrough up and running.
    The Docker and Apps tabs take about 30 seconds to load on every refresh.

    In general it feels like VMs are slower overall. I'm running on an XFS cache right now.

    Edited by Jaster

    I run Unraid on two Intel NUCs. Both use the onboard Intel NIC, and one has a Realtek RT-8123 USB-to-gigabit-Ethernet dongle. The Intel NIC always works, and up until 6.9 beta 25 the USB dongle worked too. Both beta 29 and beta 30 fail with a call trace followed by:

     

    "r8152 2-3.3:1.0: couldn't register the device"

     

    I realize that with the kernel change, and Realtek not being good at keeping up, there is likely an issue with the driver. I have tried two different USB dongles (RT-8152 and RT-8153). I wish I could find one that wasn't a Realtek chipset, but I haven't had any luck in that regard. I have attached the diagnostics for it working under beta 25 and not working under beta 30.

     

    Diagnostics-6.9release25-working.zip Diagnostics-6.9release30-not-working.zip
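
    For anyone trying to narrow a similar failure down, it can help to confirm which chipset the dongle actually reports and what the kernel logs when the r8152 driver probes it (a minimal sketch; the grep patterns are only suggestions):

    lsusb | grep -i realtek    # shows the USB vendor/product ID the dongle reports
    dmesg | grep -i r8152      # shows the driver probe messages, including the register failure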


    Thanks for the addition of the Aspeed driver!

     

    On that note, I have a Supermicro X10-DRi running Unraid, and with the latest firmware for the BMC, I can't boot into GUI mode. I just get a black screen after all the scrolling text instead of the login screen that should show up. I actually had to revert to BMC firmware 3.77, as 3.80 and up do not work with GUI mode. This was failing for me on Unraid 6.8.3 after migrating the USB from my old server.

     

    Are you guys by chance able to confirm that the latest Aspeed driver you've included works in GUI mode on this latest firmware for the board?

     

    For reference, X10 is based on the ASPEED AST 2400 controller. Seems like this same latest driver is required for almost all X10 motherboards, so if you could confirm that a Supermicro X10 board with IPMI firmware 3.80 or later can successfully boot into GUI mode, that would be amazing.


    I'm experiencing regular freezes on my Windows VMs. It's intermittent; sometimes I can use the VM for a few hours before it happens, and sometimes it happens as soon as the VM boots.

     

    The allocated CPU cores hit 100% when this happens and everything freezes, including video output. How long it freezes for appears to be random but it can be from 10 mins to a few hours.

     

    I've been running this VM since beta 25. It only seems to be happening since beta 30, but I downgraded to beta 29 and the same thing still happens. I even set up a new VM (using the same SSD) and removed and re-added the GPU from passthrough and vfio-pci, but it doesn't seem to help.

     

    Edit: It seems to work reliably when using only one core/thread pair, but the issue occurs again when using any more.

     

    Edit 2: Updated the VirtIO drivers and Nvidia GPU drivers and made sure Windows was up to date, but no dice yet. I also notice I get this in the libvirt log; not sure if it's related:
     

    2020-11-04 09:58:14.366+0000: 6887: warning : qemuDomainObjTaint:6075 : Domain id=7 name='Windows 10 Game' uuid=62b0ac6c-dffc-f8d3-b506-474906225d4c is tainted: high-privileges
    2020-11-04 09:58:14.366+0000: 6887: warning : qemuDomainObjTaint:6075 : Domain id=7 name='Windows 10 Game' uuid=62b0ac6c-dffc-f8d3-b506-474906225d4c is tainted: host-cpu

    Edit 3: Could be an anomaly, but half the time if I open the VNC viewer the system becomes responsive again. (I normally use Steam Link or Chrome Remote Desktop to control the machine.)

     

    home-server-diagnostics-20201103-1427.zip

    Edited by joshkrz
    9 hours ago, TechGeek01 said:

    For reference, X10 is based on the ASPEED AST 2400 controller. Seems like this same latest driver is required for almost all X10 motherboards, so if you could confirm that a Supermicro X10 board with IPMI firmware 3.80 or later can successfully boot into GUI mode, that would be amazing.

    That's good to know, and good to point out. I don't boot into GUI mode, but it is a nice option to have if needed, so I'd like to keep it available. I run an X10SRA-F and remember seeing a note that a new VGA driver was required when updating the BMC to 3.80 or later (I'm on 3.88 now).

    10 hours ago, civic95man said:

    That's good to know, and good to point out. I don't boot into GUI mode, but it is a nice option to have if needed, so I'd like to keep it available. I run an X10SRA-F and remember seeing a note that a new VGA driver was required when updating the BMC to 3.80 or later (I'm on 3.88 now).

    I don't use GUI mode often, but I usually set it to boot there by default, so that on the off chance the web GUI locks up, or there's a network problem, I can still reconfigure things locally.

     

    I actually had to downgrade the BMC to 3.77 because 3.80 and up didn't work in GUI mode. When I moved to this new Supermicro server from the Dell R510, the network changed, so I wouldn't have been able to get to it on another computer. It actually looks like it boots normally, and then as soon as the scrolling text goes away and you're supposed to be dumped at the login screen, it's just black.

     

    So yeah, if that latest driver could be included with the rest of the Aspeed stuff if it's not already, and verified to work on updated 3.88 on an X10 board, that would be awesome!


    Hi, it looks like I've now run into the second long-runtime issue with 6.9 beta 30 here.

     

    It's starting to struggle with the Unraid web services again (tested with a different browser too).

    Uptime is now 19 days...

     

    For example, CPU usage doesn't update anymore (and of course there should be activity to show).


     

    Opening a web terminal is broken now.


     

    The log page is broken (spins forever).


     

    The rest still seems to be OK: VMs up, Docker containers up, shares up, all reachable (not like last time).

     

    All of this was working a few hours ago.

     

    From Tools > System Log, these are the latest entries:

    Nov  4 17:23:06 AlsServer kernel: br0: port 4(vnet2) entered blocking state
    Nov  4 17:23:06 AlsServer kernel: br0: port 4(vnet2) entered disabled state
    Nov  4 17:23:06 AlsServer kernel: device vnet2 entered promiscuous mode
    Nov  4 17:23:06 AlsServer kernel: br0: port 4(vnet2) entered blocking state
    Nov  4 17:23:06 AlsServer kernel: br0: port 4(vnet2) entered forwarding state
    Nov  4 17:23:07 AlsServer avahi-daemon[10889]: Joining mDNS multicast group on interface vnet2.IPv6 with address fe80::fc54:ff:feb5:951d.
    Nov  4 17:23:07 AlsServer avahi-daemon[10889]: New relevant interface vnet2.IPv6 for mDNS.
    Nov  4 17:23:07 AlsServer avahi-daemon[10889]: Registering new address record for fe80::fc54:ff:feb5:951d on vnet2.*.
    Nov  4 18:01:21 AlsServer nginx: 2020/11/04 18:01:21 [error] 13223#13223: *6383070 connect() to unix:/var/run/ttyd.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.200, server: , request: "GET /webterminal/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "192.168.1.2", referrer: "http://192.168.1.2/Dashboard"
    Nov  4 18:04:01 AlsServer webGUI: Successful login user root from 192.168.1.200
    Nov  4 18:04:36 AlsServer nginx: 2020/11/04 18:04:36 [error] 13223#13223: *6383365 connect() to unix:/var/run/ttyd.sock failed (111: Connection refused) while connecting to upstream, client: 192.168.1.200, server: , request: "GET /webterminal/ HTTP/1.1", upstream: "http://unix:/var/run/ttyd.sock:/", host: "192.168.1.2", referrer: "http://192.168.1.2/Dashboard"
    Nov  4 18:05:20 AlsServer unassigned.devices: Error: shell_exec(/bin/df '/mnt/disks/192.168.1.45_internal' --output=size,used,avail | /bin/grep -v '1K-blocks' 2>/dev/null) took longer than 2s!

    diags attached.

     

    I'll leave it running for a little while before I reboot, or maybe even roll back to .29, which ran nicely 24/7 from day 1 until the update to .30.

     

    If you need any further info from the system, let me know. But as this is now the 2nd time the system has run into a "more or less" unreachable state, I guess it has something to do with the emhttpd changes...

     

    No changes, no new VMs, no new Docker containers... just a small Docker container swap a few days ago, but that shouldn't be the reason for Unraid webGUI issues.

     

    Is there maybe a way to restart the web server service, just to make sure?
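
    One thing that can be tried without a full reboot (a hedged sketch, assuming the stock Slackware-style rc script Unraid ships for nginx; this only restarts the webGUI front end, not emhttpd itself):

    /etc/rc.d/rc.nginx restart    # restart nginx, which serves the webGUI and proxies the web terminal
    ps aux | grep '[t]tyd'        # check whether the web terminal backend is still running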

     

    External SSH access is still working too (like last time).

    diags.zip

    Edited by alturismo
    On 10/19/2020 at 11:47 AM, limetech said:

    Where do you see that 5.9 will be next LTS?

    Anyway, they've just declared 5.8 EOL.

    On 10/14/2020 at 4:33 PM, limetech said:

    There is something misconfigured, please post diagnostics.zip.

     

    On 10/14/2020 at 4:40 PM, atconc said:

     

    Bumping this. I just temporarily reinstalled beta 30 for another reason and took the opportunity to check if this was still happening; it is: very slow web UI and apps in Docker containers, and extremely high CPU usage for shfs while it's happening. Reverting to beta 25 again solves this for me. Any idea what's going on? I'm trying to avoid ending up with this issue on the next stable release.


    Hi, what do I have to consider if I want to downgrade from Unraid 6.9-beta30 to Unraid 6.8.3?

    I upgraded from Unraid 6.8.3 to Unraid 6.9-beta29, then to Unraid 6.9-beta30.

    I can probably only go back to Unraid 6.9-beta29 via software. What do I have to do to get Unraid 6.8.3 back and keep the system working?

    Currently I'm backing up Docker, my VMs, and my shares. Then I move the appdata / domains / system directories from the cache to the array, and then what?

     

     


    Assuming you're not running any impacted 3rd-party plugins like ZFS, you also need to ensure your cache drive is backed up. I'm not sure there's an automated way; I think you have to copy it somewhere else, reformat the drive after you've downgraded, then put the data back. Could be wrong. It definitely gets kicked out of the array though.
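
    A rough sketch of the copy-out step (the paths here are only examples, not a prescribed procedure; stop the Docker and VM services first so files aren't changing while you copy):

    rsync -avhX --progress /mnt/cache/ /mnt/disk1/cache-backup/
    # copies everything from the cache to a folder on an array disk, preserving attributes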


    I am having an issue with pools.

     

    I set up an APP-Storage pool to be used for Docker containers and VMs. This pool comprises 3 x Samsung 970 Evo NVMe drives.

     

    I want this pool to be RAID 5 or RAID 0, not sure yet, but that is not the issue. The issue is that when I go to the App-Storage pool device, go to the btrfs balance section, and set it to convert to RAID 0, nothing changes. Is there a way to do it via terminal / SSH or via config?

     

    Also, the Erase button doesn't seem to work in the pool? Not sure.

     

    Thank you.

    Edited by emsbas
    37 minutes ago, emsbas said:

    The issue is that when I go to the App-Storage pool device, go to the btrfs balance section, and set it to convert to RAID 0, nothing changes. Is there a way to do it via terminal / SSH

    There is, but it's unlikely to work if the GUI method doesn't, because the GUI simply invokes the same command. Something else is wrong; your diagnostics should show what it is.
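
    For reference, the conversion is done with a btrfs balance filter. A minimal sketch of the manual command (the mount point is an assumption based on the pool name, and the metadata profile shown is just one common choice):

    btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/app-storage
    btrfs balance status /mnt/app-storage    # check progress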


    I just upgraded from 6.8.3 to 6.9.0 beta 30,

    and when I start a Windows 10 VM (no passthrough) I get lots of errors like this:

     

    Nov 9 09:07:43 UNRAID kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 97
    Nov 9 09:07:43 UNRAID kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Nov 9 09:07:43 UNRAID kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Nov 9 09:07:43 UNRAID kernel: tun: 01 00 00 00 a1 59 24 59 37 ed 10 dc 38 77 e1 04 .....Y$Y7...8w..
    Nov 9 09:07:43 UNRAID kernel: tun: 8b 2c 4c a4 d3 57 66 6d 66 ec a9 0e 53 ab d9 6e .,L..Wfmf...S..n
    Nov 9 09:08:15 UNRAID kernel: tun: unexpected GSO type: 0x0, gso_size 55, hdr_len 121
    Nov 9 09:08:15 UNRAID kernel: tun: b4 6b 08 0f 5d fd 72 a5 2d 71 a7 dc f8 75 ab 71 .k..].r.-q...u.q
    Nov 9 09:08:15 UNRAID kernel: tun: 20 24 6d 5e c6 a0 e9 d1 7c 90 25 8f ef 8d ac 3b $m^....|.%....;
    Nov 9 09:08:15 UNRAID kernel: tun: a7 ab aa c3 0c 42 56 2a cb b7 53 bc b6 c3 3f 45 .....BV*..S...?E
    Nov 9 09:08:15 UNRAID kernel: tun: ed 47 a2 f4 d5 ed 53 b8 6b b7 a6 29 fb c2 de b8 .G....S.k..)....
    Nov 9 09:08:15 UNRAID kernel: tun: unexpected GSO type: 0x0, gso_size 55, hdr_len 121
    Nov 9 09:08:15 UNRAID kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Nov 9 09:08:15 UNRAID kernel: tun: 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 00 ................
    Nov 9 09:08:15 UNRAID kernel: tun: 01 00 00 00 07 6f ac 36 95 6f bf de 9b 66 fb f5 .....o.6.o...f..
    Nov 9 09:08:15 UNRAID kernel: tun: c5 04 4b e4 9f dd 64 b6 85 18 2a f3 2b c6 3e 57 ..K...d...*.+.>W

     

    unraid-diagnostics-20201109-0912.zip


    Thanks a lot. I was reading the beta release notes... thanks for pointing this out to me 🙂

    Fixed it. Set the network card to "virtio-net".
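
    For anyone else hitting the 'unexpected GSO type' flood: the "virtio-net" setting corresponds to the network model in the VM's XML, roughly like the snippet below (a hedged sketch; the bridge name is an assumption and will differ per VM):

    <interface type='bridge'>
      <source bridge='br0'/>
      <model type='virtio-net'/>
    </interface>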




    This is now closed for further comments
