• Unraid OS version 6.9.0-beta30 available


    limetech

    Changes vs. 6.9.0-beta29 include:

     

    Added workaround for mpt3sas not recognizing devices with certain LSI chipsets. We created this file:

    /etc/modprobe.d/mpt3sas-workaround.conf

    which contains this line:

    options mpt3sas max_queue_depth=10000

When the mpt3sas module is loaded at boot, that option will be specified.  If you previously added "mpt3sas.max_queue_depth=10000" to the syslinux kernel append line, you can remove it.  Likewise, if you manually load the module via the 'go' file, you can remove that as well.  When/if the mpt3sas maintainer fixes the core issue in the driver, we'll get rid of this workaround.
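
    One way to confirm the option took effect after boot (assuming the mpt3sas module is loaded) is to read the runtime parameter back from sysfs:

    cat /etc/modprobe.d/mpt3sas-workaround.conf
    options mpt3sas max_queue_depth=10000

    cat /sys/module/mpt3sas/parameters/max_queue_depth
    10000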

     

    Reverted libvirt to v6.5.0 in order to restore storage device passthrough to VMs.

     

    A handful of other bug fixes, including 'unblacklisting' the ast driver (the Aspeed GPU driver).  For those using this on-board graphics chip, primarily on Supermicro boards, this should improve the speed and resolution of the local console webGUI.

     


     

    Version 6.9.0-beta30 2020-10-05 (vs -beta29)

    Base distro:

    • libvirt: version 6.5.0 [revert from version 6.6.0]
    • php: version 7.4.11 (CVE-2020-7070, CVE-2020-7069)

    Linux kernel:

    • version 5.8.13
    • ast: removed blacklisting from /etc/modprobe.d
    • mpt3sas: added /etc/modprobe.d/mpt3sas-workaround.conf to set "max_queue_depth=10000"

    Management:

    • at: suppress session open/close syslog messages
    • emhttpd: correct 'Erase' logic for unRAID array devices
    • emhttpd: wipefs encrypted device removed from multi-device pool
    • emhttpd: yet another btrfs 'free/used' calculation method
    • webGUI: Update statuscheck
    • webGUI: Fix dockerupdate.php warnings

     




    User Feedback

    Recommended Comments



    Quote

    Version 6.9.0-beta30 2020-10-05

    Linux kernel: version 5.8.13

    Now that the 5.9 kernel has gone stable, and 5.9, not 5.8, will be the one designated "longterm", maybe we should try going with the 5.9 series?


    With 6.9.0-beta30 I can observe a performance issue with my SoNNeT G10E-1X-E3 network card, which is based on the Aquantia AQC-107S chipset.
    With 6.9.0-beta25 I got the following iperf3 results:

    toskache@10GPC ~ % iperf3 -c 192.168.2.4
    Connecting to host 192.168.2.4, port 5201
    [  5] local 192.168.2.199 port 50204 connected to 192.168.2.4 port 5201
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec  1.15 GBytes  9.90 Gbits/sec
    [  5]   1.00-2.00   sec  1.15 GBytes  9.89 Gbits/sec
    [  5]   2.00-3.00   sec  1.15 GBytes  9.89 Gbits/sec
    [  5]   3.00-4.00   sec  1.15 GBytes  9.89 Gbits/sec
    [  5]   4.00-5.00   sec  1.15 GBytes  9.89 Gbits/sec
    [  5]   5.00-6.00   sec  1.15 GBytes  9.89 Gbits/sec
    [  5]   6.00-7.00   sec  1.15 GBytes  9.89 Gbits/sec
    [  5]   7.00-8.00   sec  1.15 GBytes  9.89 Gbits/sec
    [  5]   8.00-9.00   sec  1.15 GBytes  9.90 Gbits/sec
    [  5]   9.00-10.00  sec  1.15 GBytes  9.89 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  sender
    [  5]   0.00-10.00  sec  11.5 GBytes  9.89 Gbits/sec                  receiver
    
    iperf Done.

    And there were no dropped packets.

     

    With the current 6.9.0-beta30 I get a lot of dropped packets and the iperf3 performance is halved.
    But only on the RX side; TX is fine.
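
    The drop counters can be watched during a test with standard tools (a rough sketch; the exact counter names vary by NIC driver):

    ip -s link show eth0
    ethtool -S eth0 | grep -i drop

    The first shows the kernel's per-interface RX/TX totals, including drops; the second shows driver-level counters where the driver exposes them.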


    Since 6.9.0-beta29, the plugin page also loads extremely slowly (approx. 15 seconds). With 6.9.0-beta25 it took less than 2 seconds.

    Here is some network information; attached you can find the diagnostics file:

    root@nas:~# ifconfig
    eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
            ether 00:30:93:14:08:72  txqueuelen 1000  (Ethernet)
            RX packets 206238861  bytes 292428787789 (272.3 GiB)
            RX errors 0  dropped 87152  overruns 0  frame 0
            TX packets 88635371  bytes 128707642804 (119.8 GiB)
            TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
    root@nas:~# ethtool eth0
    Settings for eth0:
            Supported ports: [ TP ]
            Supported link modes:   100baseT/Full
                                    1000baseT/Full
                                    10000baseT/Full
                                    2500baseT/Full
                                    5000baseT/Full
            Supported pause frame use: Symmetric Receive-only
            Supports auto-negotiation: Yes
            Supported FEC modes: Not reported
            Advertised link modes:  100baseT/Full
                                    1000baseT/Full
                                    10000baseT/Full
                                    2500baseT/Full
                                    5000baseT/Full
            Advertised pause frame use: No
            Advertised auto-negotiation: Yes
            Advertised FEC modes: Not reported
            Speed: 10000Mb/s
            Duplex: Full
            Auto-negotiation: on
            Port: Twisted Pair
            PHYAD: 0
            Transceiver: internal
            MDI-X: Unknown
            Supports Wake-on: pg
            Wake-on: g
            Current message level: 0x00000005 (5)
                                   drv link
            Link detected: yes
    hermann@Hacky ~ % iperf3 -c 192.168.2.4
    Connecting to host 192.168.2.4, port 5201
    [  5] local 192.168.2.26 port 52792 connected to 192.168.2.4 port 5201
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec   519 MBytes  4.36 Gbits/sec
    [  5]   1.00-2.00   sec   509 MBytes  4.27 Gbits/sec
    [  5]   2.00-3.00   sec   491 MBytes  4.12 Gbits/sec
    [  5]   3.00-4.00   sec   410 MBytes  3.44 Gbits/sec
    [  5]   4.00-5.00   sec   390 MBytes  3.27 Gbits/sec
    [  5]   5.00-6.00   sec   485 MBytes  4.07 Gbits/sec
    [  5]   6.00-7.00   sec   447 MBytes  3.75 Gbits/sec
    [  5]   7.00-8.00   sec   452 MBytes  3.79 Gbits/sec
    [  5]   8.00-9.00   sec   449 MBytes  3.76 Gbits/sec
    [  5]   9.00-10.00  sec   481 MBytes  4.03 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-10.00  sec  4.53 GBytes  3.89 Gbits/sec                  sender
    [  5]   0.00-10.00  sec  4.52 GBytes  3.89 Gbits/sec                  receiver
    hermann@Hacky ~ % iperf3 -c 192.168.2.4 -P5
    Connecting to host 192.168.2.4, port 5201
    [  5] local 192.168.2.26 port 52831 connected to 192.168.2.4 port 5201
    [  7] local 192.168.2.26 port 52832 connected to 192.168.2.4 port 5201
    [  9] local 192.168.2.26 port 52833 connected to 192.168.2.4 port 5201
    [ 11] local 192.168.2.26 port 52834 connected to 192.168.2.4 port 5201
    [ 13] local 192.168.2.26 port 52835 connected to 192.168.2.4 port 5201
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-1.00   sec   200 MBytes  1.67 Gbits/sec
    [  7]   0.00-1.00   sec   199 MBytes  1.67 Gbits/sec
    [  9]   0.00-1.00   sec   199 MBytes  1.67 Gbits/sec
    [ 11]   0.00-1.00   sec   199 MBytes  1.67 Gbits/sec
    [ 13]   0.00-1.00   sec   200 MBytes  1.67 Gbits/sec
    [SUM]   0.00-1.00   sec   996 MBytes  8.36 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   1.00-2.00   sec   199 MBytes  1.67 Gbits/sec
    [  7]   1.00-2.00   sec   199 MBytes  1.67 Gbits/sec
    [  9]   1.00-2.00   sec   199 MBytes  1.67 Gbits/sec
    [ 11]   1.00-2.00   sec   199 MBytes  1.67 Gbits/sec
    [ 13]   1.00-2.00   sec   199 MBytes  1.67 Gbits/sec
    [SUM]   1.00-2.00   sec   994 MBytes  8.34 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   2.00-3.00   sec   194 MBytes  1.62 Gbits/sec
    [  7]   2.00-3.00   sec   196 MBytes  1.65 Gbits/sec
    [  9]   2.00-3.00   sec   203 MBytes  1.70 Gbits/sec
    [ 11]   2.00-3.00   sec   190 MBytes  1.59 Gbits/sec
    [ 13]   2.00-3.00   sec   186 MBytes  1.56 Gbits/sec
    [SUM]   2.00-3.00   sec   968 MBytes  8.12 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   3.00-4.00   sec   205 MBytes  1.72 Gbits/sec
    [  7]   3.00-4.00   sec   201 MBytes  1.69 Gbits/sec
    [  9]   3.00-4.00   sec   204 MBytes  1.71 Gbits/sec
    [ 11]   3.00-4.00   sec   202 MBytes  1.69 Gbits/sec
    [ 13]   3.00-4.00   sec   170 MBytes  1.43 Gbits/sec
    [SUM]   3.00-4.00   sec   982 MBytes  8.24 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   4.00-5.00   sec   200 MBytes  1.68 Gbits/sec
    [  7]   4.00-5.00   sec   200 MBytes  1.68 Gbits/sec
    [  9]   4.00-5.00   sec   200 MBytes  1.67 Gbits/sec
    [ 11]   4.00-5.00   sec   200 MBytes  1.67 Gbits/sec
    [ 13]   4.00-5.00   sec   189 MBytes  1.58 Gbits/sec
    [SUM]   4.00-5.00   sec   988 MBytes  8.29 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   5.00-6.00   sec   199 MBytes  1.67 Gbits/sec
    [  7]   5.00-6.00   sec   199 MBytes  1.67 Gbits/sec
    [  9]   5.00-6.00   sec   199 MBytes  1.67 Gbits/sec
    [ 11]   5.00-6.00   sec   198 MBytes  1.66 Gbits/sec
    [ 13]   5.00-6.00   sec   196 MBytes  1.64 Gbits/sec
    [SUM]   5.00-6.00   sec   991 MBytes  8.31 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   6.00-7.00   sec   187 MBytes  1.56 Gbits/sec
    [  7]   6.00-7.00   sec   186 MBytes  1.56 Gbits/sec
    [  9]   6.00-7.00   sec   186 MBytes  1.56 Gbits/sec
    [ 11]   6.00-7.00   sec   186 MBytes  1.56 Gbits/sec
    [ 13]   6.00-7.00   sec   187 MBytes  1.57 Gbits/sec
    [SUM]   6.00-7.00   sec   932 MBytes  7.82 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   7.00-8.00   sec   142 MBytes  1.19 Gbits/sec
    [  7]   7.00-8.00   sec   142 MBytes  1.19 Gbits/sec
    [  9]   7.00-8.00   sec   142 MBytes  1.19 Gbits/sec
    [ 11]   7.00-8.00   sec   142 MBytes  1.19 Gbits/sec
    [ 13]   7.00-8.00   sec   142 MBytes  1.19 Gbits/sec
    [SUM]   7.00-8.00   sec   708 MBytes  5.94 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   8.00-9.00   sec   139 MBytes  1.17 Gbits/sec
    [  7]   8.00-9.00   sec   139 MBytes  1.17 Gbits/sec
    [  9]   8.00-9.00   sec   139 MBytes  1.17 Gbits/sec
    [ 11]   8.00-9.00   sec   139 MBytes  1.17 Gbits/sec
    [ 13]   8.00-9.00   sec   139 MBytes  1.17 Gbits/sec
    [SUM]   8.00-9.00   sec   696 MBytes  5.84 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [  5]   9.00-10.00  sec   184 MBytes  1.54 Gbits/sec
    [  7]   9.00-10.00  sec   184 MBytes  1.54 Gbits/sec
    [  9]   9.00-10.00  sec   184 MBytes  1.54 Gbits/sec
    [ 11]   9.00-10.00  sec   184 MBytes  1.54 Gbits/sec
    [ 13]   9.00-10.00  sec   183 MBytes  1.54 Gbits/sec
    [SUM]   9.00-10.00  sec   918 MBytes  7.70 Gbits/sec
    - - - - - - - - - - - - - - - - - - - - - - - - -
    [ ID] Interval           Transfer     Bitrate
    [  5]   0.00-10.00  sec  1.80 GBytes  1.55 Gbits/sec                  sender
    [  5]   0.00-10.00  sec  1.80 GBytes  1.55 Gbits/sec                  receiver
    [  7]   0.00-10.00  sec  1.80 GBytes  1.55 Gbits/sec                  sender
    [  7]   0.00-10.00  sec  1.80 GBytes  1.55 Gbits/sec                  receiver
    [  9]   0.00-10.00  sec  1.81 GBytes  1.56 Gbits/sec                  sender
    [  9]   0.00-10.00  sec  1.81 GBytes  1.55 Gbits/sec                  receiver
    [ 11]   0.00-10.00  sec  1.79 GBytes  1.54 Gbits/sec                  sender
    [ 11]   0.00-10.00  sec  1.79 GBytes  1.54 Gbits/sec                  receiver
    [ 13]   0.00-10.00  sec  1.75 GBytes  1.50 Gbits/sec                  sender
    [ 13]   0.00-10.00  sec  1.75 GBytes  1.50 Gbits/sec                  receiver
    [SUM]   0.00-10.00  sec  8.96 GBytes  7.70 Gbits/sec                  sender
    [SUM]   0.00-10.00  sec  8.95 GBytes  7.69 Gbits/sec                  receiver
    
    iperf Done.

     

    nas.fritz.box-diagnostics-20201018-1758.zip

    17 minutes ago, Tulip said:

    I can't get CPU pinning on Docker working. Any known beta bug?

    Please post a separate bug report.

    3 hours ago, Toskache said:

    With 6.9.0-beta30 I can observe a performance issue with my SoNNeT G10E-1X-E3 network card, which is based on the Aquantia AQC-107S chipset.
    With 6.9.0-beta25 I got the following iperf3 results:

    Nice report, but I'd like to ask you to post it as a separate bug report.

    1 hour ago, bdydrp said:

    Just upgraded to Beta 30. I get this error when I click VNC Remote for my VM:

     

    [screenshot: noVNC error]

    Try clearing your browser's cache.


    What I care about is when Radeon™ Vega 11 Graphics will be supported; I don't know how long we'll have to wait.

    Many people can't wait.

     

    On 10/17/2020 at 8:01 PM, Pourko said:

    Now that the 5.9 kernel has gone stable, and 5.9, not 5.8, will be the one designated "longterm", maybe we should try going with the 5.9 series?

    Where do you see that 5.9 will be the next LTS?

    59 minutes ago, limetech said:

    Where do you see that 5.9 will be the next LTS?

    They might be basing that on the fact that, at least since the 4.x kernel line, every fifth version has been the LTS kernel (4.4, 4.9, 4.14, 4.19, 5.4; 5.9 would be the next in that pattern).


    Hi,

    I have tried to read up, but I don't know if anyone has had the same error as I have.

     

    I have a major performance decrease after beta25, I think it was. It took about 10 hours to update a Windows 10 VM, and my Nextcloud Docker is unbearable! I have looked at the excessive SSD writes, but I don't have any of that.

     

    Here's the 6.8.3 cache write under normal operation; the graph looks like this most of the time.
    [screenshot: 6.8.3 cache write graph]

     

    Compared to 6.9.0-beta30, it's about the same, though over a different timeline.

    [screenshot: 6.9.0-beta30 cache write graph]

     

    Here is where I changed from beta30 to 6.8.3 while unpacking a large file from Windows Server: first at 16 MB/s, then at 55+ MB/s.
    [screenshot: transfer speed graph]

     

    My cache is a RAID10 BTRFS pool.

    I'm on 6.8.3 now, so a diagnostic may not help much, but I can go back to 6.9.0-beta30 to create one if it's of any help. Or is it the same known problem with btrfs, and do I need to reformat the cache disks in the pool to make it work?

     

    Thanks

     

     

    On 10/17/2020 at 9:01 PM, Pourko said:

    Now that the 5.9 kernel has gone stable, and 5.9, not 5.8, will be the one designated "longterm", maybe we should try going with the 5.9 series?

    Renowned kernel maintainer Greg Kroah-Hartman revealed in August 2019 that the next LTS kernel would be the “last released” kernel of the year.

     

    So 5.10 will be the next LTS.

    7 hours ago, Dazog said:

    Renowned kernel maintainer Greg Kroah-Hartman revealed in August 2019 that the next LTS kernel would be the “last released” kernel of the year.

     

    So 5.10 will be the next LTS.

    That was back in August.  At this point it's not so certain that 5.10-stable will be out by the end of the year, so it's very likely they'll go back to the original plan and make 5.9 the next LTS.

     

    In any case, 5.9 has some really cool improvements over 5.8.  Worth considering.

     

    Not that it matters much though, as by the time 6.9-beta turns into 6.9-release, the kernel will be at 5.14-LTS. :-)

    On 10/6/2020 at 2:02 PM, Dava2k7 said:

    Hi, thanks for the update. Any chance of a fix for VMs? I keep getting these issues in my syslog:

     

    vfio-pci 0000:09:00.0: vfio_ecap_init: hiding ecap 0x19@0x900
    vfio-pci 0000:09:00.0: No more image in the PCI ROM

     

    vfio-pci 0000:09:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=io+mem

     

    and this in my VM log:

     

    Domain id=2 is tainted: high-privileges
    Domain id=2 is tainted: host-cpu

     

    I get no output on my graphics card whatsoever. Any ideas? Could we maybe get a fix in beta31, please?

    movbuster-diagnostics-20201006-1409.zip

    Hi, is there any news or an update as of yet? And can we expect a fix in beta31?

    On 10/20/2020 at 6:53 PM, jowe said:

    I have a major performance decrease after beta25, I think it was. It took about 10 hours to update a Windows 10 VM, and my Nextcloud Docker is unbearable! [...]

    I did some more testing: I reformatted my cache pool with 1 MiB alignment, and after that I tested with beta30 and 6.8.3.
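
    To double-check the alignment, the first partition's start sector can be read directly (a sketch; sdX stands in for the actual cache device):

    parted /dev/sdX unit s print
    cat /sys/block/sdX/sdX1/start

    A start sector of 2048 on a 512-byte-sector device corresponds to the 1 MiB boundary.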

     

    It's a WS2019 VM with no other differences than these: on beta30 I have virtio-net and i440fx-5.2; on 6.8.3, virtio and i440fx-4.2.

     

                    beta30     6.8.3
    Disk Mark        7259     17187
    CPU Mark          933      3135
    Memory Mark       813      1879

    Quite a difference! 

    (I forgot to pull the diagnostics file while on beta30.)

     

    [screenshot: benchmark results]

     

    On 10/21/2020 at 2:13 PM, Dava2k7 said:

    Hi, is there any news or an update as of yet? And can we expect a fix in beta31?

    I have had the same problem with GPU passthrough and the tainted CPU messages.


    I have been running the beta for a while without issue; REALLY liking the multiple cache pools.

     

    Is there a way to have directory splits on the cache pools happen like on the array?

     

    For example, on the array I will have "share/random folders" and will move the random folders to the disk of my choosing. I then disable directory splitting on the share, and any new files go to the disk with the existing folder.

     

    I would like to do this with the cache pools as well: have a single share but spread the subfolders over multiple cache pools, with future files going into the same folders (put high-traffic data on SSDs I don't care about, the most-used data on the fastest drives, etc.).

     

    Right now my only choices seem to be cache-only, where any new files get put onto the single selected cache pool, or cache-disabled, where they get put on the array.

     

    Yes, I could create more shares to split things up, but that makes everything a lot more complicated, having to map even more folders on all my computers and making them more disorganized over SMB.

     

    Just a suggestion.

    On 10/22/2020 at 6:02 PM, TexasUnraid said:

    Is there a way to have directory splits on the cache pools happen like on the array?

    mergerfs ?

    3 minutes ago, Lev said:

    mergerfs ?

    I am new to both Linux and Unraid; I have no idea what this refers to.

     

    A quick Google search suggests it is some kind of Linux software, but is it natively supported in Unraid? I'd rather not go outside an ecosystem I don't fully understand if possible.

     

    I was thinking of simply allowing multiple cache pools to be selected in the share setup, thus letting them work like the array. This would be even nicer once multiple arrays and possibly ZFS support are introduced.

    2 minutes ago, TexasUnraid said:

    A quick Google search suggests it is some kind of Linux software, but is it natively supported in Unraid? I'd rather not go outside an ecosystem I don't fully understand if possible.

     

    Makes sense. Never mind my suggestion.
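
    For anyone curious, mergerfs is third-party union-filesystem software, not part of stock Unraid. A minimal sketch of the idea, with hypothetical pool paths:

    mergerfs -o category.create=epmfs /mnt/cache_fast:/mnt/cache_bulk /mnt/pooled

    Its 'epmfs' create policy places new files on a branch where the parent directory already exists, which is roughly the behavior described above.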

    1 minute ago, cdoublejj said:

    If I read the beta write-ups correctly, it AUTO-formats the SSD drives to the correct alignment?

    Correct, if no partition exists; if one does, first wipe the device with blkdiscard.
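
    For reference, the manual wipe looks like this (a sketch; /dev/sdX stands in for the SSD, and it destroys all data on the device):

    blkdiscard /dev/sdX

    After that, the device presents no partition table, so a fresh, correctly aligned partition gets created.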



