• Unraid OS version 6.9.0-beta1 available


    limetech

    Welcome to 6.9 beta release development!

     

    This initial -beta1 release is exactly the same code set as 6.8.3, except that the Linux kernel has been updated to 5.5.8 (the latest stable as of this date).  We have only done basic testing on this release; normally we would not release 'beta' code, but some users require the hardware support offered by the latest kernel, along with the security updates added in 6.8.

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Unfortunately, none of our out-of-tree drivers will build with this kernel.  Some were reverted to their in-tree versions, and some were omitted.  We anticipate that by the time 6.9 enters the 'rc' phase, we'll be on the 5.6 kernel, and hopefully some of these out-of-tree drivers can be restored.

     

    We will be phasing in some new features, improvements, and changes to address certain bugs over the coming weeks - these will all be -beta releases.  Don't be surprised if you see a jump in the beta number; some releases will be private.

     

    Version 6.9.0-beta1 2020-03-06

    Linux kernel:

    • version 5.5.8
    • igb: in-tree
    • ixgbe: in-tree
    • r8125: in-tree
    • r750: (removed)
    • rr3740a: (removed)
    • tn40xx: (removed)



    User Feedback




    I might have missed it, but is there any chance we'll see iSCSI support in this beta, or in the 6.9 RC?

    I know a lot of people would like to see it :D


    When do you think we'll start to see proper VFIO NVMe driver support for guest VMs? It's great that we can pass through NVMe drives directly to Windows 10 VMs, but it would be even nicer to have Windows use an NVMe driver rather than the poorly performing SCSI alternative.

    On 5/24/2020 at 2:11 PM, mikeyosm said:

    When do you think we'll start to see proper VFIO NVMe driver support for guest VMs? It's great that we can pass through NVMe drives directly to Windows 10 VMs, but it would be even nicer to have Windows use an NVMe driver rather than the poorly performing SCSI alternative.

    Just read this....

     

    https://blog.christophersmart.com/2019/12/18/kvm-guests-with-emulated-ssd-and-nvme-drives/

     

    Anyone tried this and measured performance compared with passthrough nvme?
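
    For anyone who wants to experiment with it, the bare QEMU form of an emulated NVMe disk looks roughly like the sketch below. The image path, drive id, and serial number are just placeholders, and on Unraid the equivalent would normally have to go into the VM's libvirt XML (e.g. via a qemu:commandline block), so treat this as an illustration rather than a drop-in config:

    # Minimal sketch: attach a raw disk image to the guest as an emulated NVMe device.
    # Other VM options are omitted; the image path and serial are placeholders.
    qemu-system-x86_64 \
      -drive file=/mnt/user/domains/test/nvme.img,if=none,format=raw,id=nvmedrv0 \
      -device nvme,drive=nvmedrv0,serial=nvme-test-0001

    The blog post above covers the libvirt side in more detail; the point is that the guest then sees an NVMe controller and uses its native NVMe driver instead of virtio-scsi.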

    1 hour ago, david279 said:

    I tried this recently with a Windows VM. Read performance was way higher than virtio blk/scsi, but write performance was about the same. I just set up a basic Windows VM, so I didn't test gaming or anything like that.

    Good to hear. I hope to be able to test this in the next month or so when my Z490 10900K system is built.


    Hey all,

    Just started running this build to play around with Quick Sync on the newer Intel CPUs (Intel Pentium Gold G5400 - Coffee Lake 9th gen). Have hit an issue:

     

    I'm finding the machine no longer shuts down or reboots successfully. This is particularly frustrating as it triggers a parity check each time I have to forcefully power it off.

     

    ** Screenshot of it hung on the console: https://u.pcloud.link/publink/show?code=XZPIXEkZGxNefdLs4L0W0fED3P5ARyKKzBHV

    ** Log/server diag: https://u.pcloud.link/publink/show?code=XZONXEkZikGon3C8JCVcYHNv8W7fTyByKUHy

     

    Specifically, it seems to be stuck here, as if it hasn't successfully stopped everything while trying to take down the array:

    Jun  7 15:09:01 server emhttpd: Retry unmounting user share(s)...
    Jun  7 15:09:06 server emhttpd: shcmd (2730): umount /mnt/user
    Jun  7 15:09:06 server root: umount: /mnt/user: target is busy.
    Jun  7 15:09:06 server emhttpd: shcmd (2730): exit status: 32
    Jun  7 15:09:06 server emhttpd: shcmd (2731): rmdir /mnt/user
    Jun  7 15:09:06 server root: rmdir: failed to remove '/mnt/user': Device or resource busy
    Jun  7 15:09:06 server emhttpd: shcmd (2731): exit status: 1

     

    Any tips would be much appreciated!

     

    Edit:
    I rebooted just now and it was fine; I'm not sure what was holding the mount open - maybe a stale SSH session or something - but Unraid might just need to report to the user/logs what's holding the mount open, and try to forcefully kill those things (lsof?).
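
    For what it's worth, here is a rough sketch of the sort of check that could be run from the console or an SSH session before forcing a power-off, assuming the array stop really is hung on a busy /mnt/user - lsof and fuser can both show what is pinning the mount:

    # List the processes holding files open on the /mnt/user filesystem.
    fuser -vm /mnt/user
    lsof /mnt/user

    # If those processes are safe to kill, force-kill them so the unmount can
    # proceed (use with care - this terminates them without warning).
    fuser -km /mnt/user

    Having emhttpd log that kind of output automatically when an unmount fails would make these hangs much easier to diagnose.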


    Just wanted to say I’ve been running this for five days and needed it in order to get networking on a new i7-10700K. I’ve been transferring data to it, running Docker containers and a VM, and haven’t had any issues.




