• Unraid OS version 6.9.0-beta29 available


    limetech

    Back in the saddle ... Sorry for the long delay in publishing this release.  Aside from including some delicate coding, this release was delayed due to several team members, chiefly myself, having to deal with various non-work-related challenges which greatly slowed the pace of development.  That said, there is quite a bit in this release, LimeTech is growing and we have many exciting features in the pipe - more on that in the weeks to come.  Thanks to everyone for their help and patience during this time.

    Cheers,

    -Tom

     

    IMPORTANT: This is Beta software.  We recommend running on test servers only!

     

    KNOWN ISSUE: with this release we have moved to the latest Linux 5.8 stable kernel.  However, we have discovered that a regression has been introduced in the mpt3sas driver used by many LSI chipsets, e.g., the LSI 9201-16e.  It typically looks like this on the System Devices page:

    Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

    The problem is that devices are no longer recognized.  There are already bug reports pertaining to this issue:

    https://bugzilla.kernel.org/show_bug.cgi?id=209177

    https://bugzilla.redhat.com/show_bug.cgi?id=1878332

     

    We have reached out to the maintainer to see if a fix can be expedited; however, we feel we can neither revert to the 5.7 kernel nor hold the release over this issue.  We are monitoring the situation and will publish a release with a fix as soon as possible.

     

    ANOTHER known issue: we have added additional btrfs balance options:

    • raid1c3
    • raid1c4
    • and modified the raid6 balance operation to set metadata to raid1c3 (previously raid1).

     

    However, we have noticed that applying one of these balance filters to a completely empty volume leaves some data extents with the previous profile.  The workaround is simply to run the same balance again.  We consider this a btrfs bug, and if no fix is forthcoming we'll make the second balance the default in code.  For now, it's left as-is.

     

    THE PRIMARY FOCUS of this release is to put tools in place to help users migrate data off SSD-based pools so that those devices may be re-partitioned if necessary, and then migrate the data back.

     

    What are we talking about?  For several years now, storage devices managed by Unraid OS are formatted with an "Unraid Standard Partition Layout".  This layout has partition 1 starting at offset 32KiB from the start of the device, and extending to the end of the device.  (For devices with 512-byte sectors, partition 1 starts in sector 64; for 4096-byte sector size devices, partition 1 starts in sector 8.)  This layout achieves maximum storage efficiency and ensures partition 1 starts on a 4096-byte boundary.
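    The sector arithmetic behind these offsets works out as follows (a minimal illustration of the layout described above):

    ```python
    # Partition-1 start sector for the two alignments discussed above.
    # The byte offset must divide evenly by the sector size for the layout to be valid.

    def partition1_start_sector(offset_bytes: int, sector_size: int) -> int:
        """Return the LBA of partition 1 for a given byte offset and sector size."""
        if offset_bytes % sector_size != 0:
            raise ValueError("offset is not sector-aligned")
        return offset_bytes // sector_size

    KIB = 1024
    MIB = 1024 * KIB

    # Unraid standard layout: partition 1 at 32KiB
    assert partition1_start_sector(32 * KIB, 512) == 64    # 512-byte sectors
    assert partition1_start_sector(32 * KIB, 4096) == 8    # 4096-byte sectors

    # New SSD layout: partition 1 at 1MiB
    assert partition1_start_sector(1 * MIB, 512) == 2048
    assert partition1_start_sector(1 * MIB, 4096) == 256
    ```

    Note that 32KiB is already a multiple of 4096, which is why both layouts keep partition 1 on a 4096-byte boundary; the 1MiB offset additionally matches the larger internal erase-block granularity of many SSDs.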

     

    Through user reports and extensive testing, however, we have noted that many modern SSDs, in particular the Samsung EVO series, do not perform at their best with this partition layout, and appear to write far more data than one would expect.  Since SSD endurance is finite, writes should be minimized as much as possible.

     

    The solution to the "excessive SSD write" issue is to position partition 1 at offset 1MiB from the start of the device instead of at 32KiB.  This will both increase performance and decrease writes on affected devices.  Do you absolutely need to re-partition your SSDs?  Probably not, depending on which devices you have.  Click on a device from Main, scroll down to Attributes and take a look at Data units written.  If this is increasing very rapidly, you would probably benefit from re-partitioning.
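    For NVMe devices, the Data units written counter can also be read with smartctl, and per the NVMe spec one data unit is 1000 × 512 bytes.  Here is a small sketch converting the counter to TiB; the JSON shape is modeled on `smartctl -A -j` output and the sample number is made up:

    ```python
    import json

    NVME_DATA_UNIT_BYTES = 1000 * 512  # NVMe spec: one data unit = 512,000 bytes

    def data_units_to_tib(units: int) -> float:
        """Convert the NVMe 'Data Units Written' counter to TiB."""
        return units * NVME_DATA_UNIT_BYTES / 2**40

    # Hypothetical excerpt of `smartctl -A -j /dev/nvme0` output:
    sample = json.loads(
        '{"nvme_smart_health_information_log": {"data_units_written": 1954793}}'
    )
    units = sample["nvme_smart_health_information_log"]["data_units_written"]
    print(f"{data_units_to_tib(units):.2f} TiB written")
    ```

    Comparing this number before and after a week of normal use gives a rough write rate to judge whether re-partitioning is worthwhile.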

     

    Note: if you have already (re)formatted using a previous 6.9-beta release, the proper partition layout for an SSD smaller than 2TiB will appear like this on the Device Information page:

    Partition format:   MBR: 1MiB-aligned

    For an SSD larger than 2TiB:

    Partition format:   GPT: 1MiB-aligned

     

    Here's what's in this release to help facilitate re-partitioning of SSD devices:

     

    An Erase button, which appears on the Device Information page.

     

    The Erase button may be used to erase (delete) content from a volume. A volume is either the content of an unRAID array data disk, or the content of a pool. In the case of an unRAID disk, only that device is erased; in the case of a multiple-device pool ALL devices of the pool are erased.

    The extent of Erase varies depending on whether the array is Stopped, or Started in Maintenance mode (if started in Normal mode, all volume Erase buttons are disabled).

    Started/Maintenance mode: in this case the LUKS header (if any) and any file system within partition 1 are erased.  The MBR (master boot record) is not erased.

    Stopped: in this case, unRAID array disk volumes and pool volumes are treated a little differently:

    • unRAID array disk volumes - if Parity and/or Parity2 is valid, the operation proceeds exactly as above, that is, only the content of partition 1 is erased and the MBR (master boot record) is left as-is; if there is no valid parity, the MBR is erased as well.
    • Pool volumes - partition 1 is erased on all devices within the pool, and then the MBR is erased as well.


    The purpose of erasing the MBR is to permit re-partitioning of the device if required.  Upon format, Unraid OS will position partition 1 at 32KiB for HDD devices and at 1MiB for SSD devices.

     

    Note that erase does not overwrite the storage content of a device, it simply clears the LUKS header if present (which effectively makes the device unreadable), and file system and MBR signatures.  A future Unraid OS release may include the option of overwriting the data.
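    To make "clears ... signatures" concrete, here is a rough sketch of the idea behind wipefs-style erasure, operating on an in-memory image rather than a real device.  The magic values and offsets come from the LUKS1, XFS and btrfs on-disk formats:

    ```python
    # Known signatures: (name, byte offset within partition, magic bytes)
    SIGNATURES = [
        ("luks",  0x00000, b"LUKS\xba\xbe"),   # LUKS1 header magic
        ("xfs",   0x00000, b"XFSB"),           # XFS superblock magic
        ("btrfs", 0x10040, b"_BHRfS_M"),       # btrfs superblock at 64KiB + 0x40
    ]

    def find_signatures(image: bytes) -> list:
        """Return names of signatures present in a raw partition image."""
        return [name for name, off, magic in SIGNATURES
                if image[off:off + len(magic)] == magic]

    def wipe_signatures(image: bytearray) -> None:
        """Zero the magic bytes in place - a soft 'Erase' of the metadata only."""
        for _, off, magic in SIGNATURES:
            if bytes(image[off:off + len(magic)]) == magic:
                image[off:off + len(magic)] = bytes(len(magic))

    # A fake 128KiB image with a btrfs signature planted at the right offset:
    img = bytearray(128 * 1024)
    img[0x10040:0x10048] = b"_BHRfS_M"
    assert find_signatures(bytes(img)) == ["btrfs"]
    wipe_signatures(img)
    assert find_signatures(bytes(img)) == []
    ```

    As the text notes, only the few magic bytes are zeroed; the bulk of the data remains on the device until overwritten or discarded.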

     

    Additional "Mover" capabilities.

     

    Since SSD pools are commonly used to store vdisk images, shfs/mover is now aware of:

    • sparse files - when a sparse file is moved from one volume to another, its sparseness is preserved
    • NoCOW attribute - when a file or directory in a btrfs volume has the NoCOW attribute set, the attribute is preserved when the file or directory is moved to another btrfs volume.
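    The sparse-file half of this can be sketched as follows, assuming Linux SEEK_DATA/SEEK_HOLE support (an illustration of the technique, not the actual shfs/mover code):

    ```python
    import os

    def is_sparse(path: str) -> bool:
        """A file is sparse when it occupies fewer blocks than its size implies."""
        st = os.stat(path)
        return st.st_blocks * 512 < st.st_size

    def copy_sparse(src: str, dst: str) -> None:
        """Copy only the data extents, recreating holes at the destination."""
        with open(src, "rb") as fin, open(dst, "wb") as fout:
            size = os.fstat(fin.fileno()).st_size
            pos = 0
            while pos < size:
                try:
                    data_start = os.lseek(fin.fileno(), pos, os.SEEK_DATA)
                except OSError:            # ENXIO: no more data extents
                    break
                hole_start = os.lseek(fin.fileno(), data_start, os.SEEK_HOLE)
                os.lseek(fin.fileno(), data_start, os.SEEK_SET)
                fout.seek(data_start)      # seeking past EOF creates a hole
                fout.write(fin.read(hole_start - data_start))
                pos = hole_start
            fout.truncate(size)            # preserve a trailing hole
    ```

    A naive byte-for-byte copy of, say, a 30G vdisk image with 2G of actual data would allocate the full 30G at the destination; enumerating data extents as above keeps the holes unallocated.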

     

    Note that btrfs subvolumes are not preserved.  A future Unraid OS release may include preservation of btrfs subvolumes.

     

    Ok how do I re-partition my SSD pools?

     

    Outlined here are two basic methods:

    1. "Mover" method - The idea is to use the Mover to copy all data from the pool to a target device in the unRAID array.  Then erase all devices of the pool, and reformat.  Finally use the Mover to copy all the data back.
    2. "Unassign/Re-assign" method - The idea here is, one-by-one, remove a device from a btrfs pool, balance the pool with reduced device count, then re-assign the device back to the pool, and balance pool back to include the device.  This works because Unraid OS will re-partition new devices added to an existing btrfs pool.  This method is not recommended for a pool with more than 2 devices since the first balance operation may be write-intensive, and writes are what we're trying to minimize.  Also it can be tricky to determine if enough free space really exists after removing a device to rebalance the pool.  Finally, this method will introduce a time window where your data is on non-redundant storage.
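    For the free-space question in method 2, a back-of-the-envelope check may help.  This sketch assumes the pool keeps a raid1 data profile on the remaining devices, which is the concern for pools of three or more devices (a 2-device pool drops to a single profile when a device is removed):

    ```python
    def raid1_capacity(device_sizes: list) -> int:
        """Usable bytes of a btrfs raid1 pool: every chunk is stored twice,
        and the two copies must land on different devices."""
        total = sum(device_sizes)
        largest = max(device_sizes)
        # The largest device can mirror at most what all the others hold.
        return min(total // 2, total - largest)

    def can_remove_device(device_sizes: list, index: int, used: int) -> bool:
        """Can `used` bytes of raid1 data be rebalanced onto the remaining devices?"""
        remaining = device_sizes[:index] + device_sizes[index + 1:]
        if len(remaining) < 2:
            return False            # raid1 needs at least two devices
        return raid1_capacity(remaining) >= used

    TB = 10**12
    # Three 1TB devices, 0.8TB used: removing one leaves 1TB of raid1 capacity.
    assert can_remove_device([TB, TB, TB], 0, int(0.8 * TB))
    # Three 1TB devices, 1.2TB used: the data no longer fits after removal.
    assert not can_remove_device([TB, TB, TB], 0, int(1.2 * TB))
    ```

    In practice you would also want some headroom beyond the bare minimum, since btrfs balance needs free chunk space to work in.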

     

    No matter which method, if you have absolutely critical data in the pool we strongly recommend making an independent backup first (you are already doing this right?).

     

     

    Mover Method

    This procedure presumes a multi-device btrfs pool containing one or more cache-only or cache-prefer shares.

     

    1. With the array Started, stop any VMs and/or Docker applications which may be accessing the pool you wish to re-partition.  Make sure no other external I/O is targeting this pool.

     

    2. For each share on the pool, go to the Share Settings page and make some adjustments:

    • change from cache-only (or cache-prefer) to cache-yes
    • assign an array disk or disks via the Include mask to receive the data.  If you wish to preserve the NoCOW attribute (Copy-on-write set to No) on files and directories, these disks should be formatted with btrfs.  Of course, ensure there is enough free space to receive the data.

     

    3. Now go back to Main and click the Move button.  This will move the data of each share to the target array disk(s).

     

    4. Verify no data is left on the pool, Stop the array, click on the pool and then click the Erase button.

     

    5. Start the array and the pool should appear Unformatted - go ahead and Format the pool (this is what will re-write the partition layout).

     

    6. Back to Share Settings page; for each above share:

    • change from cache-yes to cache-prefer

     

    7. On Main page click Move button.  This will move data of each share back to the pool.

     

    8. Finally, back to Share Settings page; for each share:

    • change from cache-prefer back to cache-only if desired

     

    Unassign/Re-assign Method

    1. Stop array and unassign one of the devices from your existing pool; leave device unassigned.
    2. Start array.  A balance will take place on your existing pool.  Let the balance complete.
    3. Stop array.  Re-assign the device, adding it back to your existing pool.
    4. Start array.  The added device will get re-partitioned and a balance will start moving data to the new device.  Let the balance complete.
    5. Repeat steps 1-4 for the other device in your existing pool.

     

    What's happening here is this:

    At the completion of step 2, btrfs will 'delete' the missing device from the volume and wipe the btrfs signature from it.

    At the beginning of step 4, Unraid OS will re-partition the new device being added to an existing pool.

     

    I don't care about preserving data in the pool.  In this case just Stop the array, click on the pool and then click Erase.  Start the array and Format the pool - done.  Useful to know: when Linux creates a file system on an SSD, it first performs a "blkdiscard" on the entire partition.  Similarly, "blkdiscard" is initiated on partition 1 of a new device added to an existing btrfs pool.

     

    What about array devices?  If you have SSDs in the unRAID array, the only way to safely re-partition those devices is to either remove them from the array, or remove parity devices from the array.  This is because re-partitioning will invalidate parity.  Note also that the re-partitioned volume will be slightly smaller.

     


     

    Version 6.9.0-beta29 2020-09-27 (vs -beta25)

    Base distro:

    • at-spi2-core: version 2.36.1
    • bash: version 5.0.018
    • bridge-utils: version 1.7
    • brotli: version 1.0.9
    • btrfs-progs: version 5.6.1
    • ca-certificates: version 20200630
    • cifs-utils: version 6.11
    • cryptsetup: version 2.3.4
    • curl: version 7.72.0 (CVE-2020-8231)
    • dbus: version 1.12.20
    • dnsmasq: version 2.82
    • docker: version 19.03.13
    • ethtool: version 5.8
    • fribidi: version 1.0.10
    • fuse3: version 3.9.3
    • git: version 2.28.0
    • glib2: version 2.66.0 build 2
    • gnutls: version 3.6.15
    • gtk+3: version 3.24.23
    • harfbuzz: version 2.7.2
    • haveged: version 1.9.13
    • htop: version 3.0.2
    • iproute2: version 5.8.0
    • iputils: version 20200821
    • jasper: version 2.0.21
    • jemalloc: version 5.2.1
    • libX11: version 1.6.12
    • libcap-ng: version 0.8
    • libevdev: version 1.9.1
    • libevent: version 2.1.12
    • libgcrypt: version 1.8.6
    • libglvnd: version 1.3.2
    • libgpg-error: version 1.39
    • libgudev: version 234
    • libidn: version 1.36
    • libpsl: version 0.21.1 build 2
    • librsvg: version 2.50.0
    • libssh: version 0.9.5
    • libvirt: version 6.6.0 (CVE-2020-14339)
    • libxkbcommon: version 1.0.1
    • libzip: version 1.7.3
    • lmdb: version 0.9.26
    • logrotate: version 3.17.0
    • lvm2: version 2.03.10
    • mc: version 4.8.25
    • mpfr: version 4.1.0
    • nano: version 5.2
    • ncurses: version 6.2_20200801
    • nginx: version 1.19.1
    • ntp: version 4.2.8p15 build 2
    • openssl-solibs: version 1.1.1h
    • openssl: version 1.1.1h
    • p11-kit: version 0.23.21
    • pango: version 1.46.2
    • php: version 7.4.10 (CVE-2020-7068)
    • qemu: version 5.1.0 (CVE-2020-10717, CVE-2020-10761)
    • rsync: version 3.2.3
    • samba: version 4.12.7 (CVE-2020-1472)
    • sqlite: version 3.33.0
    • sudo: version 1.9.3
    • sysvinit-scripts: version 2.1 build 35
    • sysvinit: version 2.97
    • ttyd: version 1.6.1
    • util-linux: version 2.36
    • wireguard-tools: version 1.0.20200827
    • xev: version 1.2.4
    • xf86-video-vesa: version 2.5.0
    • xfsprogs: version 5.8.0
    • xorg-server: version 1.20.9 build 3
    • xterm: version 360
    • xxHash: version 0.8.0

    Linux kernel:

    • version 5.8.12
    • kernel-firmware: version kernel-firmware-20200921_49c4ff5
    • oot: Realtek r8152: version 2.13.0
    • oot: Tehuti tn40xx: version 0.3.6.17.3

    Management:

    • btrfs: include 'discard=async' mount option
    • emhttpd: avoid using remount to set additional mount options
    • emhttpd: added wipefs function (webgui 'Erase' button)
    • shfs: move: support sparse files
    • shfs: move: preserve ioctl_iflags when moving between same file system types
    • smb: remove setting 'aio' options in smb.conf, use samba defaults
    • webgui: Update noVNC to v1.2.0
    • webgui: Docker: more intuitive handling of images
    • webgui: VMs: more intuitive handling of image selection
    • webgui: VMs: Fixed: rare cases vdisk defaults to Auto when it should be Manual
    • webgui: VMs: Fixed: Adding NICs or VirtFS mounts to a VM is limited
    • webgui: VM manager: new setting "Network Model"
    • webgui: Added new setting "Enable user share assignment" to cache pool
    • webgui: Dashboard: style adjustment for server icon
    • webgui: Update jGrowl to version 1.4.7
    • webgui: Fix ' appearing
    • webgui: VM Manager: add 'virtio-win-0.1.189-1' to VirtIO-ISOs list
    • webgui: Prevent bonded nics from being bound to vfio-pci too
    • webgui: better handling of multiple nics with vfio-pci
    • webgui: Suppress WG on Dashboard if no tunnels defined
    • webgui: Suppress Autofan link on Dashboard if plugin not installed
    • webgui: Detect invalid session and logout current tab
    • webgui: Added support for private docker registries with basic auth or no auth, and improvements for token based authentication
    • webgui: Fix notifications continually reappearing
    • webgui: Support links on notifications
    • webgui: Add raid1c3 and raid1c4 btrfs pool balance options.
    • webgui: For raid6 btrfs pool data profile use raid1c3 metadata profile.
    • webgui: Permit file system configuration when array Started for Unmountable volumes.
    • webgui: Fix not able to change parity check schedule if no cache pool present
    • webgui: Disallow "?" in share names
    • webgui: Add customizable timeout when stopping containers



    User Feedback

    Recommended Comments



    4 minutes ago, JorgeB said:

    df reports the correct used and free space for every possible combination (AFAIK) except raid1 with an odd number of devices, but that's a btrfs bug and should be fixed in the near future.

    Not any more in my testing.  This is a maddening subject.

    30 minutes ago, John_M said:

    So I have (in round figures) four 2TB disks, which should give me 6TB of usable storage, with 2TB being used for parity. I

    In my testing, latest kernel, latest btrfs-tools, there is no way to get proper size for raid5 and raid6 profiles.

    3 minutes ago, limetech said:

    In my testing, latest kernel, latest btrfs-tools, there is no way to get proper size for raid5 and raid6 profiles.

    If there's any way to get FREE reported correctly, as it was in beta25, I'd be happy to forgo accuracy in the other two values, but if that isn't possible I'll move on and not mention it again.


    Getting this error when I passthrough a whole device to a VM:

     

    2020-09-29 13:09:01.351+0000: 7249: info : libvirt version: 6.6.0
    2020-09-29 13:09:01.351+0000: 7249: info : hostname: chipsServer
    2020-09-29 13:09:01.351+0000: 7249: warning : qemuDomainObjTaint:5983 : Domain id=1 name='Debian' uuid=dc6cf767-855b-f071-bfd7-d915e74a61e9 is tainted: high-privileges
    2020-09-29 13:09:01.351+0000: 7249: warning : qemuDomainObjTaint:5983 : Domain id=1 name='Debian' uuid=dc6cf767-855b-f071-bfd7-d915e74a61e9 is tainted: host-cpu
    2020-09-29 13:09:01.371+0000: 7249: error : virDevMapperOnceInit:78 : internal error: Unable to find major for device-mapper
    2020-09-29 13:09:01.371+0000: 7249: error : qemuSetupImagePathCgroup:91 : Unable to get devmapper targets for /dev/disk/by-id/ata-WDC_WD5000AZLX-60K2TA0_WD-WCC6Z4DKE5L5: Success
    2020-09-29 13:09:01.615+0000: 7249: error : qemuAutostartDomain:219 : internal error: Failed to autostart VM 'Debian': Unable to get devmapper targets for /dev/disk/by-id/ata-WDC_WD5000AZLX-60K2TA0_WD-WCC6Z4DKE5L5: Success

     

    Seems like this issue isn't resolved with Libvirt 6.6.0 and CVE-2020-14339 applied.

    Will try to compile libvirt 6.7.0 if I got time but I'm on vacation now and don't have much time for this...

    22 minutes ago, limetech said:

    Not any more in my testing.

    Do you mean newer kernel/tools than this beta?

     

    df is still working correctly for me:

     

    Filesystem      Size  Used Avail Use% Mounted on
    /dev/sdf1       1.4T  3.6M  930G   1% /mnt/cache

     

    And now the newer btrfs tools in beta29 also support raid5/6:

     

    btrfs fi usage -T /mnt/cache
    Overall:
        Device size:                   1.36TiB
        Device allocated:             17.06GiB
        Device unallocated:            1.35TiB
        Device missing:                  0.00B
        Used:                        288.00KiB
        Free (estimated):            930.15GiB      (min: 700.11GiB)
        Data ratio:                       1.50
        Metadata ratio:                   2.00
        Global reserve:                3.25MiB      (used: 0.00B)
        Multiple profiles:                  no

     

     

    51 minutes ago, CS01-HS said:

    I have a RAID 1 btrfs cache (with equally sized disks.)  I figure I can convert it to a single-drive pool then use the newly-freed drive as my "btrfs destination."  But is there a way to do that without adding it to Array Devices (which will introduce new problems), maybe through unassigned devices?

     

    Otherwise I'm not sure how most plan to do this. 

    NoCOW seems important so I don't want to lose it.

    Here's another way to handle 2-device btrfs pool:

     

    1. Stop array and unassign one of the devices from your existing pool; leave device unassigned.
    2. Start array.  A balance will take place on your existing pool.  Let the balance complete.
    3. Stop array.  Re-assign the device, adding it back to your existing pool.
    4. Start array.  The added device will get re-partitioned and a balance will start moving data to the new device.  Let the balance finish.
    5. Repeat steps 1-4 for the other device in your existing pool.

     

    What's happening here is this:

    At the completion of step 2, btrfs will 'delete' the missing device from the volume and wipe the btrfs signature from it.

    At the beginning of step 4, Unraid OS will re-partition the new device being added to an existing pool.

     

    Not recommended for a pool with more than 2 devices, since the balance operation in step 2 will be much more write-intensive - and writes are what we're trying to minimize.

     

    This procedure will introduce a time window where your data is on non-redundant storage.

     

    Of course make a backup first of critical data in the pool before attempting this.

    14 minutes ago, ich777 said:

    Will try to compile libvirt 6.7.0 if I got time but I'm on vacation now and don't have much time for this...

    We tried to upgrade to 6.7.0 also, but starting with that release they changed the build tooling and we couldn't get it to compile correctly, and we ran out of time to investigate further.  We'll try to get at this again.

    1 minute ago, limetech said:

    We tried to upgrade to 6.7.0 also but starting with that release, they changed the make tools and we couldn't get it to compile correctly and ran out of time to investigate further. 

    What was the error that you ran into?

    Now they build with meson/ninja.

    If I can help please feel free to contact me; the build options for Unraid, or more specifically the paths that you specify, would be helpful.

    4 hours ago, Chess said:

     

     

    What storage card are you using?  See the quote from the first post below:

     

    KNOWN ISSUE: with this release we have moved to the latest Linux 5.8 stable kernel.  However, we have discovered that a regression has been introduced in the mpt3sas driver used by many LSI chipsets, e.g., the LSI 9201-16e.  It typically looks like this on the System Devices page:

    Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

     

    LSI 9201-16

    Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

    so yeah...  me too
    sorry, I had 10 minutes before leaving for work and did not have time to read both pages...

     

    BIG-D - Storage / Emby Server - UNRAID 6.7.2 - Rosewill RSV-L4500 15 bay chassis - Threadripper 1950X - 16GB - ROG Zenith Extreme - LSI 9201-16 SAS - 30 TB + 2x 5 TB Parity + 1TB SSD cache

    5 minutes ago, TRusselo said:

    sorry had 10 minutes to go to work, did not have time to read both pages....

    Easy to miss.  For now stick to beta 25; it looks like they have asked to see if the driver can be updated.

    28 minutes ago, limetech said:

    Here's another way to handle 2-device btrfs pool:

    No need to change the shares' caching setting or run mover? Wow, that's easier than I expected, thanks.

     

     

    47 minutes ago, ich777 said:

    What was the error that you ran into?

    Now they build with meson/ninja.

    If I can help please feel free to contact me, the build options for Unraid or better speaking the path's that you specify would be helpful.

    My mistake, it was not a build error; whenever a VM starts, it crashes virtlogd and fails to load the VM.  This is in the syslog as it happens:

     

    Sep 9 15:54:18 threadripper kernel: virtlogd[9538]: segfault at 0 ip 0000000000000000 sp 00007ffc531ea438 error 14 in libnss_files-2.30.so[14ae558ca000+3000]

     

    This is what we didn't have time to chase down before release.

    6 minutes ago, limetech said:

    My mistake, it was not a build error; whenever a VM starts, it crashes virtlogd and fails to load the VM.  This is in the syslog as it happens:

     

    Sep 9 15:54:18 threadripper kernel: virtlogd[9538]: segfault at 0 ip 0000000000000000 sp 00007ffc531ea438 error 14 in libnss_files-2.30.so[14ae558ca000+3000]

     

    This is what we didn't have time to chase down before release.

    Thanks for the quick reply, but the error with the passed-through devices seems to affect several users.


    Is this beta release working with the new Intel S-1200 platforms and the new revisions of the Intel i219V and i225V NICs?

    Thanks for your help


    I had these errors in beta 25, an almost clean reinstall of beta 25 solved them.

    The problem was described here, and I was not the only one with this problem. No one could help.

     

    Now I have upgraded to beta 29 and I get these errors again

     

    Sep 29 18:25:42 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31dde87.044110ba does not match aorg 0000000000.00000000 from server@213.251.52.234 xmt 0xe31dde86.72b4c6d5
    Sep 29 18:48:24 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31de3d8.61c48512 does not match aorg 0000000000.00000000 from server@147.156.7.26 xmt 0xe31de3d7.d98ce902
    Sep 29 20:05:17 Unraid kernel: mdcmd (59): spindown 2
    Sep 29 20:23:39 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31dfa2b.d87b55b4 does not match aorg 0000000000.00000000 from server@147.156.7.26 xmt 0xe31dfa2b.514764b2
    Sep 29 20:52:38 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31e00f7.2759964c does not match aorg 0000000000.00000000 from server@213.251.52.234 xmt 0xe31e00f6.8a548cf6
    Sep 29 21:26:42 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31e08f3.2c8665f7 does not match aorg 0000000000.00000000 from server@213.251.52.234 xmt 0xe31e08f2.a290931a
    Sep 29 22:07:02 Unraid ntpd[1794]: receive: Unexpected origin timestamp 0xe31e1267.285aaa3f does not match aorg 0000000000.00000000 from server@147.156.7.26 xmt 0xe31e1266.9179e8df
    Sep 29 22:32:00 Unraid webGUI: Successful login user root from 10.10.10.30
    Sep 29 22:32:04 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog 
    Sep 29 22:42:48 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog 
    Sep 29 22:44:20 Unraid emhttpd: shcmd (243): ln -sf /usr/share/zoneinfo/Europe/Paris /etc/localtime-copied-from
    Sep 29 22:44:20 Unraid emhttpd: shcmd (244): cp /etc/localtime-copied-from /etc/localtime
    Sep 29 22:44:20 Unraid emhttpd: shcmd (245): /usr/local/emhttp/webGui/scripts/update_access
    Sep 29 22:44:20 Unraid root: sshd: no process found
    Sep 29 22:44:21 Unraid emhttpd: shcmd (246): /etc/rc.d/rc.ntpd restart
    Sep 29 22:44:21 Unraid ntpd[1794]: ntpd exiting on signal 1 (Hangup)
    Sep 29 22:44:21 Unraid ntpd[1794]: 127.127.1.0 local addr 127.0.0.1 -> <null>
    Sep 29 22:44:21 Unraid ntpd[1794]: 213.251.52.234 local addr 10.10.10.5 -> <null>
    Sep 29 22:44:21 Unraid ntpd[1794]: 147.156.7.26 local addr 10.10.10.5 -> <null>
    Sep 29 22:44:21 Unraid ntpd[1794]: 193.145.15.15 local addr 10.10.10.5 -> <null>
    Sep 29 22:44:21 Unraid root: Stopping NTP daemon...
    Sep 29 22:44:22 Unraid ntpd[25214]: ntpd 4.2.8p15@1.3728-o Sat Aug 15 18:24:48 UTC 2020 (1): Starting
    Sep 29 22:44:22 Unraid ntpd[25214]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
    Sep 29 22:44:22 Unraid ntpd[25214]: ----------------------------------------------------
    Sep 29 22:44:22 Unraid ntpd[25214]: ntp-4 is maintained by Network Time Foundation,
    Sep 29 22:44:22 Unraid ntpd[25214]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
    Sep 29 22:44:22 Unraid ntpd[25214]: corporation.  Support and training for ntp-4 are
    Sep 29 22:44:22 Unraid ntpd[25214]: available at https://www.nwtime.org/support
    Sep 29 22:44:22 Unraid ntpd[25214]: ----------------------------------------------------
    Sep 29 22:44:22 Unraid ntpd[25216]: proto: precision = 0.040 usec (-24)
    Sep 29 22:44:22 Unraid ntpd[25216]: basedate set to 2020-08-02
    Sep 29 22:44:22 Unraid ntpd[25216]: gps base set to 2020-08-02 (week 2117)
    Sep 29 22:44:22 Unraid ntpd[25216]: Listen normally on 0 lo 127.0.0.1:123
    Sep 29 22:44:22 Unraid ntpd[25216]: Listen normally on 1 br0 10.10.10.5:123
    Sep 29 22:44:22 Unraid ntpd[25216]: Listen normally on 2 lo [::1]:123
    Sep 29 22:44:22 Unraid ntpd[25216]: Listening on routing socket on fd #19 for interface updates
    Sep 29 22:44:22 Unraid ntpd[25216]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
    Sep 29 22:44:22 Unraid ntpd[25216]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
    Sep 29 22:44:22 Unraid root: Starting NTP daemon:  /usr/sbin/ntpd -g -u ntp:ntp
    Sep 29 22:44:29 Unraid emhttpd: shcmd (247): ln -sf /usr/share/zoneinfo/Europe/Paris /etc/localtime-copied-from
    Sep 29 22:44:29 Unraid emhttpd: shcmd (248): cp /etc/localtime-copied-from /etc/localtime
    Sep 29 22:44:29 Unraid emhttpd: shcmd (249): /usr/local/emhttp/webGui/scripts/update_access
    Sep 29 22:44:29 Unraid root: sshd: no process found
    Sep 29 22:44:30 Unraid emhttpd: shcmd (250): /etc/rc.d/rc.ntpd restart
    Sep 29 22:44:30 Unraid ntpd[25216]: ntpd exiting on signal 1 (Hangup)
    Sep 29 22:44:30 Unraid ntpd[25216]: 127.127.1.0 local addr 127.0.0.1 -> <null>
    Sep 29 22:44:30 Unraid ntpd[25216]: 216.239.35.0 local addr 10.10.10.5 -> <null>
    Sep 29 22:44:30 Unraid ntpd[25216]: 216.239.35.4 local addr 10.10.10.5 -> <null>
    Sep 29 22:44:30 Unraid ntpd[25216]: 216.239.35.8 local addr 10.10.10.5 -> <null>
    Sep 29 22:44:30 Unraid root: Stopping NTP daemon...
    Sep 29 22:44:31 Unraid ntpd[25480]: ntpd 4.2.8p15@1.3728-o Sat Aug 15 18:24:48 UTC 2020 (1): Starting
    Sep 29 22:44:31 Unraid ntpd[25480]: Command line: /usr/sbin/ntpd -g -u ntp:ntp
    Sep 29 22:44:31 Unraid ntpd[25480]: ----------------------------------------------------
    Sep 29 22:44:31 Unraid ntpd[25480]: ntp-4 is maintained by Network Time Foundation,
    Sep 29 22:44:31 Unraid ntpd[25480]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
    Sep 29 22:44:31 Unraid ntpd[25480]: corporation.  Support and training for ntp-4 are
    Sep 29 22:44:31 Unraid ntpd[25480]: available at https://www.nwtime.org/support
    Sep 29 22:44:31 Unraid ntpd[25480]: ----------------------------------------------------
    Sep 29 22:44:31 Unraid ntpd[25482]: proto: precision = 0.040 usec (-24)
    Sep 29 22:44:31 Unraid ntpd[25482]: basedate set to 2020-08-02
    Sep 29 22:44:31 Unraid ntpd[25482]: gps base set to 2020-08-02 (week 2117)
    Sep 29 22:44:31 Unraid ntpd[25482]: Listen normally on 0 lo 127.0.0.1:123
    Sep 29 22:44:31 Unraid ntpd[25482]: Listen normally on 1 br0 10.10.10.5:123
    Sep 29 22:44:31 Unraid ntpd[25482]: Listen normally on 2 lo [::1]:123
    Sep 29 22:44:31 Unraid ntpd[25482]: Listening on routing socket on fd #19 for interface updates
    Sep 29 22:44:31 Unraid ntpd[25482]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
    Sep 29 22:44:31 Unraid ntpd[25482]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
    Sep 29 22:44:31 Unraid root: Starting NTP daemon:  /usr/sbin/ntpd -g -u ntp:ntp
    Sep 29 22:44:32 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog 
    Sep 29 22:50:19 Unraid ntpd[25482]: kernel reports TIME_ERROR: 0x2041: Clock Unsynchronized
    Sep 29 22:55:29 Unraid emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog

     

    10 hours ago, Womabre said:

    Running on a 3700X here. I can confirm this bug is fixed.

    What was this problem about?

    Does it require qemu 5.1?

    7 hours ago, Chess said:

    Ah, were you using an AMD CPU by any chance?  Beta 25 did have a bug that had to be worked around to allow any VM with passed-through CPUs to work, but it only affected AMD 3XXX CPUs.

    Yeah, Ryzen 3900X.  I did find ways on the forum which got VNC working, but I couldn't get anything on the TV.  It was a nightmare, to say the least.  I tried loads of different things, but I'm glad it's sorted now.  I tried changing passthrough to model, and I tried different lines I found on here; they got VNC to work but that was it 👍🏻


I didn't receive my daily Array Status email today. The last one I received was yesterday, before I upgraded from beta25 to beta29. I did receive emails from other servers running 6.8.3, though. I haven't changed anything in Settings -> Notifications, but I see this in the syslog:

    Sep 30 00:20:01 Lapulapu crond[1567]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null

    Hopefully, exit status 255 gives a clue.
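For what it's worth, exit status 255 is what cron logs when a script ends with `exit -1` or aborts fatally (exit codes are reported modulo 256). A minimal sketch reproducing the status outside of cron — the `statuscheck` path is the one from the syslog line, shown only as a comment here:

```shell
# Exit codes wrap modulo 256, so a script calling `exit -1` (or hitting a
# fatal error) surfaces in cron's log as status 255.
sh -c 'exit 255'
echo "exit status: $?"

# On the server itself, running the script by hand would show the same code:
#   /usr/local/emhttp/plugins/dynamix/scripts/statuscheck; echo $?
```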

    lapulapu-diagnostics-20200930-0045.zip

     

     

    53 minutes ago, John_M said:

    I didn't receive my daily Array Status email today. The last one I received was yesterday, before I upgraded from beta25 to beta29. I did receive emails from other servers, running 6.8.3 though. I haven't changed anything in Settings -> Notifications but I see this in the syslog:

    
    Sep 30 00:20:01 Lapulapu crond[1567]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null

    Hopefully, exit status 255 gives a clue.

lapulapu-diagnostics-20200930-0045.zip

     

     

    Thanks for reporting.  Fixed next release (my bad)


So I've had my Unraid server for a few years now and am looking at blowing away my array, replacing all my drives, and starting fresh.

 

Can I use this new mover function? Otherwise I was going to copy all my data over to some 10TB external drives over the network and copy it back once I rebuild my array (and my shares?).

 

Does share configuration go away when I delete my array?
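In case the mover route doesn't pan out, here's a minimal sketch of the copy-out/copy-back fallback. The paths are made up for illustration; on a real server you'd point something like `rsync -a` at `/mnt/user/<share>` and the external drive's mount point. Temp directories and `cp -a` stand in here so the sketch is self-contained:

```shell
# Simulate copying a share out to an external drive: temp dirs stand in for
# /mnt/user/<share> and the external 10TB drive's mount point.
src=$(mktemp -d)    # stands in for the user share
dst=$(mktemp -d)    # stands in for the external drive mount

echo "demo" > "$src/file.txt"

# cp -a (like rsync -a) preserves permissions, ownership, and timestamps,
# which matters when copying the data back into a rebuilt array.
cp -a "$src/." "$dst/"
ls "$dst"
```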

    13 hours ago, _whatever said:

    FYI, it appears to only be an issue when you try to create a bond using eth0, no matter which interface is assigned eth0.  I can bond all of the interfaces on my 4 port NIC and it works fine as long as none of them are eth0.

    I'm not sure why, but after disabling the onboard NIC, setting an IP on eth0 which was now an interface on my 4 port card, and then re-enabling the onboard NIC (which then became eth4) I was able to get LACP working with all 5 ports again and they seem to be stable.  I'm just chalking this up to either something in the new kernel or something with the e1000 and/or bonding driver.

    11 hours ago, DZMM said:

    I've had 2 lockups today on v29 which is very rare - had to use power button to shutdown as totally locked out/full crash.  I'm not sure if the diags after boot will shed any light

     

     

highlander-diagnostics-20200929-2245.zip

I woke up to an unresponsive system this morning. My Windows 10 VMs were locked, but my pfSense VM, I think, was still running, as I had Wi-Fi connectivity on other devices. I couldn't connect to unRAID though, even with my laptop via Ethernet. I've had to roll back to beta25, as that's 3 lockups in 24 hours, whereas I had no issues with beta25.

     

Diags attached after reboot again, so not sure if they will help - I couldn't grab diags before the crash as I had to shut down with the hardware power button.

    highlander-diagnostics-20200930-0826.zip

    10 hours ago, John_M said:

    I didn't receive my daily Array Status email today. The last one I received was yesterday, before I upgraded from beta25 to beta29. I did receive emails from other servers, running 6.8.3 though. I haven't changed anything in Settings -> Notifications but I see this in the syslog:

    
    Sep 30 00:20:01 Lapulapu crond[1567]: exit status 255 from user root /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null

     

lapulapu-diagnostics-20200930-0045.zip

     

     

    9 hours ago, Squid said:

    Thanks for reporting.  Fixed next release (my bad)

     

Plans to address this? How long until the next release: days, or Soon(TM)? If Soon(TM), I will have to roll back, as I really want the e-mail notifications to work. (Thinking perhaps a -beta29a release might be in order...)


Not critical, but VNC Remote for VMs fails to load with the following error in Safari 14 (latest version) for macOS; it works properly in Brave. [screenshot: "noVNC encountered an error"]




This is now closed for further comments

• Status Definitions

    Open = Under consideration.
    Solved = The issue has been resolved.
    Solved version = The issue has been resolved in the indicated release version.
    Closed = Feedback or opinion better posted on our forum for discussion. Also for reports we cannot reproduce or need more information. In this case just add a comment and we will review it again.
    Retest = Please retest in latest release.

• Priority Definitions

    Minor = Something not working correctly.
    Urgent = Server crash, data loss, or other showstopper.
    Annoyance = Doesn't affect functionality but should be fixed.
    Other = Announcement or other non-issue.