Popular Content

Showing content with the highest reputation since 08/29/20 in Reports

  1. 7 points
    Back in the saddle ... Sorry for the long delay in publishing this release. Aside from including some delicate coding, this release was delayed because several team members, chiefly myself, had to deal with various non-work-related challenges which greatly slowed the pace of development. That said, there is quite a bit in this release. LimeTech is growing and we have many exciting features in the pipe - more on that in the weeks to come. Thanks to everyone for their help and patience during this time. Cheers, -Tom

    IMPORTANT: This is Beta software. We recommend running on test servers only!

    KNOWN ISSUE: With this release we have moved to the latest Linux 5.8 stable kernel. However, we have discovered that a regression has been introduced in the mpt3sas driver used by many LSI chipsets, e.g., the LSI 9201-16e. It typically looks like this on the System Devices page:

    Serial Attached SCSI controller: Broadcom / LSI SAS2116 PCI-Express Fusion-MPT SAS-2 [Meteor] (rev 02)

    The problem is that devices are no longer recognized. There are already bug reports pertaining to this issue:
    https://bugzilla.kernel.org/show_bug.cgi?id=209177
    https://bugzilla.redhat.com/show_bug.cgi?id=1878332

    We have reached out to the maintainer to see if a fix can be expedited; however, we feel we can neither revert to the 5.7 kernel nor hold the release over this issue. We are monitoring and will publish a release with a fix asap.

    ANOTHER known issue: We have added additional btrfs balance options, raid1c3 and raid1c4, and modified the raid6 balance operation to set metadata to raid1c3 (previously raid1). However, we have noticed that applying one of these balance filters to a completely empty volume leaves some data extents with the previous profile. The solution is to simply run the same balance again. We consider this to be a btrfs bug, and if no solution is forthcoming we'll add the second balance to the code by default. For now, it's left as-is.
    THE PRIMARY FOCUS of this release is to put tools in place to help users migrate data off SSD-based pools so that those devices may be re-partitioned if necessary, and then migrate the data back.

    What are we talking about? For several years now, storage devices managed by Unraid OS have been formatted with an "Unraid Standard Partition Layout". This layout has partition 1 starting at offset 32KiB from the start of the device and extending to the end of the device. (For devices with 512-byte sectors, partition 1 starts in sector 64; for 4096-byte sector size devices, partition 1 starts in sector 8.) This layout achieves maximum storage efficiency and ensures partition 1 starts on a 4096-byte boundary.

    Through user reports and extensive testing, however, we have noted that many modern SSD devices, in particular the Samsung EVO, do not perform most efficiently using this partition layout: the devices write far more than one would expect, and with an SSD one wants to minimize writes as much as possible. The solution to the "excessive SSD write" issue is to position partition 1 at offset 1MiB from the start of the device instead of at 32KiB. This will both increase performance and decrease writes with affected devices.

    Do you absolutely need to re-partition your SSDs? Probably not, depending on what devices you have. Click on a device from Main, scroll down to Attributes, and take a look at Data units written. If this is increasing very rapidly then you would probably benefit from re-partitioning.

    Here's what's in this release to help facilitate re-partitioning of SSD devices:

    An Erase button which appears in the Device Information page. The Erase button may be used to erase (delete) content from a volume. A volume is either the content of an unRAID array data disk, or the content of a pool. In the case of an unRAID disk, only that device is erased; in the case of a multiple-device pool, ALL devices of the pool are erased.
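The 32KiB and 1MiB offsets described above translate into starting sectors with simple arithmetic (starting sector = byte offset / sector size); a quick shell sketch, matching the sector figures quoted in the text:

```shell
# Partition 1 starting sector = byte offset / sector size.
echo $(( 32 * 1024 / 512 ))     # 32KiB offset, 512-byte sectors  -> sector 64
echo $(( 32 * 1024 / 4096 ))    # 32KiB offset, 4096-byte sectors -> sector 8
echo $(( 1024 * 1024 / 512 ))   # 1MiB offset, 512-byte sectors   -> sector 2048
```

Note that both layouts keep partition 1 aligned to a 4096-byte boundary; the 1MiB offset additionally aligns with the larger internal erase-block sizes typical of SSDs.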
    The extent of Erase varies depending on whether the array is Stopped, or Started in Maintenance mode (if started in Normal mode, all volume Erase buttons are disabled).

    Started/Maintenance mode: in this case the LUKS header (if any) and any file system within partition 1 are erased. The MBR (master boot record) is not erased.

    Stopped: in this case, unRAID array disk volumes and pool volumes are treated a little differently:
    unRAID array disk volumes - if Parity and/or Parity2 is valid, then the operation proceeds exactly as above, that is, the content of only partition 1 is erased but the MBR (master boot record) is left as-is; but if there is no valid parity, then the MBR is also erased.
    Pool volumes - partition 1 of all devices within the pool is erased, and then the MBR is also erased.

    The purpose of erasing the MBR is to permit re-partitioning of the device if required. Upon format, Unraid OS will position partition 1 at 32KiB for HDD devices and at 1MiB for SSD devices. Note that erase does not overwrite the storage content of a device; it simply clears the LUKS header if present (which effectively makes the device unreadable), along with the file system and MBR signatures. A future Unraid OS release may include the option of overwriting the data.

    Additional "Mover" capabilities. Since SSD pools are commonly used to store vdisk images, shfs/mover is now aware of:
    sparse files - when a sparse file is moved from one volume to another, its sparseness is preserved.
    NoCOW attribute - when a file or directory in a btrfs volume has the NoCOW attribute set, the attribute is preserved when the file or directory is moved to another btrfs volume. Note that btrfs subvolumes are not preserved; a future Unraid OS release may include preservation of btrfs subvolumes.

    Ok, how do I re-partition my SSD pools? The idea is to use the Mover to copy all data from the pool to a target device in the unRAID array. Then erase all devices of the pool, and reformat.
    Finally, use the Mover to copy all the data back. If you have absolutely critical data in the pool we strongly recommend making an independent backup first (you are already doing this, right?). This procedure presumes a multi-device btrfs pool containing one or more cache-only or cache-prefer shares.

    1. With the array Started, stop any VMs and/or Docker applications which may be accessing the pool you wish to re-partition. Make sure no other external I/O is targeting this pool.
    2. For each share on the pool, go to the Share Settings page and make some adjustments: change from cache-only (or cache-prefer) to cache-yes, and assign an array disk or disks via the Include mask to receive the data. These disks should be formatted with btrfs and of course have enough free space to receive the data.
    3. Now go back to Main and click the Move button. This will move the data of each share to the target array disk(s).
    4. Verify no data is left on the pool, Stop the array, click on the pool, and then click the Erase button.
    5. Start the array and the pool should appear Unformatted - go ahead and Format the pool (this is what will re-write the partition layout).
    6. Back on the Share Settings page, for each share above: change from cache-yes to cache-prefer.
    7. On the Main page click the Move button. This will move the data of each share back to the pool.
    8. Finally, back on the Share Settings page, for each share: change from cache-prefer back to cache-only if desired.

    I don't care about preserving data in the pool. In this case just Stop the array, click on the pool, and then click Erase. Start the array and Format the pool - done.

    Useful to know: when Linux creates a file system on an SSD device, it will first perform a "blkdiscard" on the entire partition. Similarly, "blkdiscard" is initiated on partition 1 of a new device added to an existing btrfs pool.

    What about array devices?
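The "verify no data is left" check in step 4 can be done from the command line; a minimal sketch, using a temporary directory in place of the real pool mount point (normally something like /mnt/cache - the path here is a stand-in so the sketch is self-contained):

```shell
# Count entries remaining under the pool mount point before clicking Erase.
# A real pool would be e.g. /mnt/cache; an empty temp dir stands in for it here.
POOL=$(mktemp -d)
count=$(find "$POOL" -mindepth 1 | wc -l)
echo "$count entries remain"
```

Once the Mover has finished and the shares have drained, the count should be 0; anything left behind (hidden files, stray appdata) would be lost by the Erase.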
    If you have SSD devices in the unRAID array, the only way to safely re-partition those devices is to either remove them from the array, or remove parity devices from the array. This is because re-partitioning will invalidate parity. Note also that the volume size will be slightly smaller.

    Version 6.9.0-beta29 2020-09-27 (vs -beta25)

    Base distro:
    at-spi2-core: version 2.36.1
    bash: version 5.0.018
    bridge-utils: version 1.7
    brotli: version 1.0.9
    btrfs-progs: version 5.6.1
    ca-certificates: version 20200630
    cifs-utils: version 6.11
    cryptsetup: version 2.3.4
    curl: version 7.72.0 (CVE-2020-8231)
    dbus: version 1.12.20
    dnsmasq: version 2.82
    docker: version 19.03.13
    ethtool: version 5.8
    fribidi: version 1.0.10
    fuse3: version 3.9.3
    git: version 2.28.0
    glib2: version 2.66.0 build 2
    gnutls: version 3.6.15
    gtk+3: version 3.24.23
    harfbuzz: version 2.7.2
    haveged: version 1.9.13
    htop: version 3.0.2
    iproute2: version 5.8.0
    iputils: version 20200821
    jasper: version 2.0.21
    jemalloc: version 5.2.1
    libX11: version 1.6.12
    libcap-ng: version 0.8
    libevdev: version 1.9.1
    libevent: version 2.1.12
    libgcrypt: version 1.8.6
    libglvnd: version 1.3.2
    libgpg-error: version 1.39
    libgudev: version 234
    libidn: version 1.36
    libpsl: version 0.21.1 build 2
    librsvg: version 2.50.0
    libssh: version 0.9.5
    libvirt: version 6.6.0 (CVE-2020-14339)
    libxkbcommon: version 1.0.1
    libzip: version 1.7.3
    lmdb: version 0.9.26
    logrotate: version 3.17.0
    lvm2: version 2.03.10
    mc: version 4.8.25
    mpfr: version 4.1.0
    nano: version 5.2
    ncurses: version 6.2_20200801
    nginx: version 1.19.1
    ntp: version 4.2.8p15 build 2
    openssl-solibs: version 1.1.1h
    openssl: version 1.1.1h
    p11-kit: version 0.23.21
    pango: version 1.46.2
    php: version 7.4.10 (CVE-2020-7068)
    qemu: version 5.1.0 (CVE-2020-10717, CVE-2020-10761)
    rsync: version 3.2.3
    samba: version 4.12.7 (CVE-2020-1472)
    sqlite: version 3.33.0
    sudo: version 1.9.3
    sysvinit-scripts: version 2.1 build 35
    sysvinit: version 2.97
    ttyd: version 1.6.1
    util-linux: version 2.36
    wireguard-tools: version 1.0.20200827
    xev: version 1.2.4
    xf86-video-vesa: version 2.5.0
    xfsprogs: version 5.8.0
    xorg-server: version 1.20.9 build 3
    xterm: version 360
    xxHash: version 0.8.0

    Linux kernel:
    version 5.8.12
    kernel-firmware: version kernel-firmware-20200921_49c4ff5
    oot: Realtek r8152: version 2.13.0
    oot: Tehuti tn40xx: version 0.3.6.17.3

    Management:
    btrfs: include 'discard=async' mount option
    emhttpd: avoid using remount to set additional mount options
    emhttpd: added wipefs function (webgui 'Erase' button)
    shfs: move: support sparse files
    shfs: move: preserve ioctl_iflags when moving between same file system types
    smb: remove setting 'aio' options in smb.conf, use samba defaults
    webgui: Update noVNC to v1.2.0
    webgui: Docker: more intuitive handling of images
    webgui: VMs: more intuitive handling of image selection
    webgui: VMs: Fixed: rare cases vdisk defaults to Auto when it should be Manual
    webgui: VMs: Fixed: Adding NICs or VirtFS mounts to a VM is limited
    webgui: VM manager: new setting "Network Model"
    webgui: Added new setting "Enable user share assignment" to cache pool
    webgui: Dashboard: style adjustment for server icon
    webgui: Update jGrowl to version 1.4.7
    webgui: Fix ' appearing
    webgui: VM Manager: add 'virtio-win-0.1.189-1' to VirtIO-ISOs list
    webgui: Prevent bonded nics from being bound to vfio-pci too
    webgui: better handling of multiple nics with vfio-pci
    webgui: Suppress WG on Dashboard if no tunnels defined
    webgui: Suppress Autofan link on Dashboard if plugin not installed
    webgui: Detect invalid session and logout current tab
    webgui: Added support for private docker registries with basic auth or no auth, and improvements for token based authentication
    webgui: Fix notifications continually reappearing
    webgui: Support links on notifications
    webgui: Add raid1c3 and raid1c4 btrfs pool balance options
    webgui: For raid6 btrfs pool data profile use raid1c3 metadata profile
    webgui: Permit file system configuration when array Started for Unmountable volumes
    webgui: Fix not able to change parity check schedule if no cache pool present
    webgui: Disallow "?" in share names
    webgui: Add customizable timeout when stopping containers
  2. 1 point
    https://gitlab.com/libvirt/libvirt/-/issues/31 The issue is described in the link above. It has been patched; the fix should be in the 6.5.0 release. (For VM Backup plugin users: I don't use the plugin, but this is likely your issue.)
  3. 1 point
    When I try to use Veeam as a backup target over NFS, I always have to reboot my Veeam Backup server before starting the job, or I get errors about stale file handles in the Veeam logs and the job fails. The options provided on the forum with the tunables didn't work. The best possible solution would be to implement a newer version of NFS (4.x) so we can finally get rid of these long-standing errors for multiple users. I'm running beta25 at the moment (latest at the time of posting).
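For reference, if/when NFSv4 support lands, a client can pin the protocol version explicitly instead of falling back to v3; a hypothetical fstab entry (server name, export path, and mount point are placeholders, not from this post):

```
# /etc/fstab - request NFSv4.2 explicitly; paths and hostname are illustrative
tower:/mnt/user/backups  /mnt/veeam  nfs4  vers=4.2,rw,hard  0  0
```

NFSv4's stateful file handles are generally more robust against the stale-handle errors described above than NFSv3's.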
  4. 1 point
    This is not a big deal and has been reported before on the general support forum: when a custom controller type is used for SMART you can see all the SMART attributes, including temp, but for some reason temp is not displayed on the GUI or dash. Another user was seeing this with the HP cciss controller, and the same happens to me with an LSI MegaRAID controller, so it looks like a general issue when using a custom SMART controller type. Note, I'm using two disks in RAID0 for each device here, so I can only choose SMART from one of the member disks of that RAID; still, I should be seeing temp from that disk, since all SMART attributes appear correctly, unless the GUI is not using the custom controller type to get the temp.
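For what it's worth, the temperature the GUI should be showing is SMART attribute 194 (Temperature_Celsius); a small sketch that pulls it out of smartctl-style attribute output. The sample line below is made-up data in the smartctl -A column layout, standing in for real output of something like `smartctl -A -d megaraid,0 /dev/sda` (device and slot number are illustrative):

```shell
# Extract the raw value of SMART attribute 194 (Temperature_Celsius).
# $1 is the attribute ID column, $NF the RAW_VALUE column in smartctl -A output.
sample='194 Temperature_Celsius 0x0022 028 040 000 Old_age Always - 28'
echo "$sample" | awk '$1 == 194 { print $NF }'   # prints 28
```

If this works from the console with the custom controller type, it supports the theory that the GUI is querying the temperature without passing that controller type through.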