Unraid OS version 6.9.0-beta25 available


    limetech

    6.9.0-beta25 vs. -beta24 Summary:

    • fixed emhttpd crash resulting from having NFS exported disk shares
    • fixed issue where specifying 1 MiB partition alignment was being ignored (see 1 MiB Partition Alignment below)
    • fixed spin-up/down issues
    • ssh improvements (see SSH Improvements below)
    • kernel updated from 5.7.7 to 5.7.8
    • added UI changes to support new docker image file handling - thank you @bonienl.  Refer also to additional information re: docker image folder, provided by @Squid under Docker below.
    • known issue: "Device/SMART Settings/SMART controller type" is ignored, will be fixed in next release

     

    Important: Beta code is not fully tested and not feature-complete.  We recommend running on test servers only!

     

    Multiple Pools

    This feature permits you to define up to 35 named pools, of up to 30 storage devices per pool.  The current "cache pool" is now simply a pool named "cache".  Pools are created and managed via the Main page.

     

    Note: When you upgrade a server which has a cache pool defined, a backup of config/disk.cfg will be saved to config/disk.cfg.bak, and then cache device assignment settings are moved out of disk.cfg and into a new file, config/pools/cache.cfg.  If you later revert to a pre-6.9 Unraid OS release, you will lose your cache device assignments and will have to manually re-assign devices to cache.  As long as you reassign the correct devices, data should remain intact.

     

    When you create a user share, or edit an existing user share, you can specify which pool should be associated with that share.  The assigned pool functions identically to current cache pool operation.

     

    Something to be aware of: when a directory listing is obtained for a share, the unRAID array disk volumes and all pools which contain that share are merged in this order:

      pool assigned to share

      disk1

      ...

      disk28

      all the other pools in strverscmp() order.

     

    As with the current "cache pool", a single-device pool may be formatted with either xfs, btrfs, or reiserfs.  A multiple-device pool may only be formatted with btrfs.  A future release will include support for multiple "unRAID array" pools.  We are also considering zfs support.

     

    Something else to be aware of: suppose you have a 2-device btrfs pool.  This is what btrfs calls "raid1" and what most people would understand to be "mirrored disks".  That is mostly true, in that the same data exists on both disks, though not necessarily at the block level.  Now suppose you create another pool, and you unassign one of the devices from the existing 2-device btrfs pool and assign it to this new pool - you now have two single-device btrfs pools.  Upon array Start you might understandably assume there are now two pools with exactly the same data.  However, this is not the case.  Instead, when Unraid OS sees that a btrfs device has been removed from an existing multi-device pool, upon array Start it will run 'wipefs' on that device so that upon mount it will not be included in the old pool.  This effectively deletes all the data on the moved device.

     

    1 MiB Partition Alignment

    We have added another partition layout where the start of partition 1 is aligned on a 1 MiB boundary.  That is, for devices which present 512-byte sectors, partition 1 will start at sector 2048; for devices with 4096-byte sectors, at sector 256.  This partition layout is now used for all non-rotational storage (and only non-rotational storage).
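
 

    For reference, the arithmetic behind those start sectors:

    1 MiB = 1,048,576 bytes
    1,048,576 / 512  = 2048  (start sector for 512-byte sector devices)
    1,048,576 / 4096 = 256   (start sector for 4096-byte sector devices)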

     

    It is not clear what benefit 1 MiB alignment offers.  For some SSD devices you won't see any difference; for others, there may be a big performance difference.  LimeTech does not recommend re-partitioning an existing SSD device unless you have a compelling reason to do so (or your OCD just won't let it be).

     

    To re-partition an SSD it is necessary to first wipe out any existing partition structure on the device.  Of course this will erase all data on the device.  Probably the easiest way to accomplish this is, with the array Stopped, to identify the device to be erased and use the 'blkdiscard' command:

    blkdiscard /dev/xxx  # for example /dev/sdb or /dev/nvme0n1, etc.

            WARNING: be sure you type the correct device identifier because all data will be lost on that device!
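
 

    One way to double-check that you have the right device (a suggested precaution, not part of the procedure above) is to confirm its size, model and serial first:

    lsblk -o NAME,SIZE,MODEL,SERIAL /dev/sdb   # verify these match the device you intend to wipe
    blkdiscard /dev/sdb                        # only run once you are certain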

     

    Upon next array Start the device will appear Unformatted, and since there is now no partition structure, Unraid OS will create one.

     

    Language Translation

    A huge amount of work by @bonienl has gone into providing multiple-language support in the Unraid OS Management Utility, aka the webGUI.  There are several language packs now available, and several more in the works.  Thanks to @Squid, language packs are installed via the Community Applications plugin - look for a new category entitled Language.

     

    Note: Community Applications must be up to date to install languages.

     

    Each language pack exists in a public repo in the Unraid organization on GitHub.  Interested users are encouraged to clone a repo and issue Pull Requests to correct translation errors.  Language translations and PR merging are managed by @SpencerJ.

     

    Linux Kernel

    Upgraded to the 5.7 kernel (5.7.8 as of this release).

     

    These out-of-tree drivers are currently included:

    • QLogic QLGE 10Gb Ethernet Driver Support (from staging)
    • RealTek r8125: version 9.003.05 (included for newer r8125)
    • HighPoint rr272x_1x: version v1.10.6-19_12_05 (per user request)

    Note that as we update the Linux kernel, if an out-of-tree driver no longer builds, it will be omitted.

     

    These drivers are currently omitted:

    • Highpoint RocketRaid r750 (does not build)
    • Highpoint RocketRaid rr3740a (does not build)
    • Tehuti Networks tn40xx (does not build)

    If you require one of these drivers, please create a Bug Report and we'll spend some time looking for alternatives.  Better yet, pester the manufacturer of the controller and get them to update their drivers.

     

    Base Packages

    All updated to latest versions.  In addition, Linux PAM has been integrated.  This will permit us to implement 2-factor authentication in a future release.

     

    Docker

    Updated to version 19.03.11

     

    It's now possible to select different icons for multiple containers of the same type.  This change necessitates a re-download of the icons for all your installed docker applications.  Expect a delay when initially loading either the Dashboard or the Docker tab while this happens, before the containers show up.

     

    We also made some changes to add flexibility in assigning storage for the Docker engine.  First, 'rc.docker' will detect the filesystem type of /var/lib/docker.  We now support either btrfs or xfs and the docker storage driver is set appropriately.
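
 

    The detection works roughly like this (a minimal sketch of the idea, not the actual 'rc.docker' source; the xfs-to-overlay2 driver mapping is an assumption):

    FSTYPE=$(stat -f -c %T /var/lib/docker)   # filesystem type, e.g. "btrfs" or "xfs"
    case "$FSTYPE" in
      btrfs) DRIVER=btrfs ;;
      xfs)   DRIVER=overlay2 ;;               # assumed; xfs has no dedicated docker storage driver
    esac
    /usr/bin/dockerd --storage-driver="$DRIVER" &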

     

    Next, 'mount_image' is modified to support a loopback formatted with either btrfs or xfs, depending on the suffix of the loopback file name.  If the file name ends with ".img", as in "docker.img", then we use mkfs.btrfs.  If the file name ends with "-xfs.img", as in "docker-xfs.img", then we use mkfs.xfs.


    We also added the ability to bind-mount a directory instead of using a loopback.  If the file name does not end with ".img", then the code assumes it is the name of a directory (presumably on a share) which is bind-mounted onto /var/lib/docker.

     

    For example, given "/mnt/user/system/docker/docker", we first create, if necessary, the directory "/mnt/user/system/docker/docker".  If this path is on a user share we then "dereference" the path to get the disk path, which is then bind-mounted onto /var/lib/docker.  For example, if "/mnt/user/system/docker/docker" is on "disk1", then we would bind-mount "/mnt/disk1/system/docker/docker".  Caution: the share should be cache-only or cache-no so that 'mover' will not attempt to move the directory, but the script does not check this.
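
 

    Putting those rules together, the dispatch looks roughly like this (an illustrative sketch, not the actual 'mount_image' script; the user.LOCATION xattr is how shfs reports which disk holds a user-share path):

    IMAGE=/mnt/user/system/docker/docker    # example value; could also be e.g. docker.img or docker-xfs.img
    case "$IMAGE" in
      *-xfs.img) mkfs.xfs "$IMAGE" ;;       # "-xfs.img" suffix: xfs-formatted loopback
      *.img)     mkfs.btrfs "$IMAGE" ;;     # any other ".img" suffix: btrfs-formatted loopback
      *)         mkdir -p "$IMAGE"          # no ".img" suffix: bind-mount a directory
                 DISK=$(getfattr -n user.LOCATION --only-values --absolute-names "$IMAGE")
                 mount --bind "/mnt/$DISK/${IMAGE#/mnt/user/}" /var/lib/docker ;;
    esac
    # (the real script also creates and sizes the loopback file, then mounts it via a loop device)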

     

    Additional information from user @Squid:

     

    Quote

    Just a few comments on the ability to use a folder / share for docker

     

    If you're one of those users who continually has a problem with the docker image filling up, this is the solution, as the "image" will be able to expand (and shrink) to the size of the assigned share.  Just be aware though that this new feature is technically experimental.  (I have however been running this on an XFS formatted cache drive for a while now, and don't see any problems at all)

     

    I would recommend that you use a share that is dedicated to the docker files, and not a folder from another existing share (like system, as shown in the OP).  

     

    My reasoning for this is that:

    1. If you ever have a need to run the New Permissions tool against the share that you've placed the docker folder into, then that tool will cause the entire docker system to not run.  The folder will have to be removed (via the command line), and then recreated.

    2. None of the folders contained within the docker folder are compatible with being exported over SMB, and you cannot gain access to them that way.  Using a separate share will also allow you to not export it without impacting the other shares' exports.  (And there are no "user-modifiable" files in there anyway.  If you do need to modify a file within that folder (ie: a config file for a container, when that config isn't available within appdata), you should be doing it by going to the container's shell.)

    You definitely want the share to be cache-only or cache-no (although cache-prefer should probably be ok).  Setting it to cache:yes will undoubtedly cause you problems if mover winds up relocating files to the array for you.

     

    I did have some "weirdness" with using an Unassigned Device as the drive for the docker folder.  This may however have been a glitch in my system.

     

    Fix Common Problems (and the Docker Safe New Permissions Tool) will wind up getting updated to let you know of any problems that it detects with how you've configured the folder.

     

    Virtualization

    libvirt updated to version 6.4.0

    qemu updated to version 5.0.0

     

    In addition, we integrated changes to the System Devices page by user @Skitals, with modifications by user @ljm42.  You can now select PCI devices to isolate from Linux upon boot simply by checking some boxes.  This makes it easier to reserve those devices for assignment to VM's.

     

    Note: If you had the VFIO-PCI Config plugin installed, you should remove it, as that functionality is now built in to Unraid OS 6.9.  Refer also to @ljm42's excellent guide.

     

    In a future release we will include the NVIDIA and AMD GPU drivers natively in Unraid OS.  The primary use case is to facilitate accelerated transcoding in docker containers.  For this we require Linux to detect and auto-install the appropriate driver.  However, in order to reliably pass through an NVIDIA or AMD GPU to a VM, it's necessary to prevent Linux from auto-installing a GPU driver for those devices upon boot, which can easily be done now through the System Devices page.  Users passing GPU's to VM's are encouraged to set this up now.

     

    "unexpected GSO errors"

    If your system log is being flooded with errors such as:

    Jun 20 09:09:21 Tower kernel: tun: unexpected GSO type: 0x0, gso_size 31, hdr_len 66

    You need to edit each VM and change the model type for the Ethernet bridge from "virtio" to "virtio-net".  In most cases this can be accomplished simply by clicking Update in "Form View" on the VM Edit page.  For other network configs it may be necessary to directly edit the xml.  Example:

    <interface type='bridge'>
          <mac address='xx:xx:xx:xx:xx:xx'/>
          <source bridge='br0'/>
          <model type='virtio-net'/>
          <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
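
 

    One way to open the XML for direct editing is with virsh (the VM name below is illustrative):

    virsh edit MyVM   # opens the domain XML in $EDITOR; change <model type='virtio'/> to 'virtio-net'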

     

    SSH Improvements

    There are changes in /etc/ssh/sshd_config to improve security (thanks to @Mihai and @ljm42 for suggestions):

    • only the root user is permitted to log in via ssh (remember: no traditional users in Unraid OS - just 'root')
    • non-null password is now required
    • non-root tunneling is disabled

     

    In addition, upon upgrade we ensure the 'config/ssh/root' directory exists on the USB flash boot device, and we have set up a symlink from /root/.ssh to this directory.  This means any files you might put into /root/.ssh will be persistent across reboots.
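
 

    For example, to persist an authorized_keys file for key-based logins (the key file name is illustrative):

    # /root/.ssh is a symlink to config/ssh/root on the flash, so this survives reboots
    cp /boot/mykey.pub /root/.ssh/authorized_keys
    chmod 600 /root/.ssh/authorized_keys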

     

    Note: if you examine the sshd startup script (/etc/rc.d/rc.sshd), upon boot all files from the 'config/ssh' directory are copied to /etc/ssh (but not subdirectories).  The purpose is to restore the host ssh keys; however, this mechanism could also be used to define custom ssh_config and sshd_config files (not recommended).

     

    Other

    • AFP support has been removed.
    • Numerous other Unraid OS and webGUI bug fixes and improvements.

     


    Version 6.9.0-beta25 2020-07-12

    Linux kernel:

    • version 5.7.8

    Management:

    • fix emhttpd crash resulting from exporting NFS disk share(s)
    • fix non-rotational device partitions were not actually being 1MiB aligned
    • dhcpcd: ipv6: use slaac hwaddr instead of slaac private
    • docker: correct storage-driver assignment logic
    • ssh: allow only root user, require passwords, disable non-root tunneling
    • ssh: add /root/.ssh symlink to /boot/config/ssh/root directory
    • syslog: configure to also listen on localhost udp port 514
    • webgui: Added btrfs info for all pools in diagnostics
    • webgui: Docker: allow BTRFS or XFS vdisk, or folder location
    • webgui: Multi-language: Fixed regression error: missing indicator for required fields
    • webgui: Dashboard: fix stats of missing interface



    User Feedback

    Recommended Comments



    8 hours ago, Marshalleq said:

    I actually also discovered my disk timeout had been reset to none in the disk settings, and applied to all disks.  So while CrashPlan was activating the disks (as it should), it was actually that they weren't set to spin down.  I never looked there because I never go into those disk settings and had forgotten there was even a setting for it.  I'd suggest double-checking that setting in case it really is resetting for some people as a result of the upgrade.  It seems unlikely, but who knows.

    My entire disk settings section was reset after the upgrade.  Being new to Unraid, I didn't know what to make of it, and didn't consider the upgrade as the cause since I didn't know exactly when it was reset.

    Link to comment

    I don't remember disk settings being reset with the update, but I'm not entirely sure; note that they are reset after a new config.

    Link to comment
    1 minute ago, JorgeB said:

    I don't remember disk settings being reset with the update, but I'm not entirely sure; note that they are reset after a new config.

    Didn't know that, a new config might have done it then.  Very possible I did that around the same time.  Made a lot of changes around that time, which is why I didn't report it.

    Link to comment

    1) Is there a key one can use for running beta trials so there's no 30 day limit?

    2) How in the world do you delete network routes when the interface refuses to let you delete them?

     

    I set a static IP address for my test box. Is this where my issue lies? It seems to be unable to connect to the internet to check for updates, and to add the CA plugin.

    Link to comment
    5 minutes ago, SLNetworks said:

    Is there a key one can use for running beta trials so there's no 30 day limit?

    A paid one ;) 

    Link to comment

    Upgrade it to the beta then...  My main server always runs the latest beta / rc of the software.  My secondary server (which is barely used) always runs the latest stable.  I only even turn on the test server to run private betas

     

    Never had a problem, but to each their own.

    Link to comment

    I think "never had a problem" might be pushing it a bit - there are tons of problems, but also problems with stable, lol.  I think I'd just like to add that it's an easy rollback, and despite the disclaimer, it's pretty safe for everything that matters - e.g. your data.

    Link to comment

    Ran into an issue today with a BTRFS balance loop. It is the same as in this thread; it seems to be a known issue and there are some fixes available:

     

    This cache does NOT have snapshots enabled as that was mentioned as a possible cause.

     

    I am getting an endless loop of this:

     

    Sep  9 10:54:10 NAS kernel: BTRFS info (device dm-4): found 361 extents, stage: update data pointers
    Sep  9 10:54:10 NAS kernel: BTRFS info (device dm-4): found 361 extents, stage: update data pointers
    Sep  9 10:54:10 NAS kernel: BTRFS info (device dm-4): found 361 extents, stage: update data pointers
    Sep  9 10:54:10 NAS kernel: BTRFS info (device dm-4): found 361 extents, stage: update data pointers

     

    The only thing out of the ordinary I did was start a balance, then cancel it as I needed to do some stuff with the drives first, then run a scrub (the balance seems to go faster after a scrub), and then restart the balance.

     

    Woke up to this loop in the log.

     

    Gonna try restarting when I can take the server offline and see if that fixes it.

     

    EDIT: Have not been able to restart it yet, but I did try moving everything off the cache to another drive and then tried the balance again along with a scrub, but still got the loop, just with 1 extent this time.

     

    Not a big deal, I figure putting everything back on the cache will balance it automatically.

    Edited by TexasUnraid
    Link to comment
    26 minutes ago, TexasUnraid said:

    seem to be a known issue

    Yes, it's a known btrfs bug, fixed in kernel 5.7.11.

    Link to comment

    I have an issue with 6.9.0-beta25. The server crashed/froze for the first time on the 8th (updated to beta25 on the 30th) but now turns off every few hours. I have no idea what is going on, but here is my log and a photo of the server's monitor.

    I turned off containers, updated plugins and removed my overclock, but it doesn't seem to help.

     

    Hope someone can help, Ent

    ds9haku895m51.jpg

    entxvault-diagnostics-20200910-0054.zip

    Link to comment
    1 hour ago, Entxawp said:

    I have an issue with 6.9.0-beta25. The server crashed/froze for the first time on the 8th (updated to beta25 on the 30th) but now turns off every few hours. I have no idea what is going on, but here is my log and a photo of the server's monitor.

    I turned off containers, updated plugins and removed my overclock, but it doesn't seem to help.

     

    Hope someone can help, Ent

    ds9haku895m51.jpg

    entxvault-diagnostics-20200910-0054.zip 142.54 kB · 0 downloads

    I'd start with recreating the docker.img file.  The problems start with Plex, and then go down from there.

     

    https://forums.unraid.net/topic/57181-docker-faq/#comment-564309

     

    Link to comment

    Jesus, the beta is really buggy for me. It just removed my primary cache drive. Can anyone help me with how to get it back / how to downgrade back to 6.8.3 stable?

    20200912_172753.jpg

    20200912_172802.jpg

    Link to comment
    3 minutes ago, Entxawp said:

    Jesus, the beta is really buggy for me. It just removed my primary cache drive. Can anyone help me with how to get it back / how to downgrade back to 6.8.3 stable?

    Unmountable, possibly unrelated to the beta.

     

    Go to Tools - Diagnostics and attach the complete Diagnostics ZIP file to your NEXT post in this thread.

    Link to comment

    Here is the diagnostics

    entxvault-diagnostics-20200912-2022.zip

     

    Sep 12 17:27:05 entxvault emhttpd: shcmd (91): mkdir -p /mnt/cache
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache uuid: da42f43c-1bfa-495a-a654-77d5e00d774d
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache TotDevices: 2
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache NumDevices: 2
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache NumFound: 2
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache NumMissing: 0
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache NumMisplaced: 0
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache NumExtra: 0
    Sep 12 17:27:05 entxvault emhttpd: /mnt/cache LuksState: 0
    Sep 12 17:27:05 entxvault emhttpd: shcmd (92): mount -t btrfs -o noatime,space_cache=v2 -U da42f43c-1bfa-495a-a654-77d5e00d774d /mnt/cache
    Sep 12 17:27:05 entxvault kernel: BTRFS info (device nvme1n1p1): using free space tree
    Sep 12 17:27:05 entxvault kernel: BTRFS info (device nvme1n1p1): has skinny extents
    Sep 12 17:27:06 entxvault kernel: BTRFS info (device nvme1n1p1): enabling ssd optimizations
    Sep 12 17:27:06 entxvault kernel: BTRFS info (device nvme1n1p1): start tree-log replay
    Sep 12 17:27:06 entxvault kernel: BTRFS error (device nvme1n1p1): bad tree block start, want 4529101275136 have 0
    ### [PREVIOUS LINE REPEATED 1 TIMES] ###
    Sep 12 17:27:06 entxvault kernel: BTRFS: error (device nvme1n1p1) in __btrfs_free_extent:3080: errno=-5 IO failure
    Sep 12 17:27:06 entxvault kernel: BTRFS: error (device nvme1n1p1) in btrfs_run_delayed_refs:2189: errno=-5 IO failure
    Sep 12 17:27:06 entxvault kernel: BTRFS: error (device nvme1n1p1) in btrfs_replay_log:2243: errno=-5 IO failure (Failed to recover log tree)
    Sep 12 17:27:06 entxvault root: mount: /mnt/cache: can't read superblock on /dev/nvme1n1p1.
    Sep 12 17:27:06 entxvault kernel: BTRFS error (device nvme1n1p1): open_ctree failed
    Sep 12 17:27:06 entxvault emhttpd: shcmd (92): exit status: 32
    Sep 12 17:27:06 entxvault emhttpd: /mnt/cache mount error: No file system
    Sep 12 17:27:06 entxvault emhttpd: shcmd (93): umount /mnt/cache
    Sep 12 17:27:06 entxvault root: umount: /mnt/cache: not mounted.
    Sep 12 17:27:06 entxvault emhttpd: shcmd (93): exit status: 32
    Sep 12 17:27:06 entxvault emhttpd: shcmd (94): rmdir /mnt/cache
    Sep 12 17:27:06 entxvault emhttpd: shcmd (95): mkdir -p /mnt/vmdrive
    Sep 12 17:27:06 entxvault emhttpd: shcmd (96): mount -t btrfs -o noatime /dev/nvme0n1p1 /mnt/vmdrive
    Sep 12 17:27:06 entxvault kernel: BTRFS info (device nvme0n1p1): disk space caching is enabled
    Sep 12 17:27:06 entxvault kernel: BTRFS info (device nvme0n1p1): has skinny extents
    Sep 12 17:27:06 entxvault kernel: BTRFS info (device nvme0n1p1): enabling ssd optimizations

     

    Edited by Entxawp
    Added Log
    Link to comment
    11 hours ago, Entxawp said:

    It seems to be a damaged superblock.

    It's not the superblock; the first error lines are the important ones.  It's extent tree corruption; looks like part of it was wiped/trimmed.  If it's just the extent tree, btrfs restore should be able to recover most of your data.

    Link to comment
    7 hours ago, JorgeB said:

    It's not the superblock; the first error lines are the important ones.  It's extent tree corruption; looks like part of it was wiped/trimmed.  If it's just the extent tree, btrfs restore should be able to recover most of your data.

    None of the three options given worked in my situation

     

    root@entxvault:~# mkdir /x
    mkdir: cannot create directory ‘/x’: File exists
    root@entxvault:~# mount -o usebackuproot,ro /dev/nvme1n1 /x
    mount: /x: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
    root@entxvault:~# mount -o usebackuproot,ro /dev/nvme2n1 /x
    mount: /x: wrong fs type, bad option, bad superblock on /dev/nvme2n1, missing codepage or helper program, or other error.
    root@entxvault:~# mount -o degraded,usebackuproot,ro /dev/nvme1n1 /x
    mount: /x: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
    root@entxvault:~# 
    root@entxvault:~# mount -o degraded,usebackuproot,ro /dev/nvme2n1 /x
    mount: /x: wrong fs type, bad option, bad superblock on /dev/nvme2n1, missing codepage or helper program, or other error.
    root@entxvault:~# mount -o ro,notreelog,nologreplay /dev/nvme1n1 /x
    mount: /x: wrong fs type, bad option, bad superblock on /dev/nvme1n1, missing codepage or helper program, or other error.
    root@entxvault:~# btrfs check --repair /dev/nvme1n1
    enabling repair mode
    WARNING:
    
            Do not use --repair unless you are advised to do so by a developer
            or an experienced user, and then only after having accepted that no
            fsck can successfully repair all types of filesystem corruption. Eg.
            some software or hardware bugs can fatally damage a volume.
            The operation will start in 10 seconds.
            Use Ctrl-C to stop it.
    10 9 8 7 6 5 4 3 2 1
    Starting repair.
    Opening filesystem to check...
    No valid Btrfs found on /dev/nvme1n1
    ERROR: cannot open file system
    root@entxvault:~# 

     

    Edited by Entxawp
    Spelling
    Link to comment
    On 9/12/2020 at 8:10 PM, Entxawp said:

    Jesus, the beta is really buggy for me. It just removed my primary cache drive. Can anyone help me with how to get it back / how to downgrade back to 6.8.3 stable?

    20200912_172753.jpg

    20200912_172802.jpg

    I had a backup from 2 days ago and ended up going that route after going back to Unraid 6.8.3; that also stopped my issues with Plex crashing/freezing the server.

    Link to comment

    Hey, I realise some might see this as unhelpful, but I'm honestly trying to be the opposite.  From my observation, this is just btrfs.  I don't know why, but stuff happens on this filesystem.  It might have been a new version in the upgrade or something.  I've used a stack of file systems and all of them have been great except btrfs (I did also have an issue with reiserfs at some point, maybe 10-15 years ago, and that's it over 30 years or so).  I've had unrecoverable btrfs on my cache array too, on a previous version of Unraid.

     

    My solution, after several failures with the cache drive on btrfs, was to run a single XFS drive.  Anything that needs redundancy as soon as it's written bypasses the cache.  (I am hoping that in an upcoming version they give us an alternative option for mirroring the cache drive.)

     

    I do apologise if this is considered a hijack - but I did want to help, in the sense that while this may be considered rare, it is certainly not a one-off case, and to lend some 'moral' support from that perspective! :D

     

    I used to run btrfs on my array also, but ultimately changed it back.  The big benefits of btrfs are mostly lost in the Unraid implementation.

     

    And yes, I realise there are plenty of people without issues.  I'm not trying to turn this into a btrfs vs something else discussion.

     

    Marshalleq

    Link to comment
    7 hours ago, Entxawp said:

    None of the three options given worked in my situation

    I said to use btrfs restore.
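
 

    For reference, a typical invocation (device and destination paths are illustrative; the destination must be on a different, working filesystem):

    mkdir -p /mnt/disk1/restore
    btrfs restore -v /dev/nvme1n1p1 /mnt/disk1/restore   # note: the partition (p1), not the whole device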

    Link to comment

    I have a question please about using the multiple pools.

     

    I currently have my dockers in my appdata share /mnt/user/appdata using a pool called apps (/mnt/apps/appdata).  I also have another pool called cache (/mnt/cache) that I use for most of my other shares.

     

    I want to move some of my dockers, e.g. /mnt/user/appdata/radarr, to the cache pool, i.e. from /mnt/apps/appdata/radarr to /mnt/cache/appdata/radarr.  As long as I set the appdata share to cache-only so that files never get moved to the array, is this safe? 

     

    I created a few test folders, e.g. /mnt/cache/appdata/test, and I can see they were still visible at /mnt/user/appdata/test even though the appdata share is set to the /mnt/apps pool.  So it seems to work, i.e. even if a pool isn't set up in the GUI for a share, all files stored still bubble up to /mnt/user/sharename.

     

    Thanks in advance.

     

    Edit: realised this is a bad idea as new files will get added to /mnt/apps/appdata.  I'll try a different way.

    Edited by DZMM
    Link to comment
    5 hours ago, DZMM said:

    realised this is a bad idea as new files will get added to /mnt/apps/appdata.  I'll try a different way

    You can map the appdata for a specific application to the actual pool instead of to the appdata user share. So, you could use /mnt/cache/appdata in the docker mappings for some apps, and /mnt/apps/appdata in the docker mappings for other apps.

     

    And as long as the appdata user share is cache-only, it will be ignored by mover, whether the appdata is on cache or apps.
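
 

    For example (container names and images are illustrative):

    # point each container's /config at the desired pool directly
    docker run -d --name radarr -v /mnt/cache/appdata/radarr:/config linuxserver/radarr
    docker run -d --name sonarr -v /mnt/apps/appdata/sonarr:/config linuxserver/sonarr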

    Link to comment

    I have noticed that spin-down of additional pools is using the following command:

     

    Sep 15 14:44:08 Tower emhttpd: shcmd (154): /usr/sbin/hdparm -y /dev/sdb &> /dev/null

     

    Would it be better to use 'smartctl -s standby,now'?  I have submitted code changes to the smartctl team to support SAS drives:

    1) Support '-n standby', as currently Unraid will spin up drives in a device pool.

    2) Enable '-s standby,now' for SCSI drives.

    3) Add an option '-s active' to spin up both ATA and SCSI drives.

     

    Existing ATA options:

    smartctl -in standby  /dev/sdb
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.7.8-Unraid] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

    Device is in STANDBY mode, exit(2)

     

    root@Tower:/# smartctl -s standby,now  /dev/sdb
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.7.8-Unraid] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

    Device placed in STANDBY mode

     

    New Version for SCSI Drives

     

    root@Tower:/# smartctl -in standby /dev/sdd
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.7.8-Unraid] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

    Device is in STANDBY BY COMMAND mode, exit(2)

     

    root@Tower:/# smartctl -s standby,now  /dev/sdd
    smartctl 7.1 2019-12-30 r5022 [x86_64-linux-5.7.8-Unraid] (local build)
    Copyright (C) 2002-19, Bruce Allen, Christian Franke, www.smartmontools.org

    Device placed in STANDBY mode

     

    If the changes are implemented in smartctl, it will support spin-down of SAS drives in pools.

    Link to comment


