Allow cache disk to be mounted even if array is stopped



This is already how the flash works: it stays mounted whether or not the array is started. Extending the same behaviour to the cache disk would allow certain add-ons, like unmenu, plugins, and other packages, to be placed on the cache disk instead of the flash. And users could copy things like syslogs to cache instead of flash. All this would dramatically reduce wear on the flash and increase its lifespan.

 

There would have to be a way to mount and unmount the cache disk separately from starting and stopping the array.

Link to comment

It makes even more sense when we consider that Docker, in its recommended unRAID form, must use the cache drive.

 

One gotcha is that we have been recommending using /mnt/user/* in Docker land even when accessing a cache-only share, as it ensures that FUSE does not get confused when doing cross-share moves/copies or cross-filesystem moves/copies (i.e. BTRFS <> ReiserFS). Without consistently using /mnt/user/, we often find unRAID's FUSE layer falls back to the slow but safe copy-then-delete mechanism rather than the instant rename you would expect when the source and destination are on the same physical device.

 

This means we would also need to make the FUSE /mnt/user/ view of the cache shares available as well. That is likely the safe option regardless.
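For example (a minimal sketch; the share and file names here are hypothetical, not from this thread):

# Both paths use the /mnt/user FUSE view, so a move within the same
# physical device can complete as an instant rename.
mv /mnt/user/downloads/file.bin /mnt/user/media/file.bin

# Mixing the raw disk path with the FUSE view triggers the slow but
# safe copy-then-delete fallback, even on the same physical disk.
mv /mnt/cache/downloads/file.bin /mnt/user/media/file.bin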

Link to comment

This presents certain config issues.  At present, when "Stopped", the user is able to do all off-line management of storage devices, such as assigning new devices, removing devices, etc.  This includes the cache device, because really it's a cache pool now: you have to be able to add devices to the pool, remove devices, etc., just like the array.  It keeps things much simpler if this kind of work can happen with all devices unmounted.

 

This feature suggestion is really an implementation suggestion.  The feature that's wanted is a way to minimize what's stored on the flash.

Link to comment

What about something like unionfs or aufs?

What for?  BTW I would probably never add those filesystems.

 

The feature that's wanted is a way to minimize what's stored on the flash.

 

unionfs or aufs would allow two filesystems to look like one, thus minimizing what is stored on the flash while still making it look like it's on the flash.

 

With something like unionfs or aufs and root on tmpfs, the whole root tree can be made to have a transient ramdisk version and/or a full operating-system version.
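As a rough sketch of the idea (assuming an aufs-capable kernel, which stock unRAID doesn't ship; the branch paths here are made up for illustration): a writable branch on the cache disk layered over a read-only branch on the flash makes both appear as a single tree, with new writes landing on the cache.

mkdir -p /mnt/union /mnt/cache/persist
# Writes go to the cache branch; reads fall through to the flash branch.
mount -t aufs -o br=/mnt/cache/persist=rw:/boot/extra=ro none /mnt/union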

 

I tested it years ago, and it worked for me.  I'm sure they've come a long way since then.  It was just a quick idea.

Link to comment
  • 8 months later...

I know this is necroposting, and might not get noticed, but -shrug- I don't think I've ever actually looked at the development roadmap stuff before, so today was my first time seeing this!

 

Anyways...

 

I have a slightly odd config for my cache drive that does exactly what is being requested here.

 

While this was originally only intended to add a swap partition on a machine with only 512MiB of RAM, I figured it would be useful to have some non-array, non-flash storage too, even if it were just so I could keep the write count down on the USB stick...  I've been running with the same essential configuration since one of the Unraid 5 betas (or maybe it was an RC... it's a long time ago anyway)...

 

My config? Well, my 300GiB cache drive is partitioned into 3... sdX1 is the cache portion, sdX2 is a btrfs partition (but was ext4 under Unraid 5...) and sdX3 is a 4GiB swap partition...

 

NOTE: I'd better point out that I'm running 6b6, so I don't know if this configuration works with 6b15 (current at time of writing), although I can't see any reason why it shouldn't.

 

So far as I remember, to get to this state, I:

Note: This is deliberately kinda vague... if you don't know how to determine what device your cache drive is, or how to use fdisk, mkfs.* and mkswap, you really shouldn't be doing this!!! Hopefully, at least having to read up on those particulars will either give you enough knowledge to be confident enough to proceed at your own risk (obv) or scare you into running away from this idea very quickly!

  • set up the cache drive using the webgui so that it was usable
  • used fdisk to modify the partition table. I
    • recorded the partition details for the webgui-created cache partition
    • deleted the webgui-created partition
    • created my three partitions, working backwards from the end of the disk... so, creating sdX3 first (with start = (disk.end - 4GiB)), then sdX2 (with start = (sdX3.start - 32GiB))... thus, when I came to recreate the cache partition (sdX1), I recreated it with the same start sector the webgui had used (on my disk this meant start=64... it might be different on other/newer/bigger/AF drives), and I just let it take up all the remaining space.
      (Admittedly, sdX1.start *might* not matter, but it's probably best to stick with what the webgui used)
  • ran mkfs and mkswap... my two 'extra' partitions have labels ('ExtPartition' and 'SwapPartition') so they are device-detection agnostic.
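A condensed sketch of those steps (sdX and the exact sector numbers depend on your disk, as noted above):

fdisk /dev/sdX                        # delete the webgui partition, then create sdX3, sdX2, sdX1 as described
mkfs.btrfs -L ExtPartition /dev/sdX2  # label the extra partition so it can be found regardless of device order
mkswap -L SwapPartition /dev/sdX3     # likewise for the swap partition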

 

/boot/config/go is configured to start swap, create a mountpoint, update /etc/fstab, and mount sdX2:

swapon -L SwapPartition    # enable swap by its label
mkdir -p /mnt/ext          # mountpoint for the extra partition
# root is a ramdisk, so the fstab entry must be re-added on every boot
echo "/dev/disk/by-label/ExtPartition /mnt/ext btrfs auto 0 0" >> /etc/fstab
mount /mnt/ext
chmod 755 /mnt/ext

 

If memory serves, this next bit is somewhat extraneous on an Unraid *5* system, as there is a 'swapoff -a' somewhere in the shutdown scripts, but I figured it was a good idea to have it anyway, as I didn't think swap partitions were a supported configuration for Unraid, so the 'swapoff -a' *might* disappear... I haven't even looked to see if there is a 'swapoff -a' somewhere in the Unraid 6 shutdown scripts...

 

Anyway, /boot/config/stop has

swapoff -L SwapPartition    # release swap by label before the system goes down

 

/boot/config/stop deliberately doesn't attempt to unmount /mnt/ext though... I can't remember if this is because I was running the 'powerdown' script, or whether Unraid provides a "straggler killer" when it's doing the "unmount all"s...

 

Presumably, on a system that already has a cache disk configured (so long as it has a few cylinders' worth more free space than you want for your non-array partition(s)...), you would just tar the cache content to a file on your array before deleting the cache partition, then untar it back once you have created the new (smaller) cache partition FS...
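Something along these lines (a sketch only; the array path and archive name are made up for illustration):

tar -czf /mnt/disk1/cache-backup.tar.gz -C /mnt/cache .   # stash the cache content on the array
# ...delete the cache partition, recreate it smaller, make the new FS...
tar -xzf /mnt/disk1/cache-backup.tar.gz -C /mnt/cache     # restore onto the new cache FS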

 

Back when my system was running Unraid 5, I had some scripting in place that would install packages from a location on the 'ext' partition, patch the running root filesystem with all the config files, etc., then add in things like NUT, Apache and PHP, before issuing SIGHUPs (etc.) to get running processes to reread configs and finally calling the startup scripts for the added daemons... so it kind of mimicked a read/write unioned FS, but the volatile layer was simply a quick'n'dirty combination of a list of files and

tar -cvf /mnt/ext/preserve.tar -T /mnt/ext/preserve.files    # archive every file listed in preserve.files

This *was* originally even lower tech, just comprising a bash script that looped over the file list doing 'cp's. Since I'd 'hijacked' my webserver hardware for Unraid, I also had Apache's DocumentRoot, vhost roots, etc. on 'ext', as it was serving stuff that needed to be available regardless of the array status.
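That lower-tech version would have looked something like this (a reconstruction from the description above, not the original script):

mkdir -p /mnt/ext/preserve
while read -r f; do
    cp --parents -a "$f" /mnt/ext/preserve/   # copy each listed file, keeping its full path
done < /mnt/ext/preserve.files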

 

When I jumped from 5-stable to 6b6, I scrapped all my customisations of the boot sequence, etc. When I was setting things up cleanly for 6, I decided not to revisit my faked-union code until at least 6-rc1... the principles my old setup worked on haven't changed, though; I just didn't want to find I'd relied on something in an early 6 beta that was removed closer to 6-stable. I do still use the same bit of scripting to bring up swap and ExtPartition, and I do still have a few boot-time file customisations in place; I've just reverted to a basic bit of bash to copy them over (i.e., a few 'cp' commands).

 

As for how it behaves, well:

  • The cache drive shows up in the webgui, and the free space count is accurately reported (obviously this is only for the cache partition, not the whole drive)
  • The webgui shows the 'space used' as (drive.size - sdX1.free), so it's an accurate reflection of the space that isn't available in the cache partition, rather than reporting the space used on the cache partition like 'df' would... this is actually what I think you want to know from the webgui, and was a pleasant surprise, bearing in mind this is a hack... I'd expected to see a more 'df'-like "Used" value, which would then make (free + used != size)
  • The cache partition is mounted on /mnt/cache by Unraid and (so far as I can tell) works as advertised... I've never noticed it fail to do something it's supposed to
  • I have working swap, even if the array is stopped (as shown in 'top')... back when I first set this all up I would even *use* some of it... but I've got somewhat newer hardware and more RAM now, so this is much less frequent
  • I have a ~32GiB non-array partition that I use for storing stuff that I want available even without the array being up

The only thing that's really missing is that the webgui (obviously) doesn't report the existence of SwapPartition or ExtPartition, and therefore there is no size/used/free information for the ExtPartition filesystem... but I'm not really sure how partitions would fit into the Unraid device list...

Link to comment

This feature suggestion is really an implementation suggestion.  The feature that's wanted is a way to minimize what's stored on the flash.

 

Agree.  I haven't tried this, but I know some folks who have virtualized UnRAID on ESXi boot it from a hard drive, and just have the flash connected so the key file is seen okay.  If a system is set up like this, can that same hard drive be used for Docker containers?  And will UnRAID find the config and addon folders if they're on the hard drive, or do they still need to be on the flash?

 

In essence, what I'm thinking of is something along the lines of "If the boot device is a hard drive, leave everything there -- just look to the USB flash drive for key confirmation".

 

Link to comment
  • 5 years later...

I'd rewrite this as: the ability to individually start/stop/configure pools (in the context of the new multi-pool feature of 6.9).

 

It's really annoying when you want to work on, say, one cache pool or the like, and everything has to stop (especially Docker and the VMs, when their storage is elsewhere).

Link to comment
