little_miss_goth

Members

Converted

  • Posts: 3
  • Joined
  • Last visited
  • Gender: Undisclosed


  1. I know this is necroposting, and might not get noticed, but -shrug- I don't think I've ever actually looked at the development roadmap stuff before, so today was my first time seeing this! Anyway... I have a slightly odd config for my cache drive that does exactly what is being requested here. While it was originally only intended to add a swap partition to a machine with only 512MiB of RAM, I figured it would be useful to have some non-array, non-flash storage too, even if it were just so I could keep the write count down on the USB stick. I've been running with the same essential configuration since one of the Unraid 5 betas (or maybe it was an RC... it's a long time ago anyway).

My config? Well, my 300GiB cache drive is partitioned into three: sdX1 is the cache portion, sdX2 is a btrfs partition (it was ext4 under Unraid 5) and sdX3 is a 4GiB swap partition.

NOTE: I'd better point out that I'm running 6b6, so I don't know if this configuration works with 6b15 (current at time of writing), although I can't see any reason why it shouldn't.

So far as I remember, to get to this state, I did the following. (Note: this is deliberately kinda vague... if you don't know how to determine which device your cache drive is, or how to use fdisk, mkfs.* and mkswap, you really shouldn't be doing this!!! Hopefully, at least having to read up on those particulars will either give you enough knowledge to be confident enough to proceed at your own risk (obviously), orrrrr scare you into running away from this idea very quickly!)

  • Set up the cache drive using the webgui so that it was usable.
  • Used fdisk to modify the partition table, recording the partition details for the webgui-created cache partition.
  • Deleted the webgui-created partition.
  • Created my three partitions, working backwards from the end of the disk: sdX3 first (with start = disk.end - 4GiB), then sdX2 (with start = sdX3.start - 32GiB). When I came to recreate the cache partition (sdX1), I gave it the same start sector the webgui had used (on my disk that meant start=64; it might be different on other/newer/bigger/AF drives) and just let it take up all the remaining space. (Admittedly, sdX1.start *might* not matter, but it's probably best to stick with what the webgui used.)
  • Ran mkfs and mkswap. My two 'extra' partitions have labels ('ExtPartition' and 'SwapPartition') so they are device-detection agnostic; rough commands are sketched just below.
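Just to give a rough idea of that last step (only a sketch, not my exact history; the sdX2/sdX3 device names are placeholders for whatever your own partitions turn out to be):

mkfs.btrfs -L ExtPartition /dev/sdX2    # the non-array data partition (mkfs.ext4 back when this was Unraid 5)
mkswap -L SwapPartition /dev/sdX3       # the swap partition
blkid /dev/sdX2 /dev/sdX3               # check the labels took, since everything below mounts by label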
/boot/config/go is configured to start swap, create a mountpoint, update /etc/fstab and mount sdX2:

swapon -L SwapPartition
mkdir /mnt/ext
echo "/dev/disk/by-label/ExtPartition /mnt/ext btrfs auto 0 0" >> /etc/fstab
mount /mnt/ext
chmod 755 /mnt/ext

If memory serves, this next bit is somewhat extraneous on an Unraid *5* system, as there is a 'swapoff -a' somewhere in the shutdown scripts, but I figured it was a good idea to have it anyway; I didn't think swap partitions were a supported configuration for Unraid, so the 'swapoff -a' *might* disappear, and I haven't even looked to see if there is a 'swapoff -a' somewhere in the Unraid 6 shutdown scripts. Anyway, /boot/config/stop has:

swapoff -L SwapPartition

/boot/config/stop deliberately doesn't attempt to unmount /mnt/ext, though... I can't remember whether that's because I was running the 'powerdown' script, or whether Unraid provides a "straggler killer" when it's doing the "unmount all"s.

Presumably, on a system that already has a cache disk configured (so long as it has a few cylinders' worth more free space than you want for your non-array partition(s)), you would just tar the cache content to a file on your array before deleting the cache partition, then untar it back once you have created the new (smaller) cache partition and filesystem (there's a rough sketch of what I mean at the end of this post).

Back when my system was running Unraid 5, I had some scripting in place that would install packages from a location on the 'ext' partition, patch the running root filesystem with all the config files, etc., and add in things like NUT, Apache and PHP, before issuing SIGHUPs (etc.) to get running processes to reread configs and finally calling the startup scripts for the added daemons. So it kind of mimicked a read/write unioned FS, but the volatile layer was simply a quick'n'dirty combination of a list of files and:

tar -cvf /mnt/ext/preserve.tar -T /mnt/ext/preserve.files

This *was* originally even lower tech, just comprising a bash script that looped over the file list doing 'cp's. Since I'd 'hijacked' my webserver hardware for Unraid, I also had Apache's DocumentRoot, vhost roots, etc. on 'ext', as it was serving stuff that needed to be available regardless of the array status.

When I jumped from 5-stable to 6b6, I scrapped all my customisations of the boot sequence, etc. When I was setting things up cleanly for 6, I decided not to revisit my faked-union code until at least 6-rc1... the principles my old setup worked on haven't changed, I just didn't want to find I'd relied on something in an early 6 beta that got removed closer to 6-stable. I do still use the same bit of scripting to bring up swap and ExtPartition, and I do still have a few boot-time file customisations in place; I've just reverted to a basic bit of bash to copy them over (i.e. a few 'cp' commands).

As for how it behaves:

  • The cache drive shows up in the webgui, and the free space count is accurately reported (obviously this is only for the cache partition, not the whole drive).
  • The webgui shows the 'space used' as (drive.size - sdX1.free), so it's an accurate reflection of the space that isn't available in the cache partition, rather than reporting the space used on the cache partition like 'df' would. This is actually what I think you want to know from the webgui, and was a pleasant surprise bearing in mind this is a hack; I'd expected to see a more 'df'-like "Used" value, which would then make (free + used != size).
  • The cache partition is mounted on /mnt/cache by Unraid and (so far as I can tell) works as advertised... I've never noticed it fail to do something it's supposed to.
  • I have working swap, even if the array is stopped (as shown in 'top'). Back when I first set this all up, I would even *use* some of it, but I've got somewhat newer hardware and more RAM now, so that's much less frequent.
  • I have a ~32GB non-array partition that I use for storing stuff that I want available even without the array being up.

The only thing that's really missing is that the webgui (obviously) doesn't report the existence of SwapPartition or ExtPartition, and therefore there is no size/used/free information for the ExtPartition filesystem... but I'm not really sure how partitions would fit into the Unraid device list anyway.
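As for the 'tar it off and back again' idea above, I mean something along these lines (just a sketch: /mnt/disk1 is a placeholder for wherever on your array you have enough free space):

tar -cvf /mnt/disk1/cache-backup.tar -C /mnt/cache .
# ...stop the array, repartition/reformat the cache drive as described above, start the array again...
tar -xvf /mnt/disk1/cache-backup.tar -C /mnt/cache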
  2. Hi, OK, sooo... just in case anyone has read my last post... erm... so... obviously I don't use the webgui very much... turns out that just issuing killall -g cache_dirs can end up b0rking emhttp :-/ Apart from that, it seems to work perfectly! :'(

I only came across this today because I was trying to add scripts to make cache_dirs go up and down with the array rather than from the go and stop scripts... and somewhere in my experimenting I pressed the 'stop array' button, and the moment I did, I ran slap bang into a completely unresponsive webgui. When the array is stopped as part of system shutdown you're expecting the webgui to stop working anyway, and because I tend to shut down from the shell, not the webgui, it's even less apparent.

I think I've worked out what's going on, though: when it's started by scripts that live in /usr/local/emhttp/plugins/*some_plugin*/event/ , cache_dirs runs as part of the same process group as emhttp, so when you issue killall -g cache_dirs to kill the process group, it takes out emhttp at the same time, what with them being in the same group. If the processes are stopped manually instead (using ps and kill to stop cache_dirs and find in sequence, lowest PID to highest), emhttp doesn't b0rk.

I'd been trying to figure out what to do about it for a good few hours, and had even written the better part of a script that would carefully kill, one by one, the main cache_dirs process, its child cache_dirs processes and the find processes (and even check the at queue for scheduled cache_dirs jobs). And then I fell across setsid, which I'd forgotten allll about...! Turns out that if you start cache_dirs by doing

setsid /path/to/cache_dirs -w ...

then cache_dirs gets its own process group, and killall works as expected, even when stopping the array from array event scripts :-)

So sorry if anyone has run into this. -Jo
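PS: for anyone wanting to do the same, the start/stop pair ends up looking roughly like this (only a sketch; /boot/tools/cache_dirs is just where *I* keep the script, and the options are whatever you'd normally pass):

# array-start event script: give cache_dirs its own session/process group
setsid /boot/tools/cache_dirs -w

# array-stop event script: ask it to quit, then make sure the whole group is gone
/boot/tools/cache_dirs -q
sleep 2
killall -s SIGTERM -g cache_dirs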
  3. Hi all, OK, so this is my first post in the forums, so please be gentle! First up, many thanks to Joe for the cache_dirs script; it's most definitely a Good Thing to have on an Unraid system :-)

I've been running Unraid v5 since about 5b5 (I think), and have been on the stable 5.0 release since not long after it came out. I *was* running a combination of 2TB and 3TB SATA array drives for a total of 17TB, with a non-array 300GB IDE disk providing me a swap partition (not strictly necessary with 4GB of RAM, I know, but I've been using Linux since 1996 and old habits die hard!) and somewhere to keep data that didn't need to be protected, because I have it burnt to DVD. When I set the machine up for Unraid, I went down the "oh, that could be useful" route, so I had unmenu, powerdown, various additional unmenu packages (like htop, etc.) and a rather customised way of launching them so that I could store the majority of them on the non-array disk and kind of inject them into the live Unraid filesystem, inspired by the way Tiny/Micro Core Linux can (or at least could when I last used it) be configured to do package persistence.

Anyway, a couple of months back I had a combination failure of an array drive and a molex/SATA adapter (with impressive/scary amounts of smoke billowing from the machine!)... the result being a completely b0rked 3TB drive and some very melted/charred wires. The long and the short of it is an upgraded array with 25TB on 4TB parity, with the 300GB drive now set up as cache (although not actually *caching* any of the shares... it's doing the same job as before, even down to having a sneaky 2nd partition set up for swap; it's just that it's now formatted with btrfs so that I have the option of playing with Docker if I fancy), a bigger PSU with lots of SATA power connections (eliminating what would have been 6 molex/SATA adapters for the now 10 SATA drives), and Unraid 6b6 with powerdown 2.04 as the only 'extra' added to the stock Unraid setup (except for the minute tweak of moving emhttp onto port 8000).

While I'd been poking around the forums to find out as much as I could about the changes in powerdown 2.x *before* I tried it, I came across the cache_dirs script and, since I've got a reasonable amount of data on my array in various shares, it seemed like it would be a very nice addition. I don't run my server 24/7 (mainly because it's audible from where I sleep, but also to keep electricity use down), so there are only really two possibilities when it's powered up: I'm either actively doing something that uses the array or I'm not (because I'm either about to or have already done so). So I have my spindown time set to 15 minutes, which suits this usage pattern right up to the point where what I want to do is quickly check whether I've already got something on the array, or hunt for a specific thing that I know I've got stored on there *somewhere*... at that point it becomes a right royal PITA, and 'quickly' rapidly becomes 'slowly' as multiple spinup delays have to happen.

My Unraid system is my main storage for *everything*... so it has a Software share with all *sorts* of software installers, updaters, boot images, etc. from/for all *sorts* of OSs on it, a TV share with DVR'd movies and TV shows, a Media share carrying music and e-books, and a final share that acts as my archive of personal *stuff*... it's got A-Level and uni notes, software designs, photos, etc., etc. that have accumulated over some twenty years of having at least one PC.
This last share is by far the least organised and most messy of the four, and it's also the one that suffers the worst from spinup delays when I'm trying to find things, as it's distributed across pretty much every disk in the array (and yes, I know amalgamating it onto one disk would help; I just never seem to find the time to shuffle it all around!).

So yeah, I've only been using it a couple of days, but cache_dirs is *brilliant* for me: it has mostly eliminated those spinup delays and made me a very happy bunny! :-) However, I did come across a little niggle that I'd like to report.

FYI: I'm running cache_dirs from my /boot/config/go script as

/boot/tools/cache_dirs -w -u -B

I am (purposefully) not using a maxdepth value with cache_dirs, because my archive and software shares are the trees that will benefit me the most from being cached (they're my "I know I've got that somewhere" go-to locations) and they are both quite deep and range from poorly organised to not organised at all! This unrestricted depth means that the first cache_dirs scrape takes a pretty long time (ballpark 20 mins). Not an issue in and of itself, but because that first scrape takes so long, I come across something that I suspect most people won't: if I attempt to use cache_dirs -q in preparation for shutting down the Unraid system while the first scrape is still in progress (i.e. there is a 'find -noleaf' process running that was spawned by cache_dirs), only the cache_dirs process whose PID is stored in the LCK file ends, and I still have a cache_dirs process and a find process listed when I do:

ps -ef | grep "\(find\|cache_d\)" | grep -v grep

Attempting cache_dirs -q a second time outputs the statement "cache_dirs is not currently running" (presumably because the LCK file no longer exists), even though ps shows it is; indeed, the cache_dirs process that is still running shows in ps as having been re-parented to PID 1, since it was orphaned when the process referenced in the LCK file ended.

Keeping an eye on the process list, the find that was running when the 'quit' was signalled runs to completion and another find starts, so it looks to me as though the cache_dirs process that is still running will continue until it completes, regardless of the 'quit' having been signalled. But... I got bored waiting for that to happen, so I decided to kill cache_dirs and find myself. Just killing the orphaned cache_dirs process doesn't stop the find, so I ended up using

killall -g cache_dirs

which immediately stops the orphaned cache_dirs process group (i.e. cache_dirs and its child find(s)).

Now, admittedly this is a less-than-usual use pattern, but from my reading about powerdown I know there is at least one user who experiences frequent power outages (i.e. multiple outages in short succession); when their power goes out, the powerdown script is invoked (via UPS signalling) to gracefully shut down the array and machine, so I don't think it's too far-fetched that someone could run into this issue by being unlucky enough to suffer a power outage during cache_dirs' initial scrape.
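(If you want a quick way to check whether anything is still hanging around before powering off, wrapping the ps line above in a test does the job; nothing clever, and it has the same caveat that it will also match any unrelated find processes:)

if ps -ef | grep "\(find\|cache_d\)" | grep -v grep > /dev/null ; then
    echo "something matching cache_dirs/find is still running"
fi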
I am now using the following bit of scripting to stop cache_dirs on my own system (so I don't end up with a modified cache_dirs script to maintain):

#!/bin/bash
cache_dirs -q
sleep 2
killall -s SIGTERM -g cache_dirs

But if Joe felt it worthwhile, I think the 'quit' handler could simply be tweaked to use killall -g "$program_name" instead of kill "$lock_pid".

I thought I should probably report this, as it's an issue someone might run into (albeit infrequently) that could end up with an array disk being ungracefully unmounted, thus triggering a parity check on the next boot for no obvious or easily identified reason.

Edited to include the possible tweak, and again to make it use $program_name rather than $lock_pid.
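(To make that suggestion a bit more concrete: as I understand it, the quit handling currently does something along the lines of the first line below, and the idea is that it would do the second instead. This is paraphrased from memory, not lifted from the actual script.)

kill "$lock_pid"             # only ends the process whose PID is in the LCK file
killall -g "$program_name"   # ends the whole cache_dirs process group, child finds and all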