
Best process to turn one of my array drives into a cache drive


tillkrueger


after one of my oldest 2TB drives started to give me some weird permissions problems i started copying most (if not all) of its data onto another drive in my array, in preparation for "wiping" the drive empty and "re-formatting" it to eliminate the files the permissions problems didn't allow me to delete.

 

after i was done copying the data onto another drive i decided to replace this old 2TB drive with a new 3TB drive and use the old 2TB as a place to offload movies that are taking up space on my Mac (nothing important, but stuff i don't wanna delete just yet).

 

i guess i didn't think ahead when i put in the 3TB drive and started up the array, as those same files with the same permission errors have now been rebuilt onto this new 3TB drive...so my questions now are these:

 

> if i want to empty this new drive entirely, what terminal command do i need to use to properly "format" it (make it appear totally empty)?

> once it's empty, what is the best process to remove it from the protected array and designate it as a cache drive?

 

i've been wanting to use the cache drive feature for a long time now, but haven't gotten around to trying this until now, so i hope someone can give me a brief step-by-step to accomplish those two tasks above.

Link to comment

If this drive is not XFS currently, then you just stop the array, change it to XFS, start the array, and it will get formatted to XFS. Formatting a drive does not mean deleting all of its files. Formatting a drive means creating an empty file system. So if you change it to XFS from some other file system, it will get formatted to XFS.
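
If you are curious what that amounts to under the hood, it is roughly the equivalent of running mkfs against the drive's partition. Treat this as illustration only (the webGUI does it for you, and /dev/sdX1 is just a placeholder for the drive's data partition):

# illustration only -- the webGUI performs the equivalent of this when it formats
# /dev/sdX1 is a placeholder for the drive's data partition
umount /dev/sdX1        # the file system must not be mounted
mkfs.xfs -f /dev/sdX1   # create an empty XFS file system (destroys whatever was on the drive)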

 

If you want this drive to be a cache drive instead of an array drive, then you will have to do New Config without the drive in the array and let it rebuild parity. Then you can assign it as cache.

 

You can do the format to XFS before or after you remove it from the array and assign it as cache.

 

All of this can be done in the webGUI. No command line needed.

 

Link to comment

thanks trurl...

 

did it as you suggested, even though i slapped my forehead when i realized that New Config erased my disk config (duh!)...being off premises and away for over a week, i got a bit irritated with myself for having been so "unknowing" about this step (which i have taken a few times before but apparently forgotten about), but then i found a recent backup of one of my syslogs and was able to re-construct the disk order from reading it.

 

i do have some more questions, though:

 

parity is now re-syncing and the new drive is selected as cache drive, but how do i re-format it as XFS? it didn't seem to give me that option *before* i chose it (probably bc it was already formatted as reiserfs) and i don't see any place in the Web GUI to do so now...if i do need to go into a telnet session, how would i delete all the old data from this cache drive and re-format it to XFS? if i can do so from the Web GUI, where and how?

 

does the cache drive "just work" (meaning, i copy to my array's User Share as i always do, the data goes to the cache drive first, then gets copied to the array at some point, probably configurable somewhere), or do i have to re-create the same User Share folder structure on the cache drive in order for it to "know" what goes where?

 

i really did scour the Wiki and tried to answer these myself, but failed to find these answers...sorry.

 

 

Link to comment

> parity is now re-syncing and the new drive is selected as cache drive, but how do i re-format it as XFS? it didn't seem to give me that option *before* i chose it (probably bc it was already formatted as reiserfs) and i don't see any place in the Web GUI to do so now...if i do need to go into a telnet session, how would i delete all the old data from this cache drive and re-format it to XFS? if i can do so from the Web GUI, where and how?

You can only change the format of a drive if the array is stopped. In that state, clicking on the drive in the Main tab allows you to change the format to a new one. When you then start the array you will be given the chance to format the drive to the new format. Do not forget that doing so erases the current contents, so make sure you first copy off anything on the drive that you still need.

 

> does the cache drive "just work" (meaning, i copy to my array's User Share as i always do, the data goes to the cache drive first, then gets copied to the array at some point, probably configurable somewhere), or do i have to re-create the same User Share folder structure on the cache drive in order for it to "know" what goes where?

 

> i really did scour the Wiki and tried to answer these myself, but failed to find these answers...sorry.

In principle it should 'just work'.   

 

Any share that you want to use the cache drive must have its cache setting set to 'yes'. There is a 'no' setting you can use for shares where you never want files to go via the cache drive. And if there are any shares whose files you never want moved off the cache drive, they need to be set to 'only'.
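
If you prefer to check this from the console rather than the webGUI, the per-share settings are kept in small config files on the flash drive. I am going from memory of a v6 box here, so treat the exact path and key name as an assumption, and "Movies" is just an example share name:

# sketch: inspect a share's cache setting from a telnet session
# path and key name from memory of a v6 box -- verify on your system
grep shareUseCache /boot/config/shares/Movies.cfg
# shareUseCache="yes"   <- "yes", "no" or "only", matching the webGUI setting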

 

The default setting for the mover (which moves files from the cache to the data drives) is to run it once a day at 3:40 AM, although this is user configurable both for frequency and time.
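
That schedule ends up as an ordinary cron entry; the one the webGUI generates for the default looks like this (you can see it being written in the syslog excerpt further down the thread):

# default mover schedule: minute hour day-of-month month day-of-week
40 3 * * * /usr/local/sbin/mover |& logger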

Link to comment

I think the default is for a share to be set to Use cache: Yes.

 

You would set a share to Use cache: Only if you want to store docker and other app data on cache. Many people do this because if you store app data on a parity protected disk, it will keep both that disk and parity active.

 

It is very important to use the cache-only setting or your apps will break when mover moves them off cache.

Link to comment

so if i understood you correctly, itimpi, i can stop the array once parity is synced, then select the cache drive and choose XFS for the format? do i need to un-assign it from being a cache drive first to get this option, or can i do this with it being an active cache drive? i am aware that the old data on it will be erased (and in fact *want* that), so that would be a nice way to kill two flies with one swat (as we say in Germany).

 

and thanks for clarifying, trurl...i'll need to wrap my head around the whole docker apps thing soon...seems very cool what is possible now...i already installed the ownCloud app and it appears to be working, but i will have some questions about that when i come up for air again...still in the throes of a big production that i'm about to finish.

 

so many questions, so little time.

 

thanks guys!!

Link to comment

No you do not need to unassign the drive from being a cache drive.  You just go into its settings and change the format type from the dropdown.  You would not have been allowed to change that type while the array is active.    You could also have set the format type when the disk was initially assigned as a cache disk instead of leaving it at the default of 'auto' and that would force it to be treated as the selected format.

Link to comment

damn, i was so close...for safety i did one final Sync of the old contents of the Cache drive to my User Share, and when i wanted to stop the array after that it just hung there for an unusually long time until i realized that my Sync software was still running and probably didn't allow some drives to unmount...next the Web GUI couldn't be reached at all anymore...i went into a Telnet session and tried:

 

killall emhttp

/usr/local/sbin/emhttp &

 

but the Web GUI refuses to come back up now...since i have to leave early tomorrow morning and won't have physical access to the server for over a week, i'm afraid of doing anything that will either destroy parity again or keep the server from being reachable...given that one or more of the drives are probably not un-mounting right now, what's my best chance of getting the Web GUI back up? if i telnet in and issue a "reboot" (or is it "restart"?), i run the risk of it just hanging in the process until that drive or those drives are un-mounted, which wouldn't happen just like that, right?

 

what should i do to do a clean restart (if that is necessary) or get the Web GUI back up to finish what i started (stopping the array, un-assigning the cache drive, changing the default disk format and re-formatting the cache drive to XFS)?
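
in case it helps, the kind of check i had in mind before issuing anything drastic was roughly this (going from memory on the commands, and the mount point is just an example):

# sketch: see what is still holding a mount before trying to stop the array or reboot
fuser -vm /mnt/cache     # list processes with files open on that file system
umount /mnt/cache        # retry the unmount once the offending process has been stopped
# only then issue "reboot"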

 

if it ain't one thing...

Link to comment

well, i followed all of the instructions (and noted the potential pitfalls) in this thread: http://lime-technology.com/forum/index.php?topic=27168.0

 

and then issued the "reboot" command when i thought that all was safe, fingers crossed...happy to say the server is back up and the Web GUI is exactly where i left it (Array stopped)...on we go...

 

but this is where i fail to see how the Cache drive can be forced to re-format with XFS? what is the exact process to wipe out that drive and make it an empty XFS drive? sorry if i'm being a bit "slow" to understand here.

Link to comment

If the drive is not already XFS, then with the array stopped, you change it to XFS and when you restart it will format.

 

From the main page click on the drive and it will take you to a page for that drive. One of the settings is File system type, with a dropdown to choose.

Link to comment

oooh! it wasn't until this last post of yours, trurl, that i realized that i had to literally click *on* the drive name ("cache") to get to the page you've been talking about...the whole time i kept looking for a drop-down under the "FS" column, or something like that.

 

anyway, when i did find the drop-down on the page you get to after clicking the drive's name link, i saw that it was already set to XFS (even though the "FS" column clearly states "reiserfs" for the cache drive, and all the others)...when i went back to start the array again it then did say "xfs", but under the "Used" column it now says "Unmountable"...i also don't see its disk share in my Finder...what could that be about?

 

just went back to check, and that page clearly shows that SMB Security Settings are set to Export "Yes" and "Public". (AFP is set to "No", since i have had no love from AFP since Apple changed something about their implementation of it in 10.9, i think?)

 

but when i started up the array again, shouldn't there have been some process that takes place while re-formatting that drive from reiserfs to xfs? it seemed to be instant, but since it also says "Unmountable" and the disk share is not visible, there clearly seems to be some sort of problem, no?

 

 

Link to comment

when i click the "Log" button in the Web GUI i do get some info that seems to indicate that unRAID can't find a valid file-system on that disk:

 

Jun 7 22:20:59 unRAID logger: missing codepage or helper program, or other error

Jun 7 22:20:59 unRAID logger: In some cases useful info is found in syslog - try

Jun 7 22:20:59 unRAID logger: dmesg | tail or so

Jun 7 22:20:59 unRAID logger:

Jun 7 22:20:59 unRAID emhttp: shcmd: shcmd (425): exit status: 32

Jun 7 22:20:59 unRAID emhttp: mount error: No file system (32)

Jun 7 22:20:59 unRAID emhttp: shcmd (426): rmdir /mnt/cache

Jun 7 22:20:59 unRAID emhttp: shcmd (427): sync

Jun 7 22:21:00 unRAID emhttp: shcmd (428): mkdir /mnt/user0

Jun 7 22:21:00 unRAID emhttp: shcmd (429): /usr/local/sbin/shfs /mnt/user0 -disks 16777214 -o noatime,big_writes,allow_other |& logger

Jun 7 22:21:00 unRAID emhttp: shcmd (430): mkdir /mnt/user

Jun 7 22:21:00 unRAID emhttp: shcmd (431): /usr/local/sbin/shfs /mnt/user -disks 16777215 2048000000 -o noatime,big_writes,allow_other -o remember=0 |& logger

Jun 7 22:21:00 unRAID emhttp: shcmd (432): cat - > /boot/config/plugins/dynamix/mover.cron <<< "# Generated mover schedule:#01240 3 * * * /usr/local/sbin/mover |& logger#012"

Jun 7 22:21:00 unRAID emhttp: shcmd (433): /usr/local/sbin/update_cron &> /dev/null

Jun 7 22:21:00 unRAID emhttp: shcmd (434): :>/etc/samba/smb-shares.conf

Jun 7 22:21:00 unRAID emhttp: shcmd (435): cp /etc/netatalk/afp.conf- /etc/netatalk/afp.conf

Jun 7 22:21:00 unRAID avahi-daemon[388]: Files changed, reloading.

Jun 7 22:21:00 unRAID avahi-daemon[388]: Files changed, reloading.

Jun 7 22:21:00 unRAID avahi-daemon[388]: Files changed, reloading.

Jun 7 22:21:00 unRAID avahi-daemon[388]: Files changed, reloading.

Jun 7 22:21:00 unRAID emhttp: Restart SMB...

Jun 7 22:21:00 unRAID emhttp: shcmd (436): killall -HUP smbd

Jun 7 22:21:00 unRAID emhttp: shcmd (437): cp /etc/avahi/services/smb.service- /etc/avahi/services/smb.service

Jun 7 22:21:00 unRAID avahi-daemon[388]: Files changed, reloading.

Jun 7 22:21:00 unRAID avahi-daemon[388]: Service group file /services/smb.service changed, reloading.

Jun 7 22:21:00 unRAID emhttp: shcmd (438): pidof rpc.mountd &> /dev/null

Jun 7 22:21:00 unRAID emhttp: shcmd (439): /etc/rc.d/rc.atalk status

Jun 7 22:21:00 unRAID emhttp: Restart AFP...

Jun 7 22:21:00 unRAID emhttp: shcmd (440): /etc/rc.d/rc.atalk reload |& logger

Jun 7 22:21:00 unRAID emhttp: shcmd (441): cp /etc/avahi/services/afp.service- /etc/avahi/services/afp.service

Jun 7 22:21:00 unRAID avahi-daemon[388]: Files changed, reloading.

Jun 7 22:21:00 unRAID avahi-daemon[388]: Service group file /services/afp.service changed, reloading.

Jun 7 22:21:00 unRAID emhttp: Starting Docker...

Jun 7 22:21:00 unRAID kernel: BTRFS info (device loop0): disk space caching is enabled

Jun 7 22:21:00 unRAID kernel: BTRFS: has skinny extents

Jun 7 22:21:00 unRAID logger: Resize '/var/lib/docker' of 'max'

Jun 7 22:21:00 unRAID logger: starting docker ...

Jun 7 22:21:00 unRAID kernel: BTRFS: new size for /dev/loop0 is 10737418240

Jun 7 22:21:01 unRAID avahi-daemon[388]: Service "unRAID" (/services/smb.service) successfully established.

Jun 7 22:21:01 unRAID avahi-daemon[388]: Service "unRAID-AFP" (/services/afp.service) successfully established.

Jun 7 22:21:03 unRAID logger: ownCloud: started succesfully!

Jun 7 22:29:35 unRAID emhttp: /usr/bin/tail -n 42 -f /var/log/syslog 2>&1

 

where are the full syslogs saved to in v6? i don't seem to be able to find one on my Flash drive.

Link to comment

No version of unRAID has ever stored syslog on flash.

 

If you're on rc4, go to Tools - Diagnostics. If on an earlier version, go to Tools - System Log. Depending on your version, there may also be a Download button at Tools - System Log. If not you can copy and paste it into a text file and zip it. If all else fails on getting a syslog, see v6 help in my sig.
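
If the webGUI is unreachable you can also grab it from the console; the live log is at /var/log/syslog (as the tail command in your excerpt shows), so something like this would put a copy on the flash where it survives a reboot:

# sketch: save a copy of the current syslog to the flash drive
cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt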

Link to comment

 

> anyway, when i did find the drop-down in that page you get to after clicking the drive's name link, i saw that it was already set to XFS (even though the "FS" column clearly states "reiserfs" for the cache drive (and all others), but when i went back to start the array again it then did say "xfs", but under the "Used" column it now says "Unmountable"...i also don't see its disk share in my Finder...what could that be about?

That is expected behaviour as at this point the disk has not been formatted to XFS - just the fact you want XFS has been recorded.  This means it is still possible to back out if you made a mistake.  You have to select the option to format the disk at this point to actually create the XFS file system on the disk.  Once that has been done it becomes usable for files.
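
If you want to double-check from the console before formatting, something like blkid will show what signature is actually on the cache drive's partition (/dev/sdX1 is a placeholder; use whatever device the webGUI shows for the cache drive):

# sketch: confirm what file system is currently on the cache partition
blkid /dev/sdX1
# expect it to still report reiserfs until the format option has actually been run
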
Link to comment
