
Accelerator drives


I just managed to shoehorn ~1M files onto a 100GB SSD. Whilst this is a small percentage of array usage in terms of GB, it is a huge percentage of the file count.

 

Are you still using Freddie's script with your own modifications to achieve this?

 


I just managed to shoehorn ~1M files onto a 100GB SSD. Whilst this is a small percentage of array usage in terms of GB, it is a huge percentage of the file count.

 

Considering how cheap this SSD was, that's a lot of bang for the buck.

 

Not trying to bring back a dead topic, but....

 

NAS, do you have a guide or any condensed post we could follow?

 

Any issues with the SSD drive in the array?

 

Thanks

Dave


It is still a kludge, unfortunately. Essentially I have a script that uses

 

https://github.com/trinapicot/unraid-diskmv

 

to loop through each of my shares on each of my drives with these commands:

 

"/mnt/user/apps/scripts/diskmv -f -s 1000"

"/mnt/user/apps/scripts/diskmv -f -s 100000 -e idx,ifo,jpg,log,md5,nfo,nzb,par2,pl,sfv,smi,srr,srt,sub,torrent,txt,url,png,ico,csv,py,pyo"

 

This actually works quite well, but the problem is that if I ever get low on disk space or make a mistake with the share allocation settings, unRAID dumps large files on there, since it has no idea what an accelerator drive is.

 

Currently I have no scripted way to clean up from that.
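For what it's worth, that cleanup could be sketched with plain find and mv. This is only my sketch, not part of the setup above: the mount points, the 100M threshold, and the function name are all examples, so sanity-check against your own array before running anything.

```shell
#!/bin/bash
# Hypothetical cleanup sketch: evict files above a size limit from the
# accelerator disk onto a data disk, keeping their relative paths.
# Mount points and the size limit below are examples only.

evict_large() {
        src="$1"; dst="$2"; limit="$3"
        find "$src" -type f -size "$limit" -print0 |
        while IFS= read -r -d '' file; do
                rel="${file#"$src"/}"               # path relative to the source mount
                mkdir -p "$dst/$(dirname "$rel")"   # mirror the directory tree
                mv "$file" "$dst/$rel"
        done
}

# Example: evict anything over 100MB from the accelerator (disk3) to disk2
# evict_large /mnt/disk3 /mnt/disk2 +100M
```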

 

 

 


Thanks for the reply.

 

I used to be pretty good at bash scripting.  Once I get all my files moved over I'll have to give it a shot.

 

Did you mark the SSD drive in the array as "global exclude"?  From the posts I have found, it appears that unRAID still reads the drive, but it doesn't write to it if you have that set.  I could be wrong though.


TBH I can't remember about global exclude; I gave up a bit on this when I couldn't even get the feature moved out of unscheduled to "sometime in the future".

 

That said, if you are keen I would be more than happy to start working on this again with you. My bash script is very specific to me and utilitarian, but if you are interested I could post a generic version of it.


Yes, posting the script would be helpful.  I have a few more days of data transfer before I get to dig into this though.  Those 8TB drives get slow once you max out their cache.

 

I would like to check things like the total size of the files to move, and also how much space is left on the drive they are moving to. I think this can be done with some awk commands and file lists.
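A rough sketch of that kind of check, using find, awk, and df (the function name and arguments are my own invention; the size criterion is passed straight through to find's -size test):

```shell
#!/bin/bash
# Sketch of a pre-flight check: compare the total size of the candidate
# files against the free space on the destination before calling diskmv.
# fits_on_disk and its arguments are hypothetical names for this sketch.

fits_on_disk() {
        src="$1"; dst="$2"; size="$3"
        # Total size (KB) of the candidate files, summed with awk
        need=$(find "$src" -type f -size "$size" -printf '%s\n' |
               awk '{ sum += $1 } END { printf "%d\n", (sum + 1023) / 1024 }')
        # Free space (KB) on the destination filesystem
        free=$(df -Pk "$dst" | awk 'NR == 2 { print $4 }')
        [ "$need" -le "$free" ]
}

# Example: only run diskmv if the small files will actually fit
# fits_on_disk /mnt/disk2 /mnt/disk3 -1000c &&
#         /mnt/user/apps/scripts/diskmv -f -s 1000 /mnt/user/pictures disk2 disk3
```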

 

How often do you run the script to move files? Once a day? A few times a day?


I just run it once in a while manually. As I say, I was hoping for more upstream interest, and when it wasn't accepted as a future feature I just stopped developing it for anyone other than me.


Here you go.

 

Since we are potentially touching loads of files here, please sanity-check first.

 

Not pretty, but easy to hack:

 

#!/bin/bash

# Create an array of diskmv commands
array0[0]="/mnt/user/apps/scripts/diskmv -f -s 1000"
array0[1]="/mnt/user/apps/scripts/diskmv -f -s 100000 -e idx,ifo,jpg,log,md5,nfo,nzb,par2,pl,sfv,smi,srr,srt,sub,torrent,txt,url,png,ico,csv,py,pyo"

# Create an array of shares to accelerate
array1[0]="/mnt/user/backups/"
array1[1]="/mnt/user/pictures/"

# Create an array of disk ids to strip
array2=( 2 5 6 7 8 9 10 11 12 13 15 16 )

# Define the accelerator disk
accelerator=disk3

# For each share and each disk, run each diskmv command
for command in "${array0[@]}"
do
        for path in "${array1[@]}"
        do
                for diskid in "${array2[@]}"
                do
                        # $command is left unquoted on purpose so its flags word-split
                        $command "$path" "disk$diskid" "$accelerator"
                done
        done
done
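One optional generalization (my suggestion, not part of the posted script): derive the disk numbers from the /mnt/disk* mounts instead of hard-coding array2. A sketch, with the mount root parameterized so it can be tried outside unRAID:

```shell
#!/bin/bash
# Hypothetical helper: enumerate disk mounts under a root directory and
# print every disk number except the accelerator's. $1 is the mount root
# (normally /mnt on unRAID), $2 the accelerator disk name (e.g. disk3).

disk_numbers() {
        root="$1"; accel="$2"
        for d in "$root"/disk[0-9]*; do
                [ -d "$d" ] || continue        # skip if the glob matched nothing
                n=${d##*/disk}                 # strip the path, keep the number
                [ "disk$n" = "$accel" ] || printf '%s\n' "$n"
        done
}

# Example: array2=( $(disk_numbers /mnt disk3) )
```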


Thanks again for the script! 

I just moved all my small files and the other extensions you have listed.  And it's only taking up 2.6GB.  I think I'm going to have to add an extra zero to my "small size" files.

 

Since I bought a cheap OCZ 960GB SSD, I plan to look at the special diskmv they posted for you, to add Move New and Move Old flags. i.e. if a file is <30 days old, move it to the SSD; and, running against the SSD, if a file is >30 days old, move it to a data disk. I'm sure there are some precautions and other checks I will need when selecting the target disk, but I think I can take care of that in your bash script.
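Before patching diskmv itself, the Move New / Move Old selection could be prototyped with find's -mtime test. A sketch only; the paths and the 30-day cutoff are examples:

```shell
#!/bin/bash
# Sketch of age-based selection using plain find rather than a patched
# diskmv. The function names are my own; the cutoff is an example.

# List files newer than $2 days under $1 -- candidates for the SSD
list_new() { find "$1" -type f -mtime "-$2"; }

# List files older than $2 days under $1 -- candidates to move back off
list_old() { find "$1" -type f -mtime "+$2"; }

# Example:
# list_new /mnt/disk2/tv_shows 30   # modified within the last 30 days
# list_old /mnt/disk3/tv_shows 30   # stale files to evict from the SSD
```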

 

FYI, I had to search a bit in this thread to find the modified diskmv with the size flag.

https://github.com/trinapicot/unraid-diskmv

 

Thanks again.


Parity is maintained with this tool and script. It's just file moves within the array, nothing fancy.


Yes, because it's moved under the shfs driver, but what about long term?

 

Since parity works at the block level and the XOR is calculated across all disks in the array, what does the background garbage collection do to valid parity?

 

In essence. Are you finding many parity corrections during monthly checks as a result of having a ssd in the protected array?


Seems like it is only an issue if you run fstrim. I don't plan to do this, or if I do, I'll just run a parity check right after it.

 

I also understand that it's not a supported method and am okay with taking the risk of possibly losing data.  I don't want to, but I understand it.  What I do want is the faster response times (Mainly read) of the SSD.  Kind of like an L2ARC drive in ZFS.


That thread went completely under the radar for me. Will research more.


Well, I took a first shot at writing a script to move "newer" files over to my SSD. It seems to be working fine, though the "Total Files" size in GB is not correct... it's only "close".

 

https://github.com/hugenbd/unRAID-Accelerator-Drive

 

I next need to write or update this script to move "older" files off and back over to one of the data drives. This should keep the drives spun down most of the time, and allow a fast response from any apps that need newer files.

 

Next up, to go and test my parity again.  (15 hour wait....)

 

 

 


Nice work. I am glad to see people using the accelerator idea and this looks like it will make it even easier.

 

Hopefully with enough traffic the feature will be promoted to scheduled, but I think it is more realistic that it will remain the province of power-user hackery; if so, scripts like this will help new users consider the idea.

 


Thanks NAS

 

I actually think I'm coming at this problem a little backwards. The script I wrote can be used for initially populating the SSD/accelerator drive; however, there is a better way to continually populate it.

 

  • Set the shares you want accelerated to include only the Accelerator drive
  • Set the other drives as excluded
  • If using cache drives, the mover should put the files on the Accelerator drive according to your schedule
  • Run a script that uses diskmv to move files off the Accelerator drive to the other data drives in the "exclude" list. This would be based on your own requirements, i.e. date, size, filetype: basically anything that can be used in a find command.

 

I believe that unRAID still looks at all drives to READ a file, but follows the share include/exclude selection when writing.

 

The one downside to this approach is that you would lose the ability to use the built-in allocation methods for filling up a disk, i.e. high-water, fill-up, etc.


  • Set the shares you want accelerated to include only the Accelerator drive
  • Set the other drives as excluded

Not necessary and not recommended to set both included and excluded drives for a user share. Not necessary because Included means Only listed drives and Excluded means Except listed drives so there is no good reason to use both. Not recommended because there have been reports of unexpected behavior if both are used even if you manage to make them consistent with each other.


Perhaps the easiest way to implement this is to allow variables or settings to be passed to the mover script.

 

Let's use tv_shows as an example:

 

1. Set the share to use cache: Yes

2. Allow variables to be passed to the mover script such as size and age requirements from within the webGUI.

 

This can keep JPGs, SUBs, NFO, etc. on the cache device(s) as well as keep newly populated/downloaded/recorded tv_shows on the cache drive for a given period of time (say, 2 weeks).
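As a sketch of what such a filter might select (purely hypothetical; the stock mover accepts no such variables): files on the cache older than the age limit whose extension is not on a keep list.

```shell
#!/bin/bash
# Hypothetical sketch of the proposed mover filter: select files under a
# directory that are older than AGE days and not in the keep-extension
# list. This only illustrates the idea; nothing here exists in unRAID.

AGE=14                       # days to keep new recordings on the cache
KEEP="jpg|sub|srt|nfo|png"   # metadata extensions that always stay

movable() {
        find "$1" -type f -mtime "+$AGE" |
        grep -Ev "\.($KEEP)$"            # drop the always-keep extensions
}

# Example: movable /mnt/cache/tv_shows | while read -r f; do ...; done
```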

 

Just thinking out loud here...


I wouldn't expend much effort on how this would work in unRAID proper. It has been multiple YEARS since I first proposed this, this is merely the 3rd-generation post to match the forum reorganization, and it hasn't even made it into the confirmed pile; that's over a year unscheduled.

 

No negativity meant; rather, it's better to direct energy towards community script/addon ideas, whose implementation we know can and does happen.


  • Set the shares you want accelerated to include only the Accelerator drive
  • Set the other drives as excluded

Not necessary and not recommended to set both included and excluded drives for a user share. Not necessary because Included means Only listed drives and Excluded means Except listed drives so there is no good reason to use both. Not recommended because there have been reports of unexpected behavior if both are used even if you manage to make them consistent with each other.

 

Thanks trurl, I'll take a closer look at the include/exclude settings. It would be nice if the GUI didn't allow you to select both include and exclude if that causes problems...

 

Would you recommend just setting the include then?


...Would you recommend just setting the include then?

Seems the most logical for this purpose.


I have found when running this that, invariably, large files sneak onto the accelerator drive. Since we will not have complete control over the drive contents, we need to consider a means to strip non-matching files as well.


I have found when running this that, invariably, large files sneak onto the accelerator drive. Since we will not have complete control over the drive contents, we need to consider a means to strip non-matching files as well.

 

Yup, that's what I plan to work on. Basically the opposite of what I posted earlier, i.e. instead of moving files to the accelerator disk, moving them off.

 

I actually want large files on my SSD as it has plenty of space (1TB). However, I don't want OLD large files on it; those should be on the spinning disks. i.e. I want all recent files from the past 40 days or so, or whatever I determine that cutoff to be to maximize the drive's utilization.

