Accelerator drives


jonp

Recommended Posts

My first impression was that I would not be interested in your use case, and then I realised that I absolutely would. But that opens a problem, because now I have three use cases that conflict slightly:

 

1. All tiny files onto SSD one.

2. As many files of type x, y and z onto SSD one as possible, without filling it, so that use case #1 keeps running.

3. All recent files that aren't covered by #1 or #2 onto SSD two, e.g. recent TV.

 

My gut feeling is that this can only realistically be covered by having three SSD accelerator drives.

 

Edit: actually I think this all boils down to making sure any scripts we write can be taught about more than one accelerator drive. In that case it's just about script run order.
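The run-order idea could be as simple as a wrapper that calls one stage per use case, in priority order. A minimal sketch; the stage bodies are stubs, and all names below are placeholders, not files that exist in this thread:

```shell
#!/bin/bash
# Hypothetical wrapper showing the "script run order" idea. Each stage
# is a stub here; a real version would call the diskmv-based scripts.
set -e  # stop the chain if an earlier stage fails

stage1_tiny_files() { echo "stage1: tiny files -> ssd1"; }
stage2_by_type()    { echo "stage2: types x,y,z -> ssd1"; }
stage3_recent()     { echo "stage3: recent files -> ssd2"; }

# Order matters: stage 2 must not fill SSD one before stage 1 has run,
# and stage 3 only handles what #1 and #2 left behind.
stage1_tiny_files
stage2_by_type
stage3_recent
```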

Link to comment


I think you're looking at this new, unwritten script as a way to move files onto the accelerator drive. I want to do the opposite: I want to move them off. As for getting files onto the accelerator drive, I will just use the normal "Mover" and use the "include" fields for the shares I want accelerated. This way all files start out on the accelerator drive and I get to control which files come off of it.

 

Comments on above list

1.) This can already be accomplished with diskmv's flags.

2.) diskmv may need an update to do this. I would think it's possible, as it already has the -e (exclude) flag mentioned above; it would just need a tweak to make it an "include". I have not looked at diskmv enough to see if there is already a flag for this.

3.) You can use my script to accomplish this.

 

And I don't think you have to have three SSDs. One large one would work. Having multiple SSDs has no real advantage other than buying cheaper/smaller drives when you can. However, I see the disadvantages of having three as unRAID license (drive-count) usage and more drives to maintain.

Link to comment

Inline replies get messy so...

 

1. Agree. I have been doing this for years but now use diskmv as it is more elegant.

2. In hindsight I can do this just like #1, by only listing spinning disks in the include and making sure it runs sequentially after #1.

3. Will actively watch how yours progresses. For my use case I would prefer if unRAID could be taught to just keep stuff on the cache drive until the space is actually needed for newer content, FIFO style. I suppose this could be achieved with some changes to the mover script, but I am always loath to alter something that is maintained upstream for a different use case. I don't think this is a million miles away from what you are doing, though equally I don't think we are walking the same use case path.
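The "keep stuff until the space is actually needed, oldest out first" idea could be sketched roughly like this. The mount point and threshold are assumptions, and a real version would hand each file to diskmv (or the mover) rather than just printing it:

```shell
#!/bin/bash
# Sketch: free space FIFO-style by "evicting" the oldest files first.
# free_kb and the threshold logic are illustrative only.

free_kb() { df --output=avail "$1" | tail -1 | tr -d ' '; }

# evict_oldest <dir> <min_free_kb>
# Lists files oldest-first and stops once enough space is free.
evict_oldest() {
    local dir="$1" min_free="$2" f
    # print "mtime path", sort numerically (oldest first), strip mtime
    find "$dir" -type f -printf '%T@ %p\n' | sort -n | cut -d' ' -f2- |
    while read -r f; do
        [ "$(free_kb "$dir")" -ge "$min_free" ] && break
        echo "would evict: $f"          # replace with: diskmv "$f" ...
    done
}

# Example (commented out; /mnt/cache is an assumption):
# evict_oldest /mnt/cache 10485760     # keep at least 10 GiB free
```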

 

 

Link to comment

NAS

I think we are really pretty close to wanting the same things. Let me list out why/what I want to use the SSD for.

 

1.) To make reading of certain files faster without having to spin up the HDs.

2.) Small files kept on the SSD, as random IO is much faster.

3.) Commonly used files kept on the SSD (ignoring file size; maybe it's a "type" of file...).

4.) Recently created files, as they are the most likely to be read again, e.g. a recent recording of a TV show. (The "recent" cutoff can be adjusted: smaller SSDs may use 10 days or newer, larger ones something like 60 days or newer.) This is where I want the standard unRAID mover script to write only to the SSD, and I will take care of moving files off of it.

 

Nice-to-haves, or things to expand on in the far future:

- Recently read files (e.g. Plex played a file a few times, so let's move it to the SSD as it is probably going to be read again soon).

 

The end goal is to make things load much faster: scrolling through your Plex library, starting playback of a video in Plex, and so on, so that you don't have to wait for the spin-up of a spun-down drive.

 

Please let me know if these are different or if I missed any from what you want to use it for.

Link to comment

First shot at a script that moves files off of the SSD/accelerator drive. I have not run this in "real" mode yet, just test mode, but the test looks good. I will probably run it for real in a few weeks when my SSD starts to get full and I need space.

 

Basically this is a glorified bash script that creates a file list, then sends each file over to diskmv one by one...

 

https://github.com/hugenbd/unRAID-Accelerator-Drive/blob/master/accel_moveoff.sh

 

Lots of variables to update in the script.

 

#ignore files that are smaller than this size

Ignoresmallsize=10M

#ignore all files that have an extension listed below

Ignorefileextensions="xml,nfo,sfv,sff,properties,jpg,idx,sub"

#days old - only look at files this old or older

Daysold="10"

#Ignoredirectories is not hooked into the code yet, but I know how to add it to the find command... future enhancement.
#Comma-separated list of directories to ignore on the accelerator drive

Ignoredirectories="Good Show 1, Good Show 2"

#What percentage do you want to fill the target drive to before moving on to the next drive, or exiting.

TargetDriveFilltopercentage=55
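A minimal sketch of the selection step those variables drive might look like the following. To be clear, this is not the actual accel_moveoff.sh, just an illustration of how the size, extension and age filters could combine in a single find command; the source directory is an assumption:

```shell
#!/bin/bash
# Sketch of the candidate-selection step, not the real accel_moveoff.sh.
# Each find filter mirrors one of the variables above.
Ignoresmallsize=10M
Ignorefileextensions="xml,nfo,sfv,sff,properties,jpg,idx,sub"
Daysold="10"

# build_candidates <dir>: list files that pass all three filters
build_candidates() {
    local dir="$1" ext
    local args=() exts=()
    # turn the comma-separated list into -not -iname '*.ext' clauses
    IFS=',' read -ra exts <<< "$Ignorefileextensions"
    for ext in "${exts[@]}"; do
        args+=( -not -iname "*.${ext}" )
    done
    find "$dir" -type f \
         -size +"$Ignoresmallsize" \
         -mtime +"$Daysold" \
         "${args[@]}"
}

# Each resulting path would then be handed to diskmv one by one, e.g.:
# build_candidates /mnt/disk3 | while read -r f; do diskmv "$f" ...; done
```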

 

 

 

 

 

 

Link to comment

Well, I took a first shot at writing a script to move "newer" files over to my SSD. It seems to be working fine, but the "Total Files" size in GB is not correct... though it's "close".

 

https://github.com/hugenbd/unRAID-Accelerator-Drive

 

I next need to write another script (or update this one) to move "older" files off and back over to one of the data drives. That should keep the drives spun down most of the time while allowing fast response from any apps that need newer files.

 

Next up, to go and test my parity again.  (15 hour wait....)

 

What was the result of your parity check?

Link to comment

I haven't moved any files off the SSD, so I don't think it will trigger anything.  (and it didn't)

 

I have just been moving files to the SSD.  So far 0 errors.

 

Last checked on Wed 17 Feb 2016 11:41:20 AM EST (three days ago), finding 0 errors. 
Duration: 14 hours, 46 minutes, 44 seconds. Average speed: 150.4 MB/s 

Link to comment

That's what I did with the check I posted above: 0 errors. (i.e. moved files to the SSD in the array, then checked parity.) Disk 3 in the picture is the SSD.

 

From reading the thread on SSDs in the parity array, it appears it's only an issue with some SSDs, and only when you delete files, i.e. when the space is recovered. But the internal firmware of the drive, if idle enough, will take care of that, and parity stays good.

 

So I just need to run the parity check after moving files off the SSD. Then both scenarios are covered.

[attached image: array_snapshot.JPG]

Link to comment

Will be running the parity check tomorrow, as I moved about 30GB off of the SSD yesterday, but I think I have run into a small issue.

 

When moving files with diskmv from a filelist, it leaves behind empty or "blank" directories for the files it moved. I think this is a result of my script invoking diskmv for each individual file instead of for a directory. With these left behind, it appears the disks stay spun up even though nothing is in them. Again, I think this is the cause, based on another thread posted earlier this week (which I can't find now).

 

Anyway, I think I have a solution to this problem: basically a find command that deletes empty folders. Not sure if there is another solution already posted to the forum, but this is what I found. (Haven't tested yet.)

 

find ./ -depth -type d -empty -delete
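One caveat with that command: run from a disk's mount point without a depth guard, it could also delete top-level share directories that merely happen to be empty. A slightly safer variant, sketched as a function here (on unRAID the argument would be something like /mnt/disk3, which is an assumption), keeps the first directory level intact:

```shell
# prune_empty <dir>: delete empty directories below the first level,
# so an intentionally empty top-level share survives. Sketch only.
prune_empty() {
    find "$1" -mindepth 2 -depth -type d -empty -delete
}

# Example: prune_empty /mnt/disk3
```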

 

Has anyone else had disks spun up by empty directories?

Link to comment

I moved about 20GB of files off the SSD drive about three days ago.  Then ran a parity check last night.  So far, no errors during parity check.

 

Last checked on Thu 25 Feb 2016 07:13:26 AM EST (today), finding 0 errors. 
Duration: 14 hours, 46 minutes, 49 seconds. Average speed: 150.4 MB/s

[attached image: parity_check.JPG]

Link to comment

Have you power cycled the server since moving the files off?

 

From what I know, you are the first to run parity checks without corrections whilst having an SSD in the protected array.

 

Lime-tech either hasn't done any testing, or they're not ready to call it "safe" yet. Doing everything possible to "break" the integrity of parity with an SSD in the array is what we need to do before declaring it "safe".

 

You're on the right track :)

Link to comment

I just rebooted this morning.

 

Will run another parity check next week, or is there a set of steps you would like me to take?

I'm not very concerned at this point. The other thread mentioned only seemed to have issues when trim was run.

With SSDs getting cheaper, I would assume more people will want to do something similar to this.

Link to comment


Have you run trim on the SSD?

 

Yes, I'm looking at getting a small SSD for recent and small files, same as you. It makes watching TV shows on Plex quicker within a few days of airing.

Link to comment


I have not run trim on the SSD. I don't plan to unless something goes "wrong" with it. I also don't plan to keep it 100% full; maybe 75-80% full.
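Keeping the drive at a target fill level is easy to check from a script. A sketch; the mount point and the 80% ceiling in the example are assumptions:

```shell
# used_pct <mount>: current fill percentage of a filesystem, as an
# integer. Sketch only; on unRAID <mount> might be /mnt/disk3.
used_pct() {
    df --output=pcent "$1" | tail -1 | tr -dc '0-9'
}

# Example gate for a move-off script (commented out; path assumed):
# if [ "$(used_pct /mnt/disk3)" -ge 80 ]; then start_moving_off; fi
```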

Link to comment


Trim won't work on any SSD that is part of the array; I don't know if that was done on purpose by LT. It will only work on the cache drive or cache pool.

Link to comment

This was not always the case, but its removal is not mentioned in any change log that I can find.

 

Also, the current official manual categorically states that you cannot use an SSD as a parity or data drive.

 

This may very well be wrong, but keep in mind trim is not the only mechanism that performs this type of out-of-band garbage collection.

 

The only way to put this to bed for sure is to get the manual officially updated. I have escalated this as hard as I can. Until then, do not use an SSD in a production array.

Link to comment
  • 2 weeks later...

It has been confirmed that the risk with SSDs is due solely to garbage collection/TRIM.

 

TRIM is no longer an issue in recent unRAID, as it cannot be run against protected disks. However, some drives may do internal garbage collection in a fashion that is not seen by unRAID and as such corrupts parity slightly.

 

The scale of the problem is unknown, and until some time can be spent researching it you should not use an SSD in the array, and definitely not as the parity disk.

Link to comment
  • 2 months later...

Guys, just an update to say there is no update. As it stands you officially cannot, and should not, use an SSD in the array. Several users are doing it, but since this might come down to specific drives and firmware being supported, YMMV and it is therefore your risk.

 

As I hear more info, you will too, because I REALLY want to stop having to use mechanical disks where an SSD would be a better/ideal fit.

Link to comment


Maybe someone with an SSD or a few SSDs in the array could test:

Add a bunch of files to the SSD. Do a parity check.

Remove some files.

Add some other files.

Do a parity check.

Wait a few days or weeks.

Do another parity check.

 

This or possibly a more advanced suite of tests.
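The file-churn part of that test could be scripted. A sketch that writes and then deletes pseudo-random files on an in-array disk; the target mount point is an assumption, and the parity checks themselves would still be kicked off manually from the webGui between rounds:

```shell
# Sketch: churn files on an in-array SSD between manual parity checks.
# churn <target> <n>: create n ~1 MiB random files, delete every second
# one to exercise space reclamation.
churn() {
    local target="$1" n="$2" i
    mkdir -p "$target/churn"
    for i in $(seq 1 "$n"); do
        dd if=/dev/urandom of="$target/churn/file$i.bin" \
           bs=1M count=1 status=none
    done
    for i in $(seq 2 2 "$n"); do
        rm -f "$target/churn/file$i.bin"
    done
}

# Example (path assumed): churn /mnt/disk3 20
# ...then run a parity check, wait a few days, and check again.
```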

 

I would surmise that if people are doing monthly parity checks, this situation might already have reared its ugly head.

Link to comment

It definitely can't harm, but given the sample set, can we actually get beyond where we are now with this approach?

I, like many, have an SSD in my array because when I installed it either the one-liner was not in the manual or I missed it. I have no parity errors, but that is only because trim can no longer be run against array disks.

TBH I am surprised this is not getting more attention, given what unRAID is all about.

Link to comment
