John_M Posted September 7, 2015

> I just managed to shoehorn ~1M files onto a 100GB SSD. Whilst this is a small percentage of array usage in terms of GB, it is a huge percentage of the file count.

Are you still using Freddie's script with your own modifications to achieve this?
hugenbdd Posted February 1, 2016

> I just managed to shoehorn ~1M files onto a 100GB SSD. Whilst this is a small percentage of array usage in terms of GB, it is a huge percentage of the file count. Considering how cheap this SSD was, that's a lot of bang for the buck.

Not trying to bring back a dead topic, but... NAS, do you have a guide or any condensed post we could follow? Any issues with the SSD drive in the array?

Thanks
Dave
NAS Posted February 5, 2016

It is still a kludge, unfortunately. Essentially I script https://github.com/trinapicot/unraid-diskmv to loop through each of my shares on each of my drives with these commands:

/mnt/user/apps/scripts/diskmv -f -s 1000
/mnt/user/apps/scripts/diskmv -f -s 100000 -e idx,ifo,jpg,log,md5,nfo,nzb,par2,pl,sfv,smi,srr,srt,sub,torrent,txt,url,png,ico,csv,py,pyo

This actually works quite well, but the problem is if I ever get low on disk space or make a mistake with the share allocation settings: since unRAID has no idea what an accelerator drive is, it dumps large files on there. Currently I have no scripted way to clean up from that.
hugenbdd Posted February 5, 2016

Thanks for the reply. I used to be pretty good at bash scripting. Once I get all my files moved over I'll have to give it a shot.

Did you mark the SSD drive in the array as "global exclude"? From the posts I have found, it appears that unRAID still reads the drive but doesn't write to it if you have that set. I could be wrong, though.
NAS Posted February 5, 2016

TBH I can't remember about global exclude. I gave up a bit on this when I couldn't even get the feature moved out of "unscheduled" to "sometime in the future". That said, if you are keen I would be more than happy to start working on this again with you. My bash script is very specific to me and utilitarian, but if you are interested I could post a generic version of it.
hugenbdd Posted February 5, 2016

Yes, posting the script would be helpful. I have a few more days of data transfer before I can dig into this, though. Those 8TB drives get slow once you max out their cache.

I would like to check things like the total size of the files to move, and also how much space is left on the drive they're moving to. I think this can be done with some awk commands and file lists.

How often do you run the script to move files? Once a day? A few times a day?
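For what it's worth, that pre-flight check can be sketched with plain find/awk/df. The paths, disk names, and size limit below are placeholders, not anything provided by diskmv itself:

```shell
#!/bin/bash
# Sketch of a pre-flight check: total up the candidate "small" files and
# confirm the accelerator disk has room before calling diskmv.
# All paths and the size limit are placeholder assumptions.

check_space() {
    src=$1; dest=$2; limit=$3   # limit uses find(1) -size syntax, e.g. -1000k

    # Sum the sizes (in bytes) of every candidate file under src
    total=$(find "$src" -type f -size "$limit" -printf '%s\n' |
            awk '{ s += $1 } END { print s + 0 }')

    # Available bytes on the destination disk's filesystem
    free=$(df --output=avail -B1 "$dest" | tail -n1)

    if [ "$total" -lt "$free" ]; then
        echo "ok: $total bytes to move, $free bytes free"
        return 0
    fi
    echo "insufficient: $total bytes to move, only $free bytes free" >&2
    return 1
}

# Hypothetical usage, gating one of the diskmv calls from earlier in the thread:
# check_space /mnt/disk2/pictures /mnt/disk3 -1000k &&
#     /mnt/user/apps/scripts/diskmv -f -s 1000 /mnt/user/pictures/ disk2 disk3
```

Asking df about the destination path at run time means the check reflects whatever else has already landed on that disk, rather than a stale number.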
NAS Posted February 5, 2016

I just run it once in a while manually. As I say, I was hoping for more upstream interest, and when it wasn't accepted as a future feature I stopped thinking of developing it for anyone other than me.
NAS Posted February 5, 2016

Here you go. Since we are potentially touching loads of files here, please sanity check first. Not pretty, but easy to hack:

#!/usr/bin/bash

# Create an array of commands
array0[0]="/mnt/user/apps/scripts/diskmv -f -s 1000"
array0[1]="/mnt/user/apps/scripts/diskmv -f -s 100000 -e idx,ifo,jpg,log,md5,nfo,nzb,par2,pl,sfv,smi,srr,srt,sub,torrent,txt,url,png,ico,csv,py,pyo"

# Create an array of shares to accelerate
array1[0]="/mnt/user/backups/"
array1[1]="/mnt/user/pictures/"

# Create an array of disk IDs to strip
array2=( 2 5 6 7 8 9 10 11 12 13 15 16 )

# Define the accelerator disk
accelerator=disk3

# For each command, run diskmv against each share on each disk
for command in "${array0[@]}"
do
  for path in "${array1[@]}"
  do
    for diskid in "${array2[@]}"
    do
      $command $path disk$diskid $accelerator
    done
  done
done
hugenbdd Posted February 13, 2016

Thanks again for the script! I just moved all my small files and the other extensions you have listed, and it's only taking up 2.6GB. I think I'm going to have to add an extra zero to my "small size" files.

Since I bought a cheap OCZ 960GB SSD, I plan to look at the modified diskmv they posted for you and add "move new" and "move old" flags. I.e. if a file is <30 days old, move it to the SSD; and, running against the SSD drive, if a file is >30 days old, move it to a data disk. I'm sure there are some precautions and other checks I will need to do when selecting the target disk, but I think I can take care of that in your bash script.

FYI, I had to search a bit in this thread to find the modified diskmv with the size flag: https://github.com/trinapicot/unraid-diskmv

Thanks again.
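The "move old files off the SSD" half can be sketched without touching diskmv at all: walk the share on the accelerator disk and re-home anything past the age cutoff, keeping the share-relative path so the user share view never changes. The disk mounts, share name, and 30-day cutoff below are all assumptions:

```shell
#!/bin/bash
# Sketch: move files older than a cutoff from the accelerator disk to a
# data disk, preserving the same share-relative path. Disk and share
# names are hypothetical; everything stays on /mnt/diskX paths (never
# mix /mnt/user and /mnt/diskX in one operation), so parity is
# maintained by the normal array write path throughout.

move_old() {
    accel=$1    # e.g. /mnt/disk3 (the SSD)
    target=$2   # e.g. /mnt/disk5 (a data disk)
    share=$3    # e.g. pictures
    days=$4     # age cutoff in days

    ( cd "$accel/$share" || exit 1
      find . -type f -mtime +"$days" -print0 |
      while IFS= read -r -d '' f; do
          dest="$target/$share/$(dirname "$f")"
          mkdir -p "$dest"
          mv -n "$f" "$dest/"     # -n: never clobber an existing copy
      done )
}

# Hypothetical usage:
# move_old /mnt/disk3 /mnt/disk5 pictures 30
```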
mr-hexen Posted February 13, 2016

Is parity maintained or hosed as a result of background garbage collection?
NAS Posted February 13, 2016

Parity is maintained with this tool and script. It's just file moves within the array, nothing fancy.
mr-hexen Posted February 13, 2016

Yes, because it's moved under the shfs driver, but what about long term? Since parity works at the block level and the XOR is calculated across all disks in the array, what does the background garbage collection do to valid parity?

In essence: are you finding many parity corrections during monthly checks as a result of having an SSD in the protected array?
mr-hexen Posted February 13, 2016

Read this thread: http://lime-technology.com/forum/index.php?topic=44944.0
hugenbdd Posted February 13, 2016

Seems like it is only an issue if you run fstrim. I don't plan to do this, or if I do, I'll just run the parity check right after it. I also understand that it's not a supported method and am okay with taking the risk of possibly losing data. I don't want to, but I understand it. What I do want is the faster response times (mainly read) of the SSD. Kind of like an L2ARC drive in ZFS.
NAS Posted February 14, 2016

That thread went completely under the radar for me. Will research more.
hugenbdd Posted February 15, 2016

Well, I took a first shot at writing a script to move "newer" files over to my SSD. Seems to be working fine, but the "total files" size in GB is not correct... though it's close: https://github.com/hugenbd/unRAID-Accelerator-Drive

I next need to write or update this script to move "older" files off and back over to one of the data drives. This should be able to keep the drives spun down most of the time and allow for a fast response from any apps that need newer files.

Next up: go and test my parity again. (15 hour wait...)
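One likely reason a GB total comes out only "close": summing file sizes gives logical bytes, while df and the disk stats report allocated blocks, and the two drift apart with many small files. A quick way to see both numbers for a tree (the share path is a placeholder):

```shell
#!/bin/bash
# Report both the apparent (logical) size and the on-disk (allocated)
# size of a tree. They legitimately differ, so a byte-sum of files will
# never exactly match what df reports. The share path is a placeholder.

tree_size() {
    path=$1
    apparent=$(du -sb "$path" | cut -f1)               # -b = apparent bytes
    ondisk=$(du -s --block-size=1 "$path" | cut -f1)   # allocated bytes
    echo "apparent=$apparent ondisk=$ondisk"
}

# Hypothetical usage:
# tree_size /mnt/disk3/pictures
```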
NAS Posted February 16, 2016

Nice work. I am glad to see people using the accelerator idea, and this looks like it will make it even easier. Hopefully with enough traffic the feature will be promoted to scheduled, but I think it is more realistic that it will remain the province of power-user hackery; if it does, scripts like this will help new users consider the idea.
hugenbdd Posted February 16, 2016

Thanks NAS. I actually think I'm coming at this problem a little backwards. The script I wrote can be used for initially populating the SSD/accelerator drive; however, there is a better way to continually populate it:

1. Set the shares you want to include only the accelerator drive.
2. Set the other drives as exclude.
3. If using cache drives, the mover should put the files on the accelerator drive according to your schedule.
4. Run a script that uses diskmv to move files out of the accelerator drive to the other data drives in the "exclude" list. This would be based on your own requirements, i.e. date, size, file type: basically anything usable in a find command.

I believe that unRAID still looks at all drives to READ a file but follows the share include/exclude selection when writing.

The one downside to this approach is that you lose the ability to use the built-in methods of filling up a disk, i.e. high-water, fill-up, etc.
trurl Posted February 16, 2016

> Set the shares you want to include only the accelerator drive. Set the other drives as exclude.

It is not necessary, and not recommended, to set both included and excluded drives for a user share. Not necessary because "included" means only the listed drives and "excluded" means all except the listed drives, so there is no good reason to use both. Not recommended because there have been reports of unexpected behavior when both are used, even if you manage to make them consistent with each other.
mr-hexen Posted February 16, 2016

Perhaps the easiest way to implement this is to allow variables or settings to be passed to the mover script. Let's use tv_shows as an example:

1. Set the share to use cache: Yes.
2. Allow variables such as size and age requirements to be passed to the mover script from within the webGUI.

This could keep JPGs, SUBs, NFOs, etc. on the cache device(s), as well as keep newly populated/downloaded/recorded tv_shows on the cache drive for a given period of time (say, two weeks). Just thinking out loud here...
NAS Posted February 16, 2016

I wouldn't expend much effort on how this would work in unRAID proper. It is multiple YEARS since I first proposed this, this is merely the 3rd-generation post to match the forum reorganization, and it hasn't even made it into the confirmed pile: that's over a year unscheduled. No negativity meant, but it is better to direct energy to community script/addon ideas, which we know can and do happen.
hugenbdd Posted February 16, 2016

> Not necessary and not recommended to set both included and excluded drives for a user share...

Thanks trurl, I'll take a look at the include/exclude setting more. It would be nice if the GUI didn't allow you to select both include and exclude if it causes problems... Would you recommend just setting the include, then?
trurl Posted February 16, 2016

> ...Would you recommend just setting the include then?

Seems the most logical for this purpose.
NAS Posted February 16, 2016

I have found when running this that invariably large files sneak onto the accelerator drive. Since we will not have complete control over the drive contents, we need to consider a means to strip non-matching files as well.
hugenbdd Posted February 16, 2016

> I have found when running this that invariably large files sneak onto the accelerator drive...

Yup, that's what I plan to work on: basically the opposite of what I posted earlier, i.e. instead of moving files to the accelerator disk, moving them off. I actually want large files on my SSD, as it has plenty of space (1TB). However, I don't want OLD large files on it; those should be on the spinning disks. I.e. I want all recent files from the past 40 days or so, or whatever cutoff I determine maximizes the drive's utilization.
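A first pass at finding those candidates can be a plain find expression: list files on the accelerator disk that are both old and large, biggest first, and review the list before feeding anything to a mover. The 40-day and 100MB thresholds and the disk path are placeholders:

```shell
#!/bin/bash
# Sketch: list "old AND large" files on the accelerator disk, largest
# first, as candidates to strip back to the data disks. Thresholds and
# the disk path are placeholder assumptions.

old_large() {
    path=$1; days=$2; size=$3   # size in find(1) syntax, e.g. +100M
    find "$path" -type f -mtime +"$days" -size "$size" -printf '%s\t%p\n' |
        sort -rn
}

# Hypothetical usage: eyeball the biggest offenders first
# old_large /mnt/disk3 40 +100M | head -20
```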