IronBeardKnight

Everything posted by IronBeardKnight

  1. Hi @ich777, thank you first of all for all your application containers. RapidPhotoDownloader seems to be broken. I have tried wiping the container completely and then redownloading with both the latest URL (version 24) and the default URL the Docker was published with (version 17), and both get the below error over and over again.
  2. Very interesting. I'm wondering if this is an older-version thing, because at default compression levels pigz is faster at compression and massively faster at decompression, although the resulting size is not quite as good. I do believe this plugin could use some work on the descriptions of each setting, for example doing away with gzip and just referencing pigz, to avoid confusion as to whether it is multithreaded or not. This is an awesome comparison of many different compression methods: Compression Comparison (just scroll down).
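     For anyone who wants to compare the two on their own data, a rough sketch (the archive name, level and thread count are just examples):
       # single-threaded gzip at level 6, keeping the original file
       time gzip -k -6 archive.tar && mv archive.tar.gz gzip-result.tar.gz
       # multi-threaded pigz at the same level, using 8 threads
       time pigz -k -6 -p 8 archive.tar && mv archive.tar.gz pigz-result.tar.gz
       # compare the resulting sizes
       ls -lh gzip-result.tar.gz pigz-result.tar.gz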
  3. Hmm, if it's munching CPU hard, is it trying to move something that is actually running, like VMs or Docker containers? CPU would spike hard if you were to try to move something that is actively processing data. You may also have downloads feeding your cache, if you have set it up that way, and when the mover runs it is trying to move active files? What are your share settings for each share that uses the cache? There are many possibilities here, and given the number of variables associated with the cache and thus the mover, it's just a process of narrowing things down one at a time. It's also possible, if you have an overclock, that a modified BCLK is too high, causing further instability on other system devices, e.g. SATA/SAS drives or PCIe lanes, which could have adverse effects. I know it's a pain, but if all else fails, run everything at stock except your array itself and go one by one. I'm not having a go, just trying to be as helpful as I can. Please let us know how you go.
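     One quick way to check whether the mover is fighting over files that are actively in use (a rough sketch; assumes lsof is available, /mnt/cache is your cache mount, and the downloads path is only an example):
       # list processes holding files open on the cache pool
       lsof /mnt/cache
       # or narrow it down to a suspect share, e.g. a downloads folder
       lsof +D /mnt/cache/downloads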
  4. Hey mate, yeah, just suggestions. I think option one would also be the easiest. The second option is such a rare-case situation that it can be stored in the archive for later use if you ever need it, or if there is demand for it later. Oh, also, I think the spelling mistake may still be there, FYI, as I could see it even after updating the plugin. Hint for the masses: we need to keep in mind that exclusions by file type or location should always come before the criteria to move based on age, last accessed, free space or whatever else, as exclusions are for sub-directories and/or files that need to stay in cache no matter what other options you have selected (see the sketch below). Personally, these new features have sped up my Nextcloud instance alone exponentially, and I'm looking to do some testing with game load times as well in future. Thank you again to @hugenbdd for doing all the ground work.
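     To illustrate why the exclusions have to be evaluated before the age criterion, a hypothetical sketch (not the plugin's actual code; the Nextcloud path is just an example):
       # prune the excluded directory first, then apply the age test to everything else;
       # if the -mtime test ran on its own, the excluded files would end up in the move list too
       find /mnt/cache -path "/mnt/cache/appdata/nextcloud" -prune -o -type f -mtime +2 -print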
  5. A Small Guide for those that want to use the new features but are a little lost, or perhaps this will just save you some time.
     Open the Unraid terminal, then:
     cd /boot/config/plugins/ca.mover.tuning
     sudo nano skiplist.txt
     In the nano editor, list the locations you want to skip, one per line (there is an example skiplist just below), then press Ctrl+O to save and Ctrl+X to exit.
     Note: the listed locations may include or omit /mnt/cache/, as this is already catered for within the mover script.
     To find specific files of a given name or kind, for example all .png files in a location, so you can add them to your exclusion list in case they are mixed in with other files, use something like the example below:
     find "/mnt/cache" -type f \( -name "*.png" \) -print
     Then open the CA Mover Tuning GUI and set the location of your skiplist.txt as below:
     File list path: /boot/config/plugins/ca.mover.tuning/skiplist.txt
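     For reference, a hypothetical skiplist.txt might look like this (these share and folder names are examples only, one path per line):
       /mnt/cache/appdata/nextcloud
       /mnt/cache/data/nextcloud_userdata
       /mnt/cache/games/your_game_folder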
  6. Hello @guythnick and @hugenbdd, I also thought of this, but I figured one thing at a time.
     Option 1: this is the change I was thinking of: removing Yes and adding Bigger and Smaller as the only options, which change how the size set on the next line is applied.
     Option 2: the option that has just been introduced to filter files on extension would also solve this same issue if it were ONLY applied to the locations you stipulate in the skiplist.txt file; however, this is not the case, as the extension option covers all files with the specified extension across the ENTIRE cache.
     On a side note: @hugenbdd, I also tested your sparseness option using (Create Sparse File), but it did not seem to pick the file up for movement. However, I may be testing incorrectly, I'm not sure.
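     For anyone else testing the sparseness option, one way to create a sparse test file (the path and size are just examples):
       # create a 5 GiB sparse file on the cache; it allocates almost no real space
       mkdir -p /mnt/cache/test
       truncate -s 5G /mnt/cache/test/sparse_test.img
       # compare apparent size vs actual space used on disk
       du -h --apparent-size /mnt/cache/test/sparse_test.img
       du -h /mnt/cache/test/sparse_test.img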
  7. You should not have to rename it. The strings in the file should be passed through this plugin with any paths encased in double quotes. @hugenbdd, I think I also mentioned this to you in my notes from the first round of testing: all file paths need to be encased in "" otherwise the shell thinks it's a new parameter or command. Also, from one terrible speller to someone who probably just made a typo, you have a spelling error. I'm setting up for some testing tonight.
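     A tiny illustration of why the quoting matters (hypothetical paths, not the plugin's actual code):
       SRC="/mnt/cache/My Photos/holiday.png"
       mv $SRC /mnt/disk1/photos/     # breaks: the shell splits on the space and sees two arguments
       mv "$SRC" /mnt/disk1/photos/   # works: the whole path is passed as one argument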
  8. Hahaha, I tried to explain it much more simply so others reading this could understand as well; that is exactly the same thing I was talking about. I did not know about that thread, though. I'm still working out some kinks on my side with some other features I'm working on, and will try to test the sparseness, if I understand it correctly, when I get some more time.
  9. OK, cool, that is what I was trying to convey with the mtime parameters; maybe I did not convey it correctly. From my testing for you I have deduced:
     mtime 0  [younger than 24hrs and older]
     mtime +0 [older than 24hrs]
     mtime +1 [older than 48hrs]
     mtime +2 [older than 72hrs]
     If you skip the mtime statement completely based on == 0 or == +0, then your script is only allowing for 48hrs or older. I'm only going off what I have seen so far. Looking at the scheduler options for the mover, I was just wondering whether the options actually reflect the true time frames and/or may cause some confusion. I'm happy to test sparseness for sure, but I'm having a hard time understanding what it actually is in order to test it.
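     To double-check those boundaries on your own system, something like the following (the path is just an example):
       # files modified within the last 24 hours
       find /mnt/cache -type f -mtime 0 -print
       # files modified more than 24 hours ago
       find /mnt/cache -type f -mtime +0 -print
       # files modified more than 48 hours ago
       find /mnt/cache -type f -mtime +1 -print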
  10. Hi @hugenbdd, please find attached your script containing my notes and recommendations, and why I recommend them. I hope that was OK; I'm not sure, as some people get iffy about others editing their files, so I only added notes in case you would like to make some minor tweaks and test with a new script. I have explained as best I can why you may have been getting inconsistent results using the find command in the past, but let me know if you need more. @LeoFender, I'm not sure if you have had a chance to test yet, but please feel free to also review my information and the proposed movertest changes. Oh, P.S. I'm not the best at spelling and I have lazy grammar, so please excuse me.
  11. Apologies, my work has been very busy as of late. Agreed! Functionality first, always. I'm happy to give this some testing over the week after I finish work each day; however, the real progress will most likely be made on the weekend. I'll get back to you with my results. HOWEVER, if anyone else on the forum is interested, by all means jump in and give the script a test as well. @hugenbdd, I'll get back to you ASAP.
  12. I might have to take a quick shortcut here and give props to bergware/dynamix for the folder caching plugin: https://github.com/bergware/dynamix/tree/master/source/cache-dirs This plugin has done most of the heavy lifting for us, in a way. It's not exactly what we need, as unfortunately RAM is more expensive than SSD or NVMe, but the checkboxes this repo provides are a start (with minor tweaking to meet our needs).
     GUI & PLAN
     The picture below shows the kind of option menu that would be perfect, although ideally we would like it to go deeper into the folder structure if the user needs it, as this one only collects the first level of folder names. Once one or more folder locations are selected, we then pipe the directory values into a recursive scan to list and pass every file and/or folder location string to the mover script, which then skips them as needed. Using perhaps these commands, it's possible to gather the information we may need into a list, array or variable string; I'm really not sure how or where the actual mover script is, so I don't know the best method of transferring the location strings to the mover script itself.
     Commands: https://www.cyberciti.biz/faq/linux-show-directory-structure-command-line/
     # This gives us the share folders that are set up in Unraid to be cached: -d (directories only), -L (tree levels deep from the stated location)
     tree -d -L 1 /mnt/cache
     https://www.explainshell.com/explain?cmd=find+.+-type+f+-name+"*confli*"+-exec+rm+-i+-f+{}+\;
     # This line will search the provided location (string or variable) and return everything under it
     find /mnt/cache -print
     # This line will search the provided location (string or variable) for any file or folder with the exact name NextCloud
     find /mnt/cache -name "NextCloud" -print
     # This line will search the provided location (string or variable) for all .png files
     find /mnt/cache -name "*.png" -print
     This would be a very effective way to select folders to exclude from the move, and thus keep cached, provided you can go deeper into the folder structure than what is coded in the folder caching plugin. From what I can see it looks pretty simple to change, but having the GUI adjust its size for the structure as you navigate I'm a little unsure about, as web dev is not my apple pie just yet.
     OPTIONAL Future Improvement
     Perhaps the below could be optionally configured per folder, but if left blank then everything in that folder would be classed as excluded from the move. Just a thought, and it would require further thought, or dynamically generated user-defined options per selected folder.
     CURRENT OPTION LOGIC
     In regards to how your existing options will affect, or be affected by, this, I believe it may be easier than I first thought. We add a Boolean menu option like your others under the folder selection list with the title "Include other Mover conditions on these Folders/files". It's just a matter of having an if statement in the mover script check the passed-in location strings and, if there is a match, skip that directory and its child items (see the sketch below). I'm not sure if this is what you were actually looking for, @hugenbdd, but it took me a little while to put this together, so I hope it's useful. Happy to try and help any way I can.
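     To make that last point concrete, a rough sketch of how the skip check could look inside a mover-style loop (hypothetical variable and function names, not the plugin's actual code):
       # read the excluded locations into an array (one absolute path per line)
       mapfile -t SKIP_DIRS < /boot/config/plugins/ca.mover.tuning/skiplist.txt

       should_skip() {
           local file="$1"
           for dir in "${SKIP_DIRS[@]}"; do
               # skip the file if it is the excluded path or sits anywhere underneath it
               [[ "$file" == "$dir" || "$file" == "$dir"/* ]] && return 0
           done
           return 1
       }

       # example: walk the cache and only hand non-excluded files to the mover
       find /mnt/cache -type f -print | while read -r f; do
           should_skip "$f" && continue
           echo "would move: $f"
       done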
  13. Nice! I'll have a gander around the internet and try some testing now to see if I can get something together.
  14. Hi guys, I know this is a little long, but please hear me out and let me know if you think this is something we should be chasing as a community stuck at home. Pretty, pretty please with a cherry on top, can we have an option that allows excluding files and/or folders from the mover, perhaps by name xyz or within a regex range (a self-configured regex string)? This would allow a long-standing issue to be at least half addressed.
     Issue: some config folders allow for faster loading of web pages, and when running Dockers a separate data share is usually created, but you don't want all the data in that share in the cache, as it will fill the cache too quickly. Hmm, a better example is Nextcloud in Docker with a separate userdata folder, as storing everything in cache is not really an awesome idea. Thumbnails for pictures are stored in this userdata folder along with the images themselves, which causes quite a bit of delay when Nextcloud reads from spinning disk. The same can be said for Plex or Emby in some cases where folder art, metadata and images are stored with the media. By allowing the option to exclude files or folders via a regex string or a name, you can set your entire share to Yes for caching, and when the mover runs it will move everything except the important files off the cache to the array (see the example below). Hell, even as far as particular game install files: if you know what you're doing, you can effectively speed up your networked game directories of massive size, or per game. This gives the ability to improve Docker efficiency astronomically while not being as risky as caching in RAM. Ideally it would be great if Unraid had the option to select folders or files to be cached and give them the same options given to the shares; however, this is something they were aware of in early 2018, I think, and it never came to life, so having this plugin skip allocated files and folders is the next best thing. Is this something we can look at? I'm happy to give developing such a thing a crack on the back of this plugin to help out. Cheers guys.
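     As a concrete example of the kind of exclusion I mean (the share name and folder patterns are hypothetical, just to illustrate the idea):
       # list files that would be moved, excluding anything under a preview/ or thumbnails/ folder
       find /mnt/cache/nextcloud_data -type f ! -path '*/preview/*' ! -path '*/thumbnails/*' -mtime +0 -print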
  15. Excellent news. I, however, still seem to be having issues with shucked WD Easystore WD100EMAZ drives. I'm waiting for yet another rebuild; once it's done I'll probably roll back to 6.7 to see if that helps.
  16. I'll check this out tonight. This BIOS download has changed a few times now on the Gigabyte site. I know that bifurcation is supported in F12i at 4x4, along with a newer AGESA, but I'm not sure what else may be in the firmware. It is very annoying, as they put up no information as to why it was taken down or what changes were made. On the BIOS note, I'm hesitant to downgrade, as it appears the AGESA version in F11 is a beta from the looks of it. After booting in this morning before work, I found that another drive went offline along with my two parity drives. I'm thinking this may be drive-type specific more than a controller and cable issue, as all the others are working without problems. Either that, or the latest kernel may be having support issues with my board; I will need to do further digging.
  17. Sure, I'll have some time tomorrow night and will pull some stats for you. I'm running a designer rev 1 board with the F12i BIOS, but I'm thinking it's more a controller issue for me, or perhaps the SATA cables. I plan on doing some testing tomorrow after work and will let you know.
  18. Hi there. I have the same board and CPU combo as you, running 65 TB total with all SATA ports occupied, plus 2 x NVMe and 3 GPUs. You may be overloading people with this much info straight away, lol. Ever since upgrading to 6.8 I have been having non-stop freezes and drives dropping, mainly my parity drives. I recently added what I believe to be HGST or WD Red 10 TB drives to the array a little while before upgrading, and it appears they only started having issues after updating from 6.7.2 to the new 6.8. My board and CPU combo was rock solid on 6.7.2, even with the new drives. I'm seeing quite a bit of instability with VMs shutting off and parity drives being disabled as well on the new build of Unraid. P.S. I forgot to mention I'm on the same BIOS version as you, but Gigabyte seem to have CDN issues, as the latest revision shown has now gone back to F11 (using 8.8.8.8 from Australia).
  19. Unraid 6.8.0 Stable has been released, but nothing in the Nvidia plugin yet. What is the usual turnaround for the Nvidia build update? I have already moved to stable 6.8, and for the last 24 hours it seems to be stable across my system, running 11 disks, 20 Docker containers and 6 VMs. Sorry, I forgot to mention that I appreciate all your hard work, guys. Have a Merry Christmas!