IronBeardKnight

Everything posted by IronBeardKnight

  1. The 6.10 release should be backwards compatible, no? It's always better to dev against the latest main. I'll look into these links in a bit.
  2. If I get some time over the next few days I'll give it a shot; otherwise, our own or derived version of the (Active IO Streams) or (Open Files) plugin would need to be created to keep track of accessed / last-accessed files, which would give you your list.
  3. Hmm, everything is possible with time. However, I don't believe there has been significant need or request for something like this yet, as what we have currently caters for 95% of situations. That said, I have come across a couple of situations where a temporary delayed move at the file/folder level would have been good.
     So, to state your request another way: you basically want a timer option that can be set per file/folder/share? The best place to start would be to modify the current "File list path" option to do what you want. You would add your file/folder locations as normal, with a space at the end followed by a duration (e.g. 5d, 7h, 9m, 70s); this would be how long the entry stays on cache when its parent share is set to cache: Yes. The code would need to change from a looped if (exists) to something like a looped if (exists && current date < $variableFromLine). Not actual code, but you get the hint; see the sketch after this post.
     The problem is that the script would now have to store the date and time of arrival on cache for every folder/file per line, and for every subsequent child file/folder, and give them individual IDs and times to reference. You can imagine how fast this grows in both compute and IO for the mover, not to mention it now needs a database, which won't be as easy as first thought to implement in a bash script (which is what the mover runs as). That is a lot of extra coding, and it potentially edges on a complete mover redesign. I cannot see this being implemented, from my point of view. @-Daedalus, thank you for raising it here though.
     New idea! What could potentially solve your issue, and would be very cool in nearly every case, is if we were to take the code from the "Active-IO-Streams" and/or "Open-Files" plugins, modify it a little to tell the mover which files are most frequently accessed (by access count and time open), and make the mover bidirectional, i.e. also move those frequent files back to the cache. Having the option to auto-add those files to the mover's "exclude file list" would also be great, as it stops the files from being moved back off so soon if, like me, you run your mover hourly or so. At that point you could have each and every file and/or folder added to the list automatically (basically using the txt file as a DB), which would let you attach a blanket timeframe to each entry for your original need, or, instead of a blanket value, auto-configure the timeframe based on usage of the file or folder (e.g. accessed > 9 times).
     The mover then essentially becomes a proper bidirectional cache, and your system gets a little smarter by keeping what is accessed frequently on the faster drives. But again, this is basically a mover plugin redesign. I would be happy to help get something like this out the door, but as this is not my plugin and my time is limited, it's a decision that is not up to me. @hugenbdd, not sure if you're down to go this far, but it's all future potential and ideas. Pardon the grammar and spelling, I was in a rush.
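     To make the hint above concrete, here is a minimal bash sketch of the per-entry timer idea. To be clear, this is my own illustration, not the plugin's actual code: the skiplist format (path, space, duration), the to_seconds helper, and the use of file mtime as a stand-in for a stored "arrival on cache" time are all assumptions.
       #!/bin/bash
       # Hypothetical sketch: each skiplist line is "path duration", e.g. "/mnt/cache/appdata 7h".
       # Paths containing spaces would need quoting/escaping; ignored here for brevity.
       to_seconds() {                    # convert 5d / 7h / 9m / 70s into seconds
         local n=${1%?} unit=${1: -1}
         case $unit in
           d) echo $(( n * 86400 ));;
           h) echo $(( n * 3600 ));;
           m) echo $(( n * 60 ));;
           s) echo "$n";;
         esac
       }
       now=$(date +%s)
       while read -r path dur; do
         [ -e "$path" ] || continue                  # the looped if (exists)
         age=$(( now - $(stat -c %Y "$path") ))      # mtime approximates arrival-on-cache time
         if [ "$age" -lt "$(to_seconds "$dur")" ]; then   # the "current date < $variableFromLine" test
           echo "keep on cache (timer not expired): $path"
         else
           echo "eligible to move: $path"
         fi
       done < /boot/config/plugins/ca.mover.tuning/skiplist.txt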
  4. Has anyone actually done a perf test of different file sizes using XFS vs BTRFS with the mover, to help speed things up? I know that XFS, while it has fewer features, has better IO in general on Linux. However, with the mover choking on large numbers of small files (e.g. KBs) on BTRFS, I wonder whether anyone has actually tested the move time/performance of the two filesystems for the purpose of the mover.
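     For anyone wanting to run that comparison themselves, a rough benchmark sketch follows. The mount points (/mnt/test_xfs, /mnt/test_btrfs), the destination, and the file count are placeholders to adapt; run as root so the cache drop works.
       # Create 10,000 x 4 KB files on each filesystem, then time moving them off it.
       for fs in /mnt/test_xfs /mnt/test_btrfs; do
         mkdir -p "$fs/src" /mnt/disk1/bench_dst
         for i in $(seq 1 10000); do
           dd if=/dev/urandom of="$fs/src/f$i" bs=4k count=1 status=none
         done
         sync; echo 3 > /proc/sys/vm/drop_caches     # drop the page cache for a fair timing
         echo "$fs:"
         time rsync -a --remove-source-files "$fs/src/" /mnt/disk1/bench_dst/
         rm -rf /mnt/disk1/bench_dst/*               # reset the destination between runs
       done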
  5. This is correct: it is used for keeping things on the cache (excluding them from the move).
     Situation example: you have a share set to cache: Yes. Data is read/written to the cache until the criteria are met for the mover to run; the mover runs, and normally every bit of that share's data currently on the cache is moved to the array. Let's say you have a bunch of sub-files or folders in that share that you would like to stay on the cache when the mover runs, so that applications depending on that data can run faster. Having this option lets you create fewer shares and speeds up whichever applications you use it for.
     E.g. Nextcloud requires a share for user data, which includes docs, thumbnails, photos, etc. If you set that share to cache: Yes, then all the data that was once on the cache becomes very slow after the mover runs and it gets transferred/moved to the array, especially small files, as things like thumbnails then have to be read from the array instead of the cache. Enter this mover feature! It allows you to find the thumbnail sub-sub-sub-folder, or whatever else you want, and set it to stay on the cache regardless of mover runs, while all the actual pictures, docs, etc. not specified still get moved to the array. That keeps your end-user experience nice and fast in the Nextcloud GUI/webpage, because your thumbnails stay cached, while optimizing your cache usage by letting the huge, rarely accessed files sit on slower storage.
     In summary, this mover feature allows for:
     • More granular cache control
     • Cache space saving
     • Better application/Docker performance
     • Less mover run time
     • Faster load times for games, if you set assets or .exe files etc. to stay on cache
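     To illustrate the mechanics, a simplified stand-in (this is not the plugin's real mover script, and the Nextcloud path is an example only): a mover-style file walk can honor exclusions by pruning anything under each excluded prefix.
       EXCLUDES=(
         "/mnt/cache/nextcloud/appdata/preview"      # example: keep thumbnails/previews on cache
       )
       args=()
       for ex in "${EXCLUDES[@]}"; do
         args+=( ! -path "$ex/*" )                   # skip everything under this prefix
       done
       find /mnt/cache/nextcloud -type f "${args[@]}" -print   # everything printed would move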
  6. Check your share settings, as you may have had something going to cache that has since been set to only use the array; the files then get left on the cache and are never moved. The correct procedure when changing a share to no longer use the cache is: stop whatever is feeding that share, run a full mover run, and only then change the share setting to array. I hope this helps.
  7. This is already a feature via the exclude-location list.
  8. I have fixed this issue; for anyone wanting to know what the problem is, the steps below are all in Grafana (each originally had a screenshot).
     Step 1: (screenshot only)
     Step 2: Make sure the address is your local IP, if you're not exposing Unraid or Grafana to the internet and are keeping it local.
     Step 3: Confirm you have the correct encryption method selected (or none, if you're running without encryption) and apply.
     Do this for every graph that uses the Unraid-API.
  9. Still not able to see icons. Everything else is working, just not the icons. Linking back to that post on the first forum page does nothing; it does not explain what's going on or how to fix it.
  10. I believe, from memory, I just wiped the KVM image from Unraid but left the vdisk intact, then recreated the KVM and pointed it at the old disk, and it worked.
  11. Same for me; the server now has to be fully rebooted just to bring KVM back up. Did you guys get any updates on this?
  12. Tried editing in GEN_SSL, but it just completely breaks the container.
  13. Man, I'm having so much trouble with this container. I tried the .env trick, then got a chown error for it on the setup page, so I chowned the whole app directory since the .env does not exist anymore. Woo-hoo, one step further. Then, straight after submitting on the setup page, a 500 error, and so many errors in the logs: child 37 said into stderr: "src/Illuminate/Pipeline/Pipeline.php(149): Illuminate\Cookie\Middleware\EncryptCookies->handle(Object(Illuminate\Http\Request), Object(Closure)) #36" etc. etc. It would be so good if this worked properly. I'm not sure why this container self-generates an SSL cert through Let's Encrypt, when most people running the container will be using reverse proxies anyway.
  14. Currently the Invoice Ninja Unraid container is not working at all, and there are no instructions for navigating the errors. It appears your Docker container is broken. Not only do you have to run php artisan migrate, but after you get the DB and everything set up, you run into the errors below, along with many more of the same type.
     [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: ***RuntimeException*** [0] : /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/Encrypter.php [Line 43] => The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths. {"context":"PHP","user_id":0,"account_id":0,"user_name":"","method":"GET","user_agent":"Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/89.0.4389.86 Safari/537.36","locale":"en","ip":"192.168.1.14","count":1,"is_console":"no","is_api":"no","db_server":"mysql","url":"/"} []"
     [15-Mar-2021 14:19:32] WARNING: [pool www] child 31 said into stderr: "[2021-03-15 04:19:32] production.ERROR: [stacktrace] 2021-03-15 04:19:32 The only supported ciphers are AES-128-CBC and AES-256-CBC with the correct key lengths.: #0 /var/www/app/vendor/turbo124/framework/src/Illuminate/Encryption/EncryptionServiceProvider.php(28): Illuminate\Encryption\Encrypter->__construct('7kg2Ca9E8BTaSa8...', 'AES-256-CBC') #1 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(749): Illuminate\Encryption\EncryptionServiceProvider->Illuminate\Encryption\{closure}(Object(Illuminate\Foundation\Application), Array) #2 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(631): Illuminate\Container\Container->build(Object(Closure)) #3 /var/www/app/vendor/turbo124/framework/src/Illuminate/Container/Container.php(586): Illuminate\Container\Container->resolve('encrypter', Array) #4 /var/www/app/vendor/turbo124/framework/src/Illuminate/Foundation/Application.php(732): Illu...
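     For anyone else hitting this: in Laravel-based apps this particular "supported ciphers" error usually means the APP_KEY is missing or the wrong length, so regenerating it may be worth a try. The container name and working directory below are assumptions; adjust them to your setup:
       docker exec -w /var/www/app invoiceninja php artisan key:generate   # writes a fresh APP_KEY to .env
       docker restart invoiceninja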
  15. Hi @ich777, thank you first of all for all your application containers. RapidPhotoDownloader seems to be broken. I have tried wiping the container completely and then redownloading with both the latest URL (version 24) and the default URL you published the Docker with (version 17), and both get the below error over and over again.
  16. Very interesting. I'm wondering if this is an older-version thing or something, because at default compression levels pigz is better for compression speed and massively faster at decompression, though the actual size is not as good. I do believe this plugin could use some work on the descriptions of each setting, for example doing away with gzip and just referencing pigz, to avoid confusion as to whether it's using multiple threads or not. This is an awesome comparison of many different compression methods: Compression Comparison (just scroll down).
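     A quick way to compare the two yourself, assuming pigz is installed; the file path is a placeholder:
       f=/mnt/user/backups/sample.tar
       time gzip -k -f "$f"; ls -l "$f.gz"   # single-threaded compress; note the size
       time pigz -k -f "$f"; ls -l "$f.gz"   # multi-threaded compress, same .gz format; note the size
       time gzip -d -k -f "$f.gz"            # decompress with gzip
       time pigz -d -k -f "$f.gz"            # decompress with pigz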
  17. Hmm, if it's munching CPU hard, is it trying to move something that is actually running, like VMs or Docker containers? CPU would spike hard if you were to try to move something that is actively processing data. You may also have downloads feeding your cache, if you have set it up that way, so when the mover runs it is trying to move active files. What are your settings for each share that uses the cache? There are many possibilities here, and given the number of variables associated with using the cache, and thus the mover, it's just a process of narrowing things down one at a time. It's also possible, if you have an overclock, that a modified BCLK is too high, causing instability on other system devices, e.g. SATA/SAS drives or PCIe lanes, which could have adverse effects. I know it's a pain, but run everything at stock except the RAID itself and go one by one if all else fails. I'm not having a go, just trying to be as helpful as I can. Please let us know how you go.
  18. Hey mate, yeah, just suggestions. I think option one would also be the easiest. The second option is such a rare-case situation that it can be stored in the archive for later use, if you ever need it or there is demand for it. Oh, also, I think the spelling mistake may still be there, FYI, as I could see it even after updating the plugin.
     Hint for the masses: keep in mind that exclusion file types or locations should always come before criteria to move based on age / last accessed / space / whatever else, as exclusions are for sub-directories and/or files that need to stay on cache no matter what other options you have selected.
     Personally, these new features have sped up my Nextcloud instance exponentially, and I'm looking to do some testing with game load times in future as well. Thank you again to @hugenbdd for doing all the ground work.
  19. A small guide for those that want to use the new features but are a little lost; perhaps this will save you some time.
     Open the Unraid CMD, then:
     cd /boot/config/plugins/ca.mover.tuning
     sudo nano skiplist.txt
     In the nano editor you can list your locations to skip, one per line (an example list follows this post), then Ctrl+O to save and Ctrl+X to exit.
     Note: the listed locations may include or omit /mnt/cache/, as this is already catered for within the mover script.
     To find specific files of a name or kind, for example all .png files in a location, so you can add them to your exclusion list in case they are mixed in with other files, use for example:
     find "/mnt/cache" -type f \( -name "*.png" \) -print
     Finally, open the CA Mover Tuning GUI and set the location of your skiplist.txt as below:
     File list path: /boot/config/plugins/ca.mover.tuning/skiplist.txt
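     As an illustration only (these paths are hypothetical, not shipped defaults), a skiplist.txt might look like:
       /mnt/cache/appdata/nextcloud/data/admin/thumbnails
       /mnt/cache/appdata/plex/Library
       /mnt/cache/games/steam/common/SomeGame/assets
     One absolute path per line; everything under each listed path stays on the cache when the mover runs.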
  20. Hello @guythnick and @hugenbdd. I also thought of this, but I thought: one thing at a time.
     Option 1: This is the change I was thinking of, removing "yes" and adding "bigger" and "smaller" as the only options, which change the functionality of the size on the next line.
     Option 2: The option that has just been introduced to filter files on extension would also solve this same issue if it were ONLY applied to the locations you stipulate in the path skiplist.txt file. However, this is not the case, as the extension option covers all files with the specified extension across the ENTIRE cache.
     On a side note: @hugenbdd, I also tested your sparseness option using (Create Sparse File), but the mover did not seem to pick it up for movement. Then again, I may be testing incorrectly, I'm not sure.
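     For anyone else wanting to test the sparseness option, a quick way to make a sparse file (the path is an example, adjust to yours):
       mkdir -p /mnt/cache/test
       truncate -s 10G /mnt/cache/test/sparse.img          # 10G apparent size, no real blocks allocated
       du -h --apparent-size /mnt/cache/test/sparse.img    # reports 10G
       du -h /mnt/cache/test/sparse.img                    # reports ~0, confirming it is sparse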
  21. You should not have to rename it. The strings in the file should be passed through this plugin with any paths encased in double quotes. @hugenbdd, I think I also mentioned this to you in my notes from first testing with you: all file paths need to be encased in "", because otherwise the shell treats a path with spaces as a new parameter or command. Also, from one terrible speller to probably someone who just made a mistake: you have a spelling error. I'm setting up for some testing tonight.
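     A quick demonstration of why the quotes matter (example path only):
       f="/mnt/cache/My Media/file.mkv"
       ls -l $f      # breaks: the shell splits it into /mnt/cache/My and Media/file.mkv
       ls -l "$f"    # works: the quotes keep the path as one argument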
  22. Hahaha, I tried to explain it much more simply so others reading this could understand as well; that is exactly the same thing I was talking about. I did not know about this thread, though. I'm still working out some kinks on my side with some other features I'm working on, and will try to test the sparseness, if I understand it correctly, when I get some more time.
  23. OK, cool, that is what I was trying to convey with the mtime parameters; maybe I did not convey it correctly. From my testing for you, I have deduced:
     mtime 0 [younger than 24 hrs and older]
     mtime +0 [older than 24 hrs]
     mtime +1 [older than 48 hrs]
     mtime +2 [older than 72 hrs]
     If you skip the mtime test from the find statement completely whenever the setting is 0 or +0, then your script only allows for 48 hrs or older as the minimum age. I'm only going off what I have seen so far. Looking at the scheduler options for the mover, I was just wondering whether the options actually reflect the true timeframes and/or may cause some confusion. I'm happy to test sparseness for sure, but I'm having a hard time understanding what it actually is in order to test it.
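     These semantics are easy to verify in a scratch directory:
       mkdir -p /tmp/t
       touch -d '36 hours ago' /tmp/t/a
       touch -d '12 hours ago' /tmp/t/b
       find /tmp/t -type f -mtime 0     # matches b (modified within the last 24 hrs)
       find /tmp/t -type f -mtime +0    # matches a (older than 24 hrs)
       find /tmp/t -type f -mtime +1    # matches neither (would need to be older than 48 hrs)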
  24. Hi @hugenbdd, please find attached your script containing my notes and recommendations, and why I recommend them. I hope that was OK; I am not sure, as some people get iffy about others editing their files, so I only added some notes in case you would like to make some minor tweaks and test with a new script. I have explained as best I can why you may have been getting inconsistent results using the find command in the past, but let me know if you need more. @LeoFender, I am not sure if you have had a chance to test yet, but please feel free to also review my information and proposed movertest changes. Oh, P.S. I'm not the best at spelling and I have lazy grammar, so please excuse me.