IronBeardKnight

Members
  • Posts: 39
  • Joined

  • Last visited


IronBeardKnight's Achievements

Newbie (1/14)

Reputation: 5
  1. It almost seems like a clean docker pull from scratch works again, but as soon as the container is restarted by automation it breaks. It feels like permissions or something are being changed for this docker by the system.
  2. Also getting this issue when enabling VPN ("yes") on the latest tag: I lose all access to the GUI. Rolling back as per the previous posts has brought me back up and running; obviously not a full solution. Found this in the supervisord.log. Edit: this still did not fix the issue, because after I did a CA Backup and the container auto-started again, the GUI was gone and the error above was back. Please help.
  3. Also experiencing strange appdata behaviour, with docker containers losing their permissions.
  4. Also having this same UI error, and some very strange cache issues as well. Further testing on my end is needed, but the problems only started after the update; hardware is not the issue.
  5. The 6.10 release should be backwards compatible, no? It's always better to dev on the latest main. I will look into these links in a bit.
  6. If I get some time over the next few days I'll give it a shot; otherwise our own or a derived version of the (Active IO Streams) or (Open Files) plugin would need to be created to keep track of accessed / last-accessed files, which would give you your list.
  7. Mmm, everything is possible with time. However, I don't believe there has been significant need or request for something like this yet, as what we have currently caters for 95% of situations. That said, I have come across a couple of situations where a temporary delayed move at the file/folder level would have been good. So to state your request another way: you basically want a timer option that can be set per file/folder/share? The best starting point would be to modify the current "File list path" option to do what you want. E.g. you would add your file/folder locations as normal, with a space at the end followed by 5d, 7h, 9m, or 70s; this would be the time the entry stays on cache when using the parent share folder with cache "Yes". The code would need to change from a looped if (exists) to something like a looped if (exists && current date < $variableFromLine). Not actual code, but you get the hint. The problem is that the script now has to store the date and time of arrival on cache for each listed file/folder, and for every subsequent child file/folder, giving them individual IDs and times to reference. You can imagine how fast this grows in compute and IO for the mover, not to mention it now needs a database; a mover database won't be as easy as first thought to implement in a bash script (which is what the mover is), so a lot of extra coding would be required, and this potentially edges on a complete mover redesign. I cannot see this being implemented from my point of view. @-Daedalus, thank you for raising it here though. New idea!
What could potentially solve your issue, and would be very cool in nearly every case, is if we were to take the code from the "Active-IO-Streams" and/or "Open-Files" plugins, modify it a little to tell the mover which files are most frequently accessed (by access count and time open), and make the mover bi-directional, i.e. taking those frequent files and moving them to the cache. Having the option to auto-add those files to the mover's "exclude file list" option would also be great, as this stops the files from being moved back so soon if, like me, you run your mover hourly or so. At the point of adding to the exclude list, every file and/or folder could be added automatically (basically using the txt file as a DB), which would then allow you to add a blanket timeframe to each entry for your original needs, or, instead of a blanket value, have the timeframe auto-configure based on usage of the file or folder, e.g. accessed > 9 times. The mover then essentially becomes a proper bidirectional cache, and your system gets a little smarter by making frequently accessed data available on the faster drives; but again, that is basically a mover plugin redesign. I would be happy to help get something like this out the door, but as this is not my plugin and my time is limited, it is not my decision. @hugenbdd, not sure if you're down to go this far, but it's all future potential and ideas.
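The timed-retention check described in post 7 could be sketched roughly as below. This is illustrative only, not the real mover code: the file-list location, the "path ttl" line format, and the function names are all assumptions made up for this sketch.

```shell
#!/bin/bash
# Sketch of a per-path cache-retention check. Assumed file format,
# one entry per line: "/mnt/cache/share/subdir 7d" (path, space,
# optional max cache age as 5d / 7h / 9m / 70s). The list path below
# is a placeholder, not the real plugin's location.

filelist=${1:-/boot/config/mover_filelist.txt}

to_seconds() {            # convert a 5d / 7h / 9m / 70s token to seconds
  local n=${1%?} unit=${1: -1}
  case $unit in
    d) echo $(( n * 86400 )) ;;
    h) echo $(( n * 3600 ))  ;;
    m) echo $(( n * 60 ))    ;;
    s) echo "$n"             ;;
  esac
}

if [ -f "$filelist" ]; then
  now=$(date +%s)
  while read -r path ttl; do
    [ -e "$path" ] || continue
    age=$(( now - $(stat -c %Y "$path") ))    # seconds since last change
    if [ -n "$ttl" ] && [ "$age" -lt "$(to_seconds "$ttl")" ]; then
      echo "keep on cache: $path (age ${age}s < $ttl)"
    else
      echo "move to array: $path"
    fi
  done < "$filelist"
fi
```

Even this minimal version shows the cost point made above: real tracking would need arrival times stored per child file/folder, not just a modification-time heuristic.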
  8. Has anyone actually done a performance test of different file sizes using XFS vs btrfs with the mover, to help speed things up? I know XFS, despite having fewer features, has better IO in general on Linux. With the mover choking on large numbers of small (KB-range) files on btrfs, I wonder whether anyone has actually tested move time/performance of the two file systems for the mover's purposes.
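A rough way to run the comparison asked about in post 8: generate a set of small files on one filesystem, time moving them to the other, and repeat with the source/destination filesystems swapped. The paths and function names below are placeholders for illustration; point SRC and DST at directories on the filesystems you want to compare.

```shell
#!/bin/bash
# Small-file move timing harness (sketch). SRC/DST are placeholder
# paths; run once per filesystem combination and compare the output.

SRC=${SRC:-/mnt/cache/testset}
DST=${DST:-/mnt/disk1/testset}

make_testset() {          # generate N small files of ~4 KB each in dir
  local dir=$1 n=$2
  mkdir -p "$dir"
  for i in $(seq 1 "$n"); do
    head -c 4096 /dev/urandom > "$dir/file_$i"
  done
}

time_move() {             # time a copy-then-delete move, like the mover does
  local start end
  start=$(date +%s%N)
  cp -a "$SRC/." "$DST/" && rm -rf "${SRC:?}"/*
  end=$(date +%s%N)
  echo $(( (end - start) / 1000000 ))   # elapsed milliseconds
}
```

For a meaningful result, use tens of thousands of files (the KB-sized case discussed above) and drop the page cache between runs so the second filesystem is not unfairly favoured.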
  9. This is correct; it is used for keeping things on the cache (excluding them from the move). Example situation: you have a share set to cache "Yes". Data is read/written to the cache until the criteria are met for the mover to run; when it runs, normally every bit of that share's data currently on the cache is moved to the array. Let's say you have a bunch of sub-files or folders in that share that you would like to stay on the cache when the mover runs, so that applications depending on that data can keep running faster from the cache. This option lets you do that with fewer shares and speeds up whichever application you use it for. E.g. Nextcloud requires a share for user data, which includes docs, thumbnails, photos, etc. If you set that share to cache "Yes", all the data that was once on the cache becomes very slow after the mover runs, especially small files, as things like thumbnails then have to be read from the array instead of the cache once they are transferred/moved. Enter this mover feature! It allows you to pick the thumbnail sub-sub-sub-folder, or whatever else you want, and set it to stay on the cache regardless of mover runs, while all the actual pictures, docs, etc. that are not specified still get moved to the array. Your end-user experience in the Nextcloud GUI/webpage stays nice and fast because the thumbnails are cached, while your cache storage is optimized by having the huge, rarely accessed files sit on slower storage. In summary, this mover feature allows for: more granular cache control; cache space savings; better application/docker performance; less mover run time; faster load times of games (if you set assets or .exe files etc. to stay on cache); etc.
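The "keep on cache" behaviour described in post 9 boils down to a prefix check before each move: skip any file that sits under an entry in the exclude list. A minimal sketch, assuming a plain-text list of one excluded path per line (the list location and function name here are made up, not the plugin's own):

```shell
#!/bin/bash
# Sketch of an exclude-list check the mover could apply per file.
# EXCLUDE_LIST is a placeholder path, one excluded path per line.

EXCLUDE_LIST=${EXCLUDE_LIST:-/boot/config/mover_exclude.txt}

should_skip() {           # returns 0 (skip the move) if $1 is the excluded
  local file=$1 prefix   # path itself or anything nested under it
  while read -r prefix; do
    [ -n "$prefix" ] || continue
    case $file in
      "$prefix"|"$prefix"/*) return 0 ;;
    esac
  done < "$EXCLUDE_LIST"
  return 1
}
```

With `/mnt/cache/nextcloud/data/thumbnails` in the list, every thumbnail under that folder is left on cache while the rest of the share still moves to the array, which is exactly the Nextcloud scenario above.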
  10. Check your share settings; you may have had something going to cache that has now been set to only use the array, so the files get left on the cache and never moved. The correct procedure when changing a share to no longer use the cache is always: stop whatever is feeding that share, run a full mover run, then change the share setting back to array. I hope this helps.
  11. This is already a feature via the exclude location list.
  12. I have fixed this issue; for anyone wanting to know what the problem is: Step 1: Grafana. Step 2: In Grafana, make sure this is your local IP if you're not exposing Unraid or Grafana to the internet and are keeping it local. Step 3: In Grafana, confirm you have the correct encryption method selected (or that you're running without encryption) and apply. Do this for every graph that uses the Unraid-API.
  13. Still not able to see icons. Everything else is working, just not the icons. Linking back to that post on the first forum page does nothing; it does not explain what's going on or how to fix it.