Everything posted by IronBeardKnight

  1. Hello, perhaps I can provide an explanation of where you're going wrong: you have not actually provided a proper path to a list of directories you want skipped. To be clear, "Ignore file types" and "Ignore files / folders listed inside of text file" are not related; they are individual settings and can be used independently. Please see the examples below:
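As a rough illustration of the two independent settings (the paths and file names here are made up, not the plugin's defaults):

```shell
# Hypothetical contents for the "Ignore files / folders listed inside of text file"
# setting: a plain text file with one absolute path per line.
cat > /tmp/ca_backup_excludes.txt <<'EOF'
/mnt/user/appdata/plex/cache
/mnt/user/appdata/nextcloud/preview
EOF

# The separate "Ignore file types" field instead takes a comma-separated list of
# bare extensions entered directly in the plugin GUI, e.g.: log,tmp,cache
```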
  2. Testing now! It will take a while to run through, but I'm sure this is the solution. Thank you so much for your help; I just overlooked it, lol. I vaguely remember having it set to 00 4 previously, but that was many Unraid revisions ago, probably before backup version 2 came out. I'll also try giving your plugin revision a bash over the next couple of weeks.
  3. OMG, you are right, I'm clearly having a major brain cell malfunction lol. I should remove the * and replace it with 00 so it looks like this: " 00 4 * * 1,3,5 ". Do you think that is correct?
  4. is what I used as reference, and from my understanding it is supposed to run at 4am every Monday, Wednesday and Friday. From BackupOptions.json:
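For reference, the schedule in question breaks down field by field like this:

```shell
# Reading "00 4 * * 1,3,5" field by field (standard crontab order):
#
#   00      minute        -> at minute 0
#   4       hour          -> 04:00
#   *       day of month  -> any
#   *       month         -> any
#   1,3,5   day of week   -> Monday, Wednesday, Friday
#
# i.e. 4:00 AM every Mon/Wed/Fri. With "*" in the minute field instead, the job
# fires on every minute of the 4 o'clock hour, which would match the symptom of
# a second run appearing later in the same hour.
```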
  5. Please see below a couple of screenshots of the duplicate issue that has me a bit stumped. I have been through the code that lives within the plugin as a brief overview, but nothing stood out as incorrect in regards to this issue. "\flash\config\plugins\ca.backup2\ca.backup2-2022.07.23-x86_64-1.txz\ca.backup2-2022.07.23-x86_64-1.tar\usr\local\emhttp\plugins\ca.backup2\" I found an interesting file with paths in it that led me to a backup log file, "/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log", but as it gets re-written on each backup, it only shows data for the 4:48am backup and not the 4:00am one, so that is not useful; I did not find any errors within this log either. The fact that it's getting re-written most likely means, in my opinion, that the plugin is having issues interpreting either the settings entered or the cron that I have set.
  6. I'll have to wait for a backup to run so I can get a screenshot of the duplicate issue for you. Yes, restoring of single archives; for example, I only want to restore duplicate or nextcloud, etc. Currently, if you want to restore only a particular application/docker archive, you need to extract the tar, stop things, and replace them manually. Looking forward to the merge and release. I would class this plugin as one of Unraid's core utilities, and its importance in the community is high. I have set up multiple Unraid servers for friends and family, and it's always first on my list of must-have things to be installed and configured. Currently I'm using it in conjunction with duplicate for encrypted remote backups between a few different Unraid servers, a prerequisite of my time to help them set up Unraid etc., and way cheaper than a Google Drive etc. Being able to break backups apart into single archives, without having to rely on post-script extraction, saves on SSD and HDD IO and life. Per docker container/folder would make uploading and restoring much easier and faster; for example, when restoring from remote, having to pull a full 80+ GB single backup file can take a very long time and is very susceptible to interruption over only a 2 Mb/s line.
  7. Sorry, no. Repository: binhex/arch-qbittorrentvpn is the repo I still pull from at the moment.
  8. I might have run into another issue as well. It seems that when running on a schedule, it will sometimes split the backup into two or three files, causing duplication of the space taken. I have not been able to figure out what is causing this, but my hunch is that it does this when it fills up one disk and has to move to the next, or if a move runs on the cache. I have tried going direct to the array and also to the cache first, but I always get duplicate .tar files.
  9. Amazing work, mate! Are you working on separate restore options also? Can we expect you to merge to the original master branch, or create your own to be added to Community Applications? These are changes I have been anticipating, and I am happy to do some testing if need be.
  10. This does not seem to be a profile-specific issue from my end; it seems like a bug to me, or incorrect usage of regex or something, as I'm not allowed to have just one backup even with a custom profile.
  11. I believe the issue could be on line 299 of , although this pattern string does not make much sense to me anyway. Does ^(0|([1-9]))$ not work?
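If it helps, the suggested pattern can be sanity-checked from any shell with grep -E (POSIX ERE semantics, which should behave the same here as the plugin's matcher):

```shell
# Quick sanity check of the proposed pattern: it accepts any single digit 0-9
# (including the 1 the form currently rejects) and rejects multi-digit or empty input.
pattern='^(0|([1-9]))$'
for v in 0 1 9 10 ""; do
  if printf '%s\n' "$v" | grep -Eq "$pattern"; then
    echo "accepted: '$v'"
  else
    echo "rejected: '$v'"
  fi
done
```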
  12. I have discovered what I think may be a bug with this plugin. I'm sure this one could be a simple fix, but it's possible I might be missing something. Using 2 for this field allows the plugin to work as expected; for some reason I cannot select 1 for the number of copies, when I only want one backup and then use other tools to ship an encrypted copy offsite. Please see below how it turns red and then will not allow you to apply 1, even though the VM only has 1 vdisk. You cannot proceed and press Apply when it's red.
  13. It almost seems like if you do a clean docker pull from scratch again, it works, but then as soon as it's restarted by automation it breaks. It feels like permissions or something are getting changed for this docker by the system.
  14. I was also getting this issue when enabling VPN, yes, on the latest tag; I lose all access to the GUI. Rolling back as per the previous posts has brought me back up and running; obviously not a full solution. Found this in the supervisord.log. Edit: found that this still did not fix the issue, as after I did a CA Backup and the container auto-started again, it was back to no GUI and this error above. Please help.
  15. Also experiencing strange appdata behaviour, with docker containers losing their permissions.
  16. Also having this same UI error, and some very strange cache issues; further testing on my end is needed, but the problems only started after the update. Hardware is not the issue.
  17. The 6.10 release should be backwards compatible, no? It's always better to dev on the latest main. Will look into these links in a bit.
  18. If I get some time over the next few days I'll give it a shot; otherwise our own or derived version of the (Active IO Streams) or (Open Files) plugin would need to be created to keep track of accessed / last-accessed files, which would give you your list.
  19. Mmm, everything is possible with time; however, I don't believe there has been significant need or request for something like this yet, as what we have currently caters for 95% of situations. That said, I have come across a couple of situations where a temporary delayed move at a file/folder level would have been good. So, to state your request another way: basically you want a timer option that can be set per file/folder/share?

As for getting this done, the best place to start would be to modify the current "File list path" option to do what you want. E.g. you would add your file/folder locations as normal, with a space at the end followed by 5d, 7h, 9m, 70s; this would be the time the entry stays on cache when using the parent share folder with cache: Yes. The code would need to change from a looped if (exists) to something like a looped if (exists && current date < $variableFromLine). Not actual code, but you get the hint.

The problem is that the script now has to store the date and time of arrival on cache for each file/folder per line, and for every subsequent child file/folder, giving them individual IDs and times to reference. You can imagine how fast this grows in compute and IO for the mover, not to mention it now needs a database. A mover database won't be as easy as first thought to implement in a bash script (which is what the mover runs as); it is a lot of extra coding and potentially edges on a complete mover redesign. I cannot see this being implemented, from my point of view. @-Daedalus, thank you for raising it here though.

New idea! What could potentially solve your issue, and would be very cool in nearly every case, is if we were to take the code from the "Active-IO-Streams" and/or "Open-Files" plugins, modify it a little to advise the mover which files are most frequently accessed (by access count and time open), and make the mover bi-directional, i.e. taking those frequent files and moving them to the cache. Having the option to auto-add those files to the mover's "exclude file list" option would also be great, as it stops the files from being moved back too soon if, like me, you run your mover hourly or so. At that point you could have every file and/or folder added to the list automatically (basically using the txt file as a DB), which would let you add a blanket timeframe to each entry for your original need, or, instead of a blanket value, have the timeframe auto-configure based on usage of the file or folder, e.g. accessed > 9 times, etc.

The mover then essentially becomes a proper bidirectional cache, and your system gets a little smarter by keeping what is accessed frequently on the faster drives; but again, that is basically a mover plugin redesign. I would be happy to help get something like this out the door, but as this is not my plugin and my time is limited, it is not my decision. @hugenbdd, not sure if you're down to go this far, but it's all future potential and ideas. Pardon the grammar and spelling, I was in a rush.
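The "looped if (exists && current date < $variableFromLine)" idea above can be sketched in bash. Everything here is an assumption for illustration: the list file name, its "path delay" line format, and the echo actions standing in for the real move logic.

```shell
# Create a sample list and a freshly-touched file so the sketch is runnable.
list=/tmp/mover_delay_list.txt
sample=/tmp/mover_delay_sample
touch "$sample"
echo "$sample 5d" > "$list"

# For each listed entry, convert the delay suffix (s/m/h/d) to seconds and
# compare against the file's mtime age; young files stay on the cache.
while read -r path delay; do
  [ -e "$path" ] || continue
  case "$delay" in
    *d) max=$(( ${delay%d} * 86400 )) ;;
    *h) max=$(( ${delay%h} * 3600 )) ;;
    *m) max=$(( ${delay%m} * 60 )) ;;
    *s) max=${delay%s} ;;
    *)  max=0 ;;
  esac
  age=$(( $(date +%s) - $(stat -c %Y "$path") ))
  if [ "$age" -lt "$max" ]; then
    echo "skip (still within delay): $path"   # would stay on cache
  else
    echo "move: $path"                        # old enough to move to the array
  fi
done < "$list" > /tmp/mover_delay_result.txt
```

Note this only tracks the listed path itself via its mtime; the per-child bookkeeping described above is exactly what this sketch avoids, and why the full feature grows into a database problem.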
  20. Has anyone actually done a perf test of different file sizes using xfs vs btrfs with the mover, to help speed things up? I know that xfs, with fewer features, has better IO in general on Linux; however, with the mover choking on large numbers of small files (e.g. KBs) on btrfs, I wonder: has anyone actually tested the move time/perf of the two filesystems for the purpose of the mover?
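A rough harness for that comparison might look like this; it is only a sketch, not a rigorous benchmark. Point TARGET once at a directory on an xfs mount and once at one on a btrfs mount and compare the reported times (the default /tmp path is just a placeholder).

```shell
# Write a burst of small files -- the mover-style worst case -- and time it.
TARGET=${TARGET:-/tmp/fs_bench}
mkdir -p "$TARGET"
start=$(date +%s%N)
for i in $(seq 1 500); do
  head -c 4096 /dev/zero > "$TARGET/f$i"   # 500 x 4 KiB files
done
sync                                        # include flush time in the measurement
end=$(date +%s%N)
elapsed_ms=$(( (end - start) / 1000000 ))
echo "500 x 4 KiB files written in ${elapsed_ms} ms"
rm -rf "$TARGET"
```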
  21. This is correct; it is used for keeping things on the cache (excluding them from the move).

Situation example: you have a share set to cache: Yes. Data is read/written to the cache until the criteria are met for the mover to run; the mover runs, and normally every bit of that share's data currently on the cache is moved to the array. Let's say you have a bunch of sub-files or folders in that share that you would like to stay on the cache when the mover runs, so that applications depending on that data can run faster from the cache. This option lets you create fewer shares and speeds up whichever application you use it for.

E.g. Nextcloud requires a share for user data, which includes docs, thumbnails, photos, etc. If you set that share to cache: Yes, all the data that was once on the cache becomes very slow after the mover runs, especially small files, since things like thumbnails then have to be read from the array instead of the cache. Enter this mover feature! It allows you to pick the thumbnail sub-sub-sub folder, or whatever else you want, and have it stay on the cache regardless of mover runs, while all the actual pictures, docs, etc. not specified still get moved to the array. This keeps the end-user experience in the Nextcloud GUI/webpage nice and fast, because you cached your thumbnails, while optimizing your cache usage by letting the huge, rarely accessed files sit on slower storage.

Summary: this mover feature allows for:
- More granular cache control
- Cache space saving
- Application/docker performance
- Less mover run time
- Faster load times of games (if you set assets, .exe files, etc. to stay on cache)
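The Nextcloud example above boils down to a per-file check like the following. This is a hedged sketch of what the exclude list effectively does, not the mover's actual code, and the paths are hypothetical:

```shell
# One kept-on-cache prefix per line, as in the "File list path" text file.
keep_list=/tmp/mover_keep_on_cache.txt
echo "/mnt/cache/nextcloud/data/preview" > "$keep_list"

# A candidate file the mover is considering: its parent directory is listed,
# so it stays on the cache; anything unlisted would still migrate to the array.
candidate="/mnt/cache/nextcloud/data/preview/thumb_123.jpg"
if grep -qF "$(dirname "$candidate")" "$keep_list"; then
  verdict="kept on cache"
else
  verdict="moved to array"
fi
echo "$verdict: $candidate"
```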