IronBeardKnight


Posts posted by IronBeardKnight

  1. On 9/25/2022 at 6:37 PM, alicex said:

    Hi

    May I have some configuration examples?

    I'd like to ignore moving some folders and some file types like *.!qb,

    but my config doesn't seem to work :(

    Ignore files listed inside of a text file: yes
    File list path: /mnt/user/media/upload/

     

    Ignore file types: Yes
    Comma separated list of file types: .!qb,.part

    Or should I type "*.!qb,*.part" instead? Or ".!qb;.part"?

     

    Thanks in advance.

     

     

    Hello! Perhaps I can explain where you're going wrong.

    You have not actually provided a proper path to a file listing the directories you want skipped.

    To be clear, "Ignore file types" and "Ignore files / folders listed inside of a text file" are not related; they are individual settings and can be used independently.

     

    Please see below the examples: 

    [screenshots: example plugin settings]
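To illustrate the distinction, here is a minimal sketch. The paths and file names are hypothetical; the point is that the "File list path" setting should point at an existing text file listing one path per line, not at a directory:

```shell
# "File list path" must point at a text file that lists one path per line,
# not at the directory itself (hypothetical paths):
mkdir -p /tmp/upload
printf '%s\n' '/mnt/user/media/upload/incoming' > /tmp/upload/ignore-list.txt

# Wrong: pointing the setting at the directory
[ -d /tmp/upload ] && echo "/tmp/upload is a directory, not a file list"

# Right: pointing it at the list file itself
[ -f /tmp/upload/ignore-list.txt ] && echo "file list found"
```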

  2. 3 minutes ago, KluthR said:

    Exactly. It should be starting with 0 4, not * 4.

    Testing now! It will take a while to run through, but I'm sure this is the solution. Thank you so much for your help; I just overlooked it lol. I vaguely remember having it set to 00 4 previously, but that was many unraid revisions ago, probably before backup version 2 came out.

    I'll also try giving your plugin revision a bash over the next couple of weeks :)

     

  3. 9 minutes ago, KluthR said:

    That seems incorrect - could you post your schedule settings from inside the plugin settings page? The current setting says "At every minute past hour 4 on Monday, Wednesday, and Friday."

    OMG, you are right; I'm clearly having a major brain cell malfunction lol. I should remove the * and replace it with 00,

    so it looks like this: " 00 4 * * 1,3,5 "

     

    Do you think that is correct?
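For reference, a quick sketch of how that cron expression breaks down, using the standard crontab field order (minute, hour, day of month, month, day of week):

```shell
# Standard crontab fields: minute hour day-of-month month day-of-week
CRON="0 4 * * 1,3,5"
read -r MINUTE HOUR DOM MONTH WEEKDAYS <<< "$CRON"

# Weekdays 1,3,5 = Monday, Wednesday, Friday
echo "minute=$MINUTE hour=$HOUR weekdays=$WEEKDAYS"
```

With `* 4 * * 1,3,5` the minute field is a wildcard, which is why it fired "at every minute past hour 4"; pinning the minute to 0 makes it run once at 04:00.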

  4. On 11/13/2022 at 9:29 PM, IronBeardKnight said:

    I'll have to wait for a backup to run so I can get a screenshot for you of the duplicate issue.

    Yes, restoring of single archives; for example, I only want to restore Duplicati or Nextcloud, etc. Currently, if you want to restore only a particular application/docker archive, you need to extract the tar, stop the container, and replace things manually.

     

    Looking forward to the merge and release. I would class this plugin as one of unraid's core utilities, and its importance in the community is high. I have set up multiple unraid servers for friends and family, and it's always first on my list of must-have things to be installed and configured.

    Currently I'm using it in conjunction with Duplicati for encrypted remote backups between a few different unraid servers, a prerequisite of my time helping them set up unraid etc. :) and way cheaper than a Google Drive etc.

    Being able to break backups apart into single archives without having to rely on post-script extraction saves on SSD and HDD I/O and lifespan. Per docker container/folder would make uploading and restoring much easier and faster; for example, when restoring from remote, having to pull a full 80+ GB single backup file can take a very long time and is very susceptible to interruption over only a 2 Mb/s line.

     

    Please see below a couple of screenshots of the duplicate issue that has me a bit stumped.
    [screenshots: duplicate backup archives]

     

    I have been through the code that lives within the plugin as a brief overview, but nothing stood out as incorrect in regard to this issue.

    "\flash\config\plugins\ca.backup2\ca.backup2-2022.07.23-x86_64-1.txz\ca.backup2-2022.07.23-x86_64-1.tar\usr\local\emhttp\plugins\ca.backup2\"

     

    I found an interesting file with paths in it that led me to a backup log file, "/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log", but as it gets re-written each backup it only shows data for the 4:48am backup and not the 4:00am one, so that is not useful; I did not find any errors within this log either.

    The fact that it's getting re-written most likely means, in my opinion, that the plugin is having issues interpreting either the settings entered or the cron that I have set.

    [screenshot: schedule settings]

     

  5. On 11/11/2022 at 3:38 PM, KluthR said:

    Please clarify - you want to select the single archives which to restore and which not?

     

    I want this merged, of course! As soon as Andrew has time to review everything (https://github.com/Squidly271/ca.backup2/pulls). I already made changes on top of my changes, which currently prevents me from creating more PRs, but I'll sort that out later.

     

    Could you show a screenshot?

    Any last log entries?

    I'll have to wait for a backup to run so I can get a screenshot for you of the duplicate issue.

    Yes, restoring of single archives; for example, I only want to restore Duplicati or Nextcloud, etc. Currently, if you want to restore only a particular application/docker archive, you need to extract the tar, stop the container, and replace things manually.

     

    Looking forward to the merge and release. I would class this plugin as one of unraid's core utilities, and its importance in the community is high. I have set up multiple unraid servers for friends and family, and it's always first on my list of must-have things to be installed and configured.

    Currently I'm using it in conjunction with Duplicati for encrypted remote backups between a few different unraid servers, a prerequisite of my time helping them set up unraid etc. :) and way cheaper than a Google Drive etc.

    Being able to break backups apart into single archives without having to rely on post-script extraction saves on SSD and HDD I/O and lifespan. Per docker container/folder would make uploading and restoring much easier and faster; for example, when restoring from remote, having to pull a full 80+ GB single backup file can take a very long time and is very susceptible to interruption over only a 2 Mb/s line.

  6. I might have run into another issue as well. It seems that when running on a schedule it will split the backup into two or sometimes three files, causing duplication of used space.

    I have not been able to figure out what is causing this, but my hunch is that it happens when it fills up one disk and has to move to the next, or when a move runs on the cache.

    I have tried writing direct to the array and also to the cache first, but I always get duplicate .tar files.

     

  7. 10 hours ago, KluthR said:

    I packed all things into an experimental/unofficial plg, which is simply an update. I don't know if it's the best option to publish it here, so: if anyone wants to test ALL my changes, please PM me.

     

    So far:

     

    • Fixed tar error detection during backup
    • Improved backup log format
    • Option to create separate backup archives
    • Option to hide the Docker warning message box
    • Check docker start result and wait 2 seconds between each start/stop operation (because of the "failed to set IPv6 gateway" issue, which still needs some interpretation by a docker guru)

     

    Amazing work, mate!

    Are you working on separate restore options also?

    Can we expect you to merge into the original master branch, or create your own plugin to be added to Community Applications?

    I have been anticipating these changes and am happy to do some testing if need be.

  8. This does not seem to be a profile-specific issue from my end; it seems like a bug to me, or incorrect usage of regex or something,

    as I'm not allowed to have just one backup even with a custom profile.

    [screenshot: backup count setting]

  9. 6 minutes ago, IronBeardKnight said:

    I have discovered what I think may be a bug with this plugin.

     

    I'm sure this one could be a simple fix, but it's possible I might be missing something.

    Using 2 for this field allows the plugin to work as expected.

     

    For some reason I cannot select 1 for the number of backup copies; I only want one backup, and then I use other tools to ship an encrypted copy offsite.

     

    Please see below how it turns red and then will not allow you to apply 1, even though the VM only has 1 vdisk.

     

    You cannot proceed and press apply when it's red.

     

    [screenshot: the field turning red]

    I believe the issue could be on line 299 of https://github.com/JTok/unraid.vmbackup/blob/v0.2.2/source/Vmbackup1Settings.page, although this pattern string does not make much sense to me anyway; does   ^(0|([1-9]))$   not work?

     

    [screenshot: the pattern on the settings page]
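For what it's worth, the behaviour of the suggested pattern can be checked quickly with grep -E (a sketch only; how the plugin page actually applies its validation pattern is not shown here):

```shell
# The pattern proposed above, minus the redundant inner group:
pattern='^(0|[1-9])$'
matches() { echo "$1" | grep -Eq "$pattern"; }

for v in 0 1 2 9 10 00; do
  if matches "$v"; then echo "$v: accepted"; else echo "$v: rejected"; fi
done
```

Note that this accepts only the single digits 0 through 9; if the field ever needs to allow 10 or more copies, something like `^[1-9][0-9]*$` would be needed instead.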

  10. I have discovered what I think may be a bug with this plugin.

     

    I'm sure this one could be a simple fix, but it's possible I might be missing something.

    Using 2 for this field allows the plugin to work as expected.

     

    For some reason I cannot select 1 for the number of backup copies; I only want one backup, and then I use other tools to ship an encrypted copy offsite.

     

    Please see below how it turns red and then will not allow you to apply 1, even though the VM only has 1 vdisk.

     

    You cannot proceed and press apply when it's red.

     

    [screenshot: the field turning red]

  11. On 6/4/2022 at 11:09 AM, WenzelComputing said:

    I gave up on waiting for an official fix. Here is my unofficial fix:

    [screenshot: unofficial fix]

     

    Go back to the 4.3.9 version.

     

    The new version is completely broken.

     

    I was also getting this issue when setting VPN to yes on the latest tag; I lose all access to the GUI.

    Rolling back as per the previous posts brought me back up and running.

    Obviously not a full solution.

     

    I found this in the supervisord.log:

    [screenshot: supervisord.log error]

    Edit: found that this still did not fix the issue; after a CA Backup ran and the container auto-started again, it was back to no GUI and the error above.

    Please help 

     

  12. 6 hours ago, hugenbdd said:

    A few things. If you scroll way back in the thread we talk about a-time etc. I can't remember the specifics, but the way unRAID is implemented doesn't allow us to see the last "accessed" time of files. That would have made it very easy to move files over.

     

    If someone comes up with code that provides a file list, I could call that and send it to mover, i.e. off cache or onto cache; essentially just using a cat filelist > mover.

    If I get some time over the next few days I'll give it a shot; otherwise our own or derived version of the Active IO Streams or Open Files plugins would need to be created to keep track of accessed / last-accessed files, which would give you your list.
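The cat filelist > mover idea above could be sketched like this (the paths and the mover invocation are hypothetical; the real selection criteria would come from access tracking):

```shell
# Build a candidate list of files to hand to the mover.
dir=$(mktemp -d)            # stand-in for a cache share, e.g. /mnt/cache/media
touch "$dir/old.bin" "$dir/new.bin"

# In practice the list would be filtered by age or access data,
# e.g. find ... -mtime +30; in this demo everything qualifies.
find "$dir" -type f | sort > /tmp/filelist

# The list could then be piped to the mover, e.g.:
#   cat /tmp/filelist | mover    # hypothetical invocation
wc -l < /tmp/filelist
```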

  13. mmm

    On 5/11/2022 at 6:50 PM, -Daedalus said:

     

    Got'cha, thanks for the explanation.

     

    Though what I was requesting was different:

    I'd like the ability to say "Move according to tuning rules for all shares except these ones. For these ones, just follow stock mover rules (daily, etc.)"

    Everything is possible with time. However, I don't believe there has been significant need or request for something like this yet, as what we have currently caters for 95% of situations. That said, I have come across a couple of situations where having temporarily delayed moving at a file/folder level would have been good.

    So to state your request another way: basically you want a timer option that can be set per file/folder/share?

    As for getting this done, the best starting point would be to modify the current "File list path" option to do what you want. E.g. you would add your file/folder locations as normal, with a space at the end followed by 5d, 7h, 9m, or 70s; this would be the time the entry stays on the cache when using the parent share folder with Cache: Yes.

    Code changes would need to go from:

        looped if (exists)

    to something like:

        looped if (exists && current date < $variableFromLine)

    Not actual code ^ but you get the hint :)
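A rough bash sketch of that hint, assuming the hypothetical "path duration" line format suggested above (the function and variable names are mine, not the plugin's):

```shell
# Convert a "5d / 7h / 9m / 70s" style duration to seconds.
to_seconds() {
  case $1 in
    *d) echo $(( ${1%d} * 86400 )) ;;
    *h) echo $(( ${1%h} * 3600 )) ;;
    *m) echo $(( ${1%m} * 60 )) ;;
    *s) echo "${1%s}" ;;
  esac
}

# Per file-list line "path duration": keep the path on cache while it is
# younger than the duration. The arrival time would have to be tracked
# somewhere, which is exactly the database problem described below.
should_keep() {  # usage: should_keep <path> <arrival-epoch> <duration>
  local path=$1 arrival=$2 dur=$3
  [ -e "$path" ] && [ $(( $(date +%s) - arrival )) -lt "$(to_seconds "$dur")" ]
}
```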


    The problem with this is that the script now has to store the date and time of arrival on cache for said folder/file per line, but also for every subsequent child file/folder, and give them individual IDs and times to reference. You can imagine how this grows very fast in compute and I/O, not to mention that the mover would now need a database. A mover database won't be as easy as first thought to implement in a bash script (which is what the mover runs); it is a lot of extra coding and potentially edges on a complete mover redesign.
     

    I cannot see this being implemented from my point of view.

     @-Daedalus thank you for raising it here though.

    New Idea!
    What could potentially solve your issue, and would be very cool in nearly every case, is if we were to use, for example, the code from the Active IO Streams and/or Open Files plugins, modified a little to advise the mover which files were most frequently accessed (by access count and time open), and make the mover bi-directional, i.e. taking those frequent files and moving them to the cache.
    Having the option to auto-add said files to the mover's "exclude file list" option would also be great, as this stops the files from being moved back so soon if, like me, you have your mover run hourly or so. At the point of adding to the exclude list, you could have each file and/or folder added automatically (basically using the txt file as a DB), which would allow you to add a blanket timeframe to each entry for your original needs, or, instead of a blanket timeframe, have it auto-configure based on the usage of the file or folder, e.g. accessed > 9 times, etc.

     

    The mover then essentially becomes a proper bidirectional cache, and your system gets a little smarter by making frequently accessed data available on the faster drives; but again, this is basically a mover plugin redesign.

    I would be happy to help get something like this out the door, but as this is not my plugin and my time is limited, it's a decision that is not up to me.

    @hugenbdd not sure if you're down to go this far, but it's all future potential and ideas.

    Pardon the grammar and spelling; I was in a rush.

  14. Has anyone actually done a perf test of different file sizes using XFS vs BTRFS with the mover, to help speed things up?

    I know that XFS, with admittedly fewer features, has better I/O in general on Linux. However, with the mover choking on large amounts of small files (e.g. KBs) on BTRFS, I wonder: has anyone actually tested move time/perf of the two file systems for the purpose of the mover?

  15. On 5/9/2022 at 11:23 PM, -Daedalus said:

     

    I saw that, but the wording of the help section makes it sound like it'll just never get moved off cache:

     

    [screenshot: help text for the option]

    This is correct; it is used for keeping things on the cache (excluding them from the move).

    Situation example:

    You have a share set to Cache: Yes. Data is read/written to the cache until the criteria are met for the mover to run; the mover runs, and normally every bit of data under that share that is currently on the cache is moved to the array.

    Let's say you have a bunch of sub-files or folders in that share that you would like to stay on the cache when the mover runs, so that applications that depend on that data can run faster using the cache.

    Having this option allows you to create fewer shares and increases the speed of the applications you have used it for.

    E.g.
    Nextcloud requires a share for user data, which includes docs, thumbnails, photos, etc. If you set that share to Cache: Yes, all the data that was once on the cache becomes very slow after the mover runs, especially small files, as it gets transferred/moved to the array; things like thumbnails then have to be read from the array instead of the cache.

    Enter this mover feature!
    It allows you to find the thumbnail sub-sub-sub folder, or whatever else you want, and set it to stay on the cache regardless of mover runs; however, all the actual pictures, docs, etc. not specified still get moved to the array. This keeps your end-user experience nice and fast in the Nextcloud GUI/webpage (as you cached your thumbnails) while optimizing your cache's used storage by having the huge files sit in slower storage, as they are not regularly accessed.

    In summary, this feature of the mover allows for more:
    Granular cache control
    Cache space saving
    Application/docker performance
    Less mover run time
    Faster load times for games: if you set assets or .exe files etc. to stay on cache
    etc.
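A tiny illustration of the keep-on-cache idea (the paths are made up; conceptually, entries from the file list are matched against the mover's transfer candidates):

```shell
# Files the mover would normally transfer off the cache:
printf '%s\n' \
  /mnt/cache/nextcloud/preview/thumb1.jpg \
  /mnt/cache/nextcloud/photos/holiday.jpg > /tmp/candidates.txt

# Keep-on-cache list containing the thumbnail folder:
echo '/mnt/cache/nextcloud/preview' > /tmp/keep.txt

# Everything NOT under a kept path still gets moved:
grep -vF -f /tmp/keep.txt /tmp/candidates.txt
```

Here the thumbnail stays on the cache, while the full-size photo remains eligible for the array.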



     

  16. On 4/25/2022 at 6:52 PM, Neo_x said:

    Running in a bit of a challenge here, where mover only seems to move a certain percentage of files.

    Expected operation: the cache share hits 95% usage, then mover moves all files to the array. Instead it only moves roughly 20%.

    On the 23rd I did a manual move (Main menu, the Move button), which took it down to about 10%.

    The 24th and 25th were the automatic moves once it hit 95%, but it only moved about 25%.

    [screenshot: cache usage]

     

    configuration as follows :

    [screenshots: mover configuration]

     

    Any ideas? should i enable mover logging and/or test mode for further troubleshooting?

     

    Thx!

    Check your share settings, as you may have had something going to the cache that has now been set to only use the array; thus the files get left on the cache and are never moved.

    The correct procedure when changing a share to no longer use the cache is always to stop whatever is feeding that share, run a full mover pass, and then change the share setting to array.

     

    I hope this helps