IronBeardKnight


Posts posted by IronBeardKnight

  1. NVME CACHE
    I transferred from btrfs to ZFS and have had a very noticeable decrease in write speed, both before and after the memory cache fills. Not only that, but RAM being used as a cache makes me nervous, even though I have a UPS and ECC memory. I have noticed that my dual-NVMe RAID 1 ZFS cache pool gets the full 7000 MB/s read but only 1500 MB/s max write, which is a far cry from what it should be when using ZFS. I will be switching my appdata pools back to btrfs, as it has nearly all the same features as ZFS but is much faster in my tests.
    The only thing missing to take full advantage of btrfs is the nice GUI plugin and scripts that have been written to deal with snapshots, which I'm sure someone could bang up pretty quickly using the existing ZFS scripts and plugins.

    It's important to note here that my main NVMe cache pool was RAID 1 in both btrfs and ZFS, using each filesystem's native RAID 1, obviously.
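If you want to sanity-check your own pool speeds, a rough sequential test with plain dd is enough to see this kind of gap (the mount point below is an example; adjust to your pool):

```shell
# Rough sequential write test (example path; point it at your own pool).
# conv=fdatasync makes dd wait for the flush, so the RAM write cache
# does not inflate the number.
dd if=/dev/zero of=/mnt/cache/speedtest.bin bs=1M count=8192 conv=fdatasync

# Drop the page cache before the read test, otherwise you mostly measure
# RAM (on ZFS the ARC may still serve part of it, so use a size > RAM).
sync; echo 3 > /proc/sys/vm/drop_caches
dd if=/mnt/cache/speedtest.bin of=/dev/null bs=1M

rm /mnt/cache/speedtest.bin
```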


    ARRAY
    I also started converting some of my array drives to single-disk ZFS as per SpaceInvader One's videos, with the array parity and expansion handled by Unraid. This is where I noticed the biggest downside for me personally: a single ZFS disk, unlike a zpool, is obviously missing a lot of features, but more than that, write performance is very heavily impacted, and you still only get single-disk read speed once the RAM cache is exhausted. I noticed a 65% degradation in write speed to the single ZFS drive.

    I did a lot of research into btrfs vs ZFS and have decided to migrate all my drives and cache to btrfs and let Unraid handle parity, much the same way SpaceInvader One is doing with ZFS, but this way I don't see the performance impact I was seeing with ZFS and should still be able to do all the same snapshot shifting and replication that ZFS does. Doing it this way I avoid the dreaded unstable btrfs native RAID 5/6, and I get nearly all the same features as ZFS but without the speed issues in Unraid.


    DISCLAIMER
    I'm sure ZFS is very fast when it comes to an actual zpool rather than a single-disk situation, but it very much feels like ZFS is a deep-storage filesystem and not really moulded for an active array, so to speak.

    Given my testing, all my cache pools and the drives within them will be btrfs RAID 1 or 0 (RAID 1 giving you active bitrot protection), and my array will be Unraid-handled parity with individual single-disk btrfs filesystems.

    Hope this helps others in some way to avoid days of data transfer only to realise the pitfalls.

     

  3. Hey guys, having major issues with the mover process. It seems to keep hanging, and the only thing I can see in its logging is "move: stat: cannot statx xxxxxxxxxx cannot be found". It would appear that after a while the "find" process then disappears from the Open Files plugin, but the Move button is still greyed out.

    Mover seems to be pretty buggy. I have been struggling to move everything off my cache drives (to change them out) with the mover, and have had to resort to doing it manually, as the mover keeps stalling or getting hung up on something.

    No disk or cache activity happens while it is stuck.

    This does not seem to be directly related to any particular type of file, or any file in general, as it has happened or stalled on quite a number of different appdata files/folders.

    I'm happy to help out trying to improve the mover; however, I am limited time-wise. :(
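In the meantime, for anyone hitting the same hang, a quick way to check whether mover's find process is actually still alive (the grep patterns are just examples; adjust them to what ps shows on your box):

```shell
# List the mover script and any find it spawned; the [m]/[f] bracket trick
# keeps the grep from matching its own command line.
ps -ef | grep -E '[m]over|[f]ind /mnt' || echo "no mover processes running"
# If a listed PID sits in D (uninterruptible) state, it is stuck in the
# kernel waiting on I/O, which would match the statx symptom above.
```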
     

  4. On 9/22/2021 at 7:05 AM, JonathanM said:

    Writes to the parity array will always be limited by the speed of the parity drive(s). Not much point in trying to write to multiple data drives simultaneously when the parity drive is the bottleneck.


    While this may be true for cache > array, what about array > cache? Would the same limitation apply if the share is spread over multiple drives?

    For those of you who have set up the script to go with the ClamAV container but have noticed little to no activity coming from it when running "docker stats", this may be the fix to your issue.

    I don't believe the container is set up to do a scan on startup, so you may have to trigger it by adding the line to the script as seen below in the screenshot.

    I have also figured out how to get multithreading working, although be warned: when using multithreading you may want to schedule it for when you're not using your server, as it can be quite CPU- and RAM-hungry.

     

    Some thoughts before you proceed with multithreaded scans: put a memory limit on your Docker container through its Extra Parameters.
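For example (the values are illustrative, tune them to your hardware), the container's Extra Parameters field can carry standard docker run limits:

```shell
# Example Extra Parameters for the ClamAV container: cap RAM and CPU so a
# multithreaded scan cannot starve the rest of the server.
--memory=4g --cpus=4
```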

    Multi Thread:
    exec('docker exec ClamAV sh -c "find /scan -type f -print0 | xargs -0 -P $(nproc) clamscan"');
    image.thumb.png.d6df0d743af1f56eac2265e402f73424.png
    image.thumb.png.10d5ae770781299c82d276c3fa2fb022.png


    Single Thread:
    exec('docker exec ClamAV sh -c "clamscan"');


    image.thumb.png.6ce3467c3705c992a5a725db023c4da6.png

  6. On 12/8/2022 at 12:07 AM, IronBeardKnight said:

    Hi guys,

     

    I can see others have posted about a similar issue but I cannot seem to find where the solution may have been posted.

     

    I cannot seem to get past this error.

    @Josh.5, any advice is greatly appreciated, mate, both for me and for others that may be getting this as well.

    image.png.b1b8acb5623ec653ccd69c86db42ef42.png

    Has anyone been able to work through this issue, or even get this Docker container working on Unraid?

    I'm trying to use my primary/only GPU, but none of the display options seem to work.

     

  7. On 9/25/2022 at 6:37 PM, alicex said:

    Hi

    May i have some configuration examples?

    I'd want to ignore moving some folders and some file types like *.!qb

    but my config seems not working:(

    Ignore files listed inside of a text file: yes
    File list path: /mnt/user/media/upload/

     

    Ignore file types: Yes
    Comma separated list of file types: .!qb,.part

    Or should I type "*.!qb,*.part" instead? Or ".!qb;.part"?

     

    Thanks in advance.

     

     

    Hello, perhaps I can provide an explanation of where you're going wrong.

    You have not actually provided a proper path to a text file listing the directories you want skipped.

    To be clear, "Ignore file types" and "Ignore files / folders listed inside of text file" are not related; they are individual settings and can be used independently.

     

    Please see below the examples: 

    image.thumb.png.eb6d02af2d972d1d98ccc7d0822c5200.png

    image.png.b83eae2fa22cb71efa8fa139f1dee7a3.png
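In case the screenshots don't load, a text sketch of the idea (the file name and entries are hypothetical): the "File list path" must point at a text file, not a folder, and that file holds one path per line:

```shell
# File list path (points at a file, not a directory), e.g.:
#   /mnt/user/media/upload/ignore.txt
# where ignore.txt contains one folder to skip per line:
/mnt/user/media/upload/incomplete/
/mnt/user/media/upload/temp/
```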

  8. 3 minutes ago, KluthR said:

    Exactly. It should be starting with 0 4, not * 4.

    Testing now! It will take a while to run through, but I'm sure this is the solution. Thank you so much for your help, I just overlooked it, lol. I vaguely remember having it set to 00 4 previously, but that was many Unraid revisions ago, probably before backup version 2 came out.

    I'll also try giving your plugin revision a bash over the next couple of weeks :)

     

  9. 9 minutes ago, KluthR said:

    That seems incorrect - could you post your schedule settings from inside the plugin settings page? The current setting says "At every minute past hour 4 on Monday, Wednesday, and Friday."

    OMG, you are right, I'm clearly having a major brain-cell malfunction lol. I should remove the * and replace it with 00,

    so it looks like this: "00 4 * * 1,3,5"

     

    Do you think that is correct?
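For reference, the difference between the two schedules in standard cron notation (fields are minute, hour, day-of-month, month, day-of-week):

```shell
# * 4 * * 1,3,5   -> every minute past hour 4 on Mon, Wed, Fri (the typo)
# 00 4 * * 1,3,5  -> once, at exactly 04:00 on Mon, Wed, Fri (intended)
```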

  10. On 11/13/2022 at 9:29 PM, IronBeardKnight said:

    I'll have to wait for a backup to run so I can get a screenshot for you of the duplicate-file issue.

    Yes, restoring of single archives. For example, I only want to restore Duplicati or Nextcloud etc. Currently, if you want to restore only a particular application/docker archive, you need to extract the tar, stop the container, and replace things manually.

    Looking forward to the merge and release. I would class this plugin as one of Unraid's core utilities, and its importance to the community is high. I have set up multiple Unraid servers for friends and family, and it's always first on my list of must-have things to install and configure.

    Currently I'm using it in conjunction with Duplicati for encrypted remote backups between a few different Unraid servers, a prerequisite of my time to help them set up Unraid etc. :) and way cheaper than a Google Drive etc.

    Being able to break backups apart into single archives, without having to rely on post-script extraction, saves on SSD and HDD I/O and life. Per docker container/folder would make uploading and restoring much easier and faster; for example, doing a restore from remote by pulling a full 80+ GB single backup file can take a very long time and is very susceptible to interruption over a 2 MB/s line.

     

    Please see below a couple of screenshots of the duplicate-file issue that has me a bit stumped.
    image.thumb.png.78239d39f3b4c950dce35a499b932a53.png
    Untitled.thumb.png.5d3e1d5245b4dc649afbed3160d0971f.png

    I have been through the code that lives within the plugin as a brief overview, but nothing stood out as incorrect in regards to this issue.

    "\flash\config\plugins\ca.backup2\ca.backup2-2022.07.23-x86_64-1.txz\ca.backup2-2022.07.23-x86_64-1.tar\usr\local\emhttp\plugins\ca.backup2\"

    I found an interesting file with paths in it that led me to a backup log file, "/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log", but as it gets re-written each backup, it only shows data for the 4:48 am backup and not the 4:00 am one, so that is not useful. I did not find any errors within this log either.

    The fact that it's getting re-written most likely means, in my opinion, that the plugin is having issues interpreting either the settings entered or the cron that I have set.

    image.thumb.png.668f0bcfa125e981ab42a26daf46cba0.png

     

  11. On 11/11/2022 at 3:38 PM, KluthR said:

    Please clarify - you want to select the single archives which to restore and which not?

     

    I want this merged, of course! As soon as Andrew has time to review everything (https://github.com/Squidly271/ca.backup2/pulls). I already made changes on top of my changes, which currently prevents me from creating more PRs, but I'll sort that out later.

     

    Could you show a screenshot?

    Any last log entries?

    I'll have to wait for a backup to run so I can get a screenshot for you of the duplicate-file issue.

    Yes, restoring of single archives. For example, I only want to restore Duplicati or Nextcloud etc. Currently, if you want to restore only a particular application/docker archive, you need to extract the tar, stop the container, and replace things manually.

    Looking forward to the merge and release. I would class this plugin as one of Unraid's core utilities, and its importance to the community is high. I have set up multiple Unraid servers for friends and family, and it's always first on my list of must-have things to install and configure.

    Currently I'm using it in conjunction with Duplicati for encrypted remote backups between a few different Unraid servers, a prerequisite of my time to help them set up Unraid etc. :) and way cheaper than a Google Drive etc.

    Being able to break backups apart into single archives, without having to rely on post-script extraction, saves on SSD and HDD I/O and life. Per docker container/folder would make uploading and restoring much easier and faster; for example, doing a restore from remote by pulling a full 80+ GB single backup file can take a very long time and is very susceptible to interruption over a 2 MB/s line.

  12. I might have run into another issue as well. It seems that when running on a schedule, it will sometimes split the backup into two or three files, duplicating the space taken.

    I have not been able to figure out what is causing this, but my hunch is that it happens when one disk fills up and it has to move to the next, or when a move runs on the cache.

    I have tried writing directly to the array and also to cache first, but I always end up with duplicate .tar files.

     

  13. 10 hours ago, KluthR said:

    I packed all things to an experimental/unofficial plg, which is simply an update. I dont know if its the best option to publish it here - so: If anyone wants to test ALL my changes, please PM me.

     

    So far:

     

    • Fixed tar error detection during backup
    • Improve backuplog format
    • Option to create separate backup archives
    • Option to hide Docker warning message box
    • Check docker start result and wait 2 seconds between each start/stop operation (because of the "failed to set IPv6 gateway" issue, which still needs some interpretation by a Docker guru)

     

    Amazing work, mate!

    Are you working on separate restore options also?

    Can we expect you to merge into the original master branch, or create your own fork to be added to Community Applications?

    I have been anticipating these changes and am happy to do some testing if need be.

  14. 6 minutes ago, IronBeardKnight said:

    I have discovered what I think may be a bug with this plugin.

     

    I'm sure this one could be a simple fix, but it's possible I might be missing something.

    Using 2 for this field allows the plugin to work as expected.

    For some reason I cannot select 1 for the number of backup copies; I only want one backup, and then I use other tools to ship an encrypted copy offsite.

    Please see below how the field turns red and will not allow you to apply 1, even though the VM only has 1 vdisk.

    You cannot proceed and press Apply while it is red.

     

    image.thumb.png.d1dbe4cdbfd3f0cd3ccfe965fbb2dcc7.png

    I believe the issue could be on line 299 of https://github.com/JTok/unraid.vmbackup/blob/v0.2.2/source/Vmbackup1Settings.page, although this pattern string does not make much sense to me anyway. Does ^(0|([1-9]))$ not work?
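As a quick sanity check of the suggested pattern (this is just grep against the regex above, not the plugin's own validation code):

```shell
# Test which values the suggested pattern accepts. Note it is anchored and
# single-digit, so it allows 0-9 but would reject 10 or more.
pattern='^(0|([1-9]))$'
for v in 0 1 2 9 10; do
  if printf '%s\n' "$v" | grep -Eq "$pattern"; then
    echo "$v accepted"
  else
    echo "$v rejected"
  fi
done
```

So with a pattern like this, a value of 1 validates fine; whatever is on line 299 today is evidently stricter.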

     

    image.thumb.png.3273c65b30856add4b31370d1e48e98f.png

  15. I have discovered what I think may be a bug with this plugin.

    I'm sure this one could be a simple fix, but it's possible I might be missing something.

    Using 2 for this field allows the plugin to work as expected.

    For some reason I cannot select 1 for the number of backup copies; I only want one backup, and then I use other tools to ship an encrypted copy offsite.

    Please see below how the field turns red and will not allow you to apply 1, even though the VM only has 1 vdisk.

    You cannot proceed and press Apply while it is red.

     

    image.thumb.png.d1dbe4cdbfd3f0cd3ccfe965fbb2dcc7.png