
IronBeardKnight
Members
Posts: 63
Joined
Last visited
Recent Profile Visitors: 1020 profile views
IronBeardKnight's Achievements: Rookie (2/14)
Reputation: 8
-
SMB & ZFS speeds I don't understand ...
IronBeardKnight replied to MatzeHali's topic in General Support
NVME CACHE
I transferred from btrfs to zfs and have had a very noticeable decrease in write speed, both before and after the memory cache has been filled. On top of that, the RAM being used as a cache makes me nervous even though I have a UPS and ECC memory. I have noticed that my dual-NVMe raid1 zfs cache pool gets the full 7000 MB/s read but only 1500 MB/s max write, which is a far cry from what it should be when using zfs. I will be switching my appdata pools back to btrfs, as it has nearly all the same features as zfs but is much faster in my tests. The only thing missing to take full advantage of btrfs is the nice GUI plugin and the scripts that have been written to handle snapshots, which I'm sure someone could put together pretty quickly using the existing zfs scripts and plugins as a base. It's important to note here that my main NVMe cache pool was raid1 in both btrfs and zfs, i.e. the raid1 native to each file system.
ARRAY
I also started converting some of my array drives to single-disk zfs as per SpaceInvader One's videos, with parity and expansion handled by Unraid. This is where I noticed the biggest downside for me personally: a single zfs disk, unlike a zpool, is obviously missing a lot of features, but more importantly write performance is very heavily impacted, and you still only get single-disk read speed once the RAM cache is exhausted. I measured roughly a 65% drop in write speed to the single zfs drive.
I did a lot of research into btrfs vs zfs and have decided to migrate all my drives and cache to btrfs and let Unraid handle parity, much the same way SpaceInvader One is doing it with zfs. This way I don't see the performance impact I was seeing with zfs, and I should still be able to do all the same snapshot shifting and replication that zfs does. Doing it this way I also avoid the dreaded, unstable btrfs raid 5/6, and I get nearly all the same features as zfs without the speed issues in Unraid.
DISCLAIMER
I'm sure zfs is very fast on an actual zpool rather than in a single-disk situation, but it very much feels like zfs is a deep-storage file system and not really moulded for an active, so-to-speak, array. Given my testing, all my cache pools and the drives within them will be btrfs raid 1 or 0 (raid 1 giving you active bitrot protection), and my array will be Unraid-handled parity with individual single-disk btrfs file systems.
Hope this helps others in some way and saves them days of data transfer only to realise the pitfalls.
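For anyone following the same route, a rough sketch of the kind of snapshot handling btrfs gives you (share, folder and snapshot names here are only examples, not a finished script, and they assume the share lives on a btrfs subvolume and the target folders already exist):
# read-only snapshot of the appdata subvolume
btrfs subvolume snapshot -r /mnt/cache/appdata /mnt/cache/snaps/appdata_2024-01-01
# replicate the read-only snapshot to another btrfs disk or pool
btrfs send /mnt/cache/snaps/appdata_2024-01-01 | btrfs receive /mnt/disk1/snaps/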
-
Having two issues. Mover hangs and the server won't reboot
IronBeardKnight replied to flyize's topic in General Support
Hey guys, I'm having major issues with the mover process. It seems to keep hanging, and the only thing I can see in its logging is "move: stat: cannot statx xxxxxxxxxx cannot be found". It appears that after a while the "find" process disappears from the Open Files plugin, but the move button is still greyed out. Mover seems to be pretty buggy. I have been struggling to move everything off my cache drives (so I can swap them out) using the mover, and have had to resort to doing it manually as the mover keeps stalling or getting hung up on something; no disk or cache activity happens when it does. This does not seem to be directly related to any particular file or file type, as it has stalled on quite a number of different appdata files/folders. I'm happy to help out with trying to improve the mover, however I am limited time-wise.
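In the meantime, a minimal sketch of how I've been moving a share off the cache by hand (paths are examples only; stop the Docker and VM services first so nothing holds the files open):
rsync -avh --progress /mnt/cache/appdata/ /mnt/disk1/appdata/
# once the copy is verified, remove the source: rm -r /mnt/cache/appdata
-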
While this may be true for cache > array, what about array > cache? Would it still be the same limitation if the share is spread over multiple drives?
-
For those of you who have set up the script to go with the ClamAV container but have noticed little to no activity coming from it when running "docker stats", this may be the fix to your issue. I don't believe the container is set up to do a scan on startup, so you may have to trigger it by adding the line below to the script, as seen in the screenshot. I have also figured out how to get multithreading working, although be warned: when using the multithreaded version you may want to schedule it for when you're not using your server, as it can be quite CPU and RAM hungry. One thought before you proceed with multithreaded scans is to put a memory limit on the Docker container through extra parameters.
Multi Thread: exec('docker exec ClamAV sh -c "find /scan -type f -print0 | xargs -0 -P $(nproc) clamscan"');
Single Thread: exec('docker exec ClamAV sh -c "clamscan"');
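To keep the multithreaded scan from starving the rest of the server, a couple of rough options (the numbers are just starting points, tune them for your hardware): add something like --memory=4g --cpus=4 to the container's Extra Parameters so docker caps its resources, or replace $(nproc) with a fixed count to limit the number of parallel clamscan processes, e.g.:
exec('docker exec ClamAV sh -c "find /scan -type f -print0 | xargs -0 -P 4 clamscan"');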
-
Has anyone been able to guide me through this issue, or even got this Docker container working on Unraid? I'm trying to use my primary/only GPU, but none of the display options seem to work.
-
Hi guys, I can see others have posted about a similar issue, but I cannot seem to find where the solution may have been posted. I cannot get past this error. @Josh.5 any advice is greatly appreciated mate, and I'm sure it would help others who may also be getting this as well.
-
Hello, perhaps I can explain where you are going wrong: you have not actually provided a proper path to a list of directories you want skipped. To be clear, "Ignore file types" and "Ignore files / folders listed inside of text file" are not related; they are individual settings and can be used independently. Please see the examples below:
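Roughly, the two settings look something like this (the paths and extensions here are made-up examples, substitute your own):
Ignore file types: log, tmp
Ignore files / folders listed inside of text file: /boot/config/plugins/ca.backup2/exclusions.txt
where that text file simply contains one folder to skip per line, e.g.:
/mnt/user/appdata/plex/Library/Application Support/Plex Media Server/Cache
/mnt/user/appdata/nextcloud/tmp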
-
IronBeardKnight started following KluthR
-
Testing now! It will take a while to run through, but I'm sure this is the solution. Thank you so much for your help, I just overlooked it lol. I vaguely remember having it set to 00 4 previously, but that was many Unraid revisions ago, probably before Backup version 2 came out. I'll also try giving your plugin revision a bash over the next couple of weeks.
-
OMG you are right, I'm clearly having a major brain cell malfunction lol. I should remove the * and replace it with 00 so it looks like this: "00 4 * * 1,3,5". Do you think that is correct?
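For reference, reading the five cron fields left to right as minute, hour, day of month, month, day of week:
* 4 * * 1,3,5    runs every minute from 04:00 to 04:59 on Mon, Wed and Fri (which would explain backups kicking off more than once during that hour)
00 4 * * 1,3,5   runs once, at 04:00, on Mon, Wed and Fri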
-
https://crontab.guru/ is what I used as a reference, and from my understanding it is supposed to run at 4am every Monday, Wednesday and Friday. From BackupOptions.json:
-
Please see below a couple of screenshots of the duplicate issue that has me a bit stumped. I have been through the code that lives within the plugin as a brief overview, but nothing stood out as incorrect in regards to this issue. "\flash\config\plugins\ca.backup2\ca.backup2-2022.07.23-x86_64-1.txz\ca.backup2-2022.07.23-x86_64-1.tar\usr\local\emhttp\plugins\ca.backup2\" I found an interesting file with paths in it that led me to a backup log file, "/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log", but as it gets re-written on each backup it only shows data for the 4:48am backup and not the 4:00am one, so that is not useful; I did not find any errors within that log either. The fact that it is getting re-written most likely means, in my opinion, that the plugin is having issues interpreting either the settings entered or the cron schedule I have set.
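One way to sanity-check what schedule actually got installed (a rough sketch; the exact file the plugin writes its schedule to may differ) is to search the live cron config and the flash plugin folders from a terminal:
grep -R "ca.backup2" /etc/cron.d /boot/config/plugins 2>/dev/null
The minute field of whatever entry turns up should read 00 rather than *.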