trurl Posted January 16, 2017

> So, you must set minimum free to be larger than the largest file you ever expect to write.

> Thanks, trurl. Would freeing up space on the drives help with this DFI issue? Since, as I read it in the OP, the hashes are being added to the files, I can see where the additional disk space requirement might exceed what's physically available and thus prevent the hashes from being added. I can also see how DFI would then think there were significantly fewer files on the disk, since only a few have had hashes added to them.

The hashes, even when exported to a separate file, don't take much space. On the other hand, it is probably a good idea not to let your drives get too full, especially if they are ReiserFS. It doesn't perform well when a disk begins to run out of space, and it's possible that filesystem performance is a factor here.
FreeMan Posted January 16, 2017

> The hashes, even exported to a separate file, don't take a lot of space. On the other hand, it is probably a good idea to not let your drives get too full, especially if they are ReiserFS.

I've just finished migrating all drives from ReiserFS to XFS, so it's not a ReiserFS issue. I didn't enable DFI on the drives until they'd been migrated to XFS either, since that's against the big red recommendation in the first post. I'll do some manual adjusting to free up some space on each drive and see if things settle down.
coolspot Posted January 17, 2017

Hi all, I removed all the checksums from my disks, but I still have several "checkboxes" indicating the build is up to date. Shouldn't removing the checksums reset the state of the plugin?
EdgarWallace Posted January 17, 2017

I have an issue with excluding files. Here is the content of dynamix.file.integrity.cfg:

disks=""
service="1"
method="-b2"
cmd="a"
exclude=".AppleDB,.Recycle.Bin,AppdataBackup,Cloud_Data,Datenaustausch,Squidbait-animals,Squidbait-back,Squidbait-began,Squidbait-crossed,Squidbait-foliage,Squidbait-howl,Squidbait-more,Squidbait-over,Squidbait-rose,Squidbait-seated,Squidbait-seemed,Squidbait-self,Squidbait-spoke,Squidbait-their,Squidbait-these,Squidbait-things,Squidbait-what,Squidbait-which,Squidbait-wolves,Squidbait-world,TMA,TMO"
schedule="0"
parity="1"
notify="-n"
log="-L -f"
priority=""
folders=""
files="*.nfo,*.dump,*.xml,*.tmp"
apple="on"
place="SystemInformation"

The main issue is that nfo, xml and tmp files are still being checked, so two disks show missing "Build up-to-date" and "Export up-to-date" every day. Is there anything I need to correct? Thanks a lot.
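As a side note for anyone sanity-checking an exclusion list like the one above: the patterns in the files= line look like ordinary shell globs. The helper below is purely illustrative (it is not the plugin's code, and the plugin's real matching logic may differ); it just tests a name against a comma-separated list of globs.

```shell
# Hypothetical helper (not part of the plugin): check whether a filename
# matches one of the comma-separated glob patterns from the files= line.
matches_exclude() {
  name=$1 patterns=$2 rc=1
  set -f                          # keep the globs from expanding to filenames
  old_ifs=$IFS; IFS=','
  for pat in $patterns; do
    case $name in
      $pat) rc=0 ;;               # unquoted $pat is used as a glob pattern
    esac
  done
  IFS=$old_ifs
  set +f
  return $rc
}

matches_exclude "movie.nfo" "*.nfo,*.dump,*.xml,*.tmp" && echo "would be excluded"
matches_exclude "movie.mkv" "*.nfo,*.dump,*.xml,*.tmp" || echo "would be checked"
```

If a name that should be excluded comes back as "would be checked" here, the pattern itself is wrong; if it matches here but the plugin still hashes it, the problem is on the plugin side.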
superderpbro Posted January 17, 2017

Just wondering what might cause false corruption reports? All of my disks are scanned on the 15th of every month, and I always get one or two small files reported as corrupted. The thing is, I usually have a sha1 or md5 checksum for my files (or groups of files), and they always check out fine. This last time it was an NES ROM set that File Integrity said had one corrupt file. I ran the sha1 file and it's fine. I found the torrent and forced a recheck with uTorrent and it's 100%. Any ideas?
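One way to double-check a report like this is to re-hash the flagged file yourself and verify it against the checksum list you already trust. A minimal sketch using standard coreutils, with a scratch file standing in for the real ROM and checksum list (the same idea works with b2sum, which the plugin uses):

```shell
# Rebuild the scenario with a scratch file: hash it, then verify with -c,
# exactly as you would against a .sha1 that shipped with a ROM set.
tmp=$(mktemp -d)
printf 'rom data' > "$tmp/game.zip"          # stand-in for the flagged file
( cd "$tmp" && sha1sum game.zip > set.sha1 ) # stand-in for the trusted list

# Verification: prints "game.zip: OK" while the content is unchanged.
( cd "$tmp" && sha1sum -c set.sha1 )
```

If the external checksum verifies but the plugin still reports corruption, the stored plugin hash is the suspect, not the file.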
FreeMan Posted January 28, 2017

My syslog is filling up with hundreds to thousands of lines like this:

Jan 28 08:06:39 NAS bunker: error: no export of file: /mnt/disk6/TV/Star Trek The Next Generation/Season 01/Star Trek The Next Generation S01E23 Skin of Evil.mkv

When I do an export of a disk, it reports that "x thousand files were skipped". Why is this? I'm not running out of space at the moment; the two most recent drives I attempted to export both have over 250MB of free space. Suggestions?
Interstellar Posted February 7, 2017

Has the way this works changed recently? CCC is now backing up files it thinks have changed on the server but in reality haven't (the archived file has today's time and date), so every time I back up it backs up a new set of files! Something has completely buggered up my backups to unRAID.
John_M Posted February 8, 2017

> Has the way this works changed recently?

The most recent update for this plugin is dated 2016.09.20, so no, it hasn't.
superderpbro Posted February 16, 2017

> Just wondering what might cause false corruption reports? [...]

Another month and another few small files that are "corrupt". It also doesn't seem to be scanning the entirety of my data: the total file size is larger than what it reports, on all disks.

Last month:

BLAKE2 hash key mismatch, 12Unl).zip */mnt/disk3/Games/ROMs/Console/No-Intro (2016-01-03)/Nintendo - Nintendo Entertainment System/Poke Block (Asia) (Unl).zip is corrupted

This month:

BLAKE2 hash key mismatch, 12).zip */mnt/disk3/Games/ROMs/Console/No-Intro (2016-01-03)/GamePark - GP32/[bIOS] GamePark GP32 (Europe) (v1.6.6) (2004-10-10).zip is corrupted
BLAKE2 hash key mismatch, 12).zip */mnt/disk3/Games/ROMs/Console/No-Intro (2016-01-03)/GamePark - GP32/[bIOS] GamePark GP32 (Europe) (v1.6.6) (Beta).zip is corrupted

All the files check out fine when I rehash them against the old torrent file. I've uninstalled the plugin for now. Can I delete \flash\config\plugins\dynamix.file.integrity? Anything else I need to remove to start completely fresh?
RobJ Posted February 17, 2017

> Total file size is larger than that on all discs.

Different tools show file sizes differently. It would be helpful to show what you are comparing with, as we can't tell what is wrong that you want us to see.

> BLAKE2 hash key mismatch, 12Unl).zip */mnt/disk3/Games/ROMs/Console/No-Intro (2016-01-03)/Nintendo - Nintendo Entertainment System/Poke Block (Asia) (Unl).zip is corrupted
> BLAKE2 hash key mismatch, 12).zip */mnt/disk3/Games/ROMs/Console/No-Intro (2016-01-03)/GamePark - GP32/[bIOS] GamePark GP32 (Europe) (v1.6.6) (2004-10-10).zip is corrupted
> BLAKE2 hash key mismatch, 12).zip */mnt/disk3/Games/ROMs/Console/No-Intro (2016-01-03)/GamePark - GP32/[bIOS] GamePark GP32 (Europe) (v1.6.6) (Beta).zip is corrupted

I have NEVER seen so many parentheses in a file name! And more in the path. I can't help wondering if that is tripping something up, particularly if it goes through some regex or shell processing. The file name is clearly not displaying correctly in the message for each file (evidence of a buffer overrun), so it may not be loading the right file name into the hasher.
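On the quoting concern: parentheses and spaces in file names are legal on the filesystem, but any script that expands the path unquoted will break on them. A small demonstration (the file name here is just a stand-in mimicking the flagged one):

```shell
# Parentheses and spaces survive fine when every expansion is quoted.
d=$(mktemp -d)
f="$d/Poke Block (Asia) (Unl).zip"    # stand-in for the flagged file
printf 'x' > "$f"

sha1sum "$f" > /dev/null && echo "quoted path: ok"
# sha1sum $f      # unquoted: word-splits into several bogus arguments
```

If the plugin (or something it calls) ever interpolates such a path unquoted, names like these would be exactly the ones to fail.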
superderpbro Posted February 17, 2017

Good point, I didn't notice the oddly cut-off file names. As for the file sizes, I was just comparing to the Main page, where it says how much is used. For example, the disk that shows 1.77TB in my screenshot says 2.26TB used in Main.

I've uninstalled the plugin for now. Can I delete \flash\config\plugins\dynamix.file.integrity? Anything else I need to do/remove to start completely fresh?

EDIT: Well, I deleted that folder and reinstalled. Is there anything else I should do before I enable it again?
superderpbro Posted February 17, 2017

When you start the plugin, does it not protect old files already on the array? I just noticed in the settings where you enable it that it says "Automatically protect new and modified files". If so, that is probably why there was a difference between what it was scanning and what was actually on the disk. Is there a way to hash old files?

I uninstalled the plugin, deleted its folder on the flash drive, and reinstalled. Is there anything else I should do before I enable it again? Should I even bother now? heh
trurl Posted February 17, 2017

> When you start the plugin, does it not protect old files already on the array? [...] Is there a way to hash old files?

It uses inotify to know when a file is new or modified so it can hash it. Turn on Help for instructions on how to get the old files hashed.
superderpbro Posted February 17, 2017

Is that the Build command? So, should I select all my disks and run Build, then enable "Automatically protect new and modified files" after it is done? Do you recommend I do anything else? Sorry, this is all a bit confusing to me and I'd like to do it right this time.

EDIT: No more time to fiddle today. Hope I did it right, hehe.
Squid Posted February 19, 2017

There's a bug in the install/removal routine.

During installation, the plugin tries to install inotify-tools (which comes with stock 6.3+). It skips the package because it's already installed on 6.3. Wouldn't it be better to do a conditional version check on installation rather than just skipping the install? (i.e., if unRAID bumps the bundled version later on, FIP will wind up installing an old version unless you also update the plugin.)

This leads to the second, more serious issue. During plugin removal, the inotify-tools package is removed, with potentially disastrous effects on anything else that might be using it, since the plugin removal is effectively removing a stock OS feature.
superderpbro Posted February 19, 2017

How can we tell if inotify-tools is installed?

Also, with my settings two posts above, it no longer hashes anything automatically. I get notifications to build and export manually. Can anyone help me get this working for good?
bonienl (author) Posted February 19, 2017

> There's a bug in the install / removal routine [...]

Updated Dynamix FIP and made it compatible with unRAID 6.3.
bonienl (author) Posted February 19, 2017

> How can we tell if inotifytools is installed?

inotify-tools is installed automatically. To verify, run:

# ls -l /var/log/packages/inotify-tools-3.14-x86_64-1
-rw-r--r-- 1 root root 1418 Feb 16 23:00 /var/log/packages/inotify-tools-3.14-x86_64-1

> Also, with my settings two posts above, it no longer hashes anything automatically. I get notifications to build and export manually.

Upon first installation you need to run Build to get all hash values calculated. It is recommended to exclude any folders and files which change regularly (normal disk protection covers those).
superderpbro Posted February 19, 2017

I did a build after install, as shown in the pics above. It does not hash anything automatically; I get notifications to build/export manually. I also have no idea how one would run

# ls -l /var/log/packages/inotify-tools-3.14-x86_64-1
-rw-r--r-- 1 root root 1418 Feb 16 23:00 /var/log/packages/inotify-tools-3.14-x86_64-1
trurl Posted February 19, 2017

> I also have no idea how one would run [...]

This is the command:

ls -l /var/log/packages/inotify-tools-3.14-x86_64-1

and this is the expected result:

-rw-r--r-- 1 root root 1418 Feb 16 23:00 /var/log/packages/inotify-tools-3.14-x86_64-1
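If you would rather not hard-code the exact version string, here is a sketch of a version-agnostic check. It assumes (as on unRAID, which is Slackware-based) that installed packages are recorded as files under /var/log/packages; the helper function itself is just illustrative:

```shell
# Version-agnostic variant of the check above: look for any recorded
# inotify-tools package instead of hard-coding "3.14-x86_64-1".
pkg_installed() {
  # $1 = package name prefix, $2 = package database directory
  # (on unRAID/Slackware the directory is /var/log/packages)
  ls "$2" 2>/dev/null | grep -qi "^$1-"
}

pkg_installed inotify-tools /var/log/packages \
  && echo "inotify-tools is installed" \
  || echo "inotify-tools is NOT installed"
```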
superderpbro Posted February 19, 2017

Figured it out, I think. I used PuTTY. The result is the same other than the time/date, hehe. No idea why it's not hashing new stuff then. All I did was reinstall, set those settings above, then build and export. What did I do wrong?
John_M Posted February 19, 2017

I find that it misses the occasional new file too (and I've mentioned it here before). I think files get missed if the system is especially busy; inotify can only track so many items. I did suggest that when a file is detected as missing a checksum, it would be nice if the calculation could be triggered automatically. I don't personally use the export feature, just the self-contained checksums, so on a good day I see a row of green ticks and a row of blue crosses.
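If the busy-system theory is right, the kernel's inotify limits are worth inspecting: when the per-user watch count or the event queue is exhausted, further events are silently dropped, which would look exactly like files being "missed". Whether that is the cause here is only a guess, but the limits are easy to read:

```shell
# Inspect the kernel's inotify limits; exhausting either one makes the
# kernel drop events without any error visible to the watching process.
cat /proc/sys/fs/inotify/max_user_watches
cat /proc/sys/fs/inotify/max_queued_events

# To raise the watch limit until the next reboot (run as root):
#   echo 524288 > /proc/sys/fs/inotify/max_user_watches
```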
karateo Posted March 6, 2017

For the last two months, after the check I get a few modified files.

Last month it was:

BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2015 xxxxxxx/DSC_5258.JPG was modified
BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2015 xxxxxxx/xxxxxxx/DSC_5422.JPG was modified
BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2015 xxxxxxx/xxxxxxx/DSC_5423.JPG was modified
BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2016 - 2017 xxxxxxx/-=Videos=-/MOV00005.MPG was modified

and yesterday:

BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2015 xxxxxxxx/DSC_5258.JPG was modified
BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2015 xxxxxxx/xxxxxxx/DSC_5422.JPG was modified
BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2015 xxxxxxx/xxxxxxx/DSC_5423.JPG was modified
BLAKE2 hash key mismatch (updated), /mnt/disk1/Photos/2016 - 2017 xxxxxxx/-=Videos=-/MVI_0048.m4v was modified

If I remember correctly, I checked those files with Beyond Compare (binary comparison), didn't see any significant difference, and copied the files back to the server from one of my backups. But I got the errors again yesterday, so I need to find what is causing this. Any ideas?
ksarnelli Posted March 7, 2017

I'm not sure if I should post here or create a new topic, but here goes. I had a pretty major issue the other night, and the root cause was the File Integrity plugin. Here is what happened:

- I had "Automatically protect new and modified files" enabled in the File Integrity settings.
- I had around 250 GB of large files on my cache drive; each file was approximately 2-3 GB.
- The mover script started, and all of the files on the cache drive were moved to a share set to use a single HDD.
- The File Integrity plugin launched a b2sum process for every file as it finished copying, but the copies were finishing far faster than the b2sum processes could calculate checksums.

Eventually the server ground to a halt: file shares were unresponsive and the UI was extremely sluggish (taking several minutes, or failing entirely, to load a page). I was eventually able to kill the b2sum processes and turn off the "Automatically protect new and modified files" option, and the server began functioning normally again. Here is a snippet of the syslog during the issue:

Mar 5 03:01:19 unRAID kernel: java: page allocation stalls for 18911ms, order:0, mode:0x24201ca(GFP_HIGHUSER_MOVABLE|__GFP_COLD)
Mar 5 03:01:19 unRAID kernel: CPU: 3 PID: 17921 Comm: java Not tainted 4.9.10-unRAID #1
Mar 5 03:01:19 unRAID kernel: Hardware name: VMware, Inc.
VMware Virtual Platform/440BX Desktop Reference Platform, BIOS 6.00 04/05/2016
Mar 5 03:01:19 unRAID kernel: ffffc90010c8fb28 ffffffff813a353e 0000000000000001 0000000000000000
Mar 5 03:01:19 unRAID kernel: ffffc90010c8fbb8 ffffffff810cb40d 024201ca810c9c00 ffffffff8193d200
Mar 5 03:01:19 unRAID kernel: ffffc90010c8fb50 0000000000000010 ffffc90010c8fbc8 ffffc90010c8fb68
Mar 5 03:01:19 unRAID kernel: Call Trace:
Mar 5 03:01:19 unRAID kernel: [<ffffffff813a353e>] dump_stack+0x61/0x7e
Mar 5 03:01:19 unRAID kernel: [<ffffffff810cb40d>] warn_alloc+0x102/0x116
Mar 5 03:01:19 unRAID kernel: [<ffffffff810cb9c3>] __alloc_pages_nodemask+0x541/0xc71
Mar 5 03:01:19 unRAID kernel: [<ffffffff810d0bae>] ? __do_page_cache_readahead+0x1ed/0x21f
Mar 5 03:01:19 unRAID kernel: [<ffffffff81102ad2>] alloc_pages_current+0xbe/0xe8
Mar 5 03:01:19 unRAID kernel: [<ffffffff810c4bff>] __page_cache_alloc+0x89/0x9f
Mar 5 03:01:19 unRAID kernel: [<ffffffff810c67e4>] filemap_fault+0x23d/0x458
Mar 5 03:01:19 unRAID kernel: [<ffffffff810e8d03>] __do_fault+0x68/0xbb
Mar 5 03:01:19 unRAID kernel: [<ffffffff810edd20>] handle_mm_fault+0x6b1/0xf96
Mar 5 03:01:19 unRAID kernel: [<ffffffff810421cc>] __do_page_fault+0x24a/0x3ed
Mar 5 03:01:19 unRAID kernel: [<ffffffff810423b2>] do_page_fault+0x22/0x27
Mar 5 03:01:19 unRAID kernel: [<ffffffff8167ec98>] page_fault+0x28/0x30
Mar 5 03:01:19 unRAID kernel: Mem-Info:
Mar 5 03:01:19 unRAID kernel: active_anon:353932 inactive_anon:11709 isolated_anon:0
Mar 5 03:01:19 unRAID kernel: active_file:3062030 inactive_file:375178 isolated_file:324
Mar 5 03:01:19 unRAID kernel: unevictable:0 dirty:368392 writeback:1390 unstable:0
Mar 5 03:01:19 unRAID kernel: slab_reclaimable:152463 slab_unreclaimable:59121
Mar 5 03:01:19 unRAID kernel: mapped:25063 shmem:113332 pagetables:5412 bounce:0
Mar 5 03:01:19 unRAID kernel: free:50749 free_pcp:0 free_cma:0
Mar 5 03:01:19 unRAID kernel: Node 0 active_anon:1415728kB inactive_anon:46836kB active_file:12248120kB
inactive_file:1500712kB unevictable:0kB i$
Mar 5 03:01:19 unRAID kernel: Node 0 DMA free:15876kB min:128kB low:160kB high:192kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_f$
Mar 5 03:01:19 unRAID kernel: lowmem_reserve[]: 0 2889 15953 15953
Mar 5 03:01:19 unRAID kernel: Node 0 DMA32 free:76592kB min:24452kB low:30564kB high:36676kB active_anon:326844kB inactive_anon:32kB active_file:$
Mar 5 03:01:19 unRAID kernel: lowmem_reserve[]: 0 0 13064 13064
Mar 5 03:01:19 unRAID kernel: Node 0 Normal free:110528kB min:110580kB low:138224kB high:165868kB active_anon:1088884kB inactive_anon:46804kB act$
Mar 5 03:01:19 unRAID kernel: lowmem_reserve[]: 0 0 0 0
Mar 5 03:01:19 unRAID kernel: Node 0 DMA: 1*4kB (U) 0*8kB 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*40$
Mar 5 03:01:19 unRAID kernel: Node 0 DMA32: 989*4kB (UME) 993*8kB (UME) 771*16kB (UME) 669*32kB (UME) 298*64kB (UME) 62*128kB (UME) 5*256kB (UM) $
Mar 5 03:01:19 unRAID kernel: Node 0 Normal: 14731*4kB (UMEH) 6268*8kB (UMH) 22*16kB (MH) 30*32kB (H) 16*64kB (H) 2*128kB (H) 0*256kB 0*512kB 0*1$
Mar 5 03:01:19 unRAID kernel: 3551534 total pagecache pages
Mar 5 03:01:19 unRAID kernel: 0 pages in swap cache
Mar 5 03:01:19 unRAID kernel: Swap cache stats: add 0, delete 0, find 0/0
Mar 5 03:01:19 unRAID kernel: Free swap = 0kB
Mar 5 03:01:19 unRAID kernel: Total swap = 0kB
Mar 5 03:01:19 unRAID kernel: 4194174 pages RAM
Mar 5 03:01:19 unRAID kernel: 0 pages HighMem/MovableOnly
Mar 5 03:01:19 unRAID kernel: 85359 pages reserved

The server has 8 cores and 16 GB of RAM. I think the root issue is a flaw in the File Integrity plugin's logic: it should not run simultaneous checksum calculations on a single disk; they should be queued. What are the chances of getting this supported? Thanks!
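The queueing behaviour requested above could be sketched like this. This is hypothetical, not the plugin's actual code: a lock file per disk serializes b2sum jobs, so even if a job is launched for every copied file, at most one hash runs against a given disk at a time.

```shell
# Hypothetical per-disk hash queue (not how the plugin works today).
disk_of() {
  # /mnt/disk6/TV/file.mkv -> disk6 (the third path component)
  echo "$1" | cut -d/ -f3
}

hash_queued() {
  file=$1
  disk=$(disk_of "$file")
  # flock blocks until the previous job holding this disk's lock finishes,
  # so concurrent callers line up instead of hammering the same spindle.
  flock "/tmp/fip-$disk.lock" b2sum "$file" >> "/tmp/fip-hashes-$disk.txt"
}

# Usage (path illustrative): hash_queued "/mnt/disk1/Movies/file.mkv" &
```

Launching many `hash_queued ... &` jobs then costs one sleeping process each rather than one competing b2sum each.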
bonienl (author) Posted March 8, 2017

FIP has some logic to start concurrent hashing sessions depending on the number of available cores in the system. Perhaps I should make that a variable which can be adjusted through the GUI; it would allow people to experiment and see what works best on their system.
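The core-bounded concurrency described here can be approximated with xargs -P. This is only a sketch of the idea, not the plugin's implementation; hash_tree is a made-up helper name:

```shell
# Sketch of core-bounded parallel hashing: xargs -P caps the number of
# simultaneous b2sum processes at the core count reported by nproc.
hash_tree() {
  find "$1" -type f -print0 | xargs -0 -n1 -P "$(nproc)" b2sum
}

# Usage (path illustrative): hash_tree /mnt/disk1 > /tmp/disk1.hash
```

Making the `-P` value a GUI-adjustable setting, as suggested, would just mean substituting the user's number for `$(nproc)`.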