sven Posted October 9, 2021

I am a new user of the File Integrity plugin. I have used the BLAKE3 algorithm and so far the performance seems great. One (small?) problem: at the end of running the Export step for one of the disks, I get a PHP error that a memory limit has been reached:

Quote: "Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 168201344 bytes) in /usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php on line 40"

I have gathered that this has to do with the memory limit of PHP, which is set at 128 MB. When I go to the export directory, I see that a 168 MB file has been created called disk1.export.blake3.hash. When I open the file, it contains the hashes for all files from disk 1. Since the export file seems to be fine, is this just an error in the progress display? There were 862892 files in the export. 6 TB drive, many large (500 MB+) files, but also a lot of smaller files.
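[Editor's note] The size of the failed allocation (168 MB, the same size as the export file) suggests the progress code reads the whole export into memory at once, which blows PHP's 128 MB default limit. As an illustration only (in Python, not the plugin's actual PHP code, and with a made-up miniature export file), processing the export line by line keeps memory use flat no matter how large it grows:

```python
import os
import tempfile

# Build a small stand-in export file: "<hash>  <path>" per line,
# roughly the shape of disk1.export.blake3.hash.
export = os.path.join(tempfile.mkdtemp(), "disk1.export.blake3.hash")
with open(export, "w") as f:
    for i in range(1000):
        f.write(f"{'ab' * 32}  /mnt/disk1/file{i}.bin\n")

# Stream the file instead of slurping it: memory stays constant
# even for a multi-hundred-MB export.
count = 0
with open(export) as f:
    for line in f:
        count += 1

print(count)  # number of hashed files in the export
```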
Vr2Io Posted October 9, 2021

9 minutes ago, sven said: "There were 862892 files"

Good catch.
wildfire305 Posted October 15, 2021

I am unable to figure out how to search the 41 pages of this thread for the information I'm looking for. Can you explain how to properly use the tool to run a manual verify? I have this set to run on the 15th, and it alerted me this morning to 10 corrupted files on a share that I had excluded (I excluded it after the automatic hash creation had already run on that share). After reading the extremely helpful tooltips in the tool, I realize now that I needed to run the Clear command after excluding the share. I ran that and everything is green checkmarks. Now I want to run a verify again so I get a clean report. Is it the "Check export" button?

Bonus info: the share I excluded was rapidly changing data from system backups that I knew did not need to be verified and didn't want to waste resources on. Those directories change multiple times a day. That's the share that reported the 10 errors, and I'm confident those files simply changed after they were hashed.
Vr2Io Posted October 15, 2021 Share Posted October 15, 2021 (edited) I can't remember use clear or remove ( btw tooltips should have info.) Ple do one more step, click build again, if log of file need hash then this means something wrong, if all as expect then export. It slso suggest backup the export file(s). Edited October 15, 2021 by Vr2Io Quote Link to comment
RealActorRob Posted November 10, 2021

I'm running two Storj nodes and that's where all my 'errors' are reported. The exclusions in settings are for top-level folders, so now I have to figure out how to exclude a subfolder. Can I request that feature, or am I missing something?
Vr2Io Posted November 11, 2021

9 hours ago, RealActorRob said: "I'm running two Storj nodes and that's where all my 'errors' are reported. [...] Can I request that feature or am I missing something?"

You need to apply "Clear" so that it deletes the attributes.
RealActorRob Posted November 22, 2021

On 11/10/2021 at 11:12 PM, Vr2Io said: "You need apply 'Clear', so that it will delete the attributes."

Yup, did that. Now to see if my format is correct to exclude the other node. It'd be nice just to have checkboxes for drilling down to subdirectories.
Interstellar Posted November 25, 2021 Share Posted November 25, 2021 (edited) On 10/15/2021 at 4:00 PM, wildfire305 said: I am unable to figure out how to search the 41 pages of the post for the information I'm looking for. Can you explain how to properly use the tool to run a manual verify? I have this set to run on the 15th and it alerted me this morning to 10 corrupted files on a share that I had excluded (I excluded it after the automatic hash creation had run on the share). After reading the extremely helpful tooltips in the tool, I realize now that I needed to run the clear command after excluding the directory share. I ran that and everything is green checkmarks. Now I want to run a verify again so I get a clean report. Is it the "check export" button? Bonus info: The share I excluded was rapidly changing data from system backups that I knew did not need to be verified and didn't want to waste resources on. Those directories change multiple times a day. That's the share that reported the 10 errors which I'm confident the files changed since they were hashed. Same - it used to be obvious now to do a manual verify but it isn't now. The "Check Export" button does nothing. --> "Check Export Finished - checked 0 files, skipped 0 files. Found: 0 mismatches, 0 corruptions. Duration: 00:00:00" How do I command a manual verify after a disk rebuild and upgrade?! Edited November 25, 2021 by Interstellar Quote Link to comment
wildfire305 Posted November 25, 2021

I have file integrity set to generate automatically, and it seems to keep up on a daily basis. The hashes are stored as metadata in the filesystem (if I understand the process correctly). Check export, if done after Build and Export, should verify the hashes; mine performs thousands of checks when I do it. I also maintain separate hash catalogs and par2 files for the really important data. You could be safe with par2 alone, since it generates hashes too. I'm surprised more people aren't using par2 as an action plan for corruption when restoring from backup. Obviously this is only practical for archival data, not constantly modified data.
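[Editor's note] The separate hash catalog wildfire305 describes is a simple build-then-verify loop. A minimal sketch of that workflow, using `hashlib.blake2b` from Python's standard library as a stand-in (BLAKE3 is not in the stdlib) and a throwaway temp file rather than real archive data:

```python
import hashlib
import os
import tempfile

def file_hash(path):
    """Hash a file in chunks so large files never need to fit in RAM."""
    h = hashlib.blake2b(digest_size=32)
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

root = tempfile.mkdtemp()
path = os.path.join(root, "archive.bin")
with open(path, "wb") as f:
    f.write(b"important archival data")

# "Build" step: record a hash per file.
catalog = {path: file_hash(path)}

# "Verify" step: recompute and compare against the catalog.
ok = all(file_hash(p) == h for p, h in catalog.items())
print(ok)

# Simulate silent corruption, then verify again.
with open(path, "wb") as f:
    f.write(b"important archival dataX")
corrupted = [p for p, h in catalog.items() if file_hash(p) != h]
print(len(corrupted))
```

Tools like par2 go one step further: besides detecting the mismatch, the parity blocks can repair it.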
dotn Posted November 28, 2021 Share Posted November 28, 2021 (edited) There are several unanswered questions regarding moving the export files to cache drive or another encrypted location. Does anybody find a proper solution to do that? Having complete list of files stored as an plain text on the USB drive is very bad idea from the security perspective. Edited November 28, 2021 by dotn Quote Link to comment
DanTheMan827 Posted November 30, 2021 Share Posted November 30, 2021 (edited) So I was copying files to a new array and after some time I got a message saying there was no space left when trying to access the web ui… I SSH’d into the server and found that tons of files ending with the extension of .end had filled up my ramdisk in /var/tmp, and after looking at them appear to be related to some file integrity check I’m assuming they’re from this plug-in, but that seems like a bug for files to just fill up the RAM disk like that… is there something misconfigured on my end? Edited November 30, 2021 by DanTheMan827 Quote Link to comment
CajunCoding Posted December 16, 2021 Share Posted December 16, 2021 (edited) On 10/9/2021 at 4:36 PM, sven said: I am a new user of the File Integrity Plugin. I have used the BLAKE3 algorithm and so far the performance seems great. One (small?) problem: at the end of running the Export step for one of the disks, I get a PHP error that a memory limit has been reached: I have gathered that this has to do with the memory limit of php which is set at 128MB. When I go to the export directory, I see that a 168MB file has been created called disk1.export.blake3.hash. When I open the file it contains the hashes for all files from disk 1. Since the export file seems to be fine, is this just an error in the progress display? There were 862892 files in the export. 6TB drive, many large (500MB+) files, but also a lot of smaller files. I had this very same issue this evening trying to use File Integrity plugin.... can anyone provide additional details/resolution on this error? Allowed memory size of 134217728 bytes exhausted (tried to allocate 505585896 bytes) in /usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php Edited December 16, 2021 by CajunCoding 3 1 Quote Link to comment
DataCollector Posted January 28, 2022 Share Posted January 28, 2022 (edited) Hello @bonienl I am a newbie/beginner (and not native english speaking). I installed this Plugin. I build md5 on several Disks (1 to 12). Then I read that Blake3 should be faster. I stopped creating md5, switched to Blake3 and Build new on several Disks. Since I did not experience faster building or less CPU usage, I stopped it, removed the Hashes, switched back to md5 and started to build anew on those disks starting with Disks 1,2 and 3. But surprise! On two disks (9 and 10) the green sign for (up to date) appeared. I removed the hashes on 9 and 10 again, the green sign switched to the orange circle...only to appear again after several minutes. I removed the hashes again, the green signs became orange circkes again. And serveral minutes later the green sign appears again (this time only on Disk 9). I do not understand what I am making wrong. (Adding another Screenshot: several minutaes later Disk 10 also did get the green sign again. I am puzzled. Addin third Screenshot from a newer bulding. Still 9 and 10 have the green sign. Even when disk9 is just buiilding.). Edited February 2, 2022 by DataCollector adding second screenshot and smal text Quote Link to comment
jbartlett Posted February 3, 2022

The only way I can see to invoke a scan of a drive is to go to the plugin's settings and change the cron job and drive settings to kick off the validate process in the next available time slot.
kevkru Posted February 5, 2022

How does the plugin differentiate between intentional file changes and changes due to corruption? I can see files I've changed willingly in my log file. Is there any further documentation on how the plugin works?
Solverz Posted February 10, 2022

Is there a way to exclude a file by its complete path instead of just the file name? As it's set up now, if multiple files share the same name, using the file name skips them all, when I only want to skip the file at one specific path. For example, I do not want to skip test.txt in every single directory; I only want to skip it in a specific directory, like /path/to/file/test.txt.
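[Editor's note] The request boils down to matching exclusions against the full path rather than the basename. A small illustration of the difference, with hypothetical paths (how the plugin's matcher is actually implemented is not shown in this thread):

```python
import fnmatch
import os

files = [
    "/mnt/user/docs/test.txt",
    "/mnt/user/media/test.txt",
    "/path/to/file/test.txt",
]

# Basename matching (the behaviour described above): every test.txt
# is excluded, regardless of which directory it lives in.
by_name = [f for f in files if fnmatch.fnmatch(os.path.basename(f), "test.txt")]
print(len(by_name))  # all three match

# Full-path matching (the requested behaviour): only the one
# specific file is excluded.
by_path = [f for f in files if fnmatch.fnmatch(f, "/path/to/file/test.txt")]
print(len(by_path))  # just the one
```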
bladerunner1982 Posted February 13, 2022

Please add a Verify button to invoke a scan manually for a specific drive. Thank you.
jbartlett Posted February 21, 2022 Share Posted February 21, 2022 (edited) Feature request: Export by share so hashes from different machines can be compared against each other without manipulating the export files. For example, comparing files between a primary server and a backup server with different drive layouts. Edited February 21, 2022 by jbartlett 1 Quote Link to comment
gamerkonks Posted February 24, 2022

Hi there. I have 2 disks: disk 1 has my Media share, disk 2 has everything else. I have had FIP running for a while, but recently selected my Media share to be excluded via Settings -> FIP -> "Excluded folders and files". My disk verification schedule runs monthly. When the monthly disk verification started today, I noticed it was reading from disk 1, and in Tools -> FIP I see that disk 1 is currently processing file xxx of 88805, when I'm not expecting it to verify anything on disk 1. Is this because there are existing hashes from this share, since it wasn't excluded previously? Thanks.
Vr2Io Posted February 24, 2022

7 hours ago, gamerkonks said: "Is this because there are existing hashes from this share, since it wasn't excluded previously?"

Yes, it's because the file hash entries are still in the disk 1 export file. You can:
- delete the disk 1 export file
- clear the disk 1 file hash attributes
- then export again to update
jbartlett Posted February 24, 2022

On 2/5/2022 at 8:48 AM, kevkru said: "How does the plugin differentiate between user wanted filechanges and changes due to corruption?"

It'll report something like "1 mismatch (updated)" or "1 corruptions".
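[Editor's note] One common heuristic for telling "updated" apart from "corrupted" (this illustrates the general idea, not the plugin's actual code) is to store the file's mtime alongside its hash: if the hash changed and the mtime moved too, the file was probably edited; if the hash changed but the mtime did not, that points to silent corruption.

```python
import hashlib
import os
import tempfile

def snapshot(path):
    """Return (content hash, mtime in ns) for a file."""
    with open(path, "rb") as f:
        digest = hashlib.sha256(f.read()).hexdigest()
    return digest, os.stat(path).st_mtime_ns

def classify(path, stored_digest, stored_mtime):
    digest, mtime = snapshot(path)
    if digest == stored_digest:
        return "ok"
    # Hash differs: decide between a legitimate edit and corruption.
    return "mismatch (updated)" if mtime != stored_mtime else "corruption"

path = os.path.join(tempfile.mkdtemp(), "file.bin")
with open(path, "wb") as f:
    f.write(b"original")
digest, mtime = snapshot(path)  # the "build" step records both values

# Legitimate edit: content changes and the mtime moves forward.
with open(path, "wb") as f:
    f.write(b"edited")
os.utime(path, ns=(mtime + 1, mtime + 1))  # force a visible mtime change for the demo
updated = classify(path, digest, mtime)
print(updated)

# Silent corruption: content changes but the mtime is put back.
with open(path, "wb") as f:
    f.write(b"flipped bits")
os.utime(path, ns=(mtime, mtime))
corrupted = classify(path, digest, mtime)
print(corrupted)
```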
jbartlett Posted February 24, 2022 Share Posted February 24, 2022 The "Show Problems" link doesn't show the problems. Clicking the link opens a popup dialog with no contents. The file /var/tmp/disk1.tmp.end does exist. Quote Link to comment
jbartlett Posted February 24, 2022 Share Posted February 24, 2022 On 2/13/2022 at 8:19 AM, bladerunner1982 said: Please add a Verify button to invoke a scan manualy for a specific drive. Thank you. A workaround is to export the drive and then check the export on that drive. Quote Link to comment
paululibro Posted February 26, 2022

Hi. I just installed this plugin, configured it, and then clicked "Build" with all disks selected. Do I need to do anything else, or will it automatically calculate hashes for new files (as they are created) and verify the existing ones (on a monthly basis) on its own?
jbartlett Posted February 28, 2022

Whenever I tried to schedule one to kick off daily in 15 minutes, it only ever kicked off the check for drive 3. It finished and nothing else started. *shrug*