Dynamix File Integrity plugin


bonienl


I am a new user of the File Integrity Plugin. I have used the BLAKE3 algorithm and so far the performance seems great.

 

One (small?) problem: at the end of running the Export step for one of the disks, I get a PHP error that a memory limit has been reached:

 

Quote

<br /> <b>Fatal error</b>: Allowed memory size of 134217728 bytes exhausted (tried to allocate 168201344 bytes) in <b>/usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php</b> on line <b>40</b><br />

 

I have gathered that this has to do with the memory limit of php which is set at 128MB. When I go to the export directory, I see that a 168MB file has been created called disk1.export.blake3.hash. When I open the file it contains the hashes for all files from disk 1.

 

Since the export file seems to be fine, is this just an error in the progress display?

 

There were 862892 files in the export. 6TB drive, many large (500MB+) files, but also a lot of smaller files.

 

UnraidFileIntegrityPHPerror.thumb.JPG.a5e2e69ab5ac05fc645191f872899114.JPG
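For anyone hitting the same wall: the error suggests ProgressInfo.php tries to read the whole export file into memory, which fails once the file outgrows PHP's 128 MB memory_limit. A quick sanity check along those lines (the export path below is a guess; adjust it to wherever your export files actually live):

```shell
# Hypothetical check: compare the export file size against the PHP memory limit.
# The path is an assumption; 134217728 bytes is the limit reported in the error.
EXPORT=/boot/config/plugins/dynamix.file.integrity/export/disk1.export.blake3.hash
LIMIT=$((128 * 1024 * 1024))

if [ -f "$EXPORT" ]; then
  SIZE=$(stat -c %s "$EXPORT")
  if [ "$SIZE" -gt "$LIMIT" ]; then
    echo "export file ($SIZE bytes) exceeds PHP memory_limit ($LIMIT bytes)"
  fi
fi
```

If the export file is larger than the limit, that would line up with the export itself being complete while only the progress display blows up.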

Link to comment

I am unable to figure out how to search the 41 pages of the post for the information I'm looking for. Can you explain how to properly use the tool to run a manual verify? I have this set to run on the 15th and it alerted me this morning to 10 corrupted files on a share that I had excluded (I excluded it after the automatic hash creation had run on the share). After reading the extremely helpful tooltips in the tool, I realize now that I needed to run the clear command after excluding the directory share. I ran that and everything is green checkmarks. Now I want to run a verify again so I get a clean report. Is it the "check export" button? 

 

Bonus info: The share I excluded was rapidly changing data from system backups that I knew did not need to be verified and didn't want to waste resources on. Those directories change multiple times a day. That's the share that reported the 10 errors which I'm confident the files changed since they were hashed.

Link to comment

I can't remember whether to use Clear or Remove (BTW, the tooltips should have that info).

 

Please do one more step: click Build again. If the log shows files that still need hashing, something is wrong; if everything is as expected, then Export.

 

It's also suggested to back up the export file(s).

Edited by Vr2Io
Link to comment
  • 4 weeks later...
9 hours ago, RealActorRob said:

I'm running two storj nodes and that's where all my 'errors' are reported.

 

The exclusions in settings are for top level folders, so now I have to figure out how to exclude a subfolder....

 

Can I request that feature or am I missing something?

 

 

You need to apply "Clear" so that it deletes the attributes.

 

image.png.d885ddffba16989d98ca77b9c075a49b.png

Link to comment
  • 2 weeks later...
On 10/15/2021 at 4:00 PM, wildfire305 said:

I am unable to figure out how to search the 41 pages of the post for the information I'm looking for. Can you explain how to properly use the tool to run a manual verify? I have this set to run on the 15th and it alerted me this morning to 10 corrupted files on a share that I had excluded (I excluded it after the automatic hash creation had run on the share). After reading the extremely helpful tooltips in the tool, I realize now that I needed to run the clear command after excluding the directory share. I ran that and everything is green checkmarks. Now I want to run a verify again so I get a clean report. Is it the "check export" button? 

 

Bonus info: The share I excluded was rapidly changing data from system backups that I knew did not need to be verified and didn't want to waste resources on. Those directories change multiple times a day. That's the share that reported the 10 errors which I'm confident the files changed since they were hashed.

 


Same here - it used to be obvious how to do a manual verify, but it isn't now.

 

The "Check Export" button does nothing. --> "Check Export  Finished - checked 0 files, skipped 0 files. Found: 0 mismatches, 0 corruptions. Duration: 00:00:00"

 

How do I command a manual verify after a disk rebuild and upgrade?!

Edited by Interstellar
Link to comment

I have file integrity set to generate automatically, and it seems to keep up on a daily basis. The hashes are stored in the filesystem metadata (if I understand the process correctly). Check Export, if done after Build and Export, should verify the hashes; mine runs thousands of checks when I do it.

I also maintain separate hash catalogs and par2 for the really, really important data. You could be safe with par2 alone, since it generates hashes too. I'm really surprised more people aren't using par2 as an action plan for corruption when restoring from backup. Obviously this is only practical for archival data and not constantly modified data.
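To illustrate the par2 approach mentioned above, here is a rough sketch using par2cmdline. The paths and the 10% redundancy level are illustrative choices on my part, not anything the plugin does:

```shell
# Sketch: protecting an archival folder with par2 (assumes par2cmdline is
# installed). Paths and redundancy level are made-up examples.
ARCHIVE=/mnt/user/archive/photos-2021

if command -v par2 >/dev/null 2>&1; then
  # Create recovery data; -r10 means ~10% redundancy, so par2 can repair
  # up to roughly that much corruption later.
  par2 create -r10 "$ARCHIVE/photos-2021.par2" "$ARCHIVE"/*

  # Later: verify, and attempt a repair if verification reports damage.
  par2 verify "$ARCHIVE/photos-2021.par2" || par2 repair "$ARCHIVE/photos-2021.par2"
fi
```

The nice part is that verification and repair come from the same recovery files, so no separate hash catalog is needed for that data.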

Link to comment

There are several unanswered questions about moving the export files to the cache drive or another encrypted location. Has anybody found a proper solution for that? Having a complete list of files stored as plain text on the USB drive is a very bad idea from a security perspective.

Edited by dotn
Link to comment

So I was copying files to a new array and after some time I got a message saying there was no space left when trying to access the web ui…

 

I SSH’d into the server and found that tons of files with the .end extension had filled up my ramdisk in /var/tmp; after looking at them, they appear to be related to some file integrity check.

 

I’m assuming they’re from this plug-in, but that seems like a bug for files to just fill up the RAM disk like that… is there something misconfigured on my end?
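In case it helps anyone diagnose the same thing, here is a quick way to see what is eating the ramdisk. The .end pattern is taken from my own case; whether this plugin is actually the source is unconfirmed:

```shell
# Inspect what is filling /var/tmp; the *.end pattern is from the report above.
TMPDIR=/var/tmp

# Overall space used by the directory
du -sh "$TMPDIR"

# Count and total size of the suspect .end files
find "$TMPDIR" -maxdepth 1 -name '*.end' | wc -l
find "$TMPDIR" -maxdepth 1 -name '*.end' -exec du -ch {} + 2>/dev/null | tail -1

# If they are confirmed leftovers, they could be cleaned up with:
# rm /var/tmp/*.end
```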

Edited by DanTheMan827
Link to comment
  • 3 weeks later...
On 10/9/2021 at 4:36 PM, sven said:

I am a new user of the File Integrity Plugin. I have used the BLAKE3 algorithm and so far the performance seems great.

 

One (small?) problem: at the end of running the Export step for one of the disks, I get a PHP error that a memory limit has been reached:

 

 

I have gathered that this has to do with the memory limit of php which is set at 128MB. When I go to the export directory, I see that a 168MB file has been created called disk1.export.blake3.hash. When I open the file it contains the hashes for all files from disk 1.

 

Since the export file seems to be fine, is this just an error in the progress display?

 

There were 862892 files in the export. 6TB drive, many large (500MB+) files, but also a lot of smaller files.

 

UnraidFileIntegrityPHPerror.thumb.JPG.a5e2e69ab5ac05fc645191f872899114.JPG

I had this very same issue this evening trying to use the File Integrity plugin... Can anyone provide additional details or a resolution for this error?
 

Allowed memory size of 134217728 bytes exhausted (tried to allocate 505585896 bytes) in 
/usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php


 

Edited by CajunCoding
  • Like 3
Link to comment
  • 1 month later...

Hello @bonienl


I am a newbie/beginner (and not native english speaking).
I installed this Plugin.
I built MD5 hashes on several disks (1 to 12).
Then I read that BLAKE3 should be faster.
I stopped creating MD5 hashes, switched to BLAKE3, and started a new Build on several disks.
Since I did not experience faster building or less CPU usage, I stopped it, removed the hashes, switched back to MD5, and started to build anew on those disks, starting with disks 1, 2 and 3.

 

But surprise!
On two disks (9 and 10) the green "up to date" sign appeared.
I removed the hashes on 9 and 10 again, and the green sign switched to the orange circle... only to reappear after several minutes.
I removed the hashes again, and the green signs became orange circles again.
And several minutes later the green sign appeared again (this time only on disk 9).
I do not understand what I am doing wrong.

 

(Adding another screenshot: several minutes later, disk 10 also got the green sign again. I am puzzled. Adding a third screenshot from a newer build: disks 9 and 10 still have the green sign, even while disk 9 is still building.)

 

 

FILE-Green.png

FILE-Green2.png

Unraid-FIC-2022-02-02#.png

Edited by DataCollector
adding second screenshot and small text
Link to comment

Is there a way to exclude a custom file using the complete path instead of just the file name? As it is set up now, if multiple files have the same name, it would skip them all, when I only want it to skip the file at a specific path.

 

For example, I do not want to skip test.txt in every single directory; I only want to skip it in a specific directory, like /path/to/file/test.txt.
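To illustrate why name-only matching is too broad, compare find's -name and -path tests (the demo tree below is made up):

```shell
# Made-up demo tree to show name-only vs full-path matching
mkdir -p /tmp/demo/keep /tmp/demo/skip
touch /tmp/demo/keep/test.txt /tmp/demo/skip/test.txt

# Name-only matching (how the exclusion behaves now): hits both files
find /tmp/demo -name 'test.txt'

# Full-path matching (the requested behavior): hits only one
find /tmp/demo -path '/tmp/demo/skip/test.txt'
```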

Link to comment

Feature request: Export by share so hashes from different machines can be compared against each other without manipulating the export files. For example, comparing files between a primary server and a backup server with different drive layouts.
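Until something like that exists, a possible workaround sketch: if the export lines look like "&lt;hash&gt; &lt;full path&gt;" (an assumption about the format; check your own files first), the /mnt/diskN prefix can be stripped before comparing, so differing drive layouts stop mattering:

```shell
# Hedged sketch: compare exports from two servers whose only path difference
# is the /mnt/diskN prefix. The file names and the "<hash> <path>" line
# format are assumptions about the export layout.
normalize() {
  sed 's|/mnt/disk[0-9]*/||' "$1" | sort
}

normalize primary.export.blake3.hash > /tmp/primary.norm
normalize backup.export.blake3.hash  > /tmp/backup.norm

# Identical output means every share-relative path has a matching hash
diff /tmp/primary.norm /tmp/backup.norm && echo "exports match"
```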

Edited by jbartlett
Link to comment

Hi there,

I have 2 disks.

Disk 1 has my Media share, Disk 2 has everything else.

I have had FIP running for a while now but recently selected my Media share to be excluded, via Settings -> FIP -> Excluded folders and files:

I have my Disk verification schedule to run monthly.

When my monthly Disk verification started running today, I noticed that it was reading from Disk 1, and in Tools -> FIP, I see that Disk 1 is currently processing file xxx of 88805, when I'm not expecting it to verify anything on Disk 1.

 

Is this because there are existing hashes from this share, since it wasn't excluded previously?

 

Thanks.

Link to comment
7 hours ago, gamerkonks said:

Is this because there are existing hashes from this share, since it wasn't excluded previously?

Yes, it's because the file hash entries are still in the Disk 1 export file, so you can:

 

- Delete the Disk 1 export file

- Clear the Disk 1 file hash attributes, then export to update
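For reference, the hashes live in extended attributes on each file, so the Clear step can also be inspected or done per file from the shell. The attribute name user.blake3 below is an assumption; run getfattr -d on one of your files first to see what your install actually uses:

```shell
# Hypothetical example file on Disk 1 (path is made up)
FILE=/mnt/disk1/some/share/example.mkv

# List all user.* extended attributes (shows the hash attribute, if any)
getfattr -d "$FILE"

# Remove the assumed hash attribute -- roughly what Clear does per file
setfattr -x user.blake3 "$FILE"
```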

Link to comment
On 2/5/2022 at 8:48 AM, kevkru said:

How does the plugin differentiate between user wanted filechanges and changes due to corruption?

I can see files Ive changed willingly in my log file. Is there some further documentation on how the plugin works?

It'll report something like "1 mismatch (updated)" or "1 corruptions"

Link to comment

Hi. Just installed this plugin and configured it as such:

image.png.aa1b72ac854af1a823a59904358646b3.png

 

And then clicked "Build" with all disks selected:

image.thumb.png.2af22bb6f9bb9424617dbfe87b39ae3d.png

 

Do I need to do anything else or will it automatically calculate hashes for new files (as they are created) and verify the existing ones (on monthly basis) on its own?

Link to comment
