Dynamix File Integrity plugin


bonienl


Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 340890728 bytes) in /usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php on line 40

I seem to be getting this error. I'm not quite sure how to fix it, but I see a few others have had a similar error.

Link to comment
On 5/9/2022 at 8:25 PM, BluJ said:

I seem to be getting this error. I'm not quite sure how to fix it, but I see a few others have had a similar error.

I am also getting this error from one of my drives while exporting. I was on Blake2 but recently switched to Blake3, with the same result. I have not tried the other methods. The crazy thing is, I have green checkmarks across the board but I got that fatal error on export. I'd love to know what to tweak to fix the issue as well!

Link to comment
  • 2 weeks later...
On 5/16/2022 at 6:24 PM, JimmyGerms said:

I am also getting this error from one of my drives while exporting. I was on Blake2 but recently switched to Blake3, with the same result. I have not tried the other methods. The crazy thing is, I have green checkmarks across the board but I got that fatal error on export. I'd love to know what to tweak to fix the issue as well!

I have found the issue but am not sure of a solution. My hash file for Drive 9 is 340MB, which is beyond the 128MB PHP memory limit in Unraid. I can change that limit manually, but it does not stick between boots.
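
A quick way to confirm you are hitting the same wall is to compare the size of the exported hash file against PHP's configured limit. A rough sketch (the export folder below is a placeholder for wherever your exports actually land):

php -r 'echo ini_get("memory_limit"), PHP_EOL;'    # what PHP thinks its limit is
ls -lh /path/to/your/export/folder                 # how big the hash files are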

Link to comment
4 hours ago, BluJ said:

I have found the issue but am not sure of a solution. My hash file for Drive 9 is 340MB, which is beyond the 128MB PHP memory limit in Unraid. I can change that limit manually, but it does not stick between boots.

Would you mind going over how you manually edited that? I'd love to get a successful export to one of my drives even if it's a temporary fix.

Link to comment
11 hours ago, JimmyGerms said:

Would you mind going over how you manually edited that? I'd love to get a successful export to one of my drives even if it's a temporary fix.

In the php.ini file at "/etc/php" I added "memory_limit = 512M" to the end and saved. The task is still running but it has not failed yet!
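
Since /etc lives in RAM on Unraid, that edit is lost on every reboot. A sketch of one way to reapply it automatically, assuming php.ini stays at /etc/php/php.ini, is to patch the file from the go script before the webGui starts:

#!/bin/bash
# /boot/config/go runs at every boot; raise the PHP memory limit first
sed -i 's/^memory_limit = .*/memory_limit = 512M/' /etc/php/php.ini
# Start the Management Utility (stock line; keep it last)
/usr/local/sbin/emhttp &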

Link to comment

Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 218365648 bytes) in /usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php on line 40

 

What can be done to permanently fix the plugin? I don't want to manually edit anything; I just want this to work. I started with SHA256 and switched to Blake3. The same error is happening on the same HDD for me.

At least with Blake3 the plugin will actually check the files. The SHA256 method would not check any file and just returned an "all is well" with zero files checked, zero files skipped.

Link to comment

Hello.


I built a second Unraid PC completely from scratch (out of old parts).
I filled the disks with data and then created parity.

After that I installed the Dynamix File Integrity plugin.
Then I selected disk1 and started a build on that disk.
So disk1 is the only disk I have built checksums for on this machine so far.
After the build for disk1 finished, I looked and was surprised:
disk1, 2 and 3 all have a green sign (build up-to-date).

Since I did not build on disk2, 3, 4 or 5, I am confused.
I have now started a build on disk2 and it is running.

But look at the screenshots below:
the tool tells me the builds on disk1, 2 and 3 are okay.
I only created checksums for disk1 and am now letting it build them on disk2. I never built checksums on disks 3, 4 or 5.

I had this problem several months ago (January 2022) on my first Unraid machine as well, and uninstalled the plugin.
Now, on this second machine, I have the same problem.

How can I trust the File Integrity plugin when it looks like it does not tell me the actual truth?
Where is my mistake?

My previous posting concerning my first Unraid system, from January 2022:

 

 

DFI-again-1.png

DFI-again-2.png

Edited by DataCollector
Link to comment
  • 2 weeks later...

Still struggling to get this to do a manual verify after a disk rebuild.

 

The Export then Check Export just ends up with the following when disk 3 is selected...

 

Finished - checked 0 files, skipped 0 files. Found: 0 mismatches, 0 corruptions. Duration: 00:00:00

 

Obviously wrong.

 

Nothing in the logs either. I'm now trying to make it do an automatic daily check instead, to see if that works...

 

At the moment I do not believe this plugin works correctly, sorry Bonienl.

 

Edit: Automatic verify seems to work; the Check Export button does NOT work.

Edited by Interstellar
Link to comment

There are hash files there; the Export function works.

 

The Verify Export function does not work when the relevant disk is checked...

 

"Finished - checked 0 files, skipped 0 files. Found: 0 mismatches, 0 corruptions. Duration: 00:00:00"

 

So I cannot start a manual verify. An automatic verify does start the check, however.

Link to comment
On 5/30/2022 at 8:18 PM, Jake0010 said:

Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 218365648 bytes) in /usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php on line 40

 

What can be done to permanently fix the plugin? I don't want to manually edit anything; I just want this to work. I started with SHA256 and switched to Blake3. The same error is happening on the same HDD for me.

At least with Blake3 the plugin will actually check the files. The SHA256 method would not check any file and just returned an "all is well" with zero files checked, zero files skipped.

 

I am also having this problem. bonienl, can you add some code that gives us the option to increase the PHP memory limit?

 

This problem isn't going away: disks are getting larger, and thus filling with more files.

 

So there are two bugs in the plugin at the moment:

 

1. The SHA256 "Check Export" button does not work correctly.

2. The 128MB PHP memory limit is too low (not strictly a bug, but the plugin doesn't work with it set at 128MB, so it might as well be one).

Link to comment

Hi,

 

I've installed the File Integrity plugin and successfully run the first build. I activated the option "Automatically protect new and modified files", but it appears that new hashes aren't created automatically. I added a test file to a user share and, after a couple of days, its hash still doesn't appear in the exported file (after a new export). Furthermore, on the File Integrity control page the disk holding the file shows an orange circle next to "build up-to-date". If I do a manual build the circle disappears and the new file's hash is created, but the whole point of this plugin, for me, is to protect new files automatically, not to have to run a build manually each time.

 

 

Schermata 2022-06-14 alle 10.32.57.png

 

Hope someone can help me.

 

Here I attach the diagnostic file

jarvis-diagnostics-20220614-1033.zip

Link to comment
  • 2 weeks later...
On 6/14/2022 at 10:35 AM, santinilor said:

Hi,

 

I've installed the File Integrity plugin and successfully run the first build. I activated the option "Automatically protect new and modified files", but it appears that new hashes aren't created automatically. I added a test file to a user share and, after a couple of days, its hash still doesn't appear in the exported file (after a new export). Furthermore, on the File Integrity control page the disk holding the file shows an orange circle next to "build up-to-date". If I do a manual build the circle disappears and the new file's hash is created, but the whole point of this plugin, for me, is to protect new files automatically, not to have to run a build manually each time.

 

 

Schermata 2022-06-14 alle 10.32.57.png

 

Hope someone can help me.

 

Here I attach the diagnostic file

jarvis-diagnostics-20220614-1033.zip

Same for me; I have been struggling with this for a while. All data is coming in via SMB or Docker bind mounts. Does it only work on files changed via SMB?
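
From what I can tell (an assumption based on behaviour, not official documentation), the automatic protection relies on inotify watchers on the array disks, which should see writes arriving via SMB and Docker bind mounts alike. A quick check for whether a watcher is running at all:

pgrep -fa inotifywait    # no output = nothing is watching, so new files won't be hashed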

Link to comment
  • 2 weeks later...
On 5/30/2022 at 8:18 PM, Jake0010 said:

Fatal error: Allowed memory size of 134217728 bytes exhausted (tried to allocate 218365648 bytes) in /usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php on line 40

 

What can be done to permanently fix the plugin? I don't want to manually edit anything; I just want this to work. I started with SHA256 and switched to Blake3. The same error is happening on the same HDD for me.

I've done this and it wasn't enough. I also changed this file, part of the plugin itself (and the one that's actually complaining):

/usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php

 

I've added this line just below the top (note that PHP's shorthand suffix is 'M', not 'MB'):

 

ini_set('memory_limit', '512M');

 

So the file now looks like this:

 

<?
$plugin = 'dynamix.file.integrity';
$docroot = $docroot ?: $_SERVER['DOCUMENT_ROOT'] ?: '/usr/local/emhttp';
$translations = file_exists("$docroot/webGui/include/Translations.php");
ini_set('memory_limit', '512M');

 

That might be too large, but I believe PHP won't allocate that much memory unless it needs it...

It works - but it doesn't survive a reboot either, unfortunately.
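
One way to reapply the patch at every boot is an append to /boot/config/go (an untested sketch: it assumes plugins are already installed by the time go runs, and that ProgressInfo.php keeps the same opening lines across updates):

cat >> /boot/config/go <<'EOF'
# raise DFI's PHP memory limit (plugin files are re-extracted at each boot)
sed -i "/^\$translations/a ini_set('memory_limit', '512M');" \
  /usr/local/emhttp/plugins/dynamix.file.integrity/include/ProgressInfo.php
EOF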

Maybe @bonienl can incorporate a version of this into the next release of the plugin?

Edited by Mark Hood
Clarify this is an additional fix, not the only one.
Link to comment

Hi. I'm a newbie here and I'm curious about this Dynamix File Integrity plugin vs. btrfs scrub.
I'm currently running btrfs on a 2 x 5TB data array with 1 parity disk.

I read in some discussions that a btrfs scrub compares the data against the stored checksums on each scheduled scrub job and then reports the errors / repairs corrupted blocks. This seems to be the same function as Dynamix File Integrity.

So my question is: what's the difference between those two methods?

There are some suggestions that the repair-corrupted-blocks function is not working/effective since there is no RAID in the array; therefore, if a problem occurs with a disk, repairing corrupted blocks is useless and a btrfs scrub is no more than a corruption alarm. So does the parity disk help at all to repair the corrupted blocks?

Web capture 7-7-2022 192.168.50.105.jpeg
Thanks, everyone.

Edited by Ken.N
Link to comment
4 hours ago, Ken.N said:

There are some suggestions that the repair-corrupted-blocks function is not working/effective since there is no RAID in the array; therefore, if a problem occurs with a disk, repairing corrupted blocks is useless and a btrfs scrub is no more than a corruption alarm.

Correct. For array devices a btrfs scrub can only detect data checksum errors; it can't repair them since there's no redundancy, and no, parity can't help with data. It can repair metadata errors, since metadata is redundant.
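
For anyone who wants to see this first-hand, a read-only scrub reports checksum errors without attempting any repair (standard btrfs-progs flags; adjust the mount point to the disk you want to check):

btrfs scrub start -B -d -r /mnt/disk1    # -B foreground, -d per-device stats, -r read-only
btrfs scrub status /mnt/disk1            # summary, including csum error counts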

Link to comment

  

10 hours ago, JorgeB said:

Correct. For array devices a btrfs scrub can only detect data checksum errors; it can't repair them since there's no redundancy, and no, parity can't help with data. It can repair metadata errors, since metadata is redundant.

Thanks for your reply, @JorgeB. I would love to understand more about how parity works.

As I read in some old Unraid threads, and as you clarified already, parity cannot help at all with bit-rot or bad-sector situations; it simply helps recover the data of one failed data disk (assuming a single parity disk), while the other disks in the array MUST be healthy to participate in the recovery.

So does this feature also mean that when a data disk suffers corruption, parity will sync the corrupted bits and replace the good bits from the previous parity sync? Is there any solution within the Unraid array to protect against bit-rot and bad sectors other than off-site backup? Oh wait, even if I run off-site backups from Unraid, the corruption potential still exists there as well.

If this is the case, Unraid has a drawback compared to RAID: Unraid has no redundancy for data and cannot protect it from corruption, while RAID can, by discovering corruption with a btrfs scrub job and repairing it by comparing blocks across the RAID disks.

Please help me to clarify this. Much appreciated.

Edited by Ken.N
Link to comment
10 minutes ago, Ken.N said:

So does this feature also mean that when a data disk suffers corruption, parity will sync the corrupted bits and replace the good bits from the previous parity sync?

If there is bit-rot in a disk, which in my experience is extremely rare, and you run a correcting check, parity will be incorrectly updated. Note that we always recommend running non-correcting checks unless sync errors are expected, for example after an unclean shutdown. If sync errors are found without a known reason and the user has checksums for the data disks, he can first check those and then decide how to proceed.
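
A toy illustration of why a correcting check "locks in" bit-rot (Unraid's single parity is a bitwise XOR across the data disks; this is a simplification, not actual Unraid code):

d1=1; d2=0
parity=$(( d1 ^ d2 ))                                 # stored at the initial sync: 1
d1=0                                                  # bit-rot flips a bit on disk 1
echo "recomputed $(( d1 ^ d2 )) vs stored $parity"    # mismatch = sync error
# A correcting check would overwrite the stored parity with the recomputed
# value, making the rotted data "consistent" and unrecoverable from parity.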

 

13 minutes ago, Ken.N said:

Oh wait, even if I run off-site backups from Unraid, the corruption potential still exists there as well.

Depends on how you do the backups. I, for example, use btrfs send/receive to back up all my array disks 1 to 1; the stream is checksummed all the way, and if there's a checksum error on read it will abort. As far as I understand, the sent metadata also includes the checksums.
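
A minimal sketch of that send/receive pattern (snapshot and destination names are placeholders, not my actual setup):

btrfs subvolume snapshot -r /mnt/disk1/data /mnt/disk1/data_snap    # send needs a read-only snapshot
btrfs send /mnt/disk1/data_snap | ssh backupserver 'btrfs receive /mnt/backup/disk1'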

 

22 minutes ago, Ken.N said:

If this is the case, Unraid has a drawback compared to RAID: Unraid has no redundancy for data and cannot protect it from corruption, while RAID can.

Usually you cannot have just the benefits of something. Unraid is very flexible, for example in using the full capacity of different disk sizes, and in recovering the data from the remaining disks if you lose more disks than the redundancy can cover, but there are also some drawbacks, like array speed and the inability to fix bit-rot in the array. There are the pools, where you can have redundancy, but btrfs raid5/6 is not stable enough for the typical user and raid1/10 is not very efficient for large pools. Soon you'll be able to create a zfs pool; of course, you'll lose some of the Unraid array's advantages, like fully using different-size disks with raidz, and you can lose the complete pool if more disks are lost than the redundancy can support. Again, you cannot have everything.

 

 

Link to comment
11 minutes ago, JorgeB said:

Usually you cannot have just the benefits of something

Oh yeah, for sure I understand this point. Everything has two sides, and your clarification only strengthens my decision to choose Unraid over other OSes. Knowing how Unraid works will help me and everyone else plan ahead to avoid unpredictable losses down the road. Thanks, cap. @JorgeB

Link to comment
  • 2 weeks later...

Hi all, I'm trying to get this plugin to work on my server, but for some reason the "Check Export" button does nothing. It shows 0 completed, 0 failed, and just quits immediately, saying the check completed successfully (which it obviously did not, because the check finished in about two seconds).

 

For some reason the scheduled verification tasks do run, but they're broken as well, throwing up a ton of "corrupted" and "modified" errors, like below:

 

SHA256 hash key mismatch (updated), /mnt/disk1/appdata/overseerr/settings.json was modified
SHA256 hash key mismatch (updated), /mnt/disk1/appdata/overseerr/settings.json was modified
SHA256 hash key mismatch (updated), /mnt/disk1/appdata/overseerr/settings.json was modified
SHA256 hash key mismatch, /mnt/disk1/appdata/overseerr/settings.json is corrupted
SHA256 hash key mismatch, /mnt/disk1/appdata/overseerr/settings.json is corrupted
SHA256 hash key mismatch (updated), /mnt/disk1/appdata/Plex-Media-Server/Library/Application Support/Plex Media Server/Logs/Plex Transcoder Statistics.5.log was modified
SHA256 hash key mismatch (updated), /mnt/disk1/appdata/overseerr/settings.json was modified
SHA256 hash key mismatch (updated), /mnt/disk1/appdata/overseerr/settings.json was modified

 

Each log contains 2000+ of these "errors", almost all within the appdata share. Are they just false positives caused by Docker touching these files in the background? Should I just exclude the entire appdata folder in the settings?

Link to comment
44 minutes ago, ericswpark said:

Should I just exclude the entire appdata folder in the settings?

Yes.

 

This plugin's main purpose is to verify that files put away for long term storage are still intact after a period of time. It's not meant to track files that change regularly.

Link to comment
  • 2 weeks later...

Do the DFI checks only happen in the background? Is there a manual operation?

 

Does exporting regenerate the hashes, or does it export the hashes from the extended attributes? The documentation says "generate checksum files", so I presume it regenerates the hashes, which could be a problem if there is already a problem with the existing file. I don't want to regenerate and export a hash of a corrupt file.
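
For what it's worth, the hashes live in extended attributes on each file, so what has already been recorded can be inspected directly without triggering a rebuild (the attribute name depends on the chosen method; the path is just an example):

getfattr -d -m user. /mnt/disk1/Movies/example.mkv    # dump all user.* attributes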

 

I also have "open operations", yet I also have enabled "automatically protect new and modified files".  Not sure when this operation occurs, because the setting's menu refers to "verification schedule", not "creation".

Edited by Jaybau
Link to comment

I am interested in this plugin. A question regarding performance:

I understand that Blake3 is best if you have a CPU that supports it?

I have a hard time understanding exactly what "Automatically protect new and modified files" does.

By having it enabled, the plugin writes data to the file's metadata? What would be the use of this? It says that I have to maintain something on my own if I have this disabled; I'm not exactly sure what that means.

 

My worry with the plugin writing to "extended attributes" is that the files get detected as changed by rsync, resulting in my backup script having to re-copy all files to the backup server. But maybe I am wrong here?
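
From what I can tell, rsync's quick check decides "changed" from file size and modification time, and setting an extended attribute alters neither, so the file data should not be re-sent; xattrs are only transferred at all when requested. An illustrative invocation (paths are placeholders):

rsync -avX /mnt/user/share/ backupserver:/mnt/backup/share/    # -X syncs xattrs without re-copying unchanged data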

Edited by je82
Link to comment
