Dynamix File Integrity plugin


bonienl

Recommended Posts

If you haven't changed the hash method, then the extended attribute should be recognized and "build" will skip these files.

When the hash method has changed, it will need to rebuild everything, and the new hash key is added next to the existing key. E.g. it is allowed to have both a SHA256 and an MD5 hash for the same file, but only one will be actively used and maintained.
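If you are curious what is actually stored, you can inspect the extended attributes from the command line. A minimal sketch, assuming the plugin keeps its keys under attribute names such as user.md5 and user.sha256 (the path, attribute names and hash values here are only illustrative; check the output on your own files, as the exact names may differ):

  getfattr -d /mnt/disk1/Media/example.mkv
  # file: mnt/disk1/Media/example.mkv
  user.md5="d41d8cd98f00b204e9800998ecf8427e"
  user.sha256="e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855"

Both keys can sit on the same file at once; only the one matching the currently selected hash method is maintained.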

 

Link to comment

I think the instructions in the initial post are missing something: how to proceed once an error is found. I have a weekly task to check my files' integrity, and I got a notification that errors were found with some files. Nothing else. I opened the plugin page, and I don't know what to do now: where to click, or where to look for the damaged files. :(

 

Link to comment

@bonienl First of all, thanks for this plugin. I'm using it on my array and it's working fine with one exception: it has trouble checking hashes in my EncFS directory. When the verify task finishes I always get some hash mismatches on the encrypted files, even though I'm sure the decrypted files are fine. This might not even have anything to do with this plugin, but have you encountered something like this?

Link to comment

I just added a new disk and wanted to be sure it's protected.  I checked and all the other disks showed green checks for Build up-to-date and also for Export up-to-date.  All good so far.

 

I built for the new disk, green check. I exported for that disk, blue X. It's brand new to the array, and blank, so fine, nothing to export I thought.

 

Then I noticed the 'include duplicate file hashes in Find command' checkbox and thought 'what's this?', so I checked it and hit Find. I saw that it searched for duplicate files; awesome!

 

I realized that I had lots of duplicated music files in one folder, so I used Midnight Commander to move them from one disk to the other, and told it to overwrite. I confirmed that the folder was no longer on the old disk I moved them from; perfect.

 

I then came back to the tool and re-ran the find, but the list shows these same files as duplicates. I didn't expect this, since I just moved them, and also because it's telling me the builds are up to date on both disks.

 

I assume that I need to rebuild the hashes, so I'm doing that now, but I was concerned that it says up to date when it seemingly isn't.

 

I just wanted to confirm where it's actually checking for duplicates, because MC shows one of the duplicated folders isn't on the disk this tool shows it to be on.

 

Where did I go wrong, or is there some bug here?

Link to comment
5 hours ago, JustinChase said:

I then came back to the tool and re-ran the find, but the list shows these same files as duplicates. [...] Where did I go wrong, or is there some bug here?

 

You need to run the export again to update the exported hash files.
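As I understand it from that, the duplicate search in the Find command works from the exported hash files rather than from the live extended attributes, which is why a stale export can still list files that have already been moved. A rough way to check from the command line, assuming the exports are written to the flash drive under the plugin's config directory (the path below is an assumption, and "SomeAlbum" is just a placeholder for a file or folder name):

  grep "SomeAlbum" /boot/config/plugins/dynamix.file.integrity/export/*.hash

If the old disk's export still lists the moved files, re-running the export for that disk should make the duplicate results match reality again.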

Link to comment

I got a warning this morning that said

 

unRAID file corruption: 1-22-2018 11:27AM

Notice [Media] - bunker verify command

Found 6 files with SHA256 hash key mismatch

 

I'm not sure it's this plugin throwing the warning, nor am I sure how to find these 6 files.

 

The Fix Common Problems tool isn't showing any errors, and I'm not sure what else might notice such a thing.

 

Suggestions on how to proceed?
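Not an authoritative answer, but assuming the bunker verify command logs the affected file names to the system log (which may differ between plugin versions), something along these lines could narrow down the 6 files:

  grep -i "hash key mismatch" /var/log/syslog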

Link to comment

Hi,

 

Today I replaced my cache drive by moving my cache shares onto the array and then back onto the new cache drive after I assigned it.

Unfortunately, I forgot that by doing so I generated hash values for all of my cache files the moment they were moved to one of my data disks.

 

Now, what would be the best way to get rid of the hash values, since the files are back on my cache drive?

Does it even matter? Somehow that disturbs me :)

Link to comment
8 hours ago, Squid said:

AFAIK, the hash values are not saved when moving files, not to mention that Dynamix FIP wouldn't care about files on the cache drive anyway, since it doesn't touch or check it.

 

Yeah, I know that it doesn't care about cache files.

But when the files were moved to the array, the plugin created hash values for them, as it should, of course.

 

Can I test by just copying over some files from my cache drive to an excluded array folder and then using the clean function to see whether the hash values were saved after moving?
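A quicker way to test, without waiting for the plugin at all, is to copy a single file and compare the extended attributes on the source and the copy. A minimal sketch (the paths are just placeholders):

  cp --preserve=xattr /mnt/cache/test.bin /mnt/disk1/Excluded/test.bin
  getfattr -d /mnt/cache/test.bin
  getfattr -d /mnt/disk1/Excluded/test.bin

Whether the attributes survive depends on the tool doing the copy: plain cp drops them unless asked to preserve them (--preserve=xattr or -a), while a mv that stays on the same filesystem keeps them because it is just a rename.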

Edited by Marv
Link to comment
On 1/23/2018 at 8:04 PM, mbc0 said:

Hi, 

 

Sorry to be a complete noob, but I have been reading through and wondering if somebody can explain in a nutshell what this tool actually does?

 

Thanks and again sorry :-(

 

In a nutshell, it creates and compares checksums of your files. See the first post in this thread for details on this plugin, and this Wikipedia article for more about checksums in general: https://en.wikipedia.org/wiki/Checksum
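For a concrete feel of what a checksum is, you can compute one by hand with the standard tools; the file name and output below are only an illustration:

  sha256sum /mnt/user/Media/example.mkv
  e3b0c44298fc1c149afbf4c8996fb92427ae41e4649b934ca495991b7852b855  /mnt/user/Media/example.mkv

As long as the file's contents stay the same, that string stays the same; if even a single bit flips, the checksum comes out completely different.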

Link to comment
1 minute ago, mbc0 said:

Many thanks for the reply. I understood that it creates checksums; what I don't understand is whether it auto-corrects or just alerts you to corrupt files?

 

Thanks again!

Checksums don't contain enough data to correct anything, only to detect differences. The plugin verifies on a schedule and alerts you. If differences are detected, you would have to rely on your backups for correction.
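To illustrate the detect-but-not-repair point outside the plugin: the standard sha256sum tool behaves the same way. A made-up example:

  sha256sum important.dat > important.dat.sha256    # record the checksum
  # ... some time later, after silent corruption ...
  sha256sum -c important.dat.sha256                 # reports "important.dat: FAILED"

At that point the checksum can only tell you the file changed; getting the original bytes back means restoring from a backup.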

Link to comment
19 hours ago, Marv said:

[...] What would be the best way to get rid of the hash values, since the files are back on my cache drive? Does it even matter?

 

7 hours ago, Benson said:

You can use "getfattr -d xxxxxx" to check hash value save or not. xxxx was file name.

 

So I tested with the above command and the files I moved from cache to array and back to cache again still have hash values saved with them.

Is there an easy way to remove them without having to move them back to the array?
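For reference, extended attributes can be removed in place with setfattr, so moving the files again shouldn't be necessary. A minimal sketch, assuming the attribute is called user.sha256 (use whatever name getfattr -d actually shows on your files, as it may differ):

  getfattr -d /mnt/cache/somefile                 # list the attributes and their exact names
  setfattr -x user.sha256 /mnt/cache/somefile     # remove that attribute from the file

To cover a whole directory tree this could be wrapped in find ... -exec, but it is worth testing on a single file first.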

Edited by Marv
Link to comment

I have the File Integrity plugin set up and working. Thank you for all the work that you've put into this. I have a very simple setup with 8 data drives, 1 parity and a cache drive. As part of getting to know what the tool does, I have set it up to check one of the data drives each night, and it does what I expect for the most part. No errors are reported. But I am puzzled by the fact that the checking process appears to cause writes to the disk being checked (and an equivalent number of writes to the parity drive). I would have thought that the checking function would be read-only as far as the data drives are concerned: reading the data of each file, calculating the hash, and verifying it against the previously stored version in the extended attributes. Is this behaviour of writes to the data disk expected? Thanks.

Edited by S80_UK
Typos
Link to comment
