Dynamix File Integrity plugin


bonienl

Recommended Posts

  • 2 weeks later...

I am doing a mass move from my old Unraid server to my new one. I expected some file transfer damage, so it was no surprise when a manual check turned up errors. What was a surprise was how... incomplete those errors were.

 

Quote

 

SHA256 hash key mismatch, 1e03.mkv is corrupted

SHA256 hash key mismatch, 20p.AMZN.WEB-DL.mkv is corrupted

SHA256 hash key mismatch, DTV.XviD-NoTV.avi is corrupted

SHA256 hash key mismatch, es s02e06.avi is corrupted

 

 

I saw a few pages back that someone else had a similar problem, and the advice was to go into the hash log for the disk, find the candidate files, and run b2sum against each candidate's path. For some errors that is easy, but for the first two above there are a lot of candidates.

 

Is there a less tedious way of configuring this output so I don't need to manually check 39 false positives to find the one I want?
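 

For anyone else hitting this, the candidate checking can at least be scripted. A rough sketch only; the export file location and the "<hash> <path>" line format below are assumptions (check your own hash log first), and swap in b2sum if that is your configured algorithm:

# Sketch: recheck every stored hash whose path ends in the truncated
# name from the error, and print only the real mismatches.
# Assumptions: export file path, and one "<hash> <path>" entry per line
# (some formats prefix the path with '*'; adjust the read accordingly).
SUFFIX='1e03.mkv'
EXPORT='/boot/config/plugins/dynamix.file.integrity/export/disk1.hash'

grep -- "${SUFFIX}\$" "$EXPORT" | while read -r stored path; do
    actual=$(sha256sum "$path" | awk '{print $1}')
    [ "$actual" = "$stored" ] || echo "MISMATCH: $path"
done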

 

Also, that last one only has a single candidate, so I checked it. Those files have individual md5 checksums, one for each file. That directory passes an md5 check on both the source and the allegedly corrupt destination. I don't understand how that can be. I don't mean that I don't believe it, I just don't quite understand it. 


 

Edited by Ouze
Link to comment

I suspect this plugin can't be trusted.  Whenever I would encounter "corrupted" files, I could not find any issue when hashing with other tools.  When multiple hashing tools agree and this plugin disagrees, the issue is most likely with the plugin itself.

Link to comment
42 minutes ago, _Shorty said:

I suspect this plugin can't be trusted.  Whenever I would encounter "corrupted" files, I could not find any issue when hashing with other tools.  When multiple hashing tools agree and this plugin disagrees, the issue is most likely with the plugin itself.

I think the error message means that the stored checksum does not agree with the current file.   That can happen if, for any reason, a file gets updated without the plugin being notified, so it never calculates a new checksum.   This means you can occasionally get false 'corrupted' indications, but if the checksums agree the file is almost certainly good.
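
You can verify this for a single file by hand. A sketch, assuming the plugin keeps its hash in an extended attribute; the attribute name user.sha256 below is a guess, so run getfattr -d first and use whatever name actually shows up:

# Sketch: compare the stored hash with a freshly computed one.
f='/mnt/disk1/path/to/file.mkv'     # hypothetical path
getfattr -d "$f"                    # list all extended attributes
stored=$(getfattr --only-values -n user.sha256 "$f" 2>/dev/null)
fresh=$(sha256sum "$f" | awk '{print $1}')
if [ "$stored" = "$fresh" ]; then
    echo "hashes agree; file is almost certainly fine"
else
    echo "stored and current hashes differ"
fi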

Link to comment
37 minutes ago, _Shorty said:

Is there something else responsible for notifying the plugin of that, or is that its own responsibility?  It doesn't seem to be happening often enough to be a concern, anyway.

Yes - it is up to the Linux inotify system to tell the plugin that a file has changed so that it knows the file needs checking.
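
For illustration, this is roughly the kind of watch involved. A sketch using inotify-tools, not the plugin's actual code, and the path is an example:

# Sketch: watch a tree the way an inotify-based watcher would.
# close_write/moved_to are the events that signal "this file changed
# and needs a new checksum".
inotifywait -m -r -e close_write -e moved_to /mnt/disk1/Media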

Link to comment

What makes this strange is that it is happening with files that have been static for a very long time; nothing has updated them.  In my particular case, while I have lots of backups on this unRAID machine that do get updated from time to time, I also use it as the storage pool for TV shows and movies, which is accessed by a home theatre PC running Kodi.  Those files never change.  Yet I would get "corrupted" messages on some of those files, too.

Edited by _Shorty
Link to comment
  • 2 weeks later...
On 10/1/2019 at 10:17 AM, itimpi said:

Yes - it is up to the Linux inotify system to tell the plugin that a file has changed so that it knows the file needs checking.

Doesn't the file have a "Last Modified" timestamp? It would seem that if the file has been modified since the last checksum, the plugin could recalculate even if it was never notified by inotify.
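
Something like this would be the idea. A sketch with hypothetical paths, rehashing only files whose mtime is newer than a marker file:

# Sketch: rehash only files modified since the last pass, using mtime
# as the trigger. LAST_RUN is a marker file touched after each pass;
# both paths are hypothetical.
LAST_RUN='/boot/config/plugins/dynamix.file.integrity/.last-check'
find /mnt/disk1 -type f -newer "$LAST_RUN" -print0 |
    xargs -0 -r sha256sum           # recompute only what changed
touch "$LAST_RUN"

One catch: rsync and cp -p preserve the original mtime, so a file written with an old timestamp would slip past a pure "Last Modified" check, which is presumably part of why event-based notification is used instead.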

 

Link to comment
On 10/1/2019 at 1:54 AM, Ouze said:

I am doing a mass move from my old Unraid server to my new one. [...] Is there a less tedious way of configuring this output so I don't need to manually check 39 false positives to find the one I want?
 

Happened again - I ran a check, and here is the output:

 

"bunker: error: SHA256 hash key mismatch, rnal.720p.bluray.x264-reward.mkv is corrupted"

 

According to a search, that narrows it down to 125 files. 

 

There has got to be a way to get this plugin to output more useful (specific) information, right? Why are these error messages being truncated this way?

 

I did an md5 check against all 125 files' individual checksums and, of course, they pass just fine.  These files were dropped onto the server once and have not been touched since. If this plugin is going to generate output as vague as this, and as prone to false positives as it seems to be, then ultimately it's just contributing noise, not value. 
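
For what it's worth, checking all the per-file md5s doesn't have to be manual. A sketch, assuming each .md5 file sits in the same directory as the files it covers and uses standard md5sum format (the media path is an example):

# Sketch: verify every per-file .md5 checksum under a tree in one pass.
# --quiet prints only the failures.
find /mnt/disk1/Media -name '*.md5' -execdir md5sum -c --quiet {} \;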

 

 

 

Edited by Ouze
clarification
Link to comment

Like I said, while great in theory, the plugin just doesn't seem to be that useful, given this issue.  It randomly gives me "mismatches" on files that have not changed.  It's not a case of missing a notification from the OS that a file has changed - the files haven't changed, and it still spits out messages about files being corrupt that are most definitely intact.

Link to comment

I have two disks with one and two files, respectively, that fail to export. I understand that an export failure means no hash has been generated for them. I ran a (re)Build on both drives but the error persists. Here is one example:

 

[screenshot attachment: capture_20191017-211212.jpg]

 

xfs_repair (without -n) did not do anything either.

 

All files can be read and copied without problems and are identical to their sources.

 

Note, though, that the folder actually has a whopping 25,000+ files in it.

 

When I ran into this problem I had thousands of export errors on several drives. After Build, I am down to those three.


I also noticed that the number of files reported as having an export error is not the same as the number reported as added when I run the Build; Added is higher. I would have expected them to match, i.e. Export = cannot find the hash <=> Build = adds the hash.

 

I'd appreciate your advice on how to judge this and what steps to take to investigate further or rectify it.

 

Thanks!

Link to comment
8 hours ago, Aderalia said:

Is the built-in btrfs scrub basically the same thing, since it should just check block CRCs?

Basically the same function, but it can be a good idea to have another option for checking on a per-file basis. For example, for my movies and TV seasons I use corz to create checksums before moving the data to the server, then rely primarily on btrfs to know the data is intact, but I can fall back on the file checksums if needed.
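
For reference, kicking off and monitoring a scrub by hand looks like this (the mount point is an example):

# Sketch: a manual btrfs scrub, which verifies the checksums of all
# data and metadata blocks in the background.
btrfs scrub start /mnt/cache
btrfs scrub status /mnt/cache    # progress and error counts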

Link to comment
On 10/18/2019 at 7:31 PM, tazman said:

I have two disks with one and two files, respectively, that fail to export. [...] I'd appreciate your advice on how to judge this and what steps to take to investigate further or rectify it.

 

Long story short, the error does not mean a filesystem fault. It is most likely because the file has an extended attribute but falls under the exclude filter, which causes a conflict.

 

One way to clear those errors is to use the "Clear" and "Remove" functions to delete the stale data (the extended attributes).

 

For example,

BUILD -> CLEAR -> EXPORT -> REMOVE -> IMPORT (please back up all hash export files first)

 

Some feedback on FIP: as I understand it, this plugin just calls "bunker", and "bunker" is a black box, so any problem/bug in bunker can't be fixed by this plugin.
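
If you want to inspect, or strip, those leftover extended attributes on a single file by hand, something like this works. A sketch; the path is hypothetical and the attribute name is a guess, so check the getfattr output first:

# Sketch: list and remove the plugin's extended attributes manually.
f='/mnt/disk1/excluded/file.bin'
getfattr -d "$f"                # list every extended attribute
setfattr -x user.sha256 "$f"    # remove the stale hash attribute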

Edited by Benson
Link to comment

I ran a full check today and the plugin found some corruption errors. I checked the system log and found that the paths of the files were not complete, making it hard to find the offending files for recovery.

 

Here is an example of the output with a mangled path:

Oct 22 17:58:39 Cogsworth bunker: error: SHA256 hash key mismatch, t/Filters/data_processing/coverlap_g_sdb.fil is corrupted

Has anyone else seen this before? Is it a plugin bug or something I did wrong?

 

-JesterEE

Link to comment
13 minutes ago, _Shorty said:

Read the last couple of pages.

OK, yeah, just did that, and I see someone reporting a possible Unicode display issue. Though, between my last post and now, I did a hash file export and got additional errors for the same files, this time with the correct path:

Oct 22 21:28:22 Cogsworth bunker: error: no export of file: /mnt/disk4/Users/bsmith/Eng_archive/XMod/mach/v5/sys/depot/Filters/data_processing/coverlap_g_sdb.fil

So, script bug?

 

-JesterEE

Link to comment

I would guess so.  While it appears to work correctly most of the time for almost everyone, some of us have run into erroneous corrupt-file messages, among other things.  In my case, the "corrupt" files still match the same hashes when checked with other hash utilities, so I'm not sure it's worth bothering with anymore, at least for the moment.  Naturally, it's hard to fix if the dev can't replicate it.

Link to comment
2 hours ago, autumnwalker said:

Perhaps I am missing something, but I do not see "appdata" or "system" shares under "excluded folders and files". Are these excluded by default? The help file does not indicate either way.

Where do you have those located?   Mine are on the cache, so they are automatically ignored anyway.

Link to comment

Since upgrading to Unraid 6.8-rc series I'm seeing the weekly scheduled integrity check abort with exit status 126:

Oct 27 06:00:01 Mandaue crond[1772]: exit status 126 from user root /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null

@bonienl Likely a consequence of the permissions change on the boot flash device that now prevents scripts from being executed directly from the flash?
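
If that's the cause, a possible workaround (a sketch only): exit status 126 means "found but not executable", so cron could invoke the script through an interpreter instead of executing it directly from the flash:

# Sketch: run the check via bash so the flash's exec permissions
# no longer matter.
bash /boot/config/plugins/dynamix.file.integrity/integrity-check.sh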

 

Link to comment
26 minutes ago, John_M said:

Since upgrading to Unraid 6.8-rc series I'm seeing the weekly scheduled integrity check abort with exit status 126:


Oct 27 06:00:01 Mandaue crond[1772]: exit status 126 from user root /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null

@bonienl Likely a consequence of the permissions change on the boot flash device that now prevents scripts from being executed directly from the flash?

 

You are right.

Need to correct that. Thanks

Link to comment