Dynamix File Integrity plugin


bonienl

918 posts in this topic Last Reply

Recommended Posts

Hello... I am in the process of converting all my drives from ReiserFS (RFS) to XFS. I have the plugin installed but not enabled until the whole conversion process is complete. Is there a common/optimum settings configuration to use once I am done? I have looked at all the options and I just don't know what they mean.



Popular Posts

WARNING: USING THIS PLUGIN ON DISKS FORMATTED IN REISERFS MAY LEAD TO SYSTEM INSTABILITY. IT IS ADVISED TO USE XFS.   UPDATE: Version 2016.01.05 marks the official release of this plugin.


I was moving files from one disk to another today (using rsync) with the service enabled. I started getting lots of error messages on the directly attached monitor acting as a console (but not in the syslog) about sha256sum being unable to stat each temporary file that rsync creates while doing a transfer. I presume this is because rsync renames the file to its final name once the transfer completes, so sha256sum can no longer find the file it was looking for. Is this to be expected?

 

I disabled the service and that stopped the messages, but I thought I should report it in case it points to some deeper underlying issue. It would seem a good idea to suppress this type of error message to avoid cluttering the console output (perhaps by redirecting it to /dev/null), but I guess that could have undesirable side effects for other types of errors?
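For what it's worth, the trade-off can be sketched in plain shell. This is a hedged illustration, not the plugin's actual code: silencing stderr hides the harmless "cannot stat" noise from a racing rsync temp file, but the hashing command's exit status still reveals genuine failures to the caller.

```shell
# Minimal sketch (not the plugin's code): redirecting stderr hides the
# "cannot stat" noise, but the exit status still reports real failures.
f=$(mktemp)
echo "data" > "$f"
sha256sum "$f" > /dev/null 2>&1 && echo "hash ok"
rm "$f"
# Hashing a vanished file (as with a renamed rsync temp file) now fails
# silently, but the non-zero exit code is still visible:
sha256sum "$f" > /dev/null 2>&1 || echo "hash failed"
```

So redirection need not swallow other errors entirely, as long as whatever drives the hashing checks return codes rather than scraping stderr.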


Good day @bonienl

I ran the integrity check (pre-stable) on all of my nearly full 4 TB RFS disks before I knew about the RFS issue, and it took a LONG time...

 

Now that I have converted to XFS, I reran it and the run was MUCH quicker....

 

Should I be concerned? Is there anything I need to delete off the flash for a clean set of data?

 

Thanks!


This problem is only indirectly related, but I'm not sure where else to seek advice. When I run a check of my TimeMachine backups I get hundreds of checksum errors. I've read that this is caused by TimeMachine modifying the files without changing the modification date, so I understand why I'm getting the warnings. How can I correct this? Is my only option to exclude my TM backups? These are the primary files that I wanted to protect. It's pointless to just leave my settings the way they are, though: I wouldn't know if a TM file had actually been corrupted, and with hundreds of corrupted files being listed I could easily miss other non-TM files that might be reported. Any recommendations welcome.


This problem is only indirectly related, but I'm not sure where else to seek advice. When I run a check of my TimeMachine backups I get hundreds of checksum errors. I've read that this is caused by TimeMachine modifying the files without changing the modification date, so I understand why I'm getting the warnings. How can I correct this? Is my only option to exclude my TM backups? These are the primary files that I wanted to protect. It's pointless to just leave my settings the way they are, though: I wouldn't know if a TM file had actually been corrupted, and with hundreds of corrupted files being listed I could easily miss other non-TM files that might be reported. Any recommendations welcome.

I'd say it's nearly impossible to detect bitrot in constantly changing files without added file monitoring and constant check-summing. Obviously it's impossible to determine what is bitrot during file changes, so the only period you can usefully monitor is the idle period between changes. You would need monitoring software that detects when the file is closed after modification and immediately initiates a fresh checksum, then is able to detect and pause any file-modifying software so that the checksum can be recalculated and compared. I doubt that TimeMachine has a way to pause itself while you re-verify the checksum, so the only alternative is constant re-verification, perhaps at 5-minute intervals. That way you could detect bitrot *after* a file is modified, up until the last 5 minutes before it's modified again. This does not seem feasible, especially for numerous files like these.
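The heuristic at the heart of this can be sketched in shell. This is illustrative only, not the plugin's real bookkeeping: a hash mismatch only counts as corruption when the mtime has not moved, which is exactly the assumption Time Machine's behaviour defeats.

```shell
# Sketch of the mtime heuristic (stored values are illustrative):
# a hash mismatch is only "corruption" if the mtime has NOT changed
# since the hash was recorded.
f=$(mktemp)
echo "original" > "$f"
stored_hash=$(sha256sum "$f" | cut -d' ' -f1)
stored_mtime=$(stat -c %Y "$f")

# Simulate a rewrite that keeps the old mtime (what TM effectively does):
echo "changed" > "$f"
touch -d "@$stored_mtime" "$f"

now_hash=$(sha256sum "$f" | cut -d' ' -f1)
now_mtime=$(stat -c %Y "$f")
if [ "$now_hash" != "$stored_hash" ]; then
  if [ "$now_mtime" = "$stored_mtime" ]; then
    echo "flagged as corrupted"   # hash changed, mtime did not
  else
    echo "flagged as updated"     # legitimate edit; hash gets refreshed
  fi
fi
rm "$f"
```

With the mtime held still the sketch prints "flagged as corrupted" even though the change was deliberate, which is exactly the flood of false positives described above.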

 

I would say that any constantly changing file should be excluded from check-summing.

 

Side note: not changing the file modification timestamp is just plain wrong!  Not that it matters in this case...


This problem is only indirectly related, but I'm not sure where else to seek advice. When I run a check of my TimeMachine backups I get hundreds of checksum errors. I've read that this is caused by TimeMachine modifying the files without changing the modification date, so I understand why I'm getting the warnings. How can I correct this? Is my only option to exclude my TM backups? These are the primary files that I wanted to protect. It's pointless to just leave my settings the way they are, though: I wouldn't know if a TM file had actually been corrupted, and with hundreds of corrupted files being listed I could easily miss other non-TM files that might be reported. Any recommendations welcome.

 

I noticed this too. I, for instance, keep .nfo files for media, which can be updated regularly by media-managing software (e.g. Emby). In my case all my checksum errors relate to .nfo files only.

 

I'd say it's nearly impossible to detect bitrot in constantly changing files, without added file monitoring and constant check-summing. 

 

I agree with this statement. In my case I don't really see the point in constantly checking these ever-changing files for bitrot. What I do find myself looking for is an option to exclude .nfo files (potentially extending to any other file extension, for that matter) from the scan.

 

I find myself imagining some sort of RegEx being built (or input directly by the user) through a selection of options in the GUI, so that files matching those conditions are not scanned / monitored.

 

Just thinking out loud.


Or some means of excluding files under a certain size, or by file extension. That way I could exclude anything under 1 MB, which would cover all the media metadata like movie.xml, movie.nfo, movie.txt, etc., since I keep metadata for xbmc/kodi, emby, pytivo, and plex.
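A pre-filter along those lines is easy to express with find. This is a hypothetical sketch of the requested behaviour, not a plugin feature: keep only files larger than 1 MiB that don't match the common metadata extensions.

```shell
# Illustrative only: select files for hashing, skipping small sidecar
# metadata (.nfo, .xml, .txt) and anything under 1 MiB.
d=$(mktemp -d)
dd if=/dev/zero of="$d/movie.mkv" bs=1M count=2 2>/dev/null
echo "metadata" > "$d/movie.nfo"
find "$d" -type f -size +1M \
  ! -name '*.nfo' ! -name '*.xml' ! -name '*.txt' \
  -printf '%f\n'
rm -r "$d"
```

Only movie.mkv survives the filter; the .nfo is excluded both by extension and by size.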


I opened the console for the first time in a long while (I usually just use SSH) and was greeted with a screenful of this:

getfattr: Removing leading '/' from absolute path names

getfattr sounds like it is related to this plugin; is anyone else seeing this?


I opened the console for the first time in a long while (I usually just use SSH) and was greeted with a screenful of this:

getfattr: Removing leading '/' from absolute path names

getfattr sounds like it is related to this plugin; is anyone else seeing this?

 

Just checked. Yes, I see this on the console too. Nothing in the syslog, and nothing reported to the command line when I telnet in, either...
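If the noise itself is the annoyance: that line is getfattr's standard notice whenever it is handed an absolute pathname, and its --absolute-names option suppresses it. A sketch with a made-up attribute name (user.demo); the attribute the plugin actually writes may be named differently.

```shell
# user.demo is a made-up attribute name for illustration; the plugin's
# real extended-attribute name may differ.
f=$(mktemp)
if setfattr -n user.demo -v hello "$f" 2>/dev/null; then
  getfattr -n user.demo "$f"                    # warns on stderr: Removing leading '/' ...
  getfattr --absolute-names -n user.demo "$f"   # same output, no warning
else
  echo "this filesystem does not support user xattrs"
fi
rm "$f"
```

So the message is cosmetic, but it would presumably be up to the plugin to pass the flag (or chdir first) to keep the console clean.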


Re:

 

"Select here any folders (shares) you want to exclude from the automatic hashing and verification functionality. A folder existing on multiple disks will be skipped on any disk where it is present."

 

Does this mean shares spanning multiple disks are not protected?


Re:

 

"Select here any folders (shares) you want to exclude from the automatic hashing and verification functionality. A folder existing on multiple disks will be skipped on any disk where it is present."

 

Does this mean shares spanning multiple disks are not protected?

No. It just means that if you say a share is not to be protected by the hashing/verification functionality, then it makes no difference whether it is on one disk or multiple disks.

Re:

 

"Select here any folders (shares) you want to exclude from the automatic hashing and verification functionality. A folder existing on multiple disks will be skipped on any disk where it is present."

 

Does this mean shares spanning across multiple disk are not protected?

No. It just means that if you say a share is not to be protected by the hashing/verification functionality, then it makes no difference whether it is on one disk or multiple disks.

 

 

Thanks!


How much memory does this plugin use? I'm trying to estimate my memory requirements on a new build so I don't overbuy.

 

Would 2GB be sufficient if the only other plugins used are system utilities and cache_dirs? Does the memory requirement grow with array size? If so, how much for ~75TB and ~1 million files?


How much memory does this plugin use? I'm trying to estimate my memory requirements on a new build so I don't overbuy.

 

Would 2GB be sufficient if the only other plugins used are system utilities and cache_dirs? Does the memory requirement grow with array size? If so, how much for ~75TB and ~1 million files?

I would be surprised if that was enough RAM for cache_dirs to be effective with that number of files.

Good day Crew,

 

The scheduled BLAKE2 check ran last night and kicked off 3 errors (looks like email is working):

Feb 3 21:34:25 Tower bunker: error: BLAKE2 hash key mismatch, /mnt/disk3/Media/TV/ConMan/folder.jpg is corrupted

Feb 3 22:35:09 Tower bunker: error: BLAKE2 hash key mismatch, /mnt/disk3/Media/Preclear.txt is corrupted

Feb 3 22:35:09 Tower bunker: error: BLAKE2 hash key mismatch, /mnt/disk3/Media/backup_movies.sh is corrupted

 

All 3 are items that I altered yesterday:

folder.jpg is an updated image with the same filename

preclear.txt and backup_movies.sh are both files that I deleted

 

Given that folder.jpg is a different file with a different timestamp but the same name, is this flagged as a false positive?

Why flag deleted files?

 

Thanks


When I first installed this plugin I had it check all files (nothing excluded). The TimeMachine backups were a problem though. Any modified files would generate a warning. Because of this I added all the TM backups to excluded shares in settings. I still get warnings when this plugin runs though.

 

Example-

Event: unRAID file corruption
Subject: Notice [bRUNNHILDE] - bunker  command
Description: Found 839 files with BLAKE2 hash key mismatch
Importance: warning

BLAKE2 hash key mismatch (updated), /mnt/disk9/TM-Jasper/Jasper.sparsebundle/bands/114b was modified
BLAKE2 hash key mismatch (updated), /mnt/disk9/TM-Jasper/Jasper.sparsebundle/bands/1158 was modified
BLAKE2 hash key mismatch (updated), /mnt/disk9/TM-Jasper/Jasper.sparsebundle/bands/1162 was modified

TM-Jasper is excluded. I've double checked that in settings.

How do I get it to stop checking those files?


How much memory does this plugin use? I'm trying to estimate my memory requirements on a new build so I don't overbuy.

 

Would 2GB be sufficient if the only other plugins used are system utilities and cache_dirs? Does the memory requirement grow with array size? If so, how much for ~75TB and ~1 million files?

I would be surprised if that was enough RAM for cache_dirs to be effective with that number of files.

 

Yeah, I haven't seen conclusive evidence either way. My understanding is that it was a bigger deal when UnRAID was 32-bit and cache_dirs had to stay in the low_mem split. I've seen an estimate on this forum that each file entry probably takes no more than 256 bytes of memory or ~250MB per million files, which should comfortably fit now that cache_dirs has access to the entire system memory.

 

Dynamix File Integrity Plugin is still a major question mark for me. Other than CPU load, I haven't seen anyone mention resource usage. This is slightly off topic, but I've also never seen evidence whether dual channel memory actually makes a difference in UnRAID or any of its plugins.


This is slightly off topic, but I've also never seen evidence whether dual channel memory actually makes a difference in UnRAID or any of its plugins.

 

Dual channel helps parity checks on servers with many disks (12+). If I'm remembering correctly, I got an increase of about 10 to 15% on an Intel server and a little more on AMD.


This is slightly off topic, but I've also never seen evidence whether dual channel memory actually makes a difference in UnRAID or any of its plugins.

 

Dual channel helps parity checks on servers with many disks (12+). If I'm remembering correctly, I got an increase of about 10 to 15% on an Intel server and a little more on AMD.

 

Do you think it depends on disk count or total bandwidth? In other words, would 1x 8TB drive see the same benefit as 2x 4TB drives? Dual channel seems like it would help my anemic CPU deal with the I/O of parity operations, but yours is the first evidence I've seen.

 

If you remember from the other thread, I'm the guy hoping to jam 16TB drives into an N40L.  ;D


How much memory does this plugin use? I'm trying to estimate my memory requirements on a new build so I don't overbuy.

 

Would 2GB be sufficient if the only other plugins used are system utilities and cache_dirs? Does the memory requirement grow with array size? If so, how much for ~75TB and ~1 million files?

 

If running 6+, I would go no less than 4GB of RAM, even if lightly loaded.


Over in this thread:

  https://lime-technology.com/forum/index.php?topic=46295.0

I documented the process I went through to defragment an XFS array drive.

 

To streamline this for other people, it would be best if this plugin could automatically ignore:

  • A root directory of .fsr  (when you defrag a whole disk, it puts temp files in this directory)
  • Any files that start with .fsr (when you defrag a specific file, it creates a temp file starting with .fsr)

For external confirmation of this, see the Notes section of this page: http://linux.die.net/man/8/xfs_fsr
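Until the plugin learns these exclusions natively, a manual hashing pass could skip xfs_fsr's artifacts with a find filter. Everything below is illustrative (scratch paths, made-up temp-file names following the .fsr convention described above):

```shell
# Illustrative only: select files for hashing while skipping xfs_fsr's
# temporary artifacts (a top-level .fsr directory and ".fsr*" temp files).
d=$(mktemp -d)
mkdir "$d/.fsr"
touch "$d/.fsr/ag0" "$d/.fsrmovie.mkv.1234" "$d/movie.mkv"
find "$d" -type f ! -path '*/.fsr/*' ! -name '.fsr*' -printf '%f\n'
rm -r "$d"
```

Only movie.mkv is emitted; both the .fsr directory contents and the per-file temp name are filtered out.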

 

It would also be helpful to have an option to disable the cron job during a defrag, similar to how there is an option to not run it during a parity check.  Or maybe handle both with a single option?

 

Thanks for considering it!


When I first installed this plugin I had it check all files (nothing excluded). The TimeMachine backups were a problem though. Any modified files would generate a warning. Because of this I added all the TM backups to excluded shares in settings. I still get warnings when this plugin runs though.

 

Example-

Event: unRAID file corruption
Subject: Notice [bRUNNHILDE] - bunker  command
Description: Found 839 files with BLAKE2 hash key mismatch
Importance: warning

BLAKE2 hash key mismatch (updated), /mnt/disk9/TM-Jasper/Jasper.sparsebundle/bands/114b was modified
BLAKE2 hash key mismatch (updated), /mnt/disk9/TM-Jasper/Jasper.sparsebundle/bands/1158 was modified
BLAKE2 hash key mismatch (updated), /mnt/disk9/TM-Jasper/Jasper.sparsebundle/bands/1162 was modified

TM-Jasper is excluded. I've double checked that in settings.

How do I get it to stop checking those files?

I've tried everything I can think of: deleting the exported hashes, removing the hashes, even uninstalling and deleting the plugin from the flash. After I reinstall, the plugin still scans the excluded shares. The exclusions are listed on the settings page and in the .ini file. Evidently the exclusion setting isn't working.

