bonienl Posted December 18, 2017 (Author)
If you haven't changed the hash method, the extended attribute is recognized and "build" skips those files. When the hash method has changed, it will need to rebuild everything, and the new hash key is added next to the existing key. E.g. a file is allowed to have both a SHA256 and an MD5 hash, but only one will be actively used and maintained.
almarma Posted December 19, 2017
I think this app is missing something in the initial post with the instructions: once an error is found, how do you proceed? I have a weekly task to check my file integrity, and I got a notification that errors were found with some files. Nothing else. Then I opened the plugin page and I don't know what to do right now :(. I don't know where to click or where to look for the damaged files.
digitalfixer Posted December 19, 2017
The odd time this has happened to me, I found there is a log file listing the damaged files. Under Tools/File Integrity, click on the folder next to the log files. Kevin.
Krzaku Posted January 19, 2018
@bonienl First of all, thanks for this plugin. I'm using it on my array and it's working fine with one exception: it has trouble checking hashes in my EncFS directory. When the verify task finishes, I always get some hash mismatches on the encrypted files, and I'm sure the decrypted files are fine. This might not even have anything to do with this plugin, but have you encountered something like this?
JustinChase Posted January 21, 2018
I just added a new disk and wanted to be sure it's protected. I checked, and all the other disks showed green checks for "Build up-to-date" and also for "Export up-to-date". All good so far. I ran the build for the new disk: green check. I ran the export for that disk: blue X. It's brand new to the array, and blank, so fine, nothing to export, I thought.

Then I noticed the "include duplicate file hashes in Find command" checkbox and thought "what's this?", so I checked it and hit find. I saw that it searched for duplicate files; awesome!! I realized that I had lots of duplicated music files in one folder, so I used Midnight Commander to move them from one disk to the other and told it to overwrite. I confirmed that the folder was no longer on the old disk I copied from; perfect.

I then came back to the tool and re-ran the find, but the list still shows these same files as duplicates. I didn't expect this, since I just moved them, and also because it's telling me the builds are up to date on both disks. I assume that I need to rebuild the hashes, so I'm doing that now, but I was concerned that it says up-to-date when it seems it maybe isn't. I just wanted to confirm where it's actually checking for duplicates, because MC shows one of the duplicated folders isn't on the disk this tool says it's on. Where did I go wrong, or is there some bug here?
Vr2Io Posted January 22, 2018
5 hours ago, JustinChase said: I just added a new disk and wanted to be sure it's protected. […] Where did I go wrong; or is there some bug here?
You need to export the hashes again to update them.
JustinChase Posted January 22, 2018
I got a warning this morning that said: unRAID file corruption: 1-22-2018 11:27AM Notice [Media] - bunker verify command Found 6 files with SHA256 hash key mismatch. I'm not sure it's this plugin throwing the warning, nor am I sure how to find these 6 files. The Fix Common Problems tool isn't showing any errors, and I'm not sure what else might notice such a thing. Suggestions on how to proceed?
bonienl Posted January 22, 2018 (Author)
1 hour ago, JustinChase said: Suggestions on how to proceed?
Look in your syslog; it will mention which files have a mismatch.
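One way to pull those entries out of the syslog is a simple grep. This is a sketch; the "hash key mismatch" wording is an assumption based on the notification quoted above, so adjust the pattern if your syslog lines differ:

```shell
# Search the syslog (standard location on Unraid) for bunker's mismatch
# reports; each matching line should name a file that failed verification.
grep -i "hash key mismatch" /var/log/syslog 2>/dev/null \
  || echo "no mismatch entries found (or no syslog at this path)"
```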
laterdaze Posted January 22, 2018
Minor nit: inconsistent build status indicated.
mbc0 Posted January 24, 2018
Hi, sorry to be a complete noob, but I have been reading through and wondering if somebody can explain in a nutshell what this tool actually does? Thanks, and again sorry :-(
Marv Posted January 30, 2018
Hi, today I replaced my cache drive by moving my cache shares onto the array and then back again to the new cache drive after I assigned it. Unfortunately, I forgot that by doing so I generated hash values for all of my cache files the moment they were moved to one of my data disks. Now, what would be the best way to get rid of those hash values again, as the files are back on my cache drive now? Does it even matter? Somehow it bothers me.
Squid Posted January 30, 2018
AFAIK, the hash values are not preserved when moving files, not to mention that Dynamix FIP wouldn't care about files on the cache drive anyway, since it doesn't touch or check it.
Marv Posted January 31, 2018 (edited)
8 hours ago, Squid said: AFAIK, the hash values are not saved when moving files […]
Yeah, I know that it doesn't care about cache files. But when the files were moved to the array, the plugin created hash values for them, as it should, of course. Can I test by just copying some files from my cache drive to an excluded array folder and then using the clean function to see whether the hash values were preserved after moving?
Vr2Io Posted January 31, 2018
You can use "getfattr -d <filename>" to check whether the hash value is saved.
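A quick self-contained demonstration of what that looks like; the user.sha256 attribute name and the truncated value are illustrative (the real key depends on the hash method you selected in the plugin):

```shell
# Create a scratch file and give it a hash attribute the way the plugin
# would, then dump its user-namespace extended attributes with getfattr -d.
f=$(mktemp)
echo "demo contents" > "$f"
if setfattr -n user.sha256 -v "5891b5b522d5df08..." "$f" 2>/dev/null; then
  getfattr -d "$f"   # shows the stored user.sha256 attribute
fi
rm -f "$f"
```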
Marv Posted January 31, 2018
42 minutes ago, Benson said: You can use "getfattr -d xxxxxx" to check […]
Nice, I'll test this later. Thanks.
mbc0 Posted January 31, 2018
On 24/01/2018 at 1:04 AM, mbc0 said: Hi, Sorry to be a complete noob but I have been reading though and wondering if somebody can explain in a nutshell what this tool actually does? […]
bump
trurl Posted January 31, 2018
On 1/23/2018 at 8:04 PM, mbc0 said: Hi, Sorry to be a complete noob but I have been reading though […]
In a nutshell, it creates and compares checksums of your files. See the first post in this thread for details on this plugin, and this Wikipedia article for more about checksums in general: https://en.wikipedia.org/wiki/Checksum
mbc0 Posted January 31, 2018
Many thanks for the reply. I understood that it creates checksums; what I don't understand is whether it auto-corrects or just alerts you to corrupt files? Thanks again!
trurl Posted January 31, 2018
1 minute ago, mbc0 said: Many Thanks for the reply […]
Checksums don't contain enough data to correct anything, only to detect differences. It verifies on a schedule and alerts you. If differences are detected, you will have to rely on your backups for correction.
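The detect-but-not-correct distinction can be illustrated with a few lines of Python: a digest changes completely when even one byte differs, but it carries no information about how to restore the original data:

```python
import hashlib

def sha256_of(data: bytes) -> str:
    """Return the SHA256 hex digest of a byte string."""
    return hashlib.sha256(data).hexdigest()

original = b"important file contents"
corrupted = b"importanT file contents"  # a single flipped byte

# The digests differ, so the corruption is detected...
assert sha256_of(original) != sha256_of(corrupted)
# ...but the 64-character digest alone cannot reconstruct the original;
# you need a backup (or parity/erasure data) for that.
print(sha256_of(original)[:16], "vs", sha256_of(corrupted)[:16])
```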
mbc0 Posted January 31, 2018
Perfect! Thank you trurl :-)
Marv Posted January 31, 2018 (edited)
19 hours ago, Marv said: Hi, today I replaced my cache drive by moving my cache shares onto the array […]
7 hours ago, Benson said: You can use "getfattr -d xxxxxx" to check […]
So I tested with the above command, and the files I moved from cache to array and back to cache again still have the hash values saved with them. Is there an easy way to remove them without having to move them back to the array?
bonienl Posted January 31, 2018 (Author)
Extended attributes are not stored in the file itself, but are part of the file system. You can delete extended attributes with setfattr -x <attribute> <filename>
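For Marv's situation, that command can be applied to a whole folder with find. This is a sketch, assuming the plugin stored its key as user.sha256 and the files live under /mnt/cache (confirm the real attribute name with getfattr -d on one file first, and adjust the path):

```shell
# Strip the stored hash attribute from all files under a directory tree.
# 2>/dev/null hides errors from files that never had the attribute.
TARGET=/mnt/cache   # adjust to the share you want to clean
if [ -d "$TARGET" ]; then
  find "$TARGET" -type f -exec setfattr -x user.sha256 {} \; 2>/dev/null
fi
```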
S80_UK Posted February 5, 2018
I have the File Integrity plugin set up and working. Thank you for all the work that you've put into this. I have a very simple setup with 8 data drives, 1 parity and a cache drive. As part of getting to know what the tool does, I have set it up to check one of the data drives each night, and it does what I expect for the most part. No errors are reported. But I am puzzled by the fact that the checking process appears to cause writes to the disk being checked (and an equivalent number of writes to the parity drive). I would have thought that the checking function would be read-only as far as the data drives are concerned: reading the data of each file, calculating the hash, and verifying against the previously stored version in the extended attributes. Is this behaviour of writes to the data disk expected? Thanks.
Vr2Io Posted February 6, 2018 (edited)
FIP won't cause writes to the data disk during a check, just some right after the hash build.