bonienl Posted September 18, 2019 (Author)
48 minutes ago, EdgarWallace said: "Oh great @bonienl. Thanks a lot."
I made a typo: instead of "-a" (add) you should use "-u" (update).
EdgarWallace Posted September 19, 2019
Great @bonienl, it worked very well, thank you very much:
root@Tower:~# /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker -b2 -u "/mnt/disk2/iTunes/Music/Eric Clapton/I Still Do"
Finished - verified 12 files, skipped 0 files. Found: 0 mismatches, 1 corruption (updated). Duration: 00:00:04. Average speed: 149 MB/s
Ouze Posted October 1, 2019
I am doing a mass move from my old unraid server over to my new one. I expected some likely file transfer damage, so it was no surprise when I did a manual check and saw errors. What was a surprise was how... incomplete those errors were.
SHA256 hash key mismatch, 1e03.mkv is corrupted
SHA256 hash key mismatch, 20p.AMZN.WEB-DL.mkv is corrupted
SHA256 hash key mismatch, DTV.XviD-NoTV.avi is corrupted
SHA256 hash key mismatch, es s02e06.avi is corrupted
I saw a few pages back that someone else had a similar problem, and the instructions were to go into the hash log for the disk, find candidates, then run b2sum (path) against each candidate. For some of these that is easy, but for the first and second ones there are a lot of candidates. Is there a less tedious way of configuring this output so I don't need to check 39 false positives to find the one I want manually?
Also, that last one has only a single candidate, so I checked it. Those files have individual md5 checksums, one for each file. That directory passes an md5 check on both the source and the allegedly corrupt destination. I don't understand how that can be. I don't mean that I don't believe it; I just don't quite understand it.
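The candidate search described above can be scripted instead of done by hand. A minimal sketch, assuming the hash export files use the b2sum-style "HASH *PATH" line format; everything here is fabricated for the demo (the full movie path is invented, and EXPORT_DIR is a temp dir standing in for the plugin's export folder on the flash drive):

```shell
# Demo: narrow a truncated "corrupted" filename down to full candidate paths.
# All data below is made up; on a real server, point EXPORT_DIR at the
# plugin's hash export files and set SUFFIX to the tail the error printed.
EXPORT_DIR=$(mktemp -d)
printf '%s *%s\n' \
  'deadbeef' '/mnt/disk2/Movies/The.Internal.720p.bluray.x264-reward.mkv' \
  'cafef00d' '/mnt/disk3/TV/unrelated.episode.mkv' \
  > "$EXPORT_DIR/disk2.export.hash"
SUFFIX='rnal.720p.bluray.x264-reward.mkv'
# Keep only the paths whose tail matches the truncated name:
CANDIDATES=$(grep -h "$SUFFIX" "$EXPORT_DIR"/*.hash | sed 's/^[^*]*\*//')
echo "$CANDIDATES"
# Each surviving candidate can then be re-hashed, e.g.:
#   while IFS= read -r p; do b2sum "$p"; done  (feeding it $CANDIDATES)
rm -rf "$EXPORT_DIR"
```

This turns "check 39 candidates manually" into one grep plus one re-hash loop over whatever survives the filter.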
_Shorty Posted October 1, 2019
I suspect this plugin can't be trusted. Whenever I would encounter "corrupted" files I could not find any issue when hashing with some other tool. When multiple hashing tools agree, and this plugin disagrees, you can deduce that there seems to be some issue with the plugin.
itimpi Posted October 1, 2019
42 minutes ago, _Shorty said: "I suspect this plugin can't be trusted. ..."
I think the error message means that the stored checksum does not agree with the current file. That can happen if for any reason a file gets updated without the plugin being notified, so that it never calculates a new checksum. This means you can occasionally get false 'corrupted' indications, but if the checksum agrees the file is almost certainly good.
_Shorty Posted October 1, 2019
Is there something else responsible for notifying the plugin of that, or is that its own responsibility? Seems to not be happening often enough to be a concern, anyway.
itimpi Posted October 1, 2019
37 minutes ago, _Shorty said: "Is there something else responsible for notifying the plugin of that, or is that its own responsibility?"
Yes - it is up to the Linux notify system to tell the plugin that a file has changed so that it knows it needs checking.
_Shorty Posted October 1, 2019
What makes this strange is that it happens with files that have been static for a very long time; nothing has updated them. In my particular case, while I have lots of backups on this unRAID machine that do get updated from time to time, I also use it as the storage pool for TV shows and movies, accessed by a home theatre PC running Kodi. Those files never change, yet I would get "corrupted" messages on some of them, too.
wesman Posted October 13, 2019
On 10/1/2019 at 10:17 AM, itimpi said: "Yes - it is up to the Linux notify system to tell the plugin that a file has changed so that it knows it needs checking."
Doesn't the file have a "Last Modified" tag? It would seem that if the file has been modified since the last checksum, the plugin could recalculate even when it was not notified by the Linux notify system.
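The mtime idea above can be sketched in a few lines of shell. This is a demo of the concept only, with a fabricated file and timestamps, not a description of how the plugin actually behaves: if the file's modification time is newer than the moment its hash was recorded, the stored hash is stale and should be recomputed rather than reported as corruption.

```shell
# Concept demo: tell a stale stored hash apart from real corruption
# by comparing the file's mtime against the hash's recording time.
f=$(mktemp)
echo "original content" > "$f"
RECORDED=$(date +%s)        # pretend the hash was computed at this moment
sleep 1
echo "new content" >> "$f"  # file modified later, hash never updated
MTIME=$(stat -c %Y "$f")    # the "Last Modified" timestamp
if [ "$MTIME" -gt "$RECORDED" ]; then
  VERDICT="stale-hash"      # recompute the checksum, don't flag corruption
else
  VERDICT="real-mismatch"   # mtime unchanged: a genuine corruption candidate
fi
echo "$VERDICT"
rm -f "$f"
```

The trade-off is that a corruption event which also bumps the mtime would slip through as "stale", which may be why a pure mtime check is not a full substitute for notifications.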
Ouze Posted October 14, 2019
On 10/1/2019 at 1:54 AM, Ouze said: "I am doing a mass move from my old unraid server over to my new one. ... Is there a less tedious way of configuring this output so I don't need to check 39 false positives to find the one I want manually?"
It happened again - I did a check and got the output:
"bunker: error: SHA256 hash key mismatch, rnal.720p.bluray.x264-reward.mkv is corrupted"
According to a search, that narrows it down to 125 files. There has got to be a way to get this plugin to output more useful (specific) information, right? Why are these error messages being truncated this way?
I did an md5 check against all 125 files' individual checksums and of course they pass just fine. These files were dropped onto the server once and have not been touched since. If this plugin is going to generate output as vague as this, and as prone to false positives as it seems to be, then ultimately it's just contributing noise, not value.
_Shorty Posted October 14, 2019
Like I say, while great in theory, the plugin just doesn't seem to be that useful given this issue. It randomly gives me "mismatches" on files that have not changed. It's not that it is missing a notification from the OS that a file has changed: the files haven't changed, and it still spits out messages about files being corrupt that are most definitely intact.
tazman Posted October 18, 2019
I have two disks, with 1 and 2 files respectively that fail to export. I understand that an export failure means no hash has been generated for those files. I ran a (re)Build on both drives but the error still persists. Here is one example: xfs_repair (without -n) did not do anything either. All files can be read and copied without problems and are identical to their sources. Note, though, that the folder actually has a whopping 25,000+ files in it.
When I ran into this problem I had thousands of export errors on several drives; after a Build I am down to those three. I also noticed that the number of files reported as having an export error was not identical to the number reported as added when I run the Build - Added is higher. I would have expected them to be the same, i.e. Export = cannot find the hash <=> Build = adding the hash.
I'd appreciate your advice on how to judge this, and which measures to take to investigate further or rectify it. Thanks!
Aderalia Posted October 18, 2019
Does this plugin make any sense to use with btrfs, or is the built-in btrfs scrub basically the same thing, since it should just check block CRCs? Sorry if this has been asked before, but the search doesn't allow me to search for a keyword inside a specific thread...
JorgeB Posted October 19, 2019
8 hours ago, Aderalia said: "is the built-in btrfs scrub basically the same thing since it should just check block CRCs?"
Basically the same function, but it can be a good idea to have another option for checking on a per-file basis. For example, for my movies and TV seasons I use corz to create checksums before moving the data to the server, then rely primarily on btrfs to know the data is intact, but I can use the file checksums if needed.
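The workflow JorgeB describes (per-file checksums created before the transfer, verified after it) can be reproduced with standard tools. A sketch using sha256sum in place of corz (a Windows utility), with fabricated file names and temp directories standing in for the source machine and the server:

```shell
# Demo: create per-file checksums before a transfer, verify them after.
SRC=$(mktemp -d)   # stands in for the source machine
DST=$(mktemp -d)   # stands in for the server share
echo "movie data" > "$SRC/film.mkv"
( cd "$SRC" && sha256sum film.mkv > film.mkv.sha256 )   # before the move
cp "$SRC/film.mkv" "$SRC/film.mkv.sha256" "$DST/"       # the transfer itself
RESULT=$( cd "$DST" && sha256sum -c film.mkv.sha256 )   # after: per-file check
echo "$RESULT"
rm -rf "$SRC" "$DST"
```

Unlike a btrfs scrub, which verifies blocks against checksums computed on the server, this catches corruption introduced during the transfer itself, since the reference hash was taken on the source side.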
Vr2Io Posted October 20, 2019
On 10/18/2019 at 7:31 PM, tazman said: "I have two disks with 1 and 2 files that fail to export. I understand that an export failure is due to the fact that no hash has been generated for them. ..."
Long story short, the error does not mean a filesystem fault. It is likely because the file has an extended attribute but falls under the exclude filter, which causes a conflict. A way to clear those errors is to use the "Clear" and "Remove" functions to clear the stale data (the extended attributes). For example: BUILD -> CLEAR -> EXPORT -> REMOVE -> IMPORT (please back up all hash export files first).
Some feedback on FIP: to my understanding this plugin just calls "bunker", and "bunker" is like a black box, so any problem/bug in bunker can't be fixed by the plugin itself.
JesterEE Posted October 23, 2019
I ran a full check today and the plugin found some corruption errors. I checked the system log and found that the paths of the files were not complete, making it hard to find the offending files for recovery. Here is an example of the output with a mangled path:
Oct 22 17:58:39 Cogsworth bunker: error: SHA256 hash key mismatch, t/Filters/data_processing/coverlap_g_sdb.fil is corrupted
Has anyone else seen this before? Is it a plugin bug or something I did wrong?
-JesterEE
_Shorty Posted October 23, 2019
Read the last couple of pages.
JesterEE Posted October 23, 2019
13 minutes ago, _Shorty said: "Read the last couple of pages."
Ok, ya, just did that, and I see someone reporting a possible unicode display issue. Though, between my last post and now, I did a hash file export and got additional errors for the same files, this time with the correct path:
Oct 22 21:28:22 Cogsworth bunker: error: no export of file: /mnt/disk4/Users/bsmith/Eng_archive/XMod/mach/v5/sys/depot/Filters/data_processing/coverlap_g_sdb.fil
So, script bug?
-JesterEE
_Shorty Posted October 23, 2019
I would guess so. While it appears to work correctly most of the time for most everyone, some of us have run into erroneous corrupt-file messages, among other things. In my case the "corrupt" files still match the same hashes in other hash utilities, so I'm not sure it's worth bothering with anymore, at least for the moment. Naturally, it's hard to fix if the dev can't replicate it.
autumnwalker Posted October 25, 2019
Perhaps I am missing something, but I do not see "appdata" or "system" shares under "excluded folders and files". Are these excluded by default? The help file does not indicate either way.
itimpi Posted October 25, 2019
2 hours ago, autumnwalker said: "Perhaps I am missing something, but I do not see "appdata" or "system" shares under "excluded folders and files". Are these excluded by default?"
Where do you have those located? Mine are on the cache, so they are automatically ignored anyway.
autumnwalker Posted October 25, 2019
/mnt/user/appdata is set to cache: prefer; /mnt/user/system is cache: only. It's possible with my configuration that appdata could end up on an array disk - it would be useful to be able to ignore that share.
John_M Posted October 27, 2019
Since upgrading to the Unraid 6.8-rc series I'm seeing the weekly scheduled integrity check abort with exit status 126:
Oct 27 06:00:01 Mandaue crond[1772]: exit status 126 from user root /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null
@bonienl Likely a consequence of the permissions change on the boot flash device that now prevents scripts from being executed directly from the flash?
bonienl Posted October 27, 2019 (Author)
26 minutes ago, John_M said: "Likely a consequence of the permissions change on the boot flash device that now prevents scripts from being executed directly from the flash?"
You are right. I need to correct that. Thanks.
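Exit status 126 is the shell's "found but not executable" code, which fits John_M's diagnosis. A small demo of the failure mode and the usual interim workaround (handing the script to the shell explicitly instead of executing it directly); the script content here is made up, and whether this is how the plugin was ultimately fixed is not stated in the thread:

```shell
# Demo: a script without execute permission (as on a flash device that no
# longer allows direct execution) exits 126 when run directly, but works
# when passed to the shell as an argument.
S=$(mktemp)
echo 'echo integrity check ran' > "$S"
chmod 644 "$S"               # no execute bit, like the flash-hosted scripts
"$S" 2>/dev/null
STATUS=$?                    # 126: command found but not executable
OUT=$(sh "$S")               # workaround: let the shell read the file
echo "direct exec: $STATUS; via sh: $OUT"
rm -f "$S"
```

Applied to the cron line above, the equivalent workaround would be prefixing the script path with the shell (or copying the script off the flash device before running it).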
dbinott Posted December 6, 2019
So this is weird. I installed this plugin a couple of days ago. I built disk 1 one day, and disks 2 and 3 yesterday. When I came to look at the status, it showed all 7 disks as built. What can I look at to verify whether this is actually the case?