John_M Posted January 20, 2020
On 1/19/2020 at 1:26 PM, _Shorty said: Sorry, but randomly giving incorrect hash errors on files that haven't changed is not something that I would call working well.
That's not very constructive. It works well for me, apart from the problem with the cron job not running, mentioned above. If you've experienced problems, perhaps there's an underlying issue with your server that needs to be fixed.
_Shorty Posted January 20, 2020
As more than one user has said, when this plugin reports hash errors but external tools say the hashes/files have not changed, you cannot blame anything but the plugin. My server is 100% fine; if it were not, other issues would be rearing their heads. This plugin incorrectly stating that files have changed is a problem with the plugin, and again, I am not the only person who has reported it. It's not as if I'm here fuming mad about an issue the devs are unaware of; I'm simply reporting, again, that the plugin does not properly do what it is designed to do. It doesn't really matter to me whether it ever works, because the more I think about it, the less purpose I see in it: other hash tools report unchanged hashes, and parity does not complain. So is my file actually corrupt just because the plugin states it is? Incredibly unlikely. If half a dozen hash tools all report the same hash, at some point you have to believe the hash is correct. What bothers me is person after person trickling into this thread every now and then, seemingly full of anxiety and in a panic, reporting that their files are corrupt, when those files are more than likely 100% fine and the plugin is simply malfunctioning. Reporting that it does not function correctly for everyone, and that it shouldn't be trusted as a result, is most certainly constructive.
superloopy1 Posted January 20, 2020
Shorty, my problem is not what you are describing, which I believe is the reporting of 'corrupt' files. In my case the hashes are not being built in the first place, so any subsequent 'export' jobs fail for those files. Why can't they be built? That's my question. Obviously I can just ignore them, as there are only a few, but an answer as to why the extended attributes can't be set on only a handful of files would be more helpful. Other than that the plugin seems to be working just fine. No corrupt files for me.
unraidun Posted February 6, 2020
Plugin looks great, just had a few questions:
- I assume there is one hash file per actual file? And can we specify where these hashes are stored (so they can be backed up)?
- What does the performance impact look like? Let's say we add a 1 GB file to a typical 200 MB/s hard disk. How long will it take to generate the hash (assuming this happens automatically behind the scenes)? How long will it take to validate that same file?
- Any differences between this and btrfs scrub (in terms of detecting file corruption/early drive failure)?
Thanks
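On the performance question, a rough estimate is possible under the assumption that hashing is read-bound, i.e. the CPU keeps up with the disk (the numbers below just restate the figures from the question, not measured values):

```shell
# Back-of-envelope only; assumes hashing is limited by disk reads,
# so time ~= file size / sequential disk throughput.
size_mb=1024          # the 1 GB file from the question, in MB
throughput_mb_s=200   # the typical hard disk speed from the question
echo "$(( size_mb / throughput_mb_s )) seconds per full read"  # -> 5 seconds per full read
```

Validating should cost about the same as generating, since both are one full sequential read of the file.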
LEKO Posted February 8, 2020
Feb 8 15:55:49 Tower bunker: warning: SHA256 hash key mismatch (updated), /mnt/disk1/domains/CentOS/vdisk1.img was modified
Feb 8 15:58:50 Tower bunker: error: SHA256 hash key mismatch, /mnt/disk1/system/docker/docker.img is corrupted
Feb 8 15:58:55 Tower bunker: warning: SHA256 hash key mismatch (updated), /mnt/disk1/system/libvirt/libvirt.img was modified
Feb 8 15:58:55 Tower bunker: warning: 2 corrections made, export file needs to be updated
^^^^ Just got these logs. On first check, I thought I had a problem. Then I noticed it was normal for these hashes to fail, because the VM image files change quite a bit. Would it be possible to change the default behaviour of this plugin to not check the integrity of those files? At the very least, users should be warned that checking the integrity of all files can generate false positives.
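The behaviour LEKO describes can be reproduced outside the plugin: any file written to between hashing runs yields a different digest, which is exactly what happens to a running VM's disk image. A minimal sketch, using a temp file as a stand-in for vdisk1.img:

```shell
img=$(mktemp)                                # stand-in for a VM disk image
printf 'initial disk contents' > "$img"
before=$(sha256sum "$img" | awk '{print $1}')

printf 'guest OS writes a block' >> "$img"   # the VM touches its disk
after=$(sha256sum "$img" | awk '{print $1}')

# The mismatch is real, not a hashing bug: the file genuinely changed.
[ "$before" = "$after" ] && echo "unchanged" || echo "modified"
rm -f "$img"
```

This prints "modified", which is why always-open images trip the check even when nothing is corrupt.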
BRiT Posted February 8, 2020
But they're not false positives at all. The files really don't match the checksums.
hotio Posted February 21, 2020
I can confirm that this plugin is giving false positives: I got several "corrupted" files when doing a check. I ran b2sum on a "corrupted" file and compared the result with a known good version of the file; they matched. The b2sum in the hash export file is completely different (double-checked this too). For some reason it's exporting hashes that are not correct.
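hotio's manual check can be scripted. The sketch below mirrors that comparison: hash the file at "export" time, then re-hash later and compare, with a temp file and a shell variable standing in for the real file and the plugin's export entry (both are illustrative, and b2sum here is the GNU coreutils tool, as in the post):

```shell
f=$(mktemp)
printf 'known good contents\n' > "$f"

recorded=$(b2sum "$f" | awk '{print $1}')   # hash as recorded at export time

# ...later: re-hash the same file and compare, as a manual integrity check
current=$(b2sum "$f" | awk '{print $1}')

if [ "$recorded" = "$current" ]; then
  echo "hashes match: file unchanged"
else
  echo "hash mismatch: investigate"
fi
rm -f "$f"
```

If this prints a match while the plugin's export file holds a different digest, the problem is in what was exported, not in the file.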
Jerky_san Posted March 8, 2020
So, having an issue or two. First, full disclosure:
Processor: 2990WX, 24 physical cores, 48 threads available for use
RAM: 128 GB
Hashing: BLAKE2
I was testing how many disks I could check at once. I started with all of them and began killing checks, starting with disk 1. However, I noticed that disk 1's job never stopped for some reason; it kept running. That's my first issue, and it appears repeatable: when I kill disk 1, it appears to kill another job instead. Problem two is that I can seemingly only do 5 disks at a time if I want any kind of speed. Beyond that, the MB checked per second drops substantially; if I do all drives at once it drops to about 10 MB/s (18 drives). Looking at top, I noticed two processes seem to run at once per drive, so I cut it down to 12 drives, thinking 12x2 = 24, and since no one core was maxing out it should be great. But that only raised it to about 30 MB/s per drive, a 3x increase, and I was hoping that with as many physical cores as I have it could do more. Is this expected, or have I perhaps got something configured wrong?
Vr2Io Posted March 8, 2020 (edited)
30 minutes ago, Jerky_san said: It has kept running.
Sometimes it does that; as I understand it, this plugin just submits commands to "bunker".
30 minutes ago, Jerky_san said: 10mb a second(18 Drives)
30 minutes ago, Jerky_san said: 30 megabytes a second
18 x 10 MB/s = 180 MB/s
12 x 30 MB/s = 360 MB/s
How about the speed if you hash 4 or 5 disks at a time? In any case, the bottleneck is not the CPU; it's an array performance issue.
Edited March 8, 2020 by Benson
Jerky_san Posted March 8, 2020 (edited)
14 minutes ago, Benson said: How about the speed if you hash on 4 or 5 disks at a time?
5 x 160-190 MB/s = 800-950 MB/s
Also, during things like parity checks (with the extra 2 drives for parity) it averages about 110 MB/s check speed, which is a limitation of the backplane, since the backplane only has a single SAS connection (6 Gb x 4, plus overhead).
Edited March 8, 2020 by Jerky_san
Vr2Io Posted March 8, 2020 (edited)
20 minutes ago, Jerky_san said: 5x 160-190MB/s = 800-950MB/s
That's good; I get about 600-700 MB/s (max) in the array pool. I get 2 GB+/s with the array pool plus several raid0 pools, which is why I said it's not a CPU problem. Below is the hash check command; "pool1.hash" is the hash file for a disk pool and can be any name.
/bin/bash /usr/local/emhttp/plugins/dynamix.file.integrity/scripts/bunker -Cx -md5 -L -n -f /boot/config/plugins/dynamix.file.integrity/export/pool1.hash
Edited March 8, 2020 by Benson
Jerky_san Posted March 8, 2020
57 minutes ago, Benson said: I got 2GB+/s with array pool + several raid0 pool, so I said not CPU problem.
Turns out it is CPU-limited after all. The processes called "unraidDD", each followed by what I assume is the disk number, are all spiking to 100% constantly. It also appears there are more b2sum processes running than originally meets the eye. I'm unsure why the unraidDD process requires so much processing power, but I assume it has something to do with looking up files on that particular disk.
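The "two processes per drive" pattern seen in top is consistent with a reader process feeding a separate hashing process. A rough sketch of what one-job-per-disk parallel hashing looks like, with temp files as stand-ins for the array disks (everything here is illustrative, not the plugin's actual internals):

```shell
# One background hashing job per "disk"; temp files stand in for
# /mnt/disk1..disk3 on a real array.
for i in 1 2 3; do
  f=$(mktemp)
  head -c $((4 * 1024 * 1024)) /dev/zero > "$f"   # 4 MiB stand-in per disk
  # "cat | b2sum" mimics a reader process piping into a hasher process,
  # which shows up as two processes per drive in top
  ( cat "$f" | b2sum > /dev/null && rm -f "$f" ) &
done
wait
echo "all per-disk hashing jobs finished"
```

On real hardware these jobs contend for the same controller/backplane bandwidth, which is one reason per-disk MB/s can fall as more disks are added even with idle cores.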
Mihle Posted March 9, 2020
I run parity once a month, and this once a week, is that fine?
John_M Posted March 10, 2020
On 3/8/2020 at 4:39 PM, Jerky_san said: I'm unsure though why the unraidDD process is requiring so much processing power but assume it has something to do with the lookup of files on that particular disk.
What does htop show? The threads are probably spending most of their time waiting for I/O.
John_M Posted March 10, 2020
5 hours ago, Mihle said: I run parity once a month, and this once a week, is that fine?
That's what I do. I don't check all disks every week though. A quarter of them are checked each week.
John_M Posted March 10, 2020
On 1/19/2020 at 11:44 AM, John_M said: It will because the plugin is broken at the moment. It's a trivial thing to fix but bonienl is a busy guy.
Now fixed. Thank you very much.
Mihle Posted March 10, 2020
13 hours ago, John_M said: That's what I do. I don't check all disks every week though. A quarter of them are checked each week.
Oh, OK, I'll do once a week then. I have only one data disk (and one parity), so it will be the same one every week until I get more. By the way, I noticed I had it set to once per day; I remember someone told me that's what I should do, but I don't remember why, and it seemed a bit too much and a cause of unnecessary drive head movement. That's why I asked here; I'll set it to once a week or less.
John_M Posted March 13, 2020
On 3/10/2020 at 3:00 PM, Mihle said: will do once a week now then, have only one data disk
I'd just schedule it for once a month, making sure it doesn't clash with the parity check. I think checking every file every week is too often, which is why I check a quarter of mine each week (so each file gets checked once every four weeks and I'm simply splitting the load), but YMMV.
Mihle Posted March 14, 2020
On 3/13/2020 at 4:32 PM, John_M said: I'd just schedule it for once a month, making sure it doesn't clash with the parity check.
OK, thanks
kubed_zero Posted April 1, 2020 (edited)
I have been using this plugin without issues for years. In the past few months (possibly after the 6.8 update, I can't remember for sure) my scheduled job no longer runs. I've confirmed that it is still present in the crontab file, that the cron line is set to the frequency I expect, and that running the command manually works fine. I can also trigger integrity checks without issue through the web UI. All that said, regardless of whether I change the job to daily, weekly, or monthly, it doesn't seem to make a difference. Curious if anyone else is running across this issue with scheduled integrity checks.
Nevermind, I found the answer a couple of pages back: the plugin tries to write a cron line that calls the shell script directly, but 6.8 disallows that and requires calling bash or perl (or whatever the shebang line specifies) before the actual file. That means
10 0 * * * /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null
needs to change to
10 0 * * * /bin/bash /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null
In place of /bin/bash it could be /usr/bin/bash, or just bash (relying on the environment path to determine which bash to actually run), or sh instead of bash. I was on the 3/08/2020 plugin version and see a 3/31 version was just released, so I'm going to try that and post back if it works now.
Another update: I just updated the plugin to 3/31 and toggled the integrity check from weekly to daily and back to weekly so it would regenerate the cron file in the /boot/config/plugins/dynamix.file.integrity/ directory, and the file now has what I described earlier: "bash" before the actual shell script. Not sure if a reboot is needed to apply this new cron file to the actual crontab (can't remember where that lives), but I'm assuming it won't need one just to adjust a setting.
Edited April 1, 2020 by kubed_zero: looked at previous pages
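The difference between the two cron lines above can be checked mechanically: a line that names the script directly only works if the system will execute it as-is, while a line that names an interpreter always works. A small sketch of such a check (the helper function and its wording are illustrative, not part of the plugin):

```shell
# The two cron lines from the post: 'old' is what the pre-3/31 plugin
# wrote, 'new' is the 6.8-compatible form with an explicit interpreter.
old='10 0 * * * /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null'
new='10 0 * * * /bin/bash /boot/config/plugins/dynamix.file.integrity/integrity-check.sh &> /dev/null'

# Sanity check for a regenerated cron file: does the command field name
# an interpreter, or rely on the script being directly executable?
check() {
  case "$1" in
    *' /bin/bash '* | *' /usr/bin/bash '* | *' bash '* | *' sh '*)
      echo "explicit interpreter: OK on 6.8" ;;
    *)
      echo "direct script call: broken on 6.8" ;;
  esac
}
check "$old"   # -> direct script call: broken on 6.8
check "$new"   # -> explicit interpreter: OK on 6.8
```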
Blaqwolf Posted April 16, 2020
Hey folks, just had my first "error" and I'm thinking I haven't set something up correctly. Where do I go to find the file that has this error? I looked in the logs folder but it is empty. I changed logging to output to the syslog as well as to a file, but I still don't see any logs. Thanks
JustinChase Posted April 21, 2020
I got a server popup yesterday with the following error:
found 1 file with sha256 hash key mismatch
No more information, and I can't find any errors or further details on the plugin page. How do I know what the issue is and how to fix it? Thanks
JustinChase Posted April 27, 2020
If I can't find out which file(s) have gone bad, I'm not sure this really helps. It only causes me stress knowing something is 'broke'. Is there a way to know what's not right? Or should I just uninstall the plugin to not have the stress of knowing there is an un-fixable problem with my data?
Vr2Io Posted April 27, 2020
1 hour ago, JustinChase said: Is there a way to know what's not right?
You can find the file details in the "log tab".
JustinChase Posted April 27, 2020
I couldn't find any "log tab", but there is a "log files" folder on the plugin page. The only thing in that folder is a file called duplicate_file_hashes.txt, which only lists all the duplicate hashes it found; nothing about any errors, and no help figuring out which file triggered the error.
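Since bunker writes each mismatch to the syslog (see the entries LEKO posted earlier in the thread), the affected paths can usually be recovered with grep even when the plugin page shows nothing. A sketch against a simulated excerpt built from those quoted lines; on a live server, run grep against the log itself instead of the here-doc:

```shell
# On the server: grep -i 'hash key mismatch' /var/log/syslog
# Below, a here-doc stands in for the log; the bunker lines are the
# real entries quoted earlier in the thread.
grep 'hash key mismatch' <<'EOF'
Feb 8 15:55:49 Tower bunker: warning: SHA256 hash key mismatch (updated), /mnt/disk1/domains/CentOS/vdisk1.img was modified
Feb 8 15:58:50 Tower bunker: error: SHA256 hash key mismatch, /mnt/disk1/system/docker/docker.img is corrupted
Feb 8 15:58:55 Tower bunker: warning: 2 corrections made, export file needs to be updated
EOF
```

Only the first two lines are printed; the "corrections made" summary line is filtered out, leaving just the per-file paths.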