SlrG Posted April 22, 2019
@Squid After a mistake, all my plugins and settings were lost on my server. After reinstalling the Checksum Tools, the blake2 algorithm is no longer available, although it was before. Running b2check.sh manually does not create the NotBlakeCompatible file in /tmp/checksum, so the system should be recognized as blake2-compatible. Anything I can do to fix this problem?
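For readers wondering what that compatibility probe amounts to: a minimal sketch of the kind of check described above. This is a guess at the logic, not the plugin's actual source; the function name is invented, and only the marker path and the b2sum tool name come from the post.

```shell
#!/bin/sh
# Hypothetical sketch: if the blake2 hashing tool is unusable on this
# system, drop a marker file so the GUI can hide the blake2 option.
probe_tool() {  # $1 = hash command, $2 = marker file to create on failure
    if ! command -v "$1" >/dev/null 2>&1; then
        touch "$2"
    fi
}

mkdir -p /tmp/checksum
probe_tool b2sum /tmp/checksum/NotBlakeCompatible
```

If no marker appears after running this, blake2 support should be detected as available.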
Squid Posted April 22, 2019 Author
On 4/22/2019, SlrG said: After a mistake, all my plugins and settings were lost on my server. After reinstalling the Checksum Tools, the blake2 algorithm is no longer available, although it was before. Running b2check.sh manually does not create the NotBlakeCompatible file in /tmp/checksum, so the system should be recognized as blake2-compatible. Anything I can do to fix this problem?
No idea. I deprecated it, IIRC, two years ago. TBH, I suggest either running corz on another computer against your shares or switching to Dynamix File Integrity. Sent via telekinesis
SlrG Posted April 23, 2019
Well, I use corz and Dynamix File Integrity. But I like that your plugin watches my shares and creates corz-compatible hashes automatically when new files are created, so I don't have to do it manually. Dynamix File Integrity is cool too, but as you know, it uses extended file attributes, which don't copy over when working with non-Linux systems, and it exports them only to the flash, from where I would have to copy them manually if needed. So no, I think your plugin is irreplaceable. I understand, however, that you are no longer supporting it if a problem doesn't hit you personally. I just wanted to ask in case you had seen this problem before and knew a quick solution. If anybody else runs into this problem: I was able to get it back to a working state by creating the share watches with md5 as the algorithm and manually editing that back to blake2 in the config file on the flash. It still shows as md5 in the GUI, but it will create the blake2 hashes I need, so everything is fine. Oh, and I'm now using your CA Backup plugin, too. Don't want to restore my plugins and settings manually ever again. 😭
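The workaround above boils down to a one-line edit of the plugin's config on the flash. A sketch, assuming a hypothetical file location and `key="value"` format — the path, key name, and value spelling below are illustrations, not the plugin's documented layout:

```shell
# Flip the algorithm the GUI stored (md5) back to blake2 in a config file.
# The key name 'algorithm' and the quoting style are assumptions.
set_algo() {  # $1 = path to the config file
    sed -i 's/algorithm="md5"/algorithm="blake2"/' "$1"
}

# Example call against an assumed location on the flash device:
# set_algo /boot/config/plugins/checksum/checksum.cfg
```

As SlrG notes, the GUI may keep showing the old value even though the edited setting takes effect.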
Vr2Io Posted April 23, 2019
52 minutes ago, SlrG said: I was able to get it back to a working state by creating the share watches with md5 as the algorithm and manually editing that back to blake2 in the config file on the flash. It still shows as md5 in the GUI, but it will create the blake2 hashes I need, so everything is fine.
So odd. I hope it never happens to me 🤣
Marshalleq Posted July 17, 2019
Is this still an active solution? I mean, actively maintained? Scratch that, I've now confirmed that it isn't. Thanks.
Squid Posted July 17, 2019 Author
13 hours ago, Marshalleq said: Is this still an active solution? I mean, actively maintained? Scratch that, I've now confirmed that it isn't. Thanks.
It should still work, and can be installed via CA, but if you've got any problems, they will NOT be fixed.
J.Nerdy Posted August 4, 2019
I know that this is no longer supported, but is running this and Dynamix File Integrity concurrently a bad idea? Thanks!
Squid Posted August 4, 2019 Author
6 minutes ago, J.Nerdy said: I know that this is no longer supported, but is running this and Dynamix File Integrity concurrently a bad idea? Thanks!
No problems per se, as they can exist side by side. But you would double the CPU usage at the very least, as they both try to hash any new files added concurrently.
trurl Posted December 11, 2019
I know this is deprecated and locked, but lately I have seen this slowly begin to consume rootfs after a reboot. My workaround has been to simply remove it and then reinstall, and it is OK until the next reboot. Any thoughts?
Squid Posted December 11, 2019 Author
How do you know that it's this plugin causing it?
trurl Posted December 11, 2019
9 minutes ago, Squid said: How do you know that it's this plugin causing it?
rootfs is stable at 9% use when the plugin is removed. It is also stable at 9% if I install the plugin after booting. Only if it is already installed at boot does rootfs usage grow. I haven't tried to figure out what the plugin is doing or how it might cause this; I just thought I would ask in case you had some ideas. I first noticed this behavior when trying to use recent versions of the Unraid.net plugin, which had a memory leak issue. That issue was independent of this one. That plugin has been removed for several weeks now, and I'm not planning to install it again until someone decides it's really fixed. So that is not in play, nor is anything else changing on my system.
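A general way to chase this kind of rootfs growth down, sketched below: snapshot overall usage, then list the largest first-level directories on the root filesystem only. The `-x` flag keeps `du` from crossing into other mounts (array disks, the flash, etc.); the commands are standard coreutils, not anything plugin-specific.

```shell
# Find what is eating rootfs: overall usage, then the biggest
# first-level directories on the root filesystem only.
top_dirs() {  # $1 = mount point to inspect
    # -x: stay on one filesystem; -d1: summarize one level deep
    du -x -d1 "$1" 2>/dev/null | sort -n | tail -n 5
}

df -h /       # overall rootfs usage
top_dirs /    # where the space is going
```

Running this before and after the growth appears should narrow down which directory the plugin (or anything else) is filling.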
gnollo Posted March 15, 2020
Newbie here: with this plugin, would I be able to compare the original drive with a rebuilt one, to validate that parity did its job correctly?
Vr2Io Posted March 15, 2020
48 minutes ago, gnollo said: Newbie here: with this plugin, would I be able to compare the original drive with a rebuilt one, to validate that parity did its job correctly?
The plugin isn't related to parity; it generates hashes for files. If a drive is rebuilt, you could of course use the hashes to verify the files. But it wouldn't be proper to say that if the hashes show no errors, parity is also error-free.
trurl Posted March 15, 2020
5 hours ago, gnollo said: Newbie here: with this plugin, would I be able to compare the original drive with a rebuilt one, to validate that parity did its job correctly?
This has absolutely nothing to say about parity, since parity doesn't know anything at all about files. Every disk is just a bunch of bits as far as parity is concerned, and every bit of every disk is included in parity, whether or not those bits are even part of any files. You could have parity errors with no file problems, and those errors could still affect a rebuild in a way that would result in file problems.
trurl Posted March 15, 2020
@Squid Isn't this thread supposed to be locked? I posted to it after it was
On 12/11/2019 at 9:28 AM, trurl said: deprecated and locked
but I am a moderator, and I'm pretty sure I wouldn't have unlocked it to post to it. I will lock it again if you want.
t3 Posted March 15, 2020
6 hours ago, gnollo said: Newbie here: with this plugin, would I be able to compare the original drive with a rebuilt one, to validate that parity did its job correctly?
_after_ a rebuild, the plugin will let you validate that the rebuilt files have the same content as when the hashes were created. if they do, this implicitly also means the rebuild went well so far. with one exception (as there is always one): the rather unlikely case that the hash file itself was corrupted in a way that flipped one ascii character into another (since the hash is saved as ascii text). afaik there is no validation for the hash files themselves. ps: i guess you didn't literally mean to compare a rebuilt drive with the original one...
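One way to close the gap t3 mentions, sketched below: checksum the hash files themselves and keep those digests somewhere else (e.g. on the flash), so a flipped character in a .hash file would be caught. The share path and output location in the example are placeholders, not anything the plugin defines.

```shell
# Record a digest of every corz-style .hash file under a directory,
# so the hash files themselves can later be verified for corruption.
digest_hashes() {  # $1 = directory to scan, $2 = output digest file
    find "$1" -name '*.hash' -exec sha256sum {} \; > "$2"
}

# Example with placeholder paths:
# digest_hashes /mnt/user/Media /boot/hashfile-digests.txt
# and verify later with:
# sha256sum -c /boot/hashfile-digests.txt
```

This only guards the hash files; the file contents are still verified against the hashes inside them as usual.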
Squid Posted March 15, 2020 Author
11 minutes ago, trurl said: @Squid Isn't this thread supposed to be locked? I posted to it after it was "deprecated and locked," but I am a moderator, and I'm pretty sure I wouldn't have unlocked it to post to it. I will lock it again if you want.
Yeah, it was locked at one point, but I believe that after you posted to it I unlocked it for everyone. Personally, I don't care either way, as I've made it quite clear that no further updates will ever be made to it.
gnollo Posted March 16, 2020
7 hours ago, t3 said: _after_ a rebuild, the plugin will let you validate that the rebuilt files have the same content as when the hashes were created. if they do, this implicitly also means the rebuild went well so far. with one exception (as there is always one): the rather unlikely case that the hash file itself was corrupted in a way that flipped one ascii character into another (since the hash is saved as ascii text). afaik there is no validation for the hash files themselves. ps: i guess you didn't literally mean to compare a rebuilt drive with the original one...
Yeah, I have both, so I meant comparing the two drives. I can easily compare the contents in terms of each file being on both drives, but I wanted to make sure the files are identical, not just by name but in terms of content.
t3 Posted March 16, 2020
2 minutes ago, gnollo said: Yeah, I have both, so I meant comparing the two drives. I can easily compare the contents in terms of each file being on both drives, but I wanted to make sure the files are identical, not just by name but in terms of content.
didn't expect that. the plugin would still do that, on a file-per-file basis. but: if you have both drives and there was no write access after the rebuild, i guess there are some other (faster) ways to compare two drives; afaik they should be identical on a byte-by-byte basis directly after a rebuild...
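One such faster byte-by-byte comparison, sketched with `cmp`, which stops at the first differing byte. The device names in the example are placeholders for the old and rebuilt data partitions, and this only makes sense if nothing has been written since the rebuild:

```shell
# Byte-for-byte comparison of two block devices (or files).
# cmp -s is silent and exits non-zero at the first difference.
compare_dev() {  # $1, $2 = devices or files to compare
    if cmp -s "$1" "$2"; then
        echo "identical"
    else
        echo "differ"
    fi
}

# Example with placeholder device names:
# compare_dev /dev/sdX1 /dev/sdY1
```

Unlike the per-file hash approach, this also compares filesystem metadata and free space, so any write after the rebuild will make it report a difference.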
gnollo Posted March 16, 2020
I used Checksum Compare, a Windows app, in the past, but I guess the process would be much faster if I ran it on the server itself, with its much faster processor as well...
gnollo Posted March 16, 2020
Maybe I should ask the question in a different way: what is the best way to compare the contents of two folders across two drives to make sure they are identical?
JonathanM Posted March 17, 2020
4 minutes ago, gnollo said: Maybe I should ask the question in a different way: what is the best way to compare the contents of two folders across two drives to make sure they are identical?
rsync -narcv /mnt/diskX/ /mnt/diskY
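For anyone decoding that one-liner, the flags spell out as: `-n` dry run (report only, change nothing), `-a` archive mode (permissions, times, etc.), `-r` recursive (already implied by `-a`), `-c` compare file contents by checksum instead of size and modification time, `-v` list each file that differs or is missing on the destination. Wrapped up for reuse:

```shell
# Dry-run, checksum-based comparison of two directory trees.
# Lists files whose content differs or which are missing from DEST;
# nothing is copied or modified.
compare_dirs() {  # $1 = source dir, $2 = destination dir
    rsync -narcv "$1"/ "$2"
}

# Example with the disk paths from the post above:
# compare_dirs /mnt/diskX /mnt/diskY
```

Note the trailing slash on the source: it compares the *contents* of the two directories rather than nesting one inside the other.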
gnollo Posted March 18, 2020
On 3/17/2020 at 12:05 AM, jonathanm said: rsync -narcv /mnt/diskX/ /mnt/diskY
Great, thank you! Do I run that in a telnet session?