wacko37 Posted June 10

On 6/6/2024 at 8:01 PM, tazman said:

My exclusions do not work. I have set:

Excluded folders: .Trash-99 .incoming Data_Shadow Documents_Shadow Public Stickware .Recycle.Bin .cache
Excluded files: *.nfo, metadata.db

But still get:

BLAKE3 hash key mismatch (updated), /mnt/disk7/Books/Calibre/.cache/calibre/server-log.txt was modified
BLAKE3 hash key mismatch (updated), /mnt/disk7/Books/Calibre/metadata.db was modified

Any idea how to fix that? Thanks!

4 hours ago, Falloutman said:

I am also seeing the same issue with the excluded folders. Is this plugin still being maintained? There isn't much going on in their GitHub repository.

Out of curiosity, have you pressed the "CLEAR" button after applying your exclusions?
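For anyone debugging the same mismatch messages: files like Calibre's metadata.db and server-log.txt are rewritten during normal use, so these warnings are expected whenever the exclusions aren't taking effect. The plugin stores its checksums as extended attributes on each file, so you can inspect what it recorded and compare it against a fresh hash. A minimal sketch — the exact attribute name depends on plugin version and algorithm, so the first command simply dumps them all:

    # Dump all extended attributes stored on a flagged file
    getfattr -d -m - "/mnt/disk7/Books/Calibre/metadata.db"

    # Recompute the BLAKE3 hash to compare against the stored value
    b3sum "/mnt/disk7/Books/Calibre/metadata.db"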
tazman Posted June 11

14 hours ago, wacko37 said:

Out of curiosity, have you pressed the "CLEAR" button after applying your exclusions?

Thanks for that. No, I didn't clear. Trying right now.
dirtrally Posted June 16

Here's a question I can't seem to find a workaround for: is it possible to extend the integrity check period beyond monthly?

I've recently decided to pull a bunch of old drives that were just sitting around and use them in a new media backup server (low priority, just making better use of excess hardware). This data isn't important but is still worth verifying, so I'd like to run integrity checks only every six months or yearly, for example, so that my primary server isn't choked up for days every month reading the disks. I know I can stagger the checks with tasks, but processor load isn't the issue.

It would seem a lot more logical to run the integrity checks on a per-share basis rather than per disk; then we could have different schedules to suit the importance of the data. However, I'm sure I'm really not thinking everything through and it's all been done this way for a reason.

Any ideas relating to my initial query, or have I missed something very obvious? I presume the answer will just be to perform manual checks? However, that then removes automated checks for data I do want to check regularly... though to be fair, in my particular case anything of actual importance is on a separate ZFS mirror pool. I feel the use case remains, though.
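One workaround is to schedule an infrequent manual check yourself, e.g. with the User Scripts plugin on a custom cron. A sketch only, assuming you keep an exported hash list in the two-column "hash  path" format that b3sum --check accepts — the plugin's own export format may differ, so verify that before relying on this:

    #!/bin/bash
    # Hypothetical semi-annual integrity check for one low-priority disk.
    # Schedule via User Scripts with a custom cron such as: 0 3 1 1,7 *
    # (03:00 on January 1 and July 1).
    HASHFILE=/mnt/user/backups/hashes/disk8.b3   # assumed location and format
    cd /mnt/disk8 || exit 1
    if b3sum --check --quiet "$HASHFILE"; then
        echo "disk8 integrity check passed"
    else
        echo "disk8 integrity check found mismatches" >&2
    fi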
andyd Posted June 17

Hi. I am a bit confused about how to use this plugin. Below is how I have set things up.

1. Is the plugin supposed to build hashes for existing files based on the config? It seems to only care about new files.
2. When I check "bad" hashes, it's always some .nfo file, which doesn't make sense. Any reason for this?
primeval_god Posted June 18

17 hours ago, andyd said:

1. Is the plugin supposed to build hashes for existing files based on the config? It seems to only care about new files.

The plugin only automatically creates hashes for new files. For initial setup you need to manually trigger hashing for existing files. To do this, go to the plugin's control page under the unRAID "Tools" page and use the "Build" button. Don't forget you can enable the help text in the web UI to get more information.

P.S. Read back in this thread a ways for info about .nfo files.
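For the curious, what "Build" automates is, roughly, hashing each existing file and storing the digest in an extended attribute. The sketch below is illustrative only: the attribute name user.blake3 is an assumption, and the plugin's real backend (bunker) may name and batch things differently.

    #!/bin/bash
    # Illustrative only: hash every file on one disk and store the
    # digest in an extended attribute, roughly what "Build" automates.
    find /mnt/disk1 -type f -print0 | while IFS= read -r -d '' f; do
        hash=$(b3sum "$f" | awk '{print $1}')
        setfattr -n user.blake3 -v "$hash" "$f"
    done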
sunwind Posted June 21

Complaint/criticism/feedback/suggestion: THIS is FAR too difficult to find. My notifications were being spammed with error messages about corruption and all sorts of other shit. I had ZERO idea what to do; after HOURS I fucking found this page so I can actually SEE what's wrong. What a horribly designed UI. I hate the entire unraid web UI.
foo_fighter Posted June 21

3 hours ago, sunwind said:

THIS is FAR too difficult to find. My notifications were being spammed with error messages about corruption... after HOURS I fucking found this page so I can actually SEE what's wrong.

They are also logged to /var/log/syslog...
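Since the messages land in syslog, a quick grep pulls out just the integrity events; the patterns below come straight from the messages quoted earlier in this thread:

    # Show hash-mismatch events from the system log
    grep -i 'hash key mismatch' /var/log/syslog

    # Or everything logged by the plugin's bunker backend
    grep 'bunker:' /var/log/syslog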
andyd Posted June 21

On 6/18/2024 at 9:00 AM, primeval_god said:

The plugin only automatically creates hashes for new files. For initial setup you need to manually trigger hashing for existing files. To do this, go to the plugin's control page under the unRAID "Tools" page and use the "Build" button.

Got it. Thanks! I keep trying to run export on disk 4 and the result is always the same: 0 exported. I tried searching the thread, but the only mention I saw was related to requiring a plugin upgrade. Any reason why this might be happening?

UPDATE: Looks like every file has this error:

Jun 21 09:17:25 HomeServer bunker: error: no export of file: /mnt/disk4/movies/Underworld - Awakening (2012)/Underworld - Awakening (2012).mkv

UPDATE #2: Even though the build has a check mark, it looks like I had to redo the build for the export to work correctly. All good now.
sunwind Posted July 11

Just let this finish and all my data is checked with this? 30-40 hours for some drives, though... worth it?
sunwind Posted July 12

I did export, then tried to check export, and I get:

It seems to be exporting fine, though. What am I missing?
sunwind Posted July 15

How come disk 8 is failing? It seems it isn't even being exported for some reason? (There are 0 log files, by the way.)
Szene Posted July 27

On 5/2/2024 at 1:53 AM, cr08 said:

So after a recent overhaul of my server I am looking back into using this plugin again. However, the one question I have that I can't seem to find any mention of is moving the hash file location. If I'm reading correctly, it is hard-coded to store on the flash drive, which I'd like to avoid for obvious reasons. Any chance it can easily be pushed to cache instead?

I would also be interested in a setting to change the export location to a drive other than the flash drive, as I want to limit the wear on it as much as possible. Couldn't it just check whether the configured location is available and use it if it exists? For example, if the location doesn't exist, it could behave as if automatic reports were disabled, to dodge potential bugs.

On top of that, what about retention of the exports? Are they never cleared and just pile up, do we need to create additional scripts to deal with them, or how should I understand this? I can't find any mention of it.

Do I really need the automatic exports anyway, or are manual exports sufficient? I can't wrap my head around how critical it is to have the latest export if some files are defective. Will I see all corrupted files if I do a manual export after the plugin reports issues, even if the last export was months old? There is no clear specification on any of this, as far as I can tell from my research in this thread. I just don't want to bombard my flash drive with unnecessary data that causes more harm than good over time.
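Until the plugin grows such a setting, one stopgap is a scheduled script that relocates the exports off the flash drive. This is a sketch under assumptions: the export directory below is a guess, so verify where your version actually writes before using it, and note that the plugin's built-in export check won't see files you've moved away.

    #!/bin/bash
    # Stopgap: move integrity exports off the flash drive to the cache.
    # EXPORT_DIR is an assumed path -- confirm it on your own system.
    EXPORT_DIR=/boot/config/plugins/dynamix.file.integrity/export
    DEST=/mnt/cache/appdata/integrity-exports

    mkdir -p "$DEST"
    # Copy first, then remove from flash only on success
    rsync -a --remove-source-files "$EXPORT_DIR"/ "$DEST"/ \
        && echo "exports relocated to $DEST"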
_Shorty Posted July 27

Flash drives are cheap, and you're not going to wear it out.
dada051 Posted Wednesday at 03:42 PM

Is there any update planned? At least to get a newer version of b3sum.
darkside40 Posted Thursday at 04:45 AM

Any advice on which hash algorithm is the fastest?
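The thread doesn't settle this, but it's easy to measure on your own hardware. BLAKE3 is generally the fastest of the common options on modern CPUs (it's designed for SIMD parallelism), and disk throughput is usually the real bottleneck anyway. A rough sketch — the list of tools is an assumption about which algorithms your plugin version offers:

    #!/bin/bash
    # Rough hash-speed benchmark. The 1 GiB test file lands in the page
    # cache after creation, so the loop measures hashing, not disk reads.
    TESTFILE=/tmp/hashbench.bin
    dd if=/dev/zero of="$TESTFILE" bs=1M count=1024 status=none

    for tool in b3sum sha256sum md5sum; do
        echo "== $tool =="
        time "$tool" "$TESTFILE" > /dev/null
    done
    rm -f "$TESTFILE"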