Squid

Checksum Suite


That text file was put in Pix2015, but several subfolders deep under that.

You don't have .txt files excluded by chance (or not included)?  Anything "weird" about the folder name?  Can you send me the complete command log (stored in /tmp/checksum/checksumlog.txt) and the .hash file for that particular folder?

 

Settings are still default.  Nothing excluded, nothing included.  This .hash file is dated Oct 25 at 12:28am.  Later that day a test.txt file was added to that directory, and then on Oct 26, a test Word .docx file was added.  Nothing has been added to this .hash file.

 

Here is the full path to that file

 

\\kim\pix2015\2015-10 Oct\[2015-10-24] Baby Aria Smith\To release

 

Here is the .hash file 

 

Log sent by PM.  Oops, the 500kb attached zip file won't go via PM.

 

#Squid's Checksum
#
#md5#IMG_2271N-Edit.jpg#2015.10.24@20.00:34
82350cf297c2de27ba8c803c2d772423  IMG_2271N-Edit.jpg
#md5#IMG_2285N.jpg#2015.10.24@19.32:34
1e5d97fbdea4660c67efa3b56d34ec76  IMG_2285N.jpg
#md5#IMG_2301N.jpg#2015.10.24@19.49:08
988e1a20e0955aa1bbff45561e962d39  IMG_2301N.jpg
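For reference, once the `#` metadata lines are stripped, what's left is standard md5sum output, so a .hash file in the layout above can be checked with stock tools.  A minimal sketch (file and folder names are placeholders; the demo builds its own throwaway data):

```shell
#!/bin/bash
# Sketch: verify a .hash file in the Checksum Suite layout with stock
# md5sum.  The '#' lines are metadata comments; strip them and pipe the
# rest (plain md5sum output) into `md5sum -c`.
# Demo setup uses throwaway files; with real data you would just run
# the final pipeline inside the folder that holds the .hash file.
dir=$(mktemp -d)
cd "$dir"
echo "sample photo data" > IMG_0001.jpg

# Build a hash file in the same alternating comment/checksum layout.
{
  echo "#Squid's Checksum"
  echo "#md5#IMG_0001.jpg#2015.10.24@20.00:34"
  md5sum IMG_0001.jpg
} > folder.hash

# Verify: comment lines are filtered out, leaving valid md5sum -c input.
grep -v '^#' folder.hash | md5sum -c -
```

Plain `md5sum -c folder.hash` would also work but warns about the improperly formatted comment lines; filtering them first keeps the output clean.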

 

 


Received your log.  Will check it out after work.  But I'm curious as to what happens if you manually trigger another checksum on that share.  Will it pick up those two files?  (It won't rehash everything else.)


I pressed Add to Queue on the Pix2015 folder, and after waiting an hour, these 2 files were hashed successfully.

 

Now why did it not happen automatically?



ok.

 

Narrows the problem down at least.  Got an update coming out tonight, because in the process of testing this I found a few minor but annoying (to me at least) bugs.

 

There's nothing wrong with the path that you are using (replicated that path on mine with no problems), and the plugin picked up changes to it.

 

My best guess right now as to what happened is this:

 

You triggered a manual scan of the share.  Because of the nature of how everything works (right now), each write to a folder ultimately triggers another scan (and a hash file being written counts as one).

 

The cumulative number of excess scans exceeded the maximum number of queued events (the default is 16384).  I'm not sure whether you've got 16384 separate folders within that share, or whether anything else installed is also using inotifywait.  If that maximum is exceeded, then events get dropped.
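For anyone who wants to check this on their own server: 16384 matches the Linux default for fs.inotify.max_queued_events, and it can be inspected (and raised) via /proc.  A sketch, assuming a reasonably stock Linux box:

```shell
# Sketch: inspect the inotify queue limit referred to above.  16384 is
# the Linux default for max_queued_events; when the queue overflows,
# the kernel drops events and only reports an overflow, which is how
# folder changes can get silently missed.
cat /proc/sys/fs/inotify/max_queued_events

# To raise the ceiling (as root; takes effect immediately, is not
# persistent across reboots):
# echo 65536 > /proc/sys/fs/inotify/max_queued_events
```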

 

This circumstance should only happen during initial creation of hashes, or if you happen to write out a ton of files at once (e.g., saving the Plex metadata to each folder).  Luckily, the forthcoming cron job will alleviate this high-workload issue.  Every day (or whenever you choose), a manual scan gets triggered to catch up on anything that may have been missed.  ETA for that update is this weekend.

 

Got some more testing to do with regards to some of the logging issues that I noticed in your logs.  Also discovered that the monitor doesn't properly start up after a restart, and that some annoying items keep insisting on being output to an attached monitor.  Those are today's updates.

 

 

 


***Sanity check***

Did the first and second release not automatically start the monitoring after a reboot?

 

Seems with the latest release I need to push the Start Monitor button myself after a reboot?

 


You are correct.  The first two did start automatically on boot.  That got broken in the last rev.

 

Will update tonight (would have done it 2 days ago, but my boss is being unreasonable and insisting I earn a paycheck  :o  )


Just a heads-up - I just started playing with this, as a Corz user, and discovered it probably wasn't tested for those few of us with no User Shares.  There's a large paragraph of errors, mainly missing /mnt/users.  I've set up a test folder using Custom, so still seems usable for disk shares.  Just starting, will test further...


Correct.  It was designed with user shares in mind, and I never considered that someone out there might not use them.


While I have user shares, I almost always access and organize with disk shares.

 

My own gdbm-managed md5sum DB files are all based on the disks.

My sqlocate sqlite tables are also based on disks, as they could migrate into and out of the array as well.

 

Ideally, when a disk goes bad and you question its integrity, you'll be operating at the disk level.

I.e., a failed disk, or when replacing, rebuilding, or migrating to other file systems.

 

With hash files existing at a per-directory level, this does actually provide a valid check mechanism per disk.

 

I have not used the Checksum Creator/Verify tool, but being able to validate a disk with a sweep of hash files contained only on that disk would be valuable after an event.


Because of unRaid's user shares and split levels, there is no guarantee that the .hash file is stored on the disk being checked, even if the files it hashes are stored there.

 

Because of that, when checking a disk the plugin first parses every hash file on all the disks.  It then determines which of the hashed files are stored on the particular disk, and checks those files.  That way, no file stored on the disk gets missed.
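In rough shell terms (paths and hash-file names below are placeholders; the plugin's real code differs), that parse-then-filter approach looks something like:

```shell
#!/bin/bash
# Sketch of the disk-check approach: gather every .hash entry from all
# disks, then verify only the entries whose data file actually lives on
# the disk being checked.  Paths are placeholders.
DISK="/mnt/disk5"            # disk under test (placeholder)
ALL_DISKS="/mnt/disk*"       # every array disk (placeholder)

for hashfile in $(find $ALL_DISKS -name '*.hash' 2>/dev/null); do
  folder=$(dirname "$hashfile")
  # Relative folder path, identical across user and disk shares.
  rel=${folder#/mnt/disk*/}
  grep -v '^#' "$hashfile" | while read -r sum name; do
    # Only verify files physically present on the target disk.
    if [ -f "$DISK/$rel/$name" ]; then
      echo "$sum  $name" | (cd "$DISK/$rel" && md5sum -c -)
    fi
  done
done
```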

 

To only process .hash files stored on the particular disk being checked is pointless in my mind.


OK I see your point.


On my system it takes around 20 seconds (after the drives spin up)  to figure it all out before it starts actually checking the disk.


I get it.  We do things differently.  I exert much more control over the layout of my data, using user shares as read-only.

I never have a directory whose files span multiple disks; I always keep like files self-contained within a directory.

All of my split points are directories only, and they rarely contain files that require a checksum.

 

 

With this software, if a user accesses a disk share via Windows and uses Corz to verify a hash file, is it possible the hash file will contain references to files on other disks?


Depends on how you set up the folders.

 

If you use user shares, then yes, because it's up to unRaid's split levels where the hash file winds up.

 

If you set up a custom disk folder, then no.  The hash file generated will always be on the particular disk.

 

And related to this, I'll add to the list code to suppress all the errors if no user shares are present (since that is a valid use case for unRaid).


That being said, the hash file does not contain any absolute path references to the files.  In other words, in your case you could verify the files either through user shares or through disk shares, and it would still work OK.


Updated to 2015.10.29

 

Just a heads-up - I just started playing with this, as a Corz user, and discovered it probably wasn't tested for those few of us with no User Shares.  There's a large paragraph of errors, mainly missing /mnt/users.  I've set up a test folder using Custom, so still seems usable for disk shares.  Just starting, will test further...

Fixed.  (And in the course of this, I discovered a defect with unRaid / docker: if you stop the array and disable user shares, but keep docker enabled with references to /mnt/user within it, docker will automatically recreate /mnt/user/the_top_level_folders, and the system will then hang trying to unmount user shares when stopping the array, even though they aren't enabled.  Too much of an edge case for me to investigate further or formally report.)

 

When user shares are not enabled, entering custom paths will remain the way to go for the time being.

***Sanity check***

did the first and second release not automatically start the monitoring after a reboot ?

 

seems with the latest release i need to push the start monitor button myself after a reboot?

 

Fixed.  This bug could also have caused the system to miss changes to folders, if a certain sequence of buttons was pressed after updating to 2015.10.24 while a scan was already in progress.

 

Other fixes:

 

Suppress extraneous messages being output to a locally attached monitor.

 

 

@tr0910

 

With regards to the plugin not picking up the change in that particular file, it was either running out of queued watches (which will be fixed with the addition of scheduled scans this weekend), or a rather unlikely sequence of button clicks immediately after updating to 10.24.  I made sure that both my servers were completely up to date on the hashing, then threw around 20,000 folder modifications at them in total (which would in turn generate another 20,000 scans due to rewriting the hash files), and after everything was done re-ran a manual scan to see if anything got missed.  Nothing did.


Updated to 2015.10.31    boo!

 

Testing of the scheduled creations / verifications proved that an overhaul of the GUI was needed, as it was just too much of a cluster**** without it.  This is a little simpler to use, with global settings, creation settings, and manual job settings all separated from each other.  (The release with the schedule settings is in testing right now, and should be released shortly.)

 

Fixed the maximum number of queued events not being set correctly.  (If you had a custom value for this, you will have to either reboot or stop and start the monitor.)

 

Fixed some other minor errors along the way.

 


I updated to your latest "boo" 10/31 version, edited one text file in that directory, and created a new .xlsx file.  After 12 hours I don't see an update to the .hash file.  The last entry in the creator log is 10/27, where it picked up the 2 files because I forced it to.

 

Maybe I should also note that the user share Pix2015 only includes one disk (disk5) at present.  What else should I try? 

 

(I didn't reboot after updating to 10/31 nor did I stop and restart the checksum monitor.  Does it need that?)

 

I just stopped and restarted the monitor to see if that works.....


On a stop / start, you'll have to make a modification to the file for it to be picked up again.

 

I'll be releasing the scheduled updates & verifies later today.


Updated to 2015.11.04

 

Added in scheduled cron jobs for creation and verification jobs.

 

All verification jobs have a "percent to verify".  This is the amount that will be verified each time the job runs.  E.g., if it's set to 5%, the first time the job runs it will verify 0-4% of the share.  The next time it runs, it will verify 5-9%, and so on.  The oldest files on the drive are always verified first.
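In rough terms (placeholder paths and names; not the plugin's actual code), the rolling oldest-first window works something like this:

```shell
#!/bin/bash
# Sketch of a rolling verification window: sort the hashed files
# oldest-first, then verify only this run's slice.  Paths and the RUN
# counter (which the real plugin would persist between runs) are
# placeholders.
SHARE="/mnt/user/Pix2015"   # placeholder share
PERCENT=5                   # percent verified per scheduled run
RUN=0                       # persisted run counter (0, 1, 2, ...)

# Oldest files first, one path per line (%T@ = mtime, %p = path).
mapfile -t files < <(find "$SHARE" -type f ! -name '*.hash' \
                       -printf '%T@ %p\n' 2>/dev/null | sort -n | cut -d' ' -f2-)

total=${#files[@]}
chunk=$(( total * PERCENT / 100 ))
start=$(( RUN * chunk ))
# This run's window: run 0 covers 0-4%, run 1 covers 5-9%, and so on.
for f in "${files[@]:$start:$chunk}"; do
  echo "would verify: $f"
done
```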

 

@tr0910

 

Have a possible theory on your issues...  Can you make sure that your notification settings are set to email on warnings, then try to verify 100% of a disk?  Let me know if you get any weird emails sent to you.


Very cool idea!


How do you cancel manual Checksum Creation in progress?

It's changing the modification dates of my files!

Changing the modification dates of your files? Or just the modification dates of the folders?

 

The folders are modified by writing the .hash file for the folder.


Creating a checksum does not change the date of the file.

 

But to answer your question: you can stop all activity (verifies and creations) by stopping the monitor.

 

You cannot individually stop a creation job in progress (it is possible to stop a verification job, but I haven't incorporated that feature yet).  The reason you can't stop a creation job is that, at its heart, the plugin is designed around monitoring changes to folders (even though you don't have to use that feature).

 

Every change to a folder (including writing the hash / md5 / sha1 / sha256 / blake2 file) triggers a scan of that folder.  So on a full share creation, after the job is over, the plugin basically rescans it all over again and does nothing, since there are no further changes.  That process only takes a minute or two, but it introduces a problem with being able to individually drop a job: which job do you actually tell it to drop?  The main one, or all of them?  The easiest solution is to just stop the monitor.
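To illustrate why that follow-up rescan is cheap (placeholder logic and file names below, not the plugin's actual code): a rescan only hashes files that are missing from the folder's hash file, so the scan triggered by writing the hash file itself finds nothing left to do.

```shell
#!/bin/bash
# Sketch: a rescan lists only files that still need hashing, skipping
# the hash file itself.  "folder.hash" is a hypothetical filename.
needs_hash() {
  local folder="$1" hashfile="$1/folder.hash"
  for f in "$folder"/*; do
    name=$(basename "$f")
    [ "$name" = "folder.hash" ] && continue      # skip our own output
    grep -q "  $name\$" "$hashfile" 2>/dev/null || echo "$name"
  done
}

# Demo: a fully hashed folder produces no work...
d=$(mktemp -d)
echo photo > "$d/IMG_0001.jpg"
( cd "$d" && md5sum IMG_0001.jpg > folder.hash )
needs_hash "$d"                 # prints nothing

# ...until a genuinely new file appears.
echo photo2 > "$d/IMG_0002.jpg"
needs_hash "$d"                 # prints: IMG_0002.jpg
```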

 

Verifications are separate and are not handled through the queue.  I was already planning on introducing a cancel-and-resume feature for verifications with the next GUI update.


Thanks.  I don't really see the need to force everyone to always run full checksums.  Myself, I'd rather verify a bit at a time every week or so, spread over the course of 6 months or a year, than all at once once a year.  And I think that running a full verification once a month is a bit of overkill, even for the most paranoid among us.

 

