Squid

Checksum Suite


Will your plugin also remove hashes if you just delete a file?

Is there just one file with all hashes included?

It will if there is a modification or addition to the folder. Deleting a file will not trigger a rescan of the folder, so the hash file will still contain the deleted file's entry.
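If the stale entries bother you, they can be pruned by hand. A minimal sketch, assuming a corz-style "<md5> *<filename>" line format; the prune_hash helper is hypothetical, not part of the plugin:

```shell
#!/bin/bash
# Hypothetical helper: drop .hash entries whose files no longer exist.
# Assumes corz-style lines of the form "<md5> *<filename>".
prune_hash() {
  local hashfile=$1 dir=${1%/*} tmp
  tmp=$(mktemp)
  while IFS= read -r line; do
    name=${line#* \*}                  # strip the "<md5> *" prefix
    [ -e "$dir/$name" ] && printf '%s\n' "$line"
  done < "$hashfile" > "$tmp"
  mv "$tmp" "$hashfile"
}
```

Run it against a single .hash file; entries whose files were deleted simply don't get written back.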


... Actually, my copy of Corz doesn't report the file as being changed. ...

 

You either need to update your copy of Checksum, or possibly just update your options. I believe by default Checksum has "report_changed=false" in its options file (setup.ini). If you change this to true, it will report any changed files.
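If that is the setting, the relevant line in setup.ini would look something like this (the option name is taken from the post above; treat the exact spelling and location as an assumption):

```ini
; setup.ini -- report changed files during verification
report_changed=true
```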

 

 


Hi,

I can confirm that checking hashes from Windows with Corz worked, so no issue there.

One thing is still not clear: I have some folders that I had already hashed from Corz on Windows, but now I've added extra folders to the monitor queue. These folders don't have hashes yet. Will they automatically get hashes now, or do I first need to add them manually to the queue?

 

As you change or add files within the subfolders, those subfolders will have their files hashed. To hash all the files right away, you will have to hit Add To Queue.


... As you change or add files within the subfolders, those subfolders will have their files hashed. To hash all the files right away, you will have to hit Add To Queue.

 

But, if I understand your plugin correctly, not right away -- correct? Isn't that what the "Pause before calculating" setting does? [Which, by the way, is a good idea, since new files are likely part of a copy of new data to the array.]

 

Correct. Just to elaborate on the pause and the philosophy of the design:

 

By design, this is a single-threaded app. Only one checksum calculation is done at a time (although the soon-to-come verification tool will operate separately, and I'll probably make that multi-threaded).

 

Everything is done on a FIFO (first in, first out) queue.

 

So say you start writing a file (or files) at 1pm. The files take 15 minutes to write, and the pause is set to half an hour. At 1:15pm, a signal is sent to start calculating. The calculator sees it and waits half an hour before doing anything. If the hashing takes, say, 5 minutes, the job is finished at 1:50pm.

 

Now assume that at 1:30pm another set of files (in a different folder) is written. That job is sent to the calculator at 1:50pm. But since the files were written at 1:30pm and the job is already 20 minutes old, the calculator will start the calculations for the second job at 2pm (ie: it will only pause for 10 minutes).

 

The net result is that on bulk writes to multiple folders (like Plex writing .nfo files) you should only have the initial wait time of 30 minutes right at the start.

 

In practice, however, inotifywait isn't perfect. While it always picks up the changes in the folders, sometimes it takes a minute or two to do so. So you may get a couple of pauses in this situation right at the beginning (depending upon the order in which Plex wrote the files and the order in which inotifywait noticed the writes), but once it gets rolling, the pauses keep getting smaller and smaller until there is no pause at all.
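The pause arithmetic in the example above can be sketched as follows (the function and variable names are mine, not the plugin's actual code):

```shell
#!/bin/bash
# Sketch of the queue's pause logic: a job waits out whatever is left of
# the configured pause, measured from when the folder change was noticed.
PAUSE=1800   # 30 minutes, in seconds

remaining_pause() {
  local queued_at=$1 now=$2
  local wait=$(( PAUSE - (now - queued_at) ))
  (( wait < 0 )) && wait=0
  echo "$wait"
}

remaining_pause 0 0      # picked up immediately: full 30-minute wait
remaining_pause 0 1200   # picked up 20 minutes later: only 10 minutes left
```

This is why the second job in the 1:30pm example only waits 10 minutes: its age is subtracted from the configured pause.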

 

 

The pause was put in there not so much to let writes to folders finish, but rather because of how Windows works when cutting a folder from the server to your desktop.

 

When cutting a file, Windows opens that file in read/write mode (even though it's not actually writing anything to it). When the file is moved to the new location, Windows closes the original and then deletes it. It's impossible for inotifywait to determine whether the file actually got changed, and a signal is sent because the CLOSE from RW mode happened.

 

This isn't a big deal. Normally, the scanner looks at the folder, sees that the file isn't there, and just skips it. The problem happens if there was already a .hash file in the folder and Windows decides to move it to your desktop first. Now the scanner sees that a file was closed from RW mode and that there is no hash file, so it winds up calculating hashes on the remaining files.

 

In this situation, Windows successfully moves all of the files, but the folder still exists on the server with a .hash file inside it (Windows won't try to move it because it already has, and the file has since been recreated). Hence my reasoning for the pause: set it to more than the average time it takes you to cut a folder to the desktop and you won't run into any problems. If it's less, the cut will still work (at pretty much full speed, since the calculator runs at low priority), but you'll be left with a .hash file on the server.

 

As an aside, the calculator also skips over any file that is open at the time, so as not to affect playback, etc., and reissues a job for it an hour later (to catch it back up). But I just discovered a problem with that this morning (it was working in my "laboratory" tests but isn't working properly in real life), so I'll be pumping out an update tonight / tomorrow.
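A minimal sketch of the "skip open files" idea. Using fuser to detect open files is my assumption here, not necessarily what the plugin does, and the helper name is hypothetical:

```shell
#!/bin/bash
# Skip a file if any process currently has it open; requeue it instead.
hash_or_requeue() {
  local file=$1
  if fuser -s "$file" 2>/dev/null; then
    echo "requeue: $file"      # picked up again on the next cron.hourly run
  else
    md5sum "$file"             # safe to hash: nobody has it open
  fi
}
```

The low-priority hashing plus this open-file check is what keeps playback from stuttering while the queue drains.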

 

 

Cron job scheduling of manual scans and the checker is in progress and should be pumped out by Sunday / Monday, depending upon how the Jays do.


Updated to 2015.10.18

 

- Hash files now written as nobody:users

- Fix rescheduling of open files

- Add in customization of inotifywait variables

- Add in work arounds for Corz / Windows timestamp issues

 

Notes:

 

Rescheduling of open files: if the creator gets a command to create / update a hash file, but that file is still open after the pause period is over (ie: you're watching the movie), then rather than mess up your viewing experience by trying to hash the movie, it will automatically reschedule the hash creation for up to an hour later (runs according to the cron.hourly schedule). If it's still open, it reschedules again. Can't have this plugin causing the latest blockbuster, which you started watching the minute it finished downloading, to start stuttering on you ;)

 

Customization of inotifywait variables: unRaid's defaults are 524288 watches and 16384 max queued events. You'll know the number of watches is too low if the log shows something like "inadequate watches" when starting up. I haven't yet found a good formula to determine this amount, but if you get the error, I would start by increasing it by ~100000 and keep going until it works. Note that I believe Dropbox, if you run it on your server, also uses watches, so in that case odds are very good that you'll have to start increasing the limit if you have a complex file structure. I run both variables at their defaults (and don't run Dropbox) and have yet to see any issues at all on either of my servers.
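For reference, the two limits are standard Linux inotify sysctls, and the "bump by ~100000" heuristic is easy to script. The sysctl names are real; the helper function is mine:

```shell
#!/bin/bash
# The two inotify limits (run as root to change them):
#   sysctl -w fs.inotify.max_user_watches=624288
#   sysctl -w fs.inotify.max_queued_events=16384

# Heuristic from the post: if watches are inadequate, bump by ~100000 and retry.
next_watch_count() {
  echo $(( $1 + 100000 ))
}

next_watch_count 524288   # unRaid's default, bumped once
```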

 

Corz / Windows timestamp issues: workarounds are now in place to ignore minor (and explainable) timestamp discrepancies with hash files generated by Corz checksum. There are two issues. #1: Corz has a bug related to daylight savings time where the timestamp within its hash file differs from both the Linux and Windows timestamps on the files by 1 hour. #2: Windows and Linux timestamps are not perfectly in sync and can differ by +/- 1 second. Basically, if you enable this option and a Corz-generated hash file is present, the plugin will not update the file (if updates are turned on) when the timestamp differs by -1, +1, 3599, 3600, or 3601 seconds. Any other difference will trigger a rehash (again, only if updates are turned on). A warning will be generated in the log when it runs across this situation.
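That tolerance list translates directly into a check like this (a sketch; the function name is mine, the accepted offsets are the ones from the changelog above):

```shell
#!/bin/bash
# True (exit 0) when a Corz-vs-filesystem timestamp difference, in seconds,
# is one of the "explainable" offsets: +/-1s clock skew, or 1 hour +/-1s (DST bug).
explainable_diff() {
  case "$1" in
    -1|1|3599|3600|3601) return 0 ;;
    *) return 1 ;;
  esac
}
```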

 

Now that these last few annoyances are out of the way, I can concentrate again on finishing up the checker side of things and adding some more features to the creator.

 

 

 


I've clicked the 'Start Monitor' button three times over the last 20+ minutes. Monitor Status is still 'Not Running'. Nerd Tools is installed. What else do I have to do to get the Monitor running?

Have you added any shares to monitor? Does the log say "setting up watches"? After hitting Start Monitor, does the button switch to being disabled (ie: you have to stop it and then try to start it again)?

 

The GUI as-is is also a tad counterintuitive, since for each share section you have to hit Apply separately for it to take effect (an Apply All button is on my to-do list).

Have you added any shares to monitor?

 

No, the instructions say that you have to wait for the 'Watches Established' log message.

 

 

Does the log say "setting up watches"? After hitting Start Monitor, does the button switch to being disabled (ie: you have to stop it and then try to start it again)?

 

The GUI as-is is also a tad counterintuitive, since for each share section you have to hit Apply separately for it to take effect (an Apply All button is on my to-do list).

 

No, the Start button stays active, the Stop button stays greyed out.

 

Edit:

 

Ah, the message to wait relates to queueing jobs, not adding shares - sorry, my confusion.

Like I said, the GUI needs a little TLC.  After I finish up with the verifier, then that's going to get overhauled.

Share this post


Link to post
Like I said, the GUI needs a little TLC.  After I finish up with the verifier, then that's going to get overhauled.

 

Okay - thanks.

 

I'm still not sure that I fully understand the 'queue'. What actually is queued? Can we look at what is in the queue? Do the queue entries have to be re-added after a reboot (ie: after a power cut)? The 'Add To Queue' button never gets greyed out.


99% of the time there's probably going to be nothing in the queue to save.

 

If you have a share set to be monitored, then any change to the share (new file / changed file) will add that folder (not the entire share) to the queue.

 

That particular folder will then be hashed according to the rules that you've set.  No other folders within that share will be touched.

 

My rationale for this is that I want every new / changed file to be hashed as soon as possible, not left until whenever the next cron job runs. My pause setting is 10 minutes, so 10 minutes after a file is added it has its hash created. If I just can't wait to watch a movie after it's transferred, and the file happens to be in use (being watched) when the plugin goes to hash it, it is automatically skipped (to avoid any potential viewing troubles like stuttering) and rescheduled for the next time cron.hourly runs.

 

To my way of thinking, this has a couple of advantages:

 

- The less time a file sits on the hard drive without being hashed, the smaller the window in which silent corruption could happen to it undetected.

- Any changes I make to particular files (stripping foreign languages, etc.) are rehashed immediately without me having to think about it, rather than me trying to remember six months later, when a file fails a verification test, whether I changed anything in it.

- There is no lengthy process of having to hash your entire shares unless you choose to do it.

 

That being said, I fully realize that not everyone wants the same method of operation as I do. After the verification side of things is pumped out in the next day or two (work responsibilities permitting), the next updates will be to the GUI, which will also allow scheduled hashing of entire shares. (Right now that's a manual job: it adds the share to the queue and tells it to run against every folder. If that happens to get interrupted, adding it to the queue again picks up right where it left off.)

 

I would think that the vast majority of writes to shares happen basically one folder at a time, with a lengthy pause before another write (eg: a movie gets transferred over, then another after a later download). In the interim, the queue has already been cleared out, so there's nothing to lose.

 

About the only time a ton of jobs would be queued at once is when you have your PMS write out all of the .nfo files for each piece of media. In that case, a new item is going to get queued for just about every folder within your shares, and the plugin runs through them all. It's fast in that case, because the only new files it has to hash are the very small .nfo files. But keep in mind that any folder that gets queued will be scanned in full for items that aren't already hashed, and anything not already hashed is going to get hashed.

 

So, if you tell PMS to write all your .nfo's and you haven't already hashed the media files with either this plugin, or with Corz, or something else, then the net result is that all of your folders will get fully hashed (basically no different from manually hitting Add To Queue).

 

Simply put, you set a share to be monitored for changes.

 

If a change happens within a folder of that share, then the folder is hashed for items not already hashed.

 

Manually adding a share to the queue will hash every folder within that share.

 

This was originally designed around how I want it to handle my media collection. Over the next month, things are going to become more general purpose and try to hit everyone's usage case (and a big part of that is scheduled hash creation, not relying upon a change queue).

 

At that point, I would think it should be bulletproof: changes to folders are picked up immediately, and in the off chance that something gets missed, the scheduled scan will pick it up. But saving the queue (if anything is in it) in the event of a shutdown is a good idea, and it's something I'll think about how to implement. In the case of an outright power failure with no UPS to trigger a clean shutdown, there's probably not much I can do, because the queue is implemented as a Linux pipe (to facilitate easy interprocess communication) and is not a file that is just read off the flash drive. But clean shutdowns can be handled: I can read all remaining queued items, save them, and re-write them at the next startup. Hopefully that made sense.
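The "queue as a Linux pipe" mechanism looks roughly like this (the path and the job string are hypothetical, not the plugin's actual ones):

```shell
#!/bin/bash
# A named pipe (FIFO) as a job queue: writers push folder paths in, and a
# single consumer reads them out in FIFO order. Nothing persists on disk,
# which is why an unclean power loss drops whatever was still queued.
qdir=$(mktemp -d)
mkfifo "$qdir/queue"

echo "/mnt/user/Movies/SomeFolder" > "$qdir/queue" &   # producer (blocks until read)
read -r job < "$qdir/queue"                            # consumer
echo "next job: $job"
```

The design choice trades durability for very cheap interprocess communication: a clean shutdown can drain and save the pipe, but there is nothing to recover after a hard power cut.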

 

The basic routine for verifications is done and tested. Next up is the GUI for it, which will also necessitate the framework for scheduled cron jobs on verifications.


Hi Squid,

 

I have run the checksum for my movie share, which has about 3500 files. It has been running 24x7 for a week now but still has not finished. Is there a way to see how far it has gotten? It seems to be going extremely slowly.

 

[Screenshot attachment: Capture.PNG]

You can hit Log to see what's actually going on (or post a link to the log -> it's stored at /tmp/checksum/log.txt). Today / tomorrow's update has the option to automatically save the log(s) to the flash drive.


Updated to 2015.10.24

 

- Ability to run manual verifications of shares / disks (either full or partial)

- Revised logging

- Ability to save log files when they rotate or at array stop.

 

I am just in the beginning stages of testing cron job scheduling for verifications, but since the actual verification routine is done (and ultimately forms the basis for the cron jobs), here is the ability to run them manually.

 

You can run verifications against either a share or a disk. Two options are available for either: the % of the disk / share to check, and the % to start at. IE: if you only want to check the oldest 10% of the files in a share, you would enter 10 and 0. If you only want to check the newest 10% of the files, you would enter 10 and 90.
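The two percentages map onto the (age-sorted) file list straightforwardly; a sketch of the arithmetic, with a helper name of my own invention:

```shell
#!/bin/bash
# Given a total file count, a % to check, and a % to start at, return
# "<first index> <how many>" into the age-sorted file list.
slice() {
  local total=$1 pct=$2 start_pct=$3
  echo "$(( total * start_pct / 100 )) $(( total * pct / 100 ))"
}

slice 1000 10 0    # oldest 10%:  start at file 0,   check 100 files
slice 1000 10 90   # newest 10%:  start at file 900, check 100 files
```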

 

Verifications are completely separate from the job queue. In other words, verifications will run regardless of whether a creation job is in progress, and you can run multiple verifications at the same time. The speed decrease from running multiple verifications concurrently will depend upon your included / excluded disks for a particular share, the transfer speed of the drive(s), CPU speed, etc.

 

On my main server, I can run around 6 concurrent disk verifications and the total net speed is greater than only running one at a time (although each verification is slower than if I only ran one at a time)

 

 

At the end of each verification, if there are any failures (corrupted / updated files), a notification is sent out. If you have email notifications enabled for warnings, the email body will detail the files affected and the probable cause. If you do not have email notifications enabled, then you will just see that there were failures (I have a feature request in to rectify this: http://lime-technology.com/forum/index.php?topic=43615.0)

 

In either case, once there is a failure, a new log button (Failures) will be enabled that shows all of the failures that have occurred since the last reset of the server.

 

The logging has been revamped.  There are now 4 log buttons:

 

Failures - A list of all the verification failures since the last reset

Checksum Log - A list of all the creation / updating of hash files

Verify Log - A list of all the verification activity

Command Log - A list of all the various commands given to the plugin (ie: update this folder's checksums, verify this folder, etc)

 

All logs (with the exception of the Failures log) automatically rotate at a size of 500k (so as not to eat all your server's memory if you have a ton of activity and long uptimes). The Failures log does not rotate, so that you can see any and all failures on the system since the last reset of the server. If this file should happen to grow to an appreciable size, then you've got far more important issues happening with your server than the size of a log file.
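A size-based rotation check of that sort is a one-liner; a sketch, with the helper name and the ".old" suffix assumed rather than taken from the plugin:

```shell
#!/bin/bash
# Rotate a log once it passes 500k, mimicking the behaviour described above.
rotate_if_big() {
  local log=$1 limit=$(( 500 * 1024 ))
  if [ -f "$log" ] && [ "$(stat -c%s "$log")" -gt "$limit" ]; then
    mv "$log" "$log.old"   # the plugin can optionally copy rotated logs to flash
  fi
}
```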

 

You also have the option of saving the log files when they rotate to the flash drive (/boot/config/plugins/checksum/logs).

 

A failure log for a particular job is always saved to the flash drive (/boot/config/plugins/checksum/logs/failure) regardless of whether you have save logs enabled or not.

 

Notes on Verifying a Disk

 

This plugin makes zero assumptions about how your shares and split levels are set up. Because of this, unRaid may or may not store the hash files for a particular media folder on the same disk as the media itself. The plugin is aware of this, and when doing a disk verification the following happens:

 

- Plugin will find every hash file within /mnt/user

- It will then parse each hash file to see what file(s) exist on the particular disk to be checked.

- It will then only verify those files that exist on the disk.

 

On my servers, this process only takes ~20 seconds before the verification gets rolling, but it guarantees that any file on a particular disk that has had a hash created for it will be checked, regardless of which disk the actual hash file lives on.
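Those three steps can be sketched as follows. The ".hash" extension comes from the thread; the "<md5> *<filename>" line format and the helper name are my assumptions:

```shell
#!/bin/bash
# For a given disk, list the already-hashed files that physically live on it,
# no matter which disk holds the .hash file itself.
files_on_disk() {
  local userdir=$1 diskdir=$2
  find "$userdir" -name '*.hash' -print0 |
  while IFS= read -r -d '' hashfile; do
    rel=${hashfile%/*}; rel=${rel#"$userdir"}        # folder relative to share root
    # corz-style lines: "<32 hex chars> *<filename>"
    sed -n 's/^[0-9a-fA-F]\{32\} \*//p' "$hashfile" |
    while IFS= read -r name; do
      [ -e "$diskdir$rel/$name" ] && echo "$diskdir$rel/$name"
    done
  done
}
```

Usage would be something like files_on_disk /mnt/user /mnt/disk1, feeding the resulting list to the verifier.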

 

Next week's update: cron job scheduling of manual creation jobs and verifications (verification scheduling will be user-set and cumulative, eg: week 1 check the first 10%, week 2 the second 10%, etc), and hopefully a rehash of the GUI (if not, that will happen in two weeks).

 


This is really great stuff! I'll be giving the verification part a go shortly.  :)


Are there plans for future capability to include or exclude by file extension? To me, generated text files like .nfo or .xml get updated by scrapers, but core files like .mkv, .iso, .jpg, .mov, .m2ts, etc. should be the critical files to hash and monitor.

You already can, on creation. Disable Include All Files and enter whatever wildcards you choose (separated by a space) into the include / exclude sections.

 

Ahh, thanks.  Excellent utility.


This is something I have always wanted to do with my 15GB of data but never got around to doing in the last three years, so thanks for making this. I hope it becomes a standard for unRaid.


I have installed the checksum plugin and it successfully created the checksum files for a 3TB share with less than 100,000 files. I then proceeded to create a new file on the share (just a test.txt file) and expected the .hash file in that directory to be updated within an hour. Several days later, this still has not happened. Am I not understanding things correctly?

 

Here is the command log

 

Oct 25 2015 01:27:59 Scan command received for /mnt/user/Pix2015/2015-10 Oct/[2015-10-20] Clearance backdrops
Oct 25 2015 01:27:59 Waiting 12 seconds before processing.
Oct 25 2015 01:28:10 Resuming
Oct 25 2015 01:28:11 Job Finished. Total Time: 1 Second. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:11 Scan command received for /mnt/user/Pix2015/2015-10 Oct/[2015-10-22] Website Products
Oct 25 2015 01:28:11 Waiting 3 seconds before processing.
Oct 25 2015 01:28:13 Resuming
Oct 25 2015 01:28:14 Job Finished. Total Time: 1 Second. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:14 Scan command received for /mnt/user/Pix2015/2015-10 Oct/[2015-10-22] Website Products #2
Oct 25 2015 01:28:14 Waiting 1 seconds before processing.
Oct 25 2015 01:28:14 Resuming
Oct 25 2015 01:28:15 Job Finished. Total Time: 1 Second. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:15 Scan command received for /mnt/user/Pix2015/2015-10 Oct/[2015-10-24] Baby Aria Fleming/to email
Oct 25 2015 01:28:15 Job Finished. Total Time: 0 Seconds. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:15 Scan command received for /mnt/user/Pix2015/2015-10 Oct/[2015-10-24] Baby Aria Fleming/To release
Oct 25 2015 01:28:15 Job Finished. Total Time: 0 Seconds. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:15 Scan command received for /mnt/user/Pix2015/2015-10 Oct/[2015-10-24] Baby Aria Fleming
Oct 25 2015 01:28:15 Waiting 20 seconds before processing.
Oct 25 2015 01:28:34 Resuming
Oct 25 2015 01:28:35 Job Finished. Total Time: 1 Second. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:35 Scan command received for /mnt/user/Pix2015/2015-10 Oct/Girls
Oct 25 2015 01:28:35 Job Finished. Total Time: 0 Seconds. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:35 Scan command received for /mnt/user/Pix2015/2015-10 Oct/Missy - The Royals
Oct 25 2015 01:28:35 Job Finished. Total Time: 0 Seconds. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 01:28:35 Scan command received for /mnt/user/Pix2015
Oct 25 2015 01:28:35 Job Finished. Total Time: 0 Seconds. Total Size: 0.00 B Average Speed: 0.00 B/s
Oct 25 2015 20:49:25 Manually Added /mnt/user/Documents to queue
Oct 25 2015 20:49:27 Manual scan of /mnt/user/Documents started
Oct 25 2015 20:50:00 Background Monitor Stopping
Oct 25 2015 20:50:19 Background Monitor Starting
Setting maximum number of watches to 524288
Setting maximum number of queued events to 16384
Monitoring /mnt/user/Pix2015 /mnt/user/Documents
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.
Oct 26 2015 12:49:31 Background Monitor Stopping
Oct 26 2015 12:49:50 Background Monitor Starting
Setting maximum number of watches to 750000
Setting maximum number of queued events to 16384
Monitoring /mnt/user/Pix2015 /mnt/user/Documents
Setting up watches. Beware: since -r was given, this may take a while!
Watches established.

Assuming that the txt file was put into /mnt/user/Documents (because the plugin did pick up changes in Pix2015: several scan commands were issued, and presumably it did hash those folders, since the excess scan commands didn't pick up anything that had changed), was that folder already being monitored when you put the file in?


That text file was put in Pix2015, but several sub folders deep under that.

You don't have .txt files excluded by any chance (or not included)? Anything "weird" about the folder name? Can you send me the complete command log (stored in /tmp/checksum/checksumlog.txt) and the .hash file for that particular folder?
