  • [6.7.0-rc5] Mover errors out and does not run from cron


dlandon • Priority: Minor

At times the mover errors out and does not run.

    Feb 24 04:40:48 MediaServer crond[1767]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null

From the mover script, this error means the mover was supposedly already running:

    start() {
      if [ -f $PIDFILE ]; then
        if ps h $(cat $PIDFILE) | grep mover ; then
            echo "mover: already running"
            exit 1
        fi
      fi

According to the log, the mover is not running. These are the contents of /var/run/; there is no mover.pid.

    -rw-r--r-- 1 root root      5 Feb 23 06:11 acpid.pid
    srw-rw-rw- 1 root root      0 Feb 23 06:11 acpid.socket=
    -rw-r--r-- 1 root root      6 Feb 24 04:40 apcupsd.pid
    -rw-r--r-- 1 root root      5 Feb 23 06:11 atd.pid
    drwxr-xr-x 2 root root     80 Feb 23 06:11 dbus/
    drwx------ 6 root root    140 Feb 23 06:12 docker/
    srw-rw---- 1 root docker    0 Feb 23 06:12 docker.sock=
    -rw-r--r-- 1 root root      4 Feb 23 06:12 dockerd.pid
    srw-rw-rw- 1 root root      0 Feb 23 06:11 emhttpd.socket=
    -rw-rw-rw- 1 root root      4 Feb 23 06:11 haveged.pid
    -rw-rw-rw- 1 root root      5 Feb 23 06:12 inetd.pid
    drwxr-xr-x 8 root root    360 Feb 23 06:12 libvirt/
    -rw-rw-rw- 1 root root     24 Feb 23 06:12 nginx.origin
    -rw-r--r-- 1 root root      5 Feb 23 06:12 nginx.pid
    srw-rw-rw- 1 root root      0 Feb 23 06:12 nginx.socket=
    -rw-r--r-- 1 root root      5 Feb 23 06:12 nmbd.pid
    drwxr-xr-x 2 root root     40 Aug  3  2018 nscd/
    -rw-r--r-- 1 root root      4 Feb 23 06:11 ntpd.pid
    -rw-r--r-- 1 root root      4 Feb 23 06:12 php-fpm.pid
    srw-rw---- 1 root users     0 Feb 23 06:12 php5-fpm.sock=
    drwxr-xr-x 2 root root     60 Feb 24 06:37 recycle.bin/
    -rw-rw-rw- 1 rpc  rpc       5 Feb 23 06:11 rpc.statd.pid
    drwxr-xr-x 2 rpc  root     40 Nov 15 13:00 rpcbind/
    -r--r--r-- 1 root root      0 Feb 23 06:11 rpcbind.lock
    srw-rw-rw- 1 root root      0 Feb 23 06:11 rpcbind.sock=
    -rw-r--r-- 1 root root      4 Feb 23 06:12 rsyslogd.pid
    -rw-r--r-- 1 root root      1 Feb 23 06:11 runlevel
    drwxr-xr-x 5 root root    100 Feb 23 06:11 samba/
    -rw------- 1 root root      5 Feb 23 06:11 sm-notify.pid
    -rw-r--r-- 1 root root      5 Feb 23 06:12 smbd.pid
    -rw-r--r-- 1 root root      5 Feb 23 06:12 sshd.pid
    -rw-r--r-- 1 root root      4 Feb 23 06:12 syslogd.pid
    srwxrwxrwx 1 root root      0 Feb 23 06:12 ttyd.sock=
    -rw-rw-r-- 1 root utmp   4608 Feb 24 06:35 utmp
    -rw-r--r-- 1 root root      5 Feb 23 06:12 winbindd.pid
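
With no mover.pid in /var/run, the "already running" branch should never fire. The check can be reproduced by hand; this is a minimal sketch, assuming $PIDFILE expands to /var/run/mover.pid in the actual script:

PIDFILE=/var/run/mover.pid
if [ -f "$PIDFILE" ] && ps h "$(cat "$PIDFILE")" | grep -q mover ; then
  echo "mover: already running"    # the branch that exits 1 in the script
else
  echo "no stale pid file found"   # expected here, given the listing above
fi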

    Contents of /etc/cron.d/root:

    # Generated docker monitoring schedule:
    10 0 * * * /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/dockerupdate.php check &> /dev/null
    
    # Generated system monitoring schedule:
    */1 * * * * /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null
    
    # Generated mover schedule:
    40 3 * * * /usr/local/sbin/mover &> /dev/null
    
    # Generated parity check schedule:
    0 9 1 * * /usr/local/sbin/mdcmd check NOCORRECT &> /dev/null || :
    
    # Generated plugins version check schedule:
    10 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/plugincheck &> /dev/null
    
    # Generated array status check schedule:
    20 0 * * * /usr/local/emhttp/plugins/dynamix/scripts/statuscheck &> /dev/null
    
    # Generated Unraid OS update check schedule:
    11 0 * * * /usr/local/emhttp/plugins/dynamix.plugin.manager/scripts/unraidcheck &> /dev/null
    
    # Generated local master browser check:
    */1 * * * * /usr/local/emhttp/plugins/dynamix.local.master/scripts/localmaster &> /dev/null
    
    # Generated ssd trim schedule:
    30 5 * * * /sbin/fstrim -a -v | logger &> /dev/null
    
    # Purge recycle bin at 3:00 every day:
    0 3 * * * /usr/local/emhttp/plugins/recycle.bin/scripts/rc.recycle.bin cron &> /dev/null
    
    # Refresh Recycle Bin trash sizes every minute:
    * * * * * /usr/local/emhttp/plugins/recycle.bin/scripts/get_trashsizes &> /dev/null

    crontab -l:

    # If you don't want the output of a cron job mailed to you, you have to direct
    # any output to /dev/null.  We'll do this here since these jobs should run
    # properly on a newly installed system.  If a script fails, run-parts will
    # mail a notice to root.
    #
    # Run the hourly, daily, weekly, and monthly cron jobs.
    # Jobs that need different timing may be entered into the crontab as before,
    # but most really don't need greater granularity than this.  If the exact
    # times of the hourly, daily, weekly, and monthly cron jobs do not suit your
    # needs, feel free to adjust them.
    #
    # Run hourly cron jobs at 47 minutes after the hour:
    47 * * * * /usr/bin/run-parts /etc/cron.hourly 1> /dev/null
    #
    # Run daily cron jobs at 4:40 every day:
    40 4 * * * /usr/bin/run-parts /etc/cron.daily 1> /dev/null
    #
    # Run weekly cron jobs at 4:30 on the first day of the week:
    30 4 * * 0 /usr/bin/run-parts /etc/cron.weekly 1> /dev/null
    #
    # Run monthly cron jobs at 4:20 on the first day of the month:
    20 4 1 * * /usr/bin/run-parts /etc/cron.monthly 1> /dev/null

The mover then does not run, and files are left on the cache. I'm pretty sure there aren't any plugins interfering with the mover cron entry.

     

Attachment: mediaserver-diagnostics-20190224-1122.zip




Comments

    "From the mover script, this error is from the mover supposedly already running."

     

Not necessarily. When the mover runs, it finishes with this statement:

    [[ $LOGLEVEL -gt 0 ]] && echo "mover: finished"

By default LOGLEVEL = 0, so the test fails; because this is the script's last statement, its failure becomes the mover's exit code 1.
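
This is easy to demonstrate in a shell (a minimal sketch of the short-circuit behaviour, not the full script):

LOGLEVEL=0
[[ $LOGLEVEL -gt 0 ]] && echo "mover: finished"
echo $?    # prints 1: the failed test is the last command executed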

     

Changing the code to:

    [[ $LOGLEVEL -eq 0 ]] || echo "mover: finished"

would solve the error-reporting issue.
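
The same quick check shows why the || form works; the successful test now supplies the final exit status:

LOGLEVEL=0
[[ $LOGLEVEL -eq 0 ]] || echo "mover: finished"
echo $?    # prints 0 whether or not the message is printed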

    Edited by bonienl
    1 minute ago, johnnie.black said:

    This came up before, apparently that happens when the mover runs on a schedule:

    Any script running on a schedule needs to terminate with exit code 0, otherwise an error is reported.
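
The parity check entry in /etc/cron.d/root above already uses that idiom: the trailing || : forces a zero exit status. As a sketch (not the official fix), the same guard could be added to the generated mover entry, though it only hides the nonzero status from crond rather than fixing it at the source:

# hypothetical variant of the generated mover schedule; ':' always succeeds
40 3 * * * /usr/local/sbin/mover &> /dev/null || :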

    18 minutes ago, bonienl said:

    Not necessarily. When the mover runs it finishes with this statement

The way I read the script, it didn't get that far.

     

    11 minutes ago, bonienl said:

    Any script running on a schedule needs to terminate with exit code 0, otherwise an error is reported.

    I agree.

     

This is not just a matter of the mover's exit code, or of the start/stop messages showing in the log. I have logging turned off, so I understand that the mover started/stopped messages won't appear in the log.

     

    Regardless, the mover failed to run.


As a test, I changed the exit code to see whether this was where the error occurred:

    start() {
      if [ -f $PIDFILE ]; then
        if ps h $(cat $PIDFILE) | grep mover ; then
            echo "mover: already running"
            exit 10
        fi
      fi

    I got the same error:

    Feb 27 03:45:56 BackupServer crond[1675]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null

I think bonienl is right. The error occurs later in the script, and only when logging is turned off; that's why turning logging on makes the error go away.
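
One way to pin down which statement returns the nonzero status (my own suggestion, not something from the script) is to trace a manual run and capture the exit code:

bash -x /usr/local/sbin/mover 2>/tmp/mover.trace
echo "mover exited with: $?"
tail /tmp/mover.trace    # the last traced command is the one that set the status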


