[Plugin] CA User Scripts

12 hours ago, Squid said:

After editing the custom.Schedule.cron file, run

update_cron

You found a bug where it doesn't update that file if you delete a script that runs on a custom schedule. (It does update it if you disable the schedule.)

Thanks.  That appears to have resolved it.  No more crontab errors in the log.


Hi,

 

I have a few scripts set up to run on a cron schedule and they work great.

 

Is there a way I can trigger these scripts remotely somehow from another machine? As in being able to invoke a script with a CURL command or something similar?

 

Thanks.
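(No built-in HTTP trigger comes up in this thread, but one common workaround is to invoke the script over SSH from the other machine. The path below is an assumption about where the User Scripts plugin stores scripts; verify it on your own install.)

```shell
# Hypothetical: trigger a User Scripts script remotely via SSH.
# The script location is an assumption - check your server.
SCRIPT=/boot/config/plugins/user.scripts/scripts/my_script/script
CMD="bash $SCRIPT"
echo "$CMD"                # dry run: show what would be executed
# ssh root@tower "$CMD"    # uncomment to actually run it remotely
```

With key-based SSH auth this can be kicked off from any machine; there is nothing curl-specific built in.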

On 10/31/2018 at 11:04 AM, Hoopster said:

Thanks.  That appears to have resolved it.  No more crontab errors in the log.

Something else was going on.  The plugin does handle missing scripts and will exit gracefully without an error.  Too late now though to figure stuff out.

2 hours ago, Squid said:

Something else was going on.  The plugin does handle missing scripts and will exit gracefully without an error.  Too late now though to figure stuff out.

 

Before running update_cron I edited the customSchedule.cron file and noticed there were about four extra carriage-return/blank lines at the end of the file. I removed them, saved the file, ran update_cron, and the errors disappeared. Perhaps the extra lines were the problem? They were probably left over from my earlier editing attempts.

Edited by Hoopster

5 minutes ago, Hoopster said:

prior editing attempts

Yeah, that makes sense in retrospect. #015 is a \r, which is part of the Windows line ending (\r\n); pretty much everything else in the world uses just \n.

 

In the future, don't use Notepad to edit things, as it doesn't handle line endings correctly. Use Notepad++ or the CA Config File Editor.
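(If a file has already picked up Windows line endings, sed can strip them in place; a minimal sketch on a throwaway file — substitute the real cron file path for your own server.)

```shell
# Simulate a file saved by Notepad with \r\n line endings
printf 'line1\r\nline2\r\n' > /tmp/demo.cron
# Remove the trailing \r from every line, in place
sed -i 's/\r$//' /tmp/demo.cron
# cat -A marks line ends with $; a leftover \r would show up as ^M$
cat -A /tmp/demo.cron
```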

1 minute ago, Squid said:

In the future, don't use Notepad to edit things, as it doesn't handle line endings correctly. Use Notepad++,

Yeah, I generally always use Notepad++, but as I recall now, my initial edit was done with plain Notepad, and that likely created the problem.


Hello,

I *LOVE* this plugin! However, I am running into an issue. My cache drive failed, so I had to reinstall all of my dockers. (I now have backups set up and a cache pool.) The problem I am running into is that none of the scheduled jobs get executed. They work fine if I start them manually, but they will not run automatically on the schedule. Anyone have any ideas? Did I miss something? I had this set up and working fine before the cache drive went T.U.

 

I've tried rebooting and using the crond stop and start commands.

4 minutes ago, Senson said:

Hello,

I *LOVE* this plugin! However, I am running into an issue. My cache drive failed, so I had to reinstall all of my dockers. (I now have backups set up and a cache pool.) The problem I am running into is that none of the scheduled jobs get executed. They work fine if I start them manually, but they will not run automatically on the schedule. Anyone have any ideas? Did I miss something? I had this set up and working fine before the cache drive went T.U.

I've tried rebooting and using the crond stop and start commands.

What version of unRAID are you running? There was a cron-related problem with some of the 6.6.x releases that is corrected in the 6.6.5 release.

1 hour ago, Senson said:

6.6.4... I will upgrade immediately. Thank you!

Yes, that fixed it. Thanks again


Good plugin.

But when I activate this plugin, local cron does not work.

I set some jobs in crontab using crontab -e, but they do not run.

Mover does not run either.

Is this expected? Does every cron job have to move to the plugin?

9 minutes ago, forumi0721 said:

Good plugin.

But when I activate this plugin, local cron does not work.

I set some jobs in crontab using crontab -e, but they do not run.

Mover does not run either.

Is this expected? Does every cron job have to move to the plugin?

Sounds like maybe you’re using unRAID 6.6.4. There is a bug which prevents crontab from running. You’ll need to update to 6.6.5.

3 hours ago, wgstarks said:

Sounds like maybe you’re using unRAID 6.6.4. There is a bug which prevents crontab from running. You’ll need to update to 6.6.5.

Thanks, I'll update to unRAID 6.6.5.


I'm hoping someone can help me with a script please.  

 

I run the following command in a script to upload files via rclone to google:

 

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~*  --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m 

What I've been doing is rotating the /mnt/user/rclone_upload/google_vfs/ part through my drives to stop them all spinning up at the same time, e.g. /mnt/disk1/rclone_upload/google_vfs/, then /mnt/disk2/rclone_upload/google_vfs/, then /mnt/disk3/rclone_upload/google_vfs/, and so on. But what I'm finding is that the script never makes it to disk7, so it never frees up space.

 

Is there a way to dynamically determine the disk with the least space free (in GB rather than %, if possible) and then upload from that one? E.g. the final command would be:

 

rclone move /mnt/VARIABLE SETTING CORRECT DISK/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~*  --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m 

Thanks in advance for any help


It's possible if you can do some bash scripting. I'd start by parsing the output of df (or a similar command) for the disks of interest, then pull out specific columns. It might begin to look like "df -h | grep /mnt/disk", probably fed through awk scriptlets to calculate the minimum free space and the associated disk number.

I'm not sure if it would be /mnt/disk# or /dev/md#.

I'm away from the server now, so I can't try running this to get better script code.

Edited by BRiT


Try the following...

 

df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min {line = $6; min = $4} END {print line}'

 

We shouldn't use df -h, because it switches units (TB vs. GB), which would break the numeric comparison. Column 4 of df is available space and column 6 is the mount point. (Column 3 would be used space, and column 5 the percentage used.)

 

Here's my input and output results, split between two steps so you can get an idea for what the first part does before awk is thrown in:

 

# df | grep /mnt/disk

/dev/md1        7811939620  4640646784  3171292836  60% /mnt/disk1
/dev/md2        7811939620  7444221980   367717640  96% /mnt/disk2
/dev/md3        7811939620   895848128  6916091492  12% /mnt/disk3

 

# df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min { line = $6; min = $4} END { print line}'
/mnt/disk2

Edited by BRiT
Correcting the column numbers.

50 minutes ago, BRiT said:

Try the following...

 


df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min {line = $6; min = $4} END {print line}'

 

We shouldn't use df -h, because it switches units (TB vs. GB), which would break the numeric comparison. Column 4 of df is available space and column 6 is the mount point. (Column 3 would be used space, and column 5 the percentage used.)

 

Here's my input and output results, split between two steps so you can get an idea for what the first part does before awk is thrown in:

 


# df | grep /mnt/disk

/dev/md1        7811939620  4640646784  3171292836  60% /mnt/disk1
/dev/md2        7811939620  7444221980   367717640  96% /mnt/disk2
/dev/md3        7811939620   895848128  6916091492  12% /mnt/disk3

 


# df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min { line = $6; min = $4} END { print line}'
/mnt/disk2

Thanks - I'll have a play to see if I can work out how to complete the script

6 hours ago, DZMM said:

Thanks - I'll have a play to see if I can work out how to complete the script

 

To assign the drive to a variable named DRIVE, use quotes and $() to capture the command's output:

 

DRIVE="$(df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min { line = $6; min = $4} END { print line}')"

 

Then to see the result you could do: echo ${DRIVE}

You should then be able to refer to that variable in your command ...

 

rclone move ${DRIVE}/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

 

Using echo to see what the command expands to:

echo rclone move ${DRIVE}/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

produces the following output:

rclone move /mnt/disk2/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

 

So, putting it together to upload only from the drive with the least free space:

 


DRIVE="$(df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min { line = $6; min = $4} END { print line}')"

rclone move ${DRIVE}/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

 


Hi all, noobie here. I have a script set up to run every 5 mins to check CPU temps and adjust the server fans, but User Scripts does not seem to be running it.

Running it manually works correctly, but on 5 * * * * or one of the pre-defined schedules such as hourly it does not seem to run. I checked the logs and there's nothing there.

Any ideas? I'm on version 6.6.5.

Thank you.
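(One thing worth double-checking in the schedule itself: in cron syntax, `5 * * * *` fires once per hour, at minute 5, not every five minutes; every five minutes is written with a step value.)

```
# crontab fields: minute hour day-of-month month day-of-week
5 * * * *     /path/to/script    # runs at minute 5 of every hour
*/5 * * * *   /path/to/script    # runs every five minutes
```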


Is it possible to run scripts only when the drives are spun up? If not, can that be added?

On 11/25/2018 at 1:45 AM, khile said:

Hi all, noobie here. I have a script set up to run every 5 mins to check CPU temps and adjust the server fans, but User Scripts does not seem to be running it.

Running it manually works correctly, but on 5 * * * * or one of the pre-defined schedules such as hourly it does not seem to run. I checked the logs and there's nothing there.

Any ideas? I'm on version 6.6.5.

Thank you.

Just use the Dynamix Auto Fan Control plugin.

2 minutes ago, nuhll said:

Is it possible to run scripts only when the drives are spun up? If not, can that be added?

For what purpose?

12 minutes ago, Squid said:

For what purpose?

Currently I have a script:

Quote

 


#!/bin/bash
FROM_DIR=/mnt/user/downloads/completed/Filme
TO_DIR=/mnt/user/downloads/DVDR

FILES="$(find "$FROM_DIR" -iname '*.iso' -or -iname '*.img')"
for FILE in $FILES; do
    DIR="$(basename "$(dirname "$FILE")")"
    mkdir -p "$TO_DIR"/"$DIR"
    mv "$FILE" "$TO_DIR"/"$DIR"
done

logger "DVDRs moved."

# -mtime +7 matches files older than 7 days ("-mtime 7" matches only exactly 7)
find /mnt/user/downloads/DVDR/ -mtime +7 -delete

touch /mnt/user/downloads/DVDR/touch
find /mnt/user/downloads/DVDR/* -type d -empty -exec rmdir {} \;
rm /mnt/user/downloads/DVDR/touch

logger "DVDRs older than a week deleted."
 

 

which checks my download directory for new DVDRs and moves them to a location where HandBrake converts them to a usable file format.

 

Sadly, this makes the disks spin up:

Nov 29 11:47:01 Unraid-Server root: DVDRs moved.
Nov 29 11:47:01 Unraid-Server root: DVDRs older than a week deleted.
Nov 29 12:47:01 Unraid-Server root: DVDRs moved.
Nov 29 12:47:01 Unraid-Server root: DVDRs older than a week deleted.
Nov 29 12:49:26 Unraid-Server kernel: mdcmd (153): spindown 4
Nov 29 13:47:01 Unraid-Server root: DVDRs moved.
Nov 29 13:47:01 Unraid-Server root: DVDRs older than a week deleted.
Nov 29 14:09:31 Unraid-Server kernel: mdcmd (154): spindown 4
Nov 29 14:39:45 Unraid-Server kernel: mdcmd (155): spindown 4
Nov 29 14:47:01 Unraid-Server root: DVDRs moved.
Nov 29 14:47:01 Unraid-Server root: DVDRs older than a week deleted.
Nov 29 15:47:01 Unraid-Server root: DVDRs moved.
Nov 29 15:47:01 Unraid-Server root: DVDRs older than a week deleted.
Nov 29 15:57:32 Unraid-Server kernel: mdcmd (156): spindown 2
Nov 29 15:57:38 Unraid-Server kernel: mdcmd (157): spindown 4
Nov 29 16:19:26 Unraid-Server kernel: mdcmd (158): spindown 4


I changed it to daily for now, so it doesn't spin the disks up every hour. It would be cooler if it only ran when the disks were already spun up (then I could check every 15 minutes or so and only do something when the disks are up).
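(There's no built-in option for this in the plugin, but a script can guard itself: on the server, `hdparm -C /dev/sdX` reports `drive state is:  active/idle` or `standby`, and the script can exit early when the disk is asleep. A sketch of the guard logic, using sample strings in place of real hdparm output so no hardware is touched; the device name is yours to fill in.)

```shell
# Decide whether to proceed based on hdparm -C style output.
# In a real script: state="$(hdparm -C /dev/sdX)"
is_spun_up() {
    case "$1" in
        *active*) return 0 ;;   # "active/idle" - disk is spinning
        *)        return 1 ;;   # "standby"/"sleeping" - leave it alone
    esac
}

if is_spun_up "drive state is:  active/idle"; then
    echo "disk is up - run the move job"
fi
if ! is_spun_up "drive state is:  standby"; then
    echo "disk is asleep - skip this run"
fi
```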

Edited by nuhll

