Hoopster Posted October 31, 2018 12 hours ago, Squid said: After editing the custom.Schedule.cron file, update_cron You found a bug where it doesn't update that file if you delete a script that runs on a custom schedule. (It does update it if you disable the schedule) Thanks. That appears to have resolved it. No more crontab errors in the log.
Kaz Posted November 1, 2018 Hi, I have a few scripts set up to run on a cron schedule and they work great. Is there a way I can trigger these scripts remotely from another machine, e.g. by invoking a script with a curl command or something similar? Thanks.
Squid Posted November 2, 2018 On 10/31/2018 at 11:04 AM, Hoopster said: Thanks. That appears to have resolved it. No more crontab errors in the log. Something else was going on. The plugin does handle missing scripts and will exit gracefully without an error. Too late now, though, to figure it out.
Hoopster Posted November 2, 2018 Share Posted November 2, 2018 (edited) 2 hours ago, Squid said: Something else was going on. The plugin does handle missing scripts and will exit gracefully without an error. Too late now though to figure stuff out. Before running update_cron I edited the customSchedule.cron file and noticed there were about four extra carriage return/blank lines at the end of the file. I removed them, saved the file and ran update_cron and the errors disappeared. Perhaps the extra lines were a problem? Probably there as a result of my prior editing attempts. Edited November 2, 2018 by Hoopster Quote Link to comment
Squid Posted November 2, 2018 5 minutes ago, Hoopster said: prior editing attempts Yeah, that makes sense in retrospect. #015 is a \r (carriage return), which is part of the Windows line ending (\r\n); pretty much everything else in the world uses just \n. In the future, don't use Notepad to edit things, as it doesn't handle line endings correctly. Use Notepad++ or the CA Config File Editor.
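(As a side note on the fix itself: if a file has already picked up Windows \r\n line endings, they can be stripped from the shell with tr. The file path here is just a demo, not the actual cron file.)

```shell
# Simulate a Notepad-edited file with \r\n (CRLF) line endings -- demo path only.
printf 'line1\r\nline2\r\n' > /tmp/demo.cron

# tr -d '\r' deletes every carriage-return byte, leaving Unix \n endings.
tr -d '\r' < /tmp/demo.cron > /tmp/demo.cron.fixed
mv /tmp/demo.cron.fixed /tmp/demo.cron
```

Where it is installed, `dos2unix` on the file does the same job in one step.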
Hoopster Posted November 2, 2018 1 minute ago, Squid said: In the future, don't use Notepad to edit things as it doesn't understand line endings correctly. Use Notepad++, Yeah, I generally use Notepad++, but as I recall now, my initial edit was done with plain Notepad, and that likely created the problem.
Senson Posted November 11, 2018 Hello, I *LOVE* this plugin! However, I am running into an issue. My cache drive failed, so I had to reinstall all of my dockers (I now have backups set up and a cache pool). The problem I am running into is that none of the scheduled jobs get executed. They work fine if I start them manually, but they will not run automatically on the schedule. Anyone have any ideas? Did I miss something? I had this set up and working fine before the cache drive went T.U. I've tried rebooting and using the crond stop and start commands.
itimpi Posted November 11, 2018 4 minutes ago, Senson said: Hello, I *LOVE* this plugin! However, I am running into an issue. My cache drive failed so I had to reinstall all of my dockers. The problem I am running in to is that none of the scheduled jobs get executed. They work fine if I start them manually, but they will not run automatically on the schedule. What version of unRAID are you running? There was a cron-related problem with some of the 6.6.x releases that was corrected in the 6.6.5 release.
Squid Posted November 11, 2018 unRaid 6.6.4? Update if so.
Senson Posted November 11, 2018 6.6.4... I will upgrade immediately. Thank you!
Senson Posted November 12, 2018 1 hour ago, Senson said: 6.6.4... I will upgrade immediately. Thank you! Yes, that fixed it. Thanks again.
forumi0721 Posted November 23, 2018 Good plugin, but when I activate it, local cron stops working. I set up some jobs in crontab using crontab -e, but they don't run. The mover doesn't run either. Is this expected? Does every cron job have to move to the plugin?
wgstarks Posted November 23, 2018 9 minutes ago, forumi0721 said: Good plugin. But when I active this plugin. local cron does not work. I did set some jobs in crontab, that using crontab -e, but it doesnot works. Also mover not works. Is this alright? Every cron job must move to plugin? Sounds like maybe you’re using unRAID 6.6.4. There is a bug which prevents crontab from running. You’ll need to update to 6.6.5.
forumi0721 Posted November 23, 2018 3 hours ago, wgstarks said: Sounds like maybe you’re using unRAID 6.6.4. There is a bug which prevents crontab from running. You’ll need to update to 6.6.5. Thanks, I'll update to unRAID 6.6.5.
DZMM Posted November 24, 2018 I'm hoping someone can help me with a script, please. I run the following command in a script to upload files via rclone to Google:

rclone move /mnt/user/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

What I've been doing is rotating the /mnt/user/rclone_upload/google_vfs/ bit through my disks to stop them all spinning up at the same time, e.g. /mnt/disk1/rclone_upload/google_vfs/, then /mnt/disk2/rclone_upload/google_vfs/, then /mnt/disk3/rclone_upload/google_vfs/, etc. But what I'm finding is that the script never makes it to disk7, so it never frees up space. Is there a way to dynamically determine the disk with the least GB free (rather than %, if possible) and then upload from that one? E.g. the final command would be:

rclone move /mnt/VARIABLE SETTING CORRECT DISK/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

Thanks in advance for any help.
BRiT Posted November 24, 2018 Share Posted November 24, 2018 (edited) Its possible if you can do some bash scripting. I'd start off with trying to parse output of df or similiar command for the disks of interest before trying to parse out specific columns. It might beging to look like this ... "df -h | grep /mnt/disk", and probably fed through "awk" scriplets to calculate min free and associated disk #. Im not sure if it would be /mnt/disk# or /dev/md#. Im away from the server now so I can't try running this to get better script code. Edited November 24, 2018 by BRiT Quote Link to comment
BRiT Posted November 24, 2018 Share Posted November 24, 2018 (edited) Try the following... df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min {line = $6; min = $4} END {print line}' We shouldn't use df -h because it switches units, from TB to GB. Column 4 of "df" is Available Space. Column 6 of "df" is the disk filesystem label. Column 3 would be Used Space, and column 5 would be Percentage Used. Here's my input and output results, split between two steps so you can get an idea for what the first part does before awk is thrown in: # df | grep /mnt/disk /dev/md1 7811939620 4640646784 3171292836 60% /mnt/disk1 /dev/md2 7811939620 7444221980 367717640 96% /mnt/disk2 /dev/md3 7811939620 895848128 6916091492 12% /mnt/disk3 # df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min { line = $6; min = $4} END { print line}' /mnt/disk2 Edited November 24, 2018 by BRiT Correcting the column numbers. 1 Quote Link to comment
DZMM Posted November 24, 2018 50 minutes ago, BRiT said: Try the following... df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min {line = $6; min = $4} END {print line}' Thanks - I'll have a play to see if I can work out how to complete the script
BRiT Posted November 24, 2018 6 hours ago, DZMM said: Thanks - I'll have a play to see if I can work out how to complete the script To assign the disk to a variable named DRIVE, use quotes and $() to capture the command's output:

DRIVE="$(df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min { line = $6; min = $4} END { print line}')"

Then to see the result you could do:

echo ${DRIVE}

You should then be able to refer to that variable in your command:

rclone move ${DRIVE}/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

Using echo to see what the command expands to:

echo rclone move ${DRIVE}/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

produces the following output:

rclone move /mnt/disk2/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m

So, putting it together to only upload from the drive with the least free space:

DRIVE="$(df | grep /mnt/disk | awk 'NR == 1 {line = $6; min = $4} NR > 1 && $4 < min { line = $6; min = $4} END { print line}')"
rclone move ${DRIVE}/rclone_upload/google_vfs/ gdrive_media_vfs: -vv --drive-chunk-size 512M --checkers 3 --fast-list --transfers 2 --exclude .unionfs/** --exclude *fuse_hidden* --exclude *_HIDDEN --exclude .recycle** --exclude *.backup~* --exclude *.partial~* --delete-empty-src-dirs --fast-list --bwlimit 15500k --tpslimit 3 --min-age 30m
DZMM Posted November 24, 2018 @BRiT Thanks for pulling this together - brilliant!
khile Posted November 25, 2018 Hi all, noobie here. I have a script set up to run every 5 minutes to check CPU temps and adjust the server fans, but User Scripts does not seem to be running it. Running it manually outputs correctly, but with 5 * * * * or one of the pre-defined schedules such as hourly it does not seem to run. I checked the logs and there's nothing there. Any ideas? I'm on version 6.6.5. Thank you.
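(Worth noting on the schedule string itself: `5 * * * *` fires once an hour, at minute 5, not every five minutes. An every-five-minutes schedule uses a step value. The script path below is made up purely for illustration.)

```shell
# Runs once per hour, at 5 minutes past the hour:
# 5 * * * * /path/to/fan_check.sh
#
# Runs every five minutes -- note the "*/5" step syntax in the minute field:
# */5 * * * * /path/to/fan_check.sh
```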
NewDisplayName Posted November 29, 2018 Is it possible to run scripts only when the drives are spun up? If not, can that be added?
NewDisplayName Posted November 29, 2018 On 11/25/2018 at 1:45 AM, khile said: hi all noobie here i have a script set up to run every 5 mins to check cpu temps and ajust server fans but user scripts does not seem to be working Just use the Dynamix Auto Fan Control plugin.
Squid Posted November 29, 2018 2 minutes ago, nuhll said: Is it possible to run scripts only when the drives are spin up? If not, can that be added? For what purpose?
NewDisplayName Posted November 29, 2018 Share Posted November 29, 2018 (edited) 12 minutes ago, Squid said: For what purpose? Currently i have a script: Quote #!/bin/bash FROM_DIR=/mnt/user/downloads/completed/Filme TO_DIR=/mnt/user/downloads/DVDR FILES="$(find "$FROM_DIR" -iname '*.iso' -or -iname '*.img')" for FILES in $FILES; do DIR="$(basename "$(dirname "$FILES")")" mkdir -p "$TO_DIR"/"$DIR" mv "$FILES" "$TO_DIR"/"$DIR" done logger DVDRs verschoben. find /mnt/user/downloads/DVDR/ -mtime 7 -delete touch /mnt/user/downloads/DVDR/touch find /mnt/user/downloads/DVDR/* -type d -empty -exec rmdir {} \; rm /mnt/user/downloads/DVDR/touch logger DVDRs älter als eine Woche gelöscht. which checkes my download directory for new DVDRs and moves them to a location where handbrake converts them to a useable file format. Sadly this makes disks spin up: Nov 29 11:47:01 Unraid-Server root: DVDRs verschoben. Nov 29 11:47:01 Unraid-Server root: DVDRs älter als eine Woche gelöscht. Nov 29 12:47:01 Unraid-Server root: DVDRs verschoben. Nov 29 12:47:01 Unraid-Server root: DVDRs älter als eine Woche gelöscht. Nov 29 12:49:26 Unraid-Server kernel: mdcmd (153): spindown 4 Nov 29 13:47:01 Unraid-Server root: DVDRs verschoben. Nov 29 13:47:01 Unraid-Server root: DVDRs älter als eine Woche gelöscht. Nov 29 14:09:31 Unraid-Server kernel: mdcmd (154): spindown 4 Nov 29 14:39:45 Unraid-Server kernel: mdcmd (155): spindown 4 Nov 29 14:47:01 Unraid-Server root: DVDRs verschoben. Nov 29 14:47:01 Unraid-Server root: DVDRs älter als eine Woche gelöscht. Nov 29 15:47:01 Unraid-Server root: DVDRs verschoben. Nov 29 15:47:01 Unraid-Server root: DVDRs älter als eine Woche gelöscht. Nov 29 15:57:32 Unraid-Server kernel: mdcmd (156): spindown 2 Nov 29 15:57:38 Unraid-Server kernel: mdcmd (157): spindown 4 Nov 29 16:19:26 Unraid-Server kernel: mdcmd (158): spindown 4 I changed it now to daily. so it doesnt spin up every hour. Cooler would be if its only done when disks are spun up. 
(then icould check every 15 min or so, and only do something when disks are up) Edited November 29, 2018 by nuhll Quote Link to comment
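(Not from the thread, but the kind of guard being asked about can be sketched with hdparm, whose -C flag reports a drive's power state as "active/idle" or "standby". The device path, helper name, and overall structure here are illustrative assumptions, not something the plugin provides.)

```shell
# Extract the power state from `hdparm -C`-style output.
# Prints "active/idle" or "standby", or nothing if the state is unrecognized.
parse_state() {
  printf '%s' "$1" | grep -o 'active/idle\|standby' || true
}

DEV=/dev/sdb   # illustrative device path; substitute the array disk of interest
state="$(parse_state "$(hdparm -C "$DEV" 2>/dev/null)")"

if [ "$state" = "active/idle" ]; then
  echo "disk is spun up - safe to run the move/cleanup work"
  # ...the actual move/cleanup commands would go here...
else
  echo "disk is in standby (or state unknown) - skipping this run"
fi
```

Scheduled every 15 minutes, a guard like this skips quietly while the disk sleeps and only does its work once something else has already spun the disk up.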