[Plugin] CA User Scripts



Hi there, good morning. Thanks for the awesome plugin!

I set my own script to run as a cron job this morning but it didn't execute. I had the schedule set to Custom with the cron time 30 2 * * TUE

(2:30 AM every Tuesday). Not sure why it did not execute this morning. Was my cron format wrong? Can you please advise?

 

Thanks in advance.

Link to comment
13 hours ago, lamp said:

Hi there, good morning. Thanks for the awesome plugin!

I set my own script to run as a cron job this morning but it didn't execute. I had the schedule set to Custom with the cron time 30 2 * * TUE

(2:30 AM every Tuesday). Not sure why it did not execute this morning. Was my cron format wrong? Can you please advise?

 

Thanks in advance.

 

If I have questions about the cron schedule, I rely on this: https://crontab.guru/

 

Truly a godsend.
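For reference, the five cron fields are minute, hour, day of month, month, and day of week. A quick sketch of how that schedule lines up; note that some cron implementations only accept numeric day-of-week values (0 or 7 = Sunday, 2 = Tuesday), so the numeric form is the safer bet if TUE isn't recognised:

#  minute  hour  day-of-month  month  day-of-week
   30      2     *             *      TUE     # 02:30 every Tuesday (name form)
   30      2     *             *      2       # same schedule, numeric form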

 

Link to comment

Is it possible to add an optional feature to send the output of the script (the same as what is shown if you run the task in the Web UI) as a notification using the unRAID Notification system, so we can see when scripts have run and whether any issues were reported? Similar to how in cron you can set it to email you on completion (assuming you have the right stuff set up).

Link to comment
On 8/23/2021 at 1:53 PM, Squid said:

Look in /config/plugins/user.scripts/scripts on the flash drive and change the folder names to avoid special characters and see if that makes a difference

Solved it: a script that needed to be deleted was messing with the others. Thanks!
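For anyone hitting the same thing, here is a rough sketch of one way to spot script folder names containing special characters on the flash drive (just an example filter; adjust the allowed character set to taste):

# list script folders whose names contain anything other than letters, digits, spaces, underscores or hyphens
ls /boot/config/plugins/user.scripts/scripts/ | grep -v '^[A-Za-z0-9 _-]*$'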

Link to comment
On 8/29/2021 at 5:27 PM, timethrow said:

Is it possible to add an optional feature to send the output of the script (the same as what is shown if you run the task in the Web UI) as a notification using the unRAID Notification system, so we can see when scripts have run and whether any issues were reported? Similar to how in cron you can set it to email you on completion (assuming you have the right stuff set up).

You can use '/usr/local/emhttp/webGui/scripts/notify' in your script:

notify [-e "event"] [-s "subject"] [-d "description"] [-i "normal|warning|alert"] [-m "message"] [-x] [-t] [-b] [add]
  create a notification
  use -e to specify the event
  use -s to specify a subject
  use -d to specify a short description
  use -i to specify the severity
  use -m to specify a message (long description)
  use -l to specify a link (clicking the notification will take you to that location)
  use -x to create a single notification ticket
  use -r to specify recipients and not use default
  use -t to force send email only (for testing)
  use -b to NOT send a browser notification
  all options are optional

notify init
  Initialize the notification subsystem.

notify smtp-init
  Initialize sendmail configuration (ssmtp in our case).

notify get
  Output a json-encoded list of all the unread notifications.

notify archive file
  Move file from 'unread' state to 'archive' state.
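For example, a minimal sketch of firing a notification at the end of a user script (the backup command is just a placeholder; the flags are the ones listed above):

#!/bin/bash
# placeholder for whatever real work the script does
/usr/local/bin/my_backup.sh

# send an unRAID notification when it finishes
/usr/local/emhttp/webGui/scripts/notify -e "User Scripts" -s "Backup script" \
  -i "normal" -d "Backup script finished" -m "Finished at $(date)"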

 

Link to comment
20 hours ago, KnifeFed said:

You can use '/usr/local/emhttp/webGui/scripts/notify' in your script:

notify [-e "event"] [-s "subject"] [-d "description"] [-i "normal|warning|alert"] [-m "message"] [-x] [-t] [-b] [add]
  create a notification
  use -e to specify the event
  use -s to specify a subject
  use -d to specify a short description
  use -i to specify the severity
  use -m to specify a message (long description)
  use -l to specify a link (clicking the notification will take you to that location)
  use -x to create a single notification ticket
  use -r to specify recipients and not use default
  use -t to force send email only (for testing)
  use -b to NOT send a browser notification
  all options are optional

notify init
  Initialize the notification subsystem.

notify smtp-init
  Initialize sendmail configuration (ssmtp in our case).

notify get
  Output a json-encoded list of all the unread notifications.

notify archive file
  Move file from 'unread' state to 'archive' state.

 

 

Thanks, but this only works if you put that in for every eventuality in your script, whereas having the plugin send it after a script has completed (whether successful or not) ensures it's always sent.

 

For example, if my script encounters an error that was not captured, the notify may not be sent if it's included in the script manually, whereas this way it will always be sent, as long as the underlying plugin works.

 

I do have notify in some of my user scripts, and I also redirect a lot of my output to log files I store on the array (in case I need it), but this is more about being notified when a script has run and seeing its output, similar to the cron service on a vanilla Linux server. It lets you check quickly and easily whether a script ran and whether it produced the expected output.
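In the meantime, one way to approximate this per script (just a sketch, not a plugin feature) is to capture the script's own output and fire notify from an EXIT trap, so a notification goes out whether the script succeeds or fails:

#!/bin/bash
LOG=$(mktemp)
exec > >(tee "$LOG") 2>&1          # capture everything the script prints

on_exit() {
    rc=$?
    level="normal"; [ "$rc" -ne 0 ] && level="warning"
    /usr/local/emhttp/webGui/scripts/notify -e "User Scripts" \
        -s "My script finished (exit $rc)" -i "$level" \
        -m "$(tail -c 4000 "$LOG")"
    rm -f "$LOG"
}
trap on_exit EXIT

# ... actual script body goes here ...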

Link to comment

Silly question, but I can't seem to find an answer for it:

There are the pre-set Hourly, Daily, Weekly, and Monthly schedules, but I can't seem to find when they run (i.e. what time the daily kicks off, the day/time for weekly, etc.). Is there any way to adjust them to your preferred values?

Link to comment
25 minutes ago, IMTheNachoMan said:

How can I either email the output of scripts or have my scripts send an email?

I have an rsync backup script that I run through User Scripts that sends an email summary of the backup process.  Here are the relevant portions of the script I use:

 

# Set up email header
echo "To: xxxxxxx@yahoo.com" >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo "From: xxxxxxx@gmail.com" >> /boot/logs/cronlogs/BackupNAS_Summary.log
echo "Subject: MediaNAS to BackupNAS rsync summary" >> /boot/logs/cronlogs/BackupNAS_Summary.log
# blank line separates the mail headers from the body
echo >> /boot/logs/cronlogs/BackupNAS_Summary.log

# Send email of summary of results
ssmtp xxxxxxxx@yahoo.com < /boot/logs/cronlogs/allshares.log
cd /boot/logs/cronlogs
mv BackupNAS.zip "$(date +%Y%m%d_%H%M)_BackupNAS.zip"
rm *.log

 

The script concatenates the share backup info into allshares.log, which is then emailed to a yahoo.com address.
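The key detail is that ssmtp expects a complete message on stdin: header lines, then a blank line, then the body, which is why the script builds those echo lines first. A stripped-down sketch of the same idea (addresses and the summary path here are just placeholders):

#!/bin/bash
# placeholder addresses - adjust to your own setup
{
  echo "To: you@example.com"
  echo "From: server@example.com"
  echo "Subject: rsync backup summary"
  echo                                # blank line ends the headers
  cat /boot/logs/cronlogs/allshares.log
} | ssmtp you@example.com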

Link to comment

I guess this is simple.

But, how to:

 

  • Delete all files older than 14 days in /mnt/user/surveillance_camera/cam1 and /mnt/user/surveillance_camera/cam2 (including subfolders)
  • Then remove all empty folders within the same paths.

It might be something I could figure out for myself, but I don't have an unRAID development environment, and I'd rather not run delete commands I'm not sure about on my production server.

Link to comment
24 minutes ago, Flemming said:

Delete all files older than 14 days in /mnt/user/surveillance_camera/cam1 and /mnt/user/surveillance_camera/cam2 (including subfolders)

find /mnt/user/surveillance_camera/cam[12] -type f -mtime +14 -delete

 

28 minutes ago, Flemming said:

Then remove all empty folders within the same paths.

find /mnt/user/surveillance_camera/cam[12] -type d -empty -delete
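Since the concern was about running delete commands on a production server, both commands can be previewed first by replacing -delete with -print; nothing is removed, the matches are just listed:

find /mnt/user/surveillance_camera/cam[12] -type f -mtime +14 -print
find /mnt/user/surveillance_camera/cam[12] -type d -empty -print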

Link to comment
  • 2 weeks later...

Hey All, 

 

I have User Scripts set to run "Move Array Only Shares to Array" daily, and I also just ran it manually, but I still have share data on my third cache pool. The related share has cache set to No.

 

The mover doesn't seem to resolve this either.

 

I feel like I read somewhere that there was a bug with multiple cache pools and spaces in share names, but I'm curious if there is a solution anyone can offer.

 

As a last resort, if you know how to move these files from the pool to the array without breaking anything, via another Docker container or a command, I could attempt that.

Link to comment
17 minutes ago, Aerodb said:

Array Only Shares

Do you mean shares that are set to Use cache: No?

 

Nothing can move open files. Are you sure they aren't open?

 

Mover won't move duplicates. How are you handling duplicates?

 

How are you specifying the source path on cache? How are you specifying the destination path on array?

Link to comment
2 minutes ago, trurl said:

Do you mean shares that are set to Use cache: No?

 

Nothing can move open files. Are you sure they aren't open?

 

How are you specifying the source path on cache? How are you specifying the destination path on array?

1 - Yes, the share these files are associated with is set to cache: No, yet they are currently on the cache drive.

 

2 - I don't believe they are open; it's quite a few of them and I have no idea what or why they would be open for this long. Some have been on the machine, on the cache pool, for longer than the machine has been up (there have been restarts since they were written to the pool).

 

3 - I'm not sure how to answer this. Squid posted some user script templates/examples that I used, and I have had success with them thus far. I can provide the user script code if you think it would be helpful.

 

 

Link to comment
14 hours ago, Aerodb said:

1 - Yes, the share these files are associated with is set to cache: No, yet they are currently on the cache drive.

Mover ignores files for a share that has Use Cache=No. If you want them moved to the array then you need to change Use Cache to Yes (at least temporarily). Note that the Use Cache setting primarily determines where NEW files are placed - it does not stop old files from being left on the cache (conversely, the Only setting does not automatically move files from array to cache). Mover only gets involved for the Yes and Prefer settings.

Link to comment
46 minutes ago, itimpi said:

Mover ignores files for a share that has Use Cache=No. If you want them moved to the array then you need to change Use Cache to Yes (at least temporarily). Note that the Use Cache setting primarily determines where NEW files are placed - it does not stop old files from being left on the cache (conversely, the Only setting does not automatically move files from array to cache). Mover only gets involved for the Yes and Prefer settings.

So after changing the share associated with the files on the cache pool to Use Cache: Yes, I started the mover. The mover finished but did not move the files; they remained on the cache pool drives.

 

I really think this was related to the issue I mentioned. I don't think I made it up; I must have read somewhere that there was an issue with having multiple cache pools and using the mover on shares that have a space in the share name.

Link to comment
5 hours ago, itimpi said:

Did you make sure that the correct pool was referenced when you changed the setting to Yes? There have been reports of files ending up on a pool not named in the share setting - they will not get moved.
 

 

Can confirm this did fix the issue. Thank you so much.

 

Two follow-up questions:

 

1 - Do we know how this issue starts? I can assure you this share has never been allowed to use cache.

2- will this "move all none cache shares to array" script will ever be adapted to handle this? im asking if its possible to do, rather than if its being worked on. 

Link to comment
1 hour ago, Aerodb said:

Can confirm this did fix the issue. Thank you so much.

 

Two follow-up questions:

 

1 - Do we know how this issue starts? I can assure you this share has never been allowed to use cache.

2- will this "move all none cache shares to array" script will ever be adapted to handle this? im asking if its possible to do, rather than if its being worked on. 

The issue starts when a file is 'moved' to another share at the Linux level (rather than copied and deleted), either via the command line or via a container. Linux (which does not understand User Shares) implements 'move' by first trying a rename when source and target appear to be on the same mount point (/mnt/user in the case of user shares), and only doing a copy+delete if the rename fails. The rename succeeds, leaving the file on the original drive, so the copy/delete never gets triggered. Doing an explicit copy+delete gives the desired result, or (in the case of containers) map the paths so they appear to be different mount points.
 

2) As it is not part of standard UnRaid, that would be up to whoever maintains the script. It should be possible to get the script to handle this correctly by making it get the list of pool names from /boot/config/pools on the flash drive. Having said that, there are valid use cases for keeping files for non-cached shares on a cache/pool (and I know of users who exploit this behaviour), so the script would break those use cases unless it is explicitly written to allow for them.
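As a rough illustration of the explicit copy+delete mentioned above (share and file names here are only placeholders):

# mv between user shares may be satisfied by a rename on the same disk,
# leaving the file where it was; an explicit copy followed by a delete
# lets the destination share's settings decide where the file lands
cp -a "/mnt/user/ShareA/file.bin" "/mnt/user/ShareB/file.bin" && rm "/mnt/user/ShareA/file.bin"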

Link to comment
15 hours ago, itimpi said:

The issue starts when a file is 'moved' to another share at the Linux level (rather than copied and deleted), either via the command line or via a container. Linux (which does not understand User Shares) implements 'move' by first trying a rename when source and target appear to be on the same mount point (/mnt/user in the case of user shares), and only doing a copy+delete if the rename fails. The rename succeeds, leaving the file on the original drive, so the copy/delete never gets triggered. Doing an explicit copy+delete gives the desired result, or (in the case of containers) map the paths so they appear to be different mount points.
 

2) As it is not part of standard UnRaid, that would be up to whoever maintains the script. It should be possible to get the script to handle this correctly by making it get the list of pool names from /boot/config/pools on the flash drive. Having said that, there are valid use cases for keeping files for non-cached shares on a cache/pool (and I know of users who exploit this behaviour), so the script would break those use cases unless it is explicitly written to allow for them.

That makes total sense. I can also confirm that I have used a container to move files, so that's very likely what happened.

 

thank you!

Link to comment
