
[Plugin] CA User Scripts


I have all of mine go to a cache/SSD drive. No spin-ups required. 😃

 

I have a few shares:

(Uploads)

   ../incoming

   ../outgoing

(Movies)

(Tv)

Whenever I rip a movie or TV show, I send it to my (Uploads) share. Then Unraid, via User Scripts, checks that incoming folder, runs everything through HandBrake and a few other processes, and places the result in the outgoing folder. FileBot watches that folder, renames things, and places them in either the (Tv) or (Movies) share. Mover runs in the AM, when my Plex Docker spools up to check the drives for new stuff, and moves the files off the SSD onto the protected array.

 

My scripts run every 30 minutes so I don't have to babysit, but the final drop to my array/spinner drives happens at 6 AM, along with Plex.
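For anyone wanting to replicate something like this, a minimal sketch of the 30-minute processing script might look like the following. The share paths, file extension, and HandBrake preset here are assumptions for illustration, not taken from the post above.

#!/bin/bash
# Sketch only: transcode anything sitting in Uploads/incoming with HandBrakeCLI
# and drop the result in Uploads/outgoing for FileBot to pick up.
# The paths, extension, and preset are assumed, not the poster's actual setup.

IN=/mnt/user/Uploads/incoming
OUT=/mnt/user/Uploads/outgoing

for f in "$IN"/*.mkv; do
    [ -e "$f" ] || continue                    # nothing waiting to be processed
    name=$(basename "$f" .mkv)
    if HandBrakeCLI -i "$f" -o "$OUT/$name.mkv" --preset "Fast 1080p30"; then
        rm "$f"                                # only remove the source on success
    fi
done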

56 minutes ago, M2MIT said:

Just a quick error: the "What is Cron" link at the bottom is broken.

It always seems to be broken. Whenever I fix the link, it fails again in the future. Wikipedia has a decent write-up on cron.


Hey all,

 

I have created a script to shut down my server at 00:00 every day.

My script is:

/usr/local/sbin/powerdown

 

My custom schedule file is:

 

# Generated cron schedule for user.scripts
00*** /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/Shutdown Schedule/script > /dev/null 2>&1

 

In my Unraid dashboard:

(screenshot attached)

 

What should I do after that? Apparently my server does not shut down.

 

Thank You.

14 minutes ago, snake382 said:


Your custom cron entry is wrong. There must be spaces between each field.
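For reference, cron needs five space-separated fields (minute, hour, day of month, month, day of week). For a shutdown at midnight every day you would enter 0 0 * * * as the custom schedule rather than 00***, so the generated entry would look something like this (same script path as above):

# Generated cron schedule for user.scripts
0 0 * * * /usr/local/emhttp/plugins/user.scripts/startCustom.php /boot/config/plugins/user.scripts/scripts/Shutdown Schedule/script > /dev/null 2>&1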


OK, thanks. I will change it like this:

 

(screenshots attached)


Hi all, I have done some reading and searching and can't find a clear answer. Is there a way, outside of writing it into the script itself, to get a notification that a script ran and completed successfully or failed? I use this plugin to run some very basic rclone backups, but I have no way of knowing whether a run succeeded or failed. If it ran successfully, I'd love to see the log of what was uploaded that rclone puts out with the verbose option. If it failed, I'd like to know that as well so I don't assume it completed. Is this a built-in feature I am just missing? I believe I read that it is possible to get a notification, but it needs to be written into the script, and I am not familiar with writing scripts (outside of simple rclone commands).

 

Thanks!

-Jason


I don't intend to have the plugin send out a notification after a script runs.

 

Rather, you'll have to do some homework and then utilize the notify command:

root@ServerA:~# /usr/local/emhttp/plugins/dynamix/scripts/notify
notify [-e "event"] [-s "subject"] [-d "description"] [-i "normal|warning|alert"] [-m "message"] [-x] [-t] [add]
  create a notification
  use -e to specify the event
  use -s to specify a subject
  use -d to specify a short description
  use -i to specify the severity
  use -m to specify a message (long description)
  use -x to create a single notification ticket
  use -t to force send email only (for testing)
  all options are optional

notify init
  Initialize the notification subsystem.

notify smtp-init
  Initialize sendmail configuration (ssmtp in our case).

notify get
  Output a json-encoded list of all the unread notifications.

notify archive file
  Move file from 'unread' state to 'archive' state.

Then execute the appropriate notify command based upon, say, the exit code of your rclone command.
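As an illustration (not from the original post), a wrapper along these lines would send a notification either way. The rclone source path, remote name, and log file below are hypothetical:

#!/bin/bash
# Sketch only: run an rclone backup, then notify based on the exit code.
# The source path, remote name, and log file are hypothetical examples.
LOG=/tmp/rclone_backup.log

rclone copy /mnt/user/backups remote:backups -v --log-file="$LOG"

if [ $? -eq 0 ]; then
    /usr/local/emhttp/plugins/dynamix/scripts/notify -e "rclone backup" -s "Backup finished" -i "normal" -m "rclone completed successfully. Log: $LOG"
else
    /usr/local/emhttp/plugins/dynamix/scripts/notify -e "rclone backup" -s "Backup FAILED" -i "alert" -m "rclone exited with an error. Check $LOG"
fi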


User Scripts can't talk to Docker containers that have their own IPs anymore, right? Unless I set up VLANs, etc.? I was using a script to query my UPS, but I can't upload the results to InfluxDB anymore since that container is on its own IP now.

48 minutes ago, Andiroo2 said:


With "br0" it should could?! Atleast thats how i do it.


Anyone have an idea why my script doesn't get run anymore?

 

*/5 * * * *

 

It has been running since the start of the day, but since then it hasn't been called again. It's an upload script which should stop itself after the PCs come online...

 

What could be the problem when the script doesn't get run anymore? (I can see in the log that it works, but it isn't being run every 5 minutes.)


Restarting the server fixed the issue; I still don't know why it was happening.


Okay, it's happening again.

 

If it says "RUNNING ABORT SCRIPT", will it re-run while it's running, or not?


So, it seems that sometimes scripts don't get re-run while they are still running, and sometimes they do.

 

I want to avoid that problem by running another script:

 

#!/bin/bash

### Enter your hosts' IP addresses here. You can add as many as you want,
### but then you also need to reference them later in the code.
host=192.168.86.42
#host2=192.168.86.48
#host3=192.168.86.154

### Ping the host once
ping -c 1 -W 1 -q "$host" > /dev/null
if [ $? -eq 0 ]; then

    ### Check whether rclone has already been killed on a previous run
    if [ -f "/mnt/user/appdata/other/speed/rclone_killed" ]; then
        logger "$(date "+%d.%m.%Y %T") rclone already killed."
        exit
    else
        touch /mnt/user/appdata/other/speed/rclone_killed
    fi

    logger "At least 1 ping successful, killing rclone."

    ### Kill rclone (but leave the grep itself and any rclone mount alone)
    ps -ef | grep "rclone" | grep -v 'grep' | grep -v 'mount' | awk '{ print $2 }' | xargs kill

else
    logger "Ping not successful, rclone can start again."
    rm -f /mnt/user/appdata/other/speed/rclone_killed
    exit
fi

 

The problem here is that this script runs forever and I don't know why. I've put an exit everywhere, so the script should stop at some point... why not?


Hi all,

I'm trying to find where the logs are located so I can look at them with Krusader, but I'm not able to find them.

Is there a way to see them via SSH otherwise?

Thanks


The path to a script's log is shown when you click the show logs button, or you can also download the logs via that button.

You won't be able to reach them via Krusader unless you've mapped / to / in its template.


I have a question about how scripts are executed. If I have several "Scheduled Daily" scripts are they executed in a particular order? Are they executed serially, one after another, or in parallel? The reason I ask is I have one daily script that is restarting a Docker container that one of the other daily scripts is dependent upon.

17 minutes ago, Taddeusz said:


I think the correct way of handling your exact case is to merge the two scripts, so you can be positive that the first action is complete before starting the second action. Ideally you could issue the restart command, then do a conditional loop checking to verify it's running again before moving on.

 

I don't really know for sure how the cron tasks are handled; I think they are all queued up at the same time and attempt to run in parallel.
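As an illustration of the merge-and-verify idea above, a combined daily script could look something like this sketch; the container name and the follow-up command are hypothetical placeholders:

#!/bin/bash
# Sketch only: restart the container the second job depends on, wait until
# Docker reports it running again, then carry on with the dependent work.
# "influxdb" and the follow-up command are hypothetical placeholders.

docker restart influxdb

# Poll until the container reports State.Running = true (give up after ~60 s)
for i in $(seq 1 30); do
    if [ "$(docker inspect -f '{{.State.Running}}' influxdb 2>/dev/null)" = "true" ]; then
        break
    fi
    sleep 2
done

# ...now do whatever the second daily script used to do, e.g.:
# /path/to/the/dependent/daily/work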

12 minutes ago, jonathanm said:


I was kind of thinking that myself. It's really the only way I can guarantee the order of execution.

1 minute ago, scubieman said:

If I choose for it to run daily, what time does it run?

The built-in schedules can be seen and adjusted at Settings - Scheduler - Fixed Schedules

5 minutes ago, trurl said:


Right here, trurl?

 

(screenshot attached)

11 minutes ago, scubieman said:


Try it and see😉

2 minutes ago, trurl said:


I don't see it, that's why I ask. :P

(screenshots attached)
