[Plugin] CA User Scripts


1233 posts in this topic

Recommended Posts

Hi, I have a couple of questions. I tried googling but didn't find the answers.

1. Can you access a Raspberry Pi from User Scripts to run a command on it at a certain time?

2. I know how to set up rsync to log in to another Unraid server using public keys, but can I do the same kind of thing for a Raspberry Pi?

 

What I want to do is:

Unraid ====> at a certain time ====> tells the Raspberry Pi to flip a relay on using one of my Raspberry Pi scripts

Then Unraid (main), in the user script, sends WOL to wake another computer and then rsyncs with it.

Afterwards, Unraid tells that computer to shut down, and then tells the Raspberry Pi to run another script before ending the user script.

 

Is there any code that could help me?
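For what it's worth, here is a minimal sketch of that flow. Everything in it is an assumption to adapt: the addresses, the MAC, the relay_on.py/relay_off.py script names, and the choice of etherwake as the WOL tool. It defaults to a dry run that only prints the commands; set DRY_RUN=0 to actually execute them, and schedule the script with User Scripts' custom cron setting to get the "certain time" part:

```shell
#!/bin/bash
# Hypothetical addresses and paths -- replace every one with your own.
PI_HOST="pi@192.168.0.12"          # the Raspberry Pi
BACKUP_MAC="aa:bb:cc:dd:ee:ff"     # MAC of the machine to wake
BACKUP_HOST="backup@192.168.0.20"  # that machine, once it is up

# Dry-run guard: prints each command instead of running it unless DRY_RUN=0.
run() { if [[ "${DRY_RUN:-1}" == 1 ]]; then echo "+ $*"; else "$@"; fi; }

run ssh "$PI_HOST" '/home/pi/bin/relay_on.py 16'   # flip the relay on
run etherwake "$BACKUP_MAC"                        # send the WOL magic packet
run sleep 120                                      # give the target time to boot
run rsync -av /mnt/user/backups/ "$BACKUP_HOST:/backups/"
run ssh "$BACKUP_HOST" 'sudo poweroff'             # shut the target down again
run ssh "$PI_HOST" '/home/pi/bin/relay_off.py 16'  # flip the relay off
```

The key-based SSH login you already use between Unraid boxes works the same way for the Pi: run ssh-keygen on Unraid, then append the public key to ~/.ssh/authorized_keys on the Pi.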

 



So I was able to get SSH into the Raspberry Pi,

 

but running code on the Raspberry Pi doesn't work right.

I get this error on Unraid:
"Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
/tmp/user.scripts/tmpScripts/raspberry pi/script: line 4: /home/pi/bin/pumpon.py: No such file or directory"

 

My user script is:

ssh pi@192.168.0.12
/home/pi/bin/pumpon.py 16

 

It says no such file or directory, but it definitely exists on the Pi, so I don't know why it's giving me errors.
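The error message itself points at the likely cause: bash on Unraid reports that *its* line 4 failed, which means /home/pi/bin/pumpon.py was executed locally, not on the Pi. ssh without a command argument just opens a remote shell, and once that session exits, the script carries on locally. A sketch of the fix (same host and path as in the post); it builds the command and prints it rather than connecting, since only the quoting matters here:

```shell
#!/bin/bash
# Pass the remote command as an argument to ssh so it runs on the Pi.
# BatchMode=yes makes ssh fail fast instead of prompting when keys are missing.
remote=(ssh -o BatchMode=yes pi@192.168.0.12 '/home/pi/bin/pumpon.py 16')
echo "would run: ${remote[*]}"   # replace this echo with "${remote[@]}" to execute
```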


I'm running some Python scripts on array startup; each script sends an email when it starts, when an exception occurs, and when the program exits, using the atexit exit handler (https://medium.com/better-programming/create-exit-handlers-for-your-python-appl-bc279e796b6b).

 

If I run the Python script manually, Ctrl-C will indeed make the program send an email that it has been stopped. But if I call the Python script from a user script and stop it from User Scripts, no exit email is sent; the exit handler doesn't seem to be called. I guess this has to do with the way User Scripts runs these in the background. Is there another way to hook into some exit function in Python so this works?

3 hours ago, jowi said:

I'm running some Python scripts on array startup; each script sends an email when it starts, when an exception occurs, and when the program exits, using the atexit exit handler (https://medium.com/better-programming/create-exit-handlers-for-your-python-appl-bc279e796b6b).

 

If I run the Python script manually, Ctrl-C will indeed make the program send an email that it has been stopped. But if I call the Python script from a user script and stop it from User Scripts, no exit email is sent; the exit handler doesn't seem to be called. I guess this has to do with the way User Scripts runs these in the background. Is there another way to hook into some exit function in Python so this works?

@Squid can provide the real answer, but I suspect that User Script may be sending a SIGKILL to stop the script.

 

When you are running your script interactively and you push Ctrl-C, a SIGINT signal is sent to your script. If your script handles this signal (Python's default SIGINT handling raises KeyboardInterrupt, which still lets atexit handlers run on exit), it can do a more graceful exit. In your case, that includes sending an email before exiting. However, a script with a signal handler can misbehave, either by choosing not to exit or by failing to exit after intercepting the SIGINT. This is where SIGKILL comes in: that's the signal sent when you run "kill -9 $PID". SIGKILL cannot be caught by a signal handler. It just shoots the script in the head immediately, without waiting for a response.

 

Generally speaking, it's better to send a SIGINT first to allow a process to exit gracefully, and if after a certain amount of time (e.g. 10 seconds) the process still has not exited, send a follow-up SIGKILL to kill it immediately. If I'm right and User Scripts only sends SIGKILL, then perhaps this is a feature request you can put in for User Scripts.
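A shell illustration of the difference (a generic sketch, not the plugin's code): a trap catches SIGINT/SIGTERM and runs cleanup first, while SIGKILL can never reach the trap:

```shell
#!/bin/bash
# cleanup runs when the script receives SIGINT (Ctrl-C) or SIGTERM.
# "kill -9" (SIGKILL) bypasses the trap entirely; the shell never sees it.
cleanup() { echo "cleanup: sending email, removing lock"; }
trap 'cleanup; exit 130' INT TERM
echo "working..."
```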

2 hours ago, Phoenix Down said:

@Squid can provide the real answer, but I suspect that User Script may be sending a SIGKILL to stop the script.

 

What I'm doing is 

pkill -TERM -P $pid
kill -9 $pid

followed by

ps aux | grep -i ' /usr/bin/php /tmp/user.scripts/tmpScripts/$script' | grep -v grep

And kill -9 on anything returned by the above

 

Killing off scripts is the worst part of User Scripts, and something that never quite seems to work 100% correctly (especially with long-running processes spawned by the script).
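The escalation approach under discussion can be sketched as a small helper (hypothetical, not the plugin's actual code): TERM the process and its children, give them a grace period, and only KILL what survives:

```shell
#!/bin/bash
# stop_tree PID [GRACE]: SIGTERM the process and its direct children, wait
# up to GRACE seconds for the process to exit, then escalate to SIGKILL.
stop_tree() {
    local pid=$1 grace=${2:-10}
    pkill -TERM -P "$pid" 2>/dev/null || true   # children first (if any)
    kill  -TERM "$pid"    2>/dev/null || true
    for _ in $(seq "$grace"); do
        kill -0 "$pid" 2>/dev/null || return 0  # already gone -- done
        sleep 1
    done
    pkill -KILL -P "$pid" 2>/dev/null || true   # grace expired
    kill  -KILL "$pid"    2>/dev/null || true
}
```

This still misses grandchildren that have re-parented themselves; launching the script in its own process group (setsid) and killing the whole group (kill -- -PGID) is the more thorough variant.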

7 hours ago, Squid said:

What I'm doing is 


pkill -TERM -P $pid
kill -9 $pid

followed by


ps aux | grep -i ' /usr/bin/php /tmp/user.scripts/tmpScripts/$script' | grep -v grep

And kill -9 on anything returned by the above

 

Killing off scripts is the worst part of User Scripts, and something that never quite seems to work 100% correctly (especially with long-running processes spawned by the script).

I hear you; orphaned processes are a pain in the rear.

 

After you send the SIGTERM, do you wait at all before sending SIGKILL?

3 minutes ago, mgutt said:

Feature Request:

I'd really like to see echo output inside the logs (maybe as an optional setting?). At the moment only errors are logged.

You can always get the script to handle this by sending entries to the syslog, redirecting the output through the 'logger' command.

13 minutes ago, mgutt said:

Feature Request:

I'd really like to see echo output inside the logs (maybe as an optional setting?). At the moment only errors are logged.

It already does that

11 minutes ago, Squid said:

It already does that

Does not work for me. rclone is running and the logs are "empty":

[screenshot: script shown as running with an empty log]

 

"vi /tmp/user.scripts/tmpScripts/backup_shares_owncube/log.txt" returns the same.


This would make the scripts too complicated, and it's not easy to use/share them on multiple platforms.

 

EDIT: Ok, found a solution that is more portable and works for all unraid scripts:

exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

This needs to be put on the first line after "#!/bin/bash", and then all command output and echoes are logged!

 

Other versions I tested that did not work (only for information purposes):

# does not work
#exec >> /tmp/user.scripts/tmpScripts/backup_shares_owncube/log.txt
#exec 3>&1 1>>/tmp/user.scripts/tmpScripts/backup_shares_owncube/log.txt 2>&1
#exec > >(tee --append --ignore-interrupts /tmp/user.scripts/tmpScripts/backup_shares_owncube/log.txt)
#exec 1>&1
#exec 2>&1

 

Example script:

#!/bin/bash

# logging
exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

# rclone
rclone sync /mnt/user/Software owncube:software --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers 2 --transfers 1 -v --stats 10s

 

Log output:

[screenshot: rclone log output]

22 hours ago, mgutt said:

This would make the scripts too complicated, and it's not easy to use/share them on multiple platforms.

 

EDIT: Ok, found a solution that is more portable and works for all unraid scripts:


exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

This needs to be put on the first line after "#!/bin/bash", and then all command output and echoes are logged!

 

Other versions I tested that did not work (only for information purposes):


# does not work
#exec >> /tmp/user.scripts/tmpScripts/backup_shares_owncube/log.txt
#exec 3>&1 1>>/tmp/user.scripts/tmpScripts/backup_shares_owncube/log.txt 2>&1
#exec > >(tee --append --ignore-interrupts /tmp/user.scripts/tmpScripts/backup_shares_owncube/log.txt)
#exec 1>&1
#exec 2>&1

 

Example script:


#!/bin/bash

# logging
exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

# rclone
rclone sync /mnt/user/Software owncube:software --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers 2 --transfers 1 -v --stats 10s

 

Log output:

[screenshot: rclone log output]

It's a bit of a heavyweight solution, but you can always write your own wrapper that does logging, timeouts, interlocking, and notifications. Then you can run all of your cron jobs through this wrapper.

36 minutes ago, Phoenix Down said:

It's a bit of a heavyweight solution, but you can always write your own wrapper that does logging, timeouts, interlocking, and notifications.

Yes, but this one-liner is absolutely sufficient ;)


I need to create a script that will look through a file to find a specific text and replace all the matches with other text. How can I do this?

 

Let's say the file is /mnt/user/text/mytestfile.txt

I am looking for #FF8C2F and want to replace them all with #42ADFA.

 

This is something I need to run after every reboot.

6 minutes ago, almulder said:

I need to create a script that will look through a file to find a specific text and replace all the matches with other text. How can I do this?

 

Let's say the file is /mnt/user/text/mytestfile.txt

I am looking for #FF8C2F and want to replace them all with #42ADFA.

 

This is something I need to run after every reboot.

You could write a small Perl script to do it. Maybe something like this:

 

#!/usr/bin/env perl

use strict;
use warnings;
use Carp;
use English;

# Read the file
my $file = "/mnt/user/text/mytestfile.txt";
my $fh;
open $fh, "<", $file
  or croak("\nERROR: Cannot open '$file' for reading: $OS_ERROR\n\n");
my @rawdata = <$fh>;
close $fh;

# Update each line
my @newdata;
foreach my $line (@rawdata)
{
  $line =~ s/#FF8C2F/#42ADFA/g;
  push @newdata, $line;
}

# Update the file with the updated data
open $fh, ">", $file
  or croak("\nERROR: Cannot open '$file' for writing: $OS_ERROR\n\n");
foreach my $line (@newdata)
{
  print $fh $line;
}
close $fh;

 

25 minutes ago, Phoenix Down said:

You could write a small Perl script to do it. Maybe something like this:

 


#!/usr/bin/env perl

use strict;
use warnings;
use Carp;
use English;

# Read the file
my $file = "/mnt/user/text/mytestfile.txt";
my $fh;
open $fh, "<", $file
  or croak("\nERROR: Cannot open '$file' for reading: $OS_ERROR\n\n");
my @rawdata = <$fh>;
close $fh;

# Update each line
my @newdata;
foreach my $line (@rawdata)
{
  $line =~ s/#FF8C2F/#42ADFA/g;
  push @newdata, $line;
}

# Update the file with the updated data
open $fh, ">", $file
  or croak("\nERROR: Cannot open '$file' for writing: $OS_ERROR\n\n");
foreach my $line (@newdata)
{
  print $fh $line;
}
close $fh;

 

Thanks for your help. That's more complicated than I wanted, but it made me dig deeper, searching not just for Unraid but for Linux code in general, and I found out I could do this all in one line, and it works great:

 

sed -i 's/#FF8C2F/#42ADFA/gI' /mnt/user/text/mytestfile.txt

 

 

Also found a way to use variables instead:

old_color="#e22828"
new_color="#00378F"
sed -i "s/$old_color/$new_color/gI" /mnt/user/Unraid_Backup/Newlogos/hello.txt

Hope this helps others one day!

 

17 hours ago, Phoenix Down said:

You could write a small Perl script to do it. Maybe something like this:

 


#!/usr/bin/env perl

use strict;
use warnings;
use Carp;
use English;

# Read the file
my $file = "/mnt/user/text/mytestfile.txt";
my $fh;
open $fh, "<", $file
  or croak("\nERROR: Cannot open '$file' for reading: $OS_ERROR\n\n");
my @rawdata = <$fh>;
close $fh;

# Update each line
my @newdata;
foreach my $line (@rawdata)
{
  $line =~ s/#FF8C2F/#42ADFA/g;
  push @newdata, $line;
}

# Update the file with the updated data
open $fh, ">", $file
  or croak("\nERROR: Cannot open '$file' for writing: $OS_ERROR\n\n");
foreach my $line (@newdata)
{
  print $fh $line;
}
close $fh;

 

 

You can achieve the same thing from the command line with Perl:

perl -pi -e 's/#FF8C2F/#42ADFA/g' /mnt/user/text/mytestfile.txt

Perl one-liners


Having issues with SSH keys.

I have it set up so that when Unraid boots, it copies the SSH folder, does the keyscan, and chmods the files so I can rsync without a password, and that works... for a while. It seems to time out after about 24 hours; rsync then stops working, as if the public key had been released from memory.

If you drop to the terminal and ssh <servername>, you have to answer the yes/no prompt again. If I reboot the server running the script, I can ssh <server> without having to enter a password.

My question is: how do you keep it from being lost, like a network DHCP lease? I don't want to have to reboot the server just to refresh the public key for rsync.

Say after 1000 minutes Unraid releases the public key; is there a way to keep it in permanently?
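One commonly used workaround (a sketch built on assumptions: that you keep a persistent copy of the key files on the flash drive at /boot/config/ssh/root, since /root itself lives in RAM on Unraid) is to restore the keys from persistent storage at every boot rather than trying to keep the in-RAM copy alive, calling something like this from the go file:

```shell
#!/bin/bash
# Restore SSH keys from persistent storage (the flash drive) into the
# RAM-backed /root/.ssh. Both paths are assumptions -- adjust to your layout.
SRC="${SRC:-/boot/config/ssh/root}"   # persistent copy of the key files
DST="${DST:-/root/.ssh}"              # where ssh expects them

restore_keys() {
    mkdir -p "$DST"
    cp "$SRC"/* "$DST"/
    chmod 700 "$DST"        # ssh refuses keys with loose permissions
    chmod 600 "$DST"/*
}
# restore_keys   # <- call this from /boot/config/go at boot
```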


Although my last test suggested the opposite, I found out that CA User Scripts is not race-condition safe. So I changed my script as follows:

#!/bin/bash

# kill all commands on exit
function exitus() {
    exit_status=$1
    sudo pkill -f rcloneorig
    sleep 1
    sudo pkill -f rcloneorig
    sleep 1
    sudo pkill -f rcloneorig
    sleep 1
    sudo pkill -f rcloneorig
    sleep 1
    rmdir "/tmp/${0///}"
    exit $exit_status
}

# make script race condition safe
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then
    exit 1 # do not use exitus here or it will kill the already running script!
fi
trap 'exitus 1' EXIT

# logging
exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

# rclone
rclone copy /mnt/user/software owncube:software --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers 2 --transfers 1 -vv --stats 10s --min-age 3d
rclone copy /mnt/user/music owncube:music --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers 2 --transfers 1 -vv --stats 10s --min-age 3d
rclone copy /mnt/user/movie owncube:movie --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers 2 --transfers 1 -vv --stats 10s --min-age 3d
rclone copy /mnt/user/tv owncube:tv --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers 2 --transfers 1 -vv --stats 10s --min-age 3d

 

Then I pressed "Abort Script", waited 10 seconds, and checked whether my tmp dir still exists. It does:

[screenshot: the atomic tmp directory still exists]

 

A short check with top confirmed that the rcloneorig process was still active.

 

So I killed rclone as follows:

[screenshot: killing each rclone process separately]

 

As you can see, I needed to kill every rclone command separately, and the "rmdir" in my script was executed, as my atomic dir "tmpuser.scriptstmpScriptsbackup_shares_owncubescript" is gone.

 

Conclusion: the "Abort Script" button does nothing except remove the "Running" status in the web client. The script itself runs to the end.
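As an aside on the mkdir lock above: flock ties the lock to an open file descriptor, so the kernel releases it automatically when the script dies by any means, including SIGKILL, and no rmdir cleanup can be skipped. A sketch (the lock file name is an arbitrary choice):

```shell
#!/bin/bash
# Acquire an exclusive, non-blocking lock on fd 9. If another instance
# holds it, exit immediately. The kernel drops the lock when this process
# exits -- however it exits -- so there is nothing to clean up.
LOCKFILE="${LOCKFILE:-/tmp/backup_shares_owncube.lock}"
exec 9>"$LOCKFILE"
if ! flock -n 9; then
    echo "another instance is running"
    exit 1
fi
echo "lock acquired"
# ... rclone commands go here ...
```

Since the abort button evidently leaves children running, a lock like this only guards against a second instance of the script itself, not against a still-running rclone.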

 


Looks like aborting a script does not kill everything.

 

I'm using a user script that 'bashes' a shell script, and that shell script in turn runs a Python script:

 


#!/bin/bash
#backgroundOnly=true
#arrayStarted=true
bash /boot/custom/kuroautomode.sh

(I don't know how to use bash to call Python directly, so this is how I do it. bash is needed; the .sh and .py reside on the USB stick in a subfolder under the boot folder.)
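On the parenthetical: bash can run the interpreter directly, and exec goes one step further by replacing the wrapping bash process, so there is one less layer in the tree to abort. A sketch using the path from the post (the -f guard just keeps it harmless where the file doesn't exist):

```shell
#!/bin/bash
#backgroundOnly=true
#arrayStarted=true
# exec replaces this bash process with the python interpreter, so the
# process tree loses one layer that would otherwise need killing.
if [ -f /boot/custom/kuroautomodecmdline.py ]; then
    exec /usr/bin/python /boot/custom/kuroautomodecmdline.py
fi
```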

 

When this is started, the process list shows these processes:


root 22117 0.0 0.0 104624 26196 ? SL Aug03 0:00 /usr/bin/php /usr/local/emhttp/plugins/user.scripts/startBackground.php /tmp/user.scripts/tmpScripts/kuroAutoMode/script 

root 22118 0.0 0.0 3840 2916 ? S Aug03 0:00 sh -c /tmp/user.scripts/tmpScripts/kuroAutoMode/script >> /tmp/user.scripts/tmpScripts/kuroAutoMode/log.txt 2>&1 

root 22119 0.0 0.0 3840 2808 ? S Aug03 0:00 /bin/bash /tmp/user.scripts/tmpScripts/kuroAutoMode/script
root 22120 0.0 0.0 3840 2916 ? S Aug03 0:00 bash /boot/custom/kuroautomode.sh

root 22121 0.0 0.0 18716 14564 ? S Aug03 0:02 /usr/bin/python /boot/custom/kuroautomodecmdline.py

 

Now, when I abort the user script from within Unraid, the process list still shows:


root 22119 0.0 0.0 3840 2808 ? S Aug03 0:00 /bin/bash /tmp/user.scripts/tmpScripts/kuroAutoMode/script 

root 22120 0.0 0.0 3840 2916 ? S Aug03 0:00 bash /boot/custom/kuroautomode.sh 

root 22121 0.0 0.0 18716 14564 ? S Aug03 0:02 /usr/bin/python /boot/custom/kuroautomodecmdline.py

 

So it only aborts the script's logging, nothing else; everything else keeps running. And if I restart the user script, I get everything running 2x, 3x, 4x, etc. if I don't manually clean it up.

 

Am I doing something wrong, or is this an issue?

 

