[Plugin] CA User Scripts


Just a simple little plugin that acts as a front end for any little scripts you may have that you need to run every once in a while, when you don't feel like dropping down to the command line to do it.


On 7/31/2020 at 6:08 AM, mgutt said:

A short check with top confirmed that the rcloneorig process is still active.

 

But was the script itself still running?  It's a royal pain to kill off sub-processes from the main script, especially since the only thing I've got to go on is the name of the main script.  The system does attempt to kill off children (pkill -TERM -P).

 

If you can come up with a better solution, then please let me know.

Link to post
2 minutes ago, Squid said:

But was the script itself still running?  It's a royal pain to kill off sub-processes from the main script, especially since the only thing I've got to go on is the name of the main script.  The system does attempt to kill off children (pkill -TERM -P).

 

If you can come up with a better solution, then please let me know.

What if you crawl the process tree and kill each child process individually? It's a bit heavy handed, but should take care of any orphaned child processes.

Link to post

All scripts should be well behaved and handle the spawning and reaping of child processes themselves. It is their own responsibility.

 

I suggest serious scripters put together a template using best practices to make it easier for other scripters to not fall into the common pitfalls.

Edited by BRiT
Link to post
24 minutes ago, Phoenix Down said:

What if you crawl the process tree and kill each child process individually? It's a bit heavy handed, but should take care of any orphaned child processes.

And that's what the -P on the pkill is supposed to do: kill off the children of the main PID.
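A tiny throwaway sketch shows the limitation, though (the duration 3607 is just an arbitrary marker, not anything from the plugin): pkill -P signals only the direct children of the given PID, so a grandchild survives and gets reparented.

```shell
# parent (background bash) -> child bash -> grandchild sleep
# the trailing ':' commands stop bash -c from exec-ing, so a real tree is built
bash -c 'bash -c "sleep 3607; :"; :' &
pid=$!
sleep 1

pkill -TERM -P "$pid"               # signals only the direct child of $pid

sleep 1
survivor=$(pgrep -f "sleep 3607")   # the grandchild sleep is still alive
pkill -f "sleep 3607"               # clean up the orphan
```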

Link to post
54 minutes ago, Squid said:

But was the script itself still running?

Yes. I proved it in my post. The script contains 4 rclone commands. Aborting the script through the GUI did not kill the running rclone process, and I had to kill every following command separately (the first rclone command had already finished, so I only needed to kill 3):

[Screenshot: pkill -c output showing the killed rclone processes]

(The "-c" flag returns the number of processes that were killed.)

 

And after I killed the last rclone process, the script executed its very last line, too (the rmdir command).

 

This proves that the script itself is still running (and not just one long process).

 

How do you execute the script if the "run in background" button is pressed?

Edited by mgutt
Link to post
16 hours ago, BRiT said:

All scripts should be well behaved and handle the spawning and reaping of child processes themselves. It is their own responsibility.

I understand there is no way to respond to a SIGKILL...

Link to post

@Squid

I tried many commands, and in doing so I found out that pkill does not kill (all) child processes:

root@Thoth:/tmp# pgrep -f isleep2m | xargs --no-run-if-empty ps fp
  PID TTY      STAT   TIME COMMAND
19608 ?        SL     0:00 /usr/bin/php /usr/local/emhttp/plugins/user.scripts/startBackground.php /tmp/user.scripts/tmpScripts/isleep2m/script
19609 ?        S      0:00  \_ sh -c /tmp/user.scripts/tmpScripts/isleep2m/script  >> /tmp/user.scripts/tmpScripts/isleep2m/log.txt 2>&1
19610 ?        S      0:00      \_ /bin/bash /tmp/user.scripts/tmpScripts/isleep2m/script
root@Thoth:/tmp# pkill -9 -P 19608
root@Thoth:/tmp# pgrep -f isleep2m | xargs --no-run-if-empty ps fp
  PID TTY      STAT   TIME COMMAND
19610 ?        S      0:00 /bin/bash /tmp/user.scripts/tmpScripts/isleep2m/script

But I finally found two solutions to repair the "Abort Script" button:

 

A) Run every script in its own process group. Something similar to this should work:

(set -m;sh -c /tmp/user.scripts/tmpScripts/kllme/script & pid=$!;echo $pid)

Then you are able to get the pgid as follows:

ps opgid= $pid

Now you can kill the complete tree with the pgid

kill -- -$pgid

Note: It is important to execute the user script and all the other commands (like logging) in a new process group (pgid), because at the moment all user scripts executed through PHP get the same pgid. In my case it's "1733", and killing this group would kill all running scripts ("backup_shares" and "isleep2m"):

root@Thoth:/tmp# pgrep -f isleep2m | xargs --no-run-if-empty ps fp
  PID TTY      STAT   TIME COMMAND
11772 ?        SL     0:00 /usr/bin/php /usr/local/emhttp/plugins/user.scripts/startBackground.php /tmp/user.scripts/tmpScripts/isleep2m/script
11773 ?        S      0:00  \_ sh -c /tmp/user.scripts/tmpScripts/isleep2m/script  >> /tmp/user.scripts/tmpScripts/kllme/log.txt 2>&1
11774 ?        S      0:00      \_ /bin/bash /tmp/user.scripts/tmpScripts/isleep2m/script
root@Thoth:/tmp# ps opgid= 11772
 1733
root@Thoth:/tmp# pgrep -g 1733 | xargs --no-run-if-empty ps fp
  PID TTY      STAT   TIME COMMAND
 1733 ?        Ss     0:00 /usr/sbin/atd -b 15 -l 1
26844 ?        S      0:00  \_ /usr/sbin/atd -b 15 -l 1
26845 ?        S      0:00  |   \_ sh
26846 ?        SL   346:12  |       \_ /usr/bin/php /usr/local/emhttp/plugins/fix.common.problems/scripts/extendedTest.php
23911 ?        S      0:00  \_ /usr/sbin/atd -b 15 -l 1
23912 ?        S      0:00  |   \_ sh
23913 ?        SL     0:00  |       \_ /usr/bin/php /usr/local/emhttp/plugins/user.scripts/startBackground.php /tmp/user.scripts/tmpScripts/backup_shares/script
23914 ?        S      0:00  |           \_ sh -c /tmp/user.scripts/tmpScripts/backup_shares/script  >> /tmp/user.scripts/tmpScripts/backup_shares/log.txt 2>&1
23915 ?        S      0:00  |               \_ /bin/bash /tmp/user.scripts/tmpScripts/backup_shares/script
23917 ?        S      0:00  |                   \_ /bin/bash /tmp/user.scripts/tmpScripts/backup_shares/script
23925 ?        S      0:00  |                   |   \_ tee --append --ignore-interrupts /tmp/user.scripts/tmpScripts/backup_shares/log.txt
 2752 ?        S      0:00  |                   \_ /bin/bash /usr/sbin/rclone sync /mnt/user/sharedir nextcloud:sharedir --create-empty-src-dirs --ignore-checksum --bwlimit 3M --checkers
 2753 ?        Sl    10:48  |                       \_ rcloneorig --config /boot/config/plugins/rclone/.rclone.conf sync /mnt/user/sharedir nextcloud:sharedir --create-empty-src-dirs --i
11770 ?        S      0:00  \_ /usr/sbin/atd -b 15 -l 1
11771 ?        S      0:00      \_ sh
11772 ?        SL     0:00          \_ /usr/bin/php /usr/local/emhttp/plugins/user.scripts/startBackground.php /tmp/user.scripts/tmpScripts/isleep2m/script
11773 ?        S      0:00              \_ sh -c /tmp/user.scripts/tmpScripts/isleep2m/script  >> /tmp/user.scripts/tmpScripts/isleep2m/log.txt 2>&1
11774 ?        S      0:00                  \_ /bin/bash /tmp/user.scripts/tmpScripts/isleep2m/script
11776 ?        S      0:00                      \_ sleep 2m

Maybe you need to execute "set +m" to disable this feature after every executed user script?!
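Putting the pieces of A) together as a self-contained sketch (the script file here is a mktemp stand-in, not the plugin's real tmpScripts path): with job control enabled via set -m, a background job becomes its own process group whose pgid equals its pid, so a single negative-PID kill takes down the whole tree.

```shell
# stand-in for a user script that spawns its own child process
script=$(mktemp)
printf '%s\n' '#!/bin/bash' 'sleep 3613 &' 'wait' > "$script"

set -m                       # job control: each job gets its own process group
bash "$script" &
pid=$!                       # with -m, the job's pgid equals $pid
set +m

sleep 1
kill -- -"$pid"              # negative pid => signal the entire group
sleep 1
leftover=$(pgrep -g "$pid" || true)   # empty: parent and its sleep are both gone
```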

 

B) Another solution is to get all child process ids by the following recursive function and kill all of them at once:

# recursively print the PIDs of all descendants of the given PID
list_descendants ()
{
  local children=$(ps -o pid= --ppid "$1")
  for childpid in $children
  do
    list_descendants "$childpid"
  done
  echo "$children"
}
kill $(list_descendants $pid)

 

Edited by mgutt
Link to post

My background user script just got stopped (by Unraid, I assume) for no apparent reason. I did not do it myself, and it also wasn't an exception in the Python code, because that would send me an email and get logged. Also, in that case only the Python script would get killed, not the .sh etc. In this case, the whole user script 'chain' was gone from the process list.

 

Not sure if there is some syslogging that I could attach for more clarity?

Edited by jowi
Link to post
On 8/6/2020 at 6:08 PM, jowi said:

Not sure if there is some syslogging that i could attach for more clarity?

Does your Python script return errors to the sh script? If yes, then you could add this to your sh script:

# logging
exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

That way all output is written to the script's log file and can be accessed through the CA User Scripts GUI.
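For reference, the redirection can be exercised standalone; the paths below are temporary stand-ins rather than the plugin's tmpScripts layout.

```shell
logdir=$(mktemp -d)
cat > "$logdir/script" <<'EOF'
#!/bin/bash
# duplicate all stdout/stderr of the rest of the script into log.txt
exec > >(tee --append --ignore-interrupts "$(dirname "$0")/log.txt") 2>&1
echo "hello from the script"
EOF
chmod +x "$logdir/script"

"$logdir/script"
sleep 1    # give the background tee a moment to flush
```

The output still appears on the console, and an identical copy lands in log.txt next to the script.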

Link to post

I have two identical directories in Unraid (one to store movie info - NFO files, posters, etc., and one to store actual movies). I'm trying to find a script that will automate copying all the metadata files in the first directory into the directory with the actual media on a daily basis.

 

When I try to run this:

 

#!/bin/bash

bash cp -R mnt/user/metadates/movies/. mnt/user/movies/ 

 

I get this error:

 

/bin/cp: /bin/cp: cannot execute binary file

 

Does anyone have any thoughts on how I can accomplish this with User Scripts?

Edited by HALPtech
Link to post
1 hour ago, HALPtech said:

#!/bin/bash

bash cp -R mnt/user/metadates/movies/. mnt/user/movies/ 

 

cp -R /mnt/user/metadates/movies/ /mnt/user/movies/

 

Should work.

 

Why don't you have the metadata going to the proper location to begin with?  Would solve your problem without needing a daily script.

 

You could also use rsync, which might be better. I'm not sure what the default action is for cp when you have duplicate files; it might prompt you to overwrite or skip, and I didn't look whether there's an option to skip automatically. rsync might offer more flexibility.
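For what it's worth, GNU cp does have a skip option: -n (--no-clobber) leaves existing destination files alone. A throwaway sketch with mktemp directories standing in for the movie shares (note that coreutils 9.2 and newer make cp -n exit nonzero when it skips something, hence the || true):

```shell
src=$(mktemp -d); dst=$(mktemp -d)
echo new > "$src/movie.nfo"
echo old > "$dst/movie.nfo"       # already exists at the destination
touch "$src/poster.jpg"           # new file, should be copied

# -n (--no-clobber): never overwrite an existing destination file
cp -Rn "$src/." "$dst/" || true   # newer coreutils exit nonzero on skips

# rsync equivalent: rsync -a --ignore-existing "$src/" "$dst/"
```

Afterwards poster.jpg has been copied, while the pre-existing movie.nfo at the destination is untouched.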

Edited by Energen
Link to post
3 hours ago, HALPtech said:

I have two identical directories in Unraid (one to store movie info - NFO files, posters, etc., and one to store actual movies). I'm trying to find a script that will automate copying all the metadata files in the first directory into the directory with the actual media on a daily basis.

 

When I try to run this:

 


#!/bin/bash

bash cp -R mnt/user/metadates/movies/. mnt/user/movies/ 

 

I get this error:

 


/bin/cp: /bin/cp: cannot execute binary file

 

Does anyone have any thoughts on how I can accomplish this with User Scripts?

Try just:

 

cp -R mnt/user/metadates/movies/. mnt/user/movies/

 

cp is a binary, not a shell script. No need to put "bash" in front.

Link to post

Hello everyone

 

I would like to ask for your help with a problem I have when executing a script: I have difficulty getting the home directory of the user.

Here is the script I have:

#!/bin/bash
echo "echo \$-: " $-
echo -n "shopt login_shell: "
shopt login_shell
echo -n "whoami: "
whoami
echo "echo \$HOME: " $HOME
echo "changing directory to ~"
cd ~
echo -n "current working directory: "
pwd
grep root /etc/passwd

When I run this script from the https://.../Settings/Userscripts page, I get the following output:

Script location: /tmp/user.scripts/tmpScripts/test/script
Note that closing this window will abort the execution of this script
echo $-: hB
shopt login_shell: login_shell off
whoami: root
echo $HOME: /
changing directory to ~
current working directory: /
root:x:0:0:Console and webGui login account:/root:/bin/bash

So the ~ directory apparently equals the / directory.

 

If I copy the same content of the script into an a.sh and execute it from the shell, then I get the following output:

echo $-:  hB
shopt login_shell: login_shell          off
whoami: root
echo $HOME:  /root
changing directory to ~
current working directory: /root
root:x:0:0:Console and webGui login account:/root:/bin/bash

Here the HOME directory is /root.

Why don't I get the same result when I execute it from the user script plugin?

 

In both situations it is a non-interactive, non-login shell.

Link to post

Thanks for your reply, Squid.

What is your opinion? Is it the expected behavior or should the PHP script be improved to somehow set the home variable before execution?

 

My actual script would read config parameters from the ~ directory. Of course I could hard-code an explicit directory path, but I have the feeling that it shouldn't be like that.
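In the meantime, one workaround is to derive the home directory from the passwd database instead of trusting the $HOME the plugin environment provides, much like the grep in the demo script does (a sketch, assuming the script runs as the user whose config it needs):

```shell
# look up the current user's home directory in /etc/passwd (field 6)
# rather than relying on the $HOME inherited from the web server
home=$(getent passwd "$(id -un)" | cut -d: -f6)
export HOME="$home"
```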

Link to post
31 minutes ago, l4t3b0 said:

Thanks for your reply, Squid.

What is your opinion? Is it the expected behavior or should the PHP script be improved to somehow set the home variable before execution?

 

My actual script would read config parameters from the ~ directory. Of course I could hard-code an explicit directory path, but I have the feeling that it shouldn't be like that.

Not everything needs to be configurable if it's never going to change.

Link to post

The above.  But, this plugin was designed for users who need to run a quick couple of commands on a schedule and also not have to worry about line-ending formats.  What people have leveraged it for exceeds its design goals and specifications.

 

If you know what you're doing, you will always have the most flexibility by directly running your own scripts and forgoing this plugin altogether.

Link to post

@JoeUnraidUser: Thanks for your reply. Actually the code I pasted here is not the script I want to execute. That one was only for demo purposes.

Actually I'm trying to execute a Python script.

Neither the bash nor the Python script works, and neither of them is aware of the home directory.

 

Unfortunately I have to rely on a third-party Python library that tries to use a config file in the home directory.

 

I have the feeling that @mgutt is right and it has something to do with the PHP script or with the web server settings.

Link to post

I had the same problem when running Perl scripts.  I had to hardcode library and config paths.

 

Try running the script in a login shell by doing the following:

su - root /mnt/user/test/test.sh <<<password

--or--

sudo -i -u root /mnt/user/test/test.sh

 

Edited by JoeUnraidUser
Link to post
On 8/8/2020 at 12:27 PM, mgutt said:

Does your Python script return errors to the sh script? If yes, then you could add this to your sh script:


# logging
exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

That way all output is written to the script's log file and can be accessed through the CA User Scripts GUI.

The Python script catches any exceptions and sends an email with the exception message... that also doesn't happen, so I don't see why returning an error would help. It just looks like the process gets shot in the head. Immediate death.

Link to post
1 minute ago, jowi said:

The Python script catches any exceptions and sends an email with the exception message... that also doesn't happen, so I don't see why returning an error would help. It just looks like the process gets shot in the head. Immediate death.

The point is to log your stdout & stderr to a logfile, so that even if the process does get shot in the head, you can still go look in the log file to see what happened up until the moment the process was killed.

Link to post
