[Plugin] CA User Scripts



My background user script just got stopped (by Unraid, I assume) for no apparent reason. I did not stop it myself, and it wasn't an exception in the Python code, because that would have sent me an email and been logged. Also, in that case only the Python script would get killed, not the .sh etc. In this case, the whole user script 'chain' was gone from the process list.

 

Not sure if there is some syslog output that I could attach for more clarity?

Edited by jowi
Link to comment
On 8/6/2020 at 6:08 PM, jowi said:

Not sure if there is some syslog output that I could attach for more clarity?

Does your Python script return errors to the sh script? If yes, then you could add this to your sh script:

# logging
exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

That way, all output is written to the script's log file and can be accessed through the CA User Scripts GUI.
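
For context, a minimal sketch of how that line might sit in a wrapper user script (the Python path is a hypothetical example, not from the thread):

#!/bin/bash
# Duplicate everything this script (and anything it starts) writes to
# stdout/stderr into log.txt next to the script, while still showing it live.
exec > >(tee --append --ignore-interrupts "$(dirname "$0")/log.txt") 2>&1

# Hypothetical long-running job; its output now also lands in log.txt.
python3 /boot/config/scripts/tv_control.py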

Link to comment

I have two identical directories in Unraid (one to store movie info - NFO files, posters, etc., and one to store actual movies). I'm trying to find a script that will automate copying all the metadata files in the first directory into the directory with the actual media on a daily basis.

 

When I try to run this:

 

#!/bin/bash

bash cp -R mnt/user/metadates/movies/. mnt/user/movies/ 

 

I get this error:

 

/bin/cp: /bin/cp: cannot execute binary file

 

Does anyone have any thoughts on how I can accomplish this with User Scripts?

Edited by HALPtech
Link to comment
1 hour ago, HALPtech said:

#!/bin/bash

bash cp -R mnt/user/metadates/movies/. mnt/user/movies/ 

 

cp -R /mnt/user/metadates/movies/. /mnt/user/movies/

 

Should work.

 

Why don't you have the metadata going to the proper location to begin with? That would solve your problem without needing a daily script.

 

You could also use rsync, which might be a better fit. By default, cp silently overwrites files that already exist at the destination (cp -n would skip them instead); rsync gives you more control, for example only copying files that are new or have changed.
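
For reference, a minimal rsync-based user script, assuming the same share paths as above:

#!/bin/bash
# -a recurses and preserves attributes; -u skips files that are already
# newer at the destination, so unchanged metadata is not copied again.
# The trailing slash on the source means "copy the contents of this directory".
rsync -au /mnt/user/metadates/movies/ /mnt/user/movies/

Scheduled daily through the plugin, this only touches files that actually changed.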

Edited by Energen
Link to comment
3 hours ago, HALPtech said:

I have two identical directories in Unraid (one to store movie info - NFO files, posters, etc., and one to store actual movies). I'm trying to find a script that will automate copying all the metadata files in the first directory into the directory with the actual media on a daily basis.

 

When I try to run this:

 


#!/bin/bash

bash cp -R mnt/user/metadates/movies/. mnt/user/movies/ 

 

I get this error:

 


/bin/cp: /bin/cp: cannot execute binary file

 

Does anyone have any thoughts on how I can accomplish this with User Scripts?

Try just:

 

cp -R /mnt/user/metadates/movies/. /mnt/user/movies/

 

cp is a compiled binary, not a shell script, so there's no need to put "bash" in front of it; doing so makes bash try to read /bin/cp as a script, which is what produces the "cannot execute binary file" error. Using absolute paths (starting with /mnt) is also safer, since the plugin doesn't guarantee which directory the script runs from.

Link to comment

Hello everyone

 

I would like to ask for your help to solve a problem I have when executing a script. I am having difficulty getting the home directory of the user.

Here is the script i have:

#!/bin/bash
echo "echo \$-: " $-
echo -n "shopt login_shell: "
shopt login_shell
echo -n "whoami: "
whoami
echo "echo \$HOME: " $HOME
echo "changing directory to ~"
cd ~
echo -n "current working directory: "
pwd
grep root /etc/passwd

When I run this script from https://.../Settings/Userscripts, I get the following output:

Script location: /tmp/user.scripts/tmpScripts/test/script
Note that closing this window will abort the execution of this script
echo $-: hB
shopt login_shell: login_shell off
whoami: root
echo $HOME: /
changing directory to ~
current working directory: /
root:x:0:0:Console and webGui login account:/root:/bin/bash

So ~ appears to be the same as the / directory.

 

If I copy the same script content into an a.sh file and execute it from the shell, I get the following output:

echo $-:  hB
shopt login_shell: login_shell          off
whoami: root
echo $HOME:  /root
changing directory to ~
current working directory: /root
root:x:0:0:Console and webGui login account:/root:/bin/bash

The HOME directory is /root.

Why don't I get the same result when I execute it from the User Scripts plugin?

 

In both situations it is a non-interactive, non-login shell.

Link to comment

Thanks for your reply Squid

What is your opinion? Is it the expected behavior or should the PHP script be improved to somehow set the home variable before execution?

 

My actual script would read config parameters from the ~ directory. Of course I could hard-code an explicit directory path, but I have the feeling it shouldn't be like that.
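
One possible workaround (an untested sketch; the Python script path is a made-up example) is to set HOME explicitly at the top of the user script, before anything relies on it:

#!/bin/bash
# The plugin launches scripts via the web server rather than a login shell,
# so HOME arrives as "/". Setting it explicitly restores the usual meaning
# of ~ for everything that runs afterwards.
export HOME=/root
cd ~   # now /root
python3 /boot/config/scripts/my_job.py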

Link to comment
31 minutes ago, l4t3b0 said:

Thanks for your reply Squid

What is your opinion? Is it the expected behavior or should the PHP script be improved to somehow set the home variable before execution?

 

My actual script would read config parameters from the ~ directory. Of course I could hard-code an explicit directory path, but I have the feeling it shouldn't be like that.

Not everything needs to be configurable if it's never going to change.

Link to comment

The above.  But this plugin was designed for users who need to run a couple of quick commands on a schedule without having to worry about line-ending formats. What people have leveraged it for exceeds its design goal and specifications.

 

  If you know what you're doing, you will always have the most flexibility by directly running your own scripts and forgoing this plugin altogether.

Link to comment

@JoeUnraidUser: Thanks for your reply. Actually, the code I pasted here is not the script I want to execute; it was only for demo purposes.

Actually i'm trying to execute a python script.

Neither the bash script nor the Python script works, and neither of them is aware of the home directory.

 

Unfortunately I have to rely on a third-party Python library that tries to read a config file from the home directory.

 

I have the feeling that @mgutt is right and it has something to do with the php script or with the web server settings.
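
If only the Python call needs it, the same idea can be applied inline (the script path is again hypothetical):

#!/bin/bash
# Give just this one command a proper HOME so the third-party library
# can find its config file there.
HOME=/root python3 /boot/config/scripts/my_job.py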

Link to comment
On 8/8/2020 at 12:27 PM, mgutt said:

Does your Python script return errors to the sh script? If yes, then you could add this to your sh script:


# logging
exec > >(tee --append --ignore-interrupts $(dirname ${0})/log.txt) 2>&1

That way, all output is written to the script's log file and can be accessed through the CA User Scripts GUI.

The Python script catches any exceptions and sends an email with the exception message... that also doesn't happen, so I don't see why returning an error would help. It just looks like the process gets shot in the head. Immediate death.

Link to comment
1 minute ago, jowi said:

The Python script catches any exceptions and sends an email with the exception message... that also doesn't happen, so I don't see why returning an error would help. It just looks like the process gets shot in the head. Immediate death.

The point is to log your stdout & stderr to a logfile, so that even if the process does get shot in the head, you can still go look in the log file to see what happened up until the moment the process was killed.

Link to comment
31 minutes ago, Phoenix Down said:

The point is to log your stdout & stderr to a logfile, so that even if the process does get shot in the head, you can still go look in the log file to see what happened up until the moment the process was killed.

But the Python script doesn't write to stdout or stderr...? It just sends commands over IP to my TV, and if an exception is thrown, it sends an email with the exception...

Link to comment
6 hours ago, jowi said:

This script runs 24/7... I don't want to create GBs of logging... so I only log errors, which is best practice anyway.

What is the end goal here? Are you just wanting an email notification if your process is killed/dies? Presumably, the only time when it cannot send the notification itself is when it's terminated by a SIGKILL.

Link to comment
22 minutes ago, Phoenix Down said:

What is the end goal here? Are you just wanting an email notification if your process is killed/dies? Presumably, the only time when it cannot send the notification itself is when it's terminated by a SIGKILL.

Exactly... THAT is the point. The User Scripts plugin ALWAYS kills user processes with SIGKILL... also when you manually stop the script. And sometimes the script gets killed off for no reason by Unraid, and there is no way to react to that. You just get a bullet in the head.

Link to comment
1 minute ago, jowi said:

Exactly... THAT is the point. The User Scripts plugin ALWAYS kills user processes with SIGKILL... also when you manually stop the script. And sometimes the script gets killed off for no reason by Unraid, and there is no way to react to that. You just get a bullet in the head.

Well, the easiest solution is to ask @Squid to add a delay between sending SIGTERM and SIGKILL. I generally use 8 seconds to give processes a reasonable amount of time to clean up.

 

You can also have a monitoring process whose sole job is to watch the process(es) you care about and notify you if they disappear. You might want to do this anyway, for cases where your main process dies without being able to send a notification (for any reason, not just a SIGKILL). This monitor could be a daemon or a cronjob (I prefer the latter).
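
As an illustration, a minimal watchdog along those lines might look like this (the process name is a placeholder, and the notify helper path and flags are from memory, so verify them on your own system):

#!/bin/bash
# Hypothetical watchdog, run from cron every few minutes: if the long-running
# script is no longer in the process list, raise an Unraid notification.
if ! pgrep -f "tv_control.py" > /dev/null; then
    /usr/local/emhttp/plugins/dynamix/scripts/notify \
        -s "Background script stopped" \
        -d "tv_control.py is no longer running." \
        -i "alert"
fi

Scheduled through the plugin with a custom cron entry such as */5 * * * *, it starts, checks, and exits in well under a second.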

Link to comment
24 minutes ago, Phoenix Down said:

Well, the easiest solution is to ask @Squid to add a delay between sending SIGTERM and SIGKILL. I generally use 8 seconds to give processes a reasonable amount of time to clean up.

 

You can also have a monitoring process whose sole job is to watch the process(es) you care about and notify you if they disappear. You might want to do this anyway, for cases where your main process dies without being able to send a notification (for any reason, not just a SIGKILL). This monitor could be a daemon or a cronjob (I prefer the latter).

Ok, but such a process would be started by.... the user script plugin :) and... the user script plugin sometimes kills your process for no apparent reason (as far as I can tell), including this monitoring process, with... a SIGKILL bullet :) so, how do we monitor the monitoring process...

 

I had this Python script built as a daemon years ago, and it was started by adding it to the go file. But since the latest Unraid version, the Python daemon library I used doesn't work anymore, and I couldn't be bothered finding out why, so I just converted it to a user script.

 

I think the request to send SIGTERM first and SIGKILL a few seconds later solves everything.
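
For what it's worth, if the plugin did send SIGTERM first, a wrapper could react to it with a trap roughly like this (a sketch; the notify call and the script path are assumptions, not from the thread):

#!/bin/bash
# Forward SIGTERM to the Python child and send a notification before exiting,
# so a graceful stop no longer passes unnoticed. Paths are hypothetical.
cleanup() {
    kill -TERM "$PID" 2>/dev/null
    /usr/local/emhttp/plugins/dynamix/scripts/notify \
        -s "User script stopping" -d "Received SIGTERM" -i "warning"
    exit 0
}
trap cleanup SIGTERM SIGINT

python3 /boot/config/scripts/tv_control.py &
PID=$!
wait "$PID"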

Edited by jowi
Link to comment
6 minutes ago, jowi said:

Ok, but such a process would be started by.... the user script plugin :) and... the user script plugin sometimes kills your process for no apparent reason (as far as I can tell), including this monitoring process, with... a SIGKILL bullet :) so, how do we monitor the monitoring process...

 

I had this Python script built as a daemon years ago, and it was started by adding it to the go file. But since the latest Unraid version, the Python daemon library I used doesn't work anymore, and I couldn't be bothered finding out why, so I just converted it to a user script.

 

I think the request to send SIGTERM first and SIGKILL a few seconds later solves everything.

Which is why I prefer to run the monitor process as a cronjob and not as a daemon. No need to monitor the monitor. It starts up, does its check, and finishes, all in a second or two.

Link to comment
