
[Plugin] CA User Scripts


31 minutes ago, Phoenix Down said:

The point is to log your stdout & stderr to a logfile, so that even if the process does get shot in the head, you can still go look in the log file to see what happened up until the moment the process was killed.

But the Python script doesn't write to stdout or stderr...? It just sends commands over IP to my TV, and if an exception is thrown, it sends an email with the exception...

1 hour ago, jowi said:

But the Python script doesn't write to stdout or stderr...? It just sends commands over IP to my TV, and if an exception is thrown, it sends an email with the exception...

Then you'd better add some print statements :)
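
For example, a minimal wrapper along these lines would capture both streams (a sketch; the script and log paths are hypothetical placeholders):

#!/bin/bash
# Sketch: run the Python script with stdout and stderr appended to a
# logfile, so any output survives even if the process is later SIGKILLed.
exec python3 /boot/config/scripts/tv_control.py >> /var/log/tv_control.log 2>&1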

1 minute ago, Phoenix Down said:

Then you'd better add some print statements :)

This script runs 24/7... I don't want to create GBs of logging... so I only log errors, which is best practice anyway.
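
That said, a simple size cap would keep 24/7 logging bounded even with more verbose output. A sketch, with an arbitrary path and limit:

#!/bin/bash
# Sketch: rotate the logfile once it exceeds a size cap, keeping one
# old copy, so continuous logging cannot grow without bound.
LOG=/var/log/tv_control.log       # hypothetical path
MAX=$((10 * 1024 * 1024))         # 10 MB cap, arbitrary
if [ -f "$LOG" ] && [ "$(stat -c%s "$LOG")" -gt "$MAX" ]; then
    mv "$LOG" "$LOG.1"
fi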

6 hours ago, jowi said:

This script runs 24/7... I don't want to create GBs of logging... so I only log errors, which is best practice anyway.

What is the end goal here? Do you just want an email notification if your process is killed or dies? Presumably, the only time it cannot send the notification itself is when it's terminated by SIGKILL.

22 minutes ago, Phoenix Down said:

What is the end goal here? Do you just want an email notification if your process is killed or dies? Presumably, the only time it cannot send the notification itself is when it's terminated by SIGKILL.

Exactly... THAT is the point. The Unraid User Scripts plugin ALWAYS kills user processes with SIGKILL... even when you manually stop the script. And sometimes the script gets killed off by Unraid for no reason, and there is no way to react to that. You just get a bullet in the head.

1 minute ago, jowi said:

Exactly... THAT is the point. The Unraid User Scripts plugin ALWAYS kills user processes with SIGKILL... even when you manually stop the script. And sometimes the script gets killed off by Unraid for no reason, and there is no way to react to that. You just get a bullet in the head.

Well, the easiest solution is to ask @Squid to add a delay between sending SIGTERM and SIGKILL. I generally use 8 seconds to allow processes a reasonable amount of time to clean up.
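
With such a delay in place, a script can trap SIGTERM and clean up before the SIGKILL arrives. A minimal sketch (the log path is a placeholder):

#!/bin/bash
# Sketch: catch SIGTERM, clean up, and exit before a follow-up SIGKILL.
cleanup() {
    echo "Caught SIGTERM, shutting down cleanly" >> /var/log/myscript.log
    exit 0
}
trap cleanup SIGTERM

while true; do
    sleep 1   # placeholder main loop; the trap fires between iterations
done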

 

You can also have a monitoring process whose sole job is to monitor the process(es) you care about and notify you if they disappear. You might want to do this anyway, to cover cases where your main process dies without being able to send a notification (for any reason, not just SIGKILL). This monitor could be a daemon or a cronjob (I prefer the latter).
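
Such a cron-based monitor can be as small as this (a sketch; the process name is a placeholder, and Unraid's notify helper is assumed to be at its usual location):

#!/bin/bash
# Sketch: run from cron every few minutes; alert if the watched
# process has disappeared.
if ! pgrep -f "tv_control.py" > /dev/null; then
    # Unraid's stock notification helper (path and flags assumed)
    /usr/local/emhttp/webGui/scripts/notify -s "tv_control.py is not running" -i warning
fi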

24 minutes ago, Phoenix Down said:

Well, the easiest solution is to ask @Squid to add a delay between sending SIGTERM and SIGKILL. I generally use 8 seconds to allow processes a reasonable amount of time to clean up.

 

You can also have a monitoring process whose sole job is to monitor the process(es) you care about and notify you if they disappear. You might want to do this anyway, to cover cases where your main process dies without being able to send a notification (for any reason, not just SIGKILL). This monitor could be a daemon or a cronjob (I prefer the latter).

Ok, but such a process would be started by... the user script plugin :) and... the user script plugin sometimes kills your process for no apparent reason (as far as I can tell), including this monitoring process, with... a SIGKILL bullet :) So, how do we monitor the monitoring process...

 

I had this Python script built as a daemon years ago, and it was started by adding it to the go file. But since the latest Unraid version, the Python daemon library I used doesn't work anymore, and I couldn't be bothered finding out why, so I just converted it to a user script.

 

I think the request to send SIGTERM first and SIGKILL a few seconds later solves everything.

Edited by jowi

6 minutes ago, jowi said:

Ok, but such a process would be started by... the user script plugin :) and... the user script plugin sometimes kills your process for no apparent reason (as far as I can tell), including this monitoring process, with... a SIGKILL bullet :) So, how do we monitor the monitoring process...

 

I had this Python script built as a daemon years ago, and it was started by adding it to the go file. But since the latest Unraid version, the Python daemon library I used doesn't work anymore, and I couldn't be bothered finding out why, so I just converted it to a user script.

 

I think the request to send SIGTERM first and SIGKILL a few seconds later solves everything.

Which is why I prefer to run the monitor process as a cronjob and not as a daemon. No need to monitor the monitor. It starts up, does its check, and finishes, all in a second or two.


Yeah, ok, but why then use the user script plugin at all, and not just run my Python script (like I did before...) as a cron job or daemon...

3 minutes ago, jowi said:

Yeah, ok, but why then use the user script plugin at all, and not just run my Python script (like I did before...) as a cron job or daemon...

IMHO, CA User Scripts is just a GUI that simplifies setting up cronjobs. If you know what you are doing, you absolutely could just set up the cronjobs yourself.
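
On Unraid, the manual route would look something like this (a sketch; it assumes the stock dynamix cron mechanism, and the script path is a placeholder):

# Sketch: drop a cron fragment where Unraid's dynamix plugin merges
# *.cron files into root's crontab, then rebuild the crontab.
cat > /boot/config/plugins/dynamix/tv_control.cron << 'EOF'
# Run the TV-control script every 5 minutes
*/5 * * * * /boot/config/scripts/tv_control.sh &> /dev/null
EOF
update_cron   # assumed helper that regenerates the crontab from the fragments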


In my case - when I use a Python script that needs the home directory - the program does not raise an exception, but it does not find the config file in the home directory and is therefore not initialized properly.

 

Some more detail about the "home directory is not set correctly" issue:

If I install a script in the plugin that relies on the home directory, it works when it is executed from the crontab, but not from the plugin's buttons.

So the "Run Script" and "Run in Background" buttons are malfunctioning in such cases.

4 minutes ago, l4t3b0 said:

In my case - when I use a Python script that needs the home directory - the program does not raise an exception, but it does not find the config file in the home directory and is therefore not initialized properly.

Some more detail about the "home directory is not set correctly" issue:

If I install a script in the plugin that relies on the home directory, it works when it is executed from the crontab, but not from the plugin's buttons.

So the "Run Script" and "Run in Background" buttons are malfunctioning in such cases.

Did you try running the script in a login shell like I suggested above?


Yes, I have tried the sudo version: sudo -i -u root /tmp/test.py

 

It works!

 

But I don't want to write a script just to execute another one.


Dear @Squid

 

I don't know much about PHP, but I tried to debug your plugin.

I figured out that the startScript.sh file already shows the symptom that the HOME directory is not set.

 

But with a two-line modification I could fix the problem.

 

I kindly ask you to examine my modification and, if you agree, then please add it to your plugin so others will have the fix too.

 


#!/bin/bash
echo "Script location: <b>$1</b>"
# Derive HOME from the current user's /etc/passwd entry; the pattern is
# anchored so only that user's line is matched.
export HOME=$(grep "^$(whoami):" /etc/passwd | cut -d: -f6)
source "${HOME}/.bashrc"
echo "<font color='red'>Note that closing this window will abort the execution of this script</font>"
"$1" "$2" 2>&1

With this modification my Python script works too :)

 

Actually, I only need the export command, not the source. But I thought it could be useful for others in the future to initialize the environment as close to a login shell as possible.

 

What is your opinion?


I have been using User Scripts without problems for a long time. I tried to modify the timings of some of my tasks, and APPLY and DONE do nothing. The button is clicked, but no action is taken, and refreshing the page brings back the old settings. It's not working in Chrome, IE, or Firefox.

 

How can I save my updated changes?

 

NOTE: I also have one user script with an @ in the title, and its cog is completely unclickable; not sure if this is related. If I could remove that too, it would be great.

 


bigrig-diagnostics-20200815-1326.zip

Edited by mihcox

1 hour ago, mihcox said:

I also have one user script with an @ in the title, and its cog is completely unclickable

Check the browser developer tools (F12 -> Console in Chrome). I bet there are JavaScript errors listed.

41 minutes ago, mgutt said:

Check the browser developer tools (F12 -> Console in Chrome). I bet there are JavaScript errors listed.

Errors appear related to the @Reports... How can I manually remove this, as the GUI does not respond when I click on it?

27 minutes ago, mihcox said:

How can I manually remove this, as the GUI does not respond when I click on it?

I tried it myself. You can bypass this error by right-clicking -> Inspect Element and then removing the "@" symbol from the "id" attribute, as shown in this screenshot (so it becomes id="nametest").

 

[screenshot]

 

After that the cog works and you are able to delete this script (and add a new one without the @ symbol in the name).

6 minutes ago, mgutt said:

I tried it myself. You can bypass this error by right-clicking -> Inspect Element and then removing the "@" symbol from the "id" attribute, as shown in this screenshot (so it becomes id="nametest").

[screenshot]

After that the cog works and you are able to delete this script (and add a new one without the @ symbol in the name).

Resolved both issues, thank you!


Hi Squid,

 

Great plugin!

I have more than 25 scripts running daily thanks to you.

Would it be possible to implement something like "Docker Folder"? It would make it much easier to organize large numbers of scripts.

That would be awesome!

Thanks!

 

 

On 8/23/2020 at 9:55 PM, Marcel_Costa said:

Hi Squid,

 

Great plugin!

I have more than 25 scripts running daily thanks to you.

Would it be possible to implement something like "Docker Folder"? It would make it much easier to organize large numbers of scripts.

That would be awesome!

Thanks!

 

 

Wow, this is what I was coming here for too! My scripts are starting to stack up; I would love to be able to organize them in a folder-type structure, e.g. array scripts, backup scripts, VM scripts, etc. Goes without saying, Squid, great plugin for sure.


Thanks for the plugin! I have a question about permissions. I am in the process of migrating my Plex/NAS server from a Mac. On my Mac I have a script that syncs new files every 5 minutes from a remote seedbox. I have altered this script for Unraid to download to my "Downloads" share. It appears to work fine as far as the syncing goes, but the files on the share are read-only and I can't do anything about that. What should I do so that I have read/write permissions from my other devices when I access the share? Thanks!

 

Here's the script:

#!/bin/bash
#### Seedbox Sync
#
# Sync various directories between home and seedbox, do some other things also.

SCRIPT_NAME="$(basename "$0")"
LOG_DIR="/mnt/user/Downloads/Seedbox/Scripts/Logs"
SYNC_LOG="$SCRIPT_NAME.log"

LOG_FILE="$LOG_DIR/$SYNC_LOG"

USERNAME='***'
PASS='***'
HOST='***'
PORT='***'

# Number of files to download simultaneously
nfile='2'
# Number of segments/parts to split the downloads into
nsegment='10'
# Minimum size each chunk (part) should be
minchunk='1'

# Location of remote files ready for pickup, no trailing slash
REMOTE_MEDIA="***/downloads/Sync/"
# Destination for remote media to be stored for further processing
LOCAL_MEDIA="/mnt/user/Downloads/Seedbox/Sync"
# Local processing directory files get moved to after sync
PROC_MEDIA="/mnt/user/Downloads/Seedbox/Processing"

# Test for local lockfile, exit if it exists
BASE_NAME="$(basename "$0")"
LOCK_FILE="/mnt/user/Downloads/Seedbox/Scripts/$BASE_NAME.lock"

# Test for remote lockfile, exit if exists
REMOTE_LOCK_FILE="***/scripts/post_process.sh.lock"

# Fix permissions issue
umask 002

# If a log file exists, rename it
if [ -f "$LOG_FILE" ]; then
    mv "$LOG_FILE" "$LOG_FILE.last"
fi

echo "${0} Starting at $(date)"
trap "rm -f ${LOCK_FILE}" SIGINT SIGTERM
    if [ -e "${LOCK_FILE}" ]
    then
        echo "${base_name} is running already."
        exit
    else
		# Skip this run if the remote post-processing script holds its lock
		if ssh "$USERNAME@$HOST" "test -e $REMOTE_LOCK_FILE"; then
			echo "Post Process is running on remote server, exiting..."
			exit
		fi
        touch "$LOCK_FILE"
        lftp -p "${PORT}" -u "${USERNAME},${PASS}" sftp://"${HOST}" << EOF
        set ftp:list-options -a
        set sftp:auto-confirm yes
        set pget:min-chunk-size ${minchunk}
        set pget:default-n ${nsegment}
        set mirror:use-pget-n ${nsegment}
        set mirror:parallel-transfer-count ${nfile}
        set mirror:parallel-directories yes
        set xfer:use-temp-file yes
        set xfer:temp-file-name *.lftp
        mirror -c -v --loop --Remove-source-dirs "${REMOTE_MEDIA}" "${LOCAL_MEDIA}"
        quit
EOF
    echo "${0} Remote sync finished at $(date)"

	# Move sync'd files to processing directory
	rsync -av --progress --ignore-existing --remove-source-files --prune-empty-dirs -O \
	--log-file=$LOG_FILE \
	--log-file-format="%f - %n" \
	${LOCAL_MEDIA}/ \
	${PROC_MEDIA}/

	# Clear Sync directory of empty folders and .DS_Store files
	find $LOCAL_MEDIA -name '.DS_Store' -type f -delete
	find $LOCAL_MEDIA -depth -type d -empty -exec rmdir "{}" \;
	mkdir -p $LOCAL_MEDIA
	rm -f "$LOCK_FILE"
	trap - SIGINT SIGTERM
	exit
fi

 

On 3/28/2020 at 10:37 PM, Derek_ said:
  • When I run Borg with a user script, it performs its operations as root (as expected) and the dir/file ownership is root:root
  • When I run an rsync copy with a user script, it performs its operations as root (as expected) and the file ownership is nobody:users.

 

@Derek_ did you ever figure this out? I am having the same issue. When I run lftp, any folders/files it creates are root:root, and I need them to be nobody:users. Thanks!

Edited by cmarshall85


root:root vs nobody:users doesn't matter. Under either scenario, the file permissions need to be either 666 or 777 for access.

 

Use the chmod command in the script if you can't get rsync to set permissions correctly.
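
For example, a short post-sync step like this (a sketch; the path mirrors the processing directory used above) keeps everything in line with the 666/777 convention:

# Sketch: normalize ownership and permissions after the sync so the
# share is readable and writable from other devices.
chown -R nobody:users /mnt/user/Downloads/Seedbox/Processing
find /mnt/user/Downloads/Seedbox/Processing -type d -exec chmod 777 {} +
find /mnt/user/Downloads/Seedbox/Processing -type f -exec chmod 666 {} +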


Hello,

Every day, a backup program creates a folder in /mnt/user/disk2/Nextcloud_backup:

 

2020-09-17

2020-09-16

2020-09-15

2020-09-14

2020-09-13

...

 

How can I keep, for example, only the last 10 folders?

Thanks
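
One possible approach (a sketch, untested here; it relies on the YYYY-MM-DD names sorting chronologically and keeps the 10 newest):

#!/bin/bash
# Sketch: delete all but the 10 newest dated backup folders.
cd /mnt/user/disk2/Nextcloud_backup || exit 1
ls -d */ | sort | head -n -10 | xargs -r rm -rf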

Edited by Alex.b

