[Plugin] CA User Scripts



2 hours ago, uek2wooF said:

What is the viewDockerLogSize script telling me exactly?  The zoneminder log gets big pretty fast.  It doesn't seem to be the same info you get from docker logs <container>.  Can I move that log outside the container somehow?

 

Thanks!

That script is the precursor to the container size button.

 

Basically, the only two columns of interest in your image filling up are the writeable and log columns.

 

There's an entry in the docker FAQ about limiting the log size. New containers are automatically limited as they are installed, according to Settings - Docker (with the service disabled).
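As an illustration only (the size values and the IMAGE placeholder below are examples, not what the FAQ prescribes), the json-file log driver accepts per-container caps that can be passed as extra docker run parameters:

# Example: cap this container's log at ~50 MB in a single rotated file.
# IMAGE is a placeholder; the values are illustrative, not recommendations.
docker run -d --name zoneminder \
  --log-opt max-size=50m \
  --log-opt max-file=1 \
  IMAGE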

Link to comment

Hey all, running some docker commands using "At Startup of Array" and finding that docker isn't started when the script runs. Is this expected?

I initially tried waiting for a bit too, but the script seems to block the startup of docker entirely until it has completed.

 

If this is expected, is there an option/ability to say "At startup of array, once everything else is ready"? 😅

 

Thanks!

Link to comment

I'm trying to run a custom script to back up multiple shares with rclone. Here's a sample of the script:

 

#!/bin/bash

rclone_shares=( "movies" "TV" )

mkdir -p "/mnt/user/scripts/rclone/logs/$(date -I)"

for share in "${rclone_shares[@]}"
do
  echo "Working on $share"
  rclone sync "/mnt/user/$share" "crypt:current/$share" \
    --log-file "/mnt/user/scripts/rclone/logs/$(date -I)/$share.txt" \
    --progress --fast-list --max-backlog 1000000 \
    --transfers 3 --tpslimit 3 --checkers 16 --drive-chunk-size 128M \
    --log-level=INFO --backup-dir "crypt:old/$(date -I)/$share/" \
    --track-renames --track-renames-strategy "modtime" --modify-window 1s
done

/usr/local/emhttp/webGui/scripts/notify -s "Rclone" -d "Finished processing all shares"

My problem is that the script spawns multiple processes, and they aren't killed when I press "abort script". I then have to go through and manually kill all the processes. Is there a way I can run this and avoid the issue? Thanks.

Link to comment
48 minutes ago, Derek_ said:

Is it possible to rename 'script' to something else? When I do, it disappears from the user.scripts GUI.

No.  That was a design limitation from back when the plugin was introduced.  The use of the plugin has morphed very significantly, though, from what I thought it would be doing (running a 1 or 2 line script) into what many people are using it for now.

 

Dictating a set file name does make my life a ton easier, but you do have the option to rename the folder to whatever you choose.

 

Link to comment
1 hour ago, Squid said:

No.  That was a design limitation from back when the plugin was introduced.  The use of the plugin has morphed very significantly, though, from what I thought it would be doing (running a 1 or 2 line script) into what many people are using it for now.

 

Dictating a set file name does make my life a ton easier, but you do have the option to rename the folder to whatever you choose.

 

Thanks. Yes, I have useful folder names, which is OK. Mainly for backup purposes I have to rename them as I back them up. No biggie, but it would be nice.

 

Thank you for the plugin :)

Link to comment
On 2/13/2019 at 3:08 AM, Squid said:

Probably in retrospect this all makes perfect sense, as user scripts doesn't execute any script through the shell, so any environment variables aren't set.  (This is also why you have to give items the full path in some cases, whereas when executing via a shell you don't have to.)

Hi Squid. I'm trying to understand, and hopefully fix, a complicated problem I'm having, and I've been hunting high and low for the cause. I'm not experienced with Bash and scripting; I've learned a lot so far, but sadly no cigar. I'm wondering if your statement has any bearing?

 

I've made it more difficult for myself as I have an unusual setup. I run some scripts as my 'backup-user', i.e. not as root. The user has a shell through the SSH plugin (it's also how I SSH into my box, rather than as root). I can't, or rather don't want to, use root, because doing so 'breaks' the archive: all operations on the archive are done by, and owned by, the backup-user. I could ditch the backup-user and do everything, including SSHing into unRAID from the clients, as root, but I've designed it so I wouldn't have to. I've actually made a suggestion to Lime to improve the flexibility of the SSH landscape.

 

To the problem:

  • I'm using 'script' to call my 'borg-healthcheck.sh' script as the backup-user.
  • I can't get 'script' (parent) to pass environment variables to 'borg-check.sh' (child), nor can I get 'borg-check.sh' to return its exit codes to 'script' so that I can notify the results.

I can duplicate the environment variables (most apply to both scripts), but that's bad practice, and the return codes still don't go back to the parent script; without them I can't use notify (notify doesn't work as non-root).

 

I don't believe this is a user.scripts issue. Either I simply don't know how to do it, or it's an environmental peculiarity. If you have any advice, I'd appreciate it. I have spent days on the whole thing (never touched Bash before this) :)  Thanks.
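For reference, here's a rough sketch of the pattern I've been trying to get working (the variable name, user name and paths below are illustrative only, not my exact setup):

#!/bin/bash
# Run the child as backup-user, explicitly passing one environment variable,
# and capture its exit code in the (root) parent so notify still works.
export BORG_REPO="/mnt/user/backups/borg"
sudo -u backup-user --preserve-env=BORG_REPO /boot/scripts/borg-healthcheck.sh
rc=$?
if [ "$rc" -ne 0 ]; then
    /usr/local/emhttp/webGui/scripts/notify -i warning -s "Borg" \
        -d "borg-healthcheck.sh exited with code $rc"
fi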

 

FYI, running the notify command as the backup-user I get messages like this, which suggests a permissions issue (understandably):

Warning: file_put_contents(/tmp/notifications/archive/2020_04_11_1913_1586596404.notify): failed to open stream: No such file or directory in /usr/local/emhttp/plugins/dynamix/scripts/notify on line 201

I swear it used to work in 6.7.2. 🤔

 

Link to comment
On 4/4/2020 at 5:09 AM, dalgibbard said:

Hey all, running some docker commands using "At Startup of Array" and finding that docker isn't started when the script runs. Is this expected?

I initially tried waiting for a bit too, but the script seems to block the startup of docker entirely until it has completed.

A simple solution could be to put a sleep in your script before it does anything else:

sleep 10m

 

Unless unRAID has a specific order of things, but I doubt it would wait for your script to complete before it started its next thing. What if you had a script that lasted hours?

 

Failing a simple 'sleep' command, you could try calling another script and closing the first one (i.e. not waiting for the child to complete before the parent completes).
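A minimal sketch of that approach (the script path and log file are just examples):

#!/bin/bash
# Launch the real work detached so this parent script returns immediately
# and doesn't hold up the rest of the array startup. Paths are examples.
nohup /boot/config/plugins/user.scripts/scripts/my_tasks/script > /tmp/my_tasks.log 2>&1 &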

Link to comment
On 4/5/2020 at 4:07 AM, Mushin said:

I'm trying to run a custom script to back up multiple shares with rclone. Here's a sample of the script:

 

My problem is that the script spawns multiple processes, and they aren't killed when I press "abort script". I then have to go through and manually kill all the processes. Is there a way I can run this and avoid the issue? Thanks.

Do the children run in parallel?

 

Search this thread for a variety of keywords; I think someone else asked a similar question, though theirs was more about a long-running script that didn't actually stop when they tried to kill it.
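In the meantime, a minimal sketch of that idea, assuming the abort button delivers a TERM signal to the script (the sleep is just a stand-in for a long-running child such as rclone):

#!/bin/bash
# When this script exits or is aborted, kill its direct children too.
cleanup() { pkill -P $$; }
trap cleanup EXIT TERM INT

sleep 600 &   # stand-in for a long-running child such as rclone
wait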

Edited by Derek_
Link to comment
4 hours ago, Derek_ said:

A simple solution could be to put a sleep in your script before it does anything else:


sleep 10m

 

I did initially add a while loop that waited for docker to be 'up', and it never came up. Killed the script and then docker came up...

Not sure if it is blocking, or if it was coincidence. But either way, I can always have it call another script and nohup it or something.

 

Thanks for the pointers :)

Link to comment
1 hour ago, dalgibbard said:

I did initially add a while loop that waited for docker to be 'up', and it never came up. Killed the script and then docker came up...

Not sure if it is blocking, or if it was coincidence. But either way, I can always have it call another script and nohup it or something.

 

Thanks for the pointers :)

Could you put something like this at the top of your script...

until [ "$_done" ]; do
    if docker inspect -f '{{.State.Running}}' CONTAINER-NAME 2>/dev/null | grep -q 'true'; then
        _done=1
    else
        sleep 10
    fi
done
echo "GO! RUN YOUR LAUNCHED SCRIPTS"

...?  It could probably be made more efficient, but it seems to work here. It will run straight away if the docker container is already running.

 

I was initially trying to use the docker events command, which worked, but I couldn't find a way to safely kill it in a script once I'd got what I needed.

 

I did start working on a script where you could put in triggers for start/stop/pause/resume events, and got quite a way through, but again, once it's running it keeps on running, and I couldn't stop it.  It would be something better suited to a standalone plugin where you could select containers, events (start, stop, pause, resume) and a script to run.
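For what it's worth, a hedged sketch of one way to consume a single event so that the stream ends by itself (CONTAINER-NAME is a placeholder):

#!/bin/bash
# Wait for the next 'start' event of one container, then exit.
# head -n 1 closes the pipe after the first match, which terminates the
# otherwise never-ending 'docker events' stream.
docker events \
    --filter 'container=CONTAINER-NAME' \
    --filter 'event=start' \
    --format '{{.Actor.Attributes.name}}' | head -n 1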

Link to comment

Is there an option to run the script after boot, regardless of whether the array is up or down?

I tried a custom "@reboot" schedule, as it would be implemented with cronjobs/crontab, but that doesn't seem to work.

 

Is there any better solution for running a script "service"-style?

Or run the script every x seconds? (I have implemented this inside the script for now: I fetch some data, run a regex and write the result into an InfluxDB.)
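What I have now is roughly this shape (a sketch; the actual fetch/regex/write steps are elided):

#!/bin/bash
# Service-like user script: loop forever, doing the work every 30 seconds.
while true; do
    # fetch some data, run a regex, write the result into InfluxDB ...
    sleep 30
done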

Edited by Greyberry
Link to comment
  • 2 weeks later...

Feature Requests:

1.) Jump to the top if "Edit Script" is selected, or display the code under the script's row (maybe better, as it prevents scrolling)

2.) *deleted*

3.) "Show log" uses the -0700 timezone. Instead it should respect the Unraid timezone setting

4.) Changing a script's name should rename the script filename too. Or the User Scripts overview should be sorted by the script name and not by the script filename.

5.) Maybe it would be nice to see directly whether a script returned an error, e.g. by displaying the row with a red background color?!

6.) Optional: send script output to an email address.

7.) If you added a cogwheel symbol after the script name, you wouldn't need to explain to the user how to edit a script ("For more editing options, click on any script's name"), but I'm not sure if this would look good

8.) "Add new script" should contain the code textarea as well (no need for the extra step to edit it)

 

Thank you for reading and building this plugin!

Edited by mgutt
Link to comment
On 3/27/2020 at 2:13 PM, Zotarios said:

Once deleted, I tried to reinstall it, and this happens:

 

[screenshot]

 

[second screenshot]

 

Removed and installed again, same behaviour.

Hey - Did anyone get a solution for this? I seem to be experiencing the exact same behaviour.

 

I've completely removed the directory and .plg files from the flash drive.

Link to comment

Nice changes. Love that the icons are forced to be displayed in one row.

 

But there is a small bug. If you edit the name of a script, the gear symbol's HTML code becomes part of the name:

 

[screenshot: script name showing the gear icon's raw HTML]

 

and if you edit it, it displays both names without the gear:

 

[screenshot: edit view showing both names without the gear]

 

and if you delete the name, it's not possible to edit the name anymore:

 

[screenshot: script with the name deleted]

 

 

Link to comment
On 4/25/2020 at 8:14 PM, pearce1340 said:

Hey - Did anyone get a solution for this? I seem to be experiencing the exact same behaviour.

 

I've completely removed the directory and .plg files from the flash drive.

 

On 4/25/2020 at 9:32 PM, Squid said:

Reboot.

Just try rebooting and waiting; that's what I did and it solved the issue (after a couple of days and reboots). I don't know how, nor whether it's really the solution.

Link to comment

It appears docker rmi $(docker images --quiet --filter "dangling=true"), used by the built-in delete_dangling_images script, has been deprecated (or at least has changed). I updated the script to the version below and it works well (I also added a few checks and prune the volumes too). I read that some applications using older "data container patterns" could be inadvertently destroyed by this method, but it didn't affect any of my many containers. Maybe someone knows more about that.

 

 

/boot/config/plugins/user.scripts/scripts/delete_dangling_images/script

#!/bin/bash
#arrayStarted=true
#noParity=true
echo
echo
echo "Removing unused images..."
docker image prune -f
echo
echo
echo "Removing unused volumes..."
docker volume prune -f
echo
echo
echo "Finished"
echo
echo
echo "...If 0B shows above, no dangling images or volumes were found to delete..."

 

Edited by TheDude
Link to comment
  • 2 weeks later...

Hello everybody, I'm a new Unraid user...

I've found it to be a very powerful system.

Thanks to the author of this plugin; I've found it very useful for running HDSentinel for Linux to report the status of all disks.

I'm using 2 scripts:

1 - to copy the file to the system on boot

2 - to report and save the data file (*.html) to disk every 10 minutes

Then you add this file in HDSentinel on your Windows machine and enjoy the status of all the disks in the NAS.
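A rough sketch of script 2 (the binary path and report flags are assumptions from memory; check hdsentinel's help output for the exact options on your version):

#!/bin/bash
# Write an HTML status report for all disks to a share on every run,
# scheduled with a custom cron entry such as */10 * * * *.
/usr/local/bin/hdsentinel -r /mnt/user/system/hdsentinel/report.html -html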

Thanks

Link to comment

Hello everybody,

 

I'm posting in this topic because I think it's the best way to achieve what I want to do... but maybe I'm wrong, so don't blame me ^^

This problem has probably already been solved, but I can't really work through the 38 pages of this topic to see whether the scripts detailed here fit my needs.

 

I have a share called "photos" which holds basically ALL my photos (stored on Google Photos right now).

I have a Syncthing docker which syncs the camera folder on my phone to a "backup" share on my server.

 

Is there a way I can schedule, every week or day, copying my new photos from "backup" into the "photos" share? I think there is, but I'm not able to write any script... :(

 

Link to comment

@guilhem31

 

A) You could move all files with "mv":

#!/bin/bash
mv /mnt/user/backup/phonecamera/*.* /mnt/user/photos/phonecamera/

By that, all files will be moved to "/photos/phonecamera/". If files already exist at the destination, you can force overwriting them by adding the "-f" flag:

#!/bin/bash
mv -f /mnt/user/backup/phonecamera/*.* /mnt/user/photos/phonecamera/

 

B) If moving triggers a full new upload through your smartphone app, you should consider "rsync" to copy the files:

#!/bin/bash
rsync -a /mnt/user/backup/phonecamera/ /mnt/user/photos/phonecamera/

 

Of course there are many more possibilities, like copying files into a year/month subfolder structure (a sketch of that idea follows below) or copying only files with specific extensions like .jpg. I'm sure you will find other solutions with a little research.
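As one hedged example of the year/month idea (the paths and the .jpg filter are placeholders to adapt):

#!/bin/bash
# Sort photos into year/month subfolders based on each file's mtime.
src="/mnt/user/backup/phonecamera"
dst="/mnt/user/photos"
find "$src" -type f -iname '*.jpg' -print0 | while IFS= read -r -d '' f; do
    sub=$(date -r "$f" +%Y/%m)      # e.g. 2020/05, from the file's mtime
    mkdir -p "$dst/$sub"
    cp -n "$f" "$dst/$sub/"         # -n: never overwrite an existing file
done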

 

Hints:

You can find the correct paths by clicking on the view icon next to the share in Unraid:

[screenshot: the share list with its view icon]

 

That displays the complete path in the headline:

[screenshot: the complete path shown in the headline]

 

If you want to know how to use CA user scripts in general, this video could be helpful:

 

 

Link to comment
