[Plugin] CA User Scripts


Recommended Posts

1 hour ago, DigitalDivide said:

Thanks!  Very much appreciated!

 

Just one other question: when I create the text file, does it need an extension, or do I simply leave it blank?

 

Edit: never mind, I saw that I can create it in the app.

Do you really want to execute this multiple times? It sounds like you only want to run it once. In that case you could simply use the Terminal instead.


Link to comment

Not sure what you mean? The folder is where my seedbox's Syncthing copies files to on my local server. I then want a job to run that grabs those files and moves them to a different folder where Sonarr will pick them up. If it runs every hour, I would expect it to have nothing to move once the files have already been moved. Then when Syncthing dumps a new file there, it will get moved again.

 

The problem I am having is that Sonarr tries to copy the files from the sync folder while they are still being synced, which gives an error. The only way around this that I know of is to move them to a different folder and have Sonarr grab them from there. I can't find a way to have Sonarr exclude the sync files. While Syncthing is syncing, it temporarily names the files .sync<filename>. If I could find a way for Sonarr to disregard files that start with .sync, I'd be more than happy.

Link to comment

Ok, if you need hourly scheduling, CA user scripts is the correct way.

 

Edit: instead of physically copying all the files, you could hardlink them into a second folder. That way the "copy process" would be much faster and would not waste any storage space.

 

Edit2: This should work:

rsync -a --delete --link-dest=/mnt/disks/DELUGETorrents/Seedbox_Downloads/ /mnt/disks/DELUGETorrents/Seedbox_Downloads/ /mnt/disks/DELUGETorrents/Sonarr_Pickup/

 

"--link-dest" forces rsync to create hard links instead if copying the complete file.

 

"--delete" means it removes all files from Sonarr_Pickup/ if they do not exist in Seedbox_Downloads/ anymore.

 

Before executing this command you should empty the destination folder or use a different one.

 

Edit3: Ok "cp" supports that by using the "l"-flag too:

https://superuser.com/a/1360635/129262
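For reference, the cp variant from that link would look roughly like this with the folders used above (a sketch; the trailing /. copies the directory's contents):

```shell
# Hard-link the download folder's contents into the pickup folder.
# -a preserves attributes and recurses, -l creates hard links instead of copies.
cp -al /mnt/disks/DELUGETorrents/Seedbox_Downloads/. /mnt/disks/DELUGETorrents/Sonarr_Pickup/
```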

Edited by mgutt
Link to comment

Well, not sure if hard linking would be best. Once the files are copied (just for now; eventually the script will move them) to the Sonarr pickup folder, Sonarr will move and rename them from there to the final destination. So once Sonarr has done its job, there will be no files in either the pickup folder or the Syncthing folder. Then when a new episode of a watched show is grabbed by Sonarr and passed to Deluge, it will be downloaded, grabbed by Syncthing, and the process starts over with the script... or I hope it works that way, anyhow. That's the idea.

 

 

Link to comment

Hmm, got an error running the script:

Script location: /tmp/user.scripts/tmpScripts/SonarrCopy/script
Note that closing this window will abort the execution of this script
mv: invalid option -- 'R'
Try 'mv --help' for more information.

 

I had

mv -R /mnt/disks/DELUGETorrents/Seedbox_Downloads/* /mnt/disks/DELUGETorrents/Sonarr_Pickup/

 

I'm guessing the -R was for copy only?
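Right: -R (recursive) is a cp flag. mv moves directories without any extra flag, so the command works with -R simply dropped (same paths as before):

```shell
# mv is inherently recursive, so no -R flag is needed
mv /mnt/disks/DELUGETorrents/Seedbox_Downloads/* /mnt/disks/DELUGETorrents/Sonarr_Pickup/
```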

Edited by DigitalDivide
Link to comment

Hi Guys,

Great plugin, I have two small questions:

- Will the scheduler start a script if it's still running? (I only want one instance running at any given time; otherwise I need to put some simple locking into the script.)

- I have a simple script that runs rclone sync. When I click the abort button, the button disappears but the script keeps running, and I have to kill the processes manually. How should I run it so that it can be aborted from the GUI?

Link to comment
59 minutes ago, Norbs said:

Will the scheduler start a script if it's still running?

yes

 

59 minutes ago, Norbs said:

When I click on the abort button the button disappears but the script keeps running. I have to kill the processes manually.

Long running processes cannot be killed via the abort button. (I've tried everything to kill the PIDs of the children from the parent's PID, and nothing works because of how everything operates under the hood.)
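For the first question (avoiding overlapping runs), a simple flock-based guard at the top of the script should do the job; /tmp/myscript.lock is a placeholder lock file name:

```shell
#!/bin/bash
# Open the lock file on descriptor 200, then try to take an exclusive,
# non-blocking lock; bail out if another instance already holds it.
exec 200>/tmp/myscript.lock
flock -n 200 || { echo "Another instance is already running"; exit 1; }

# ... the actual work (e.g. rclone sync) goes here ...
```

The lock is released automatically when the script exits, even on a crash.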

Link to comment

Just another question on my script, mv /mnt/disks/DELUGETorrents/Seedbox_Downloads/* /mnt/disks/DELUGETorrents/Sonarr_Pickup/

 

If a file is being written to the /Seedbox_Downloads/ folder and the script above runs, will it try to move the file while it's still being written? When a file is being written (synced) to the folder, Syncthing names it .sync(filename). Would I need to adjust the move to exclude anything that starts with ".synch"? If so, what would I need to add to the above script?

Link to comment
21 minutes ago, mgutt said:

If your source files all have the same extension, you could use this:

/mnt/disks/DELUGETorrents/Seedbox_Downloads/*.mp4

 

Or this one should work, too:

/mnt/disks/DELUGETorrents/Seedbox_Downloads/!(*.synch)

I think I didn't explain that properly. The file name starts with .sync, so I think *.synch is looking for all files with a .synch extension. That won't work. All files end in .mkv, including the ones being written. I think I need something that excludes everything matching .syncthing.*

 

Just not sure how to do it.
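One detail that may already solve this: in bash, a plain * glob does not match hidden files (names beginning with a dot), so the .sync-prefixed temporary files should be skipped by the existing move command as-is (a sketch with the same paths):

```shell
# `*` skips dotfiles, so temporary names starting with ".sync" are not matched
mv /mnt/disks/DELUGETorrents/Seedbox_Downloads/* /mnt/disks/DELUGETorrents/Sonarr_Pickup/
```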

Link to comment

It didn't work; I got the following error:

Script location: /tmp/user.scripts/tmpScripts/SonarrCopy/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/SonarrCopy/script: line 2: syntax error near unexpected token `('
/tmp/user.scripts/tmpScripts/SonarrCopy/script: line 2: `mv /mnt/disks/DELUGETorrents/Seedbox_Downloads/!(.syncthing.*) /mnt/disks/DELUGETorrents/Sonarr_Pickup/'

 

I tried with !(.sync*.*) and got the same error as above.
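That syntax error happens because the !(...) pattern requires bash's extglob shell option, which is off by default in non-interactive scripts. Enabling it on a line before the mv should let the pattern parse (a sketch, keeping the pattern from the error above):

```shell
#!/bin/bash
# extglob must be enabled before bash parses the line that uses !(...)
shopt -s extglob
mv /mnt/disks/DELUGETorrents/Seedbox_Downloads/!(.syncthing.*) /mnt/disks/DELUGETorrents/Sonarr_Pickup/
```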

Link to comment

Hi, I have one Docker container that often fails to restart after, e.g., being stopped for a backup.

However, if I initiate it manually, it works. Always. I tried checking the logs etc. but couldn't find a solution.

 

Then I remembered that User Scripts can use the "restart" command to start a Docker container, and I could schedule it to run every now and then.

However, I don't want it to restart if the container is already running. How can I add something like: if status = running, do nothing, else start/restart the container?

 

Can someone please help me find those commands, or point me to script documentation? Or do I need to look at Docker commands? Of course I searched the forums beforehand but couldn't find a solution or a similar case. If I missed it, feel free to point me to it.

 

Thanks

Link to comment
2 hours ago, twok said:

However, I don't want it to restart if the container is already running. How can I add something like: if status = running, do nothing, else start/restart the container?

 

This is how I check the status and start a container through one of my scripts:

        # check if mkvtoolnix container is already running
        if [[ ! "$(docker ps -q -f name=mkvtoolnix_mkv2sub)" ]]; then # https://stackoverflow.com/a/38576401/318765
            # check for blocking container
            if [[ "$(docker ps -aq -f status=exited -f name=mkvtoolnix_mkv2sub)" ]]; then
                docker rm mkvtoolnix_mkv2sub
            fi
            echo "mkvtoolnix container needs to be started"
            # start mkvtoolnix container
            docker_options=(
                run -d
                --name=mkvtoolnix_mkv2sub
                -e TZ=Europe/Berlin
                -v "${docker_config_path}mkvtoolnix_mkv2sub:/config:rw"
                -v "${movies_path}:/storage:rw"
                jlesage/mkvtoolnix
            )
            echo "docker ${docker_options[@]}"
            docker "${docker_options[@]}"
        fi

 

Found here:

https://stackoverflow.com/a/38576401/318765

 

But this only checks whether the container already exists or its status is "exited".

 

I tried to check whether the container is no longer in use via its CPU usage, but this caused problems with parallel running scripts, so I disabled it:

    # check if container exists
    if [[ -x "$(command -v docker)" ]] && [[ "$(docker ps -q -f name=mkvtoolnix_mkv2sub)" ]]; then
        # stop container only if it's not in use (by another shell script)
        mkvtoolnix_cpu_usage="$(docker stats mkvtoolnix_mkv2sub --no-stream --format "{{.CPUPerc}}")"
        # if [[ ${mkvtoolnix_cpu_usage%.*} -lt 1 ]]; then
            # we do not stop the container as our script is not race-condition safe!
            # echo "Stop mkvtoolnix container"
            # docker stop mkvtoolnix_mkv2sub
            # docker rm mkvtoolnix_mkv2sub
        # fi
    fi

Maybe in the future I will use a random container name and clean the containers up after they become too old. I'm not sure.
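For the narrower question above (only start the container when it is not running), a smaller check should be enough; "mycontainer" is a placeholder for the real container name:

```shell
#!/bin/bash
# docker inspect prints the container's running state ("true"/"false");
# start the container only if it is not currently running.
if [ "$(docker inspect -f '{{.State.Running}}' mycontainer 2>/dev/null)" != "true" ]; then
    docker start mycontainer
fi
```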

Edited by mgutt
Link to comment

Hi,

 

I want to run a command every 12 hours. Here is the command, and it works from PuTTY:


docker exec -u abc -it sabnzbdrninja /downloads/opus/opus.bash /music

 

If I put that in User Scripts, it does not work.

 

#!/bin/bash
docker exec -u abc -it sabnzbdrninja /downloads/opus/opus.bash /music

 

Here is the log output. Do I need to do something else?

 

Script Starting Jun 02, 2020  15:14.32

Full logs for this script are available at /tmp/user.scripts/tmpScripts/music folder - flac to opus/log.txt

the input device is not a TTY
Script Finished Jun 02, 2020  15:14.32

Full logs for this script are available at /tmp/user.scripts/tmpScripts/music folder - flac to opus/log.txt
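The "the input device is not a TTY" line usually points at the -t flag: a scheduled script has no terminal attached, so dropping -it (or at least -t) from the docker exec should let it run, assuming the container name and script path above are correct:

```shell
#!/bin/bash
# No terminal is attached when the scheduler runs this, so call
# docker exec without -t (and -i is not needed for a non-interactive job):
docker exec -u abc sabnzbdrninja /downloads/opus/opus.bash /music
```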
 

 

Many Thanks

Link to comment

Hello.

I am running into a slight issue. I just updated from 6.7.2 to 6.8.3 (a little late, I know) and it seems to have broken one of my scripts. Any time I try to run

Quote

/etc/rc.d/rc.docker start

or

Quote

/etc/rc.d/rc.docker restart

the Done button never appears, which leads me to believe the script is hanging. If I open the Docker tab of the web UI in another browser tab without closing the script, the Docker service appears to have started and works fine. However, closing the script in the original tab causes it to abort, after which the Docker service reports that it failed to start despite working fine until then. I have deleted my Docker image to verify that it was not an issue with Docker itself, and the problem persists. Executing the same commands from the terminal works fine. Any help would be very appreciated!
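One thing worth trying (an untested workaround, not a confirmed fix): the page may be waiting on the service's still-open output streams, so detaching them and backgrounding the call could let the script finish while the service keeps starting:

```shell
#!/bin/bash
# Detach stdout/stderr and background the service start so the
# user-scripts page is not kept open by the long-lived daemon.
nohup /etc/rc.d/rc.docker start >/dev/null 2>&1 &
```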

Link to comment
