
[Plugin] CA User Scripts



On 4/17/2024 at 12:13 AM, Amane said:

Hi sasbro

 

Cron jobs do not run with the full user environment, which means some environment variables needed by docker or your script might not be set correctly. This can affect the execution of commands that rely on these variables.

 

Add this line at the top of your script:

PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin

 

I hope it helps.

If not, test whether you can copy the script into the container and execute it there from the scheduled user script - just as a test.
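One reading of that test, sketched out; the container name "nextcloud" and the occ path are the ones used elsewhere in this thread, and the inner helper script is just a hypothetical illustration:

#!/bin/bash
# Write a small inner script, copy it into the container, and run it there,
# so the occ call no longer depends on the host environment
cat > /tmp/nc_update_inner.sh <<'EOF'
#!/bin/bash
php /var/www/html/occ app:update --all
EOF

docker cp /tmp/nc_update_inner.sh nextcloud:/tmp/nc_update_inner.sh
docker exec -u www-data nextcloud bash /tmp/nc_update_inner.sh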

 

Greetings ;)

Hey, as I told you, it worked. At least I thought so. After many days I realized it had stopped working again. I changed the schedule once to run it in a few minutes, and also opened the script and closed it again without changes. Now it worked again - but unfortunately, I guess, only once again. What the hell is causing this behavior? I added the PATH line. It feels like some kind of cache or something.

Link to comment
11 hours ago, sasbro97 said:

Hey, as I told you, it worked. At least I thought so. After many days I realized it had stopped working again. I changed the schedule once to run it in a few minutes, and also opened the script and closed it again without changes. Now it worked again - but unfortunately, I guess, only once again. What the hell is causing this behavior? I added the PATH line. It feels like some kind of cache or something.

 

Ok... I would extend the logging so that a log file is created in the "/config" folder, i.e. the one that is mapped to appdata.

 

#!/bin/bash
#description=This script updates all apps in Nextcloud.
#arrayStarted=true
#name=Nextcloud Auto-Update Apps

# Log file path inside the container; /config is the volume that is mapped to appdata
CONTAINER_LOGFILE="/config/auto_update_script.log"

echo "Script PATH: $(printenv PATH)"

# Run the echo and the redirection inside the container so the log really lands in the
# container's /config, and so date and PATH are evaluated there, not on the host
docker exec -u www-data nextcloud bash -c "echo -e \"\nStart: \$(date)\" >> $CONTAINER_LOGFILE"
docker exec -u www-data nextcloud bash -c "echo -e \"\nDocker PATH: \$PATH\" >> $CONTAINER_LOGFILE"

docker exec -u www-data nextcloud bash -c "echo -e \"\napp:update output:\" >> $CONTAINER_LOGFILE"
docker exec -u www-data nextcloud bash -c "php /var/www/html/occ app:update --all >> $CONTAINER_LOGFILE 2>&1"
docker_exit_status=$?

if [ $docker_exit_status -eq 0 ]; then
    # Healthcheck ping; note that $m is never set here, so the request body is simply empty
    curl -fsS -m 10 --retry 5 --data-raw "$m" http://unraid-server:8003/ping/1505e87d-84e2-4806-a2a0-fa214d96cb17
    echo -e "\nScript succeeded"
fi
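To read that log afterwards you can go through the container, or through appdata on the host; the host path below is only an assumption about the usual mapping:

# Read the log from inside the container
docker exec nextcloud cat /config/auto_update_script.log

# Or directly on the host, assuming /config is mapped to the usual appdata location
cat /mnt/user/appdata/nextcloud/auto_update_script.log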

 

On 4/16/2024 at 10:10 PM, sasbro97 said:
OK
Script succeeded

This line is particularly interesting, as its output is otherwise not logged?!

docker exec -u www-data nextcloud bash -c "php /var/www/html/occ app:update --all >> $CONTAINER_LOGFILE 2>&1"

 

 

Edited by Amane
Link to comment

@sasbro97 Ok, I have tested the PATH and I see this:

Manual Script PATH:	/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
Cron Script PATH:	.:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

So try this:

PATH="/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin"

 

That also works:

On 4/17/2024 at 3:22 PM, JoeUnraidUser said:
#!/bin/bash -l

source /root/.bash_profile

 

But check the Docker PATH with the script above.

 

Edited by Amane
Link to comment

So I think I may have found a glitch in this plugin.

I wrote a script that mounts a drive using unassigned devices, spins it up, and after some checks backs up a series of folders to the drive. Afterward it unmounts the drive and spins it back down.

 

The glitch comes in when running this script through User.Scripts. After the execution of the script ends, the drive spins back up and remains in that state.

 

The drive only spins back up when the script runs through User.Scripts. When it is run from the terminal, or after the Appdata Backup plugin's process, the drive remains spun down.

 

Here is the script:
 

#!/bin/bash

source_dir="/mnt/user/Backups"
destination_dir="/mnt/disks/External_Backup"

script_dir="/mnt/user/scripts/Backup_to_External_Drive/"
config_file="${script_dir}backup_config.json"

logger_tag="Backup to External Drive Script"

echo "##### Backup Script Starting #####" | tee >(logger -t "$logger_tag")

if [ -s "$config_file" ]; then
    folders=($(jq -r '.folders[]' "$config_file"))
	backup_drive=$(jq -r '.backup_drive' "$config_file")
else
    echo "ERROR: Config file: "$config_file" not found. Exiting." | tee >(logger -t "$logger_tag")
	echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")
	exit 1
fi


echo "Mounting and spinning up drive." | tee >(logger -t "$logger_tag")
/usr/local/sbin/rc.unassigned mount "$backup_drive"
hdparm -C "$backup_drive"

# Check if the source directory exists
if [ ! -d "$source_dir" ]; then
    echo "ERROR: Source directory $source_dir not found." | tee >(logger -t "$logger_tag")
	echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")
    exit 1
fi

# Check if the destination directory exists
if [ ! -d "$destination_dir" ]; then
    echo "ERROR: Destination directory $destination_dir not found." | tee >(logger -t "$logger_tag")
	echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")
    exit 1
fi

# Loop through each folder and perform backup
for folder in "${folders[@]}"; do
    # Check if the folder exists in the source directory
    if [ -d "$source_dir/$folder" ]; then
        echo "Backing up $folder..." | tee >(logger -t "$logger_tag")
		rsync -av --delete "$source_dir/$folder" "$destination_dir" 2>&1 | tee >(logger -t "$logger_tag")
        echo "Backup of $folder completed." | tee >(logger -t "$logger_tag")
    else
        echo "Folder $folder not found in $source_dir. Skipping..." | tee >(logger -t "$logger_tag")
    fi
done

echo "All backups completed." | tee >(logger -t "$logger_tag")

echo "Unmounting and spinning down drive." | tee >(logger -t "$logger_tag")

/usr/local/sbin/rc.unassigned umount "$backup_drive"
/usr/local/sbin/rc.unassigned spindown "$backup_drive"

echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")

 

Link to comment
Posted (edited)
3 hours ago, karldonteljames said:

Afternoon all, I seem to be unable to "apply" settings (the Apply button doesn't do anything) or view the logs of previously run scripts.

I've tried a couple of browsers on a couple of devices; any idea what might be going on?

 

Hi karldonteljames

 

I think you forgot the spaces (*/15****) -> (*/15 * * * *). I hope I am seeing that right..

(No, I think you wrote that correctly in the field.)

After you have entered the cron expression in the field, press Enter and then click Apply.

 

You can check the entries after applying with this command:

sed -n '/user.scripts/,$p' /etc/cron.d/root

(/etc/cron.d/root)

 


I hope you can confirm this.

 

Greetings

Edited by Amane
Link to comment
Posted (edited)

Thanks, that's just it. Clicking apply doesn't do anything: no loading, no greying out, nothing as far as I can tell.

After running that command line (above), nothing changes in the cron file.
I'm also missing the icons to view the logs of the completed scripts.
I've even restarted Unraid a couple of times.

Edited by karldonteljames
Link to comment
On 4/23/2024 at 10:04 PM, Luanas said:

Hello. After renaming the script in the section where the user scripts are located, the file is not renamed and the original name remains.

Can you fix this, please?

The plugin is not designed to rename the folder in which the script is located (/boot/config/plugins/user.scripts/scripts/).


 

If you want to rename the script or rather the folder in which the script is located, you must recreate the script with the same content and use the correct name from the beginning.
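A hedged sketch of that "recreate it under the new name" step, done by hand on the flash drive (the folder names are hypothetical; the scripts path is the one mentioned above):

SCRIPTS=/boot/config/plugins/user.scripts/scripts

# Create the new script folder, copy the old content over, then remove the old folder
mkdir -p "$SCRIPTS/new_script_name"
cp -r "$SCRIPTS/old_script_name/." "$SCRIPTS/new_script_name/"
rm -r "$SCRIPTS/old_script_name"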

 

Does that help you?

 

Link to comment
Posted (edited)
2 hours ago, karldonteljames said:

Thanks, that's just it. Clicking apply doesn't do anything: no loading, no greying out, nothing as far as I can tell.

After running that command line (above), nothing changes in the cron file.
I'm also missing the icons to view the logs of the completed scripts.
I've even restarted Unraid a couple of times.

Can you apply the other options correctly?


 

2 hours ago, karldonteljames said:

I'm also missing the icons to view the logs of the completed scripts too.

These only appear if the script has been run in the background; only then are the log files created again and the buttons reappear.
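For reference, background runs write their log under /tmp; a sketch of reading it directly, assuming the folder name matches the script (the path pattern is the one the plugin itself reports later in this thread):

cat /tmp/user.scripts/tmpScripts/your_script_name/log.txt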

 

Edited by Amane
Link to comment
1 hour ago, Luanas said:

Does the author of this script read the posts here, or is that just shouting into the wind? And where can I contact the developer of this script?

Not sure what script you are trying to run, but assuming you copied it from one of the posts in this thread, you could tag the author of that post or just reply to it; the person who posted the script would likely get a notification. Be aware, though, that most of the scripts posted in this thread are supplied by other users and for the most part aren't actively supported. If LT makes changes to unRAID that break a script, the author may not post an updated one.

Link to comment
Posted (edited)
On 4/28/2024 at 10:57 AM, Amane said:

That also works:

On 4/17/2024 at 3:22 PM, JoeUnraidUser said:

This did not output anything for me, but I also don't know if it should. I just created a new script with that content and ran it. It seems to work again now (not sure for how long or whether it's stable).

 

But I printed the docker run command of my portainer container and saw this environment variable:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Is it possible that this is my correct PATH then?

 

It makes no goddamn sense... these are my logs:

Script Starting May 10, 2024  03:00.01

Full logs for this script are available at /tmp/user.scripts/tmpScripts/nextcloud_auto_update_apps/log.txt

All apps are up-to-date or no updates could be found
OK
Script succeeded
Script Finished May 10, 2024  03:00.02

Full logs for this script are available at /tmp/user.scripts/tmpScripts/nextcloud_auto_update_apps/log.txt

I know there were updates for some apps. So I ran the script again manually. Out of nowhere it tells me the calendar app can be updated and updates it. So why is it telling me there are no updates when it runs on the schedule? Wtf is that? I cannot explain it, as the command seems to return different things depending on whether it is executed automatically.

Edited by sasbro97
Link to comment
Posted (edited)
On 5/9/2024 at 4:53 PM, sasbro97 said:

Is it possible that is my correct PATH then?

I no longer believe that it is due to the PATH variable.
At least I am also in the dark here.
I've given it some thought and have written a script for you that will log the relevant variables. You should be able to detect differences:

#!/bin/bash

# Name of the Docker container
CONTAINER_NAME="Nextcloud"

# Command to be executed in the container
COMMAND="env | grep -E '^PATH=|^USER_ID=|^GROUP_ID=|^HOME='"

# Execute docker exec to retrieve the environment variables from the container
echo -e "\n\nEnvironment variables in the Docker container:"
docker exec $CONTAINER_NAME bash -c "$COMMAND"

# Output additional local environment variables
echo -e "\nLocal environment variables:"
echo "PATH: $PATH"
echo "User: $(whoami)"
echo "Group: $(id -gn)"
echo "Home directory: $HOME"

Check the CONTAINER_NAME

 

Edited by Amane
Link to comment
On 5/10/2024 at 6:22 PM, Amane said:

I no longer believe that it is due to the PATH variable.
At least I am also in the dark here.
I've given it some thought and have written a script for you that will log the relevant variables. You should be able to detect differences:

#!/bin/bash

# Name of the Docker container
CONTAINER_NAME="Nextcloud"

# Command to be executed in the container
COMMAND="env"

# Execute docker exec to retrieve the environment variables from the container
echo -e "\n\nEnvironment variables in the Docker container:"
docker exec $CONTAINER_NAME bash -c "$COMMAND" | grep -E '^PATH=|^USER_ID=|^GROUP_ID=|^HOME='

# Output additional local environment variables
echo -e "\nLocal environment variables:"
echo "PATH: $PATH"
echo "User: $(whoami)"
echo "Group: $(id -gn)"
echo "Home directory: $HOME"

Check the CONTAINER_NAME

 

So the output of this exact script is the following:

Environment variables in the Docker container:
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Local environment variables:
PATH: /usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
User: root
Group: root
Home directory: /root

 

Link to comment
Posted (edited)
On 5/15/2024 at 4:13 AM, sasbro97 said:
Environment variables in the Docker container:
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Local environment variables:
PATH: /usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
User: root
Group: root
Home directory: /root

 

Remember to also run this script both in the background and manually to check for differences.
Maybe the fact that USER_ID and GROUP_ID are not set is an indication, but I'm also in the dark here. Sorry.
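A hedged sketch of one way to compare a manual run with a scheduled run, assuming each run's output is redirected into its own file (the file names and the script folder name are hypothetical):

# Manual run from a terminal:
bash /boot/config/plugins/user.scripts/scripts/env_check/script > /tmp/env_manual.txt 2>&1

# For the scheduled run, redirect into /tmp/env_background.txt inside the script itself,
# or copy the plugin's log file there afterwards. Then compare line by line:
diff /tmp/env_manual.txt /tmp/env_background.txt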

 

Edited by Amane
Link to comment
On 3/12/2024 at 6:25 PM, sjtuross said:

Is it possible to have a "docker daemon start complete" trigger? My script depends on some Docker network interfaces which only become available after the Docker daemon starts.


I'm wondering if this plugin could inject a hook into /usr/local/etc/rc.d/rc.docker, so that whenever the Docker daemon (re)starts, it calls some user scripts.

Link to comment
43 minutes ago, sjtuross said:


I'm wondering if this plugin could inject a hook into /usr/local/etc/rc.d/rc.docker, so that whenever the Docker daemon (re)starts, it calls some user scripts.

You could use an event for this, but you would need to create a file in the correct path and add persistence. You could use User Scripts to create the event file at boot. Event files are normally stored in a plugin's event directory.

 

Add the script to /usr/local/emhttp/plugins/dynamix/event/docker_started/

 

You need to make sure the script completes, as it will otherwise block other events.

 

The event description is here:

 

https://github.com/unraid/webgui/blob/master/sbin/emhttp_event
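A hedged sketch of what such a boot-time user script could look like; the hook file name and its body are hypothetical, the event path is the one from the post above, and the file has to be recreated after every boot because /usr/local/emhttp lives in RAM:

#!/bin/bash
# Recreate the docker_started event hook after each boot
EVENT_DIR=/usr/local/emhttp/plugins/dynamix/event/docker_started
mkdir -p "$EVENT_DIR"

cat > "$EVENT_DIR/my_docker_started_hook" <<'EOF'
#!/bin/bash
# Keep this short and let it finish, so other emhttp events are not held up
logger -t docker_started_hook "Docker daemon started"
EOF

chmod +x "$EVENT_DIR/my_docker_started_hook"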

Link to comment
22 hours ago, SimonF said:

You could use an event for this, but you would need to create a file in the correct path and add persistence. You could use User Scripts to create the event file at boot. Event files are normally stored in a plugin's event directory.

 

Add the script to /usr/local/emhttp/plugins/dynamix/event/docker_started/

 

You need to make sure the script completes, as it will otherwise block other events.

 

The event description is here:

 

https://github.com/unraid/webgui/blob/master/sbin/emhttp_event

 

Thank you for the information. I persisted my script under /usr/local/emhttp/plugins/user.scripts/event/docker_started/ and it worked as expected. Any special reason to put it under the dynamix plugin folder in your example?

Link to comment
16 minutes ago, sjtuross said:

 

Thank you for the information. I persisted my script under /usr/local/emhttp/plugins/user.scripts/event/docker_started/ and it worked as expected. Any special reason to put it under the dynamix plugin folder in your example?

No special reason. It can be under any plugin; dynamix is just the main GUI part.

Link to comment
1 hour ago, sjtuross said:

 

Thank you for the information. I persisted my script under /usr/local/emhttp/plugins/user.scripts/event/docker_started/ and it worked as expected. Any special reason to put it under the dynamix plugin folder in your example?

What are you using the script for? Is it a specific use case or could it be used by others?

Link to comment
