Luanas Posted April 23

Hello. After renaming a script in the section where user scripts are located, the underlying file is not renamed and keeps its original name. Could you fix this, please?
sasbro97 Posted April 27

On 4/17/2024 at 12:13 AM, Amane said:

Hi sasbro
Cron jobs do not run with the full user environment, which means some environment variables needed by docker or your script might not be set correctly. This can affect the execution of commands that rely on those variables. Add this line at the top of your script:

PATH=/usr/bin:/bin:/usr/sbin:/sbin:/usr/local/bin

I hope it helps. If not, test whether you can copy the script into the container and execute it with the scheduled user script, just as a test.
Greetings

Hey, as I told you, it worked. At least I thought so. Now, after many days, I realized it did not work after all. I changed the schedule once to run in a few minutes, and I also opened the script and closed it again without changes. Then it worked again, though I suspect, unfortunately, only once. What is causing this behavior? I did add the PATH line. It feels like some kind of cache or something.
Amane Posted April 28

11 hours ago, sasbro97 said:
Now, after many days, I realized it did not work after all. [...] It feels like some kind of cache or something.

OK... I would extend the logging so that the script writes a log file into the "/config" folder, i.e. the one that is mapped into appdata:

```bash
#!/bin/bash
#description=This script updates all apps in Nextcloud.
#arrayStarted=true
#name=Nextcloud Auto-Update Apps

CONTAINER_LOGFILE="/config/auto_update_script.log"

echo "Script PATH: $(printenv PATH)"

# Run the redirections inside the container (via bash -c) so the log really
# lands in the container's /config; a plain `>>` after `docker exec` would
# redirect on the host instead, where that path does not exist.
docker exec -u www-data nextcloud bash -c "echo -e '\nStart: $(date)' >> $CONTAINER_LOGFILE"
docker exec -u www-data nextcloud bash -c "echo -e \"\nDocker PATH: \$PATH\" >> $CONTAINER_LOGFILE"
docker exec -u www-data nextcloud bash -c "echo -e '\napp:update output:' >> $CONTAINER_LOGFILE"
docker exec -u www-data nextcloud bash -c "php /var/www/html/occ app:update --all >> $CONTAINER_LOGFILE 2>&1"
docker_exit_status=$?

if [ $docker_exit_status -eq 0 ]; then
    # Healthcheck ping (host name and UUID as in the original)
    curl -fsS -m 10 --retry 5 http://unraid-server:8003/ping/1505e87d-84e2-4806-a2a0-fa214d96cb17
    echo -e "\nScript succeeded"
fi
```

On 4/16/2024 at 10:10 PM, sasbro97 said:
OK
Script succeeded

That output is particularly interesting: why is nothing else logged? This is the line that should be capturing it:

```bash
docker exec -u www-data nextcloud bash -c "php /var/www/html/occ app:update --all >> $CONTAINER_LOGFILE 2>&1"
```

Edited April 28 by Amane
Amane Posted April 28

@sasbro97 OK, I have now tested the PATH myself and I see this:

Manual script PATH:
/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

Cron script PATH:
.:/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

So test:

PATH="/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin"

This also works:

On 4/17/2024 at 3:22 PM, JoeUnraidUser said:

```bash
#!/bin/bash -l
source /root/.bash_profile
```

But check the docker PATH with the script above.

Edited April 28 by Amane
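A small diagnostic that follows from this: have the script itself append the PATH each run actually saw, then compare a manual run against a scheduled one. This is only a sketch; the log location is an arbitrary scratch path, not something the plugin provides.

```shell
#!/bin/bash
# Sketch: record the PATH of every run so manual and cron runs can be diffed.
# LOG is an assumed scratch location, cleared on reboot like the rest of /tmp.
LOG="${LOG:-/tmp/user.scripts.path.log}"
{
  echo "run at: $(date)"
  echo "PATH=$PATH"
} >> "$LOG"
tail -n 2 "$LOG"
```

After a few runs, `grep '^PATH=' /tmp/user.scripts.path.log | sort -u` shows at a glance whether the scheduled runs ever saw a different PATH.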
p0xus Posted May 4

I think I may have found a glitch in this plugin. I wrote a script that mounts a drive using Unassigned Devices, spins it up, and after some checks backs up a series of folders to the drive. Afterward it unmounts the drive and spins it back down. The glitch appears when running this script through User Scripts: after the script finishes, the drive spins back up and stays that way. The drive only spins back up when the script is run through User Scripts; when run from a terminal, or after the Appdata Backup plugin's process, the drive remains spun down. Here is the script:

```bash
#!/bin/bash
source_dir="/mnt/user/Backups"
destination_dir="/mnt/disks/External_Backup"
script_dir="/mnt/user/scripts/Backup_to_External_Drive/"
config_file="${script_dir}backup_config.json"
logger_tag="Backup to External Drive Script"

echo "##### Backup Script Starting #####" | tee >(logger -t "$logger_tag")

if [ -s "$config_file" ]; then
    folders=($(jq -r '.folders[]' "$config_file"))
    backup_drive=$(jq -r '.backup_drive' "$config_file")
else
    echo "ERROR: Config file $config_file not found. Exiting." | tee >(logger -t "$logger_tag")
    echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")
    exit 1
fi

echo "Mounting and spinning up drive." | tee >(logger -t "$logger_tag")
/usr/local/sbin/rc.unassigned mount "$backup_drive"
hdparm -C "$backup_drive"

# Check if the source directory exists
if [ ! -d "$source_dir" ]; then
    echo "ERROR: Source directory $source_dir not found." | tee >(logger -t "$logger_tag")
    echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")
    exit 1
fi

# Check if the destination directory exists
if [ ! -d "$destination_dir" ]; then
    echo "ERROR: Destination directory $destination_dir not found." | tee >(logger -t "$logger_tag")
    echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")
    exit 1
fi

# Loop through each folder and perform backup
for folder in "${folders[@]}"; do
    # Check if the folder exists in the source directory
    if [ -d "$source_dir/$folder" ]; then
        echo "Backing up $folder..." | tee >(logger -t "$logger_tag")
        rsync -av --delete "$source_dir/$folder" "$destination_dir" 2>&1 | tee >(logger -t "$logger_tag")
        echo "Backup of $folder completed." | tee >(logger -t "$logger_tag")
    else
        echo "Folder $folder not found in $source_dir. Skipping..." | tee >(logger -t "$logger_tag")
    fi
done

echo "All backups completed." | tee >(logger -t "$logger_tag")
echo "Unmounting and spinning down drive." | tee >(logger -t "$logger_tag")
/usr/local/sbin/rc.unassigned umount "$backup_drive"
/usr/local/sbin/rc.unassigned spindown "$backup_drive"
echo "##### Backup Script Exiting #####" | tee >(logger -t "$logger_tag")
```
karldonteljames Posted May 7

Afternoon all. I seem to be unable to apply settings (the Apply button doesn't do anything) or view the logs of previously run scripts. I've tried a couple of browsers on a couple of devices. Any idea what might be going on? Thanks
Amane Posted May 7

3 hours ago, karldonteljames said:
Afternoon all. I seem to be unable to apply settings (the Apply button doesn't do anything) or view the logs of previously run scripts. I've tried a couple of browsers on a couple of devices. Any idea what might be going on?

Hi karldonteljames

My first thought was that you forgot the spaces, i.e. (*/15****) instead of (*/15 * * * *), but no, I think you wrote that correctly in the field. After you have entered the cron schedule in the field, press Enter and then click Apply. You can check the entries after applying with this command:

```bash
sed -n '/user.scripts/,$p' /etc/cron.d/root
```

I hope you can confirm this.
Greetings

Edited May 7 by Amane
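For what it's worth, the difference between the two spellings comes down to a simple field count: the custom schedule expects five whitespace-separated cron fields, so "*/15****" is read as a single field. The little helper below is only an illustration, not part of the plugin:

```shell
#!/bin/bash
# Sketch: a crude five-field sanity check for a custom cron schedule.
check_cron() {
  local n
  n=$(echo "$1" | wc -w)
  if [ "$n" -eq 5 ]; then
    echo "valid: $n fields"
  else
    echo "invalid: $n fields"
  fi
}

check_cron '*/15****'       # invalid: 1 fields
check_cron '*/15 * * * *'   # valid: 5 fields
```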
karldonteljames Posted May 8

Thanks. That's just it: clicking Apply doesn't do anything. No loading, no greying out, nothing as far as I can tell. After running that command line (above), nothing changes in the cron file. I'm also missing the icons to view the logs of completed scripts. I've even restarted Unraid a couple of times.

Edited May 8 by karldonteljames
Luanas Posted May 8

Does the author of this script read the posts here, or am I just shouting into the wind? Where can I contact the developer of this script?
Amane Posted May 8

On 4/23/2024 at 10:04 PM, Luanas said:
Hello. After renaming a script in the section where user scripts are located, the underlying file is not renamed and keeps its original name. Could you fix this, please?

The plugin is not designed to rename the folder in which the script is located (/boot/config/plugins/user.scripts/scripts/). If you want to rename the script, or rather the folder it lives in, you must recreate the script with the same content and use the correct name from the beginning. Does that help you?
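That workaround can also be done from the command line. The sketch below assumes, per the path above, that each user script lives in its own folder whose name is the script name; the helper function and the demo are illustrative, not part of the plugin:

```shell
#!/bin/bash
# Sketch: "renaming" a user script = copying its folder under the new name,
# then removing the old folder. scripts_dir/old/new are caller-supplied.
rename_user_script() {
  local scripts_dir="$1" old="$2" new="$3"
  cp -r "$scripts_dir/$old" "$scripts_dir/$new" && rm -r "$scripts_dir/$old"
}

# On Unraid the scripts_dir would be /boot/config/plugins/user.scripts/scripts;
# the demo below uses a temp dir so the sketch stays self-contained.
demo=$(mktemp -d)
mkdir -p "$demo/old_name" && echo '#!/bin/bash' > "$demo/old_name/script"
rename_user_script "$demo" old_name new_name
ls "$demo"   # -> new_name
```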
Amane Posted May 8

2 hours ago, karldonteljames said:
Thanks. That's just it: clicking Apply doesn't do anything. No loading, no greying out, nothing as far as I can tell.

Can you apply the other options correctly?

2 hours ago, karldonteljames said:
I'm also missing the icons to view the logs of completed scripts.

Those only appear if the script has been run in the background; only then are the log files created and the buttons shown again.

Edited May 8 by Amane
wgstarks Posted May 8

1 hour ago, Luanas said:
Does the author of this script read the posts here, or am I just shouting into the wind? Where can I contact the developer of this script?

Not sure which script you're trying to run, but assuming you copied it from one of the posts in this thread, you could tag the author of that post, or just reply to it so the person who posted the script gets a notification. You should be aware, though, that most of the scripts posted in this thread are supplied by other users and for the most part aren't actively supported. If LT makes changes to unRAID that break a script, the author may not post an updated version.
sasbro97 Posted May 9

On 4/28/2024 at 10:57 AM, Amane said:
This also works:
On 4/17/2024 at 3:22 PM, JoeUnraidUser said: [...]

This did not output anything for me, but I also don't know if it should. I just created a new script with that content and ran it. It seems to work again now (not sure for how long, or whether it's stable). But I printed the docker run command of my Portainer container and saw this environment variable:

PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Is it possible that this is my correct PATH then? It makes no sense... these are my logs:

Script Starting May 10, 2024 03:00.01
Full logs for this script are available at /tmp/user.scripts/tmpScripts/nextcloud_auto_update_apps/log.txt
All apps are up-to-date or no updates could be found
OK
Script succeeded
Script Finished May 10, 2024 03:00.02
Full logs for this script are available at /tmp/user.scripts/tmpScripts/nextcloud_auto_update_apps/log.txt

I know there were updates for some apps, so I ran the script again manually. Out of nowhere it told me the Calendar app could be updated, and updated it. So why does it say there are no updates when the script runs on its schedule? I cannot explain it; the command seems to return different things depending on whether the execution is automatic.

Edited May 10 by sasbro97
Amane Posted May 10

On 5/9/2024 at 4:53 PM, sasbro97 said:
Is it possible that this is my correct PATH then?

I no longer believe it is due to the PATH variable; at this point I am in the dark too. I've given it some thought and written a script for you that logs the relevant variables, so you should be able to spot differences:

```bash
#!/bin/bash

# Name of the Docker container
CONTAINER_NAME="Nextcloud"

# Command to be executed in the container
COMMAND="env | grep -E '^PATH=|^USER_ID=|^GROUP_ID=|^HOME='"

# Execute docker exec to retrieve the environment variables from the container
echo -e "\n\nEnvironment variables in the Docker container:"
docker exec $CONTAINER_NAME bash -c "$COMMAND"

# Output additional local environment variables
echo -e "\nLocal environment variables:"
echo "PATH: $PATH"
echo "User: $(whoami)"
echo "Group: $(id -gn)"
echo "Home directory: $HOME"
```

Check the CONTAINER_NAME.

Edited May 17 by Amane
sasbro97 Posted May 15

On 5/10/2024 at 6:22 PM, Amane said:
I've given it some thought and written a script for you that logs the relevant variables [...]

The output of that exact script is the following:

Environment variables in the Docker container:
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin

Local environment variables:
PATH: /usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
User: root
Group: root
Home directory: /root
Amane Posted May 17

On 5/15/2024 at 4:13 AM, sasbro97 said:
Environment variables in the Docker container:
HOME=/root
PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin
Local environment variables:
PATH: /usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin
User: root
Group: root
Home directory: /root

Remember to run this script both in the background and manually, so you can check for differences. Maybe the fact that USER_ID and GROUP_ID are not set is a hint, but I'm in the dark here too, sorry.

Edited May 17 by Amane
sjtuross Posted May 18

On 3/12/2024 at 6:25 PM, sjtuross said:
Is it possible to have a "docker daemon start complete" trigger? My script depends on some docker network interfaces which only become available after the docker daemon starts.

I'm wondering if this plugin could inject a hook into /usr/local/etc/rc.d/rc.docker, so that whenever the docker daemon (re)starts, it calls some user scripts.
SimonF Posted May 18

43 minutes ago, sjtuross said:
I'm wondering if this plugin could inject a hook into /usr/local/etc/rc.d/rc.docker, so that whenever the docker daemon (re)starts, it calls some user scripts.

You could use an event for this, but you would need to create a file in the correct path and add persistence; you could use User Scripts to create the event file at boot. Event files are normally stored in a plugin's event directory. Add your script into:

/usr/local/emhttp/plugins/dynamix/event/docker_started/

You need to make sure the script completes, as it will otherwise hold up other events. The event mechanism is described here: https://github.com/unraid/webgui/blob/master/sbin/emhttp_event
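A sketch of how that persistence could look as a boot-time user script. The event directory is the one mentioned above; the hook file name and its body are assumptions, and the demo runs against a temp dir so the sketch is self-contained:

```shell
#!/bin/bash
# Sketch: (re)create a docker_started event hook at boot so it survives
# reboots. Hook name and payload are illustrative.
install_docker_started_hook() {
  local event_dir="$1"
  mkdir -p "$event_dir"
  cat > "$event_dir/my_hook" <<'EOF'
#!/bin/bash
# Keep this quick: a long-running hook delays other events.
logger -t docker_started_hook "docker daemon is up"
EOF
  chmod +x "$event_dir/my_hook"
}

# On Unraid you would call something like:
#   install_docker_started_hook /usr/local/emhttp/plugins/dynamix/event/docker_started
# Demo against a temp dir:
demo=$(mktemp -d)
install_docker_started_hook "$demo/docker_started"
ls "$demo/docker_started"   # -> my_hook
```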
sasbro97 Posted May 18

It has worked multiple times now... I used the local PATH variable in the script:

PATH=/usr/local/sbin:/usr/sbin:/sbin:/usr/local/bin:/usr/bin:/bin

Maybe that is the solution.
sjtuross Posted May 19

22 hours ago, SimonF said:
You could use an event for this, but you would need to create a file in the correct path and add persistence [...]

Thank you for the information. I persisted my script under /usr/local/emhttp/plugins/user.scripts/event/docker_started/ and it worked as expected. Any special reason to put it under the dynamix plugin folder in your example?
SimonF Posted May 19

16 minutes ago, sjtuross said:
Any special reason to put it under the dynamix plugin folder in your example?

No special reason; it can be under any plugin. dynamix is just the main GUI part.
SimonF Posted May 19

1 hour ago, sjtuross said:
I persisted my script under /usr/local/emhttp/plugins/user.scripts/event/docker_started/ and it worked as expected.

What are you using the script for? Is it a specific use case, or could it be used by others?
sjtuross Posted May 27

On 5/19/2024 at 6:37 PM, SimonF said:
What are you using the script for? Is it a specific use case, or could it be used by others?

I have two use cases:

1. Setting up policy routing for my custom docker bridge network, so its traffic can go through my second physical network interface.
2. A docker container that mounts multiple cloud drives, after which I run a mergerfs script to merge them.

The best trigger for both scripts is docker_started. I think this event would be useful to others, and it would be much easier to set up if the User Scripts plugin supported it directly.
idean Posted May 27

On 10/1/2022 at 6:22 AM, DataHearth said:

Hey! I've been using User Scripts to trigger my backup script every 3 days, but I noticed since I upgraded to unRAID v6.10.3 (I hadn't checked before this version because it was working, I guess) that I get two behaviors with it: one does what it's supposed to when started via the "Run Script" button, the other happens when it's running in the background or started via the "Run in Background" button. I'm mainly interested in the background method, to automate my backups. To be more specific, I'm using rclone and restic to back up everything to the Storj data center. rclone's working configuration is stored as a file in $HOME/.config/rclone/rclone.conf (set up automatically by another script), and a simple script backs everything up:

```bash
#!/bin/bash
echo $(rclone config file)

export RESTIC_PASSWORD=ABEAUTIFULPASSWORD
export RESTIC_REPOSITORY=rclone:storj:SERVER/LOCATION

echo "Starting backup..."
restic backup --exclude-file=/boot/config/plugins/user.scripts/scripts/backup_data/excludes.txt
restic forget --keep-last 2 --prune
echo "Backup finished successfully"
```

Everything works as intended with "Run Script", or by running:

bash /boot/config/plugins/user.scripts/scripts/backup_data/script

But when running in the background, I get this output:

Script Starting Oct 01, 2022 12:04.54
Full logs for this script are available at /tmp/user.scripts/tmpScripts/backup_data/log.txt
Configuration file doesn't exist, but rclone will use this path: /.config/rclone/rclone.conf   # <= debug line ("rclone config file") to test whether rclone detects the config in this context; it should point to /root/.config/rclone/rclone.conf
Starting backup...
rclone: 2022/10/01 12:04:54 NOTICE: Config file "/.config/rclone/rclone.conf" not found - using defaults
rclone: 2022/10/01 12:04:54 Failed to create file system for "storj:cronos/backup": didn't find section in config file
Fatal: unable to open repo at rclone:storj:cronos/backup: error talking HTTP to rclone: Get "http://localhost/file-5577006791947779410": unexpected EOF
rclone: 2022/10/01 12:04:54 NOTICE: Config file "/.config/rclone/rclone.conf" not found - using defaults
rclone: 2022/10/01 12:04:54 Failed to create file system for "storj:cronos/backup": didn't find section in config file
Fatal: unable to open repo at rclone:storj:cronos/backup: error talking HTTP to rclone: Get "http://localhost/file-5577006791947779410": unexpected EOF
Backup finished successfully
Script Finished Oct 01, 2022 12:04.54

I noticed that when running in the background, rclone doesn't use my configuration file (as if it doesn't exist) and falls back to /.config/rclone/rclone.conf, which is not right, as rclone's default behavior should be to look at $HOME/.config/rclone/rclone.conf. Any idea why I get two behaviors with the script? Thanks!

Same issue here. I worked around it by copying my rclone.conf to /boot/config and referring to it directly with:

--config /boot/config/rclone.conf

But I'd still like to know why background processes can't access it directly.
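A hedged guess at the mechanism: background runs may start without HOME set, so rclone's default config path $HOME/.config/rclone/rclone.conf collapses to exactly the /.config/rclone/rclone.conf seen in those logs. The sketch below only makes that path resolution visible; the workarounds named in the comments are the ones already mentioned in these posts:

```shell
#!/bin/bash
# Sketch: show how rclone's default config path degrades when HOME is empty.
resolve_conf() { echo "${1}/.config/rclone/rclone.conf"; }

resolve_conf ""        # HOME unset/empty (background run) -> /.config/rclone/rclone.conf
resolve_conf "/root"   # HOME=/root (manual run)           -> /root/.config/rclone/rclone.conf

# Possible workarounds: `export HOME=/root` at the top of the script, or pass
# an explicit `--config /boot/config/rclone.conf` as described above.
```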
Joly0 Posted May 27

Hey guys, I have some issues running a script. The script connects to a docker container using docker exec and runs a command in it. The problem is that these commands take a very long time to run, and I want to be able to stop them when needed. I was able to rewrite it so that when running in foreground mode, it stops when I close the window. But as soon as I run it in background mode and abort the script, the commands just keep running forever. Maybe someone knows why. Here is the current version of the script:

```bash
#!/bin/bash

# Variables
CONTAINER_NAME="LANCache-Prefill"

# Function to clean up child processes
cleanup() {
    echo "Cleaning up..."
    pkill -P $$
    pkill -f "docker exec -u prefill $CONTAINER_NAME"
    kill `ps aux | grep "prefill --force" | awk '{print $2}'`
    exit 1
}

# Trap termination signals to trigger cleanup
trap cleanup SIGINT SIGTERM

# Execute commands in Docker container sequentially
docker exec -u prefill $CONTAINER_NAME /bin/bash -c "cd /lancacheprefill && ./SteamPrefill/SteamPrefill prefill --force" &
PID=$!
wait $PID

docker exec -u prefill $CONTAINER_NAME /bin/bash -c "cd /lancacheprefill && ./EpicPrefill/EpicPrefill prefill --force" &
PID=$!
wait $PID

docker exec -u prefill $CONTAINER_NAME /bin/bash -c "cd /lancacheprefill && ./BattleNetPrefill/BattleNetPrefill prefill --force" &
PID=$!
wait $PID

# Clean up if the script finishes
cleanup
```
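One hedged guess at the cause: `docker exec` only runs a client process on the host, so killing that client (which is what the host-side `pkill` lines do) does not necessarily kill the command inside the container. A helper that asks the container itself to stop the command might behave better inside cleanup(); the function name here is an illustration, not an established fix:

```shell
#!/bin/bash
# Sketch: stop a long-running command from inside the container instead of
# killing the host-side docker exec client.
stop_in_container() {
  local container="$1" pattern="$2"
  docker exec "$container" pkill -f "$pattern" 2>/dev/null || true
}

# e.g. inside cleanup():
#   stop_in_container "$CONTAINER_NAME" "prefill --force"
type stop_in_container >/dev/null && echo "helper defined"
```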