VRx Posted September 13, 2021

24 minutes ago, Flemming said: "Delete all files older than 14 days in /mnt/user/surveillance_camera/cam1 and /mnt/user/surveillance_camera/cam2 (including subfolders)"

find /mnt/user/surveillance_camera/cam[12] -type f -mtime +14 -delete

28 minutes ago, Flemming said: "Then remove all empty folders within the same paths."

find /mnt/user/surveillance_camera/cam[12] -type d -empty -delete
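For anyone nervous about pointing -delete at real camera footage: a sketch that rehearses the same two commands in a throwaway directory, with a -print dry run first so you can see what would go. The temp tree and file names are made up for the demo; only the find expressions match the commands above.

```shell
# Throwaway demo tree standing in for /mnt/user/surveillance_camera
tmp=$(mktemp -d)
mkdir -p "$tmp/cam1/sub" "$tmp/cam2"
touch -d "20 days ago" "$tmp/cam1/sub/old.jpg" "$tmp/cam2/old.jpg"
touch "$tmp/cam1/new.jpg"

# Dry run: list what WOULD be deleted before committing
find "$tmp"/cam[12] -type f -mtime +14 -print

# The real thing: delete old files, then prune now-empty folders
find "$tmp"/cam[12] -type f -mtime +14 -delete
find "$tmp"/cam[12] -type d -empty -delete
```

Note that the second find will also remove a cam directory itself if it ends up completely empty, since -delete applies to the starting points too.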
Aerodb Posted September 26, 2021

Hey all, I have User Scripts set to run "Move Array Only Shares to Array" daily, and I also just ran it manually, but I still have share data on my (3rd) cache pool. The related share has cache set to No. The mover doesn't seem to resolve this either. I feel like I read somewhere that there was a bug with multiple cache pools and spaces in share names, but I'm curious if there is a solution anyone can offer. As a last resort, if you know how to move these files from the pool to the array without breaking anything, via another docker or a command, I could attempt that.
trurl Posted September 26, 2021

17 minutes ago, Aerodb said: "Array Only Shares"

Do you mean shares that are set to Use cache: No?

Nothing can move open files. Are you sure they aren't open?

Mover won't move duplicates. How are you handling duplicates?

How are you specifying the source path on cache? How are you specifying the destination path on array?
Aerodb Posted September 26, 2021

2 minutes ago, trurl said: "Do you mean shares that are set to Use cache: No? Nothing can move open files. Are you sure they aren't open? How are you specifying the source path on cache? How are you specifying the destination path on array?"

1- Yes, the share these files are associated with is set to cache: No, yet they are on the cache drive currently.

2- I don't believe they are open; there are quite a few of them and I have no idea why they would be open for this long. Some have sat on the cache pool for longer than the machine has been up (there have been restarts since they were written to the pool).

3- I'm not sure how to answer this. Squid posted some user script templates/examples that I used and have had success with thus far. I can provide the user script code if you think it would be helpful.
itimpi Posted September 26, 2021

14 hours ago, Aerodb said: "1- Yes, the share these files are associated with is set to cache: No, yet they are on the cache drive currently."

Mover ignores files for a share that has Use Cache=No. If you want them moved to the array then you need to change Use Cache to Yes (at least temporarily). Note that the Use Cache setting primarily determines where NEW files are placed - it does not stop old files from being left on the cache (conversely, the Only setting does not automatically move files from array to cache). Mover only gets involved for the Yes and Prefer settings.
Aerodb Posted September 26, 2021

46 minutes ago, itimpi said: "Mover ignores files for a share that has Use Cache=No. If you want them moved to the array then you need to change Use Cache to Yes (at least temporarily). Mover only gets involved for the Yes and Prefer settings."

So after changing the share associated with the files on the cache pool to Yes (use cache), I started the mover. The mover finished and did not move the files; they remained on the cache pool drives. I really think this is related to the issue I mentioned. I don't think I made it up - I must have read somewhere that there was an issue with multiple cache pools and running the mover on shares that have a space in the share name.
itimpi Posted September 26, 2021

Did you make sure that the correct pool was referenced when you changed the setting to Yes? There have been reports of files ending up on a pool not named in the share setting - those will not get moved.
Aerodb Posted September 27, 2021

5 hours ago, itimpi said: "Did you make sure that the correct pool was referenced when you changed the setting to Yes? There have been reports of files ending up on a pool not named in the share setting - those will not get moved."

Can confirm this did fix the issue. Thank you so much. Two follow-up questions:

1- Do we know how this issue starts? I can assure you this share has never been allowed to use cache.

2- Will this "move all non-cache shares to array" script ever be adapted to handle this? I'm asking if it's possible to do, rather than whether it's being worked on.
itimpi Posted September 27, 2021

1 hour ago, Aerodb said: "1- Do we know how this issue starts? 2- Will this 'move all non-cache shares to array' script ever be adapted to handle this?"

1) The issue starts when a file is 'moved' to another share at the Linux level (rather than copied and deleted), either via the command line or via a container. Linux (which does not understand User Shares) implements 'move' as a rename when source and target appear to be on the same mount point (/mnt/user in the case of user shares), falling back to copy+delete only if the rename fails. The rename succeeds, leaving the file on the original drive, so the copy+delete never gets triggered. Doing an explicit copy+delete gives the desired result, or (in the case of containers) map the paths so they appear to be different mount points.

2) As the script is not part of standard Unraid: it should be possible to get it to handle this correctly by making it read the list of pool names from /boot/config/pools on the flash drive. Having said that, there are valid use cases for keeping files for non-cached shares on a cache/pool (and I know of users who exploit this behaviour), so the script would break those use cases unless it is explicitly written to allow for them.
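A minimal sketch of the explicit copy+delete, using throwaway directories standing in for the cache and array paths (on a real server these would be something like /mnt/cache/someshare and /mnt/disk1/someshare - the names here are made up for the demo):

```shell
cache=$(mktemp -d)   # stands in for /mnt/cache/someshare
array=$(mktemp -d)   # stands in for /mnt/disk1/someshare
echo data > "$cache/file.bin"

# Explicit copy, then delete only if the copy succeeded. Unlike a bare
# 'mv' inside /mnt/user (which may degrade to a rename on the source
# device), this guarantees the data actually lands on the target.
cp --preserve=timestamps "$cache/file.bin" "$array/file.bin" && rm "$cache/file.bin"
```

The && matters: if the copy fails for any reason, the source file is left in place rather than lost.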
Aerodb Posted September 27, 2021

15 hours ago, itimpi said: "The issue starts when a file is 'moved' to another share at the Linux level (rather than copied and deleted), either via the command line or via a container …"

That makes total sense. I can also confirm that I have used a container to move files, so that's VERY likely what happened. Thank you!
Isanderthul Posted October 4, 2021

Hi, none of my scripts are running, so I have tried to manually run them the same way the plugin does, i.e.:

echo /usr/local/emhttp/plugins/user.scripts/startBackground.php /tmp/user.scripts/tmpScripts/arrayStartup/script | at NOW -M > /dev/null 2>&1

and in /var/log/syslog there is this error:

Oct 4 10:42:51 Gen8 atd[9567]: PAM unable to dlopen(/lib64/security/pam_unix.so): /lib64/libc.so.6: version `GLIBC_2.32' not found (required by /lib64/libpthread.so.0)
Oct 4 10:42:51 Gen8 atd[9567]: PAM adding faulty module: /lib64/security/pam_unix.so
Oct 4 10:42:51 Gen8 kernel: atd[9567]: segfault at 0 ip 00001516594bf886 sp 00007ffc66660bb0 error 4 in libpthread-2.33.so[1516594bf000+f000]
Oct 4 10:42:51 Gen8 kernel: Code: 00 48 c7 82 e8 02 00 00 e0 ff ff ff b8 11 01 00 00 66 48 0f 6e c7 66 0f 6c c0 0f 11 82 d8 02 00 00 0f 05 48 8b 05 3a 57 01 00 <48> 8b 00 64 48 89 04 25 98 06 00 00 0f b6 05 6f 5a 01 00 64 88 04
Isanderthul Posted October 4, 2021

4 minutes ago, Isanderthul said: "Hi, none of my scripts are running, so I have tried to manually run them the same way the plugin does …"

I think I might be causing this by installing aaa_glibc-solibs-2.33-x86_64-3.txz in my /boot/extras.
Isanderthul Posted October 5, 2021

20 hours ago, Isanderthul said: "I think I might be causing this by installing aaa_glibc-solibs-2.33-x86_64-3.txz in my /boot/extras."

Ah, now I realise I needed aaa_glibc-solibs-2.33-x86_64-3.txz for vim. But I need scripts more than I need vim... so 🤷♂️.
Noim Posted October 10, 2021

Hey guys, I have a problem. Currently I use User Scripts to control my fans. For this I have two scripts: one for the night and one for the day. Whenever I go to bed, I abort the day script and start the night one. After some time I noticed my script had stopped working and was throwing the same error again and again. The error itself has nothing to do with User Scripts; it only made me realize something: when I press abort, the script itself doesn't get killed. Here you can see the output of ps -ax | grep script after I aborted the script: The highlighted process is the process I aborted. It is still running, which obviously is an issue. The first time this happened I had 20 processes running without noticing, all doing the same thing, which overloaded my IPMI controller. I attached the script to this comment. fan-script.sh

Why does User Scripts not kill the script? That is what I expect to happen when I press abort. Or is it a problem with my script?
Squid Posted October 10, 2021 (Author)

If the user script spawns another script / process that won't exit, then the main script stays running. Stopping spawned processes automatically is a royal PITA.
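One hedged way to make an aborted script take its children with it is a signal trap inside the user script itself. A sketch, assuming the plugin's abort delivers SIGTERM (or that you kill the script with TERM/INT yourself) and that the long-running work runs as background jobs of the script's shell:

```shell
#!/bin/bash
# On TERM/INT, kill every background job this shell started, then exit,
# so aborting the script doesn't leave orphaned workers behind.
cleanup() {
    kill $(jobs -p) 2>/dev/null
    exit 0
}
trap cleanup TERM INT

sleep 300 &   # stands in for a long-running fan-control loop
wait          # the signal interrupts 'wait', so the trap runs
```

This only reaps direct children; a deeper process tree would need something like killing the whole process group instead.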
Noim Posted October 11, 2021

19 hours ago, Squid said: "If the user script spawns another script / process that won't exit, then the main script stays running. Stopping spawned processes automatically is a royal PITA."

But it doesn't spawn any new process. The content of fan-script.sh is the content of the user script.
Aerodb Posted October 15, 2021

Hey all, I was thinking that my UPS would run for much longer if some dockers were not running during a power outage, and I figured User Scripts would be my best bet at accomplishing this. Has anyone attempted this or found any resources for it?
JonathanM Posted October 15, 2021

1 hour ago, Aerodb said: "my UPS would run for much longer"

That approach is hard on equipment unless it's a $1K+ USD UPS unit. Your goal should be to get a safe shutdown completed as soon as possible, hopefully with 50 to 60% battery remaining.
TexasUnraid Posted October 16, 2021 (edited)

On 2/15/2021 at 7:17 AM, Squid said: "You would need to do this:

Create the file called /usr/local/emhttp/plugins/myscript/script.page

Menu="Tasks:80"
Name="My Script"
Type="xmenu"
Tabs="true"
Code="e942"

You can move where the tab is located by adjusting the number within "Menu" as per the tab numbering here: https://forums.unraid.net/topic/57109-plugin-custom-tab/

Create another file called /usr/local/emhttp/plugins/myscript/myscript.page

Menu="script"
---
<?
echo "<h1>Running script</h1><br><br>";
passthru("/bin/bash /boot/myscript.sh");
?>

Change the bash command appropriately within "passthru". Note: the page will keep loading while the script executes. If you're planning on doing something like starting a long-running script (rsync?) this way, then the script that you call has to start another script via the "at" command in bash. The screen does not update as the script executes; it will only be updated once the script is finished. If after starting the script you want the GUI to go back to wherever you were prior to clicking the custom tab, then add to the second file:

<script>history.back();</script>

ALL of these files need to be saved with standard Linux line endings (use Notepad++). Copy the files to the folder in RAM via the User Scripts plugin (or go file) at first boot."

While trying to track down a random kernel panic issue I am dealing with, I decided to upgrade to 6.10. After the upgrade, the above .page method does not seem to work anymore. The tab is still there, but when I click it I just get a 404 error. I checked, and the files are still copied to the same location on array start, and the script is unchanged from what was working yesterday. Any ideas why it is not working after the update? I use this all the time; it is VERY handy. It cuts as much as 150-200 W off the power draw when the server is not doing anything time-sensitive, and I can crank it back up when I am working with it directly.

Edited October 16, 2021 by TexasUnraid
Squid Posted October 16, 2021 (Author)

There is zero difference between 6.10 and 6.9 (or earlier versions, all the way down to 6.0) with regard to this. You'd have to post the exact .page file you're using, etc.
TexasUnraid Posted October 16, 2021

35 minutes ago, Squid said: "There is zero difference between 6.10 and 6.9 (or earlier versions, all the way down to 6.0) with regard to this. You'd have to post the exact .page file you're using, etc."

OK, here is the script.page file:

Menu="Tasks:94"
Name="CPU Power"
Type="xmenu"
Tabs="true"
Code="e942"

and the cpu-power-switch.page file:

Menu="script"
---
<?
echo "<h1>Changing CPU Governer</h1><br><br>";
passthru("/bin/bash /boot/config/plugins/user.scripts/scripts/Power\ Governer\ switcher/script");
?>
<script>history.back();</script>

I noticed that the 404 error just appends /script to the server address, i.e. unraid.local/script. Just confused, since this all worked yesterday.
zerenx Posted October 17, 2021

A quick report of a possible bug. I am using Unraid v6.9.2 with CA User Scripts v2021.03.10, running a script that references the home directory using '~'. If I run it in the foreground (i.e. click the "RUN SCRIPT" button) it runs OK, but it refuses to run in the background (i.e. click the "RUN IN BACKGROUND" button).

I investigated a little, and it seems that when running in the background, the plugin does not set the proper home directory. I checked it with this simple script:

#!/bin/bash
set | grep HOME

The result when run in the foreground is HOME=/root, while the result when run in the background is HOME=/.

For the moment, this can easily be fixed by adding this line inside the script:

export HOME=/root

I am not sure if this is intentional or a bug. Hope that helps.
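Building on that workaround, a slightly more defensive sketch only overrides HOME when it looks wrong, so the same script also behaves if the plugin is later fixed (the /root fallback assumes scripts run as root, as they do on Unraid):

```shell
#!/bin/bash
# Background runs may see HOME=/ instead of /root (see the report above),
# so normalize it before anything relies on '~' or $HOME-relative paths.
if [ -z "$HOME" ] || [ "$HOME" = "/" ]; then
    export HOME=/root
fi
echo "HOME is $HOME"
```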
TexasUnraid Posted October 22, 2021

On 10/16/2021 at 4:26 PM, Squid said: "There is zero difference between 6.10 and 6.9 (or earlier versions, all the way down to 6.0) with regard to this. You'd have to post the exact .page file you're using, etc."

I was doing some more troubleshooting, but since I have no idea how or why this works, I am out of my depth. I did find this error in the syslog when clicking the CPU Power button:

nginx: 2021/10/22 13:11:56 [error] 15209#15209: *138612 open() "/usr/local/emhttp/script" failed (2: No such file or directory) while sending to client, client: 192.168.x.xxx, server: xxx, request: "GET /script HTTP/2.0", host: "xxxxxxx.xxxxx", referrer: "https://xxxxxxxxxxxxxxxxxxxxxxxxxxxxxxx"

I added the XXXs; the IPs and addresses were correct. It seems to be looking for the script at /usr/local/emhttp/script instead of /usr/local/emhttp/plugins/myscript/script.page? No idea where to go from here.
Noim Posted October 27, 2021

@Squid My issue still persists. Here is a test script:

After aborting it, the script process is still running. What am I doing wrong?! This is how it gets spawned if I press "run in background":

The process 10197 survives if I press abort. The plugin is up to date and running on Unraid 6.9.2.
ConnerVT Posted October 31, 2021 (edited)

This is probably a simple question, but after an hour of searching I have yet to find a definitive answer. I have a working script that weekly rsyncs several folders from my array to an external USB drive mounted with Unassigned Devices. I have two drives (External_USB_1 and External_USB_2) which I swap at the beginning of each month, keeping the unconnected drive off-site. My current solution has me manually editing the script each time I swap drives, to specify the correct destination drive. I would much rather set and forget the script, and have it check that the drive is present before running rsync. What would be the best way to check whether these drives are present and mounted?

UPDATE: I have a solution that seems to work, but if someone has a better one, I'm open to your ideas. I ended up using mountpoint:

#!/bin/bash
#arrayStarted=true

if mountpoint -q "/mnt/disks/External_3TB_1"
then
    rsync -avh "/mnt/disks/Samsung_USB/UNRAID_Backups" "/mnt/user/Media/Music" "/mnt/user/Photos/ORIGINALS" "/mnt/user/Backup/appdata backup" "/mnt/disks/External_3TB_1"
else
    echo "External_3TB_1 not mounted"
fi

if mountpoint -q "/mnt/disks/External_3TB_2"
then
    rsync -avh "/mnt/disks/Samsung_USB/UNRAID_Backups" "/mnt/user/Media/Music" "/mnt/user/Photos/ORIGINALS" "/mnt/user/Backup/appdata backup" "/mnt/disks/External_3TB_2"
else
    echo "External_3TB_2 not mounted"
fi

Edited October 31, 2021 by ConnerVT
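A small refactor of the same idea (a sketch, keeping the paths from the post above) loops over the candidate drives instead of duplicating the rsync call, so adding a third drive becomes a one-word change:

```shell
#!/bin/bash
#arrayStarted=true
# Sources and destinations taken from the script above; adjust to taste.
SOURCES=(
    "/mnt/disks/Samsung_USB/UNRAID_Backups"
    "/mnt/user/Media/Music"
    "/mnt/user/Photos/ORIGINALS"
    "/mnt/user/Backup/appdata backup"
)
for dest in /mnt/disks/External_3TB_1 /mnt/disks/External_3TB_2; do
    if mountpoint -q "$dest"; then
        # Drive is present and mounted: back up to it
        rsync -avh "${SOURCES[@]}" "$dest"
    else
        echo "$(basename "$dest") not mounted"
    fi
done
```

Whichever drive happens to be connected gets the backup, and the absent one just logs a note, exactly like the original two-block version.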