Additional Scripts For User.Scripts Plugin


Recommended Posts

14 minutes ago, jonathanm said:

Adding parity2 only writes the generated data to the parity2 drive, it doesn't touch or verify the information on parity1.

Parity1 is still checked, and corrected if there are errors.

3 hours ago, flyize said:

I know we're getting way off topic here, but doesn't it seem like it should? If parity1 is wrong, then won't parity2 be calculated incorrectly as well?

Parity2 will be calculated using all the data drives and as such does not need parity1 (it is perfectly valid to have parity2 without parity1).

 

While calculating parity2, if parity1 is also present, the opportunity is taken to check it and correct any errors found.

 

It is also possible to have a scenario with parity1 present and one data disk missing. In such a case, parity1 is read to emulate the missing data disk while building parity2.
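As an aside, this is why parity1 can stand alone: it is a plain XOR across the data disks, so any single missing disk can be rebuilt from the parity plus the surviving disks. A toy sketch, with single bytes standing in for whole drives:

```shell
#!/bin/bash
# Toy illustration: parity1 is the XOR of all data "drives" (here, one byte each).
d1=$((0xA5)); d2=$((0x3C)); d3=$((0x0F))
parity=$(( d1 ^ d2 ^ d3 ))

# "Lose" d2, then rebuild it from parity plus the surviving disks:
rebuilt=$(( parity ^ d1 ^ d3 ))
printf 'parity=0x%02X  rebuilt_d2=0x%02X\n' "$parity" "$rebuilt"
```

parity2 uses a different calculation, which is why it can be built from the data disks alone, without parity1.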


Does anyone know if it's possible to remotely execute a user script? I'm looking to have Home Assistant remotely (using SSH) call a user script to stop or start Docker containers when different things happen on the system (for example, when someone begins to watch Plex, shut down my Tdarr node; or when I have finished gaming on my Unraid gaming VM, shut down my 3-core Plex and start up my 5-core Plex, etc.).

On 4/20/2021 at 2:35 PM, jonathanm said:

Thanks for the correction.

Turns out I was wrong. I'm pretty sure I've seen it work like that before, but at least as of 6.9.2 it doesn't: during a parity2 sync it didn't detect (or correct) sync errors on parity1, despite parity1 being read during the sync.

3 hours ago, JorgeB said:

Turns out I was wrong. I'm pretty sure I've seen it work like that before, but at least as of 6.9.2 it doesn't: during a parity2 sync it didn't detect (or correct) sync errors on parity1, despite parity1 being read during the sync.

Are you going to file a feature request / bug report?

11 hours ago, jonathanm said:

Are you going to file a feature request / bug report?

I found this by accident yesterday while testing another thing. When I have some time I'm going to test with earlier releases to see if/when it changed, then decide.

22 hours ago, JorgeB said:

pretty sure I've seen it work like that before, but at least as of 6.9.2 it doesn't

So I went back to v6.3.5 and the behavior is the same, so it looks like it was always like this. Maybe I got confused with parity2 being checked (and corrected) during a disk rebuild, which still works. It still seems strange that parity1 is read but not checked during a parity2 sync, but since it was always like this I'm not going to create a bug report; it most likely wouldn't get an answer anyway.

On 4/28/2021 at 4:45 AM, p.wrangles said:

Does anyone know if it's possible to remotely execute a user script? I'm looking to have Home Assistant remotely (using SSH) call a user script to stop or start Docker containers when different things happen on the system (for example, when someone begins to watch Plex, shut down my Tdarr node; or when I have finished gaming on my Unraid gaming VM, shut down my 3-core Plex and start up my 5-core Plex, etc.).

Just invoke it like this:

bash /boot/config/plugins/user.scripts/scripts/rcloneSync/script

 

where rcloneSync is the name given to the user script.

However, this directory name might differ from the script's current display name, because renaming a user script doesn't change the directory it lives in. In that case, mouse over the gear icon in the User Scripts settings until the settings menu opens.

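For the earlier Home Assistant question, one approach is a shell_command that runs the script over SSH. A minimal config sketch, assuming key-based SSH auth to the server is already set up; the hostname, key path, and script name here are placeholders:

```yaml
# configuration.yaml (Home Assistant) -- hostname, key path and script name are examples
shell_command:
  stop_tdarr_node: >-
    ssh -i /config/.ssh/id_rsa -o StrictHostKeyChecking=no root@tower
    'bash /boot/config/plugins/user.scripts/scripts/stopTdarrNode/script'
```

The shell_command service can then be called from any automation trigger (e.g. a Plex playback sensor).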

 


Is it possible to have a script to check when a Plex session with encoding starts, so that one can stop a mining Docker?
In my case it's the trex-miner, which I would like to halt when Plex gets an encoding job.
'nvidia-smi' gives the active process using the GPU, so the script would have to parse the output for Plex.
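Not a finished solution, but the parsing half can be sketched like this. The container name trex-miner and the 'Plex Transcoder' process name are assumptions to adjust for your setup:

```shell
#!/bin/bash
# Decide whether the miner should run, given nvidia-smi's compute-process list.
decide() {
    if grep -qi 'Plex Transcoder' <<<"$1"; then
        echo stop    # Plex is transcoding on the GPU
    else
        echo start   # GPU is free for mining
    fi
}

# Hypothetical use on a schedule (the real calls are commented out here):
#   apps=$(nvidia-smi --query-compute-apps=process_name --format=csv,noheader)
#   docker "$(decide "$apps")" trex-miner
```

Run on a short cron schedule via User Scripts, this would pause the miner while a transcode is active and resume it afterwards.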


I wrote this for my pi-hole docker.  There was probably an easier way to change the default RAM setting, but this is what I came up with. I was attempting to parse logs older than 24 hours at a time, and the default 128M RAM limit causes the error: "You may need to increase the memory available". Pi-hole Discourse suggested waiting for version 6 LOL.

 

 


#!/bin/bash
# The purpose of this script is to change the RAM allocation
# in the php.ini file for the pihole docker container.

 

# Script will break if:
# Change to default php.ini path: /etc/php/7.3/cgi/php.ini
# Change to default variable name of: memory_limit 
# Change to default memory_limit value of: 128M

 

echo ""

 

# Displays the pihole docker container details with column names
# Uses grep against the container having pihole in the container name field
docker container list | grep CONTAINER && docker container list | grep pihole

 

# Sets var1 as the docker container id
# Cut selects the first field using space as the delimiter 
var1=$(docker container list | grep pihole | cut -d" " -f1)

 

# Displays value of var1 to the user to confirm it matches the container id from previous command
echo "$var1 is being used as container ID"

 

# Executes stream editor inside the docker container, replacing the default memory_limit of 128M
# with 2048M inside the /etc/php/7.3/cgi/php.ini configuration file.
docker exec $var1 sed -i 's/memory_limit = 128M/memory_limit = 2048M/g' /etc/php/7.3/cgi/php.ini && echo ""

 

#Restarts the docker container to enable previous changes
docker restart $var1 1>/dev/null

 

# Displays a progress spinner for ~1 minute to allow the docker container to restart
for ((i=1;i<60;i++)); do
  echo -ne "Restarting.\r";   sleep 0.25
  echo -ne "Restarting..\r";  sleep 0.25
  echo -ne "Restarting...\r"; sleep 0.25
  echo -ne "\r"; echo -ne "Restarting\r"; sleep 0.25
done
echo "Restarting...Complete!"

 

# Command that allows user to confirm the value of memory_limit after the docker container was restarted
echo "memory_limit value is now:" $(docker exec $var1 grep memory_limit /etc/php/7.3/cgi/php.ini | cut -d" " -f3)

 

# Unsets the variable var1
unset var1


Pi-hole discourse link: Pi-hole Discourse
 

  • 3 weeks later...
On 2/25/2017 at 3:41 PM, SpaceInvaderOne said:

Nested VMs have been disabled by default in 6.3, due to the issue with Avast and Windows VMs when enabled.

I have made two scripts to turn it on and off. In each script, set the CPU type: 1 for Intel, 2 for AMD.

Download link for both scripts https://www.dropbox.com/s/0b1tvotvl6y80uy/nested vms scripts.zip?dl=0

 

Nested VM on script:


#!/bin/bash

#set whether your cpu is Intel or AMD  [1-Intel] [2-Amd] 
cputype="1"

#Do not edit below this line
#turn nested on


if [[ "$cputype" =~ ^(1|2)$ ]]; then
    if [ "$cputype" -eq 1 ]; then
        modprobe -r kvm_intel
        modprobe kvm_intel nested=1
        echo "Nested vms are enabled for intel cpus"
    elif [ "$cputype" -eq 2 ]; then
        modprobe -r kvm_amd
        modprobe kvm_amd nested=1
        echo "Nested vms are enabled for AMD cpus"
    fi
else
    echo "invalid cpu type set please check config"
fi


sleep 4
exit

And the nested VM off script:

 


#!/bin/bash
#set whether your cpu is Intel or AMD  [1-Intel] [2-Amd] 
cputype="1"
#Do not edit below this line
#turn nested off

if [[ "$cputype" =~ ^(1|2)$ ]]; then
    if [ "$cputype" -eq 1 ]; then
        modprobe -r kvm_intel
        modprobe kvm_intel nested=0
        echo "Nested vms are disabled for intel cpus"
    elif [ "$cputype" -eq 2 ]; then
        modprobe -r kvm_amd
        modprobe kvm_amd nested=0
        echo "Nested vms are disabled for AMD cpus"
    fi
else
    echo "invalid cpu type set please check config"
fi

sleep 4
exit

 

 

Slight typo when using this script with AMD CPUs: the elif statement checks the wrong variable name.

 

I changed the following above:

$pushnotifications -> $cputype

  • 4 weeks later...
On 12/27/2020 at 11:09 AM, JorgeB said:

That script is very slow with recent Unraid releases, and the author has not been in the forums for a long time, but you can still do the procedure manually.

Hi,

 

I have the same problem: the write speed is only 6 MB/s, and at this rate it will take 34 days to finish clearing the drive...

 

How do I stop the process? I can't find any `dd` running.

 

Thanks!

 

EDIT : looks like there is no way to interrupt the script so I'll just reboot the server.

EDIT2: rebooting makes Unraid wait for the script to finish...

EDIT3: it finally timed out

  • 4 weeks later...

TL;DR - this needless enhancement may already have been posted - but....

 

This gives a bit of feedback on the .DS_Store file deletion:

 

#!/bin/bash
echo "Searching for (and deleting) .DS_Store files (this may take a while)"

cnt=$(find /mnt/user -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \; -print | wc -l)

echo "$cnt files deleted"

 

  • 1 month later...

I tried to use "Clear an unRAID array data drive" but I could not figure out how to put files on the flash drive.

I clicked "Add a new script" so I was able to give it a name, and I think it automatically created a folder on the flash drive in the config/plugins/user.scripts/scripts folder.

 

I downloaded the "description" and "script" files to my Windows 10 machine.  But I don't know how to put them on the Unraid flash drive in the config/plugins/user.scripts/scripts folder.  When I use Krusader and go to FLASH, it says, "Error: Cannot open the folder /FLASH".  And even if I could open it, how do I move the files from Win10 to the Unraid flash drive?

 

  • 4 weeks later...
On 9/4/2016 at 9:36 AM, RobJ said:

Clear an unRAID array data drive  (for the Shrink array wiki page)

 

This script is for use in clearing a drive that you want to remove from the array, while maintaining parity protection.  I've added a set of instructions within the Shrink array wiki page for it.  It is designed to be as safe as possible, and will not run unless specific conditions are met -

- The drive must be a data drive that is a part of an unRAID array

- It must be a good drive, mounted in the array, capable of every sector being zeroed (no bad sectors)

- The drive must be completely empty, no data at all left on it.  This is tested for!

- The drive should have a single root folder named clear-me - exactly 8 characters, 7 lowercase and 1 hyphen.  This is tested for!

 

Because the User.Scripts plugin does not allow interactivity (yet!), some kludges had to be used, one being the clear-me folder, and the other being a 60 second wait before execution to allow the user to abort.  I actually like the clear-me kludge, because it means the user cannot possibly make a mistake and lose data.  The user *has* to empty the drive first, then add this odd folder.

 

 

#!/bin/bash
# A script to clear an unRAID array drive.  It first checks the drive is completely empty,
# except for a marker indicating that the user desires to clear the drive.  The marker is
# that the drive is completely empty except for a single folder named 'clear-me'.
#
# Array must be started, and drive mounted.  There's no other way to verify it's empty.
# Without knowing which file system it's formatted with, I can't mount it.
#
# Quick way to prep drive: format with ReiserFS, then add 'clear-me' folder.
#
# 1.0  first draft
# 1.1  add logging, improve comments
# 1.2  adapt for User.Scripts, extend wait to 60 seconds
# 1.3  add progress display; confirm by key (no wait) if standalone; fix logger
# 1.4  only add progress display if unRAID version >= 6.2

version="1.4"
marker="clear-me"
found=0
wait=60
p=${0%%$P}              # dirname of program
p=${p:0:18}
q="/tmp/user.scripts/"

echo -e "*** Clear an unRAID array data drive ***  v$version\n"

# Check if array is started
ls /mnt/disk[1-9]* 1>/dev/null 2>/dev/null
if [ $? -ne 0 ]
then
   echo "ERROR:  Array must be started before using this script"
   exit
fi

# Look for array drive to clear
n=0
echo -n "Checking all array data drives (may need to spin them up) ... "
if [ "$p" == "$q" ] # running in User.Scripts
then
   echo -e "\n"
   c="<font color=blue>"
   c0="</font>"
else #set color teal
   c="\x1b[36;01m"
   c0="\x1b[39;49;00m"
fi

for d in /mnt/disk[1-9]*
do
   x=`ls -A $d`
   z=`du -s $d`
   y=${z:0:1}
#   echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

   # the test for marker and emptiness
   if [ "$x" == "$marker" -a "$y" == "0" ]
   then
      found=1
      break
   fi
   let n=n+1
done

#echo -e "found:"$found "d:"$d "marker:"$marker "z:"$z "n:"$n

# No drives found to clear
if [ $found == "0" ]
then
   echo -e "\rChecked $n drives, did not find an empty drive ready and marked for clearing!\n"
   echo "To use this script, the drive must be completely empty first, no files"
   echo "or folders left on it.  Then a single folder should be created on it"
   echo "with the name 'clear-me', exactly 8 characters, 7 lowercase and 1 hyphen."
   echo "This script is only for clearing unRAID data drives, in preparation for"
   echo "removing them from the array.  It does not add a Preclear signature."
   exit
fi

# check unRAID version
v1=`cat /etc/unraid-version`
# v1 is 'version="6.2.0-rc5"' (fixme if 6.10.* happens)
v2="${v1:9:1}${v1:11:1}"
if [[ $v2 -ge 62 ]]
then
   v=" status=progress"
else
   v=""
fi
#echo -e "v1=$v1  v2=$v2  v=$v\n"

# First, warn about the clearing, and give them a chance to abort
echo -e "\rFound a marked and empty drive to clear: $c Disk ${d:9} $c0 ( $d ) "
echo -e "* Disk ${d:9} will be unmounted first."
echo "* Then zeroes will be written to the entire drive."
echo "* Parity will be preserved throughout."
echo "* Clearing while updating Parity takes a VERY long time!"
echo "* The progress of the clearing will not be visible until it's done!"
echo "* When complete, Disk ${d:9} will be ready for removal from array."
echo -e "* Commands to be executed:\n***** $c umount $d $c0\n***** $c dd bs=1M if=/dev/zero of=/dev/md${d:9} $v $c0\n"
if [ "$p" == "$q" ] # running in User.Scripts
then
   echo -e "You have $wait seconds to cancel this script (click the red X, top right)\n"
   sleep $wait
else
   echo -n "Press ! to proceed. Any other key aborts, with no changes made. "
   ch=""
   read -n 1 ch
   echo -e -n "\r                                                                  \r"
   if [ "$ch" != "!" ];
   then
      exit
   fi
fi

# Perform the clearing
logger -tclear_array_drive "Clear an unRAID array data drive  v$version"
echo -e "\rUnmounting Disk ${d:9} ..."
logger -tclear_array_drive "Unmounting Disk ${d:9}  (command: umount $d ) ..."
umount $d
echo -e "Clearing   Disk ${d:9} ..."
logger -tclear_array_drive "Clearing Disk ${d:9}  (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} $v ) ..."
dd bs=1M if=/dev/zero of=/dev/md${d:9} $v
#logger -tclear_array_drive "Clearing Disk ${d:9}  (command: dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000 ) ..."
#dd bs=1M if=/dev/zero of=/dev/md${d:9} status=progress count=1000 seek=1000

# Done
logger -tclear_array_drive "Clearing Disk ${d:9} is complete"
echo -e "\nA message saying \"error writing ... no space left\" is expected, NOT an error.\n"
echo -e "Unless errors appeared, the drive is now cleared!"
echo -e "Because the drive is now unmountable, the array should be stopped,"
echo -e "and the drive removed (or reformatted)."
exit
 

 

 

The attached zip is 'clear an array drive.zip', containing both the User.Scripts folder and files, but also the script named clear_array_drive (same script) for standalone use.  Either extract the files for User.Scripts, or extract clear_array_drive into the root of the flash, and run it from there.

 

Also attached is 'clear an array drive (test only).zip', for playing with this, testing it.  It contains exactly the same scripts, but writing is turned off, so no changes at all will happen.  It is designed for those afraid of clearing the wrong thing, or not trusting these scripts yet.  You can try it in various conditions, and see what happens, and it will pretend to do the work, but no changes at all will be made.

 

I do welcome examination by bash shell script experts, to ensure I made no mistakes.  It's passed my own testing, but I'm not an expert.  Rather, a very frustrated bash user, who lost many hours with the picky syntax!  I really don't understand why people like type-less languages!  It only *looks* easier.

 

After a while, you'll be frustrated with the 60 second wait (when run in User Scripts).  I did have it at 30 seconds, but decided 60 was better for new users, for now.  I'll add interactivity later, for standalone command line use.  It also really needs a way to provide progress info while it's clearing.  I have ideas for that.

 

The included 'clear_array_drive' script can now be run at the command line within any unRAID v6, and possibly unRAID v5, but is not tested there.  (Procedures for removing a drive are different in v5.)  Progress display is only available in 6.2 or later.  In 6.1 or earlier, it's done when it's done.

 

Update 1.3 - add display of progress; confirm by key '!' (no wait) if standalone; fix logger; add a bit of color

  Really appreciate the tip on 'status=progress', looks pretty good.  Lots of numbers presented, the ones of interest are the second and the last.

Update 1.4 - make progress display conditional for 6.2 or later; hopefully now, the script can be run in any v6, possibly v5

clear_an_array_drive.zip 4.37 kB · 1420 downloads

clear_an_array_drive_test_only.zip 4.61 kB · 128 downloads

I was trying to shrink my array. I followed the instructions here. I started the "clear_an_array" script and have kept the window open for more than 24 hours now. There is no progress showing anymore, and all disks in the array have spun down already. Am I safe to close the script window and proceed to the next step ("Go to Tools then New Config")?


  • 3 weeks later...

Does anyone have an rsync script example they would like to share?

I am struggling to create a script to initiate an rsync to my Unraid server from an external media server.

So far I have managed to use dockers like Resilio Sync, but they are not cutting it anymore.

 

Or would I need to initiate this from the remote server, with my Unraid server as the target?

 

So far I have been reading up on what I could find, but the post is very old...

 

  • 4 weeks later...

How can I execute a UserScript from inside a docker? In Swag, I want to create a php web page that has a link to execute a UserScript. The script tells other dockers to do stuff.

 

I’m thinking that I will need to ssh into the unRAID console. I would love to see a php snippet on how to ssh and run the command.

 

Thank you.

  • 2 weeks later...
On 10/29/2016 at 11:56 AM, Squid said:
$vars = parse_ini_file("var/local/emhttp/var.ini");

I know this is super old, but I only just tried this script out today. I think you are missing a "/" at the start of the file path in the parity custom scripts, i.e. it should read parse_ini_file("/var/local/emhttp/var.ini"). Without it, PHP looks for the file relative to the directory the script runs from:

Warning: parse_ini_file(var/local/emhttp/var.ini): failed to open stream: No such file or directory in /tmp/user.scripts/tmpScripts/paritycheck/script on line 24

 

  • 2 weeks later...

Hi guys.

 

I would like to create a new script that invokes the daily cache mover... but ONLY to move SPECIFIC folders from the cache drive to the array. Specifically, I want to move the files in /mnt/cache/Media/Movies.  Is there a way to do this? I found what I believe is the mover script in /usr/local/sbin/mover - code pasted below.

 

I was hoping to find something specific to /mnt/cache... but no luck. I confess I don't fully understand the script below. I am afraid of screwing things up in my array if I edit and run the wrong thing.

 

I suspect the section that I would need to change is:

  # Check for objects to move from pools to array
  for POOL in /boot/config/pools/*.cfg ; do
    for SHAREPATH in /mnt/$(basename "$POOL" .cfg)/*/ ; do
      SHARE=$(basename "$SHAREPATH")
      if grep -qs 'shareUseCache="yes"' "/boot/config/shares/${SHARE}.cfg" ; then
        find "${SHAREPATH%/}" -depth | /usr/local/sbin/move -d $LOGLEVEL
      fi
    done
  done
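For what it's worth, the inner loop above just feeds one share's file list to the move helper, so a single folder could in principle be fed the same way (an untested assumption; the path here is a stand-in). The selection half can be demonstrated on a scratch tree:

```shell
#!/bin/bash
# Demonstrate the selection half of mover's inner loop on a scratch tree.
# 'find -depth' lists children before their parent directories, which is
# the order the move helper expects. Path is a stand-in for /mnt/cache/Media/Movies.
SHAREPATH="/tmp/mover-demo/Media/Movies"
mkdir -p "$SHAREPATH/SomeFilm"
touch "$SHAREPATH/SomeFilm/film.mkv"

# In the real mover, this list is piped to: /usr/local/sbin/move -d $LOGLEVEL
find "${SHAREPATH%/}" -depth
```

The depth-first ordering matters: files are listed (and would be moved) before the directories that contain them.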

 

The line above that reads:

   "for SHAREPATH in /mnt/$(basename "$POOL" .cfg)/*/ ; do"

I would change to:

   "for SHAREPATH in /mnt/Media/Movies/$(basename "$POOL" .cfg)/*/ ; do"

 

Can someone please advise if I am correct... if not, what do I need to change?

 

Here is the complete mover script from /usr/local/sbin/mover:

PIDFILE="/var/run/mover.pid"
CFGFILE="/boot/config/share.cfg"
LOGLEVEL=0

start() {
  if [ -f $PIDFILE ]; then
    if ps h $(cat $PIDFILE) | grep mover ; then
        echo "mover: already running"
        exit 1
    fi
  fi

  if [ -f $CFGFILE ]; then
    # Only start if shfs includes pools
    if ! grep -qs 'shareCacheEnabled="yes"' $CFGFILE ; then
      echo "mover: cache not enabled"
      exit 2
    fi
    if grep -qs 'shareMoverLogging="yes"' $CFGFILE ; then
      LOGLEVEL=1
    fi
  fi
  if ! mountpoint -q /mnt/user0 ; then
    echo "mover: array devices not mounted"
    exit 3
  fi

  echo $$ >/var/run/mover.pid
  [[ $LOGLEVEL -gt 0 ]] && echo "mover: started"

  shopt -s nullglob

  # Check for objects to move from pools to array
  for POOL in /boot/config/pools/*.cfg ; do
    for SHAREPATH in /mnt/$(basename "$POOL" .cfg)/*/ ; do
      SHARE=$(basename "$SHAREPATH")
      if grep -qs 'shareUseCache="yes"' "/boot/config/shares/${SHARE}.cfg" ; then
        find "${SHAREPATH%/}" -depth | /usr/local/sbin/move -d $LOGLEVEL
      fi
    done
  done

  # Check for objects to move from array to pools
  for SHAREPATH in $(ls -dv /mnt/disk[0-9]*/*/) ; do
    SHARE=$(basename "$SHAREPATH")
    if grep -qs 'shareUseCache="prefer"' "/boot/config/shares/${SHARE}.cfg" ; then
      eval "$(grep -s shareCachePool /boot/config/shares/${SHARE}.cfg | tr -d '\r')"
      if [[ -z "$shareCachePool" ]]; then
        shareCachePool="cache"
      fi
      if [[ -d "/mnt/$shareCachePool" ]]; then
        find "${SHAREPATH%/}" -depth | /usr/local/sbin/move -d $LOGLEVEL
      fi
    fi
  done

  rm -f $PIDFILE
  [[ $LOGLEVEL -gt 0 ]] && echo "mover: finished"
}

killtree() {
  local pid=$1 child
    
  for child in $(pgrep -P $pid); do
    killtree $child
  done
  [ $pid -ne $$ ] && kill -TERM $pid
}

# Caution: stopping mover like this can lead to partial files on the destination
# and possible incomplete hard link transfer.  Not recommended to do this.
stop() {
  if [ ! -f $PIDFILE ]; then
    echo "mover: not running"
    exit 0
  fi
  killtree $(cat $PIDFILE)
  sleep 2
  rm -f $PIDFILE
  echo "mover: stopped"
}

case $1 in
start)
  start
  ;;
stop)
  stop
  ;;
status)
  [ -f $PIDFILE ]
  ;;
*)
  # Default is "start"
  # echo "Usage: $0 (start|stop|status)"
  start
  ;;
esac

 

Any advice is greatly appreciated.

 

Thank you!

 

H.
