Additional Scripts For User.Scripts Plugin



7 hours ago, rbronco21 said:

I'm using "Clear an unRAID array data drive" to clear disks for encryption. The script stopped at 40GB on my 8TB disk and I am not sure what state I am in now. Can I try it again? Do I need to rebuild it before anything else?

I don't understand this whole post. Clearing a drive in the array is to prepare it for removal, not for reformatting. And the fact that you mentioned rebuild makes me really worried.

 

If you have already moved all the data off the disk why would you need to clear it? In fact, why would you clear a disk you weren't going to remove? The whole point of clearing a disk that is already in the array is to maintain parity so it will still be valid after the disk is removed from the array.

 

Encryption is just a reformat. You don't even have to have an empty disk, but you do need to copy any data you want to keep elsewhere.

 

If you have not already moved or copied all the data off the disk then that data is lost.


I moved everything off the drive and am clearing it so I can reformat and encrypt it while maintaining parity. I read a bunch of methods to shrink arrays and encrypt disks, and this is the hybrid method I came up with. I wouldn't be surprised if I am doing extra steps, but this seemed to be the safest way I could put together. Thanks for the interest, and sorry for not including more details. Is there a better way to do this?

On 8/3/2016 at 2:28 AM, Squid said:

A slightly enhanced version of the run-mover-at-a-certain-threshold script. This script will additionally (optionally) skip running mover if a parity check / rebuild has already been started.

 

It only makes sense to run this script on a schedule, and to disable the built-in schedule by editing the config/share.cfg file on the flash drive. Look for a line that says something like:

 


shareMoverSchedule="0 4 * * *"
 

 

and change it to:


shareMoverSchedule="#0 4 * * *"
 

 

 

Followed by a reboot. Note that any changes to global share settings (or mover settings) are probably going to wind up re-enabling the mover schedule.
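For reference, commenting out that line can also be scripted with sed rather than editing by hand. A minimal sketch — it runs against a demo file here; on a real server you would point cfg at /boot/config/share.cfg on the flash drive (and back it up first):

```shell
#!/bin/bash
# Sketch: comment out the mover schedule line in share.cfg.
# Demo file used here; on Unraid set cfg=/boot/config/share.cfg instead.
cfg=/tmp/share.cfg
echo 'shareMoverSchedule="0 4 * * *"' > "$cfg"   # demo content

# Insert a leading # inside the quotes; the [^#] guard makes this safe to re-run
sed -i 's/^shareMoverSchedule="\([^#]\)/shareMoverSchedule="#\1/' "$cfg"
cat "$cfg"
```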

 

 

 


#!/usr/bin/php
<?PHP
$moveAt = 0;                 # Adjust this value to suit (% cache drive full to move at)
$runDuringCheck = false;     # change to true to run mover during a parity check / rebuild

$diskTotal = disk_total_space("/mnt/cache");
$diskFree = disk_free_space("/mnt/cache");
$percent = ($diskTotal - $diskFree) / $diskTotal * 100;

if ( $percent > $moveAt ) {
  if ( ! $runDuringCheck ) {
    $vars = parse_ini_file("/var/local/emhttp/var.ini");
    if ( $vars['mdResync'] ) {
      echo "Parity Check / Rebuild Running - Not executing mover\n";
      exec("logger Parity Check / Rebuild Running - Not executing mover");
    } else {
      exec("/usr/local/sbin/mover");
    }
  } else {
    exec("/usr/local/sbin/mover");
  }
}
?>
 

 

 

run_mover_at_threshold_enhanced.zip 1.05 kB · 71 downloads

In this script it checks whether a parity check is running, and I'd like to do a similar check in a different script for whether mover is running. Can anyone help, please? i.e., what's the mover equivalent of:

 

$vars = parse_ini_file("/var/local/emhttp/var.ini");
if ( $vars['mdResync'] ) {

Thanks in advance.

 

Update:  Found the answer: https://gist.github.com/fabioyamate/4087999

 

if [ -f /var/run/mover.pid ]; then
  if ps h `cat /var/run/mover.pid` | grep mover ; then
      echo "mover already running"
      exit 0
  fi
fi
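Both checks can be combined into one shell function. A sketch, with the file paths as parameters so it can be tested outside Unraid — the real locations referenced above are /var/local/emhttp/var.ini and /var/run/mover.pid, and the grep pattern assumes the quoted key=value format seen in var.ini:

```shell
#!/bin/bash
# Return 0 (busy) if a parity check/rebuild or mover is running.
# Paths are arguments so the function can be exercised with fabricated files.
array_busy() {
  local var_ini="$1" mover_pid="$2"
  # mdResync is non-zero in var.ini while a parity check/rebuild runs
  if grep -q '^mdResync="[1-9]' "$var_ini" 2>/dev/null; then
    echo "parity check / rebuild running"
    return 0
  fi
  # mover writes its pid to a pidfile while it runs
  if [ -f "$mover_pid" ] && ps h "$(cat "$mover_pid")" | grep -q mover; then
    echo "mover already running"
    return 0
  fi
  return 1
}

# Demo with a fabricated var.ini (idle state)
echo 'mdResync="0"' > /tmp/var.ini
array_busy /tmp/var.ini /tmp/nonexistent.pid || echo "idle"
```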

 

On 7/23/2016 at 1:00 PM, Squid said:

Run mover at a certain threshold of cache drive utilization.

 

Adjust the threshold value within the script. It really only makes sense to use this script as a scheduled operation, set to run more frequently (hourly?) than mover itself normally runs.

 

 


#!/usr/bin/php
<?PHP

$moveAt = 70;    # Adjust this value to suit.

$diskTotal = disk_total_space("/mnt/cache");
$diskFree = disk_free_space("/mnt/cache");
$percent = ($diskTotal - $diskFree) / $diskTotal * 100;

if ( $percent > $moveAt ) {
  exec("/usr/local/sbin/mover");
}
?>
 

 

 

run_mover_at_threshold.zip 717 B · 94 downloads
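For anyone who prefers plain shell over PHP, the same threshold logic can be sketched with df. This is a demo that defaults to / so it runs anywhere and only echoes the mover command; on Unraid you would pass /mnt/cache and replace the echo with the real call (the mover path is the one used in the PHP version above):

```shell
#!/bin/bash
# Sketch: run mover when cache utilization exceeds a threshold.
move_at=70
target=${1:-/}    # demo on /; use /mnt/cache on Unraid

# df -P prints "Use%" in column 5; strip the trailing %
percent=$(df -P "$target" | awk 'NR==2 {gsub("%","",$5); print $5}')
echo "cache is ${percent}% full"

if [ "$percent" -gt "$move_at" ]; then
  echo "would run: /usr/local/sbin/mover"   # replace echo with the real call on Unraid
fi
```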

I just added this script, set to 80% and run hourly. The question I have is: what do you set the Mover Settings in http://Server/Settings/Scheduler to? It looks like it can't be disabled, so I assume you max it out to monthly, maybe?

Just now, rcmpayne said:

I just added this script, set to 80% and run hourly. […]

I would use the mover tuner plugin instead of the script.

9 hours ago, Squid said:

You set the mover schedule to be as often as you want the plugin to run and apply its rules. If you want mover to move all the files at some point, then set the custom cron schedule at the bottom of the plugin's settings (or run mover manually).

OK, I get it now, thanks. If I set the default settings to run every hour or every day, when it triggers it won't just start moving; it will check the plugin's rules and then only move if the cache is more than 80% full.


My Borg backup script for dockers and VMs...

I have been using CA Backup for my Unraid backups for quite a while, but I discovered Borg Backup in Nerd Tools so I decided to try it. My goal was to make my backups smaller and faster with less downtime. I prefer to have individual schedules and backups for each VM and docker on Unraid so I thought I'd share my unified (VM/Docker) Borg backup script. I have reduced my Plex downtime to 25 mins on first backup, and around 7 minutes on additional backups due to deduplication.

 

The script assumes you are using the default locations for dockers and VMs. The email log only works when the script is scheduled (so the User Scripts log is written), not while run manually in User Scripts. Maybe you guys know a better way to capture the output?

 

I am not using an encrypted Borg repo for these backups, but I might make an additional version for backing up to an encrypted Borg repo. I have it set to retain 4 backups currently.

Any additions or changes to improve this are greatly welcomed 🙂

 

#!/bin/bash
arrayStarted=true

# Unraid display name of docker/vm
displayname=NS1

# Unraid backup source data folder
# (VM = /mnt/user/domains... Docker = /mnt/cache/appdata...)
backupsource=/mnt/user/domains/NS1

# The name of this Unraid User Script
scriptname=backup-ns1-vm

# Path to your Borg backup repo
export BORG_REPO=/mnt/user/Backup/Borg-backups/unencrypted

# Email address to receive backup log
[email protected]

###### Don't Edit Below Here ######

# Build variables, clear log
today=$(date +%m-%d-%Y.%H:%M:%S)
export backupname="${displayname}_${today}"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes
>/tmp/user.scripts/tmpScripts/$scriptname/log.txt

# Determine if Docker or VM, set backupsourcetype var
backupsourcetype=
[[ $backupsource = */mnt/cache/appdata* ]] && backupsourcetype=Docker
[[ $backupsource = */mnt/user/domains* ]] && backupsourcetype=VM
echo "Backup source type: $backupsourcetype"

# Shutdown the Docker/VM
[[ $backupsourcetype = Docker ]] && docker stop $displayname
[[ $backupsourcetype = VM ]] && virsh shutdown $displayname --mode acpi

# Create backup
echo "Backing up $displayname $backupsourcetype folder..."
borg create --stats $BORG_REPO::$backupname $backupsource
sleep 5

# Start the Docker/VM
[[ $backupsourcetype = Docker ]] && docker start $displayname
[[ $backupsourcetype = VM ]] && virsh start $displayname

# Pruning, keep last 4 backups and prune older backups, give stats
borg prune -v --list --keep-last 4 --prefix $displayname $BORG_REPO

# Email the backup log
echo "Subject: Borg: $displayname $backupsourcetype Backup Log" > /tmp/email.txt
echo "From: Unraid Borg Backup" >> /tmp/email.txt
cat /tmp/email.txt /tmp/user.scripts/tmpScripts/$scriptname/log.txt > /tmp/notify.txt
sendmail $emailnotify < /tmp/notify.txt

 

Always test your backups before you rely on them!

Here are the commands I use for testing/restoring the backups.

Stop your docker/VM first, and rename or move the folder if you are testing and not replacing the data.

Borg restores into the current directory, so cd to / to restore to the original location, or run it elsewhere if not.

Edit the first two commands with your docker/VM name and repo location.

 

displayname=NS1
export BORG_REPO=/mnt/user/Backup/Borg-backups/unencrypted
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes
borg list $BORG_REPO

Copy the backup name you want to test/restore from the Borg repo list output to your clipboard, mine is:

NS1_02-07-2020.19:00:01

so I will set the restorebackupname variable to this

export restorebackupname=NS1_02-07-2020.19:00:01
cd /
borg extract --list $BORG_REPO::$restorebackupname
restorebackupname=

Start up your docker/VM and make sure it works.


A few quick lines for calibre-web to convert ebooks and add them to the library automatically; I use it as a scheduled task.

Drop ebook files (no folders) into startpath; the script will convert them to .mobi if that format doesn't exist yet, add all existing formats to Calibre, and REMOVE them from startpath once the job is done!!!

 

The DOCKER_MODS variable, as described by linuxserver.io, is necessary for ebook conversion:

-e DOCKER_MODS=linuxserver/calibre-web:calibre          #optional & x86-64 only Adds the ability to perform ebook conversion

 

In case it's useful for someone.

 

#!/bin/bash

startpath="/mnt/cache/Temp/download/books/"	### where to look for new files
confpath="/mnt/user/appdata/calibre-web"	### appdata path to calibre-web
searchpattern=".mobi"				### file type to look for and add if missing
inputdir="/import"				### input dir as set in the calibre-web docker (manually)
outputdir="/books"				### output dir as set in calibre-web (check webui)

cd $startpath

### look for books and convert

find . -mindepth 1 -iname "*"|sed "s|^\./||"|while read fname; do

	sourcefile="${fname%.*}"
	searchfile="$sourcefile$searchpattern"
	inputfile="$inputdir/$fname"
	convfile="$inputdir/$searchfile"

	if ! [[ "$searchfile" == "$fname" ]]; then
		if ! [ -n "$(lsof "$fname")" ]; then
			if ! [ -f "$searchfile" ]; then
				echo "docker exec calibre-web ebook-convert "\""$inputfile"\"" "\""$convfile"\""" > "$confpath/newbooks.txt"
				echo "docker exec calibre-web ebook-convert "\""$inputfile"\"" "\""$convfile"\""" >> "$confpath/newbookshistory.txt"
				echo "creating $searchfile ..."
				chmod +x "$confpath/newbooks.txt"
				cd $confpath
				./newbooks.txt > /dev/null
				sleep 1
				cd $startpath
			else
				cd $startpath
				echo "$searchfile already there..."
			fi
		else
			echo "$sourcefile in use ..."
		fi
	else
		echo "skipping $searchpattern for conversion ..."
	fi

done

### check files are free and prepare adding

find . -mindepth 1 -iname "*"|sed "s|^\./||"|while read fname; do

if ! [ -n "$(lsof "$fname")" ]; then
		echo "$fname ready ..."
		echo "docker exec calibre-web calibredb add -r "\""$inputdir"\"" --with-library="\""$outputdir"\""" > "$confpath/addbooks.txt"
	else
		echo "$fname in use ..."
		echo "echo "\""file in use"\""" > "$confpath/addbooks.txt"
		break
	fi

done

### exec addbooks

chmod +x "$confpath/addbooks.txt"
cd $confpath
./addbooks.txt
sleep 1

### remove books ###
rm -f "$startpath"*

exit

As a note: since I need the double quotes and still have some trouble getting them to work properly, I use the workaround of writing the command to a file and executing that.
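As an aside, that temp-file workaround can usually be avoided: building the command as a bash array keeps filenames with embedded spaces intact without nested quoting. A minimal sketch (container and file names are just examples; it prints the argument boundaries instead of actually running docker):

```shell
#!/bin/bash
# Sketch: build the command as an array so filenames with spaces survive intact.
inputfile="/import/My Book.epub"     # example names with spaces
convfile="/import/My Book.mobi"

cmd=(docker exec calibre-web ebook-convert "$inputfile" "$convfile")

# Each array element stays one argument - no temp file or chmod needed.
# To really run it: "${cmd[@]}"
printf '%s\n' "${cmd[@]}"    # show the argument boundaries instead of running
```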


I created a small script to help me fight the issues with hardware transcoding on Apollo Lake & Gemini Lake CPUs. Basically, the Intel Media Driver doesn't work well with hardware transcoding on Plex, resulting in blocky streams. The workaround is to delete the iHD driver so it falls back to the older Intel VA-API driver (see this post for more info: hardware-transcoding-broken-when-burning-subtitles-apollolake-based-synology-nases).

 

The code below checks the Plex container for the iHD driver and, if present, deletes the driver and restarts the container. This script will probably be deprecated once the issues are resolved, but I got a little tired of logging into the shell manually each time the container was updated.

#!/bin/bash
con="plex"

echo "Checking if iHD_drv_video.so exists..."
if docker exec $con sh -c "test -f /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so"; then
  echo "<font color='red'><b>iHD_drv_video.so found, deleting...</b></font>" && \
  docker exec $con sh -c "rm /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so" && \
  echo "iHD_drv_video.so removed, rebooting Plex" && \
  docker restart $con
fi

echo "All good!"

Any feedback on improving this script is more than welcome!


Hi Squid, I'm new and trying some of the settings in the User Scripts plugin.

Regarding enabling/disabling turbo write mode and auto turbo write:

What's the difference, are there any advantages, and should one be using it on a daily basis?

Are there any other places (sites) where we can find trusted, useful scripts?

 

Nice work, I was able to fix the issue I had with "write_cache_on_disk_10". Strange, because this drive is a WD Red retail drive, not shucked.

1 minute ago, Walter S said:

Hi Squid, I'm new and trying some of the settings in the User Scripts plugin. […]

The turbo write scripts were a precursor to the auto turbo plugin, and it would be better to use the plugin instead.


A small script to switch between two VMs, as I have an Ubuntu and a Windows VM that share the same hardware passthrough...

 

It checks whether one of the given VMs is running; if so, it stops it, waits until it has stopped, then starts the other.

It will do nothing if both are off or both are running, just in case...

 

#!/bin/bash

vm1="Media PC"		## Name of first VM
vm2="Ubuntu"		## Name of second VM

############### End config

vm_running="running"
vm_down="shut off"

vm1_state=$(virsh domstate "$vm1")
vm2_state=$(virsh domstate "$vm2")

echo "$vm1 is $vm1_state"
echo "$vm2 is $vm2_state"

if [ "$vm1_state" = "$vm_running" ] && [ "$vm2_state" = "$vm_down" ]; then
	echo "$vm1 is running, shutting down"
	virsh shutdown "$vm1"
	vm1_new_state=$(virsh domstate "$vm1")
	until [ "$vm1_new_state" = "$vm_down" ]; do
		echo "$vm1 $vm1_new_state"
		vm1_new_state=$(virsh domstate "$vm1")
		sleep 2
	done
	echo "$vm1 $vm1_new_state"
	sleep 2
	virsh start "$vm2"
	sleep 1
	vm2_new_state=$(virsh domstate "$vm2")
	echo "$vm2 $vm2_new_state"
elif [ "$vm2_state" = "$vm_running" ] && [ "$vm1_state" = "$vm_down" ]; then
	echo "$vm2 is running, shutting down"
	virsh shutdown "$vm2"
	vm2_new_state=$(virsh domstate "$vm2")
	until [ "$vm2_new_state" = "$vm_down" ]; do
		echo "$vm2 $vm2_new_state"
		vm2_new_state=$(virsh domstate "$vm2")
		sleep 2
	done
	echo "$vm2 $vm2_new_state"
	sleep 2
	virsh start "$vm1"
	sleep 1
	vm1_new_state=$(virsh domstate "$vm1")
	echo "$vm1 $vm1_new_state"
else
	echo "$vm1 is $vm1_state and $vm2 is $vm2_state - nothing to do"
fi

 


*Sigh*  Everyone may know this but I didn't... When using User Scripts, if you click outside the script window in the main window, that will also kill the script, so it's not just hitting the "X". I was about a quarter of the way into zeroing out a drive when I accidentally clicked outside the script window and killed it! *sigh*

7 minutes ago, jbuszkie said:

*Sigh*  Everyone may know this but I didn't... […]

Didn't think that it allowed outside clicks (I'll change that)


So I just ran this on disk 8 of my array (I first moved all the data off it to the other disks), and it completed (5TB drive, so it took a while). I clicked Done, then stopped the array, but when I went to Tools > New Config and checked all for Preserve current assignments, it didn't give me the option to Apply, just Done, and I can't remove the drive from my array. I am using Unraid 6.8.3 Nvidia and don't have any parity drives at the moment.

 

Basically this drive is showing old, and before it starts to fail I want to remove it from the array, and replace it with a new 6TB drive.

 

Am I going about this the wrong way and/or is there an easier way for me to remove a drive from my array without any parity?

4 hours ago, NitroNine said:

So I just ran this on disk 8 of my array, it completed, and I can't remove the drive from my array via Tools > New Config. […]

Sounds like you forgot to check the checkbox under Tools >> New Config confirming that you want to run the operation. Until you have done that, the Apply button will not be active.

 

However, if you are intending to replace the drive, why not simply use the normal procedure for replacing a disk drive, which does not involve going via the Tools >> New Config route (although both options are viable)?

