Additional Scripts For User.Scripts Plugin

31 minutes ago, Interstellar said:

Regarding the clear-an-array-drive script - why is it so slow (even with reconstruct write on)?

 

Doing dd bs=4M if=/dev/zero of=test to a mounted disk = max disk speed (~130MB/sec)

 

dd bs=1M if=/dev/zero of=/dev/md7 results in ~2MB/sec.

 

There should be no difference, right?

 

It would take ages to clear a 2TB drive at 2MB/sec!

 

There's a problem somewhere. With turbo write enabled I clear disks at 100MB/s+. Post your diagnostics - maybe something will be visible.
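
(For reference, turbo/reconstruct write can also be toggled from the console with unRAID's mdcmd - a quick sketch; the 0/1 values follow the stock md_write_method tunable:)

#!/bin/bash
# enable turbo (reconstruct) write; 1 = reconstruct write, 0 = read/modify/write
/root/mdcmd set md_write_method 1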

On 13/02/2018 at 9:24 AM, johnnie.black said:

 

There's a problem somewhere. With turbo write enabled I clear disks at 100MB/s+. Post your diagnostics - maybe something will be visible.

 

Dunno what it is but it just isn’t happy.

 

Just going to pull the drives and do a rebuild - only 4 hours as it only needs to do half the array.

Edited by Interstellar


I've created a script to install the latest rclone beta - essentially I've converted Waseh's excellent rclone plugin.

 

I was having problems with the rclone plugin: it failed to re-install rclone at boot because my server had no connectivity yet (my router is a pfSense VM). Running it as a script solves this, and I've also added a connectivity check at the start just to make sure:

 

 

if ping -q -c 1 -W 1 google.com >/dev/null; then
  echo "The network is up - proceeding"
else
  echo "The network is down - pausing"
  sleep 1m
fi
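
(A variant that polls instead of using one fixed sleep might be more robust - an untested sketch:)

# hypothetical variant: retry for up to ~5 minutes instead of one fixed sleep
for i in {1..10}; do
  if ping -q -c 1 -W 1 google.com >/dev/null; then
    echo "The network is up - proceeding"
    break
  fi
  echo "The network is down - attempt $i, retrying in 30s"
  sleep 30
done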

The script also installs the latest beta version each time - whereas the plugin (currently) installs a version that is around 4 months old.

 

#!/bin/bash
# optional sleep to give pfsense VM time to setup connectivity

if ping -q -c 1 -W 1 google.com >/dev/null; then
  echo "The network is up - proceeding"
else
  echo "The network is down - pausing"
  sleep 4m
fi

# make supporting directory structure on flash drive
mkdir -p /boot/config/plugins/rclone-beta/install
mkdir -p /boot/config/plugins/rclone-beta/scripts
mkdir -p /boot/config/plugins/rclone-beta/logs

# download dependencies to /boot/config/plugins/rclone-beta/install
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/ap/man-1.6g-x86_64-3.txz -O /boot/config/plugins/rclone-beta/install/man-1.6g-x86_64-3.txz
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/a/infozip-6.0-x86_64-3.txz -O /boot/config/plugins/rclone-beta/install/infozip-6.0-x86_64-3.txz
curl -o /boot/config/plugins/rclone-beta/install/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

# install dependencies
installpkg /boot/config/plugins/rclone-beta/install/man-1.6g-x86_64-3.txz
installpkg /boot/config/plugins/rclone-beta/install/infozip-6.0-x86_64-3.txz

# check if the stable branch is installed - it must be removed first
if [ -d /usr/local/emhttp/plugins/rclone ]; then
  echo ""
  echo ""
  echo "----------Stable Branch installed----------"
  echo "Uninstall Stable branch to install Beta!"
  echo ""
  echo ""
  exit 1
fi

# download a fresh copy of the latest rclone beta
wget https://beta.rclone.org/rclone-beta-latest-linux-amd64.zip -O /boot/config/plugins/rclone-beta/install/rclone-beta.zip

# download the plugin package
wget https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/archive/rclone-beta-2016.11.14-x86_64-1.txz -O /boot/config/plugins/rclone-beta/install/rclone-bundle.txz

# install package

upgradepkg --install-new /boot/config/plugins/rclone-beta/install/rclone-bundle.txz

# remove old cert and re-download

if [ -f /boot/config/plugins/rclone-beta/install/ca-certificates.crt ]; then
  rm -f /boot/config/plugins/rclone-beta/install/ca-certificates.crt
fi;

curl -o /boot/config/plugins/rclone-beta/install/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

# remove old unpacked rclone version if one exists
# (a glob inside [ -d ] breaks with multiple matches; rm -rf handles it directly)
rm -rf /boot/config/plugins/rclone-beta/install/rclone-v*/

# install

unzip /boot/config/plugins/rclone-beta/install/rclone-beta.zip -d /boot/config/plugins/rclone-beta/install/

cp /boot/config/plugins/rclone-beta/install/rclone-v*/rclone /usr/sbin/
chown root:root /usr/sbin/rclone
chmod 755 /usr/sbin/rclone

mkdir -p /etc/ssl/certs/
cp /boot/config/plugins/rclone-beta/install/ca-certificates.crt /etc/ssl/certs/

if [ ! -f /boot/config/plugins/rclone-beta/.rclone.conf ]; then
  touch /boot/config/plugins/rclone-beta/.rclone.conf;
fi;

mkdir -p /boot/config/plugins/rclone-beta/logs;
mkdir -p /boot/config/plugins/rclone-beta/scripts;
cp /boot/config/plugins/rclone-beta/install/scripts/* /boot/config/plugins/rclone-beta/scripts/ -R -n;

echo ""
echo "-----------------------------------------------------------"
echo " rclone-beta has been installed."
echo "-----------------------------------------------------------"
echo ""

 

Bleeding Edge Toolkit
 
If the RCs are just too pedestrian for you, and you want to run the latest webui code from GitHub instead, check out the Bleeding Edge Toolkit.
 
I call it a "toolkit" because for the most part it is still up to you to decide which patches to install. I provide some examples, but I don't intend to update this every time there is a commit.
 
Instructions are in the script, but the idea is that you modify the script to install the patches you want and then set the script to run at first array start, at which point it will automatically download and install the patches for you. You can also run it manually every time you add a new patch to the list.
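
(The flow boils down to fetching a commit as a patch and applying it - a hypothetical sketch, not the toolkit itself; the commit hash, repo path, target directory, and -p strip level below are all placeholders:)

#!/bin/bash
# hypothetical example only - hash, repo, target dir and -p level are placeholders
COMMIT=abc1234
wget -q "https://github.com/limetech/webgui/commit/${COMMIT}.patch" -O "/tmp/${COMMIT}.patch"
patch -p1 -d /usr/local/emhttp --forward < "/tmp/${COMMIT}.patch"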
 
Big disclaimer... this is intended to be used on test systems only. The developers are certainly not intending unreleased code to be used in production systems. If you are interested in testing at this level, installing unRAID in a VM is a good place to start.

11 hours ago, ljm42 said:

Bleeding Edge Toolkit

LOL. Perhaps you should name it unPredictable Results Toolkit

On 2018-03-09 at 8:21 PM, ljm42 said:

I don't intend to update this every time there is a commit.

What!?!?  You mean that you're not going to sit there and recode the script every day (or hourly!?)

1 hour ago, Squid said:

What!?!?  You mean that you're not going to sit there and recode the script every day (or hourly!?)

 

LOL. That's not bleeding edge! Though personally I would be very prudent - not every commit works as intended (i.e. has bugs). At the end of the day we are talking development cycles here.

10 hours ago, Squid said:

What!?!?  You mean that you're not going to sit there and recode the script every day (or hourly!?)

 

There were a few commits I wanted to test and I thought "no big deal, I'll just put a quick wrapper on 'patch' and grab those updates." Well, it turns out 'patch' has a few shortcomings and by the time I was happy with the script there had been two more RCs and I completely forgot what I was so interested in testing :)  But at least the script is ready for next time!

 

9 hours ago, bonienl said:

Though personally I would be very prudent - not every commit works as intended (i.e. has bugs). At the end of the day we are talking development cycles here.

 

Agreed. This should only be used on a test system!

20 minutes ago, ljm42 said:

Agreed. This should only be used on a test system!

 

Another thing to be aware of: GitHub holds the files related to the GUI, but sometimes GUI changes depend on system changes which are not available through GitHub.

On 1.12.2017 at 10:33 PM, landS said:

 

Can this script be duplicated for removing other items?

For example, if I replace ".DS_Store" with ".trash-1000", will it remove the .trash-1000 folder and all subfolders/subfiles on each disk when present?

As .trash-1000 ends up in the root of any given share, can the maxdepth be set to 1?

 

Thanks!

 

Great idea.

 

I tried:

#!/bin/bash
echo "Searching for (and deleting) .nfo Files in Filme"
echo "This may take a while"
find /mnt/user/Archiv/Filme -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;
echo "Searching for (and deleting) .nfo Files in Serien"
echo "This may take a while"
find /mnt/user/Archiv/Serien -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;
echo "Searching for (and deleting) .nfo Files in Musik"
echo "This may take a while"
find /mnt/user/Archiv/Musik -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;

 

But it doesn't work. It displays

 

Script location: /tmp/user.scripts/tmpScripts/Clean .nfo/script
Note that closing this window will abort the execution of this script
Searching for (and deleting) .nfo Files in Filme
This may take a while
Searching for (and deleting) .nfo Files in Serien
This may take a while
Searching for (and deleting) .nfo Files in Musik
This may take a while
 

 

but nothing gets deleted... anyone got any idea? Also, is there an easy way to add multiple file patterns to search for?
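
(For what it's worth, the likely culprit is the pattern: -name ".nfo" only matches files literally named ".nfo"; matching the extension needs a wildcard, and -o chains several patterns in one pass - an untested sketch:)

#!/bin/bash
# match by extension (*.nfo) rather than the literal name ".nfo";
# the second pattern is just a placeholder showing how -o chains them
find /mnt/user/Archiv/Filme -noleaf -type f \( -name "*.nfo" -o -name "*.bak" \) -exec rm "{}" \;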


Thanks @nuhll. I beat on this one for a while, but alas my ignorance won the battle.


Suggestion for the clear_an_array_drive script

 

change

for d in /mnt/disk[1-9]*
do
   x=`ls -A $d`
   z=`du -s $d`
   y=${z:0:1}
#   echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

   # the test for marker and emptiness
   if [ "$x" == "$marker" -a "$y" == "0" ]
   then
      found=1
      break
   fi
   let n=n+1
done

 

to

 

for d in /mnt/disk[1-9]*
do
   x=`ls -A $d`
#   echo -e "d:"$d "x:"${x:0:20}

   # the test for marker
   if [ "$x" == "$marker" ]
   then
      z=`du -s $d`
      y=${z:0:1}
#      echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

      # the test for marker and emptiness
      if [ "$x" == "$marker" -a "$y" == "0" ]
      then
         found=1
         break
      fi
   fi
   let n=n+1
done

 

This will prevent lengthy emptiness checks on disks without the clear-me marker.


I have a Recycle Bin share for my Sonarr, Radarr, and Lidarr recycling, just in case I need to jump back to a previous version... What I didn't realize was that after years of this it's taking up A LOT of space. I was wondering if there is a script to delete everything in my Recycle Bin that's older than 30 days, run every week.

 

Thank you in advance for your help,

 

Rudder2

On 4/7/2018 at 11:12 AM, Rudder2 said:

I have a Recycle Bin share for my Sonarr, Radarr, and Lidarr recycling, just in case I need to jump back to a previous version... What I didn't realize was that after years of this it's taking up A LOT of space. I was wondering if there is a script to delete everything in my Recycle Bin that's older than 30 days, run every week.

 

Thank you in advance for your help,

 

Rudder2

 

Are you using the Recycle Bin plugin? If so, it should have settings to delete files after a specified length of time.

If not, you could use the find command.

Check these out for a bit of an idea to get you started:

https://askubuntu.com/questions/589210/removing-files-older-than-7-days

https://stackoverflow.com/questions/13868821/shell-script-to-delete-directories-older-than-n-days
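
(As a concrete sketch of the find approach - the share path is an assumption, and the User Scripts scheduler can run it weekly:)

#!/bin/bash
# delete recycle-share files older than 30 days, then prune empty directories
# (the share path is an assumption - adjust to the real one)
find "/mnt/user/Recycle Bin" -type f -mtime +30 -delete
find "/mnt/user/Recycle Bin" -mindepth 1 -type d -empty -delete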

On 4/11/2018 at 9:22 AM, kizer said:

 

Are you using the Recycle Bin plugin? If so, it should have settings to delete files after a specified length of time.

If not, you could use the find command.

Check these out for a bit of an idea to get you started:

https://askubuntu.com/questions/589210/removing-files-older-than-7-days

https://stackoverflow.com/questions/13868821/shell-script-to-delete-directories-older-than-n-days

 

I use both the Recycle Bin plugin and a Recycle Bin share that Sonarr, Lidarr, and Radarr move files into instead of deleting them. I would like this share to delete the files every 14 or 30 days. It might be redundant since I have the Recycle Bin plugin installed. I will have to look into how that works... Does it catch all files deleted from unRAID no matter what deleted them? If so, then I probably don't need the Recycle Bin share and can have my darr apps delete files instead of moving them. I will look at those links also.

 

Thank you,

 

Rudder2

19 minutes ago, Rudder2 said:

 

I use both the Recycle Bin plugin and a Recycle Bin share that Sonarr, Lidarr, and Radarr move files into instead of deleting them. I would like this share to delete the files every 14 or 30 days. It might be redundant since I have the Recycle Bin plugin installed. I will have to look into how that works... Does it catch all files deleted from unRAID no matter what deleted them? If so, then I probably don't need the Recycle Bin share and can have my darr apps delete files instead of moving them. I will look at those links also.

 

Thank you,

 

Rudder2

 

The Recycle Bin plugin uses a feature of SMB to keep files that are deleted over the network, so it wouldn't apply to your usage.

1 hour ago, Rudder2 said:

 

I use both the Recycle Bin plugin and a Recycle Bin share that Sonarr, Lidarr, and Radarr move files into instead of deleting them. I would like this share to delete the files every 14 or 30 days. It might be redundant since I have the Recycle Bin plugin installed. I will have to look into how that works... Does it catch all files deleted from unRAID no matter what deleted them? If so, then I probably don't need the Recycle Bin share and can have my darr apps delete files instead of moving them. I will look at those links also.

 

Thank you,

 

Rudder2

Try this- https://lime-technology.com/forums/topic/41044-recycle-bin-vfs-recycle-for-63-and-later-versions/?do=findComment&comment=589029

 

Make sure you read through the complete discussion. Basically you can map Radarr, Sonarr etc. to the .recyclebin folder and use the Recycle Bin plugin to delete files after a set interval. You'll need to set up the user script described in that conversation to prevent the .recyclebin directory from being removed.
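
(The gist of that user script - a hypothetical sketch; the folder name follows the plugin's default and the share glob is an assumption:)

#!/bin/bash
# recreate the recycle folder in every share so the darr apps always have a
# target (folder name assumed from the plugin default)
for share in /mnt/user/*/; do
  mkdir -p "${share}.Recycle.Bin"
done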

52 minutes ago, wgstarks said:

Try this- https://lime-technology.com/forums/topic/41044-recycle-bin-vfs-recycle-for-63-and-later-versions/?do=findComment&comment=589029

 

Make sure you read through the complete discussion. Basically you can map Radarr, Sonarr etc. to the .recyclebin folder and use the Recycle Bin plugin to delete files after a set interval. You'll need to set up the user script described in that conversation to prevent the .recyclebin directory from being removed.

I like it!  This looks like it will work beautifully!  I had to manually create all the .Recycle.Bin folders in all my shares to begin with, but this was no biggie!  I discovered 600GB in my darr apps' Recycle Bin share accumulated over the years since I switched to using all darr apps.

 

Thank you for your help!

 

Rudder2


I was hoping somebody could lend me a hand with a little code. 

I currently drop all my files into a folder and let a script move them around. However, I want to put a little logic into it and came up with two things, but I'm having issues combining them.

 

For instance, I want to have it search for files/folders that are older than a specific time and move them, which I figured out:

find /Source/* -maxdepth 1 -mmin +5 -exec mv {} /Destination/ \;

 

I also want to search for folders with a particular string in them, because typing out the same command for every season one by one gets really long:

mv /Source/*S{01..50}* /Destination/

 

I attempted some harebrained combining, but it doesn't work - it always results in an empty search (brace expansion doesn't happen inside find's quoted pattern):

find /SOURCE/* -iname "*s{01..50}*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;

 

Basically, what I'm attempting to accomplish is moving TV shows I have in folders from one folder to another, while making sure they are at least 5 minutes old, so the script doesn't move files that are still being written before performing some other steps. I also want to make sure they are TV shows; things like Plex and XBMC aka Kodi use Some.Show-S01E01.mp4 as their naming convention, which I've adhered to.

 

I can get things to work if I use the following, but honestly I was hoping for a workaround (see the sketch after the list).

find /mnt/user/uploads/blah/* -maxdepth 1 -type d -mmin +5 -iname "*s01*e*" -exec mv {} /Destination/ \;

find /mnt/user/uploads/blah/* -maxdepth 1 -type d -mmin +5 -iname "*s02*e*" -exec mv {} /Destination/ \;

...

find /mnt/user/uploads/blah/* -maxdepth 1 -type d -mmin +5 -iname "*s99*e*" -exec mv {} /Destination/ \;
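
(For the record, GNU find can collapse all of those into one command with a regex - an untested sketch; -iregex matches against the whole path, hence the leading .*:)

#!/bin/bash
# one pass instead of 99: match any SxxE-style folder at least 5 minutes old
find /mnt/user/uploads/blah/ -mindepth 1 -maxdepth 1 -type d -mmin +5 -iregex '.*s[0-9][0-9]e.*' -exec mv {} /Destination/ \;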

 

 

 

**************************************************Update*************************************************

 

I think I found a little workaround using FileBot to achieve what I'm trying to do.  xD


Allow unRAID's webUI to utilize the full width of your browser instead of being limited to 1920px. Since /usr/local/emhttp lives in RAM, the change is undone by a reboot or webUI update, so run it at each array start.

 

#!/bin/bash
sed -i 's/max-width:1920px;//g' /usr/local/emhttp/plugins/dynamix/styles/*.css

 


Could really use some advice since I really don't know what I'm doing. :D

 

I'm currently using this script to clean hidden Mac files from my Media share:

#!/bin/bash
echo "Searching for (and deleting) .DS_Store Files"
echo "This may take a while"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;

echo "======================="
echo "Searching for (and deleting) ._ files"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
echo "Cleanup Complete"

 

 

I would like to modify it to scan other shares as well. Will this work?

#!/bin/bash
echo "Searching for (and deleting) .DS_Store Files in Media and flash"
echo "This may take a while"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;
find /boot -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;

echo "======================="
echo "Searching for (and deleting) ._ files in Media and flash"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
find /boot -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
echo "Cleanup Complete"

 

Edited by wgstarks
Edited to correct path for /boot

On 8/7/2017 at 9:17 AM, Squid said:

Automatically save syslog onto flash drive

 

Set the script to run at First Array Start Only, in the background


#!/bin/bash
mkdir -p /boot/logs
FILENAME="/boot/logs/syslog-$(date +%s)"
tail -f /var/log/syslog > $FILENAME

 

 

I have this scheduled to run at array start in the background, but it prevents my disks from starting and stays at "Mounting Disk". Once I disabled it the array started immediately.

Should there be any changes to the script for the latest unRAID version? I'm on 6.5.0 @Squid

Edited by jrdnlc


Have another script run instead, and have that script run this one and fork it to the background:

 

#!/bin/bash

/boot/scripts/myRealScript.sh &

 

 

Edited by Squid
On 2/15/2018 at 3:08 PM, Interstellar said:

 

Dunno what it is but it just isn’t happy.

 

Just going to pull the drives and do a rebuild - only 4 hours as it only needs to do half the array.

Did you find a solution for this? I just started running it myself, and I'm finding the same thing. At its current rate, it's going to take over 80 hours to clear my 640GB disk.

 

Edit: Well, it's getting even worse. It started off reporting around 8.0MB/s. 1500s in, though, it's only at 2.5GiB complete and is now reporting 1.7MB/s. It seems like it's losing about 0.1MB/s for every 100MB written or so.

Edited by Caldorian
More info

On 5/4/2018 at 5:57 AM, Caldorian said:

Did you find a solution for this? I just started running it myself, and I'm finding the same thing. At its current rate, it's going to take over 80 hours to clear my 640GB disk.

 

Edit: Well, it's getting even worse. It started off reporting around 8.0MB/s. 1500s in, though, it's only at 2.5GiB complete and is now reporting 1.7MB/s. It seems like it's losing about 0.1MB/s for every 100MB written or so.

 

Nope.

 

I think I just pulled the drive and let parity rebuild as it was faster.

 

Although I have a vague recollection that I re-formatted the drives so I could mount them, then filled them with a massive /dev/zero file (at full speed!), then did the /dev/md* command to clear the first 500M, then pulled the drive and forced the parity to remain valid.
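
(Roughly, that workaround would look like this - a risky, untested sketch; the disk7/md7 names are placeholders, and parity is only valid if every step completes:)

#!/bin/bash
# DANGER: illustrative only - device/mount names are placeholders
dd bs=4M if=/dev/zero of=/mnt/disk7/zero.bin    # fill the filesystem at full speed (runs until the disk is full)
dd bs=1M if=/dev/zero of=/dev/md7 count=500     # then zero the first 500M directly, wiping the filesystem metadata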

 

Ended up with a handful of parity errors after the 11 hour check.

 

Not ideal but at least I had a 99.999% valid parity whilst it checked it.

 

System works perfectly otherwise and I haven’t tried it again on newer versions.

Edited by Interstellar

