Additional Scripts For User.Scripts Plugin


Recommended Posts

31 minutes ago, Interstellar said:

Regarding the clearing an array drive script - why is it so slow (even with reconstruct write on)?

 

Doing dd bs=4M if=/dev/zero of=test to a mounted disk = max disk speed (~130MB/sec)

 

dd bs=1M if=/dev/zero of=/dev/md7 results in ~2MB/sec.

 

There should be no difference?

 

It would take nearly two weeks to clear a 2TB drive at 2MB/sec!

 

There's a problem somewhere; with turbo write enabled I clear disks at 100MB/s+. Post your diagnostics, maybe something will be visible.
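
For anyone wanting to quantify it, a quick benchmark sketch with live progress reporting - destructive, since it overwrites the start of whatever device you point it at; /dev/md7 is just the example device from above:

# write 1GiB of zeros and report throughput as it goes
dd if=/dev/zero of=/dev/md7 bs=4M count=256 conv=fsync status=progress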

Link to comment
On 13/02/2018 at 9:24 AM, johnnie.black said:

 

There's a problem somewhere; with turbo write enabled I clear disks at 100MB/s+. Post your diagnostics, maybe something will be visible.

 

Dunno what it is but it just isn’t happy.

 

Just going to pull the drives and do a rebuild - only 4 hours as it only needs to do half the array.

Edited by Interstellar
Link to comment
  • 2 weeks later...

I've created a script to install the latest rclone beta - essentially I've converted the excellent rclone plugin.

 

I was having problems with the rclone plugin as it was failing to re-install rclone at boot, since my server has no connectivity at that point (my router is a pfSense VM). Running it as a script solves this, but I've also added a connectivity check at the start just to make sure:

 

 

if ping -q -c 1 -W 1 google.com >/dev/null; then
  echo "The network is up - proceeding"
else
  echo "The network is down - pausing"
  sleep 1m
fi
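
If you'd rather not proceed until the network is actually up, an until-loop variant (a sketch along the same lines) keeps re-checking instead of sleeping once:

until ping -q -c 1 -W 1 google.com >/dev/null; do
  echo "The network is down - retrying in 30s"
  sleep 30
done
echo "The network is up - proceeding"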

The script also installs the latest beta version each time - whereas the plugin (currently) installs a version that is around 4 months old.

 

#!/bin/bash
# optional sleep to give pfsense VM time to setup connectivity

if ping -q -c 1 -W 1 google.com >/dev/null; then
  echo "The network is up - proceeding"
else
  echo "The network is down - pausing"
  sleep 4m
fi

# make supporting directory structure on flash drive
mkdir -p /boot/config/plugins/rclone-beta/{install,scripts,logs}

# download dependencies to /boot/config/plugins/rclone-beta/install

wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/ap/man-1.6g-x86_64-3.txz -O /boot/config/plugins/rclone-beta/install/man-1.6g-x86_64-3.txz
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/a/infozip-6.0-x86_64-3.txz -O /boot/config/plugins/rclone-beta/install/infozip-6.0-x86_64-3.txz
curl -o /boot/config/plugins/rclone-beta/install/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

# install dependencies

installpkg /boot/config/plugins/rclone-beta/install/man-1.6g-x86_64-3.txz
installpkg /boot/config/plugins/rclone-beta/install/infozip-6.0-x86_64-3.txz

# check if the stable branch is installed

if [ -d /usr/local/emhttp/plugins/rclone ]; then
  echo ""
  echo ""
  echo "----------Stable Branch installed----------"
  echo "Uninstall Stable branch to install Beta!"
  echo ""
  echo ""
  exit 1
fi

# download a fresh copy of rclone beta

wget https://beta.rclone.org/rclone-beta-latest-linux-amd64.zip -O /boot/config/plugins/rclone-beta/install/rclone-beta.zip

# download the plugin package

wget https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/archive/rclone-beta-2016.11.14-x86_64-1.txz -O /boot/config/plugins/rclone-beta/install/rclone-bundle.txz

# install the package

upgradepkg --install-new /boot/config/plugins/rclone-beta/install/rclone-bundle.txz

# remove old cert and re-download (rm -f is quiet if the file doesn't exist)

rm -f /boot/config/plugins/rclone-beta/install/ca-certificates.crt
curl -o /boot/config/plugins/rclone-beta/install/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

# remove old rclone versions if present (rm -rf is quiet if the glob matches nothing)

rm -rf /boot/config/plugins/rclone-beta/install/rclone-v*/

# install

unzip /boot/config/plugins/rclone-beta/install/rclone-beta.zip -d /boot/config/plugins/rclone-beta/install/

cp /boot/config/plugins/rclone-beta/install/rclone-v*/rclone /usr/sbin/
chown root:root /usr/sbin/rclone
chmod 755 /usr/sbin/rclone

mkdir -p /etc/ssl/certs/
cp /boot/config/plugins/rclone-beta/install/ca-certificates.crt /etc/ssl/certs/

# create an empty config file on first install
if [ ! -f /boot/config/plugins/rclone-beta/.rclone.conf ]; then
  touch /boot/config/plugins/rclone-beta/.rclone.conf
fi

cp -R -n /boot/config/plugins/rclone-beta/install/scripts/* /boot/config/plugins/rclone-beta/scripts/

echo ""
echo "-----------------------------------------------------------"
echo " rclone-beta has been installed."
echo "-----------------------------------------------------------"
echo ""

 

Link to comment
  • 2 weeks later...
Bleeding Edge Toolkit
 
If the RCs are just too pedestrian for you, and you want to run the latest webui code from GitHub instead, check out the Bleeding Edge Toolkit:
 
I call it a "toolkit" because for the most part it is still up to you to decide which patches to install. I provide some examples, but I don't intend to update this every time there is a commit.
 
Instructions are in the script, but the idea is that you modify the script to install the patches you want and then set the script to run at first array start, at which point it will automatically download and install the patches for you. You can also run it manually every time you add a new patch to the list.
 
Big disclaimer... this is intended to be used on test systems only. The developers are certainly not intending unreleased code to be used in production systems. If you are interested in testing at this level, installing unRAID in a VM is a good place to start:
Edited by ljm42
fix colors
Link to comment
1 hour ago, Squid said:

What!?!?  You mean that you're not going to sit there and recode the script every day (or hourly!?)

 

LOL. That's not bleeding edge! Though personally I would be very cautious - not every commit works as intended (i.e. has bugs). At the end of the day we are talking about development cycles here.

Link to comment
10 hours ago, Squid said:

What!?!?  You mean that you're not going to sit there and recode the script every day (or hourly!?)

 

There were a few commits I wanted to test and I thought "no big deal, I'll just put a quick wrapper on 'patch' and grab those updates." Well, it turns out 'patch' has a few shortcomings and by the time I was happy with the script there had been two more RCs and I completely forgot what I was so interested in testing :)  But at least the script is ready for next time!

 

9 hours ago, bonienl said:

Though personally I would be very cautious - not every commit works as intended (i.e. has bugs). At the end of the day we are talking about development cycles here.

 

Agreed. This should only be used on a test system!

Link to comment
On 1.12.2017 at 10:33 PM, landS said:

 

Can this script be duplicated for removing other items?

For example, if I replace ".DS_Store" with ".trash-1000", will it remove the .trash-1000 folder and all subfolders/subfiles on each disk when present?

 

As the .trash-1000 folder ends up in the root of any given share, can the maxdepth be set to 1?

 

Thanks!

 

Great idea.

 

I tried:

#!/bin/bash
echo "Searching for (and deleting) .nfo Files in Filme"
echo "This may take a while"
find /mnt/user/Archiv/Filme -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;
echo "Searching for (and deleting) .nfo Files in Serien"
echo "This may take a while"
find /mnt/user/Archiv/Serien -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;
echo "Searching for (and deleting) .nfo Files in Musik"
echo "This may take a while"
find /mnt/user/Archiv/Musik -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;

 

But it doesn't work. It displays

 

Script location: /tmp/user.scripts/tmpScripts/Clean .nfo/script
Note that closing this window will abort the execution of this script
Searching for (and deleting) .nfo Files in Filme
This may take a while
Searching for (and deleting) .nfo Files in Serien
This may take a while
Searching for (and deleting) .nfo Files in Musik
This may take a while
 

 

but nothing gets deleted... anyone have any idea? Also, is there an easy way to add multiple file patterns to search for?
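
The culprit is the pattern: -name ".nfo" only matches a file whose entire name is ".nfo" - matching the extension needs a wildcard, and an -o (OR) group covers several patterns in one pass. A sketch against the same Archiv paths (".txt" is just a placeholder second pattern):

#!/bin/bash
# "*.nfo" (with the wildcard) matches any file ending in .nfo;
# plain ".nfo" only matches a file literally named ".nfo"
for dir in Filme Serien Musik; do
  echo "Searching for (and deleting) .nfo Files in $dir"
  find "/mnt/user/Archiv/$dir" -noleaf -type f \( -name "*.nfo" -o -name "*.txt" \) -exec rm "{}" \;
done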

Link to comment
  • 2 weeks later...

Suggestion for the clear_an_array_drive script

 

change

for d in /mnt/disk[1-9]*
do
   x=`ls -A $d`
   z=`du -s $d`
   y=${z:0:1}
#   echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

   # the test for marker and emptiness
   if [ "$x" == "$marker" -a "$y" == "0" ]
   then
      found=1
      break
   fi
   let n=n+1
done

 

to

 

for d in /mnt/disk[1-9]*
do
  x=`ls -A $d`
#  echo -e "d:"$d "x:"${x:0:20}

  # the test for marker
  if [ "$x" == "$marker" ]
  then
    z=`du -s $d`
    y=${z:0:1}
#    echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

    # the test for marker and emptiness
    if [ "$x" == "$marker" -a "$y" == "0" ]
    then
      found=1
      break
    fi
  fi
  let n=n+1
done

 

This prevents the lengthy emptiness check (the du scan) from running on disks that don't carry the clear-me marker.

Link to comment
  • 2 weeks later...

I have a Recycle Bin share for my Sonarr, Radarr, and Lidarr recycling, just in case I need to jump back to a previous version... What I didn't realize was that after years of this it's taking A LOT of space. I was wondering if there was a script to delete everything in my Recycle Bin share that's older than 30 days, run every week.

 

Thank you in advance for your help,

 

Rudder2

Link to comment
On 4/7/2018 at 11:12 AM, Rudder2 said:

I have a Recycle Bin share for my Sonarr, Radarr, and Lidarr recycling, just in case I need to jump back to a previous version... What I didn't realize was that after years of this it's taking A LOT of space. I was wondering if there was a script to delete everything in my Recycle Bin share that's older than 30 days, run every week.

 

Thank you in advance for your help,

 

Rudder2

 

Are you using the Recycle Bin Plugin? If so it should have settings to delete files after a specified length of time. 

If not you could use the find command

 

Check this out to give you a little bit of an idea to get you started. 

https://askubuntu.com/questions/589210/removing-files-older-than-7-days

https://stackoverflow.com/questions/13868821/shell-script-to-delete-directories-older-than-n-days
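
Something along these lines, scheduled weekly in User Scripts, would do it (a sketch - /mnt/user/RecycleBin is a placeholder for your actual share path):

#!/bin/bash
# delete anything older than 30 days, then prune the empty directories left behind
find /mnt/user/RecycleBin -mindepth 1 -type f -mtime +30 -delete
find /mnt/user/RecycleBin -mindepth 1 -type d -empty -delete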

Link to comment
On 4/11/2018 at 9:22 AM, kizer said:

 

Are you using the Recycle Bin Plugin? If so it should have settings to delete files after a specified length of time. 

If not you could use the find command

 

Check this out to give you a little bit of an idea to get you started. 

https://askubuntu.com/questions/589210/removing-files-older-than-7-days

https://stackoverflow.com/questions/13868821/shell-script-to-delete-directories-older-than-n-days

 

I use both the Recycling Bin Plugin and I also have a Recycling Bin Share that Sonarr, Lidarr, and Radarr move files into instead of deleting them.  I would like this share to delete the files every 14 or 30 days.  It might be redundant since I have the Recycling Plugin installed.  I will have to look into how that works... Does it capture all files deleted from unRAID, no matter what deleted them?  If so, then I probably don't need the Recycling Bin share that my darr apps move files into instead of deleting them.  I will look at those links also.

 

Thank you,

 

Rudder2

Link to comment
19 minutes ago, Rudder2 said:

 

I use both the Recycling Bin Plugin and I also have a Recycling Bin Share that Sonarr, Lidarr, and Radarr move files into instead of deleting them.  I would like this share to delete the files every 14 or 30 days.  It might be redundant since I have the Recycling Plugin installed.  I will have to look into how that works... Does it capture all files deleted from unRAID, no matter what deleted them?  If so, then I probably don't need the Recycling Bin share that my darr apps move files into instead of deleting them.  I will look at those links also.

 

Thank you,

 

Rudder2

 

Recycling Bin plugin uses a feature of SMB to keep files that are deleted over the network, so it wouldn't apply to your usage.

Link to comment
1 hour ago, Rudder2 said:

 

I use both the Recycling Bin Plugin and I also have a Recycling Bin Share that Sonarr, Lidarr, and Radarr moves files in to instead of deleting them.  I would like this share to delete the files every 14 or 30 days.  It might be redundant since I have the Recycling Plugin installed.  I will have to look in to how that works...Does it copy all files deleted from unRAID no matter what did it?  If so than I probably don't need the Recycling Bin share and have my darr apps move files instead of delete them.  I will look at those links also.

 

Thank you,

 

Rudder2

Try this- https://lime-technology.com/forums/topic/41044-recycle-bin-vfs-recycle-for-63-and-later-versions/?do=findComment&comment=589029

 

Make sure you read through the complete discussion. Basically you can map Radarr, Sonarr etc. to the .recyclebin folder and use the recyclebin plugin to delete files after a set interval. You'll need to set up the user script described in that conversation to prevent the .recyclebin directory from being removed.
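
For reference, the gist of that user script (a hypothetical sketch - the linked thread has the real one) is to recreate the folder in every share so it can't be removed for good:

#!/bin/bash
# recreate .Recycle.Bin in each share in case it was removed; run on a schedule
for share in /mnt/user/*/; do
  mkdir -p "${share}.Recycle.Bin"
done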

Link to comment
52 minutes ago, wgstarks said:

Try this- https://lime-technology.com/forums/topic/41044-recycle-bin-vfs-recycle-for-63-and-later-versions/?do=findComment&comment=589029

 

Make sure you read through the complete discussion. Basically you can map Radarr, Sonarr etc. to the .recyclebin folder and use the recyclebin plugin to delete files after a set interval. You'll need to set up the user script described in that conversation to prevent the .recyclebin directory from being removed.

I like it!  This looks like it will work beautifully!  I had to manually create all the .Recycle.Bin folders in all my shares to begin with, but this was no biggie!  I discovered 600GB in my darr apps' Recycling Bin share from the years since I upgraded to using all darr apps.

 

Thank you for your help!

 

Rudder2

Link to comment

I was hoping somebody could lend me a hand with a little code. 

I currently drop all my files into a folder and let a script move them around. However I want to put a little logic into it and came up with two things, but I'm having issues combining them. 

 

For instance, I want it to search for files/folders that are older than a specific time and move them, which I figured out:

find /Source/* -maxdepth 1 -mmin +5 -exec mv {} /Destination/ \;

 

I also want to search for folders with a particular string in them, because typing out the same command for each season one by one gets really long:

mv /Source/*S{01..50}* /Destination/

 

I attempted some harebrained combining, but it doesn't work - it always results in an empty search:

find /SOURCE/* -iname "*s{01..50}*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;

 

Basically what I'm attempting to accomplish is moving TV shows I have in folders from one folder to another, while making sure they are at least 5 minutes old - so the script isn't moving files that are still being written before it performs some other steps. I also want to make sure they are TV shows: Plex and XBMC (aka Kodi) use the Some.Show-S01E01.mp4 naming convention, which I've adhered to.

 

I can get things to work if I use the following, but honestly I was hoping for a workaround.

find /mnt/user/uploads/blah/* -iname "*s01*e*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;

find /mnt/user/uploads/blah/* -iname "*s02*e*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;

...

find /mnt/user/uploads/blah/* -iname "*s99*e*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;

 

 

 

**************************************************Update*************************************************

 

I think I found a little workaround using FileBot to achieve what I'm trying to do.  xD
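
For what it's worth, the combined attempt fails because {01..50} is brace expansion, which only the shell performs - inside a quoted -iname pattern, find treats it as literal text. GNU find's -iregex can express the whole season range in one pattern; a sketch using the same /Source and /Destination placeholders:

#!/bin/bash
# move season-tagged directories (S01E01-style names) that are at least 5 minutes old;
# -mindepth 1 keeps /Source itself out of the results
find /Source -mindepth 1 -maxdepth 1 -type d -mmin +5 \
  -regextype posix-extended -iregex '.*s[0-9]{2}e[0-9]{2}.*' \
  -exec mv {} /Destination/ \;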

Link to comment

Could really use some advice since I really don't know what I'm doing. :D

 

I'm currently using this script to clean hidden mac files from my Media share-

#!/bin/bash
echo "Searching for (and deleting) .DS_Store Files"
echo "This may take a awhile"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;

echo "======================="
echo "Searching for (and deleting) ._ files"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
echo "Cleanup Complete"

 

 

I would like to modify it to scan other shares as well. Will this work?

#!/bin/bash
echo "Searching for (and deleting) .DS_Store Files in Media and flash"
echo "This may take a awhile"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;
find /boot -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;

echo "======================="
echo "Searching for (and deleting) ._ files in Media and flash"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
find /boot -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
echo "Cleanup Complete"

 

Edited by wgstarks
Edited to correct path for /boot
Link to comment
On 8/7/2017 at 9:17 AM, Squid said:

Automatically save syslog onto flash drive

 

Set the script to run at First Only Array Start in the background


#!/bin/bash
mkdir -p /boot/logs
FILENAME="/boot/logs/syslog-$(date +%s)"
tail -f /var/log/syslog > $FILENAME

 

 

I have this scheduled to run at first array start in the background, but it prevents my disks from starting and stays at "Mounting Disk". Once I disabled it, the array starts immediately.

Should there be any changes to the script for the latest unRAID version? I'm on 6.5.0 @Squid
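
One thing worth noting: tail -f never exits, so if the script somehow runs in the foreground it will block whatever launched it indefinitely. A way to rule that out (a sketch, not a confirmed fix for this case) is to detach the tail explicitly so the script itself returns immediately:

#!/bin/bash
mkdir -p /boot/logs
FILENAME="/boot/logs/syslog-$(date +%s)"
# nohup + & detach the never-ending tail; the script exits right away
nohup tail -f /var/log/syslog > "$FILENAME" 2>/dev/null &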

Edited by jrdnlc
Link to comment
On 2/15/2018 at 3:08 PM, Interstellar said:

 

Dunno what it is but it just isn’t happy.

 

Just going to pull the drives and do a rebuild - only 4 hours as it only needs to do half the array.

Did you find a solution for this? I just started running it myself, and I'm finding the same thing. At its current rate, it's going to take over 80 hours to clear my 640GB disk.

 

Edit: Well, it's getting even worse. It started off initially reporting around 8.0MB/s. 1500s in though, it's only at 2.5GiB complete, and is now reporting 1.7MB/s. Seems like it's losing about 0.1MB/s for every 100MB written or so.

Edited by Caldorian
More info
Link to comment
On 5/4/2018 at 5:57 AM, Caldorian said:

Did you find a solution for this? I just started running it myself, and I'm finding the same thing. At its current rate, it's going to take over 80 hours to clear my 640GB disk.

 

Edit: Well, it's getting even worse. It started off initially reporting around 8.0MB/s. 1500s in though, it's only at 2.5GiB complete, and is now reporting 1.7MB/s. Seems like it's losing about 0.1MB/s for every 100MB written or so.

 

Nope.

 

I think I just pulled the drive and let parity rebuild as it was faster.

 

Although I have a vague recollection that I re-formatted the drives so I could mount them, then filled them with a massive /dev/zero file (at full speed!), then did the /dev/md* command to clear the first 500M, then pulled the drive and forced the parity to remain valid.

 

Ended up with a handful of parity errors after the 11 hour check.

 

Not ideal but at least I had a 99.999% valid parity whilst it checked it.

 

System works perfectly otherwise and I haven’t tried it again on newer versions.

Edited by Interstellar
Link to comment
