JorgeB Posted February 13, 2018

31 minutes ago, Interstellar said:

    Regarding the clear-an-array-drive script - why is it so slow (even with reconstruct write on)?
    Doing dd bs=4M if=/dev/zero of=test to a mounted disk = max disk speed (~130MB/sec).
    dd bs=1M if=/dev/zero of=/dev/md7 results in ~2MB/sec. There should be no difference?
    It would take years to clear a 2TB drive at 2MB/sec!

There's a problem somewhere; with turbo write enabled I clear disks at 100MB/s+. Post your diagnostics, maybe something is visible.
Interstellar Posted February 15, 2018 (edited)

On 13/02/2018 at 9:24 AM, johnnie.black said:

    There's a problem somewhere, with turbo write enable I clear disks at 100MB/s+, post your diagnostics, maybe something visible.

Dunno what it is, but it just isn't happy. Just going to pull the drives and do a rebuild - only 4 hours, as it only needs to do half the array.

Edited February 15, 2018 by Interstellar
DZMM Posted March 1, 2018

I've created a script to install the latest rclone beta - essentially I've converted the excellent plugin. I was having problems with the rclone plugin as it was failing to re-install rclone at boot, when my PC didn't have connectivity because I have a pfSense VM. Running it as a script solves this, but I've also added a connectivity check at the start just to make sure:

```shell
if ping -q -c 1 -W 1 google.com >/dev/null; then
    echo "The network is up - proceeding"
else
    echo "The network is down - pausing"
    sleep 1m
fi
```

The script also installs the latest beta version each time - whereas the plugin (currently) installs a version that is around 4 months old.

```shell
#!/bin/bash

# optional sleep to give pfSense VM time to set up connectivity
if ping -q -c 1 -W 1 google.com >/dev/null; then
    echo "The network is up - proceeding"
else
    echo "The network is down - pausing"
    sleep 4m
fi

# make supporting directory structure on flash drive
mkdir -p /boot/config/plugins/rclone-beta
mkdir -p /boot/config/plugins/rclone-beta/install
mkdir -p /boot/config/plugins/rclone-beta/scripts
mkdir -p /boot/config/plugins/rclone-beta/logs

# download dependencies to /boot/config/plugins/rclone-beta/install
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/ap/man-1.6g-x86_64-3.txz -O /boot/config/plugins/rclone-beta/install/man-1.6g-x86_64-3.txz
wget http://slackware.cs.utah.edu/pub/slackware/slackware64-14.2/slackware64/a/infozip-6.0-x86_64-3.txz -O /boot/config/plugins/rclone-beta/install/infozip-6.0-x86_64-3.txz
curl -o /boot/config/plugins/rclone-beta/install/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

# install dependencies
installpkg /boot/config/plugins/rclone-beta/install/man-1.6g-x86_64-3.txz
installpkg /boot/config/plugins/rclone-beta/install/infozip-6.0-x86_64-3.txz

# check if stable branch is installed
if [ -d /usr/local/emhttp/plugins/rclone ]; then
    echo ""
    echo ""
    echo "----------Stable Branch installed----------"
    echo "Uninstall Stable branch to install Beta!"
    echo ""
    echo ""
    exit 1
fi

# download fresh version of rclone
wget https://beta.rclone.org/rclone-beta-latest-linux-amd64.zip -O /boot/config/plugins/rclone-beta/install/rclone-beta.zip

# download package
wget https://raw.githubusercontent.com/Waseh/rclone-unraid/beta/archive/rclone-beta-2016.11.14-x86_64-1.txz -O /boot/config/plugins/rclone-beta/install/rclone-bundle.txz

# install package
upgradepkg --install-new /boot/config/plugins/rclone-beta/install/rclone-bundle.txz

# remove old cert and re-download
if [ -f /boot/config/plugins/rclone-beta/install/ca-certificates.crt ]; then
    rm -f /boot/config/plugins/rclone-beta/install/ca-certificates.crt
fi
curl -o /boot/config/plugins/rclone-beta/install/ca-certificates.crt https://raw.githubusercontent.com/bagder/ca-bundle/master/ca-bundle.crt

# remove old rclone version if it exists
if [ -d /boot/config/plugins/rclone-beta/install/rclone-v*/ ]; then
    rm -rf /boot/config/plugins/rclone-beta/install/rclone-v*/
fi

# install
unzip /boot/config/plugins/rclone-beta/install/rclone-beta.zip -d /boot/config/plugins/rclone-beta/install/
cp /boot/config/plugins/rclone-beta/install/rclone-v*/rclone /usr/sbin/
chown root:root /usr/sbin/rclone
chmod 755 /usr/sbin/rclone
mkdir -p /etc/ssl/certs/
cp /boot/config/plugins/rclone-beta/install/ca-certificates.crt /etc/ssl/certs/
if [ ! -f /boot/config/plugins/rclone-beta/.rclone.conf ]; then
    touch /boot/config/plugins/rclone-beta/.rclone.conf
fi
mkdir -p /boot/config/plugins/rclone-beta/logs
mkdir -p /boot/config/plugins/rclone-beta/scripts
cp /boot/config/plugins/rclone-beta/install/scripts/* /boot/config/plugins/rclone-beta/scripts/ -R -n

echo ""
echo "-----------------------------------------------------------"
echo " rclone-beta has been installed."
echo "-----------------------------------------------------------"
echo ""
```
ljm42 Posted March 10, 2018 (edited)

Bleeding Edge Toolkit

If the RCs are just too pedestrian for you, and you want to run the latest webui code from GitHub instead, check out the Bleeding Edge Toolkit: https://gist.github.com/ljm42/83f41014c871f237c93c5a805086e30f

I call it a "toolkit" because for the most part it is still up to you to decide which patches to install. I provide some examples, but I don't intend to update this every time there is a commit.

Instructions are in the script, but the idea is that you modify the script to install the patches you want and then set the script to run at first array start, at which point it will automatically download and install the patches for you. You can also run it manually every time you add a new patch to the list.

Big disclaimer... this is intended to be used on test systems only. The developers are certainly not intending unreleased code to be used in production systems. If you are interested in testing at this level, installing unRAID in a VM is a good place to start.

Edited February 23, 2019 by ljm42 (fix colors)
JonathanM Posted March 10, 2018

11 hours ago, ljm42 said:

    Bleeding Edge Toolkit

LOL. Perhaps you should name it unPredictable Results Toolkit.
Squid Posted March 11, 2018 (Author)

On 2018-03-09 at 8:21 PM, ljm42 said:

    I don't intend to update this every time there is a commit.

What!?!? You mean that you're not going to sit there and recode the script every day (or hourly!?)
bonienl Posted March 11, 2018

1 hour ago, Squid said:

    What!?!? You mean that you're not going to sit there and recode the script every day (or hourly!?)

LOL. That's not bleeding edge! Though personally I would be very prudent - not every commit works as intended (= has bugs). At the end of the day we are talking development cycles here.
ljm42 Posted March 12, 2018

10 hours ago, Squid said:

    What!?!? You mean that you're not going to sit there and recode the script every day (or hourly!?)

There were a few commits I wanted to test and I thought "no big deal, I'll just put a quick wrapper on 'patch' and grab those updates." Well, it turns out 'patch' has a few shortcomings, and by the time I was happy with the script there had been two more RCs and I completely forgot what I was so interested in testing. But at least the script is ready for next time!

9 hours ago, bonienl said:

    Though personally I would be very prudent, not every commit works as intended (=has bugs). At the end of the day we are talking development cycles here.

Agreed. This should only be used on a test system!
bonienl Posted March 12, 2018

20 minutes ago, ljm42 said:

    Agreed. This should only be used on a test system!

Another thing to be aware of: GitHub holds the files related to the GUI, but sometimes GUI changes are supported by system changes which are not available through GitHub.
NewDisplayName Posted March 15, 2018

On 1.12.2017 at 10:33 PM, landS said:

    can this script be duplicated for removing other items? for example if i replace ".DS_Store" with ".trash-1000" will it remove the .trash-1000 folder and all subfolders/subfiles on each disk when present? as the .trash-1000 ends up in the root of any given share, can the maxdepth be set to 1? Thanks!

Great idea. I tried:

```shell
#!/bin/bash
echo "Searching for (and deleting) .nfo Files in Filme"
echo "This may take a while"
find /mnt/user/Archiv/Filme -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;
echo "Searching for (and deleting) .nfo Files in Serien"
echo "This may take a while"
find /mnt/user/Archiv/Serien -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;
echo "Searching for (and deleting) .nfo Files in Musik"
echo "This may take a while"
find /mnt/user/Archiv/Musik -maxdepth 9999 -noleaf -type f -name ".nfo" -exec rm "{}" \;
```

But it doesn't work. It displays:

    Script location: /tmp/user.scripts/tmpScripts/Clean .nfo/script
    Note that closing this window will abort the execution of this script
    Searching for (and deleting) .nfo Files in Filme
    This may take a while
    Searching for (and deleting) .nfo Files in Serien
    This may take a while
    Searching for (and deleting) .nfo Files in Musik
    This may take a while

but nothing gets deleted... anyone any idea? Also, is there an easy way to add multiple files to search for?
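[Editor's note: a likely reason the script above finds nothing is that `-name ".nfo"` only matches files whose entire name is `.nfo`; matching by extension needs the glob `"*.nfo"`. Grouping patterns with `\( ... -o ... \)` also lets one find pass handle several extensions, which covers the multiple-files question. A minimal sketch - the default directory is a stand-in, not a real path from this thread:]

```shell
#!/bin/bash
# Sketch: -name ".nfo" matches only a file literally named ".nfo"; use "*.nfo"
# to match by extension. \( ... -o ... \) ORs several patterns together so one
# find pass covers them all. DIR is an assumption - set it to the real share.
DIR="${DIR:-/mnt/user/Archiv/Filme}"
[ -d "$DIR" ] || { echo "No such directory: $DIR"; exit 0; }
find "$DIR" -type f \( -name "*.nfo" -o -name "*.jpg" \) -exec rm -v "{}" \;
```

Run as `DIR=/mnt/user/Archiv/Serien ./clean_nfo.sh` (or edit the default) for each share.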
landS Posted March 15, 2018

Thanks @nuhll - I beat on this one for a while, but alas my ignorance won the battle.
cockroach Posted March 29, 2018

Suggestion for the clear_an_array_drive script: change

```shell
for d in /mnt/disk[1-9]*
do
  x=`ls -A $d`
  z=`du -s $d`
  y=${z:0:1}
# echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

  # the test for marker and emptiness
  if [ "$x" == "$marker" -a "$y" == "0" ]
  then
    found=1
    break
  fi
  let n=n+1
done
```

to

```shell
for d in /mnt/disk[1-9]*
do
  x=`ls -A $d`
# echo -e "d:"$d "x:"${x:0:20}

  # the test for marker
  if [ "$x" == "$marker" ]
  then
    z=`du -s $d`
    y=${z:0:1}
#   echo -e "d:"$d "x:"${x:0:20} "y:"$y "z:"$z

    # the test for marker and emptiness
    if [ "$x" == "$marker" -a "$y" == "0" ]
    then
      found=1
      break
    fi
  fi
  let n=n+1
done
```

This will prevent lengthy emptiness checks (the du -s) on disks without the clear-me marker.
Rudder2 Posted April 7, 2018

I have a Recycle Bin share for my Sonarr, Radarr, and Lidarr recycling, just in case I need to jump back to the previous version... What I didn't realize was that after years of this it's taking A LOT of space. I was wondering if there is a script to delete everything in my Recycle Bin that's older than 30 days, run every week.

Thank you in advance for your help,
Rudder2
kizer Posted April 11, 2018

On 4/7/2018 at 11:12 AM, Rudder2 said:

    I have a Recycle Bin share for my Sonarr and Radarr and Lidarr Recycling just in case I need to jump back to the previous version...What I didn't realize was that after years of this it's taking A LOT of space. I was wandering if there was a script to Delete everything in my Recycling Bin that's older than 30 days every week.

Are you using the Recycle Bin plugin? If so, it should have settings to delete files after a specified length of time. If not, you could use the find command. Check these out to give you a bit of an idea to get started:

https://askubuntu.com/questions/589210/removing-files-older-than-7-days
https://stackoverflow.com/questions/13868821/shell-script-to-delete-directories-older-than-n-days
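[Editor's note: the approach in those links boils down to a couple of find invocations. A rough sketch - the share path is an assumption, and the script could be scheduled weekly via the User Scripts plugin:]

```shell
#!/bin/bash
# Sketch of the find-based cleanup from the linked threads. BIN is an
# assumption - point it at the recycle share. Files untouched for more than
# 30 days are deleted, then any directories left empty are pruned.
BIN="${BIN:-/mnt/user/RecycleBin}"
[ -d "$BIN" ] || { echo "No such directory: $BIN"; exit 0; }
find "$BIN" -type f -mtime +30 -delete
find "$BIN" -mindepth 1 -type d -empty -delete
```

`-mtime +30` selects files whose modification time is more than 30 days old; swap in `+14` for a two-week window.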
Rudder2 Posted April 13, 2018

On 4/11/2018 at 9:22 AM, kizer said:

    Are you using the Recycle Bin Plugin? If so it should have settings to delete files after a specified length of time. If not you could use the find command.

I use both the Recycle Bin plugin and a Recycle Bin share that Sonarr, Lidarr, and Radarr move files into instead of deleting them. I would like this share to delete the files every 14 or 30 days. It might be redundant since I have the Recycle Bin plugin installed. I will have to look into how that works... Does it capture all files deleted from unRAID no matter what deleted them? If so, then I probably don't need the Recycle Bin share and can have my darr apps move files instead of deleting them. I will look at those links also.

Thank you,
Rudder2
trurl Posted April 13, 2018

19 minutes ago, Rudder2 said:

    I use both the Recycling Bin Plugin and I also have a Recycling Bin Share that Sonarr, Lidarr, and Radarr moves files in to instead of deleting them.

The Recycle Bin plugin uses a feature of SMB to keep files that are deleted over the network, so it wouldn't apply to your usage.
wgstarks Posted April 13, 2018

1 hour ago, Rudder2 said:

    I use both the Recycling Bin Plugin and I also have a Recycling Bin Share that Sonarr, Lidarr, and Radarr moves files in to instead of deleting them. I would like this share to delete the files every 14 or 30 days.

Try this: https://lime-technology.com/forums/topic/41044-recycle-bin-vfs-recycle-for-63-and-later-versions/?do=findComment&comment=589029

Make sure you read through the complete discussion. Basically you can map Radarr, Sonarr, etc. to the .recyclebin folder and use the Recycle Bin plugin to delete files after a set interval. You'll need to set up the user script described in that conversation to prevent the .recyclebin directory from being removed.
Rudder2 Posted April 13, 2018

52 minutes ago, wgstarks said:

    Basically you can map Radarr, Sonarr etc to the .recyclebin folder and use the recyclebin plugin to delete files after a set interval.

I like it! This looks like it will work beautifully! I had to manually create all the .Recycle.Bin folders in all my shares to begin with, but this was no biggie! I discovered 600GB in my darr apps' Recycle Bin share from the years since I upgraded to using all darr apps.

Thank you for your help!
Rudder2
kizer Posted April 17, 2018

I was hoping somebody could lend me a hand with a little code. I currently drop all my files into a folder and let a script move them around. However, I want to put a little logic into it and came up with two pieces, but I'm having issues combining them.

For instance, I want to search for files/folders that are older than a specific time and move them, which I figured out:

```shell
find /Source/* -maxdepth 1 -mmin +5 -exec mv {} /Destination/ \;
```

I also want to search for folders with a particular string in them, because typing out the same code for each season one by one can get really long:

```shell
mv /Source/*S{01..50}* /Destination/
```

I attempted some harebrained combining, but it doesn't work - it always results in an empty search:

```shell
find /SOURCE/* -iname "*s{01..50}*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;
```

Basically what I'm attempting to accomplish is moving TV shows I have in folders from one folder to another, while making sure they are at least 5 minutes old, so the script isn't moving files that are still being written before performing some other steps. I also want to make sure they are TV shows, and things like Plex and XBMC aka Kodi use Some.Show-S01E01.mp4 as their naming convention, which I've adhered to.

I can get things to work if I use the following, but honestly I was hoping for a workaround:

```shell
find /mnt/user/uploads/blah/* -iname "*s01*e*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;
find /mnt/user/uploads/blah/* -iname "*s02*e*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;
...
find /mnt/user/uploads/blah/* -iname "*s99*e*" -maxdepth 1 -type d -mmin +5 -exec mv {} /Destination/ \;
```

Update: I think I found a little workaround using FileBot to achieve what I'm trying to do.
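[Editor's note: the likely reason the combined attempt matched nothing is that brace expansion like {01..50} is performed by the shell before find ever runs, and it never happens inside quotes, so find was searching for a literal *s{01..50}*. GNU find's -iregex can express "S + two digits + E + two digits" in a single pattern. A rough sketch with hypothetical paths:]

```shell
#!/bin/bash
# Sketch (paths are assumptions): one POSIX extended regex matches S##E##
# folder names, replacing fifty separate -iname patterns. -mmin +5 keeps the
# original "at least 5 minutes old" guard against in-progress writes.
SRC="${SRC:-/mnt/user/uploads/blah}"
DEST="${DEST:-/mnt/user/tv}"
[ -d "$SRC" ] && [ -d "$DEST" ] || { echo "Set SRC and DEST first"; exit 0; }
find "$SRC" -mindepth 1 -maxdepth 1 -regextype posix-extended -type d -mmin +5 \
    -iregex '.*s[0-9]{2}e[0-9]{2}.*' -exec mv {} "$DEST/" \;
```

Note that -iregex matches against the whole path, hence the leading and trailing `.*`.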
Squid Posted April 22, 2018 (Author)

Allow unRaid's webUI to utilize the full width of your browser instead of being limited to 1920px:

```shell
#!/bin/bash
sed -i 's/max-width:1920px;//g' /usr/local/emhttp/plugins/dynamix/styles/*.css
```
wgstarks Posted April 25, 2018 (edited)

Could really use some advice, since I really don't know what I'm doing. I'm currently using this script to clean hidden Mac files from my Media share:

```shell
#!/bin/bash
echo "Searching for (and deleting) .DS_Store Files"
echo "This may take a while"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;
echo "======================="
echo "Searching for (and deleting) ._ files"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
echo "Cleanup Complete"
```

I would like to modify it to scan other shares as well. Will this work?

```shell
#!/bin/bash
echo "Searching for (and deleting) .DS_Store Files in Media and flash"
echo "This may take a while"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;
find /boot -maxdepth 9999 -noleaf -type f -name ".DS_Store" -exec rm "{}" \;
echo "======================="
echo "Searching for (and deleting) ._ files in Media and flash"
find /mnt/user/Media -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
find /boot -maxdepth 9999 -noleaf -type f -name "._*" -exec rm '{}' \;
echo "Cleanup Complete"
```

Edited April 25, 2018 by wgstarks (edited to correct path for /boot)
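[Editor's note: a sketch of a more maintainable variant of the same cleanup - the directory list is an assumption, and both filename patterns are combined into one find pass per location, so adding another share is a one-word change:]

```shell
#!/bin/bash
# Sketch: loop over a list of locations (edit the list to match your shares)
# and delete macOS metadata files in a single find pass per location.
for dir in /mnt/user/Media /boot; do
    [ -d "$dir" ] || continue
    echo "Searching for (and deleting) macOS metadata in $dir"
    find "$dir" -noleaf -type f \( -name ".DS_Store" -o -name "._*" \) -exec rm "{}" \;
done
echo "Cleanup Complete"
```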
jrdnlc Posted April 28, 2018 (edited)

On 8/7/2017 at 9:17 AM, Squid said:

    Automatically save syslog onto flash drive
    Set the script to run at First Only Array Start in the background

    #!/bin/bash
    mkdir -p /boot/logs
    FILENAME="/boot/logs/syslog-$(date +%s)"
    tail -f /var/log/syslog > $FILENAME

I have this scheduled to start on array start in the background, but it prevents my disks from starting and stays at "Mounting Disk". Once I disabled it, the array starts immediately. Should there be any changes to the script for the latest unRAID version? I'm on 6.5.0. @Squid

Edited April 28, 2018 by jrdnlc
Squid Posted April 28, 2018 (Author, edited)

Have another script run instead, and have that script run this one, forking it to the background:

```shell
#!/bin/bash
/boot/scripts/myRealScript.sh &
```

Edited April 28, 2018 by Squid
Caldorian Posted May 4, 2018 (edited)

On 2/15/2018 at 3:08 PM, Interstellar said:

    Dunno what it is but it just isn't happy. Just going to pull the drives and do a rebuild - only 4 hours as it only needs to do half the array.

Did you find a solution for this? I just started running it myself, and I'm finding the same thing. At its current rate, it's going to take over 80 hours to clear my 640GB disk.

Edit: Well, it's getting even worse. It started off reporting around 8.0MB/s. 1500s in, though, it's only at 2.5GiB complete and is now reporting 1.7MB/s. It seems to lose about 0.1MB/s for every 100MB written or so.

Edited May 4, 2018 by Caldorian (more info)
Interstellar Posted May 8, 2018 (edited)

On 5/4/2018 at 5:57 AM, Caldorian said:

    Did you find a solution for this? I just started running it myself, and I'm finding the same thing. At it's current rate, it's going to take over 80 hours to clear my 640GB disk.

Nope. I think I just pulled the drive and let parity rebuild, as it was faster.

Although I have a vague recollection that I re-formatted the drives so I could mount them, then filled them with a massive /dev/zero file (at full speed!), then did the /dev/md* command to clear the first 500M, then pulled the drive and forced the parity to remain valid. Ended up with a handful of parity errors after the 11-hour check. Not ideal, but at least I had a 99.999% valid parity whilst it checked.

The system works perfectly otherwise, and I haven't tried it again on newer versions.

Edited May 8, 2018 by Interstellar