trurl Posted December 13, 2019

7 hours ago, rbronco21 said: I'm using "Clear an unRAID array data drive" to clear disks for encryption. The script stopped at 40GB on my 8TB disk and I am not sure what state I am in now. Can I try it again? Do I need to rebuild it before anything else?

I don't understand this whole post. Clearing a drive in the array is to prepare it for removal, not for reformatting. And the fact that you mentioned rebuild makes me really worried. If you have already moved all the data off the disk, why would you need to clear it? In fact, why would you clear a disk you weren't going to remove? The whole point of clearing a disk that is already in the array is to maintain parity so it will still be valid after the disk is removed from the array.

Encryption is just a reformat. You don't even have to have an empty disk, but you do need to copy any data you want to keep elsewhere. If you have not already moved or copied all the data off the disk, then that data is lost.
JorgeB Posted December 13, 2019

2 minutes ago, trurl said: If you have already moved all the data off the disk why would you need to clear it?

He's using the clear drive script so parity is maintained while removing a drive.

itimpi Posted December 13, 2019

5 minutes ago, johnnie.black said: He's using the clear drive script so parity is maintained while removing a drive.

Agreed, that is the purpose of the script, but as I read it there is no intention of removing the drive.

JorgeB Posted December 13, 2019

5 minutes ago, itimpi said: Agreed that is the purpose of the script, but as I read it there is no intention of removing the drive.

Ohh, I missed that part; then I also don't know why he's using it.
rbronco21 Posted December 13, 2019 Share Posted December 13, 2019 I moved everything off the drive and am clearing it so I can reformat and encrypt it while maintaining parity. I read a bunch of methods to shrink arrays and encrypt discs and this is the hybrid method I came up with. I wouldn't be surprised if I am doing extra steps, but this seemed to be the safest way I could put together. Thanks for the interest and sorry for not including more details. Is there a better way to do this? Quote Link to comment
JorgeB Posted December 13, 2019 Share Posted December 13, 2019 15 minutes ago, rbronco21 said: I moved everything off the drive and am clearing it so I can reformat and encrypt it while maintaining parity. No need to clear for that, reformatting with an encrypted filesystem maintains parity. Quote Link to comment
rbronco21 Posted December 13, 2019 Share Posted December 13, 2019 So I can move everything off the drive, then reformat it? That will definitely save some time. Quote Link to comment
JorgeB Posted December 13, 2019 Share Posted December 13, 2019 2 minutes ago, rbronco21 said: So I can move everything off the drive, then reformat it? That will definitely save some time. Yep. Quote Link to comment
DZMM Posted January 8, 2020 (edited)

On 8/3/2016 at 2:28 AM, Squid said:

A slightly enhanced version of the run mover at a certain threshold script. This script will additionally skip running mover (optional) if a parity check / rebuild has already been started. It only makes sense to run this script on a schedule, and disable the built-in schedule by editing the config/share.cfg file on the flash drive. Look for a line that says something like:

shareMoverSchedule="0 4 * * *"

and change it to:

shareMoverSchedule="#0 4 * * *"

followed by a reboot. Note that any changes to global share settings (or mover settings) will probably wind up re-enabling the mover schedule.

#!/usr/bin/php
<?PHP
$moveAt = 0;              # Adjust this value to suit (% cache drive full to move at)
$runDuringCheck = false;  # change to true to run mover during a parity check / rebuild

$diskTotal = disk_total_space("/mnt/cache");
$diskFree  = disk_free_space("/mnt/cache");
$percent   = ($diskTotal - $diskFree) / $diskTotal * 100;

if ( $percent > $moveAt ) {
  if ( ! $runDuringCheck ) {
    $vars = parse_ini_file("/var/local/emhttp/var.ini");
    if ( $vars['mdResync'] ) {
      echo "Parity Check / Rebuild Running - Not executing mover\n";
      exec("logger Parity Check / Rebuild Running - Not executing mover");
    } else {
      exec("/usr/local/sbin/mover");
    }
  } else {
    exec("/usr/local/sbin/mover");
  }
}
?>

run_mover_at_threshold_enhanced.zip 1.05 kB · 71 downloads

This script checks whether a parity check is running, and I'd like to do a similar check in a different script for whether the mover is running. Can anyone help please, i.e. what's the mover equivalent of:

$vars = parse_ini_file("/var/local/emhttp/var.ini");
if ( $vars['mdResync'] ) {

Thanks in advance.
Update: Found the answer: https://gist.github.com/fabioyamate/4087999

if [ -f /var/run/mover.pid ]; then
  if ps h `cat /var/run/mover.pid` | grep mover ; then
    echo "mover already running"
    exit 0
  fi
fi

Edited January 8, 2020 by DZMM
Added answer
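For anyone wanting both guards in one place, a bash sketch combining the parity check from Squid's script with the mover pid test above might look like this (the Unraid paths are taken from the snippets in this thread; treat them as assumptions for your version):

```shell
#!/bin/bash
# Guard helpers: skip work while a parity check/rebuild or mover is active.
# VAR_INI and MOVER_PID default to the Unraid paths used earlier in the thread.
VAR_INI="${VAR_INI:-/var/local/emhttp/var.ini}"
MOVER_PID="${MOVER_PID:-/var/run/mover.pid}"

parity_running() {
    # mdResync is non-zero in var.ini while a parity check/rebuild runs
    grep -Eq '^mdResync="?[1-9]' "$VAR_INI" 2>/dev/null
}

mover_running() {
    # same pid-file test as the gist above
    [ -f "$MOVER_PID" ] && ps h "$(cat "$MOVER_PID")" 2>/dev/null | grep -q mover
}

if parity_running || mover_running; then
    echo "busy - skipping"
else
    echo "idle - safe to run"
fi
```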
rcmpayne Posted February 1, 2020

On 7/23/2016 at 1:00 PM, Squid said:

Run mover at a certain threshold of cache drive utilization. Adjust the value to move at within the script. It really only makes sense to use this script as a scheduled operation, and it would have to be set to a frequency (hourly?) more often than how often mover itself runs normally.

#!/usr/bin/php
<?PHP
$moveAt = 70;   # Adjust this value to suit.

$diskTotal = disk_total_space("/mnt/cache");
$diskFree  = disk_free_space("/mnt/cache");
$percent   = ($diskTotal - $diskFree) / $diskTotal * 100;

if ( $percent > $moveAt ) {
  exec("/usr/local/sbin/mover");
}
?>

run_mover_at_threshold.zip 717 B · 94 downloads

I just added this script, set it to 80%, and run it hourly. The question I have is: what do you set the Mover Settings in http://Server/Settings/Scheduler to? It looks like it can't be disabled, so I assume you max it out to monthly, maybe?
Squid Posted February 1, 2020 Author

Just now, rcmpayne said: I just added this script and set it to 80% and run hourly. The question I have is what do you set the Mover Settings in http://Server/Settings/Scheduler set to?

I would use the mover tuner plugin instead of the script.
rcmpayne Posted February 1, 2020 Share Posted February 1, 2020 7 minutes ago, Squid said: I would use the mover tuner plugin instead of the script. Ok, just installed it... looks like the same question still exist for this new plugin right? Quote Link to comment
Squid Posted February 1, 2020 Author Share Posted February 1, 2020 You set the mover schedule to be as often as you want the plugin to run and apply it's rules. If you want at some point mover to move all the files then you set that custom cron schedule down at the bottom of the plugin's settings (or run mover manually) Quote Link to comment
rcmpayne Posted February 1, 2020 Share Posted February 1, 2020 9 hours ago, Squid said: You set the mover schedule to be as often as you want the plugin to run and apply it's rules. If you want at some point mover to move all the files then you set that custom cron schedule down at the bottom of the plugin's settings (or run mover manually) OK I get it now, thanks. If I set the default settings to run every hour or every day, when it triggers, it won't just start moving, it will check the plug in add on and then only move if the size is greater than 80% of the disk. Quote Link to comment
guru69 Posted February 10, 2020 (edited)

My Borg backup script for dockers and VMs...

I have been using CA Backup for my Unraid backups for quite a while, but I discovered Borg Backup in Nerd Tools so I decided to try it. My goal was to make my backups smaller and faster, with less downtime. I prefer to have individual schedules and backups for each VM and docker on Unraid, so I thought I'd share my unified (VM/Docker) Borg backup script. I have reduced my Plex downtime to 25 minutes on the first backup, and around 7 minutes on subsequent backups thanks to deduplication.

The script assumes you are using the default locations for dockers and VMs, and the email log only works when the script is scheduled (the User Scripts log is written), not while it is run manually in User Scripts. Maybe you guys know a better way to capture the output? I am not using an encrypted Borg repo for these backups, but I might make an additional version for backing up to an encrypted Borg repo. It is set to retain 4 backups currently. Any additions or changes to improve this are greatly welcomed 🙂

#!/bin/bash
arrayStarted=true

# Unraid display name of docker/vm
displayname=NS1

# Unraid backup source data folder
# (VM = /mnt/user/domains... Docker = /mnt/cache/appdata...)
backupsource=/mnt/user/domains/NS1

# The name of this Unraid User Script
scriptname=backup-ns1-vm

# Path to your Borg backup repo
export BORG_REPO=/mnt/user/Backup/Borg-backups/unencrypted

# Email address to receive backup log
emailnotify=[email protected]

###### Don't Edit Below Here ######

# Build variables, clear log
today=$(date +%m-%d-%Y.%H:%M:%S)
export backupname="${displayname}_${today}"
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes
>/tmp/user.scripts/tmpScripts/$scriptname/log.txt

# Determine if Docker or VM, set backupsourcetype var
backupsourcetype=
[[ $backupsource = */mnt/cache/appdata* ]] && backupsourcetype=Docker
[[ $backupsource = */mnt/user/domains* ]] && backupsourcetype=VM
echo "Backup source type: $backupsourcetype"

# Shutdown the Docker/VM
[[ $backupsourcetype = Docker ]] && docker stop $displayname
[[ $backupsourcetype = VM ]] && virsh shutdown $displayname --mode acpi

# Create backup
echo "Backing up $displayname $backupsourcetype folder..."
borg create --stats $BORG_REPO::$backupname $backupsource
sleep 5

# Start the Docker/VM
[[ $backupsourcetype = Docker ]] && docker start $displayname
[[ $backupsourcetype = VM ]] && virsh start $displayname

# Pruning, keep last 4 backups and prune older backups, give stats
borg prune -v --list --keep-last 4 --prefix $displayname $BORG_REPO

# Email the backup log
echo "Subject: Borg: $displayname $backupsourcetype Backup Log" > /tmp/email.txt
echo "From: Unraid Borg Backup" >> /tmp/email.txt
cat /tmp/email.txt /tmp/user.scripts/tmpScripts/$scriptname/log.txt > /tmp/notify.txt
sendmail $emailnotify < /tmp/notify.txt

Always test your backups before you rely on them! Here are the commands I use for testing/restoring the backups.
Stop your docker/VM first, and rename or move the folder if you are testing and are not replacing the data. Borg will restore to the current folder, so change to the / directory to have it restored to the original location, or run it elsewhere if not. Edit the first two commands with your docker/VM name and repo location:

displayname=NS1
export BORG_REPO=/mnt/user/Backup/Borg-backups/unencrypted
export BORG_UNKNOWN_UNENCRYPTED_REPO_ACCESS_IS_OK=yes
borg list $BORG_REPO

Copy the name of the backup you want to test/restore from the Borg repo list output to your clipboard. Mine is NS1_02-07-2020.19:00:01, so I will set the restorebackupname variable to this:

export restorebackupname=NS1_02-07-2020.19:00:01
cd /
borg extract --list $BORG_REPO::$restorebackupname
restorebackupname=

Start up your docker/VM and make sure it works.

Edited February 10, 2020 by guru69
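On top of test restores, Borg can also verify a repository in place: `borg check` validates repository and archive metadata, and adding `--verify-data` re-reads every data chunk (slow but thorough). A guarded sketch, using the repo path from the script above as the assumed default:

```shell
#!/bin/bash
# Verify the Borg repo; skip quietly if borg or the repo is unavailable.
BORG_REPO="${BORG_REPO:-/mnt/user/Backup/Borg-backups/unencrypted}"
if command -v borg >/dev/null 2>&1 && [ -d "$BORG_REPO" ]; then
    borg check --verify-data "$BORG_REPO"
else
    echo "skipping: borg missing or repo not found at $BORG_REPO"
fi
```

Scheduling this monthly alongside the backups gives early warning of silent repo corruption.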
guruleenyc Posted February 10, 2020

@guru69 brilliant! Tested and working on my side with dockers and VMs. Thank you!
alturismo Posted February 16, 2020

A few quick lines for calibre-web to convert books and add them to the library automatically; I use it in a scheduled task. Drop ebook files (no folders) in startpath: the script will convert to .mobi if not already present, add all existing formats to calibre, and REMOVE them from startpath after the job is done!!! The DOCKER_MOD as described by lsio is necessary for ebook conversion:

Quote

-e DOCKER_MODS=linuxserver/calibre-web:calibre #optional & x86-64 only Adds the ability to perform ebook conversion

In case it's useful for someone:

#!/bin/bash
startpath="/mnt/cache/Temp/download/books/"   ### place where to look for new files
confpath="/mnt/user/appdata/calibre-web"      ### appdata path to calibre-web
searchpattern=".mobi"                         ### what type of files to look for and add if missing
inputdir="/import"                            ### inputdir as set in calibre-web docker (manually)
outputdir="/books"                            ### outputdir as set in calibre-web (check webui)

cd $startpath

### look for books and convert
find . -mindepth 1 -iname "*"|sed "s|^\./||"|while read fname; do
  sourcefile="${fname%.*}"
  searchfile="$sourcefile$searchpattern"
  inputfile="$inputdir/$fname"
  convfile="$inputdir/$searchfile"
  if ! [[ "$searchfile" == "$fname" ]]; then
    if ! [ -n "$(lsof "$fname")" ]; then
      if ! [ -f "$searchfile" ]; then
        echo "docker exec calibre-web ebook-convert "\""$inputfile"\"" "\""$convfile"\""" > "$confpath/newbooks.txt"
        echo "docker exec calibre-web ebook-convert "\""$inputfile"\"" "\""$convfile"\""" >> "$confpath/newbookshistory.txt"
        echo "creating $searchfile ..."
        chmod +x "$confpath/newbooks.txt"
        cd $confpath
        ./newbooks.txt > /dev/null
        sleep 1
        cd $startpath
      else
        cd $startpath
        echo "$searchfile already there..."
      fi
    else
      echo "$sourcefile in use ..."
    fi
  else
    echo "skipping $searchpattern for conversion ..."
  fi
done

### check files are free and prepare adding
find . -mindepth 1 -iname "*"|sed "s|^\./||"|while read fname; do
  if ! [ -n "$(lsof "$fname")" ]; then
    echo "$fname ready ..."
    echo "docker exec calibre-web calibredb add -r "\""$inputdir"\"" --with-library="\""$outputdir"\""" > "$confpath/addbooks.txt"
  else
    echo "$fname in use ..."
    echo "echo "\""file in use"\""" > "$confpath/addbooks.txt"
    break
  fi
done

### exec addbooks
chmod +x "$confpath/addbooks.txt"
cd $confpath
./addbooks.txt
sleep 1

### remove books
rm -f "$startpath"*
exit

As a note: since I need the double quotes and still have some trouble getting them to work properly, I use the workaround of writing the command to a file and executing it.
HondSchaap Posted February 17, 2020

I created a small script to help me fight the issues with hardware transcoding on Apollo Lake and Gemini Lake CPUs. Basically, the Intel Media Driver doesn't work well with hardware transcoding on Plex, resulting in blocky streams. The workaround is to delete the iHD driver so it falls back to the older Intel VA-API driver (see this post for more info: hardware-transcoding-broken-when-burning-subtitles-apollolake-based-synology-nases).

The code below checks the Plex container for the iHD driver and, if present, deletes the driver and restarts the container. This script will probably be deprecated once the issues are resolved, but I got a little tired of logging into the shell manually each time the container was updated.

con="plex"
echo "Checking if iHD_drv_video.so exists..."
if docker exec $con sh -c "test -f /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so"; then
    echo "<font color='red'><b>iHD_drv_video.so found, deleting...</b></font>" && \
    docker exec $con sh -c "rm /usr/lib/plexmediaserver/lib/dri/iHD_drv_video.so" && \
    echo "iHD_drv_video.so removed, rebooting Plex" && \
    docker restart $con
fi
echo "All good!"

Any feedback on improving this script is more than welcome!
Walter S Posted February 19, 2020

Hi Squid, I'm new and trying some of the settings in the User Scripts plugin. Enabling/disabling turbo write mode and auto turbo write: what's the difference, are there any advantages, and should one be using it on a daily basis? Are there any other places (sites) where we can find trusted, useful scripts?

Nice work. I was able to fix the issue I had with "write_cache_on_disk_10", which is strange because this drive is a WD Red retail drive, not shucked.
Squid Posted February 19, 2020 Author

1 minute ago, Walter S said: Hi Squid, I.m new and trying some of the settings in the User Scrips plug-in. enabling/disabling turbo write mode and auto turbo write, whats the difference and any advantages and should one be using it on a daily basis?

The turbo write scripts were a precursor to the auto turbo plugin, and it would be better to utilize it instead.
alturismo Posted March 5, 2020

A small script to switch between two VMs, as I have an Ubuntu and a Windows VM which share the same hardware passthrough. It checks whether one of the given VMs is running; if so, it stops it, waits until it has stopped, and starts the other. It'll do nothing if either both are off or both are running, just in case...

#!/bin/bash
vm1="Media PC"   ## Name of first VM
vm2="Ubuntu"     ## Name of second VM
############### End config

vm_running="running"
vm_down="shut off"
vm1_state=$(virsh domstate "$vm1")
vm2_state=$(virsh domstate "$vm2")
echo "$vm1 is $vm1_state"
echo "$vm2 is $vm2_state"

if [ "$vm1_state" = "$vm_running" ] && [ "$vm2_state" = "$vm_down" ]; then
    echo "$vm1 is running shutting down"
    virsh shutdown "$vm1"
    vm1_new_state=$(virsh domstate "$vm1")
    until [ "$vm1_new_state" = "$vm_down" ]; do
        echo "$vm1 $vm1_new_state"
        vm1_new_state=$(virsh domstate "$vm1")
        sleep 2
    done
    echo "$vm1 $vm1_new_state"
    sleep 2
    virsh start "$vm2"
    sleep 1
    vm2_new_state=$(virsh domstate "$vm2")
    echo "$vm2 $vm2_new_state"
else
    if [ "$vm2_state" = "$vm_running" ] && [ "$vm1_state" = "$vm_down" ]; then
        echo "$vm2 is running shutting down"
        virsh shutdown "$vm2"
        vm2_new_state=$(virsh domstate "$vm2")
        until [ "$vm2_new_state" = "$vm_down" ]; do
            echo "$vm2 $vm2_new_state"
            vm2_new_state=$(virsh domstate "$vm2")
            sleep 2
        done
        echo "$vm2 $vm2_new_state"
        sleep 2
        virsh start "$vm1"
        sleep 1
        vm1_new_state=$(virsh domstate "$vm1")
        echo "$vm1 $vm1_new_state"
    else
        echo "$vm1 $vm1_state and $vm2 $vm2_state doesn't match"
    fi
fi
jbuszkie Posted March 12, 2020

*Sigh* Everyone may know this but I didn't... When using User Scripts, if you click outside the script window onto the main window, that will also kill the script, not just hitting the "X". I was about a quarter of the way into zeroing out a drive when I accidentally clicked outside the script window and killed it! *sigh*
Squid Posted March 12, 2020 Author

7 minutes ago, jbuszkie said: *Sigh* Everyone may know this but I didn't... When using user scripts if you click outside the script window in the main window that will also kill the script.

Didn't think that it allowed outside clicks (I'll change that)
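Until the plugin blocks outside clicks, a defensive workaround for long jobs like zeroing a drive is to detach them from the page entirely with nohup, so nothing the browser does can kill them. A sketch (TASK and the log/pid paths are placeholders; substitute your real script):

```shell
#!/bin/bash
# Launch a long task detached from the terminal/browser session.
# TASK and LOG are placeholders -- substitute your real script and log path.
TASK="${TASK:-true}"
LOG="${LOG:-/tmp/longtask.log}"

nohup bash -c "$TASK" > "$LOG" 2>&1 &
echo $! > /tmp/longtask.pid
echo "Detached PID $(cat /tmp/longtask.pid); follow progress with: tail -f $LOG"
```

The job then keeps running even if the window, the SSH session, or the browser goes away.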
NitroNine Posted March 16, 2020 (edited)

So I just ran this on disk 8 of my array (I first moved all the data off it to the other disks in my array). It completed (5TB drive, so it took a while) and I clicked Done. I then stopped the array, but when I went to Tools > New Config and checked "Preserve current assignments" for all, it didn't give me the option to Apply, just Done, and I can't remove the drive from my array. I am using Unraid 6.8.3 nvidia and don't have any parity drives at the moment. Basically this drive is showing its age, and before it starts to fail I want to remove it from the array and replace it with a new 6TB drive. Am I going about this the wrong way, and/or is there an easier way for me to remove a drive from my array without any parity?

Edited March 16, 2020 by NitroNine
itimpi Posted March 16, 2020

4 hours ago, NitroNine said: So I just ran this on disk 8 of my array (i first moved all the data off it to the other disks in my array), it completed (5TB drive so it took a while), I clicked Done. I then stopped the array, but when I went to tools>new config and checked all for Preserve current assignments, It didn't give me the option for apply, just Done.

Sounds like you forgot to check the checkbox under Tools >> New Config confirming that you want to run the operation? Until you have done that, the Apply button will not be active.

However, if you are intending to replace the drive, why not simply use the normal procedure for Replacing a Disk Drive, which does not involve going via the Tools >> New Config route (although both options are viable)?