Additional Scripts For User.Scripts Plugin



On 7/24/2016 at 11:55 AM, Squid said:

Enable / Disable Turbo Write Mode

 

Enable

 

#!/bin/bash
/usr/local/sbin/mdcmd set md_write_method 1
echo "Turbo write mode now enabled"
 

 

Disable

 

#!/bin/bash
/usr/local/sbin/mdcmd set md_write_method 0
echo "Turbo write mode now disabled"
 

 

 

turbo_writes.zip 1.02 kB · 84 downloads
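A combined variant of the two scripts above (a sketch: same mdcmd call, but with the mode passed as an argument instead of hard-coded; MDCMD is overridable only so the logic can be tested off the server):

```shell
#!/bin/bash
# Set turbo write mode from an argument: 1 = enabled, 0 = disabled.
# On unRAID the command is /usr/local/sbin/mdcmd.
MDCMD="${MDCMD:-/usr/local/sbin/mdcmd}"

mode="${1:-1}"
if [ "$mode" != "0" ] && [ "$mode" != "1" ]; then
  echo "Usage: $0 [0|1]" >&2
  exit 1
fi

"$MDCMD" set md_write_method "$mode"
echo "Turbo write mode set to $mode"
```

Save it as one user script and run it with 0 or 1 instead of keeping two separate scripts.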

 

If you wanted to enable/disable Auto mode, would it be write method 2?

 

Edit: Did a little more digging and found out auto is not what I thought it was, so the question is kind of meaningless now. 🤷‍♂️

Edited by jebusfreek666
Link to comment
On 11/10/2021 at 11:12 AM, Meles Meles said:

@hernandito - here's my script that I use for the same thing...

 

 

move_it.sh 5.06 kB · 1 download

 

Hi,

 

I finally had a chance to give this a try... I believe your script is for:

"Moves all non-hardlinked files from pool onto the array disk with the most free space"

 

When you say pool, does this mean the cache disk?

 

If yes... Say my movies are stored in: /mnt/user/Media/Movies

 

What would be the proper way to invoke your script?

 

I tried:

script -f Movies -s Media --mvg

 

then I get the below response:

-------------------------------------------------------------------------------------------------------
Searching '/mnt//Media/Movies/' for non-hardlinked files to move to '/mnt/disk9/Media/Movies/'
-------------------------------------------------------------------------------------------------------
./script: line 99: cd: /mnt//Media/Movies/: No such file or directory

 

Obviously it's missing the /user/ portion of the path. How do I enter that? Do I need to edit the script itself?

 

Thanks again!

 

H.

 

Link to comment
On 11/17/2021 at 6:20 PM, hernandito said:

Hi Meles.... in my case, my pool only contains "cache"... I have a single cache drive. What can I change on your script to reflect this?

 

image.png.2853ee9ad047f2aa4ca0d577df994cd0.png

 

Thank you.

 

 

 

I think I've worked out the issue you're having...

 

I'm guessing that your array is encrypted?

 

Therefore your disks are named like this

/dev/mapper/md1                12T  119G   12T   1% /mnt/disk1
/dev/mapper/md2                12T   84G   12T   1% /mnt/disk2
/dev/mapper/md3                12T   84G   12T   1% /mnt/disk3

 

the script was originally written on my other unRAID server which doesn't have encrypted disks, therefore the disks are like this...

/dev/md1         12T  7.3T  4.8T  61% /mnt/disk1
/dev/md2        8.0T  5.2T  2.9T  64% /mnt/disk2
/dev/md3        8.0T   56G  8.0T   1% /mnt/disk3
/dev/md4        8.0T  5.5T  2.6T  68% /mnt/disk4
/dev/md5         12T  6.4T  5.7T  54% /mnt/disk5

 

 

the line in the script that works out which disk has the most free space was originally just looking for /dev/md...

 

here's a modified version of the script (also tarted up somewhat)

 

move_it.sh

Moves all non-hardlinked files from pool onto the array disk with the most free space

 Usage
 -----

 move_it.sh -f SUBFOLDER [OPTIONS]

 Options
 -f, --folder=   the name of the subfolder (of the share) (default '.')
 -s, --share=    the name of the unRaid share (default 'data')
 -n, --nohup     use nohup, runs the mv in a job
 -d, --dotfiles  also move files which are named .blah (or in a folder named as such)
 --mvg           use mvg rather than mv so a status bar can display for each file
 -h, --help, -?  this usage

 

move_it.sh

Link to comment
  • 4 weeks later...

I'm trying to run a backup script that I think I found in this thread (maybe???). It was working fine for several months but has stopped recently.

The user script-

#!/bin/bash
/mnt/user/backups/backupDockers.sh -b unifi binhex-radarr binhex-sonarr binhex-qbittorrentvpn jackett mariadb tautulli PlexMediaServer plex-utills

 

When I try to run this now I get permission denied-

Quote

Script location: /tmp/user.scripts/tmpScripts/dockerBackups/script
Note that closing this window will abort the execution of this script
/tmp/user.scripts/tmpScripts/dockerBackups/script: line 2: /mnt/user/backups/backupDockers.sh: Permission denied

 

I tried running New Permissions on /mnt/user/backups but no change. I'm out of my league with this.

backupDockers.sh is attached. 
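For anyone hitting the same "Permission denied": the usual cause is a missing execute bit on the called script, and either restoring the bit or invoking the script through bash works (New Permissions doesn't set execute bits on files). A quick demonstration with a throwaway script; the real path would be /mnt/user/backups/backupDockers.sh:

```shell
#!/bin/bash
# Reproduce the error and both fixes with a temporary script.
tmp=$(mktemp --suffix=.sh)
printf '%s\n' '#!/bin/bash' 'echo "backup ran"' > "$tmp"
chmod -x "$tmp"

# Direct invocation without the execute bit fails:
"$tmp" 2>/dev/null || echo "direct call: Permission denied"

# Fix 1: restore the execute bit.
chmod +x "$tmp"
"$tmp"

# Fix 2: run it through bash, so the execute bit is irrelevant.
chmod -x "$tmp"
bash "$tmp"

rm -f "$tmp"
```

So in the user script, changing the line to `bash /mnt/user/backups/backupDockers.sh -b ...` is the least invasive fix.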

 

backupDockers.sh

Link to comment
  • 4 weeks later...

I'm looking for a script that will restart my Plex container using curl, for those rare instances where it goes down.

 

I've found this script online, but I'm not the best at debugging and can't seem to get it to work.

 


#!/bin/bash
#name=Plex Media Server check and restart
#description=This script will restart Plex if it does not respond after two attempts.
#arrayStarted=true

dockerid=$(docker ps -aqf "name=plex")
if [ "$dockerid" == "" ]; then
  echo "ERR $(date -Is) - Could not get a docker id for docker name \"plex\"."
  exit 1
fi

# Do not check between 1:55am and 2:30am
currentTime=$(date +"%H%M%S")
if [[ ! ( "$currentTime" < "015500" || "$currentTime" > "023000" ) ]]; then
  exit 0
fi

# Capture only curl's error output (-o /dev/null discards the page body;
# without it a successful fetch also produces non-empty output and
# falsely triggers a restart).
firstcheck=$(curl -sSf -m30 -o /dev/null https://myserver:32400/web/index.html 2>&1)
if [ "$firstcheck" != "" ]; then
  echo "WRN $(date -Is) - Plex did not respond on first check, waiting 15 seconds..."
  sleep 15
  secondcheck=$(curl -sSf -m30 -o /dev/null https://myserver:32400/web/index.html 2>&1)
  if [ "$secondcheck" != "" ]; then
    echo "WRN $(date -Is) - Plex did not respond on second check either, restarting docker container."
    echo "INF $(date -Is) - Stopping docker $dockerid."
    docker stop "$dockerid"
    echo "INF $(date -Is) - Waiting 15 seconds..."
    sleep 15
    echo "INF $(date -Is) - Starting docker $dockerid."
    docker start "$dockerid"
  else
    echo "INF $(date -Is) - Plex docker container responded on second attempt."
  fi
else
  echo "INF $(date -Is) - Plex docker container responded on first attempt."
fi

 

 

Edited by Bolagnaise
Link to comment
  • 3 weeks later...

This script makes a backup of the unRAID flash drive. I would like it to keep only the N most recent backups, so I can run it every day or week and have the older backups deleted automatically.

I have no idea about scripting, can someone help me please?

I have already modified the path in my server where the zip file is dropped, but I don't know how to do the rest.

 

#!/usr/bin/php -q
<?PHP
/* Copyright 2005-2018, Lime Technology
 * Copyright 2012-2018, Bergware International.
 *
 * This program is free software; you can redistribute it and/or
 * modify it under the terms of the GNU General Public License version 2,
 * as published by the Free Software Foundation.
 *
 * The above copyright notice and this permission notice shall be included in
 * all copies or substantial portions of the Software.
 */
?>
<?
$docroot = $docroot ?? $_SERVER['DOCUMENT_ROOT'] ?: '/usr/local/emhttp';
$var = file_exists('/var/local/emhttp/var.ini') ? parse_ini_file('/var/local/emhttp/var.ini') : [];
$dir = ['system','appdata','isos','domains'];
$out = ['prev','previous'];

$server = isset($var['NAME']) ? str_replace(' ','_',strtolower($var['NAME'])) : 'tower';
$mydate = date('Ymd-Hi');
$backup = "$server-flash-backup-$mydate.zip";

$used = exec("df /boot|awk 'END{print $3}'") * 1.5;
$free = exec("df /|awk 'END{print $4}'");
if ($free > $used) $zip = "/$backup"; else {
  foreach ($dir as $share) {
    if (!is_dir("/mnt/user/$share")) continue;
    $free = exec("df /mnt/user/$share|awk 'END{print $4}'");
    if ($free > $used) {$zip = "/mnt/user/$share/$backup"; break;}
  }
}
if ($zip) {
  chdir("/boot");
  foreach (glob("*",GLOB_NOSORT+GLOB_ONLYDIR) as $folder) {
    if (in_array($folder,$out)) continue;
    exec("zip -qr ".escapeshellarg($zip)." ".escapeshellarg($folder));
  }
  foreach (glob("*",GLOB_NOSORT) as $file) {
    if (is_dir($file)) continue;
    exec("zip -q ".escapeshellarg($zip)." ".escapeshellarg($file));
  }
  symlink($zip,"$docroot/$backup");
  echo $backup;
}
?>
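For the keep-only-N part of the question, a small sketch (BACKUP_DIR and KEEP are assumptions: point BACKUP_DIR at wherever the edited script drops its zip, and schedule this after the backup runs):

```shell
#!/bin/bash
# Keep only the newest KEEP flash backups, delete the rest.
BACKUP_DIR="/mnt/user/backups"   # adjust to your backup location
KEEP=5

# List matching zips newest-first, skip the first KEEP, remove the remainder.
ls -1t "$BACKUP_DIR"/*-flash-backup-*.zip 2>/dev/null \
  | tail -n +$((KEEP + 1)) \
  | while read -r old; do
      echo "Removing old backup: $old"
      rm -f -- "$old"
    done
```

This relies on the `servername-flash-backup-date.zip` naming from the script above, which contains no spaces, so the `ls | while read` pipeline is safe here.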

 

Link to comment
On 1/15/2022 at 8:36 AM, Bolagnaise said:

I’m looking for a script that will restart my plex container using curl for those instances where it rarely goes down. 

 

 

 

 

I run a docker container called "autoheal" (willfarrell/autoheal:1.2.0 is the version you want to use as the "latest" one is regenerated daily which is a PITA)

 

just give each container you want it to monitor a label of "autoheal=true"

image.png.2a7653944e558eafdcef3d5d8c5d62a1.png

 

it also needs some sort of healthcheck command (if there's not one included in the image itself). Here's my Plex one (goes in "extra parameters" on the advanced view):

 

 --health-cmd 'curl --connect-timeout 15 --silent --show-error --fail http://localhost:32400/identity'

 

*Remember: the port in this URL is the one from INSIDE the container, not the one it's mapped to on the server.*

 

 

you can do similar for most containers, although a trap for young players is that some of the images don't have curl, so you need to use wget (and alter parameters to suit)
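For those curl-less images, a wget equivalent of the health check above might look like this (a sketch; --spider just checks that the URL is reachable, and the port/path still need to match the container):

```shell
--health-cmd 'wget --timeout=15 --quiet --spider http://localhost:32400/identity'
```

Note that busybox-based images ship a cut-down wget: there the timeout flag is `-T 15` rather than `--timeout=15`.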

 

 

I've attached my template for autoheal (from /boot/config/plugins/dockerMan/templates-user)

 

 

 

my-autoheal.xml

Edited by Meles Meles
  • Thanks 1
Link to comment
On 2/1/2022 at 10:22 AM, Meles Meles said:

I run a docker container called "autoheal" (willfarrell/autoheal:1.2.0 is the version you want to use as the "latest" one is regenerated daily which is a PITA) ...

 

Thank you!

Link to comment
On 2/1/2022 at 10:22 AM, Meles Meles said:

I run a docker container called "autoheal" (willfarrell/autoheal:1.2.0 is the version you want to use as the "latest" one is regenerated daily which is a PITA) ...

 

 

Can I ask why you're using localhost:32400/identity and not localhost:32400/web/index.html as the website check? I get no response from that page when checking it with the curl command.

Edited by Bolagnaise
Link to comment
On 1/7/2017 at 7:14 AM, Squid said:

ok...  How about this:  (it's actually a lot simpler than it looks  ;) )

 

 

#!/usr/bin/php
<?PHP
# description=Moves a folder based upon utilization 
# arrayStarted=true

$source            = "/mnt/disk1/test";    # Note that mixing disk shares and user shares here will result in corrupted files
$destination       = "/mnt/disk2/test";    # Use either disk or user shares.  Not both.
$moveAt            = 90;                   # Utilization %  on the source to trigger the moves
$moveOldest        = true;                 # true moves the oldest files/folders first.  false moves the newest files/folders first
$emptySource       = false;                # false stops the moves when utilization drops below $moveAt.  true will move all of the files / folders
$runDuringCheck    = false;                # false pauses the moves during a parity check/rebuild.  true continues copying
$checkSleepTime    = 300;                  # seconds to delay before checking if a parity check is still in progress
$deleteDestination = true;                 # false if the destination file/folder already exists skips copying.  true deletes the version on the destination and then copies
$abortOnError      = true;                 # true aborts move operations if one of them results in an error.  false carries on with other files/folders
$finishScript      = false;                # false do nothing.  Set to something like "/boot/my_scripts/customScript.sh" to run a custom script at the completion of copying (notifications?)
$fullLogging       = false;                # set to true for full logging.  Useful for debugging.

function runMove() {
  global $source, $moveAt;

  $percent = (disk_total_space($source) - disk_free_space($source)) / disk_total_space($source) * 100;
  return ($percent > $moveAt);
}

function logger($string) {
  global $fullLogging;

  if ( $fullLogging ) {
    echo $string;
  }
}

if ( ! runMove() ) {
  echo "Set conditions to move are not met.  Exiting\n";
  exit();
}

$raw = array_diff(scandir($source),array(".",".."));
if ( ! $raw ) {
  echo "$source does not exist or is already empty\n";
  exit();
}
foreach ($raw as $entry) {
  $sourceContents[$entry] = filemtime("$source/$entry");
}
if ( $moveOldest ){
  asort($sourceContents);
} else {
  arsort($sourceContents);
}
logger(print_r($sourceContents,true));
$contents = array_keys($sourceContents);

exec("mkdir -p ".escapeshellarg($destination));

foreach ( $contents as $entry ) {
  if ( ! $emptySource ) {
    if ( ! runMove() ) {
      break;
    }
  }
  if ( ! $runDuringCheck ) {
    while ( true ) {
      $unRaidVars = parse_ini_file("/var/local/emhttp/var.ini");
      if ( ! $unRaidVars["mdResyncPos"] ) {
        break;
      }
      logger("Parity Check / Rebuild in progress.  Pausing $checkSleepTime seconds\n");
      sleep($checkSleepTime);
    }
  }
  if ( is_dir("$destination/$entry") ) {
    echo "$destination/$entry already exists.  ";
    if ( $deleteDestination ) {
      echo "Deleting prior to moving.\n";
      exec("rm -rf ".escapeshellarg("$destination/$entry"));
    } else {
      echo "Skipping.\n";
      continue;
    }
  }
  echo "Moving $source/$entry to $destination/$entry\n";
  exec("mv ".escapeshellarg("$source/$entry")." ".escapeshellarg("$destination/$entry"),$output,$returnValue);
  if ( $returnValue ) {
    echo "An error occurred moving $source/$entry to $destination/$entry  ";
    if ( $abortOnError ) {
      echo "Aborting\n";
      exit();
    } else {
      echo "Continuing\n";
    }
  }
}

if ( $finishScript ) {
  exec($finishScript,$finishOut);
  foreach ($finishOut as $line) {
    echo "$line\n";
  }
}
?>
 

 

Allows you to set the threshold of utilization on the source to trigger the move

Moves are selectable to either completely move the source or to stop after the threshold has been reached

Move either the oldest folders or the newest folders first

Option to pause during a parity check / rebuild

Option to skip if the folder to be moved already exists in the destination or to delete it and copy

Option to skip or abort if an error happens during the copy

Option to run a custom script (ie: notifications) upon completion.

Hi, 

 

I was looking for a way to move files and was hoping that your script could help me. Here is what I am trying to do;

 

My server contains lots of media in the form of movies and TV shows. I had set the split level on the media share incorrectly, which has caused all the media to be spread over multiple disks. Basically, season 1 of a TV show will be spread over multiple disks. This causes my disks to keep spinning up and down depending on what episode I am watching. I think that your script can help me fix that, because it would allow me to set the split level on the share to not split any directories and just move folders to another disk when the disk is full.

 

Your script sends the folder you choose to a pre-selected disk. I would like it to send the latest folder to the next disk that has enough space to store it, ideally with 100 GB of headroom on top. Is that possible?
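Squid's script keeps a fixed $destination, but the "next disk with enough space plus headroom" selection could be sketched separately in shell (SRC is a placeholder path; this only prints the candidate disk rather than moving anything):

```shell
#!/bin/bash
# Pick the first array disk with room for SRC plus ~100 GiB of headroom.
SRC="/mnt/disk1/Media/TV/Some Show/Season 1"   # placeholder folder

# Folder size in bytes plus 100 GiB.
need=$(( $(du -s --block-size=1 "$SRC" | cut -f1) + 100 * 1024**3 ))

for d in /mnt/disk*; do
  free=$(df --output=avail --block-size=1 "$d" | tail -n 1)
  if [ "$free" -gt "$need" ]; then
    echo "Candidate destination: $d"
    break
  fi
done
```

The `echo` could then be swapped for a move command once the selection looks right.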

Link to comment

Hi everyone! Newbie here.
Great work on those custom user scripts!! I am already utilizing two of them with great results!!

However, I need your assistance.
I would like to create a custom user script so that at a specific time (e.g., 07:00 PM) a specific docker container is started automatically, and at another specific time (e.g., 11:59 PM) the same container is stopped automatically.
I believe this can be set on a daily schedule, but unfortunately I am not knowledgeable when it comes to cron and custom scripts.
I would very much appreciate a solid example so that I can learn...

Thank you all in advance for your time and effort! 
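A worked example along those lines, assuming a container named my-container (a placeholder; use the exact name shown on the Docker tab). In User Scripts, create two scripts and give each a Custom schedule in standard cron syntax:

```shell
#!/bin/bash
# Script 1 of 2 - "start container"
# Custom schedule: 0 19 * * *   (daily at 7:00 PM)
docker start my-container
```

The companion stop script is identical except it runs `docker stop my-container` on the schedule `59 23 * * *` (daily at 11:59 PM). The five cron fields are minute, hour, day of month, month, and day of week.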

Link to comment

Hello everyone,

 

I understand that we can get the running status of User Scripts inside Unassigned Devices.

But do you know if it is possible to automatically launch a user script when a device is connected?

 

I back up my whole NAS to spare hard drives and have created a few user scripts that launch rsync.

 

Thanks !
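One possible route is the Unassigned Devices "device script" feature, which runs a script on device events with an ACTION environment variable. A hedged sketch (the action names like ADD/UNMOUNT and the mount point are assumptions; check the UD settings page for the exact values on your version):

```shell
#!/bin/bash
# Unassigned Devices device-script sketch. UD calls this with $ACTION set
# to the event name (ADD/UNMOUNT assumed here).
case "$ACTION" in
  'ADD')
    echo "Backup disk mounted, starting rsync"
    # placeholder paths - substitute your share and UD mount point
    rsync -a /mnt/user/Backups/ /mnt/disks/backup_drive/
    ;;
  'UNMOUNT')
    echo "Backup disk unmounting"
    ;;
esac
```

Attach it to the specific backup disk in the UD settings so it only fires for that device.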

Link to comment
  • 2 weeks later...

 

On 2/5/2022 at 8:11 PM, Bolagnaise said:

 

 

Can I ask why you're using localhost:32400/identity and not localhost:32400/web/index.html as the website check? I get no response from that page when checking it with the curl command.

 

 

sorry, just noticed this!

 

No idea really... I just did, and it worked! My theory was that the /identity page returned less data, so it was less "work" for the server to do (for when 16 cores/32 threads isn't enough?)

Link to comment
8 hours ago, Meles Meles said:

 

 

 

sorry, just noticed this!

 

No idea really... I just did, and it worked! My theory was that the /identity page returned less data, so it was less "work" for the server to do (for when 16 cores/32 threads isn't enough?)

No problem. I found that didn't work for me, and I ended up using IPADDRESS:32400/web/index.html, as the /identity page returns a blank page for me. I searched and found several other people saying the same thing: the page is meant to return some information but instead shows a blank page. Either way, it's been running perfectly for a few weeks now thanks to you.

Link to comment
  • 1 month later...

Just a quick update for anyone looking to implement autoheal in docker: I have found that adding -i to the curl command correctly returns the 200 OK HTTP status for the /identity page. The HTTP response is much shorter, so I recommend it.

 

Here's the advanced docker config for my Plex server with the health cmd added. Many thanks again to @Meles Meles

 

--health-cmd 'curl -i --connect-timeout 15 --silent --show-error --fail 192.168.1.100:32400/identity'

 

Link to comment
  • 3 weeks later...

Hello Everyone,

 

Thanks for helping the community with these great scripts.

 

However, I have a small request. I have one docker container that always fails or crashes for no reason. I tried to work around this with a script that starts the container every hour, but sometimes it crashes in between and I lose the container's function. So I need a script that monitors the docker container and, if it crashes or stops, starts it again right away.

 


Thank you all in advance for your time and effort! 
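A minimal watchdog sketch for that (the container name is a placeholder; schedule it every few minutes in User Scripts, e.g. with the custom cron `*/5 * * * *`):

```shell
#!/bin/bash
# Restart the named container whenever it is not running.
name="my-container"   # placeholder - use your container's exact name

# "true" when the container is running; empty/false when stopped or missing.
state=$(docker inspect -f '{{.State.Running}}' "$name" 2>/dev/null)
if [ "$state" != "true" ]; then
  echo "$(date -Is) - $name is not running, starting it"
  docker start "$name"
fi
```

Finding out why the container crashes (via `docker logs my-container`) is still worth doing; this only papers over the symptom.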

 

Link to comment
