Squid

Additional Scripts For User.Scripts Plugin


#!/usr/bin/php
<?PHP
$startStopped = true;

require_once("/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php");

$DockerClient = new DockerClient();
$DockerTemplates = new DockerTemplates();

$info = getRunningContainers();

foreach ($info as $contName=>$cont) {
  if ( $startStopped && ! $cont['running'] ) {
    $DockerClient->startContainer($cont['Id']);
  }
  if ( $cont['running'] || $startStopped ) {
    echo "$contName  Size: ";
    $size = rtrim(exec("docker exec $contName du -shx / 2>/dev/null"),"/ \t") ?: "N/A";
    echo "$size  Logs: ".human_filesize(exec("docker logs $contName 2>&1 | wc -c"),1)."\n";
  }
  if ( ! $cont['running'] && $startStopped ) {
    $DockerClient->stopContainer($cont['Id']);
  }
}

function human_filesize($bytes, $decimals = 2) {
  $size = array('B','kB','MB','GB','TB','PB','EB','ZB','YB');
  $factor = floor((strlen($bytes) - 1) / 3);
  return sprintf("%.{$decimals}f", $bytes / pow(1024, $factor)) . @$size[$factor];
}

function getRunningContainers() {
  global $DockerClient, $DockerTemplates;

  $containers = $DockerClient->getDockerContainers();
  $info = $DockerTemplates->getAllInfo();

  foreach ($containers as $container) {
    $info[$container['Name']]['running'] = $container['Running'];
    $info[$container['Name']]['Id'] = $container['Id'];
    $infoTmp[$container['Name']] = $info[$container['Name']];
  }
  return $infoTmp ?: array();
}
?>

 


Awesome!

 

Now we just need to convince @bonienl to embed this into the docker advanced view page: a size column, with a button at the bottom to calculate all sizes, just like the share calculation page. The log size should be displayed beside the view log link as part of the page render, since that figure doesn't require the docker to be started.

 


I am still trying to get my dockers all backed up nicely.

 

I have them all backing up with a cron script, but I don't think they like getting backed up while running.

 

I wondered if the commands behind the 'pause all dockers' button would work, so I could pause, back up, and resume.

 

If it's possible, is there a script, or partial script, for pausing and then resuming once the backup completes?

 

Thanks in advance
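For what it's worth, here is a rough sketch of that pause / backup / resume flow, built on the standard `docker pause` and `docker unpause` commands. The backup command is a placeholder for your existing cron script, and the `DOCKER_BIN` override exists only so the flow can be dry-run without a Docker host:

```shell
#!/bin/bash
# Hypothetical sketch: pause every running container, run a backup command,
# then unpause them all, even if the backup fails.
# DOCKER_BIN is overridable only so the logic can be dry-run without Docker.
pause_backup_resume() {
  local docker_bin="${DOCKER_BIN:-docker}"
  local backup_cmd="$1"
  local running
  running="$("$docker_bin" ps -q)"   # IDs of currently running containers

  [ -n "$running" ] && "$docker_bin" pause $running

  $backup_cmd                        # e.g. your existing cron backup script
  local rc=$?

  [ -n "$running" ] && "$docker_bin" unpause $running
  return $rc
}

# Usage: pause_backup_resume "/boot/config/my_backup.sh"
```

Note that paused containers still show up in `docker ps`, so the same ID list is reused to unpause.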

1 hour ago, local.bin said:

I am still trying to get my dockers all backed up nicely.

 

I have them all backing up with a cron script, but I don't think they like getting backed up while running.

 

Why aren't you using the CA Backup plugin? It will back up appdata and flash, which is all you really need to get your dockers going again exactly like before.

 

https://forums.unraid.net/topic/61211-plugin-ca-appdata-backup-restore-v2/

 

On 9/16/2018 at 3:37 PM, Squid said:

#!/usr/bin/php
<?PHP
$startStopped = true;

require_once("/usr/local/emhttp/plugins/dynamix.docker.manager/include/DockerClient.php");

$DockerClient = new DockerClient();
$DockerTemplates = new DockerTemplates();

$info = getRunningContainers();

foreach ($info as $contName=>$cont) {
  if ( $startStopped && ! $cont['running'] ) {
    $DockerClient->startContainer($cont['Id']);
  }
  if ( $cont['running'] || $startStopped ) {
    echo "$contName  Size: ";
    $size = rtrim(exec("docker exec $contName du -shx / 2>/dev/null"),"/ \t") ?: "N/A";
    echo "$size  Logs: ".human_filesize(exec("docker logs $contName 2>&1 | wc -c"),1)."\n";
  }
  if ( ! $cont['running'] && $startStopped ) {
    $DockerClient->stopContainer($cont['Id']);
  }
}

function human_filesize($bytes, $decimals = 2) {
  $size = array('B','kB','MB','GB','TB','PB','EB','ZB','YB');
  $factor = floor((strlen($bytes) - 1) / 3);
  return sprintf("%.{$decimals}f", $bytes / pow(1024, $factor)) . @$size[$factor];
}

function getRunningContainers() {
  global $DockerClient, $DockerTemplates;

  $containers = $DockerClient->getDockerContainers();
  $info = $DockerTemplates->getAllInfo();

  foreach ($containers as $container) {
    $info[$container['Name']]['running'] = $container['Running'];
    $info[$container['Name']]['Id'] = $container['Id'];
    $infoTmp[$container['Name']] = $info[$container['Name']];
  }
  return $infoTmp ?: array();
}
?>

 

 

Instead of du, you could use docker save for images and docker export for containers. That will work even in cases where du doesn't.
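To sketch that idea (a hypothetical helper, not part of any plugin): `docker export` streams a container's filesystem and `docker save` streams an image, so piping either through `wc -c` gives a byte count without needing du inside the container. `DOCKER_BIN` is overridable only so the plumbing can be dry-run:

```shell
#!/bin/bash
# Hypothetical sketch: size a container or image by streaming it and counting
# bytes, as an alternative to "docker exec ... du".
container_size_bytes() {
  local docker_bin="${DOCKER_BIN:-docker}"
  "$docker_bin" export "$1" | wc -c    # container filesystem as a tar stream
}

image_size_bytes() {
  local docker_bin="${DOCKER_BIN:-docker}"
  "$docker_bin" save "$1" | wc -c      # image (all layers) as a tar stream
}

# Usage: container_size_bytes mycontainer ; image_size_bytes myimage:latest
```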

47 minutes ago, trurl said:

Why aren't you using the CA Backup plugin? It will back up appdata and flash, which is all you really need to get your dockers going again exactly like before.

 

https://forums.unraid.net/topic/61211-plugin-ca-appdata-backup-restore-v2/

 

 

Because I stopped using it after it went from v1 to v2, when I ended up with a single 30g backup file to manage each time.

 

Now I have a borg backup taking snapshots every day, and I just need to find out how to power down or suspend the dockers before the backup script runs.

On 4/22/2018 at 9:28 AM, Squid said:

Allow unRaid's webUI to utilize the full width of your browser instead of being limited to 1920px

 


#!/bin/bash
sed -i 's/max-width:1920px;//g' /usr/local/emhttp/plugins/dynamix/styles/*.css

 

How would I go about reversing this?

5 minutes ago, hendeeze said:

How would I go about reversing this?

Remove or comment out the sed line in the go file and restart unraid.

On 9/24/2018 at 10:14 AM, jonathanm said:

Remove or comment out the sed line in the go file and restart unraid.

This is the contents of my go file:

#!/bin/bash
# Start the Management Utility
 /usr/local/sbin/emhttp &




If I comment out the /usr/ line, the webUI doesn't come up.

21 minutes ago, hendeeze said:

This is the contents of my go file:

#!/bin/bash
# Start the Management Utility
 /usr/local/sbin/emhttp &




If I comment out the /usr/ line, the webUI doesn't come up.

It won't, that line is the one that starts emhttp.

 

Jonathan told you to remove the line with sed in it.

8 minutes ago, hendeeze said:

That's the complete contents of my /boot/config/go file; nothing with sed in it.

How are you applying the sed code that you want to reverse then?


Just now, wgstarks said:

How are you applying the sed code that you want to reverse then?

Allow unRaid's webUI to utilize the full width of your browser instead of being limited to 1920px

 

#!/bin/bash
sed -i 's/max-width:1920px;//g' /usr/local/emhttp/plugins/dynamix/styles/*.css

I typed that into the terminal and I just wanted to reverse it.

1 minute ago, hendeeze said:

typed that into terminal

A reboot should take care of it then.

2 minutes ago, wgstarks said:

A reboot should take care of it then.

I thought so too when I did it; however, I've rebooted several times and it's still sticking.


18 minutes ago, hendeeze said:

I thought so too when I did it; however, I've rebooted several times and it's still sticking.

Impossible

/usr/local/emhttp/plugins/dynamix/styles/

is a location in RAM. It cannot survive a reboot.

6 minutes ago, CHBMB said:

Impossible


/usr/local/emhttp/plugins/dynamix/styles/

is a location in RAM. It cannot survive a reboot.

OK, I see, that makes sense; maybe they changed something in the update. Formerly I was able to run the webUI on my portrait monitor and the UI would be max width. Now, however, I get that annoying horizontal scroll.


I want to back up a load of photos to my server and then run a user script that, based on the EXIF tags, will move the photos into various folders. I wonder if this is possible using some sort of EXIF reader?

 

Any ideas guys?

 

Thanks.
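It should be doable with an EXIF reader such as exiftool. A hedged sketch of just the routing logic, assuming dates in the usual EXIF "YYYY:MM:DD HH:MM:SS" form (the exiftool invocation in the trailing comment is an assumption about your setup):

```shell
#!/bin/bash
# Hypothetical sketch: compute a YEAR/MONTH destination folder from an EXIF
# date string. EXIF "DateTimeOriginal" is conventionally "YYYY:MM:DD HH:MM:SS".
dest_subdir() {
  local exif_date="$1"            # e.g. "2018:09:16 14:30:00"
  local year="${exif_date%%:*}"   # text before the first ":"
  local rest="${exif_date#*:}"
  local month="${rest%%:*}"
  echo "$year/$month"
}

# Real usage might look like this (assumed flags; -s3 prints the bare value):
#   d="$(exiftool -s3 -DateTimeOriginal photo.jpg)"
#   mkdir -p "/mnt/user/photos/$(dest_subdir "$d")"
#   mv photo.jpg "/mnt/user/photos/$(dest_subdir "$d")/"
```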

On 7/23/2016 at 5:00 PM, Squid said:

Run mover at a certain threshold of cache drive utilization.

 

Adjust the value to move at within the script.  It really only makes sense to use this script as a scheduled operation, and it would have to be set to a frequency (hourly?) more often than mover itself normally runs.

 

 


#!/usr/bin/php
<?PHP

$moveAt = 70;    # Adjust this value to suit.

$diskTotal = disk_total_space("/mnt/cache");
$diskFree = disk_free_space("/mnt/cache");
$percent = ($diskTotal - $diskFree) / $diskTotal * 100;

if ( $percent > $moveAt ) {
  exec("/usr/local/sbin/mover");
}
?>
 

 

 

run_mover_at_threshold.zip

I use the custom mover plugin, but I want to add a 2nd threshold for a few of my shares, e.g. /mnt/cache/share1 and /mnt/cache/share2.  How do I, say, modify this script to move files for certain shares (not a full move) at a configurable $moveAt?  I.e. when my cache drive gets to, say, 50%, do the following move:

 

1. /mnt/user/share1 --> /mnt/user0/share1

2. /mnt/user/share2 --> /mnt/user0/share2

 

I'm assuming that only files on the cache drive will get moved, and other files that exist on user and user0 (i.e. on the array) won't get moved.  This way these files will get moved earlier, making more room for other files I want to stay on the cache for as long as possible.

 

Thanks in advance for any help
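Not a full answer, but a hedged sketch of the shape such a script could take, with placeholder share names and one big caveat: unlike the real mover, a plain rsync copy-then-delete does not check for files that are currently open.

```shell
#!/bin/bash
# Hypothetical sketch, not the real mover: above a usage threshold, move the
# named shares from the cache to the array with a copy-then-delete rsync.
# Share names and threshold are placeholders.

usage_pct() {   # usage_pct TOTAL_BYTES FREE_BYTES -> integer percent used
  echo $(( ( $1 - $2 ) * 100 / $1 ))
}

MOVE_AT=50                 # second, lower threshold for these shares
SHARES="share1 share2"     # placeholder share names

TOTAL="$(df -B1 --output=size  /mnt/cache 2>/dev/null | tail -1)"
FREE="$(df -B1 --output=avail /mnt/cache 2>/dev/null | tail -1)"

if [ -n "$TOTAL" ] && [ "$(usage_pct $TOTAL $FREE)" -ge "$MOVE_AT" ]; then
  for share in $SHARES; do
    # copy to the array (user0 bypasses the cache), then delete the sources;
    # empty directories are left behind on the cache
    rsync -a --remove-source-files "/mnt/cache/$share/" "/mnt/user0/$share/"
  done
fi
```

This only ever touches the files physically present under /mnt/cache, which matches the assumption that array-side copies are left alone.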

9 minutes ago, DZMM said:

How do I, say, modify this script to move files for certain shares (not a full move)

Can't do that without rewriting the built-in mover script 

5 minutes ago, Squid said:

Can't do that without rewriting the built-in mover script 

Thanks for looking.  I wasn't looking to use the mover script, but to replace this line:

 exec("/usr/local/sbin/mover");

with something I can understand. For example, I can't work out whether this code from https://stackoverflow.com/questions/9835492/move-all-files-and-folders-in-a-folder-to-another copies, moves, or deletes, or how to integrate it:

 

// Function to remove folders and files 
    function rrmdir($dir) {
        if (is_dir($dir)) {
            $files = scandir($dir);
            foreach ($files as $file)
                if ($file != "." && $file != "..") rrmdir("$dir/$file");
            rmdir($dir);
        }
        else if (file_exists($dir)) unlink($dir);
    }

    // Function to Copy folders and files       
    function rcopy($src, $dst) {
        if (file_exists ( $dst ))
            rrmdir ( $dst );
        if (is_dir ( $src )) {
            mkdir ( $dst );
            $files = scandir ( $src );
            foreach ( $files as $file )
                if ($file != "." && $file != "..")
                    rcopy ( "$src/$file", "$dst/$file" );
        } else if (file_exists ( $src ))
            copy ( $src, $dst );
    }
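For what it's worth, that pair only copies and deletes: rcopy first deletes any existing destination, then recursively copies, and nothing removes the source, so a "move" would be rcopy followed by rrmdir on the source. The same copy-then-delete shape in shell, as a sketch with placeholder paths (and, again unlike the real mover, no check for files in use):

```shell
#!/bin/bash
# Sketch: "move" as copy-then-delete, mirroring rcopy($src,$dst); rrmdir($src).
move_dir() {
  local src="$1" dst="$2"
  rm -rf "$dst"            # rcopy deletes any existing destination first
  mkdir -p "$dst"
  cp -a "$src/." "$dst/"   # recursive copy, preserving attributes
  rm -rf "$src"            # rrmdir equivalent: delete the source
}

# Usage: move_dir /mnt/cache/share1 /mnt/user0/share1
```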

 


Hi, 

 

First of all thanks alot for all these community tools. 

 

I ended up here looking for a way to make my KVM switch work with unRAID; every time I switched PCs, the VM lost the USB devices, which was quite annoying.

 

I ended up finding the "USB Hot Plug for VM's with no passthrough" script. I had two issues:

  • For some reason, I have everything on a single bus, and I wanted to ignore some of the USB devices.
  • The KVM switches 3 devices: the KVM itself, the mouse, and the keyboard. Having all 3 switch simultaneously seems to create issues, as one of them randomly doesn't work in Windows.

I fixed these by:

  • Adding an $ignoredDevices variable with the names of the devices I want to ignore.
  • Adding a $waitBetweenPlugs delay; for plugging and unplugging, it will wait a bit in between.
#!/usr/bin/php
<?PHP
#backgroundOnly=true
#description=A script designed to monitor a USB hub or a particular USB bus for changes and then attach or detach devices to an applicable VM

/*
 * Configuration's
 */
$vmName = "Windows 10";  // must be the exact name
$pollingTime = 10;       // the interval between checks in seconds
$startupDelay = 300;     // startup delay before monitoring for changes in seconds (set to enough time for the VM to get up and running)
$waitBetweenPlugs = 10;  // If using a device such as a KVM, attaching multiple USB devices at once can cause issues with the host, so wait in between plugs.

$ignoredDevices = [      // List of device names that should be ignored.
	'Ours Technology, Inc. Transcend JetFlash 2.0 / Astone USB Drive / Intellegent Stick 2.0' // must be exact as listed via lsusb
];

// only use one or the other of the following lines not both  See thread for details
#$hubName = "Texas Instruments, Inc. TUSB2046 Hub";  // must be exact as listed via lsusb
$bus  = "003";                                       // see thread for details

/**
 * Code - don't change anything from here on.
 */
 
function getDeviceList($bus, $ignoredDevices) {
  $initialState = array();
  exec("lsusb | grep 'Bus $bus'",$allDevicesOnBus);
  foreach ($allDevicesOnBus as $Devices) {
    $deviceLine = explode(" ",$Devices);
    $deviceName = implode(" ", array_slice($deviceLine, 6)); // device name as printed by lsusb
    if ( ! in_array($deviceName, $ignoredDevices) ) {
      $initialState[$deviceLine[5]] = $deviceLine[5];        // vendor:product ID
    } else {
      logger("Ignoring device $Devices");
    }
  }
  return $initialState;
}

function createXML($deviceID, $waitBetweenPlugs) {
  sleep($waitBetweenPlugs);
  $usb = explode(":",$deviceID);
  $usbstr = "<hostdev mode='subsystem' type='usb'>
                <source>
                  <vendor id='0x".$usb[0]."'/>
                  <product id='0x".$usb[1]."'/>
                </source>
              </hostdev>";
  file_put_contents("/tmp/USBTempXML.xml",$usbstr);
}

function logger($string) {
  echo "$string\n";
  shell_exec("logger ".escapeshellarg($string));
}
              
# Begin Main

logger("Sleeping for $startupDelay before monitoring $bus for changes to passthrough to $vmName");
sleep($startupDelay);

$hubBus = $bus;
if ( ! $hubBus ) {
  $hub = explode(" ",exec("lsusb | grep '$hubName'"));
  $hubBus = $hub[1];
}

logger("Monitoring $hubBus for changes");

$initialState = getDeviceList($hubBus, $ignoredDevices);

while (true) {
  $unRaidVars = parse_ini_file("/var/local/emhttp/var.ini");
  if ($unRaidVars['mdState'] != "STARTED") {
    break;
  }
  $currentDevices = getDeviceList($hubBus, $ignoredDevices);
  
  foreach ($currentDevices as $Device) {
    if ( ! $initialState[$Device] ) {
      logger("$Device Added to bus $hubBus  Attaching to $vmName");
      createXML($Device, $waitBetweenPlugs);
      exec("/usr/sbin/virsh attach-device '$vmName' /tmp/USBTempXML.xml 2>&1");
      $initialState[$Device] = $Device;
    }
  }
  foreach ($initialState as $Device) {
    if ( ! $currentDevices[$Device] ) {
      logger("$Device Removed from bus $hubBus  Detaching from $vmName");
      createXML($Device, $waitBetweenPlugs);
      exec("/usr/sbin/virsh detach-device '$vmName' /tmp/USBTempXML.xml 2>&1");
      unset ($initialState[$Device]);
    }
  }
  sleep($pollingTime);
}

?>
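One thing worth checking when adapting the bus parsing: splitting an lsusb line on spaces gives the vendor:product ID at the sixth field (explode index 5 in the PHP above), with the device name starting at the seventh. A quick shell demonstration against a sample line (the line itself is illustrative):

```shell
#!/bin/bash
# Demo of how an lsusb line splits on spaces: field 6 is the vendor:product ID,
# and everything from field 7 onward is the device name.
line="Bus 003 Device 002: ID 0451:2046 Texas Instruments, Inc. TUSB2046 Hub"
id="$(echo "$line" | cut -d' ' -f6)"
name="$(echo "$line" | cut -d' ' -f7-)"
echo "$id"     # 0451:2046
echo "$name"   # Texas Instruments, Inc. TUSB2046 Hub
```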

The KVM fix isn't perfect; sometimes it still doesn't work. I am going to try to spend some more time on it, for maybe a better solution than a sleep.

 

Lastly, wouldn't it be better to have these scripts on GitHub somewhere, grouped into one project?

13 hours ago, oliverde8 said:

Lastly, wouldn't it be better to have these scripts on GitHub somewhere, grouped into one project?

Feel free.  I'm not doing it because I don't want to manage it.


I am sure someone can find a better way to accomplish this, but here is a script I cooked up for monitoring the progress of HandBrake's watch folder.  Please forgive how crude it is; I am still new to Linux and I am not a programmer.  The script itself is quite small; I just included documentation to at least attempt to explain what I have this script doing.

 

#!/bin/bash
#
# A simple loop to display what file handbrake is working on and what the current status is.
#
#############################################################################################################################
#
#Example line:
#[autovideoconverter] Starting conversion of '/watch/Hercules (1997).mkv' (935bbed22b281c68d6e1840256fede3f) using preset 'Fast 480p30'...
#
#Find the last line in the log that contains the word "Starting"
#Set AWK's field separator to a single quote and print fields 2 and 4 to extract the movie title and preset
#Set AWK's field separator to a "/" and print the 3rd field to remove the path before the movie title
#Tail the log and print the last line to provide status.

#Final output:
#Hercules (1997).mkv Fast 480p30
#Encoding: 99.65 % (152.25 fps, avg 121.85 fps, ETA 00h00m04s)
#Encoding: 99.74 % (138.93 fps, avg 121.84 fps, ETA 00h00m03s)
#Encoding: 99.86 % (126.57 fps, avg 121.85 fps, ETA 00h00m02s)
#Encoding: 99.97 % (122.81 fps, avg 121.85 fps, ETA 00h00m01s)
#Encoding: 99.97 % (122.81 fps, avg 121.85 fps, ETA 00h00m01s)
#
#Last line will be encoding status or one of the below

#Change detected in watch folder '/watch'.
#Processing watch folder '/watch'...
#Waiting 5 seconds before processing '/watch/Hercules (1997).mkv'...
#Skipping '/watch/Hercules (1997).mkv': currently being copied.
#Starting conversion of '/watch/Hercules (1997).mkv' (935bbed22b281c68d6e1840256fede3f) using preset 'Fast 480p30'...
#1 title(s) to process.
#Conversion ended successfully.
#Removed /watch/Hercules (1997).mkv'.
#Watch folder '/watch' processing terminated.
############################################################################################################################


while :
do
  clear
  docker logs HandBrake | grep Starting | tail -1 | awk -F"'" '{print $2,$4}' | awk -F"/" '{print $3}'
  for i in 1 2 3 4
  do
    docker logs HandBrake --tail 1 | cut -d' ' -f 2-
    sleep 5
  done
done
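The awk extraction can be sanity-checked against the example line from the comments, without a running container:

```shell
#!/bin/bash
# Demo of the same two-stage awk parse used above, fed the documented example
# log line instead of "docker logs".
line="[autovideoconverter] Starting conversion of '/watch/Hercules (1997).mkv' (935bbed22b281c68d6e1840256fede3f) using preset 'Fast 480p30'..."
echo "$line" | awk -F"'" '{print $2,$4}' | awk -F"/" '{print $3}'
```

Note the second awk assumes the file sits directly under /watch; a file nested one level deeper would shift the "/"-separated fields.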

