Everything posted by RAINMAN

  1. This looks very interesting. I didn't have a lot of luck getting the plugin working how I wanted so I hope this is more successful. Thanks for the hard work on this.
  2. So after playing with this for a bit, I figured out how to sync it to Amazon. If I run a command like this:

     rclone sync --transfers=10 --bwlimit 5M '/mnt/user/Console/Atari.2600/' encrypted:'Console/Atari.2600'

     it successfully writes the files to the encrypted drive, but I don't see any of the files in the local mount. If I put the files in the local mount they get uploaded and I can see them. If I do a copy from /mnt/user/Console/Atari.2600/ to /mnt/disks/Console/Atari.2600/ it copies the files, but I get a lot of file system errors. I definitely think rclone is the best way to sync; I just don't understand why I can't see the files in my local mount.
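     A quick way to sanity-check where the files actually ended up is to ask rclone itself rather than the mount. This is just a sketch, reusing the remote name and path from the post above; note that an existing rclone mount caches directory listings, so files written by a separate rclone sync may not appear until the mount is refreshed (remounted, or mounted with a shorter --dir-cache-time):

     # list what rclone sees on the encrypted remote (remote/path taken from the post above)
     rclone ls encrypted:'Console/Atari.2600'

     # size and file-count summary of the same path
     rclone size encrypted:'Console/Atari.2600'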
  3. So I managed to get an encrypted folder. Still playing around with it. So everything I put in my shared folder is only stored on the cloud and not locally? Just trying to understand the best way to utilize this with my existing shares. Any best practices?
  4. Oops, it was, but I misread. For virtual machines you are a bit more limited. You can try:

     virsh domstats

     CPU usage is measured in ticks or nanoseconds, so you would have to do some math to turn it into a percentage. It looks a bit complicated. I think the memory stat just records what is available to the VM, not what is actually used, but I could be wrong. I measure the stats within each of my VMs (actually just 1 at the moment). But it would be nice to get the total usage of the entire CPU from outside the container.
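     Here is a minimal sketch of that math, assuming the domain name "Windows10" is a placeholder for your VM and that virsh domstats --cpu-total reports a cumulative cpu.time counter in nanoseconds; sample it twice and divide the delta by the wall-clock interval:

     # sketch: rough CPU % for one VM from two samples of virsh domstats (domain name is a placeholder)
     VM="Windows10"
     INTERVAL=10
     T1=$(virsh domstats "$VM" --cpu-total | awk -F= '/cpu.time/ {print $2}')
     sleep $INTERVAL
     T2=$(virsh domstats "$VM" --cpu-total | awk -F= '/cpu.time/ {print $2}')
     # cpu.time is cumulative guest CPU time in ns; divide by elapsed ns for utilisation
     # (this is summed across host cores, so 100% here means roughly one full core)
     CPUPCT=$(echo "$T1 $T2 $INTERVAL" | awk '{printf "%.2f\n", ($2-$1)/($3*1000000000)*100}')
     echo "$VM CPU: ${CPUPCT}%"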
  5. Haha, ok, so it piqued my interest. Here you go. It's almost done, I just need to parse it into a script:

     docker stats --no-stream $(docker ps --format={{.Names}})

     Edit: Even better:

     docker stats --no-stream $(docker ps --format={{.Names}}) | sed 's/%//g' | grep -v "CONTAINER" | awk '{print $1,$2,$3}'

     Just throw that into a while read and send it to influx. The output columns are: Name, CPU, Memory.
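     For what it's worth, a rough sketch of that while-read loop, reusing the same InfluxDB variables (DBURL, DBNAME, DEVICE, CURDATE) as the monitoring scripts further down this page; the measurement and field names are only examples, and the memory column may need unit handling depending on your docker version:

     # sketch: push per-container CPU% and memory to InfluxDB (measurement/field names are examples)
     docker stats --no-stream $(docker ps --format={{.Names}}) | grep -v "CONTAINER" | sed 's/%//g' | awk '{print $1,$2,$3}' | while read NAME CPU MEM
     do
         curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "dockerStats,Device=${DEVICE},Container=${NAME} CPU=${CPU},Memory=${MEM} ${CURDATE}000000000" >/dev/null 2>&1
     done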
  6. You can access the docker stats using:

     docker stats --no-stream

     Right now this outputs the docker container IDs. There is a --format option that was apparently added in docker 1.8, and I have 1.10 as part of unRAID 6.2.4. With --format you can specify the name instead, which would probably be easier long term, as the ID changes when you rebuild or change the container. However, I don't see this option for some reason; not sure why. What you can do is then run docker ps | grep ContainerID and grab the container name from there, something like that. I haven't put together a script, but it's on my list, so if someone else gets one going please share.
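     As a rough alternative while --format is missing from docker stats, docker inspect can translate each ID back into a name; just a sketch:

     # sketch: map each container ID from docker stats back to its name
     docker stats --no-stream | grep -v "CONTAINER" | awk '{print $1}' | while read ID
     do
         # .Name comes back with a leading slash, so strip it
         NAME=$(docker inspect --format '{{.Name}}' ${ID} | sed 's|^/||')
         echo "${ID} ${NAME}"
     done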
  7. I haven't finished mine yet, but I just grabbed one of the templates listed below and modified it. Still in progress, but this is what I have so far. The icons start slightly faded and get bigger on mouse-over (like the unRAID icon is -- that one has the mouseover active). It is also a bit responsive, but there is a lot of extra CSS in the template that I haven't deleted.
  8. FYI, for Nextcloud add the following to your location block and it works. Cheers

     proxy_set_header Host $host:$server_port;
     proxy_set_header X-Real-IP $remote_addr;
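     In context, that is just those two extra lines inside the existing Nextcloud location block from the config further down this page; a sketch (the upstream address/port mirrors that config, adjust to wherever your Nextcloud container listens):

     location / {
         proxy_set_header Host $host:$server_port;
         proxy_set_header X-Real-IP $remote_addr;
         proxy_pass https://192.168.254.3:444/;
     }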
  9. but my network is actually 192.168.254.x. If I access Plex using http://192.168.254.3:32400/web/index.html# it does play locally; however, when I use https://plex.domain.com it transcodes and streams via the internet. If I ping plex.domain.com it resolves properly to 192.168.254.3, as set in my DNS settings. I really wanted to access Plex using one URL whether I am local or remote. Here is my full nginx config file:

# redirect all traffic to https
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

# main server block
server {
    listen 443 ssl default_server;

    root /config/www;
    index index.html index.htm index.php;

    server_name _;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-$
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # With php5-cgi alone:
        fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    # sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
    # notice this is within the same server block as the base
    # don't forget to generate the .htpasswd file as described on docker hub
    # location ^~ /cp {
    #     auth_basic "Restricted";
    #     auth_basic_user_file /config/nginx/.htpasswd;
    #     include /config/nginx/proxy.conf;
    #     proxy_pass http://192.168.1.50:5050/cp;
    # }
}

#
# PLEX
#
server {
    listen 443 ssl;

    root /config/www;
    index index.html index.htm index.php;

    server_name plex.*;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-$
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        # auth_basic "Restricted";
        # auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.254.3:32400;
        # proxy_set_header Host $host;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header X-Forwarded-For $remote_addr;
    }

    #
    # PlexPy
    #
    location ^~ /plexpy/ {
        proxy_pass https://192.168.254.3:8181;
        include /config/nginx/proxy.conf;
        proxy_bind $server_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Ssl on;
        # auth_basic "Restricted";
        # auth_basic_user_file /config/.htpasswd;
    }
}

For now I commented out the authentication and some other tests. I will add it back once it's all sorted. Basically I just need Plex to know it's local so it's not sending my data to the internet and then back in. I also have a nextcloud issue, but one at a time.

Is Plex our version and running as host?

Yes, and bingo, I had it running as bridge for some reason. Switched to host and it now says "nearby", which is good enough not to send via the internet.
Now my second question, for Nextcloud. This is my conf:

server {
    listen 80;
    server_name cloud.mydomainhere.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name cloud.mydomainhere.com;

    root /config/www;
    index index.html index.htm index.php;

    ###SSL Certificates
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    ###Diffie–Hellman key exchange ###
    ssl_dhparam /config/nginx/dhparams.pem;

    ###SSL Ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA38$

    ###Extra Settings###
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    ### Add HTTP Strict Transport Security ###
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header Front-End-Https on;

    client_max_body_size 0;

    location / {
        proxy_pass https://192.168.254.3:444/;
    }
}

nextcloud config file:

<?php
$CONFIG = array (
  'instanceid' => 'XxxxxxxxxxxxxxX',
  'passwordsalt' => 'XxxxxxxxxxxxxxX',
  'secret' => 'XxxxxxxxxxxxxxX',
  'trusted_domains' => array (
    0 => '192.168.254.3:444',
    1 => 'cloud.mydomain.com',
  ),
  'datadirectory' => '/mnt/OwnCloud_Data/',
  'overwrite.cli.url' => 'https://cloud.mydomain.com',
  'overwritehost' => 'cloud.mydomain.com',
  'overwriteprotocol' => 'https',
  'dbtype' => 'mysql',
  'version' => '9.1.0.16',
  'dbname' => 'owncloud',
  'dbhost' => '192.168.254.3:3306',
  'dbtableprefix' => 'oc_',
  'dbuser' => '',
  'dbpassword' => '',
  'logtimezone' => 'America/Toronto',
  'installed' => true,
  'theme' => '',
  'maintenance' => false,
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'loglevel' => 1,
  'trashbin_retention_obligation' => 'auto',
  'ldapIgnoreNamingRules' => false,
  'updater.release.channel' => 'production',

However, I am still getting this message when I try the nextcloud site:
  10. So, I sort of have this working, but not really. Plex resolves, but it doesn't detect the proper network and thus thinks the server is remote. Plex shows this as the local IP:

but my network is actually 192.168.254.x. If I access Plex using http://192.168.254.3:32400/web/index.html# it does play locally; however, when I use https://plex.domain.com it transcodes and streams via the internet. If I ping plex.domain.com it resolves properly to 192.168.254.3, as set in my DNS settings. I really wanted to access Plex using one URL whether I am local or remote. Here is my full nginx config file:

# redirect all traffic to https
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

# main server block
server {
    listen 443 ssl default_server;

    root /config/www;
    index index.html index.htm index.php;

    server_name _;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-$
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # With php5-cgi alone:
        fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    # sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
    # notice this is within the same server block as the base
    # don't forget to generate the .htpasswd file as described on docker hub
    # location ^~ /cp {
    #     auth_basic "Restricted";
    #     auth_basic_user_file /config/nginx/.htpasswd;
    #     include /config/nginx/proxy.conf;
    #     proxy_pass http://192.168.1.50:5050/cp;
    # }
}

#
# PLEX
#
server {
    listen 443 ssl;

    root /config/www;
    index index.html index.htm index.php;

    server_name plex.*;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-$
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        # auth_basic "Restricted";
        # auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.254.3:32400;
        # proxy_set_header Host $host;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header X-Forwarded-For $remote_addr;
    }

    #
    # PlexPy
    #
    location ^~ /plexpy/ {
        proxy_pass https://192.168.254.3:8181;
        include /config/nginx/proxy.conf;
        proxy_bind $server_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Ssl on;
        # auth_basic "Restricted";
        # auth_basic_user_file /config/.htpasswd;
    }
}

For now I commented out the authentication and some other tests. I will add it back once it's all sorted. Basically I just need Plex to know it's local so it's not sending my data to the internet and then back in. I also have a nextcloud issue, but one at a time.
  11. I'm curious what could cause this. I got a corruption message on a file:

BLAKE2 hash key mismatch, /mnt/disk5/xxxxxxxxxxxxxxxxxxxx.1080p.BluRay.X264-AMIABLE.mkv is corrupted

I checked the hash in the export:

ab681975c59300f875b81763f4e3ca7e1186240c04400abf71939aa43d762c803cd6a43530a05ceb2d869250e32388045600c0fa26b135b2e69c034fbd939c95 */mnt/disk5/Movies-HD/xxxxxxxxxxxxxx.1080p.BluRay.X264-AMIABLE.mkv

I re-downloaded the file from my online backup, moved it to my array, and it got rehashed into disks.export.20161230.new.hash:

ab681975c59300f875b81763f4e3ca7e1186240c04400abf71939aa43d762c803cd6a43530a05ceb2d869250e32388045600c0fa26b135b2e69c034fbd939c95 */mnt/disk1/Movies-HD/xxxxxxxxxxxxxxxxxxx.1080p.BluRay.X264-AMIABLE (2).mkv

The hashes are identical...
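If you want to rule out a stale export entry versus actual on-disk corruption, one option is to recompute the BLAKE2 hash by hand and compare it against the export line; a sketch, assuming a b2sum binary is available on the server and using a placeholder for the redacted file name:

# recompute the BLAKE2 hash of the copy on the array (file name is a placeholder)
b2sum "/mnt/disk1/Movies-HD/MOVIE.1080p.BluRay.X264-AMIABLE (2).mkv"

# and compare against what the integrity export recorded
grep "AMIABLE" disks.export.20161230.new.hash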
  12. Solution to my problem, if anyone else encounters the same thing:

1. Download the new tar file from the nextcloud website (probably not needed).
2. SSH to the server.
3. Run docker ps to get the docker ID.
4. Enter the docker: docker exec -it abcIDxyz bash
5. Go to the nextcloud directory: cd /config/www/
6. Give the webserver user (abc) ownership of the files: chown -R abc:abc nextcloud
7. Enter the nextcloud directory: cd nextcloud
8. Run the upgrader manually: sudo -u abc php occ upgrade
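For reference, steps 4-8 can also be run in one shot from the unRAID shell without attaching to the container first; just a sketch, assuming the container is named "nextcloud" (substitute your own container name or ID):

# run the Nextcloud CLI updater inside the container as the webserver user (container name is an assumption)
docker exec -it nextcloud sudo -u abc php /config/www/nextcloud/occ upgrade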
  13. I had to delete my docker image file for an unrelated issue, and when I rebuilt it (my nextcloud wasn't updated before) it now gives me this:

Update needed. Please use the command line updater because you have a big instance. For help, see the documentation.

I have no idea how to fix it. Any ideas?
  14. Here is another one for hard disk power on time:

#
# Hard Disk Power On Hours
#
# Added -n standby to the check so smartctl is not spinning up my drives
#
i=0
for DISK in "${DISK_ARRAY[@]}"
do
    smartctl -n standby -A /dev/${DISK} | grep -E "Power_On_Hours" | awk '{ print $10 }' | while read POWERONHOURS
    do
        curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "diskPowerOnHours,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} PowerOnHours=${POWERONHOURS} ${CURDATE}000000000" >/dev/null 2>&1
    done
    ((i++))
done
  15. If anyone has an Edimax smart plug: the cost calculation is complicated (thanks to the power company), but you get the point from the code.

#
# SMART PLUG
# EDIMAX SMART PLUG SWITCH
# Model: SP-1101W
#
# Off-Peak: Weekends all day and weekdays 19:00 - 07:00 -- 0.000145 cents/watt-min
# Mid-Peak: Weekdays 17:00 - 19:00 & 07:00-11:00 -- 0.00022 cents/watt-min
# Peak: Weekdays 11:00 - 17:00 -- 0.0003 cents/watt-min
# Extra fees: 4.14c/kwh

OffPeak=0.000145
MidPeak=0.00022
Peak=0.0003
ExtraFees=0.000069
AdjustmentFactor=1.076

day=$(date +"%u")
hour=$(date +"%H")

DEVICE="UNRAID-PLUG"

curl -d @/boot/custom/influxdb/edi.xml http://admin:[email protected]:10000/smartplug.cgi -o /boot/custom/influxdb/output.txt

CURRENT=`cat /boot/custom/influxdb/output.txt | sed -n 's:.*<Device.System.Power.NowCurrent>\(.*\)</Device.System.Power.NowCurrent>.*:\1:p'`
POWER=`cat /boot/custom/influxdb/output.txt | sed -n 's:.*<Device.System.Power.NowPower>\(.*\)</Device.System.Power.NowPower>.*:\1:p'`

echo ${POWER}
echo ${CURRENT}

# Calculates the cost of each minute of electricity use based on the time of day
if [ "$day" -eq 6 ] || [ "$day" -eq 7 ]; then
    COST=$(echo $POWER $OffPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
elif [ "$hour" -ge 19 ] || [ "$hour" -lt 7 ]; then
    COST=$(echo $POWER $OffPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
elif [ "$hour" -ge 11 ] && [ "$hour" -lt 17 ]; then
    COST=$(echo $POWER $Peak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
else
    COST=$(echo $POWER $MidPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
fi

curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "powerCost,Device=${DEVICE} Cost=${COST} ${CURDATE}000000000" >/dev/null
curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "smartplugStats,Device=${DEVICE} Current=${CURRENT},Power=${POWER} ${CURDATE}000000000" >/dev/null

edi.xml:

<?xml version="1.0" encoding="UTF-8"?>
<SMARTPLUG id="edimax">
    <CMD id="get">
        <NOW_POWER></NOW_POWER>
    </CMD>
</SMARTPLUG>
  16. It may be related to the CPU. I have a single-core Sempron, so the stats output may be different. If you SSH into the server, what do you get when you run:

top -b -n 3 -d.2 | grep "Cpu" | tail -n 1

and

cat /proc/loadavg

Feel free to PM me if you want to try and work through it. It should be a relatively easy fix, but it may be user specific. These scripts may not be that generic, especially the hard drive one; I've modified it a lot since I posted it, but it's now really specific. I can't find a good way to make it generic.
  17. No. I'll add it, but won't add it to the GUI, as the odds of a non-successful backup rise significantly because of files being open, and I don't want the people who just want their nzbGet running all the time to think that they can simply exclude it and have a successful backup done. I'll make a hidden method of selecting which apps to stop and we'll go from there.

Awesome, Squid. Thanks.
  18. This plugin is great, but I was wondering if there is a way to exclude certain dockers from being stopped? I back up my MySQL database separately and I'd prefer not to stop it if possible.
  19. I got some motivation from the scripts posted here to add some monitoring to my UNRAID installation as well. I figured it was a bit less resource intensive to do it directly from bash, but I'm totally guessing on that. I also wrote scripts for my DD-WRT router and Windows PCs (PowerShell), but I figured for now I'd share the unRAID scripts I wrote in case they are useful to anyone. I'm not that experienced with bash scripting, so if there is anything I could do better I'd appreciate the corrections. All I ask is that if you make improvements, please share them back with me and the community. I actually created 3 scripts for different intervals: 1, 5 and 30 minutes.

Cron Jobs

#
# InfluxDB Stats 1 Minute (Delay from reading CPU when all the other PCs in my network report in)
#
* * * * * sleep 10; /boot/custom/influxdb/influxStats_1m.sh > /dev/null 2>&1
#
# InfluxDB Stats 5 Minute
#
0,10 * * * * /boot/custom/influxdb/influxStats_5m.sh > /dev/null 2>&1
#
# InfluxDB Stats 30 Minute
#
0,30 * * * * /boot/custom/influxdb/influxStats_30m.sh > /dev/null 2>&1

Basic variables I use in all 3 scripts:

#
# Set Vars
#
DBURL=http://192.168.254.3:8086
DBNAME=statistics
DEVICE="UNRAID"
CURDATE=`date +%s`

CPU

Records CPU metrics - load averages and CPU time.

# Had to increase to 10 samples because I was getting a spike each time I read it. This seems to smooth it out more
top -b -n 10 -d.2 | grep "Cpu" | tail -n 1 | awk '{print $2,$4,$6,$8,$10,$12,$14,$16}' | while read CPUusr CPUsys CPUnic CPUidle CPUio CPUirq CPUsirq CPUst
do
    top -bn1 | head -3 | awk '/load average/ {print $12,$13,$14}' | sed 's/,//g' | while read LAVG1 LAVG5 LAVG15
    do
        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "cpuStats,Device=${DEVICE} CPUusr=${CPUusr},CPUsys=${CPUsys},CPUnic=${CPUnic},CPUidle=${CPUidle},CPUio=${CPUio},CPUirq=${CPUirq},CPUsirq=${CPUsirq},CPUst=${CPUst},CPULoadAvg1m=${LAVG1},CPULoadAvg5m=${LAVG5},CPULoadAvg15m=${LAVG15} ${CURDATE}000000000" >/dev/null 2>&1
    done
done

Memory Usage

top -bn1 | head -4 | awk '/Mem/ {print $6,$8,$10}' | while read USED FREE CACHE
do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "memoryStats,Device=${DEVICE} memUsed=${USED},memFree=${FREE},memCache=${CACHE} ${CURDATE}000000000" >/dev/null 2>&1
done

Network

if [[ -f byteCount.tmp ]] ; then
    # Read the last values from the tmpfile - Line "eth0"
    grep "eth0" byteCount.tmp | while read dev lastBytesIn lastBytesOut
    do
        cat /proc/net/dev | grep "eth0" | grep -v "veth" | awk '{print $2, $10}' | while read currentBytesIn currentBytesOut
        do
            # Write out the current stats to the temp file for the next read
            echo "eth0" ${currentBytesIn} ${currentBytesOut} > byteCount.tmp

            totalBytesIn=`expr ${currentBytesIn} - ${lastBytesIn}`
            totalBytesOut=`expr ${currentBytesOut} - ${lastBytesOut}`

            # Prevent negative numbers when the counters reset. Could miss data but it should be a marginal amount.
            if [ ${totalBytesIn} -le 0 ] ; then
                totalBytesIn=0
            fi
            if [ ${totalBytesOut} -le 0 ] ; then
                totalBytesOut=0
            fi

            curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "interfaceStats,Interface=eth0,Device=${DEVICE} bytesIn=${totalBytesIn},bytesOut=${totalBytesOut} ${CURDATE}000000000" >/dev/null 2>&1
        done
    done
else
    # Write out blank file
    echo "eth0 0 0" > byteCount.tmp
fi

Hard Disk IO

# Gets the stats for disk#
#
# The /proc/diskstats file displays the I/O statistics of block devices.
# Each line contains the following 14 fields:
#  1 - major number
#  2 - minor number
#  3 - device name
#  4 - reads completed successfully
#  5 - reads merged
#  6 - sectors read <---
#  7 - time spent reading (ms)
#  8 - writes completed
#  9 - writes merged
# 10 - sectors written <---
# 11 - time spent writing (ms)
# 12 - I/Os currently in progress
# 13 - time spent doing I/Os (ms)
# 14 - weighted time spent doing I/Os (ms)
#
# Special Cases
# sda = Flash/boot
# sdf = Cache
# sdd = Parity

if [[ -f diskByteCountTest.tmp ]] ; then
    cat /proc/diskstats | grep -E 'md|sdd|sda|sdf|loop0' | grep -E -v 'sd[a-z]1' | sed 's/md//g' | awk '{print "disk" $3, $6, $10}' | while read DISK currentSectorsRead currentSectorsWrite
    do
        # Check if the disk is in the temp file.
        if grep ${DISK} diskByteCountTest.tmp
        then
            grep ${DISK} diskByteCountTest.tmp | while read lDISK lastSectorsRead lastSectorsWrite
            do
                # Replace current disk stats with new stats for the next read
                sed -i "s/^${DISK}.*/${DISK} ${currentSectorsRead} ${currentSectorsWrite}/" diskByteCountTest.tmp

                # Need to multiply by 512 to convert from sectors to bytes
                (( totalBytesRead = 512 * (${currentSectorsRead} - ${lastSectorsRead}) ))
                (( totalBytesWrite = 512 * (${currentSectorsWrite} - ${lastSectorsWrite}) ))
                (( totalBytes = totalBytesRead + totalBytesWrite ))

                # Cases
                case ${DISK} in
                    "disksda" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=boot,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    "disksdd" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=parity,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    "disksdf" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=cache,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    "diskloop0" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=docker,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    *)
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=${DISK},Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                esac
            done
        else
            # If the disk wasn't in the temp file then add it to the end
            echo ${DISK} ${currentSectorsRead} ${currentSectorsWrite} >> diskByteCountTest.tmp
        fi
    done
else
    # Write out a new file
    cat /proc/diskstats | grep -E 'md|sdd|sda|sdf|loop0' | grep -E -v 'sd[a-z]1' | sed 's/md//g' | awk '{print "disk" $3, $6, $10}' | while read DISK currentSectorsRead currentSectorsWrite
    do
        echo ${DISK} ${currentSectorsRead} ${currentSectorsWrite} >> diskByteCountTest.tmp
    done
fi

Number of Dockers Running

docker info | grep "Running" | awk '{print $2}' | while read NUM
do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "dockersRunning,Device=${DEVICE} Dockers=${NUM} ${CURDATE}000000000" >/dev/null 2>&1
done

Hard Disk Temperatures

# Current array assignment.
# I could pull this automatically from /var/local/emhttp/disks.ini
# Parsing it wouldn't be that easy though.
DISK_ARRAY=( sdd sdg sde sdi sdc sdb sdh sdf )
DESCRIPTION=( parity disk1 disk2 disk3 disk4 disk5 disk6 cache )

#
# Added -n standby to the check so smartctl is not spinning up my drives
#
i=0
for DISK in "${DISK_ARRAY[@]}"
do
    smartctl -n standby -A /dev/$DISK | grep "Temperature_Celsius" | awk '{print $10}' | while read TEMP
    do
        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "DiskTempStats,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Temperature=${TEMP} ${CURDATE}000000000" >/dev/null 2>&1
    done
    ((i++))
done

Hard Disk Spinup Status

# Current array assignment.
# I could pull this automatically from /var/local/emhttp/disks.ini
# Parsing it wouldn't be that easy though.
DISK_ARRAY=( sdd sdg sde sdi sdc sdb sdh sdf )
DESCRIPTION=( parity disk1 disk2 disk3 disk4 disk5 disk6 cache )

i=0
for DISK in "${DISK_ARRAY[@]}"
do
    hdparm -C /dev/$DISK | grep 'state' | awk '{print $4}' | while read STATUS
    do
        #echo ${DISK} : ${STATUS} : ${DESCRIPTION[$i]}
        if [ ${STATUS} = "standby" ]
        then
            curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStatus,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Active=0 ${CURDATE}000000000" >/dev/null 2>&1
        else
            curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStatus,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Active=1 ${CURDATE}000000000" >/dev/null 2>&1
        fi
    done
    ((i++))
done

Hard Disk Space

# Gets the stats for boot, disk#, cache, user
#
df | grep "mnt/\|/boot\|docker" | grep -v "user0\|containers" | sed 's/\/mnt\///g' | sed 's/%//g' | sed 's/\/var\/lib\///g' | sed 's/\///g' | while read MOUNT TOTAL USED FREE UTILIZATION DISK
do
    if [ "${DISK}" = "user" ]; then
        DISK="array_total"
    fi
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "drive_spaceStats,Device=${DEVICE},Drive=${DISK} Free=${FREE},Used=${USED},Utilization=${UTILIZATION} ${CURDATE}000000000" >/dev/null 2>&1
done

Uptime

UPTIME=`cat /proc/uptime | awk '{print $1}'`
curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "uptime,Device=${DEVICE} Uptime=${UPTIME} ${CURDATE}000000000" >/dev/null 2>&1

Disks 4 and 5 just finished their file integrity check. I also have a hole in the data every day from 11:36-12:36 that I haven't figured out yet; I'll need to investigate that, but I don't think it's related to these scripts. If anyone wants the grafana json, just let me know and I can post it as well. Please post any suggestions of other metrics to capture.
  20. To update anything in the grafana.ini you can map a volume to it, or you can use more environment variables. See http://docs.grafana.org/installation/configuration/; basically you use the format GF_<SectionName>_<KeyName> = some value, just like you did for the plugins.

All good, Atribe. Thanks for putting out this docker. I've been playing with it quite a bit and getting some cool graphs. I made a shell script for my DD-WRT router and a PowerShell script for my Windows machines so far.
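As an illustration of that GF_<SectionName>_<KeyName> pattern (the image name and values below are made-up examples, not settings from this thread): to override root_url in the [server] section and the admin password in [security], the container could be started with something like:

# sketch: overriding grafana.ini settings via environment variables (image name and values are examples)
docker run -d \
    -e "GF_SERVER_ROOT_URL=https://grafana.example.com/" \
    -e "GF_SECURITY_ADMIN_PASSWORD=changeme" \
    grafana/grafana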
  21. After looking at this closer, you can create environment variables to have it install the plugins via the CLI by setting an environment variable key of GF_INSTALL_PLUGINS and setting the value to the name of the plugin(s) you want to install. Example value for 2 plugins:
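As a purely hypothetical illustration, the value is a comma-separated list of plugin IDs; for example, using two IDs from the public Grafana plugin directory (not necessarily the ones used here):

# example only: GF_INSTALL_PLUGINS takes a comma-separated list of plugin IDs
GF_INSTALL_PLUGINS=grafana-clock-panel,briangann-gauge-panel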
  22. Something seems wrong with the grafana docker. The plugins directory doesn't seem to work if I drop files in there. If I attach to the docker and run the CLI command it adds the plugin properly, but it doesn't show in the plugins directory externally; it does within the docker, yet the mapping seems to be right. When I update the docker it destroys the plugins, as expected, since the mapping seems broken. Second, how do you update the settings (grafana.ini)? This doesn't seem to be exposed.
  23. Reading the various links, it's a php bug beyond our control. Nextcloud should take the lead on this, I don't think workarounds like that are something we should be looking at as a rule.

Yeah, I agree with that. I was doing a bit more digging, and owncloud is working using LDAP in the same AD environment; the main difference I see is related to an iconv problem, I think. In the Alpine image the php extension for iconv is undefined:

iconv
iconv support => enabled
iconv implementation => unknown
iconv library version => unknown
Directive => Local Value => Master Value
iconv.input_encoding => no value => no value
iconv.internal_encoding => no value => no value
iconv.output_encoding => no value => no value

The implementation is unknown. From what I can find, the way to resolve this is to use the libiconv extension instead. This seems to be an issue on Alpine. Is it possible to build using this instead of iconv?

No, because it's not just an Alpine issue; the bug is in many implementations of php from 2008 onwards, including php7. It is up to nextcloud to resolve this; if it were "fixed" and nextcloud then made necessary changes, the "fix" would have been a waste of time and effort.

Ok, whatever. I just updated my owncloud docker to nextcloud and it's working fine, so I'll just use that. I was just reporting a bug, but if no one wants to look into it that's fine. It's not php, as it's the same version in both my dockers, and it's not owncloud, as I have it running fine in another docker now. The only difference is the base distro and the iconv module showing up as unknown instead of 2.19 as it does in the other docker.
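For anyone who wants to check the same thing on their own container, the iconv details above come straight out of phpinfo; a quick way to pull them is sketched below (the container name is a placeholder):

# sketch: print the iconv section of phpinfo inside a container (container name is an assumption)
docker exec -it nextcloud php -i | grep -A 8 "^iconv"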
  24. Reading the various links, it's a php bug beyond our control. Nextcloud should take the lead on this, I don't think workarounds like that are something we should be looking at as a rule.

Yeah, I agree with that. I was doing a bit more digging, and owncloud is working using LDAP in the same AD environment; the main difference I see is related to an iconv problem, I think. In the Alpine image the php extension for iconv is undefined:

iconv
iconv support => enabled
iconv implementation => unknown
iconv library version => unknown
Directive => Local Value => Master Value
iconv.input_encoding => no value => no value
iconv.internal_encoding => no value => no value
iconv.output_encoding => no value => no value

The implementation is unknown. From what I can find, the way to resolve this is to use the libiconv extension instead. This seems to be an issue on Alpine. Is it possible to build using this instead of iconv?

Edit: Both dockers are running PHP 5.6.24, so I don't think it's a pure PHP issue.
  25. Getting an iconv error using the LDAP module. There is a bug report on the nextcloud website (https://github.com/nextcloud/server/issues/272), but it points to a PHP bug / a specific way to set up docker. I don't know if this information (https://github.com/docker-library/php/issues/240) can be implemented in our docker so it works properly?