RAINMAN

Everything posted by RAINMAN

  1. The stats page for Network incorrectly shows eth0 when I have bonded NICs. When I am transferring on eth1 I get no traffic in the graph. Not a big issue, but it should probably show the combined totals.
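A rough sketch of where the combined totals could come from, straight out of /proc/net/dev (bond0 is an assumption here; it would be whatever the bond interface is actually named):

# rx/tx byte counters for the bond itself rather than a single slave NIC
awk '/bond0:/ {print "rxBytes=" $2, "txBytes=" $10}' /proc/net/dev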
  2. I haven't finished mine yet, but I just grabbed one of the templates listed below and modified it. Still in progress, but this is what I have so far. The icons start slightly faded and get bigger on mouseover (like the unraid icon; that one has the mouseover active). It is also a bit responsive, but there is a lot of extra CSS in the template that I haven't deleted.
  3. There is a bug in the script in step 3. It works well, but it will only update the first hostname because of the check that compares the IP against the last run. Make a small change so the comparison txt file is unique for each record. I changed lines 115/116 to:

if [ -f $HOME/.wan_ip-$CFHOST.txt ]; then
    OLD_WAN_IP=`cat $HOME/.wan_ip-$CFHOST.txt`

and line 179 to:

echo $WAN_IP > $HOME/.wan_ip-$CFHOST.txt

That way it creates an IP file unique to each hostname. Just a suggestion.
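The general pattern, as a minimal sketch (CFHOST and WAN_IP come from the script above; the "update the record" step is only a placeholder comment):

# keep one cached IP per hostname so every record is compared against its own last value
CACHE="$HOME/.wan_ip-$CFHOST.txt"
OLD_WAN_IP=""
[ -f "$CACHE" ] && OLD_WAN_IP=$(cat "$CACHE")
if [ "$WAN_IP" != "$OLD_WAN_IP" ]; then
    # ...update the DNS record for $CFHOST here...
    echo "$WAN_IP" > "$CACHE"
fi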
  4. FYI for Nextcloud: add the following to your location block and it works. Cheers

proxy_set_header Host $host:$server_port;
proxy_set_header X-Real-IP $remote_addr;
  5. Quoting my earlier post (the full nginx config is in post 7 below): Plex resolves but it does not detect the proper network, so when I use https://plex.domain.com it transcodes and streams via the internet, even though plex.domain.com resolves to 192.168.254.3 as set in my DNS and http://192.168.254.3:32400/web/index.html# plays locally. "Is Plex our version and running as host?" Yes, and bingo, I had it running as bridge for some reason. Switched to host and it now says "nearby", which is good enough to not send it via the internet.
Now my second question, for nextcloud. This is my conf:

server {
    listen 80;
    server_name cloud.mydomainhere.com;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl;
    server_name cloud.mydomainhere.com;
    root /config/www;
    index index.html index.htm index.php;

    ###SSL Certificates
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

    ###Diffie–Hellman key exchange ###
    ssl_dhparam /config/nginx/dhparams.pem;

    ###SSL Ciphers
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA38$

    ###Extra Settings###
    ssl_prefer_server_ciphers on;
    ssl_session_cache shared:SSL:10m;

    ### Add HTTP Strict Transport Security ###
    add_header Strict-Transport-Security "max-age=63072000; includeSubdomains";
    add_header Front-End-Https on;

    client_max_body_size 0;

    location / {
        proxy_pass https://192.168.254.3:444/;
    }
}

nextcloud config file:

<?php
$CONFIG = array (
  'instanceid' => 'XxxxxxxxxxxxxxX',
  'passwordsalt' => 'XxxxxxxxxxxxxxX',
  'secret' => 'XxxxxxxxxxxxxxX',
  'trusted_domains' =>
  array (
    0 => '192.168.254.3:444',
    1 => 'cloud.mydomain.com',
  ),
  'datadirectory' => '/mnt/OwnCloud_Data/',
  'overwrite.cli.url' => 'https://cloud.mydomain.com',
  'overwritehost' => 'cloud.mydomain.com',
  'overwriteprotocol' => 'https',
  'dbtype' => 'mysql',
  'version' => '9.1.0.16',
  'dbname' => 'owncloud',
  'dbhost' => '192.168.254.3:3306',
  'dbtableprefix' => 'oc_',
  'dbuser' => '',
  'dbpassword' => '',
  'logtimezone' => 'America/Toronto',
  'installed' => true,
  'theme' => '',
  'maintenance' => false,
  'memcache.local' => '\\OC\\Memcache\\APCu',
  'loglevel' => 1,
  'trashbin_retention_obligation' => 'auto',
  'ldapIgnoreNamingRules' => false,
  'updater.release.channel' => 'production',

However, I am still getting this message when I try the nextcloud site:
  6. This is great! I appreciate the dockers, but I think a light script is much nicer to just run occasionally than having a docker running all the time for such a simple task. Thanks for putting this together.
  7. So, I sort of have this working but not really. Plex resolves but it doesn't detect the proper network and thus thinks the server is remote. Plex shows this as the local IP: but my network is actually 192.168.254.x. If I access Plex using http://192.168.254.3:32400/web/index.html# it does play locally; however, when I use https://plex.domain.com it transcodes and streams via the internet. If I ping plex.domain.com it resolves properly to 192.168.254.3 as set in my DNS settings. I really want to access Plex using one URL whether I am local or remote. Here is my full nginx config file:

# redirect all traffic to https
server {
    listen 80;
    server_name _;
    return 301 https://$host$request_uri;
}

# main server block
server {
    listen 443 ssl default_server;
    root /config/www;
    index index.html index.htm index.php;
    server_name _;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-$
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        # With php5-cgi alone:
        fastcgi_pass 127.0.0.1:9000;
        # With php5-fpm:
        #fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        include /etc/nginx/fastcgi_params;
    }

    # sample reverse proxy config for password protected couchpotato running at IP 192.168.1.50 port 5050 with base url "cp"
    # notice this is within the same server block as the base
    # don't forget to generate the .htpasswd file as described on docker hub
    # location ^~ /cp {
    #     auth_basic "Restricted";
    #     auth_basic_user_file /config/nginx/.htpasswd;
    #     include /config/nginx/proxy.conf;
    #     proxy_pass http://192.168.1.50:5050/cp;
    # }
}

#
# PLEX
#
server {
    listen 443 ssl;
    root /config/www;
    index index.html index.htm index.php;
    server_name plex.*;

    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-$
    ssl_prefer_server_ciphers on;

    client_max_body_size 0;

    location / {
        # auth_basic "Restricted";
        # auth_basic_user_file /config/nginx/.htpasswd;
        include /config/nginx/proxy.conf;
        proxy_pass http://192.168.254.3:32400;
        # proxy_set_header Host $host;
        # proxy_set_header X-Real-IP $remote_addr;
        # proxy_set_header X-Forwarded-For $remote_addr;
    }

    #
    # PlexPy
    #
    location ^~ /plexpy/ {
        proxy_pass https://192.168.254.3:8181;
        include /config/nginx/proxy.conf;
        proxy_bind $server_addr;
        proxy_set_header X-Forwarded-Host $server_name;
        proxy_set_header X-Forwarded-Ssl on;
        # auth_basic "Restricted";
        # auth_basic_user_file /config/.htpasswd;
    }
}

For now I commented out the authentication and some other tests. Will add it back once it's all sorted. Basically I just need Plex to know it's local so it's not sending my data out to the internet and then back in. I also have a nextcloud issue, but one at a time.
  8. Is there a MariaDB client package available? I have a few commands I want to run in a bash script to connect to my MariaDB docker, but I am not having any luck finding a client package for Slackware.
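A possible workaround, sketch only: skip the host-side client and exec the client that already ships inside the container. The container name, user, and password variable here are placeholders.

# run SQL from a host-side bash script via the client inside the MariaDB container
docker exec -i mariadb mysql -uroot -p"$MYSQL_ROOT_PASSWORD" -e "SHOW DATABASES;"

# or feed it a whole SQL file from the host
docker exec -i mariadb mysql -uroot -p"$MYSQL_ROOT_PASSWORD" mydb < maintenance.sql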
  9. I'm curious what could cause this. I got a corruption message on a file:

BLAKE2 hash key mismatch, /mnt/disk5/xxxxxxxxxxxxxxxxxxxx.1080p.BluRay.X264-AMIABLE.mkv is corrupted

I checked the hash in the export:

ab681975c59300f875b81763f4e3ca7e1186240c04400abf71939aa43d762c803cd6a43530a05ceb2d869250e32388045600c0fa26b135b2e69c034fbd939c95 */mnt/disk5/Movies-HD/xxxxxxxxxxxxxx.1080p.BluRay.X264-AMIABLE.mkv

I re-downloaded the file from my online backup, moved it to my array, and it got rehashed into disks.export.20161230.new.hash:

ab681975c59300f875b81763f4e3ca7e1186240c04400abf71939aa43d762c803cd6a43530a05ceb2d869250e32388045600c0fa26b135b2e69c034fbd939c95 */mnt/disk1/Movies-HD/xxxxxxxxxxxxxxxxxxx.1080p.BluRay.X264-AMIABLE (2).mkv

The hashes are identical...
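For reference, a quick way to double-check a file against an export line like the ones above (a sketch; b2sum from coreutils emits the same 128-hex-character BLAKE2 digest format, and the file name here is a placeholder):

# hash the file directly
b2sum "/mnt/disk1/Movies-HD/movie.mkv"

# or verify straight from the export, which already uses the "hash *path" layout that check mode expects
grep "movie.mkv" disks.export.20161230.new.hash | b2sum -c -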
  10. Solution to my problem if anyone else encounters the same thing:

1. Downloaded the new tar file from the Nextcloud website (probably not needed).
2. SSH to the server.
3. Run docker ps to get the docker ID.
4. Enter the docker: docker exec -it abcIDxyz bash
5. Go to the nextcloud directory: cd /config/www/
6. Give the webserver user (abc) ownership of the files: chown -R abc:abc nextcloud
7. Enter the nextcloud directory: cd nextcloud
8. Run the upgrader manually: sudo -u abc php occ upgrade
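The same fix can be run non-interactively from the host, as a sketch (the container name nextcloud is an assumption; the abc user and /config/www/nextcloud path are the ones from the steps above):

# fix ownership, then run the occ upgrader, without entering the container
docker exec nextcloud chown -R abc:abc /config/www/nextcloud
docker exec -u abc nextcloud php /config/www/nextcloud/occ upgrade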
  11. I had to delete my docker image file for an unrelated issue, and when I rebuilt it (my nextcloud wasn't updated before) it now gives me this:

Update needed
Please use the command line updater because you have a big instance. For help, see the documentation.

I have no idea how to fix it. Any ideas?
  12. It was an open-box clearance at Fry's. Right place at the right time, I suppose.
  13. Just to close off this topic with the resolution: I kept getting errors, so I replaced the motherboard/CPU with an Intel G3258 and an MSI Z97-G55 SLI (overkill on the motherboard, right? but for $37 how could I not) and there are no errors anymore. I am quite certain that the SATA controller or something else on the motherboard got fried.
  14. I pulled everything apart, reset the CMOS, started from scratch, and put as many drives as possible on my SATA card instead of the motherboard, and so far no errors. 14 hours to rebuild. Fingers crossed.
  15. I'm so confused. I thought the same, that it's a bad disk or cables, but it's mostly on ata9 and I have sprinkles of other drives too. And I don't even have an ata12. Maybe my SATA controller went poof.
  16. Ya, it does look like that got changed somehow. I have set it back to AHCI mode and I moved the drives to a different port. I'm wondering if it's a drive problem? That would suck since I already have one that needs to be rebuilt. I have a spare 2TB one, but I can't rebuild 2 drives at the same time.
  17. I noticed in the system log that ata6.01 is actually the 2TB drive, so I replaced the cables on both my 2TB drives as well. Didn't help...
  18. I had a SMART alert for a bad drive. I replaced it with a brand new 4TB, but I am still getting errors in the unraid log. I tried replacing the SATA cable with a brand new one with no change. I changed the port on the SATA card (and my SSD is also on that card without any issues), but I am still getting these exceptions. Once it configures for PIO4 the errors stop, but then the CPU usage spikes up to 40+ and the system is unusable until I stop the parity sync. I'm testing the drive that was reported bad in another PC and so far it has no errors, so I tend to think it's not a drive issue. I'm not sure what to check anymore. Attached system diagnostics: fileserver-diagnostics-20161211-1131.zip
  19. Here is another one for hard disk power-on time:

#
# Hard Disk Power On Hours
#
# Added -n standby to the check so smartctl is not spinning up my drives
#
i=0
for DISK in "${DISK_ARRAY[@]}"
do
    smartctl -n standby -A /dev/${DISK} | grep -E "Power_On_Hours" | awk '{ print $10 }' | while read POWERONHOURS
    do
        curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "diskPowerOnHours,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} PowerOnHours=${POWERONHOURS} ${CURDATE}000000000" >/dev/null 2>&1
    done
    ((i++))
done
  20. If anyone has an Edimax smart plug: the cost calculation is complicated (thanks to the power company), but you get the point from the code.

#
# SMART PLUG
# EDIMAX SMART PLUG SWITCH
# Model: SP-1101W
#
# Off-Peak: Weekends all day and weekdays 19:00 - 07:00 -- 0.000145 cents/watt-min
# Mid-Peak: Weekdays 17:00 - 19:00 & 07:00-11:00 -- 0.00022 cents/watt-min
# Peak: Weekdays 11:00 - 17:00 -- 0.0003 cents/watt-min
# Extra fees: 4.14c/kwh
OffPeak=0.000145
MidPeak=0.00022
Peak=0.0003
ExtraFees=0.000069
AdjustmentFactor=1.076

day=$(date +"%u")
hour=$(date +"%H")
DEVICE="UNRAID-PLUG"

curl -d @/boot/custom/influxdb/edi.xml http://admin:[email protected]:10000/smartplug.cgi -o /boot/custom/influxdb/output.txt

CURRENT=`cat /boot/custom/influxdb/output.txt | sed -n 's:.*<Device.System.Power.NowCurrent>\(.*\)</Device.System.Power.NowCurrent>.*:\1:p'`
POWER=`cat /boot/custom/influxdb/output.txt | sed -n 's:.*<Device.System.Power.NowPower>\(.*\)</Device.System.Power.NowPower>.*:\1:p'`

echo ${POWER}
echo ${CURRENT}

# Calculates the cost of each minute of electricity use based on the time of day
if [ "$day" -eq 6 ] || [ "$day" -eq 7 ]; then
    COST=$(echo $POWER $OffPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
elif [ "$hour" -ge 19 ] || [ "$hour" -lt 7 ]; then
    COST=$(echo $POWER $OffPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
elif [ "$hour" -ge 11 ] && [ "$hour" -lt 17 ]; then
    COST=$(echo $POWER $Peak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
else
    COST=$(echo $POWER $MidPeak $ExtraFees $AdjustmentFactor | awk '{printf "%.6f\n",$1*$4*($2+$3)}')
fi

curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "powerCost,Device=${DEVICE} Cost=${COST} ${CURDATE}000000000" >/dev/null
curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "smartplugStats,Device=${DEVICE} Current=${CURRENT},Power=${POWER} ${CURDATE}000000000" >/dev/null

edi.xml:

<?xml version="1.0" encoding="UTF-8"?>
<SMARTPLUG id="edimax">
  <CMD id="get">
    <NOW_POWER></NOW_POWER>
  </CMD>
</SMARTPLUG>
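A quick sanity check of the cost formula with made-up numbers: at 100 W during off-peak, cost per minute = POWER * AdjustmentFactor * (OffPeak + ExtraFees) = 100 * 1.076 * (0.000145 + 0.000069), which is roughly 0.023 cents.

# same awk expression as the script above, fed with example values
echo 100 0.000145 0.000069 1.076 | awk '{printf "%.6f\n",$1*$4*($2+$3)}'
# prints 0.023026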
  21. It may be related to the CPU; I have a single-core Sempron, so the stats output may be different. If you SSH into the server, what do you get when you run:

top -b -n 3 -d.2 | grep "Cpu" | tail -n 1

and

cat /proc/loadavg

Feel free to PM me if you want to try and work through it. It should be a relatively easy fix, but it may be user specific. These scripts may not be that generic, especially the hard drive one; I modified it a lot since I posted it, and it's now really specific. I can't find a good way to make it generic.
  22. Quoting Squid's reply: "No. I'll add it, but won't add it to the GUI, as the odds of a non-successful backup rise significantly because of files being open, and I don't want the people who just want their nzbGet running all the time to think that they can simply exclude it and have a successful backup done. I'll make a hidden method of selecting which apps to stop and we'll go from there." Awesome, Squid. Thanks.
  23. This plugin is great, but I was wondering if there is a way to exclude certain dockers from being stopped? I back up my mysql database separately and I'd prefer not to stop it if possible.
  24. This is a great plugin, thanks. Does anyone know if it can monitor multiple UPSes? I have another UPS that powers my router/switches etc., and I want to monitor its power usage through unraid. It's working well for this UPS, but I am not sure if I can just attach the USB and monitor 2?
  25. I got some motivation from the scripts posted here to add some monitoring to my UNRAID installation as well. I figured it was a bit less resource intensive to do it directly from bash, but I'm totally guessing on that. I also wrote scripts for my DD-WRT router and Windows PCs (PowerShell), but for now I'd share the unraid scripts I wrote in case they are useful to anyone. I'm not that experienced with bash scripting, so if there is anything I could do better I'd appreciate the corrections. All I ask is that if you make improvements you please share them back to me and the community. I actually created 3 scripts for different intervals: 1, 5 and 30 mins.

Cron Jobs

#
# InfluxDB Stats 1 Minute (Delay from reading CPU when all the other PCs in my network report in)
#
* * * * * sleep 10; /boot/custom/influxdb/influxStats_1m.sh > /dev/null 2>&1
#
# InfluxDB Stats 5 Minute
#
0,10 * * * * /boot/custom/influxdb/influxStats_5m.sh > /dev/null 2>&1
#
# InfluxDB Stats 30 Minute
#
0,30 * * * * /boot/custom/influxdb/influxStats_30m.sh > /dev/null 2>&1

Basic variables I use in all 3 scripts:

#
# Set Vars
#
DBURL=http://192.168.254.3:8086
DBNAME=statistics
DEVICE="UNRAID"
CURDATE=`date +%s`

CPU

Records CPU metrics - load averages and CPU time.

# Had to increase to 10 samples because I was getting a spike each time I read it. This seems to smooth it out more
top -b -n 10 -d.2 | grep "Cpu" | tail -n 1 | awk '{print $2,$4,$6,$8,$10,$12,$14,$16}' | while read CPUusr CPUsys CPUnic CPUidle CPUio CPUirq CPUsirq CPUst
do
    top -bn1 | head -3 | awk '/load average/ {print $12,$13,$14}' | sed 's/,//g' | while read LAVG1 LAVG5 LAVG15
    do
        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "cpuStats,Device=${DEVICE} CPUusr=${CPUusr},CPUsys=${CPUsys},CPUnic=${CPUnic},CPUidle=${CPUidle},CPUio=${CPUio},CPUirq=${CPUirq},CPUsirq=${CPUsirq},CPUst=${CPUst},CPULoadAvg1m=${LAVG1},CPULoadAvg5m=${LAVG5},CPULoadAvg15m=${LAVG15} ${CURDATE}000000000" >/dev/null 2>&1
    done
done

Memory Usage

top -bn1 | head -4 | awk '/Mem/ {print $6,$8,$10}' | while read USED FREE CACHE
do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "memoryStats,Device=${DEVICE} memUsed=${USED},memFree=${FREE},memCache=${CACHE} ${CURDATE}000000000" >/dev/null 2>&1
done

Network

if [[ -f byteCount.tmp ]] ; then
    # Read the last values from the tmpfile - Line "eth0"
    grep "eth0" byteCount.tmp | while read dev lastBytesIn lastBytesOut
    do
        cat /proc/net/dev | grep "eth0" | grep -v "veth" | awk '{print $2, $10}' | while read currentBytesIn currentBytesOut
        do
            # Write out the current stats to the temp file for the next read
            echo "eth0" ${currentBytesIn} ${currentBytesOut} > byteCount.tmp

            totalBytesIn=`expr ${currentBytesIn} - ${lastBytesIn}`
            totalBytesOut=`expr ${currentBytesOut} - ${lastBytesOut}`

            # Prevent negative numbers when the counters reset. Could miss data but it should be a marginal amount.
            if [ ${totalBytesIn} -le 0 ] ; then
                totalBytesIn=0
            fi
            if [ ${totalBytesOut} -le 0 ] ; then
                totalBytesOut=0
            fi

            curl -is -XPOST "$DBURL/write?db=$DBNAME&u=$USER&p=$PASSWORD" --data-binary "interfaceStats,Interface=eth0,Device=${DEVICE} bytesIn=${totalBytesIn},bytesOut=${totalBytesOut} ${CURDATE}000000000" >/dev/null 2>&1
        done
    done
else
    # Write out blank file
    echo "eth0 0 0" > byteCount.tmp
fi

Hard Disk IO

# Gets the stats for disk#
#
# The /proc/diskstats file displays the I/O statistics of block devices.
# Each line contains the following 14 fields:
#  1 - major number
#  2 - minor number
#  3 - device name
#  4 - reads completed successfully
#  5 - reads merged
#  6 - sectors read <---
#  7 - time spent reading (ms)
#  8 - writes completed
#  9 - writes merged
# 10 - sectors written <---
# 11 - time spent writing (ms)
# 12 - I/Os currently in progress
# 13 - time spent doing I/Os (ms)
# 14 - weighted time spent doing I/Os (ms)
#
# Special Cases
# sda = Flash/boot
# sdf = Cache
# sdd = Parity
if [[ -f diskByteCountTest.tmp ]] ; then
    cat /proc/diskstats | grep -E 'md|sdd|sda|sdf|loop0' | grep -E -v 'sd[a-z]1' | sed 's/md//g' | awk '{print "disk" $3, $6, $10}' | while read DISK currentSectorsRead currentSectorsWrite
    do
        # Check if the disk is in the temp file.
        if grep ${DISK} diskByteCountTest.tmp
        then
            grep ${DISK} diskByteCountTest.tmp | while read lDISK lastSectorsRead lastSectorsWrite
            do
                # Replace current disk stats with new stats for the next read
                sed -i "s/^${DISK}.*/${DISK} ${currentSectorsRead} ${currentSectorsWrite}/" diskByteCountTest.tmp

                # Need to multiply by 512 to convert from sectors to bytes
                (( totalBytesRead = 512 * (${currentSectorsRead} - ${lastSectorsRead}) ))
                (( totalBytesWrite = 512 * (${currentSectorsWrite} - ${lastSectorsWrite}) ))
                (( totalBytes = totalBytesRead + totalBytesWrite ))

                # Cases
                case ${DISK} in
                    "disksda" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=boot,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    "disksdd" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=parity,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    "disksdf" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=cache,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    "diskloop0" )
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=docker,Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                    *)
                        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStats,Disk=${DISK},Device=${DEVICE} BytesPersec=${totalBytes},ReadBytesPersec=${totalBytesRead},WriteBytesPersec=${totalBytesWrite} ${CURDATE}000000000" >/dev/null 2>&1
                        ;;
                esac
            done
        else
            # If the disk wasn't in the temp file then add it to the end
            echo ${DISK} ${currentSectorsRead} ${currentSectorsWrite} >> diskByteCountTest.tmp
        fi
    done
else
    # Write out a new file
    cat /proc/diskstats | grep -E 'md|sdd|sda|sdf|loop0' | grep -E -v 'sd[a-z]1' | sed 's/md//g' | awk '{print "disk" $3, $6, $10}' | while read DISK currentSectorsRead currentSectorsWrite
    do
        echo ${DISK} ${currentSectorsRead} ${currentSectorsWrite} >> diskByteCountTest.tmp
    done
fi

Number of Dockers Running

docker info | grep "Running" | awk '{print $2}' | while read NUM
do
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "dockersRunning,Device=${DEVICE} Dockers=${NUM} ${CURDATE}000000000" >/dev/null 2>&1
done

Hard Disk Temperatures

# Current array assignment.
# I could pull this automatically from /var/local/emhttp/disks.ini
# Parsing it wouldn't be that easy though.
DISK_ARRAY=( sdd sdg sde sdi sdc sdb sdh sdf )
DESCRIPTION=( parity disk1 disk2 disk3 disk4 disk5 disk6 cache )

#
# Added -n standby to the check so smartctl is not spinning up my drives
#
i=0
for DISK in "${DISK_ARRAY[@]}"
do
    smartctl -n standby -A /dev/$DISK | grep "Temperature_Celsius" | awk '{print $10}' | while read TEMP
    do
        curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "DiskTempStats,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Temperature=${TEMP} ${CURDATE}000000000" >/dev/null 2>&1
    done
    ((i++))
done

Hard Disk Spinup Status

# Current array assignment.
# I could pull this automatically from /var/local/emhttp/disks.ini
# Parsing it wouldn't be that easy though.
DISK_ARRAY=( sdd sdg sde sdi sdc sdb sdh sdf )
DESCRIPTION=( parity disk1 disk2 disk3 disk4 disk5 disk6 cache )

i=0
for DISK in "${DISK_ARRAY[@]}"
do
    hdparm -C /dev/$DISK | grep 'state' | awk '{print $4}' | while read STATUS
    do
        #echo ${DISK} : ${STATUS} : ${DESCRIPTION[$i]}
        if [ ${STATUS} = "standby" ]
        then
            curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStatus,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Active=0 ${CURDATE}000000000" >/dev/null 2>&1
        else
            curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "diskStatus,DEVICE=${DEVICE},DISK=${DESCRIPTION[$i]} Active=1 ${CURDATE}000000000" >/dev/null 2>&1
        fi
    done
    ((i++))
done

Hard Disk Space

# Gets the stats for boot, disk#, cache, user
#
df | grep "mnt/\|/boot\|docker" | grep -v "user0\|containers" | sed 's/\/mnt\///g' | sed 's/%//g' | sed 's/\/var\/lib\///g' | sed 's/\///g' | while read MOUNT TOTAL USED FREE UTILIZATION DISK
do
    if [ "${DISK}" = "user" ]; then
        DISK="array_total"
    fi
    curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "drive_spaceStats,Device=${DEVICE},Drive=${DISK} Free=${FREE},Used=${USED},Utilization=${UTILIZATION} ${CURDATE}000000000" >/dev/null 2>&1
done

Uptime

UPTIME=`cat /proc/uptime | awk '{print $1}'`
curl -is -XPOST "$DBURL/write?db=$DBNAME" --data-binary "uptime,Device=${DEVICE} Uptime=${UPTIME} ${CURDATE}000000000" >/dev/null 2>&1

Disk 4 and 5 just finished their file integrity check. I also have a hole every day at 11:36-12:36 that I haven't figured out yet; I'll need to investigate that, but I don't think it's related to these scripts. If anyone wants the grafana json, just let me know and I can post it as well. Please post any suggestions for other metrics to capture.
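To chase down gaps like that 11:36-12:36 hole, a quick check of what actually got written, as a sketch against the same InfluxDB HTTP API the scripts post to (measurement and field names are the ones from the CPU script above; adjust as needed):

# count cpuStats points per hour over the last day; uses the same DBURL/DBNAME vars as the scripts
curl -sG "$DBURL/query" --data-urlencode "db=$DBNAME" \
     --data-urlencode "q=SELECT count(CPUusr) FROM cpuStats WHERE time > now() - 1d GROUP BY time(1h)"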