axipher

Members
  • Content Count: 37
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About axipher
  • Rank: Advanced Member


  1. To expand on this, the typical way routers are set up is with DHCP, where they provide at least the following:
     - IP address and subnet mask for the client machine
     - DNS settings for the client to use (most allow two DNS servers and will default to their own DNS servers or your ISP's DNS servers)
     - Gateway to use to route all internet traffic

     In my case, I have the router's DHCP DNS settings set up with my Pi-Hole as the first DNS server and 1.1.1.1 as the second, in case my Pi-Hole Docker is down for an update or Unraid is down for maintenance. This way, as soon as clients get a new address from DHCP, they should also pull the DNS settings you set up. You may need to reboot your router to force them to request new addresses rather than re-use one if the DHCP lease hasn't expired yet. (Yes, I simplified some things so that this may help others searching for help as well; please feel free to correct me on anything and I will edit my post.)

     P.S.: I used to have Pi-Hole's primary DNS set to steamcache so that I could both block ads and serve cached Steam files, but I've since disabled that with a faster internet connection and per-client speed limits on my router.
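On a Linux client you can sanity-check that the DHCP-provided DNS servers actually arrived by looking at the resolver config. A minimal sketch; the sample file below stands in for a real /etc/resolv.conf, and 192.168.1.2 is just an illustrative Pi-Hole address:

```shell
# Illustrative sample of what a DHCP client's resolver config might look like
# (192.168.1.2 standing in for a Pi-Hole; your addresses will differ)
cat > /tmp/resolv.conf.sample <<'EOF'
nameserver 192.168.1.2
nameserver 1.1.1.1
EOF

# List the DNS servers the client is actually using, in order
grep '^nameserver' /tmp/resolv.conf.sample | awk '{print $2}'
```

On a real client you would run `grep '^nameserver' /etc/resolv.conf` instead; the Pi-Hole's address should appear first if the DHCP lease was renewed.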
  2. Hi Exa, I don't want to downplay your problem, but to offer an alternative solution instead. I gather that you want to use your Intel CPU's iGPU for hardware transcoding instead of software transcoding, to take the load off your limited number of CPU cores. GPU transcoding is definitely great for power efficiency on a server as well.

     I'm not sure which motherboard you are running, as you just said a Gigabyte Z270, which could be anything from mini-ITX up to full ATX, but I'm assuming you have at least one free PCIe slot and hopefully a case that can support a 2-slot GPU. I would highly recommend something like a GTX 1050 Ti, which can hardware-transcode 4K in Plex with the NVDEC patch (or an upcoming Plex Server release with official NVDEC support on Linux), and potentially more than the Nvidia-limited 2 streams via certain methods. Linuxserver.io has a plugin that installs an Nvidia-specific build of the latest Unraid to allow passing the GPU through to Plex for this purpose.

     The GTX 1050 Ti might seem like a costly upgrade from an iGPU just for transcoding, but it does a fantastic job and uses minimal power when idle. I would stick with the 4 GB VRAM model, as Linux 4K transcodes require ~1.5 GB each in the worst case from my experience; but if you only have 1080p content, you can probably get by with just a GTX 1050 2GB, which doesn't require an extra power cable and gets all of its power from the motherboard slot.

     If you need more help with an Nvidia setup for Plex, you might get a little more assistance here, but Linuxserver.io also has a great Discord for support. Cheers, axipher
  3. Anyone else getting errors with the Plex Docker relating to Python or libcrypto?

     Sep 10 15:30:30 CACHE kernel: Plex Media Scan[22769]: segfault at 300000010 ip 000014923da39097 sp 000014923430b0d0 error 4 in libcrypto.so.1.0.0[14923d928000+204000]
     Sep 11 22:11:31 CACHE kernel: traps: Plex Media Scan[10223] general protection ip:1484de7b85dc sp:7fff0d1091a0 error:0 in libpython2.7.so.1.0[1484de6c6000+195000]
     Sep 12 04:51:47 CACHE kernel: Plex Media Scan[14863]: segfault at 300000010 ip 000014e711121097 sp 000014e7039d2fe0 error 4 in libcrypto.so.1.0.0[14e711010000+204000]
     Sep 12 06:01:47 CACHE kernel: Plex Media Scan[27341]: segfault at 30000001c ip 000014d6f5f21097 sp 000014d6ec7f30d0 error 4 in libcrypto.so.1.0.0[14d6f5e10000+204000]
     Sep 12 09:06:50 CACHE kernel: Plex Media Scan[28297]: segfault at 51 ip 000014daa525a097 sp 000014da97b12fe0 error 4 in libcrypto.so.1.0.0[14daa5149000+204000]
     Sep 12 16:41:51 CACHE kernel: traps: Plex Media Scan[21409] trap stack segment ip:14e7b94ab4a9 sp:7ffe6fee2f70 error:0 in libpython2.7.so.1.0[14e7b93b9000+195000]
  4. My apologies everyone, I forgot to update here with the UserScript I managed to get working to output NUT-Settings values to InfluxDB for use with Grafana. You will likely have to change which values are being pulled, as I'm using an Eaton 5PX commercial UPS that reports back a lot of values, along with 3 different outlet groups with their own power monitoring.

     The main change is that I'm now parsing the output of the 'upsc' CLI command instead of 'apcaccess'. I had to play with the 'preg_match' calls quite a bit and could not get a single pattern to work for both text and number values, so I separated those out as two different array pairs. The code is dirty, but it works.

     I decided to output some of the values under the same APC variable names as the original script, and under a database called 'APC', so that it would still easily work with Grafana dashboards already set up like GilbN's; so be aware that 3 of the values get exported twice:
     - ups.load as LOADPCT
     - battery.runtime as TIMELEFT
     - battery.charge as BCHARGE

     Another thing to note is whether your UPS reports the time remaining in seconds or minutes. This might require either adding some math in the UserScript on the exported value, or changing the Grafana panel queries or math along with pre/suffixes.
And the UserScript: NUT-Settings_to_InFluxDB

#!/usr/bin/php
<?php
$commandUPSC = "upsc";
$argsUPSC = "ups@127.0.0.1";

$tagsArrayAPC = array(
    "LOADPCT", "TIMELEFT", "BCHARGE",
    "input_voltage", "input_current", "input_frequency",
    "battery_charge", "battery_runtime",
    "output_voltage", "output_current",
    "ups_load", "ups_power", "ups_power_nominal",
    "ups_realpower", "ups_realpower_nominal",
    "outlet_current", "outlet_power", "outlet_realpower", "outlet_powerfactor",
    "outlet_1_current", "outlet_1_power", "outlet_1_realpower", "outlet_1_powerfactor",
    "outlet_2_current", "outlet_2_power", "outlet_2_realpower", "outlet_2_powerfactor"
);

$tagsArrayAPC_text = array(
    "battery_charger_status",
    "outlet_1_desc", "outlet_1_status",
    "outlet_2_desc", "outlet_2_status",
    "outlet_desc",
    "ups_mfr", "ups_model", "ups_serial", "ups_status", "ups_type"
);

$tagsArrayUPSC = array(
    "ups.load", "battery.runtime", "battery.charge",
    "input.voltage", "input.current", "input.frequency",
    "battery.charge", "battery.runtime",
    "output.voltage", "output.current",
    "ups.load", "ups.power", "ups.power.nominal",
    "ups.realpower", "ups.realpower.nominal",
    "outlet.current", "outlet.power", "outlet.realpower", "outlet.powerfactor",
    "outlet.1.current", "outlet.1.power", "outlet.1.realpower", "outlet.1.powerfactor",
    "outlet.2.current", "outlet.2.power", "outlet.2.realpower", "outlet.2.powerfactor"
);

$tagsArrayUPSC_text = array(
    "battery.charger.status",
    "outlet.1.desc", "outlet.1.status",
    "outlet.2.desc", "outlet.2.status",
    "outlet.desc",
    "ups.mfr", "ups.model", "ups.serial", "ups.status", "ups.type"
);

// Example readings from Eaton 5PX 3000:
// input.voltage: 121.8
// input.current: 0.00
// input.frequency: 60.0
// battery.charger.status: resting
// battery.charge: 100
// battery.runtime: 3590
// output.voltage: 122.1
// output.current: 0.80
// ups.mfr: EATON
// ups.model: Eaton 5PX 3000
// ups.type: offline / line interactive
// ups.serial: XXXXXXXXXX
// ups.status: OL
// ups.load: 3
// ups.power: 98
// ups.power.nominal: 2880
// ups.realpower: 81
// ups.realpower.nominal: 2700
// outlet.desc: Main Outlet
// outlet.current: 0.80
// outlet.power: 98
// outlet.realpower: 81
// outlet.powerfactor: 82.00
// outlet.1.desc: PowerShare Outlet 1
// outlet.1.current: 0.00
// outlet.1.power: 0
// outlet.1.realpower: 0
// outlet.1.powerfactor: 0.00
// outlet.1.status: on
// outlet.2.desc: PowerShare Outlet 2
// outlet.2.current: 0.00
// outlet.2.power: 0
// outlet.2.realpower: 0
// outlet.2.powerfactor: 0.00
// outlet.2.status: on

// Do system call
// For built-in UPS monitor:
// $call = $commandAPC." ".$argsAPC;
// $output = shell_exec($call);

// For NUT-Settings:
$call = $commandUPSC." ".$argsUPSC;
$output = shell_exec($call);

// Parse output for tag and value (numeric values)
for ($i = 0, $j = count($tagsArrayAPC); $i < $j; $i++) {
    $tag = $tagsArrayUPSC[$i];
    $field = $tagsArrayAPC[$i];
    // preg_match("/".$tag."\s*:\s([\d|\.]+)/si", $output, $match);
    preg_match("/".$tag.":\s([\d]+)/si", $output, $match);
    $newVal = $match[1];
    // Debug
    echo "Found value of '$newVal' under '$tag;' placing in '$field'\n";
    // Send measurement, tag and value to Influx
    echo "Float on my friend.....\n";
    sendDB($newVal, $field);
    echo "\n\n";
}

// Parse output for tag and value (string values)
for ($i = 0, $j = count($tagsArrayAPC_text); $i < $j; $i++) {
    $tag = $tagsArrayUPSC_text[$i];
    $field = $tagsArrayAPC_text[$i];
    // preg_match("/".$tag."\s*:\s([\d|\.]+)/si", $output, $match);
    // preg_match("/".$tag.":\s([\d|\D]+)$/", $output, $match);
    preg_match("#".$tag.":\s([\w|\s|/]+)\R#", $output, $match);
    $newVal = $match[1];
    // Debug
    echo "Found value of '$newVal' under '$tag;' placing in '$field'\n";
    // Send measurement, tag and value to Influx
    echo "It was a string!!!!!\n";
    sendDB_String($newVal, $field);
    echo "\n\n";
}

// Send to InfluxDB (numeric values)
function sendDB($val, $tagname) {
    $curl = "curl -i -XPOST 'http://127.0.0.1:8086/write?db=UPS' --data-binary 'APC,host=CACHE,region=us-west ".$tagname."=".$val."'";
    $execsr = exec($curl);
}

// Send to InfluxDB (string version)
function sendDB_String($val, $tagname) {
    $curl = "curl -i -XPOST 'http://127.0.0.1:8086/write?db=UPS' --data-binary 'APC,host=CACHE,region=us-west ".$tagname."=\"".$val."\"'";
    $execsr = exec($curl);
}
?>
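On the seconds-vs-minutes note above: 'upsc' reports battery.runtime in seconds, while apcaccess's TIMELEFT is in minutes, so one option is converting before the value is written to InfluxDB. A minimal sketch of that conversion in shell (the sample value is the one from the example readings above):

```shell
# battery.runtime from upsc is in seconds (sample value from the readings above)
runtime_seconds=3590

# Convert to minutes with one decimal place, matching apcaccess's TIMELEFT style
runtime_minutes=$(awk -v s="$runtime_seconds" 'BEGIN { printf "%.1f", s / 60 }')
echo "$runtime_minutes"   # prints 59.8
```

The same division could instead live in the PHP script (on the TIMELEFT value before sendDB) or in the Grafana panel's query math; either way, only do it in one place.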
  5. Thanks @bluemonster and @ljm42! That fix and User Script resolved my Unraid 6.7.2 Docker update issue.
  6. Hey Ian, any chance you can expand on how you updated the alpine repo to latest? I'm trying to get this to work as well. EDIT: Just changed the Repository from telegraf:alpine to telegraf:latest, restarted the Docker, and it is working great.
  7. For anyone who came here from the "SpeedtestforInfluxDB" Docker, I found a quick way to get proper upload speeds. On the current build, I get 380-400 down and 4-6 up on my fibre connection, which is actually a 400/200. Browsing around the web, I found this page for speedtest-cli: https://www.howtoforge.com/tutorial/check-internet-speed-with-speedtest-cli-on-ubuntu/ Simply opening the Console for the SpeedtestforInfluxDB Docker and running the following command upgraded the version of speedtest-cli being used, and now the results in the Log are much closer to my 400/200 speeds: pip install speedtest-cli --upgrade Cheers community
  8. I'm by no means a super experienced programmer, so I'm hoping to get some help in adapting this script to instead export the values from NUT-Settings to InfluxDB using the same fields. The goal is to have my Eaton UPS monitored in a Grafana Dashboard from GilbN: https://technicalramblings.com/blog/setting-grafana-influxdb-telegraf-ups-monitoring-unraid/ but the built-in Unraid UPS Monitor only works with APC brand UPS's.

     Here is the normal output from "apcaccess" that the script was designed for:

     root@CACHE:~# apcaccess status
     LOADPCT  : 3.0 Percent
     BCHARGE  : 100.0 Percent
     TIMELEFT : 130.6 Minutes

     Here is the normal output from "upsc" from NUT-Settings for the same UPS:

     root@CACHE:~# upsc eaton5px3000@127.0.0.1
     ups.load: 3
     battery.charge: 100
     battery.runtime: 7833

     The output and fields from the UPS are different, and "upsc" provides the runtime of the UPS in seconds instead of minutes. Most of the code makes perfect sense to me and I can handle most of the PHP there; just the "preg_match" line is beyond me. So ideally I would like, for example, "ups.load: 3" from "upsc" to get written to InfluxDB as "LOADPCT" the same way the original apcaccess value was, and the same for the other two variables.

     Thanks in advance if anyone can provide any help here, even just an explanation of the "preg_match" line so that I can maybe try to figure it out myself. I figured I would post here in case a modified UserScript for use with NUT-Settings could be archived here for others to use in the future.
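For what it's worth, the pattern the script needs just captures the number that follows a "tag:" prefix. A rough shell equivalent of that extraction, run here against the sample upsc output pasted above instead of a live call:

```shell
# Sample upsc output from the post above
upsc_output='ups.load: 3
battery.charge: 100
battery.runtime: 7833'

# Capture the digits after "ups.load: " -- the same job the PHP
# preg_match("/ups\.load:\s([\d]+)/", ...) call does: match the tag name,
# a colon and whitespace, then capture one or more digits.
printf '%s\n' "$upsc_output" | sed -n 's/^ups\.load: \([0-9][0-9]*\)$/\1/p'
```

The captured value ("3" here) is what would then be written to InfluxDB under LOADPCT; the same pattern with a different tag name handles the other two variables.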
  9. I thought about that, but I would prefer not to install Telegraf directly on Unraid if possible and to continue using Dockers; installing a package directly on Unraid that would survive upgrades would be a separate topic, I think.
  10. What would be the easiest way to get NUT-Client integrated into a running Telegraf Docker so that I can have Telegraf also relay UPS statistics to InfluxDB? The eventual goal is having UPS stats from the NUT-Settings plugin in a Grafana Dashboard from here: https://technicalramblings.com/blog/setting-grafana-influxdb-telegraf-ups-monitoring-unraid/

      I found these sites for reference for NUT and Telegraf:
      https://blog.lbdg.me/n-u-t-ups-monitoring-via-pfsense-grafana/
      https://yegor.pomortsev.com/post/monitoring-everything/
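One possible approach (a sketch, untested on the stock Telegraf Docker): install the NUT client inside the container via its package manager, then use Telegraf's [[inputs.exec]] plugin with data_format = "influx" and a small wrapper that turns upsc's "key: value" output into InfluxDB line protocol. A rough wrapper, run here against a pasted sample instead of a live upsc call (the measurement and host tag names are placeholders):

```shell
# Sample standing in for real "upsc ups@127.0.0.1" output
upsc_sample='ups.load: 3
battery.charge: 100'

# Convert each "key: value" reading into one line-protocol point, which
# Telegraf's exec input can ingest directly with data_format = "influx"
printf '%s\n' "$upsc_sample" | awk -F': ' '{
  gsub(/\./, "_", $1)                       # dots to underscores for field names
  printf "ups,host=CACHE %s=%s\n", $1, $2   # measurement,tag field=value
}'
```

On a live system the exec input would point at a script containing the upsc call plus this awk pipe; string readings like ups.status would additionally need quoting, as in the PHP script earlier in this thread.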
  11. Are there any current plugins on Unraid that use Grafana on the Unraid Dashboard page?
  12. Hey lounge, I have limited code-writing experience, so I'm not sure exactly where to start with this kind of project. I've recently added an Nvidia GTX 1050 Ti to my system for use with Plex for decoding and encoding. I've been using 'nvidia-smi' from the console to see GPU usage (memory, NVDEC and NVENC usage) and was hoping to be able to monitor some of the output from nvidia-smi on the dashboard, similar to how NUT-Settings can monitor UPS-related information there. Somewhat related to this would be the status of a Plex Docker and its transcode status.

      I would love to have either of these on the Dashboard of Unraid's Web UI and possibly even integrate some monitoring into the notification system (for emails as well) for things like:
      - GPU memory usage Warning/Critical levels
      - NVDEC and NVENC usage Warning/Critical levels
      - GPU temperature Warning/Critical levels
      - Plex transcode status (if possible to grab this from Plex's own Dashboard)

      While I would appreciate anyone helping out with this, I'm more just looking for some good starting resources on plug-in writing for Unraid, and whether anyone else has ideas/input or similar projects they've started.
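As a starting point for scraping those values, nvidia-smi has a machine-readable query mode (--query-gpu with --format=csv) that is much easier to parse than its default table. A sketch that parses a pasted sample line in place of a live call (the numbers are made up, and the Warning threshold is arbitrary):

```shell
# On a box with the GPU you would run something like:
#   nvidia-smi --query-gpu=memory.used,temperature.gpu,utilization.gpu --format=csv,noheader
# Sample output line standing in for a live call (values are made up):
smi_line='812 MiB, 54, 37 %'

# Split the CSV into named values for threshold checks / dashboard export
mem_used=$(echo "$smi_line" | awk -F', ' '{print $1}' | awk '{print $1}')
gpu_temp=$(echo "$smi_line" | awk -F', ' '{print $2}')
echo "memory used: ${mem_used} MiB, temperature: ${gpu_temp} C"

# Simple Warning-level check on temperature (80 C is an arbitrary threshold)
if [ "$gpu_temp" -ge 80 ]; then
    echo "WARNING: GPU temperature high"
fi
```

A User Script polling this on a schedule and calling Unraid's notify script on threshold breach would cover the email side; the dashboard side would need an actual plugin.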
  13. My solution to VM backups was to have the "domains" share set to "Cache Only" and a share called "domains_backup" set to "No Cache". Then I just shut down VMs after any major changes, copy the VM vdisks from "domains" to "domains_backup", and start the VM back up again. I just make a new folder with today's date and copy the folder for the domain I want over. I'll also copy libvirt.img to the dated folder after changes, just in case. I'm not sure how scripts would work for shutting down and starting up VMs, but if you can figure that out, then copying files between shares like I have set up would probably be your best bet in my mind.
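The shutdown/copy/start part should be scriptable with 'virsh', which Unraid's VM manager is built on. A rough sketch (untested; the VM name and share paths are placeholders, and RUN is set to "echo" so the script only prints what it would do until you remove it):

```shell
RUN="echo"                                  # dry-run: remove to execute for real
VM_NAME="Win10"                             # placeholder VM name
SRC="/mnt/user/domains/$VM_NAME"            # placeholder share paths
DST="/mnt/user/domains_backup/$(date +%F)/$VM_NAME"

$RUN virsh shutdown "$VM_NAME"              # ask the guest to shut down cleanly
# A real script would poll "virsh domstate $VM_NAME" here until it reports
# "shut off" before touching the vdisks
$RUN mkdir -p "$DST"
$RUN cp -r "$SRC/." "$DST/"                 # copy the vdisks to the dated folder
$RUN virsh start "$VM_NAME"
```

Note that "virsh shutdown" only requests an ACPI shutdown; the guest needs to honour it (Windows guests usually need the VirtIO guest agent or power-button handling enabled).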
  14. That would be great; even just checking whether Dockers are enabled and giving the user a warning with a check box to proceed would be a quick sanity check.
  15. Just to update the thread: disabling IOMMU in the BIOS fixed the Page Fault errors. I just ran a Trim on the SSDs using the Dynamix SSD TRIM plug-in and got no Page Faults this time. I haven't done further testing on GPU pass-through in VMs, as I only have a single Nvidia GPU in this box right now for Plex HW encoding. Marking thread as solved.