axipher

Everything posted by axipher

  1. Just Export your site and Import it into a version of the Beta controller running on a Windows Server 2016 Essentials VM (free for 180 days, up to 5 times, for evaluation). Then once that version of the software becomes mainstream and lands in the Docker image, do the Export/Import again. That's how I moved my Unifi setup when I was building a new UnRaid box but needed controller access in between. Sadly that is the downside to running early access and/or alpha or beta stuff: you won't get much support outside of the manufacturer themselves in the method they recommend. We can do what we can to help you out here though, just don't expect any miracles.
  2. WS Discovery is a welcome convenience on my network and I would prefer to leave it running; is it possible to lock it to a specific core so it plays more nicely with Docker and VM CPU pinning settings? As it stands, I use CPU pinning to give VMs and Docker containers access to certain cores for CPU-heavy things like a game server or the Plex container, so it would be nice to just throw WS Discovery onto specific cores that aren't shared with the CPU-heavy stuff.
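A rough sketch of what that could look like from the command line, assuming the WS-Discovery daemon shows up in `ps` under a name like wsdd (the actual process name on a given Unraid build is an assumption here, so check first):

```
#!/bin/bash
# Hypothetical sketch: pin every WS-Discovery process to cores 0 and 1 so they
# stay off the cores reserved for VMs and CPU-heavy containers.
# Confirm the real process name first with: ps aux | grep -i wsd
for pid in $(pgrep -f wsdd); do
    taskset -cp 0,1 "$pid"
done
```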
  3. If you open the Docker's console, type `influx` and hit enter, and you will be brought to the InfluxDB console. Then type `show databases` and hit enter to see all the created databases. Type `use <database name>` and hit enter to select that database. Type `show measurements` and hit enter to see all logged measurement values. Type `select last(*) from <measurement name>` and hit enter, and it should show you only the last measurement and all its data.
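An example of what that session might look like; the database name `telegraf` and measurement `cpu` are just placeholders for whatever exists on your install:

```
influx
> show databases
> use telegraf
> show measurements
> select last(*) from cpu
> exit
```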
  4. Has anyone gotten UDP to work on the InfluxDB Docker? I've tried:
     - adding a port mapping for 8089 for UDP traffic
     - installing nano and editing /etc/influxdb/influxdb.conf to add the relevant [udp] section, as it doesn't exist by default
     - restarting the Docker
     - the logs show that it is listening on :8089 for UDP
     But after all that, it won't accept the same UDP packets that a local Windows InfluxDB does accept.
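For reference, a minimal sketch of that section for an InfluxDB 1.x influxdb.conf (the config format uses a [[udp]] array-of-tables entry; the database name `udp` here is just an example):

```
[[udp]]
  enabled = true
  bind-address = ":8089"
  database = "udp"
```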
  5. Any luck on the memory numbers without DPI enabled? Also, what are you using as your cron job (or User Script) to restart the Docker daily? I might have to look at that myself.
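A minimal sketch of that kind of daily restart as a CA User Scripts entry on a custom cron schedule (e.g. "0 4 * * *"); the container name unifi-controller is an assumption, so substitute whatever name shows on your Docker tab:

```
#!/bin/bash
# Restart the Unifi container once a day to keep its memory usage in check.
docker restart unifi-controller
```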
  6. I believe it's because your new UPS is reporting LOADPCT as "16.0 percent" instead of just "16.0", so your UPS is actually providing a string value to the formula when it needs just an integer or a float. You might have to dig into the Grafana dashboard code and help files to figure out how to strip the " percent" from the end of it in that formula, or maybe someone here can help with that. I had to do a custom User Script using the CA User Scripts plugin and the NUT-Settings plugin instead of the built-in UPS settings for my Eaton rack-mount UPS, because it was not providing any of the normally named tags, and I had to modify all the Grafana charts to look for the new variables, but I didn't have to do any string manipulation like your new UPS would require. Hopefully I can at least point you in the right direction.
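A minimal PHP sketch of stripping that suffix, assuming the value is being fed through a User Script (like the NUT one later in this thread) rather than fixed inside Grafana; $raw stands in for whatever the UPS actually reports:

```php
<?php
// Hypothetical example: turn "16.0 percent" into the bare number 16.0 before
// writing it to InfluxDB. floatval() stops parsing at the first non-numeric
// character, so a trailing " percent" is dropped on its own.
$raw = "16.0 percent";
$loadpct = floatval($raw);   // 16.0
echo $loadpct . "\n";
?>
```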
  7. No problem, sometimes a second set of eyes is all you need on a problem.
  8. Well, if you have the Unraid-Nvidia plugin, you just want to select an Nvidia Unraid build and let it do the install process from the little pop-up.
  9. Looks like you may have been running the Unraid Nvidia build, as you have a GPU ID filled in. Did you update Unraid to the non-Nvidia 6.8.3?
  10. Are you controlling Unifi APs? If so, you should be able to have them all on the same network and use multiple LAN and WLAN configs in the controller, then just configure the APs themselves with the specific WLANs you want on each one. For example, my single AP in my house has the following on it from a single cable:
      WLAN 2G (11n/b/g): Cayde (disabled), Cayde-2G, Cayde-Guest, Cayde-IoT
      WLAN 5G (11n/a/ac): Cayde, Cayde-2G (disabled), Cayde-Guest, Cayde-IoT
      The Cayde and Cayde-2G SSIDs both use the same network, 10.2.1.0/24. Cayde-Guest is a Guest-type network, 10.3.1.0/24. Cayde-IoT is another network, 10.4.1.0/24, that I also have tagged with VLAN 5 on certain ports for my Philips Hue and Wink Hub. This way my Guest and IoT networks can't see the devices on my main network, and I give people the choice of the faster 5G on newer devices, or 2G on older devices or things like my Google Home speakers and Chromecasts.
  11. Okay, so it's not just me then; thanks for at least confirming I'm not completely insane... Yes, Grafana, with the dashboards found in the following article as a base; I've tweaked them quite a bit from the originals to work with my Eaton UPS with 3 outlet groups. https://technicalramblings.com/blog/setting-grafana-influxdb-telegraf-ups-monitoring-unraid/
  12. Anyone else seeing constant memory bloat from the Unifi Docker? My only solution has been to restart Unifi, either as part of CA Backup or CA Auto Update, to keep memory usage down. It typically starts at a little over 600 MB and climbs to 2.5 GB with no sign of slowing down before I restart it.
  13. I love the Community that Lime-Tech has fostered around UnRaid. I'm hoping to see Multiple Cache Pools as an option. I would love to have one Cache Pool used for File Transfers while another Cache Pool can be used for Docker/App Data/VM Disks/etc.
  14. Thanks for the great work on getting WireGuard working. I followed the guide to get Remote Tunnelled Access working, but I see that it uses UnRaid's internet connection. How can I force the tunnelled traffic through a PiHole Docker?
      Router that UnRaid and PiHole use: 10.3.1.1
      PiHole DNS output via 'br0' (ad-free): 10.3.1.2
      Local tunnel network pool: 10.253.0.1
      If I try to set a Peer's DNS to 10.3.1.2, traffic just fails when the VPN is turned on, I assume due to being in a different subnet. Is there a way to get WireGuard Peers connected via Remote Tunnelled Access to also go through my PiHole Docker?
  15. I haven't dealt specifically with the Orbi, but as long as the router is configured to use the SteamCache as the primary DNS and Cloudflare as the secondary, it should forward everything through the SteamCache no problem. What do you have set as SteamCache's DNS? It should be set to an external, known DNS server; I've seen some setups where it somehow gets set to the router's DNS, which points back to the SteamCache. Now, from your output it looks like 192.168.1.14 is what you are using for your DNS address; is this your UnRaid server or the SteamCache Docker? If I recall, when I had set one up before, SteamCache actually defaulted to getting its own IP address from the router, so it wasn't the same IP as the UnRaid server. If you are testing on a Windows box, it may be worthwhile to do a flushdns alongside a release/renew as well to make sure you clear all the cached DNS entries on your machine.
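The usual Windows commands for that, plus a hedged check of what the cache at 192.168.1.14 answers with (the hostname is just an example Steam CDN name, not something specific to this setup):

```
ipconfig /release
ipconfig /renew
ipconfig /flushdns
REM optional: query the cache IP directly to see what it returns
nslookup steamcdn-a.akamaihd.net 192.168.1.14
```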
  16. To expand on this, routers are typically set up with DHCP, where they provide clients with at least the following:
      - an IP address for the client machine, plus the subnet mask
      - DNS settings for the client to use (most allow two DNS servers and will default to their own DNS server or your ISP's DNS servers)
      - the gateway to use to route all internet traffic
      In my case, I have the router's DHCP DNS settings set up with my Pi-Hole as the first DNS server and 1.1.1.1 as the second, in case my Pi-Hole Docker is down for an update or UnRaid is down for maintenance. This way, as soon as clients get a new address from DHCP, they also pull the DNS settings you set up. You may need to reboot your router to force them to request new addresses and not just re-use one if the DHCP lease hasn't expired yet. (Yes, I simplified some things so that it may help others searching for help as well; please feel free to correct me on anything and I will edit my post.) P.S.: I have had Pi-Hole's DNS set to SteamCache as its primary DNS so that I could both block ads and serve cached Steam files, but I've since disabled that with a faster internet connection and speed limits on my router per client.
  17. Hi Exa, I don't want to downplay your problem, but I'll offer an alternative solution instead. I gather that you want to use your Intel CPU's iGPU for hardware transcoding instead of software transcoding, to offload your limited number of CPU cores. GPU transcoding is definitely great for power efficiency on a server as well. I'm not sure what motherboard you are running, as you just said a Gigabyte Z270, which could be anything from mini-ITX up to full ATX, but I'm assuming that you have at least one free PCIe slot and hopefully a case that can support a 2-slot GPU. I would like to highly recommend something like a GTX 1050 Ti, which can hardware transcode 4K in Plex with the NVDEC patch (or an upcoming Plex Server release with official NVDEC support on Linux) and potentially more than the Nvidia-limited 2 streams via certain methods. LinuxServer.io has a plugin that installs an Nvidia-specific build of the latest Unraid to allow passing the GPU through to Plex for this purpose. The GTX 1050 Ti might seem like a costly upgrade from an iGPU just for transcoding, but it does a fantastic job and uses minimal power when idle. I would stick with the 4 GB VRAM model, as Linux 4K transcodes require ~1.5 GB each at worst case from my experience, but if you only have 1080p content, you can probably get by with just a GTX 1050 2GB, as it doesn't require an extra power cable and gets all of its power from the motherboard slot. If you need more help with an Nvidia setup for Plex, you might get a little more assistance here, but LinuxServer.io also has a great Discord for support on stuff. Cheers, axipher
  18. Anyone else getting errors with the Plex Docker relating to Python or libcrypto?

```
Sep 10 15:30:30 CACHE kernel: Plex Media Scan[22769]: segfault at 300000010 ip 000014923da39097 sp 000014923430b0d0 error 4 in libcrypto.so.1.0.0[14923d928000+204000]
Sep 11 22:11:31 CACHE kernel: traps: Plex Media Scan[10223] general protection ip:1484de7b85dc sp:7fff0d1091a0 error:0 in libpython2.7.so.1.0[1484de6c6000+195000]
Sep 12 04:51:47 CACHE kernel: Plex Media Scan[14863]: segfault at 300000010 ip 000014e711121097 sp 000014e7039d2fe0 error 4 in libcrypto.so.1.0.0[14e711010000+204000]
Sep 12 06:01:47 CACHE kernel: Plex Media Scan[27341]: segfault at 30000001c ip 000014d6f5f21097 sp 000014d6ec7f30d0 error 4 in libcrypto.so.1.0.0[14d6f5e10000+204000]
Sep 12 09:06:50 CACHE kernel: Plex Media Scan[28297]: segfault at 51 ip 000014daa525a097 sp 000014da97b12fe0 error 4 in libcrypto.so.1.0.0[14daa5149000+204000]
Sep 12 16:41:51 CACHE kernel: traps: Plex Media Scan[21409] trap stack segment ip:14e7b94ab4a9 sp:7ffe6fee2f70 error:0 in libpython2.7.so.1.0[14e7b93b9000+195000]
```
  19. My apologies everyone, I forgot to update here with the UserScript I managed to get working to output NUT-Settings to InfluxDB for use with Grafana. You will likely have to change which values are being pulled, as I'm using an Eaton 5PX commercial UPS that reports back a lot of values along with 3 different outlet groups that have their own power monitoring. The main thing I changed is that I'm now parsing the output of the 'upsc' CLI command instead of 'apcaccess'. I had to play with the 'preg_match' commands quite a bit and could not get a single pattern to work for both text and number based values, so I had to separate those out as two different array combinations. The code is dirty, but it works. I decided to output some of the values under the same APC variable names as the original script, under a measurement called 'APC' in a database called 'UPS', so that it would still easily work with Grafana dashboards already set up like GilbN's, so I'm aware 3 of the values get exported twice:
      - ups.load as LOADPCT
      - battery.runtime as TIMELEFT
      - battery.charge as BCHARGE
      Another thing to keep note of is whether your UPS reports back the time remaining in seconds or minutes. This might require either adding some math in the UserScript on the exported value, or changing the Grafana panel queries or math along with pre/suffixes. And the UserScript, NUT-Settings_to_InFluxDB:

```php
#!/usr/bin/php
<?php
$commandUPSC = "upsc";
$argsUPSC = "[email protected]";

// APC-style field names written to InfluxDB (numeric values)
$tagsArrayAPC = array( "LOADPCT", "TIMELEFT", "BCHARGE", "input_voltage", "input_current",
    "input_frequency", "battery_charge", "battery_runtime", "output_voltage", "output_current",
    "ups_load", "ups_power", "ups_power_nominal", "ups_realpower", "ups_realpower_nominal",
    "outlet_current", "outlet_power", "outlet_realpower", "outlet_powerfactor",
    "outlet_1_current", "outlet_1_power", "outlet_1_realpower", "outlet_1_powerfactor",
    "outlet_2_current", "outlet_2_power", "outlet_2_realpower", "outlet_2_powerfactor" );

// APC-style field names written to InfluxDB (string values)
$tagsArrayAPC_text = array( "battery_charger_status", "outlet_1_desc", "outlet_1_status",
    "outlet_2_desc", "outlet_2_status", "outlet_desc", "ups_mfr", "ups_model", "ups_serial",
    "ups_status", "ups_type" );

// Matching upsc variable names (numeric values)
$tagsArrayUPSC = array( "ups.load", "battery.runtime", "battery.charge", "input.voltage",
    "input.current", "input.frequency", "battery.charge", "battery.runtime", "output.voltage",
    "output.current", "ups.load", "ups.power", "ups.power.nominal", "ups.realpower",
    "ups.realpower.nominal", "outlet.current", "outlet.power", "outlet.realpower",
    "outlet.powerfactor", "outlet.1.current", "outlet.1.power", "outlet.1.realpower",
    "outlet.1.powerfactor", "outlet.2.current", "outlet.2.power", "outlet.2.realpower",
    "outlet.2.powerfactor" );

// Matching upsc variable names (string values)
$tagsArrayUPSC_text = array( "battery.charger.status", "outlet.1.desc", "outlet.1.status",
    "outlet.2.desc", "outlet.2.status", "outlet.desc", "ups.mfr", "ups.model", "ups.serial",
    "ups.status", "ups.type" );

// Example readings from an Eaton 5PX 3000:
// input.voltage: 121.8
// input.current: 0.00
// input.frequency: 60.0
// battery.charger.status: resting
// battery.charge: 100
// battery.runtime: 3590
// output.voltage: 122.1
// output.current: 0.80
// ups.mfr: EATON
// ups.model: Eaton 5PX 3000
// ups.type: offline / line interactive
// ups.serial: XXXXXXXXXX
// ups.status: OL
// ups.load: 3
// ups.power: 98
// ups.power.nominal: 2880
// ups.realpower: 81
// ups.realpower.nominal: 2700
// outlet.desc: Main Outlet
// outlet.current: 0.80
// outlet.power: 98
// outlet.realpower: 81
// outlet.powerfactor: 82.00
// outlet.1.desc: PowerShare Outlet 1
// outlet.1.current: 0.00
// outlet.1.power: 0
// outlet.1.realpower: 0
// outlet.1.powerfactor: 0.00
// outlet.1.status: on
// outlet.2.desc: PowerShare Outlet 2
// outlet.2.current: 0.00
// outlet.2.power: 0
// outlet.2.realpower: 0
// outlet.2.powerfactor: 0.00
// outlet.2.status: on

// Do system call
// For built-in UPS Monitor:
// $call = $commandAPC." ".$argsAPC;
// $output = shell_exec($call);

// For NUT-Settings:
$call = $commandUPSC." ".$argsUPSC;
$output = shell_exec($call);

// Parse output for each numeric tag and value
for ($i = 0, $j = count($tagsArrayAPC); $i < $j; $i++) {
    $tag = $tagsArrayUPSC[$i];
    $field = $tagsArrayAPC[$i];
    // preg_match("/".$tag."\s*:\s([\d|\.]+)/si", $output, $match);
    preg_match("/".$tag.":\s([\d]+)/si", $output, $match);
    $newVal = $match[1];
    // Debug
    echo "Found value of '$newVal' under '$tag', placing in '$field'\n";
    // Send measurement, tag and value to influx
    echo "Float on my friend.....\n";
    sendDB($newVal, $field);
    echo "\n\n";
}

// Parse output for each string tag and value
for ($i = 0, $j = count($tagsArrayAPC_text); $i < $j; $i++) {
    $tag = $tagsArrayUPSC_text[$i];
    $field = $tagsArrayAPC_text[$i];
    // preg_match("/".$tag."\s*:\s([\d|\.]+)/si", $output, $match);
    // preg_match("/".$tag.":\s([\d|\D]+)$/", $output, $match);
    preg_match("#".$tag.":\s([\w|\s|/]+)\R#", $output, $match);
    $newVal = $match[1];
    // Debug
    echo "Found value of '$newVal' under '$tag', placing in '$field'\n";
    // Send measurement, tag and value to influx
    echo "It was a string!!!!!\n";
    sendDB_String($newVal, $field);
    echo "\n\n";
}

// Send a numeric value to InfluxDB
function sendDB($val, $tagname) {
    $curl = "curl -i -XPOST 'http://127.0.0.1:8086/write?db=UPS' --data-binary 'APC,host=CACHE,region=us-west ".$tagname."=".$val."'";
    $execsr = exec($curl);
}

// String version
function sendDB_String($val, $tagname) {
    $curl = "curl -i -XPOST 'http://127.0.0.1:8086/write?db=UPS' --data-binary 'APC,host=CACHE,region=us-west ".$tagname."=\"".$val."\"'";
    $execsr = exec($curl);
}
?>
```
  20. Thanks @bluemonster and @ljm42 That fix and User Script worked to fix my Unraid 6.7.2 Docker Update issue
  21. Hey Ian, any chance you can expand on how you updated the alpine repo to latest? I'm trying to get this to work as well. EDIT: I just changed the Repository from telegraf:alpine to telegraf:latest and restarted the Docker, and it is working great.
  22. For anyone who came here from the "SpeedtestforInfluxDB" Docker, I found a quick way to get proper upload speeds. On the current build, I get 380-400 down and 4-6 up on my fibre connection, which is actually a 400/200. Browsing around the web, I found this page for speedtest-cli: https://www.howtoforge.com/tutorial/check-internet-speed-with-speedtest-cli-on-ubuntu/ Simply opening the console for the SpeedtestforInfluxDB Docker and running the following command upgraded the version of speedtest-cli being used, and I now get much closer to my 400/200 speeds from the results in the log:
      pip install speedtest-cli --upgrade
      Cheers community
  23. I'm by no means a super experienced programmer, so I'm hoping to get some help in adapting this script to instead export the values from NUT-Settings to InfluxDB using the same fields. The goal is to have my Eaton UPS monitored in a Grafana dashboard from GilbN: https://technicalramblings.com/blog/setting-grafana-influxdb-telegraf-ups-monitoring-unraid/ but the built-in UnRaid UPS monitor only works with APC-brand UPS's. Here is the normal output from "apcaccess" that the script was designed for:
      root@CACHE:~# apcaccess status
      LOADPCT  : 3.0 Percent
      BCHARGE  : 100.0 Percent
      TIMELEFT : 130.6 Minutes
      Here is the normal output from "upsc" from NUT-Settings for the same UPS:
      root@CACHE:~# upsc [email protected]
      ups.load: 3
      battery.charge: 100
      battery.runtime: 7833
      The output and fields from the UPS are different, and "upsc" provides the runtime of the UPS in seconds instead of minutes. Most of the code makes perfect sense to me and I can handle most of the PHP there; just the "preg_match" line is beyond me. So ideally I would like to have, for example, "ups.load: 3" from "upsc" get written to InfluxDB as "LOADPCT" the same way the original apcaccess value was, and the same for the other two variables. Thanks in advance if anyone can provide any help here, even just an explanation of the "preg_match" line so that I can maybe try to figure it out myself. I figured I would post here in case a modified UserScript for use with NUT-Settings could be archived here for others to use in the future.
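For reference, a minimal PHP sketch of what that kind of preg_match line does, adapted to upsc-style output; the exact pattern in GilbN's original script may differ, and $output here is hard-coded instead of coming from shell_exec:

```php
<?php
// Hypothetical illustration: pull the numeric value for one upsc field out of the
// full command output, the way the original script does for apcaccess fields.
$output = "ups.load: 3\nbattery.charge: 100\nbattery.runtime: 7833\n";
$tag = "ups.load";                                   // the upsc field we want
// preg_quote() escapes the dots so they match literally; ([\d.]+) captures the
// number after the colon into $match[1].
preg_match("/".preg_quote($tag, "/")."\s*:\s*([\d.]+)/i", $output, $match);
$loadpct = $match[1];                                // "3" -> written out as LOADPCT
echo $loadpct . "\n";
?>
```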
  24. I thought about that, but I would prefer not to install Telegraf directly on UnRaid if possible and to continue using Dockers; installing a package directly on UnRaid that would live through upgrades is probably a separate topic anyway.
  25. What would be the easiest way to get a NUT client integrated into a running Telegraf Docker so that I can have Telegraf also relay UPS statistics to InfluxDB? The eventual goal is having UPS stats from the NUT-Settings plugin in a Grafana dashboard from here: https://technicalramblings.com/blog/setting-grafana-influxdb-telegraf-ups-monitoring-unraid/ I found these sites for reference for NUT and Telegraf: https://blog.lbdg.me/n-u-t-ups-monitoring-via-pfsense-grafana/ https://yegor.pomortsev.com/post/monitoring-everything/
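One possible approach, sketched under a couple of assumptions (that an upsc client is reachable from wherever Telegraf runs, and that Telegraf's [[inputs.exec]] plugin is available): wrap upsc in a small script that prints InfluxDB line protocol and call it from Telegraf. The script path, measurement name, and host tag below are hypothetical.

```
[[inputs.exec]]
  ## hypothetical wrapper script path
  commands = ["/boot/config/scripts/upsc_to_influx.sh"]
  timeout = "5s"
  data_format = "influx"
```

```
#!/bin/bash
# upsc_to_influx.sh (hypothetical): print one line of InfluxDB line protocol per
# numeric upsc field, e.g.  nut,host=CACHE battery_charge=100
upsc [email protected] 2>/dev/null | \
  awk -F': ' '$2 ~ /^[0-9.]+$/ { gsub(/\./, "_", $1); print "nut,host=CACHE " $1 "=" $2 }'
```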