falconexe Posted February 16, 2022 (Author)

9 hours ago, pzen said: Hello everyone, I hope I will make myself understood correctly; I am using a translator. First of all, a big thank you to falconexe for the work done to create the various visualization panels. I am stuck setting up the information for temperature, voltage, and fan speed. Since I do not have the IPMI function, I followed the instructions to use the sensors plugin. Is it necessary to modify a parameter, or only the sensor indicated in the following line in Grafana? SELECT last("value") FROM "ipmi_sensor" WHERE ("unit" = 'degrees_c' AND "host" =~ /^$Host$/) AND $timeFilter GROUP BY time($__interval), "name" Thank you in advance for your help.

Welcome, and thanks. You will need to adjust each panel by editing it and selecting the appropriate sensor for that panel. Assuming you have everything set up correctly in the config file and have loaded the sensors package, everything you need to adjust is in Grafana within the UUD dashboard panels. Please search this forum topic for "Sensors" or "AMD" for many previous examples of how to accomplish this. Good luck!
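For anyone in pzen's situation, the panel query has to point at the measurement written by Telegraf's sensors input instead of ipmi_sensor. A hedged sketch only — the chip and feature tag values and the temp_input field vary by motherboard, so check yours with SHOW MEASUREMENTS and SHOW TAG VALUES first:

```sql
-- Assumes Telegraf's [[inputs.sensors]] plugin is enabled (lm-sensors data).
-- Tag and field names below are typical, not guaranteed, for your hardware.
SELECT last("temp_input") FROM "sensors"
WHERE ("feature" =~ /temp/ AND "host" =~ /^$Host$/) AND $timeFilter
GROUP BY time($__interval), "chip", "feature"
```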
falconexe Posted February 16, 2022 (Author) (edited)

Interesting tidbit... Without writing to the Cache in any way (array operations) for 24 hours, I seem to be hitting 300-400 MB of writes per hour, and roughly 40GB per day, with just my running dockers. I suspect most, if not all, of this is Telegraf/InfluxDB/Grafana. For anyone else using the UUD, are you seeing similar metrics?

For my SSD, the manufacturer's suggested lifespan is 1200 TBW. If I did nothing but run appdata stuff (and never used the Cache for array activities), my drive would last roughly 82 years at this rate. Even if I wrote 1TB a day with array activity, it would still last 3.28 years. So I guess this puts to rest the fear of wearing out an expensive SSD Cache drive because of something cool like the UUD.

Edited February 16, 2022 by falconexe
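A quick sanity check of that endurance math — a rough sketch only, since it takes the 40GB/day figure at face value, assumes a constant write rate, and ignores write amplification:

```python
TBW_RATED = 1200.0  # manufacturer's rated endurance, in TB written


def years_of_life(tb_written_per_day: float) -> float:
    """Estimate drive lifespan in years at a constant daily write rate."""
    return TBW_RATED / tb_written_per_day / 365.25


# ~40 GB/day of docker/appdata writes
print(round(years_of_life(0.04), 1))  # -> 82.1, matching the "roughly 82 years" above
# 1 TB/day of heavy array activity
print(round(years_of_life(1.0), 2))   # -> 3.29, the post's 3.28 give or take rounding
```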
skaterpunk0187 Posted February 16, 2022

I don't have an hourly chart; I do, however, have a daily chart. I'm writing 170 MB per day, but divide that by 24 hours and I am only writing about 7.1 MB per hour. I only have the Telegraf docker running on the server; InfluxDB and Grafana are running on a Proxmox server with Docker in an LXC container. I also have my appdata stored on a separate SSD that is not part of my cache. The only Docker component on my cache drive is the system directory containing docker.img; I would guess that is what is writing to the cache pool.
falconexe Posted February 16, 2022 (Author)

21 minutes ago, skaterpunk0187 said: I don't have an hourly chart, I do however have a daily chart. I'm using 170 MB per day but divide that by 24 hours I am only writing 7.1 MB per hour. I only have the telegraf docker running on the server; InfluxDB and Grafana are running on a Proxmox server with docker running in a LXC container. I also have my appdata stored on a separate SSD not part of my cache. The only docker component on my cache drive is the system directory that contains the docker.img. I would guess that is the writing to the cache pool.

Nice! Thanks for sharing. I have a beast of a server, so I am not surprised mine is higher than yours. My UUD is also stacked with panels, and I have made some personal modifications to mine that pull in even more data. I hope you are enjoying the UUD, man.
koala784 Posted February 18, 2022 (edited)

On 2/16/2022 at 7:11 AM, falconexe said: Interesting tidbit... Without writing to Cache in any way (array operations) for 24 hours, I seem to be hitting 300-400 MB of writes per hour, and roughly 40GB per day, with just my running dockers. I suspect most, if not all of this, is Telegraf/InfluxDB/Grafana. For anyone else using the UUD, are you seeing similar metrics? For my SSD, I have a manufacturer's suggested lifespan of 1200 TBW. If I did nothing but run appdata stuff (and never used the Cache for array activities), my drive would last roughly 82 years at this rate. Even if I wrote 1TB a day with array activity, it would still last 3.28 years. So I guess this puts to rest the fear of wearing out an expensive SSD Cache drive because of something cool like the UUD.

Hi, I also have a lot of writes on my SSD cache. I changed it a few months ago (also a 2TB / 1200 TBW Samsung disk), so it's not an emergency yet. The write stats were similar with the older SSD cache. These are my current SSD write stats (day / year):

I also think it seems related to the Telegraf/InfluxDB/Grafana stack. There is a lot that Grafana is monitoring on my system (I've got a custom UUD dashboard, a Plex dashboard, and an Nginx dashboard). I'm far from being an expert on unRAID or Linux systems, so I'm not sure how I could confirm what the culprit is. I think I will try, for a few days, setting up a new InfluxDB docker container with only SSD write-stat monitoring. I was going crazy with this, so it's "nice" to see someone else having some thoughts about it.

Edited February 18, 2022 by koala784
falconexe Posted February 18, 2022 (Author) (edited)

So I had a thought about a new Stat Panel that would be REALLY COOL. Unfortunately, it seems that it may not be possible with InfluxDB 1.X using InfluxQL, but it MAY be possible using FLUX on InfluxDB 2.X. I will lay it out for you here, as I think it is a VERY USEFUL STAT that people would want to know. Basically it involves math concepts that are NOT SUPPORTED (from what I have researched) by the technology the UUD currently runs on, but I'll show you how to calculate this manually. I'm hoping eventually I can add this specific stat to the UUD.

Starting Metrics:

Math Calculation:
100% / 0.26% Used = 384.62 slices of 100%
384.62 slices * 4.08 Power On Days = 1,569 days
1,569 days / 365.25 (1 year) = FINAL METRIC: SSD Life Left (Time)
= 4.295 Years of SSD Life Left!

So at any given time, you can dynamically calculate WHEN your drive will EXCEED the manufacturer's TBW limit from the 2 stats above (the total current TBW is already calculated in the "SSD Life Used" panel). Of course, this date will change greatly, and in real time, depending on how your drive is utilized. Dump 100TB in a week, and the estimated replacement date will be much sooner. Only use it for Cache appdata, and it will likely trend upward and you'll get longer life, with a date far in the future. Theoretically, this future date is when you should start looking to replace the drive. I suspect once it gets within days, it will be VERY ACCURATE.

Even though this calculation is pretty straightforward, such things are not permitted in InfluxQL. If I ever figure this out natively in a Stat Panel, I will definitely add it to the UUD! Let me know your thoughts! @SpencerJ @GilbN

Edited February 18, 2022 by falconexe
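The calculation is easy enough to run outside of InfluxQL while waiting on a native panel. A minimal sketch — the 0.26% used and 4.08 power-on days are just the example figures above, so substitute your own drive's SMART values:

```python
def ssd_years_left(pct_life_used: float, power_on_days: float) -> float:
    """Project total SSD life in years from the % of rated TBW used so far.

    Mirrors the forum math: assumes the historical write rate continues
    unchanged; with only ~4 power-on days elapsed, total life and life
    left are effectively the same number.
    """
    total_days = (100.0 / pct_life_used) * power_on_days
    return total_days / 365.25


print(round(ssd_years_left(0.26, 4.08), 2))  # -> 4.3, matching the 4.295 years above
```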
falconexe Posted February 20, 2022 (Author) (edited)

New Plex Panel: Stacked Bar Graph - Cumulative Stream Volume By User (Last Year):

Edited February 20, 2022 by falconexe
falconexe Posted February 21, 2022 (Author) (edited)

DEVELOPER UPDATE: The recent Grafana Docker update is causing a number of issues with the UUD. Some panels are entirely broken, and plugins that the UUD utilizes have been disabled due to security concerns. Furthermore, Grafana has deprecated entire panel types in favor of new versions of the same panel types. Example: "Graph" (OLD) is now "Time Series".

The Grafana 8.4 press release can be found here: https://grafana.com/blog/2022/02/17/grafana-8.4-release-new-panels-better-query-caching-increased-security-accessibility-features-and-more/?utm_source=grafana_news&utm_medium=rss

If you choose to update to Grafana 8.4, be prepared to make some adjustments. Sadly, it looks like this will be the end of the UNRAID API/Dynamic Image Panel functionality (unless the plugin developer gets it signed by Grafana and updates it). If/when I release UUD 1.7, it will be based on Grafana 8.4+. @SpencerJ @GilbN

Edited February 21, 2022 by falconexe
falconexe Posted February 21, 2022 (Author) (edited)

I will say this about Grafana 8.4: there are some really slick features in it, with A LOT more variety. It will even suggest new styles of panels based on the current data. Pretty cool so far. I am making good progress on fixing what broke. Not too bad so far... some panels even have a migration assistant.

Panel Migration:
New Curved Line Graph (Among Other Line Interpolation Options):
New Gradient Line Graph:
New Panel Suggestions Based on Current Data:

Edited February 21, 2022 by falconexe
falconexe Posted February 21, 2022 (Author) (edited)

Here are the "migrated" curved Cache graphs without gridlines or legends (now mouse-over tooltips). This is a pretty clean design. I like it...

New:
Old:

Edited February 21, 2022 by falconexe
falconexe Posted February 21, 2022 (Author)

New Plex Stream Heatmaps:

New:
Old:
falconexe Posted February 21, 2022 (Author)

Lots of changes in Grafana 8.4, but so far here are the comparisons and new features I'm delivering. You'll notice some new heatmap/gradient graphs for lifetime array growth 🔥

Header Now:
Header Before:
Array Growth Now:
Array Growth Before:
falconexe Posted February 21, 2022 (Author) (edited)

DEVELOPER UPDATE - VARKEN DATA RETENTION

Something else I wanted to bring to your attention in VARKEN... For the life of me, I could not figure out why my Plex data only went back 30 days. It had been bothering me for a while, but I did not have a ton of time to look into it. Yesterday I found the issue, and it stinks that I did not find it sooner, but better late than never, ha ha.

While looking into the InfluxDB logs, I found the following oddity: all queries were being passed a retention policy of "varken 30d-1h" in the background! So apparently when Varken gets installed, it sets a DEFAULT RETENTION POLICY of 30 days with 1-hour shards (data file chunks) when it creates the "varken" database within InfluxDB. This can be found in the Varken Docker "dbmanager.py" Python install script here: https://github.com/Boerderij/Varken/blob/master/varken/dbmanager.py

InfluxDB Shards: https://docs.influxdata.com/influxdb/v2.1/reference/internals/shards/

What this means is that InfluxDB will delete any Tautulli (Plex) data on a rolling 30-day basis. I can't believe I didn't see this before, but for my UUD I want EVERYTHING going back to the Big Bang. I'm all about historical data analytics and data preservation. So I researched how to fix this, and it was VERY SIMPLE, but it comes at a cost:

IF YOU RUN THE FOLLOWING COMMANDS, IT WILL PURGE YOUR PLEX DATA FROM THE UUD AND YOU WILL START FRESH, BUT IT SHOULD NOT BE PURGED MOVING FORWARD (AND NOTHING WILL BE REMOVED FROM TAUTULLI - ONLY UUD/InfluxDB).

You have a few different options, such as modifying the existing retention policy, making a new one, or defaulting back to the auto-generated one, which by default seems to keep all ingested data indefinitely. Since that is what we want, here are the steps to set it to "autogen" and delete the pre-installed Varken retention policy of "varken 30d-1h".
STEP 1: Bash into the InfluxDB docker by right-clicking the docker image on the UNRAID dashboard and selecting Console.

STEP 2: Run the InfluxDB command to access the database backend.
Command: influx

STEP 3: Run the show retention policies command.
Command: SHOW RETENTION POLICIES ON varken
You should see "varken 30d-1h" listed, and it will be set to default "true".

STEP 4: Set the autogen retention policy as the default.
Command: ALTER RETENTION POLICY "autogen" ON "varken" DEFAULT

STEP 5: Verify "autogen" is now the default retention policy.
Command: SHOW RETENTION POLICIES ON varken
"autogen" should now say default "true"; "varken 30d-1h" should now say "false".

STEP 6: Remove the Varken retention policy.
Command: DROP RETENTION POLICY "varken 30d-1h" ON "varken"

STEP 7: Verify that only the "autogen" retention policy remains.
Command: SHOW RETENTION POLICIES ON varken

Once you do this, your UUD Plex data will start from that point forward, and it should grow forever, so long as you have the space in your appdata folder for the InfluxDB docker (cache drive). Let me know if you have any questions!

Sources:
https://community.grafana.com/t/how-to-configure-data-retention-expiry/26498
https://stackoverflow.com/questions/41620595/how-to-set-default-retention-policy-and-duration-for-influxdb-via-configuration#58992621
https://docs.influxdata.com/influxdb/v2.1/reference/cli/influx/

@SpencerJ @GilbN

Edited February 21, 2022 by falconexe
falconexe Posted February 21, 2022 (Author)

I'm coming in HOT 🎶 🔥😂
falconexe Posted February 22, 2022 (Author) (edited)

3 hours ago, falconexe said: Lots of changes in Grafana 8.4, but so far here are the comparisons and new features I'm delivering. You'll notice some new heatmap/gradient graphs for lifetime array growth 🔥 Array Growth Now: Array Growth Before:

I ended up settling on this for array growth. I was tired of the data gaps in the Stat Panel sparklines. This is super clean, and Grafana 8.4 brings A LOT more customization options.

Edited February 22, 2022 by falconexe
MrLondon Posted February 22, 2022

Certainly looking great; cannot wait for version 1.7. Fantastic work!
falconexe Posted February 22, 2022 (Author) (edited)

Giving the UPS stats some Grafana 8.4 love. Did some housecleaning, added new panels, updated/fixed existing panels, and added a fresh new style.

New:
Old:

Edited February 22, 2022 by falconexe
Ludditus Posted February 24, 2022

On 2/12/2022 at 10:01 PM, falconexe said: So, does anyone know WHY I would re-develop UUD 1.7 into "Flux" for InfluxDB 2.0? What would be the benefit? What are the opportunities with InfluxDB 2.0 and the "Flux" query language (QL)? I have not taken a deep dive into it yet, but for the needs and requirements of the UUD, I don't see this as a must-have, YET. Unless of course "InfluxQL" becomes unsupported. In that case, I will have no choice. Let me know your thoughts.

I'm not interested in upgrading to Influx 2.0 at this point. I've got a dozen-plus dashboards, and I don't want to have to refactor them all to 2.0, so sticking with 1.7 is fine as far as I am concerned!
falconexe Posted February 24, 2022 (Author)

1 hour ago, Ludditus said: I'm not interested in upgrading to Influx 2.0 at this point. I've got a dozen-plus dashboards and I don't want to have to refactor them all to 2.0, so sticking with 1.7 is fine as far as I am concerned!

Yeah, right now I am focusing UUD 1.7 on native Grafana 8.4 support. I still do not see a compelling reason to go to InfluxDB 2.0 yet. Thanks for your reply!
Ludditus Posted February 24, 2022

14 minutes ago, falconexe said: Yeah right now I am focusing UUD 1.7 on native Grafana 8.4 support. I still do not see a compelling reason to go to InfluxDB 2.0 yet. Thanks for your reply!

I just took a look at my Varken install, and the retention policy was already on autogen with 7d shards; I can get queries back as far as I want. I don't remember ever setting that policy explicitly, but I think I had manually created varken within the InfluxDB docker bash, since I wanted to have it alongside existing databases in the same container. Maybe doing it that way, instead of having Varken create its own database, avoided the 30d retention policy.
skaterpunk0187 Posted February 24, 2022

13 hours ago, Ludditus said: I just took a look at my Varken install and the retention policy was already on autogen with 7d shards, and I can get queries back as far as I want. I don't ever remember setting that policy explicitly, but I think I had manually created varken within the influx docker bash, since I wanted to have it alongside existing databases in the same container. Maybe doing it that way instead of having Varken create its own database avoided the 30d retention policy.

I can confirm this. I have tested it several times. If you allow Varken to create the database in InfluxDB, it gets a retention policy of 30 days. This is set in the script that installs Varken in the container and cannot be changed in varken.ini. To get around it, create the database yourself before installing or starting Varken for the first time. This will give you the autogen retention policy.
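In practice, the pre-creation skaterpunk0187 describes is a one-liner from the InfluxDB docker console, run before Varken's first start. This assumes the InfluxDB 1.x influx shell; the second statement just verifies the result:

```sql
-- Run inside the influx CLI, BEFORE Varken ever starts
CREATE DATABASE "varken"
-- Should list only "autogen", with default = true
SHOW RETENTION POLICIES ON "varken"
```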
falconexe Posted February 24, 2022 (Author)

13 hours ago, Ludditus said: I just took a look at my Varken install and the retention policy was already on autogen with 7d shards, and I can get queries back as far as I want. I don't ever remember setting that policy explicitly, but I think I had manually created varken within the influx docker bash, since I wanted to have it alongside existing databases in the same container. Maybe doing it that way instead of having Varken create it's own database avoided the 30d retention policy.

4 minutes ago, skaterpunk0187 said: I can confirm this. I have tested several times. If you allow varken to create the database in influxdb with a retention policy of 30 days. This is set in the script that installs varken in the container, and can not be changed in the varken.ini. To get around this before installing or starting varken for the first time create the database first. This will give you autogen retention policy.

Thanks for confirming, guys!
falconexe Posted February 24, 2022 (Author)

Does anyone know of a way to backload ALL historical data from Tautulli into InfluxDB via Varken (or another API/plugin)? I have data going back years in Tautulli, and I would love to access it all through the UUD. I did some research, and everything I have seen says this is not possible. If you have seen a viable option, please let me know and I will add it.
skaterpunk0187 Posted February 25, 2022

On 2/24/2022 at 11:37 AM, falconexe said: Does anyone know of a way to backload ALL Historical Data from Tautulli into InfluxDB via Varken (or another API/plugin)? I have data going back years in Tautulli, and I would love to access it all through the UUD. I did some research and everything I have seen is no this is not possible. If you have seen this as a viable option, please let me know and I will add it.

Looking in the Tautulli code for Varken, it looks like it could be possible. A snippet of the code (truncated):

def get_historical(self, days=30):
    influx_payload = []
    start_date = date.today() - timedelta(days=days)
    params = {'cmd': 'get_history', 'grouping': 1, 'length': 1000000}
    req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
    g = connection_handler(self.session, req, self.server.verify_ssl)
    if not g:

So it looks like it could be possible, I think; coding is not my forte. Maybe get in contact with the Varken dev, or fork it.
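If that days parameter really is the lever, the simplest fork would just pass a much larger value so start_date reaches back years. A hypothetical sketch of the window calculation only (the names mirror the snippet; whether Varken's ingest honors a large window end-to-end is unverified):

```python
from datetime import date, timedelta


def history_window(days: int = 30) -> date:
    """Earliest date the get_historical snippet above would request history for."""
    return date.today() - timedelta(days=days)


# Default: only the last 30 days are requested
print(history_window())
# Hypothetical fork: reach back ~10 years
print(history_window(days=3650))
```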
falconexe Posted February 27, 2022 (Author) (edited)

On 2/25/2022 at 3:59 PM, skaterpunk0187 said: Looking in the Tautulli code for varken it looks like it could be possible... [code snippet] ...It looks like it could be possible I think, coding is not my forte. Maybe get in contact with the varken dev, or maybe fork it.

Thanks for the tip. I just set up a GitHub repo for the UUD. I've never done this before, but I am a programmer, so I should be able to figure it out. Does anyone know if the creator of Varken, "Boerderij", is on the forums here? I just need an option to pull in ALL historical Tautulli data upon installing Varken, or to pass some kind of parameter to the docker to choose the earliest load/retention date for an existing install. Or better yet, run a command in InfluxDB to ingest everything from Tautulli via Varken going back to day one (brute-force the database). I assume all of this is possible. Maybe I can fork the entire thing and just modify this single "tautulli.py" file, as @skaterpunk0187 suggested... https://github.com/Boerderij/Varken/blob/master/varken/tautulli.py

Heck, I'm even considering making my own UUD docker! I've never done that either, but I installed GUS a couple of days ago and learned a lot from the way @testdasi did it. Not sure if he is still around, as he's been silent on this stuff for over a year. @SpencerJ, do you know anyone, or any good documentation, that could help me get started with making a docker for the UUD and incorporating everything into it (other dockers/settings), so I can make it easier for everyone to install out of the box?
It would be really awesome if I could release UUD 1.7 in its own docker with native Grafana 8.4.X support; then people would only have to modify their Grafana panels and never mess with a config file again. I think this would bring the UUD to the masses, and it would sure make updating much easier, with different docker builds for past versions. Anyone down with this? Anyone want to help and guide me through the docker build process? DM me if you want to collaborate and take me to school, ha ha. Otherwise, I'll try to learn this new stuff in my spare time. But this is the vision I see for the future: one-click install of the UUD, then just customize it for your UNRAID server.

Edited February 27, 2022 by falconexe