Ultimate UNRAID Dashboard (UUD)



9 hours ago, pzen said:

Hello everyone,

 

I hope I make myself understood correctly; I am using a translator.

 

First of all, a big thank you to falconexe for the work done creating the various visualization dashboards.

 

I am stuck on setting up the panels for temperature, voltage, and fan speed.

 

Since my board does not have the IPMI function, I followed the instructions for using the sensors plugin instead.

 

Is it necessary to modify a parameter, or only the indicated sensor, in the following line in Grafana?

 

SELECT last("value") FROM "ipmi_sensor" WHERE ("unit" = 'degrees_c' AND "host" =~ /^$Host$/) AND $timeFilter GROUP BY time($__interval), "name"

 

Thank you in advance for your help

 

[screenshot: Sans titre12.jpg]

 

Welcome, and thanks. You will need to adjust each panel by editing it and selecting the appropriate sensor for each panel. Assuming you have everything set up correctly in the config file, and have loaded the sensors package, everything you need to adjust is in Grafana within the UUD dashboard panels.

 

Please search this forum topic for "Sensors" or "AMD" for many previous examples of how to accomplish this. Good luck!
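As a sketch of what "selecting the appropriate sensor" means at the query level: the stock panel reads the `ipmi_sensor` measurement, while Telegraf's lm-sensors input typically writes a `sensors` measurement with tags like `chip`/`feature` and fields like `temp_input`. Those names are assumptions about a default Telegraf config; confirm them with `SHOW MEASUREMENTS` and `SHOW FIELD KEYS` in InfluxDB before editing panels.

```python
# Hypothetical sketch: the same "last temperature" panel query, built for
# either the IPMI measurement or Telegraf's lm-sensors measurement.
# Measurement/field/tag names for the sensors case are assumptions.

def build_temp_query(measurement: str, field: str, group_tag: str) -> str:
    """Build a Grafana/InfluxQL 'last value' query for one measurement."""
    return (
        f'SELECT last("{field}") FROM "{measurement}" '
        'WHERE ("host" =~ /^$Host$/) AND $timeFilter '
        f'GROUP BY time($__interval), "{group_tag}"'
    )

# Stock UUD panel (IPMI) vs. an lm-sensors equivalent:
print(build_temp_query("ipmi_sensor", "value", "name"))
print(build_temp_query("sensors", "temp_input", "feature"))
```

The `$Host`, `$timeFilter`, and `$__interval` tokens are Grafana template variables and are expanded by Grafana, not InfluxDB, so they stay verbatim in the panel editor.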


Interesting tidbit...

 

Without writing to Cache in any way (array operations) for 24 hours, I seem to be hitting 300-400 MB of writes per hour, and roughly 40GB per day, with just my running dockers. I suspect most, if not all of this, is Telegraf/InfluxDB/Grafana.

 

[screenshot]

 

For anyone else using the UUD, are you seeing similar metrics?

 

For my SSD, I have a manufacturer's suggested lifespan of 1200 TBW. If I did nothing but run appdata stuff (and never used the Cache for array activities), my drive would last roughly 82 years at this rate. Even if I wrote 1TB a day with array activity, it would still last 3.28 years.

 

So I guess this puts to rest the fear of wearing out an expensive SSD Cache drive because of something cool like the UUD.
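The endurance arithmetic above can be sanity-checked with a minimal sketch; the 1200 TBW rating and the two write rates are the figures quoted in this post, nothing vendor-specific.

```python
# Rough SSD endurance check: days until the rated TBW is reached at a given
# steady write rate, expressed in years. Pure arithmetic, no SMART data.

def lifespan_years(tbw_limit_tb: float, writes_per_day_tb: float) -> float:
    """Years until writes_per_day_tb accumulates to the TBW rating."""
    days = tbw_limit_tb / writes_per_day_tb
    return days / 365.25

print(round(lifespan_years(1200, 0.040), 1))  # ~40 GB/day -> 82.1 years
print(round(lifespan_years(1200, 1.0), 2))    # 1 TB/day   -> 3.29 years
```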

 

 

Edited by falconexe

I don't have an hourly chart; I do, however, have a daily chart. I'm writing about 170 MB per day, which divided by 24 hours is only about 7.1 MB per hour. I only have the Telegraf docker running on the server; InfluxDB and Grafana are running on a Proxmox server, with docker running in an LXC container. I also have my appdata stored on a separate SSD that is not part of my cache. The only docker component on my cache drive is the system directory that contains docker.img, and I would guess that is what is writing to the cache pool.

21 minutes ago, skaterpunk0187 said:

I don't have an hourly chart; I do, however, have a daily chart. I'm writing about 170 MB per day, which divided by 24 hours is only about 7.1 MB per hour. I only have the Telegraf docker running on the server; InfluxDB and Grafana are running on a Proxmox server, with docker running in an LXC container. I also have my appdata stored on a separate SSD that is not part of my cache. The only docker component on my cache drive is the system directory that contains docker.img, and I would guess that is what is writing to the cache pool.

 

Nice! Thanks for sharing.

 

I have a beast of a server, so I am not surprised I am higher than you. My UUD is also stacked with panels and I have made some personal modifications to mine that suck even more data in. I hope you are enjoying the UUD man.

On 2/16/2022 at 7:11 AM, falconexe said:

Interesting tidbit...

 

Without writing to Cache in any way (array operations) for 24 hours, I seem to be hitting 300-400 MB of writes per hour, and roughly 40GB per day, with just my running dockers. I suspect most, if not all of this, is Telegraf/InfluxDB/Grafana.

 

[screenshot]

 

For anyone else using the UUD, are you seeing similar metrics?

 

For my SSD, I have a manufacturer's suggested lifespan of 1200 TBW. If I did nothing but run appdata stuff (and never used the Cache for array activities), my drive would last roughly 82 years at this rate. Even if I wrote 1TB a day with array activity, it would still last 3.28 years.

 

So I guess this puts to rest the fear of wearing out an expensive SSD Cache drive because of something cool like the UUD.

 

 


Hi, I have a lot of writes on my SSD cache. I changed it a few months ago (also a 2TB / 1200 TBW Samsung disk), so it's not an emergency yet. The write stats were similar with the older SSD cache.

 

These are my current SSD write stats (day / year):
[screenshot]

 
I also think it seems related to the "Telegraf/InfluxDB/Grafana" stack.

There are a lot of things that Grafana is monitoring on my system (I've got a custom UUD dashboard, a Plex dashboard, and an Nginx dashboard).
 

I'm far from being an expert on unRAID or Linux systems, so I'm not sure how I could confirm where the culprit is.
I think I will try, for a few days, setting up a new InfluxDB docker container with only SSD write stats monitoring.

I was going crazy with this, so it's "nice" to see someone else having some thoughts about it :)

Edited by koala784

So I had a thought about a new Stat Panel that would be REALLY COOL. Unfortunately, it seems it may not be possible with InfluxDB 1.X using InfluxQL, but it MAY be possible using FLUX on InfluxDB 2.X. I will lay it out for you here, as I think it is a VERY USEFUL STAT that people would want to know.

 

Basically, it involves math that is NOT SUPPORTED (from what I have researched) by the technology the UUD currently runs on, but I'll show you how to calculate this manually. I'm hoping I can eventually add this specific stat to the UUD.

 

 

 

Starting Metrics:

 

[screenshot]

 

 

Math Calculation:

 

100% / 0.26% Used = 384.62 Slices of 0.26%

384.62 Slices * 4.08 Power On Days = 1,569 Days of Rated Life

1,569 Days / 365.25 Days (1 Year) =

 

 

FINAL METRIC = SSD Life Left (Time)

 

4.295 Years of SSD Life Left!

 

 

So at any given time, you can dynamically calculate WHEN your drive will EXCEED the Manufacturer's TBW limit from the 2 above stats (the total current TBW is already calculated in the "SSD Life Used Panel"). Of course this date will change greatly, and in real time, depending on how your drive is utilized. Dump 100TB in a week and the estimated replacement date will be much sooner. Only use it for Cache appdata, then it will likely trend upward and you'll get longer life with a date far in the future. Theoretically, this future date is when you should start looking to replace the drive. I suspect once it gets within days, it will be VERY ACCURATE.
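The same manual calculation, as a minimal runnable sketch. The two inputs mirror the stats in the screenshots above (percent of rated TBW used, and power-on days); it also projects the theoretical replacement date described in the last paragraph. The `ssd_life_left` helper is illustrative, not part of the UUD.

```python
# Extrapolate SSD life from wear-so-far, assuming a steady write rate.
from datetime import date, timedelta

def ssd_life_left(pct_used: float, power_on_days: float):
    """Return (total rated life in years, projected replacement date)."""
    total_days = (100.0 / pct_used) * power_on_days   # days to reach 100% of TBW
    years = total_days / 365.25
    replace_by = date.today() + timedelta(days=total_days - power_on_days)
    return years, replace_by

years, replace_by = ssd_life_left(0.26, 4.08)
print(round(years, 2))  # -> 4.3 (years of rated life at this write rate)
```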

 

Even though this calculation is pretty straightforward, such math is not permitted in InfluxQL. If I ever figure this out natively in a Stat Panel, I will definitely add it to the UUD! Let me know your thoughts!

 

 

@SpencerJ @GilbN

 

 

 

Edited by falconexe

DEVELOPER UPDATE:

 

The recent Grafana Docker update is causing a number of issues with the UUD.

 

 

Some Panels are entirely broken, and Plugins that the UUD utilizes have been disabled due to security concerns.

 

[screenshots]

 

 

Furthermore, Grafana has deprecated entire panel types in favor of new versions of the same panel types.

 

Example: "Graph" (OLD) is Now "Time Series"

 

[screenshot]

 

 

The Grafana 8.4 press release can be found here:

 

https://grafana.com/blog/2022/02/17/grafana-8.4-release-new-panels-better-query-caching-increased-security-accessibility-features-and-more/?utm_source=grafana_news&utm_medium=rss

 

 

If you choose to update to Grafana 8.4, be prepared to make some adjustments.

 

Sadly, it looks like this will be the end of the UNRAID API/Dynamic Image Panel functionality (unless the plugin developer gets it signed by Grafana and it gets updated).

 

 

If/When I release UUD 1.7, it will be based on Grafana 8.4+.

 

@SpencerJ @GilbN

Edited by falconexe

I will say this about Grafana 8.4. There are some really slick features in it with A LOT more variety. They will even suggest new styles of panels based on the current data. Pretty cool so far. I am making good progress on fixing what broke. Not too bad so far... some panels even have a migration assistant.

 

 

Panel Migration:

 

[screenshot]

 

 

 

New Curved Line Graph (Among Other Line Interpolation Options):

 

[screenshots]

 

 

New Gradient Line Graph:

 

[screenshot]

 

 

New Panel Suggestions Based on Current Data:

[screenshot]

 

 

 

Edited by falconexe

DEVELOPER UPDATE - VARKEN DATA RETENTION

 

Something else I wanted to bring to your attention in VARKEN... For the life of me, I could not figure out why my Plex data was only going back 30 days. It has been bothering me for a while, but I did not have a ton of time to look into it. Yesterday, I found out the issue, and it really stinks I did not find this sooner, but the sooner the better to tell you ha ha.

 

 

While looking into the InfluxDB logs, I found the following oddity:

 

[screenshot]

 

All queries were being passed a retention policy of "varken 30d-1h" in the background!

 

 

So apparently when Varken gets installed, it sets a DEFAULT RETENTION Policy of 30 days with 1 Hour Shards (Data File Chunks) when it creates the "varken" database within InfluxDB.

 

 

This can be found in the Varken Docker "dbmanager.py" Python install script here:

 

https://github.com/Boerderij/Varken/blob/master/varken/dbmanager.py

 

 

[screenshots]

 

 

InfluxDB Shards:

 

[screenshot]

 

https://docs.influxdata.com/influxdb/v2.1/reference/internals/shards/

 

 

What this means is InfluxDB will delete any Tautulli (Plex) data on a rolling 30-day basis. I can't believe I didn't see this before, but for my UUD, I want EVERYTHING going back to the Big Bang. I'm all about historical data analytics and data preservation.

 

 

So, I researched how to fix this and it was VERY SIMPLE, but came with a cost.

 

 

 IF YOU RUN THE FOLLOWING COMMANDS, IT WILL PURGE YOUR PLEX DATA FROM UUD AND YOU WILL START FRESH

BUT IT SHOULD NOT BE PURGED MOVING FORWARD (AND NOTHING WILL BE REMOVED FROM TAUTULLI - ONLY UUD/InfluxDB)

 

 

You have a few different options, such as modifying the existing retention policy, making a new one, or defaulting back to the auto-generated one, which by default, seems to keep all ingested data indefinitely. Since this is what we want, here are the steps to set it to "autogen" and delete the pre-installed Varken retention policy of "varken 30d-1h".

 

 

  • STEP 1: Bash Into the InfluxDB Docker by Right-Clicking the Docker Image on the UNRAID Dashboard and Selecting Console

 

  • STEP 2: Run the InfluxDB Command to Access the Database Backend
    • Command: influx

 

  • STEP 3: Run the Show Retention Policies Command
    • Command: SHOW RETENTION POLICIES ON varken
    • You should see "varken 30d-1h" listed, and it will be set to default "true".

 

  • STEP 4: Set the "autogen" Retention Policy As the Default
    • Command: ALTER RETENTION POLICY "autogen" ON "varken" DEFAULT

 

  • STEP 5: Verify "autogen" Is Now the Default Retention Policy
    • Command: SHOW RETENTION POLICIES ON varken
    • "autogen" should now say default "true"
    • "varken 30d-1h" should now say default "false"

 

  • STEP 6: Remove the Varken Retention Policy
    • Command: DROP RETENTION POLICY "varken 30d-1h" ON "varken"

 

  • STEP 7: Verify That Only the "autogen" Retention Policy Remains
    • Command: SHOW RETENTION POLICIES ON varken

 

Once you do this, your UUD Plex data will start from that point forward and it should grow forever so long as you have the space in your appdata folder for the InfluxDB docker (cache drive). Let me know if you have any questions!
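For reference, the seven steps above boil down to four InfluxQL statements. The sketch below simply lists them as strings (database and policy names are the Varken defaults discussed above; adjust if yours differ); they would be executed inside the `influx` shell or through any InfluxDB 1.x client, not as Python per se.

```python
# The raw InfluxQL statements behind STEPs 3-7 above, in order.
statements = [
    'SHOW RETENTION POLICIES ON varken',                     # inspect current policies
    'ALTER RETENTION POLICY "autogen" ON "varken" DEFAULT',  # make autogen the default
    'DROP RETENTION POLICY "varken 30d-1h" ON "varken"',     # remove the 30d policy
    'SHOW RETENTION POLICIES ON varken',                     # verify only autogen remains
]
for stmt in statements:
    print(stmt)
```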

 

 

Sources:

 

https://community.grafana.com/t/how-to-configure-data-retention-expiry/26498

 

https://stackoverflow.com/questions/41620595/how-to-set-default-retention-policy-and-duration-for-influxdb-via-configuration#58992621

 

https://docs.influxdata.com/influxdb/v2.1/reference/cli/influx/

 

 

 

@SpencerJ @GilbN

 

 

 

Edited by falconexe

  

3 hours ago, falconexe said:

Lots of changes in Grafana 8.4, but so far here are the comparisons and new features I'm delivering. You'll notice some new heatmap/gradient graphs for lifetime array growth 🔥

 

 

Array Growth Now:

[screenshot]

 

 

Array Growth Before:

[screenshot]

 

 

 

I ended up settling on this for array growth. I was tired of the data gaps in the Stat Panel Spark Lines. This is super clean and Grafana 8.4 brings A LOT more customization options.

 

 

 

[screenshot]

 

 

 

Edited by falconexe
On 2/12/2022 at 10:01 PM, falconexe said:
  • So, does anyone know WHY I would re-develop UUD 1.7 into "Flux" for InfluxDB 2.0?
  • What would be the benefit?
  • What are the opportunities with InfluxDB 2.0 and the "Flux" query language (QL)?
  • I have not deep dived into it yet, but for the needs and requirements of the UUD, I don't see this as a must have, YET. Unless of course "InfluxQL" becomes unsupported. In that case, I will have no choice.


Let me know your thoughts.

 

 

 

I'm not interested in upgrading to Influx 2.0 at this point. I've got a dozen-plus dashboards and I don't want to have to refactor them all to 2.0, so sticking with 1.7 is fine as far as I am concerned!

1 hour ago, Ludditus said:

 

I'm not interested in upgrading to Influx 2.0 at this point. I've got a dozen-plus dashboards and I don't want to have to refactor them all to 2.0, so sticking with 1.7 is fine as far as I am concerned!

 

Yeah right now I am focusing UUD 1.7 on native Grafana 8.4 support. I still do not see a compelling reason to go to InfluxDB 2.0 yet. Thanks for your reply!

14 minutes ago, falconexe said:

 

Yeah right now I am focusing UUD 1.7 on native Grafana 8.4 support. I still do not see a compelling reason to go to InfluxDB 2.0 yet. Thanks for your reply!

 

I just took a look at my Varken install and the retention policy was already on autogen with 7d shards, and I can get queries back as far as I want. I don't remember ever setting that policy explicitly, but I think I had manually created varken within the influx docker bash, since I wanted to have it alongside existing databases in the same container. Maybe doing it that way, instead of having Varken create its own database, avoided the 30d retention policy.

13 hours ago, Ludditus said:

 

I just took a look at my Varken install and the retention policy was already on autogen with 7d shards, and I can get queries back as far as I want. I don't remember ever setting that policy explicitly, but I think I had manually created varken within the influx docker bash, since I wanted to have it alongside existing databases in the same container. Maybe doing it that way, instead of having Varken create its own database, avoided the 30d retention policy.

I can confirm this. I have tested it several times. If you allow Varken to create the database in InfluxDB, it is created with a retention policy of 30 days. This is set in the script that installs Varken in the container and cannot be changed in varken.ini. To get around this, create the database first, before installing or starting Varken for the first time. This will give you the autogen retention policy.

13 hours ago, Ludditus said:

 

I just took a look at my Varken install and the retention policy was already on autogen with 7d shards, and I can get queries back as far as I want. I don't remember ever setting that policy explicitly, but I think I had manually created varken within the influx docker bash, since I wanted to have it alongside existing databases in the same container. Maybe doing it that way, instead of having Varken create its own database, avoided the 30d retention policy.

 

4 minutes ago, skaterpunk0187 said:

I can confirm this. I have tested it several times. If you allow Varken to create the database in InfluxDB, it is created with a retention policy of 30 days. This is set in the script that installs Varken in the container and cannot be changed in varken.ini. To get around this, create the database first, before installing or starting Varken for the first time. This will give you the autogen retention policy.

 

Thanks for confirming guys!


Does anyone know of a way to backload ALL Historical Data from Tautulli into InfluxDB via Varken (or another API/plugin)? I have data going back years in Tautulli, and I would love to access it all through the UUD. I did some research, and everything I have seen says this is not possible. If you have seen a viable option, please let me know and I will add it.

On 2/24/2022 at 11:37 AM, falconexe said:

Does anyone know of a way to backload ALL Historical Data from Tautulli into InfluxDB via Varken (or another API/plugin)? I have data going back years in Tautulli, and I would love to access it all through the UUD. I did some research, and everything I have seen says this is not possible. If you have seen a viable option, please let me know and I will add it.

Looking in the Tautulli code for Varken, it looks like it could be possible. A snippet of the code:

    def get_historical(self, days=30):
        influx_payload = []
        start_date = date.today() - timedelta(days=days)
        params = {'cmd': 'get_history', 'grouping': 1, 'length': 1000000}
        req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
        g = connection_handler(self.session, req, self.server.verify_ssl)
        if not g:

It looks like it could be possible, I think; coding is not my forte. Maybe get in contact with the Varken dev, or maybe fork it.

On 2/25/2022 at 3:59 PM, skaterpunk0187 said:

Looking in the Tautulli code for Varken, it looks like it could be possible. A snippet of the code:

    def get_historical(self, days=30):
        influx_payload = []
        start_date = date.today() - timedelta(days=days)
        params = {'cmd': 'get_history', 'grouping': 1, 'length': 1000000}
        req = self.session.prepare_request(Request('GET', self.server.url + self.endpoint, params=params))
        g = connection_handler(self.session, req, self.server.verify_ssl)
        if not g:

It looks like it could be possible, I think; coding is not my forte. Maybe get in contact with the Varken dev, or maybe fork it.

 

 

Thanks for the tip. I just set up a GitHub repo for the UUD. I've never done this before, but I am a programmer, so I should be able to figure it out.

 

Anyone know if the creator of Varken "Boerderij" is on the forums here? I just need an option to pull in ALL historical Tautulli data upon installing Varken, or pass some kind of parameter to the docker to choose the earliest load/retention date for an existing install. Or better yet, run a command in InfluxDB to ingest everything from Tautulli via Varken going back to day one (brute force the database). I assume all of this is possible. Maybe I can fork the entire thing and just modify this single "tautulli.py" file as @skaterpunk0187 suggested...
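For illustration, here is the part of that quoted snippet that controls the lookback window, isolated as runnable Python. The `history_request` helper is hypothetical (it only mirrors the quoted lines); whether Varken's scheduler would actually honor a much larger `days` value end to end is exactly what a fork would need to verify.

```python
# Isolates the lookback logic from Varken's tautulli.py get_historical():
# start_date reaches back `days` (default 30), and the Tautulli API params
# come straight from the quoted snippet. Everything else is illustrative.
from datetime import date, timedelta

def history_request(days: int = 30):
    """Mirror the snippet's request setup: lookback date + API params."""
    start_date = date.today() - timedelta(days=days)
    params = {'cmd': 'get_history', 'grouping': 1, 'length': 1000000}
    return start_date, params

# Default window vs. a "back to day one" pull of roughly ten years:
default_start, _ = history_request()
full_start, params = history_request(days=3650)
print((default_start - full_start).days)  # -> 3620: extra days a fork could reach back
```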

 

 

https://github.com/Boerderij/Varken/blob/master/varken/tautulli.py

 

 

[screenshots]

 

 

 

Heck, I'm even considering making my own UUD docker! I've never done that either, but I installed GUS a couple days ago and learned a lot from the way @testdasi did it. Not sure if he is still around, as he's been silent for over a year on this stuff.

 

@SpencerJ, do you know anyone or any good documentation that can assist me getting started with making a docker for the UUD and incorporating everything into it (other dockers/settings), so I can make this easier for everyone to install out of the box?

 

It would be really awesome if I could release UUD 1.7 in its own docker with native Grafana 8.4.X support; then people would only have to modify their Grafana panels and never mess with a config file again. I think this would bring the UUD to the masses, and it would make updating much easier, with different docker builds for past versions.

 

Anyone down with this? Anyone want to help and guide me through the docker build process? DM me if you want to collaborate and take me to school ha ha. Otherwise, I'll try to learn this new stuff in my spare time. But this is the vision I see for the future. One click install of the UUD and then just customize for your UNRAID server.

 

 

Edited by falconexe
