Posts posted by Jufy111

  1. Glad to hear it works for you.

    I'm curious to see if any panels didn't work for you. I've only got one system to test on, so other systems may not work. Let me know if there are any panels that don't work.

    As far as query speed goes, yes, some queries definitely need improvement. I'll send you a PM to discuss some things.

  2. 4 hours ago, Bili said:

    I am also using influx2, and I spent several hours converting only a few queries. It is really difficult to complete them all. I wonder if your project has been released? Maybe I can help a little bit.

     I do have a copy, but I've been waiting on falconexe to discuss some things with him. Here is the dashboard; it has most of the queries complete.

     

    This is still a draft and a lot of the queries still need optimization.

     

    Ultimate_UNRAID_Dashboard_v1.7_influxDB2.0_v0.1.json

  3. I got it to launch, but I can't say if it's doing any more than that.

     

    Change the volume mapping to drop the filename and map the directory instead. I was getting a weird directory/file type mismatch error.

    -v /mnt/user/appdata/alloy/config/config.alloy:/etc/alloy/ >>>>>> -v /mnt/user/appdata/alloy/config/:/etc/alloy/

     

    In the post arguments you need 'quotes for the whole expression' if you have spaces. And I think you need to add the alloy command at the start.

    https://grafana.com/docs/alloy/latest/reference/cli/run/

    'alloy run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy'
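Putting both fixes together, a plain docker run equivalent might look something like the sketch below. This assumes the official grafana/alloy image, whose entrypoint already invokes alloy (so the arguments start with 'run'); on unRAID the trailing run expression goes in Post Arguments instead, and depending on how the template invokes it you may need the leading 'alloy' as in the quoted expression above. Ports and paths are from my setup; adjust for yours.

```shell
# Sketch only: directory (not file) volume mapping, plus the run command
docker run -d \
  --name alloy \
  -p 12345:12345 \
  -v /mnt/user/appdata/alloy/config/:/etc/alloy/ \
  grafana/alloy:latest \
  run --server.http.listen-addr=0.0.0.0:12345 \
      --storage.path=/var/lib/alloy/data \
      /etc/alloy/config.alloy
```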

     

    my setup below
    Screenshot-2024-04-19-170844.png

  4.  

    I see far too often, both here and on other forums, people asking how to get docker applications that are not available on the Community Applications page. I know the answer is available in a few places on this forum, but honestly, it's not that easy to find, so I thought I'd make a guide that I can link people to.

     

    This guide will hopefully provide enough information to not only get your container up and running, but also help you understand how containers are mapped to your unRAID system.

     

    Also, I'm happy to admit that I don't know everything; there is a good chance that the way I have done things here is not the best way, so feel free to leave feedback to improve this guide.

     

    To start off, I'll clarify the nomenclature, as I regularly see people confusing terms.

    Nomenclature

    1. host: In the case of this guide, the host refers to the unRAID server.
    2. docker image: Think of this as the template for the docker installation.
    3. docker container: An instance of a docker image; multiple containers can run from the same docker image.
    4. persistent data: Data that is retained (not removed) when a container is updated or recreated.
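To make the image/container distinction concrete, here's a sketch (the image is the one used in the examples later in this guide; the container names are made up):

```shell
# One image...
docker pull frooodle/s-pdf:latest

# ...can back multiple independent containers, each with its own
# name, host port and settings, all created from that same image.
docker run -d --name stirling-pdf-a -p 8080:8080 frooodle/s-pdf:latest
docker run -d --name stirling-pdf-b -p 8081:8080 frooodle/s-pdf:latest
```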

     

    Interpreting the Docker or Docker compose information for unRAID

    The image that you are trying to install may provide a docker run command or a docker-compose config in its documentation. These often get a container up and running pretty quickly and are usually all that is needed, but I would recommend reading any documentation so that you are at least aware of what each part does.

     

    I have an example of each below, with parts colour coded to make it easier.

     

    red: Host path/port
    blue: container path/port

    green: Environmental Variables

    purple: Image source

    grey: my comments, prefixed with #

    Docker run command

    Quote

     

    docker run -d \ #'-d' just run the command detached, you can ignore this

      -p 8080:8080 \ #these are not always the same (you might see 8080:80 for example)

      -v /location/of/trainingData:/usr/share/tessdata \

      -v /location/of/extraConfigs:/configs \

      -v /location/of/logs:/logs \

      -e DOCKER_ENABLE_SECURITY=false \

      -e INSTALL_BOOK_AND_ADVANCED_HTML_OPS=false \

      --name stirling-pdf \

      frooodle/s-pdf:latest

     

     

    You may also find this represented as a single line.

     

     

    Quote

    docker run -d -p 8080:8080 -v /location/of/trainingData:/usr/share/tessdata -v /location/of/extraConfigs:/configs -v /location/of/logs:/logs -e DOCKER_ENABLE_SECURITY=false -e INSTALL_BOOK_AND_ADVANCED_HTML_OPS=false --name stirling-pdf frooodle/s-pdf:latest

    Docker-Compose

    Quote

     

    version: '3.3'  #part of docker-compose, ignore this

    services:  #part of docker-compose, ignore this

      stirling-pdf:

        image: frooodle/s-pdf:latest

        ports:

          - '8080:8080'

        volumes:

          - /location/of/trainingData:/usr/share/tessdata

          - /location/of/extraConfigs:/configs

          - /location/of/customFiles:/customFiles

          - /location/of/logs:/logs

        environment:

          - DOCKER_ENABLE_SECURITY=false

          - INSTALL_BOOK_AND_ADVANCED_HTML_OPS=false

     

     

    Adding a new docker container to unRAID

    1. Scroll to the bottom of the docker tab page in the unRAID webUI and select "add a new container"

    Spoiler

    add-container.png

     

    You can leave many of the fields blank; I'll go over the important ones.

    • 2. Name: you can use any name here, or just use the name of the image.
    • 3. Repository: If you are pulling the image from dockerhub, you can just use: '<author>/<repository>:latest'
      • if the docker image you are trying to pull is from the GitHub container registry (ghcr), use: ghcr.io/author/repository
      • there are also other registries, such as linuxserver's: lscr.io/author/repository
      • If you want to use a specific release instead of the latest, specify that tag instead
    • 4. (these are not always necessary, but may improve the experience)
      • 4.1. Icon URL: Not a necessity, but it's nice to have for the unRAID UI; just link the URL to an image you want.
      • 4.2 WebUI: http://[IP]:[PORT:8080]
        • # replace “8080” with the container port (not the host port). This is not necessary, but it will allow you to launch the webUI by right clicking on the icon in the unRAID webUI
      • 4.3 Extra parameters: if there are extra parameters that are not Paths/Ports/Variables, you can put them in here.
      • 4.4 Post Arguments: You can most likely ignore this. It will run a command on container startup; you can use it to do things like run a script or update packages.
      • 4.5 Network Type: Bridge is the default type here, just use it unless you have a reason to use something else.

     

    Spoiler

     

    Mapping Paths, Volumes and Environments for the container

    5.1 Volumes

    This maps a directory in the container to a share on your host machine. This data is persistent and remains if the container is updated or removed. One of the directory mappings shown in the docker config is: '/location/of/extraConfigs:/configs'.

    It shows the host path (red) and the container path (blue) separated by a colon.

    In this case the host path is shown as '/location/of/extraConfigs', but different authors will all show this differently. You will need to change this to the appropriate share for your unRAID machine.

    Typically, in unRAID, the 'appdata' share is used for persistent docker container data: '/mnt/user/appdata'. This can then be appended with a directory for each container; in this case I am going to call the directory 's-pdf', so the host path is:

    '/mnt/user/appdata/s-pdf'

     

    On the right side of the colon we see the container path: '/configs'. This container path should usually remain unchanged from the example that you find for your image.

     

    If you are mapping to other shares that are not appdata (maybe it's a media folder or a downloads folder), it's good practice to limit the container's access to the minimum and not just give access to the root of a share.

    For example, you might have a downloads share '/downloads' that multiple apps have access to. For an application such as an FTP client, I would map it to a subdirectory '/downloads/ftp-client/' and not '/downloads/' itself.
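As a sketch (the paths here are hypothetical), the difference is just the host side of the mapping:

```shell
# Too broad: the container can see every other app's downloads
-v /mnt/user/downloads/:/downloads

# Better: scoped to a subdirectory for this one container
-v /mnt/user/downloads/ftp-client/:/downloads
```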

     

    To add a mapped path to the container, scroll to the bottom of the add container page and click “Add another Path, Port, Variable, Label or Device”. The name field is used as an identifier; you can really put anything in here. I'm not sure if there is a proper convention for it, but I have commonly seen ALL_CAPS used, so that's what I will do here.

     

    Config Type: Path

    Name: CONFIG_DIR #can be called anything

    Container path: '/configs'

    Host path: '/mnt/user/appdata/s-pdf'

    Access Mode: Set as required

     

    It should look something like this

    Spoiler

    path.png

     

    Repeat for all the volumes that are required for the image.

     

    5.2 Ports

    Ports are set up similarly to paths. The left side of the colon is the host port (e.g. the port you might access a webUI through); the right side is the container port. You can change the host port to anything that is not already in use, unless other applications need to talk to your new application on a specific port, in which case you might have to change some configs. Leave the container port as it is.
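For example, if host port 8080 is already taken, you might remap only the host side and leave the container side alone:

```shell
# Host port 8081 maps to container port 8080;
# the webUI would then be reached on port 8081 of the host
-p 8081:8080
```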

     

    Config Type: Port

    Name: WEBUI_PORT # Call this anything you want

    Container Port: 8080 #This will be the port number on the right side of the colon. It might be different for your image.

    Host Port: 8080 # you can change this if port 8080 is used by another container

     

    Repeat for all ports that are required

     

    5.3 Environments

    Environments are basically variables that are passed through to the container; this might be a username, password, or a value that the container uses which is decided by the user. Left of the equals sign is the key; right is the value.

     

    Config Type: Variable

    Name: DOCKER_ENABLE_SECURITY  # you can just name this same as the Key

    Key: DOCKER_ENABLE_SECURITY  

    Value: false

     

    Repeat for all variables and then press apply.

     

    FAQ

    Q: Help, I have 2 containers that both use the same port, and I can't change it as I have an application that talks to them over that port.

    A: Set up a bridge connection in the unRAID network settings. This will allow you to use the custom bridge network type and set a unique IP address for each container.
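At the docker level (outside the unRAID UI), the equivalent is roughly the following; the network name, subnet and IPs here are hypothetical:

```shell
# Create a user-defined bridge network with a known subnet...
docker network create --subnet=192.168.1.0/24 br0

# ...then give each container its own fixed IP on that network,
# so both can keep the same container port without clashing
docker run -d --network=br0 --ip=192.168.1.201 --name app-a some/image
docker run -d --network=br0 --ip=192.168.1.202 --name app-b some/image
```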

     

    Q: What if my image is a database that has heavy reads and writes?

    A: Instead of the host path '/mnt/user/appdata/dbcontainername', you can use '/mnt/cache/appdata/dbcontainername'. This bypasses the overhead from the FUSE filesystem, which I have found increases performance.

  5. 10 hours ago, grumpy said:

    There is no session _value from Varken to Influx, unless that is an Influx built-in. But that is moat as @falconexe is not using session_id for these queries any way. He is using player_state.

     

    The code above looks like the code from varken that writes data to influxdb.

     

    Sorry, my apologies: the "_value" is an influxDB v2.x field. In v2, "_value" is the data column associated with the "hash" field.

     

    Unfortunately, not all of the v1 queries were displayed in grafana when I imported the dashboard using a v2 data source, so I'm not able to see how falconexe has structured the queries using influxql. If you post the query here, I'll see if I can work it out.

     

    For the stream history, I've done it the following way: I group by the "hash" value and pick the last value, with some data cleanup. My Flux (influxdb v2) query is below.

     

    Stream Log (Detailed)

    Spoiler

    // groups by hash value and passes data from last time value.

    from(bucket: "${bucketVarken}")

      |> range(start: -7d)

      |> filter(fn: (r) => r["_measurement"] == "Tautulli")

      |> filter(fn: (r) => r["session_id"] != "") //data cleanup

      |> group(columns: ["_value"]) //groups by the "hash" value

      |> last() //gets the last value in each group

      |> filter(fn: (r) => r["username"] != "") //data cleanup

     

     

  6. 20 hours ago, grumpy said:

    Now after watching the panel for 16 hours, and correcting some issues for myself I have noticed some other issues that go beyond my understanding.

     

    Stream Log (Overview) misses some entries. In music playing from Plex it can keep up sometimes, skip a couple of songs, or even skip 10's of songs. Probably not noticed during movies or tv shows; I do not know as I haven't made it that far during testing.

    Stream Log (Detailed) Keeps up even though there are delays, to be expected Plex -> Tautulli -> Varken -> Influx, same with Current Streams. During Music play percentage is of no use, while the song can have 72% remaining when in fact the next song has started playing.

    Even though most of my posts seem like complaints I really like to thank @falconexe for all of his hard work, it is functional and visual pleasing.

    I know I wouldn't have been able to do this on my own accord.

     

    Take the following with a grain of salt, because my DB is influxDB v2 and I have not used the queries from falconexe's dashboard. The following is based on my experience updating all the queries to influxdb v2. Unfortunately a lot of the queries are just missing, so I haven't been able to look at the influxql queries and convert them to flux.

     

    I think the stream log misses entries because the filter is using tag "session_id", if you change this to "_value" it should fix it.

    I believe the session_id stays the same for a single session, even if different media is played (songs or consecutive episodes). This means that if a stream is stopped and restarted, it will still have multiple entries in the history.

     

    I agree, the percentage is a little useless. This is because varken (and maybe tautulli too) only updates at most every 30 seconds. If a song finishes between DB writes, the last percentage reading is left in the history. I don't believe that there is any way around this at this stage (short of coding some logic into the query that updates the percentage to 100 if conditions are met).

  7. 12 minutes ago, Scheev said:

    I was able to figure it out, but my only hurdle is related to the use of InfluxQL for the dashboard instead of Flux. Is there a setting that will alleviate this? I started rewriting the dashboard in Flux...but realized theres probably another way around it.

    Give it a few days. I'm currently working on updating all the queries to Flux for UUD1.7.

     

    I haven't been able to find a way around it.

  8. On 4/3/2024 at 8:58 AM, Scheev said:

    Need some advice on here, Ive been bashing my head for awhile trying to get this all working.

     

    I am finding that all the guides are fairly outdated. When I use telegraf 1.20.2 it doesn't gather metrics cause the only way ive found to get the config provides the latest config. Using an older config wouldn't exactly solve my problem either as explained below.

    Switching to the latest telegraf branch gives me permissions errors and won't start. Ive tried to do some chmod antics but can't seem to get the hang of where or what this is supposed to be. In my post arguments I have it running apt install arguments that have been outlined in guides and such. Would that be the culprit? how do I elevate permissions for this?

    E: List directory /var/lib/apt/lists/partial is missing. - Acquire (13: Permission denied)

    I imagine if telegraf started I could likely configure it to give metrics to influx.

    Next problem, my influxDB docker is running influxdb v2 because I have a OPNsense instance that is posting metrics to Influx and requires it. However, without telegraf starting I can't even use the influxdb_v2 inputs in the config.

     

    As well, how would I configure influx if I have two different instances of telegraf providing metrics to the database?

     

    Thanks for any and all help!

     

     

     

    I'm running influx v2 as well.

    Telegraf 1.20.2 was the last version before the devs locked down the permissions and made it impossible to install smartmontools and other packages in the container. With any release newer than this, you will have to remove the install argument for smartmontools and remove [inputs.smart] from the telegraf config. This will sacrifice any smart data.

     

    I don't use OPNsense, but I just have my pfsense configured to write to a bucket called 'pfsense'. Pfsense has its own telegraf instance; I assume that OPNsense is the same.

     

    I'm still not quite sure why you need the newer version of telegraf. OPNsense should have its own telegraf instance pointed to a separate bucket in your influxdb 2 instance. This leaves the docker telegraf free to be installed as v1.20.2.

     

    See my post above for my telegraf:1.20.2 config that is configured for influxdb v2.

  9. 16 minutes ago, grumpy said:

    @Jufy111

    Thank You, I'm looking forward to your example, as far as the smartmontools. I think it is installed with the docker image post argument; which is how I did it.

    /bin/sh -c 'apt update && apt install -y smartmontools && apt install -y lm-sensors && telegraf' --user 0 

     

    I do get the panels:  Drive SMART Health Overview, Drive Life

     

    Hmm, not sure about the SSD data then.

     

    Here is my telegraf config file (although I'm using influxdb2, so that section will look different for you). It works for me, but still may not work for you. Full disclosure too: I've not actually tested UUD1.7, as it's all InfluxDB v1 queries. I'm slowly changing them over to v2.

     

    Spoiler

    [agent]
      interval = "10s"
      round_interval = true
      metric_batch_size = 1000
      metric_buffer_limit = 10000
      collection_jitter = "0s"
      flush_interval = "10s"
      flush_jitter = "0s"
      precision = ""
      debug = false
      quiet = false
      logfile = ""
      hostname = ""
      omit_hostname = false

     

    [[outputs.influxdb_v2]]    
      urls = ["http://192.168.100.100:8086"]
      token = "your-token-here"
      organization = "your-org-here"
      bucket = "system"

     

    [[inputs.cpu]]
      percpu = true
      totalcpu = true
      collect_cpu_time = false
      report_active = false

     

    [[inputs.disk]]
      ignore_fs = ["tmpfs", "devtmpfs", "devfs", "overlay", "aufs", "squashfs"]

     

    [[inputs.diskio]]

     

    [[inputs.docker]]    
      endpoint = "unix:///var/run/docker.sock"
      gather_services = false
      container_names = []
      container_name_include = []
      container_name_exclude = []
      timeout = "5s"
      perdevice = true
      total = true
      docker_label_include = []
      docker_label_exclude = []

     

    [[inputs.mem]]


    [[inputs.net]]
    #  interfaces = ["eth0","br-0d215e2e2923","br-8be3c4b3369e"]


    [[inputs.processes]]
    [[inputs.swap]]
    [[inputs.system]]
    [[inputs.smart]]
      use_sudo = false
      # path_smartctl = "/usr/bin/smartctl"
      attributes = true

     

    [[inputs.sensors]]
    [[inputs.apcupsd]]

     

  10. 12 hours ago, Kilrah said:

    That's just because there's still an automated action building the code every night even though it hasn't changed in 4 years.

    Ah bummer. I might have a go at forking varken and updating it to be compatible with influx 2.0.

     

    There is an outstanding pull request on GitHub that apparently enables it and works, so I might just see if I can fork it and manually merge the PR.
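If anyone wants to try the same, GitHub lets you fetch a pull request by number into a local branch without waiting for it to be merged; the PR number and branch name below are placeholders:

```shell
git clone https://github.com/Boerderij/Varken
cd Varken
# Pull the open PR into a local branch and switch to it
git fetch origin pull/<PR-number>/head:influxdb2-pr
git checkout influxdb2-pr
```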

  11. 54 minutes ago, grumpy said:

    So now that I have all components running as expected (I think) I have more questions.

    Did you setup a Varken database in influx as your Dashboard variable tab picture suggests you did. As the guides I followed does not suggest that but to use the Telegraph database. 

     

    I have no Plex (no media) information showing on my Dashboard but as far as I can tell Telegraph, Taututlli and Varken are all processing the data as seen through the log of each.

     

    I do not have IPMI so how do I get that information? (TEMPS)

    Array Drive, System Temps/Power,  CPU, Ramm DIMM, Fan Speeds

    I use NUTs(slave) for the UPS; the information is shown in Uraid but not in the dashboard. How to get that?

    SSD Temp, Life, Power On, Lifetime reads/writes, are missing.

     

    The dashboard variable just points to a database, not a bucket.

     

    The system stats are collected by telegraf and stored in the 'system' bucket. You will need to configure the telegraf config file to collect all the system stats.

     

    The ups info is also configured in the telegraf config.

     

    Do you get hdd stats (smart data, not disk usage and IO)? My guess is the telegraf config is not configured properly, or you haven't installed smartmontools in the telegraf container.

     

    The telegraf config is quite long, but most of it is just comments. Once I'm back at my computer, I'll post a shortened version for you.

  12. 5 hours ago, PJsUnraid said:

    For some reason my Plex statistics shows N/A until i restart Varken, then it reoccurs every 5ish minutes.. any ideas?

    Edit: more specifically the total values (total movies, total seasons, total shows etc). The rest seems to be fine

    Editx2: Looks like it only updates when tautulli starts streams, then goes back to N/A after a few mins

    I have come across issues with varken not consistently writing data to the database. It is supposed to write points every 30s, but it does not.

     

    In the influx v2 version of UUD that I've been working on, I've modified the queries so that they get the most recent result from the last 10 minutes.

     

    You could try and do the same to the v1 queries.
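Roughly, that Flux change looks like the sketch below (bucket and measurement names are from my setup and may differ for you); the analogous change in a v1 InfluxQL query would be selecting last() with a WHERE time > now() - 10m condition.

```
from(bucket: "varken")
  |> range(start: -10m)    // only consider the last 10 minutes
  |> filter(fn: (r) => r["_measurement"] == "Tautulli")
  |> last()                // take the most recent point in that window
```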

  13. On 4/3/2024 at 5:12 AM, falconexe said:

    While I was researching how to currently install Varken, I noticed that it only supports InfluxDB 1.8.

     

     

    From the Varken GitHub: https://github.com/Boerderij/Varken

     

    1458606867_Screenshot2024-04-02120152.png.134d4402ec02be9c569d367eb0e69dea.png

     

     

    So, if we were to move the UUD to Influx 3.X using SQL, it would require someone to pick up the mantel on Varken, and branch it. If anyone is serious about it, and willing to continue that project, I'm willing to try and get the UUD to Influx 3.X in tandem, once they release it and we can get our hands on it.

     

    Once Varken's repository is gone gone, the main part of the UUD that I use (Plex) would need something else to get the Tautulli data into InfluxDB. I'm open to suggestions. We don't necessarily need Varken, we just need the code equivalent that moves the data. Perhaps a direct API call from Tautulli to InfluxDB? Let me know your thoughts (anyone).

     

     

    @Jufy111 @viktortras

     

     

    Also I don't think it is dead. The dockerhub shows a nightly build updated 18 hours prior to this post.

    https://hub.docker.com/r/boerderij/varken/tags

  14. 5 hours ago, falconexe said:

    While I was researching how to currently install Varken, I noticed that it only supports InfluxDB 1.8.

     

     

    From the Varken GitHub: https://github.com/Boerderij/Varken

     

    1458606867_Screenshot2024-04-02120152.png.134d4402ec02be9c569d367eb0e69dea.png

     

     

    So, if we were to move the UUD to Influx 3.X using SQL, it would require someone to pick up the mantel on Varken, and branch it. If anyone is serious about it, and willing to continue that project, I'm willing to try and get the UUD to Influx 3.X in tandem, once they release it and we can get our hands on it.

     

    Once Varken's repository is gone gone, the main part of the UUD that I use (Plex) would need something else to get the Tautulli data into InfluxDB. I'm open to suggestions. We don't necessarily need Varken, we just need the code equivalent that moves the data. Perhaps a direct API call from Tautulli to InfluxDB? Let me know your thoughts (anyone).

     

     

    @Jufy111 @viktortras

     

     

    It's possible to enable the influxdb v1 API on v2, but it requires some terminal work to set up the backwards compatible bucket and endpoint.

    https://docs.influxdata.com/influxdb/v2/reference/api/influxdb-1x/
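The rough shape of that terminal work, using the influx CLI (the bucket ID and username here are placeholders; see the linked docs for the details):

```shell
# Map a v1-style database/retention-policy pair onto an existing v2 bucket
influx v1 dbrp create \
  --db varken \
  --rp autogen \
  --bucket-id <bucket-id> \
  --default

# Create v1-style username/password credentials for that bucket
influx v1 auth create \
  --username varken \
  --read-bucket <bucket-id> \
  --write-bucket <bucket-id>
```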

     

    Hopefully the backwards compatibility remains with the release of v3. I'll have a look at it and see how hard it would be to modify the post functions for v2/SQL support. But I'm no dev, so I'm probably out of my depth.

     

    **edit

    It actually doesn't look too hard to modify the payload in varken to make it compatible with influxdb2 or SQL. If I find some spare time someday, I might fork it and make a version that is natively compatible with both influxdb v1 and v2.
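To illustrate why the change looks manageable: both v1's /write and v2's /api/v2/write endpoints accept the same line protocol, so the payload builder mostly needs a different URL and auth header, not a new format. A hypothetical Python sketch of the formatting side (not varken's actual code; escaping is simplified for illustration):

```python
def to_line_protocol(measurement, tags, fields, ts_ns):
    """Build one InfluxDB line-protocol record: measurement,tags fields timestamp."""
    tag_str = ",".join(f"{k}={v}" for k, v in sorted(tags.items()))
    parts = []
    for k, v in sorted(fields.items()):
        if isinstance(v, bool):          # check bool before int: bool is an int subclass
            parts.append(f"{k}={'true' if v else 'false'}")
        elif isinstance(v, int):
            parts.append(f"{k}={v}i")    # integer fields need an 'i' suffix
        elif isinstance(v, str):
            parts.append(f'{k}="{v}"')   # string fields are double-quoted
        else:
            parts.append(f"{k}={v}")     # floats go through as-is
    head = measurement + ("," + tag_str if tag_str else "")
    return f"{head} {','.join(parts)} {ts_ns}"

# A Tautulli-style point similar to what varken writes
line = to_line_protocol(
    "Tautulli",
    {"username": "grumpy"},
    {"hash": "abc123", "percent_complete": 72},
    1700000000000000000,
)
# line == 'Tautulli,username=grumpy hash="abc123",percent_complete=72i 1700000000000000000'
```

A v2 fork would then POST that same string to /api/v2/write with a Token header instead of v1's /write with username/password.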

  15. 2 hours ago, viktortras said:

    I can help you with this if you want too!! I have unraid updated to last version and UUD running with 1.8 influxdb! I will send you my telegram or discord if you want to contact me, I can do some tests to test 2.0 queries and more!

    I've made a start on this already. More than happy to collaborate.

  16. If you have an influxdb 2.0+ version installed, the webUI actually has a great tool for generating basic queries. It makes it quite simple; you can just copy them out and paste them into Grafana.

     

    I'll find some time over the next week and populate what I can. I don't, however, have a 1.8 instance installed, so I can't just compare against some of the old queries to make sure what I'm adding is correct. I think the biggest issue I came across last time was mapping the drive serial numbers (smart data) to the drive mounting points.

     

    But I'll have a crack at it, I'm certainly not expecting you to find the time to learn a new query language.

     

    It also looks like influxdb v3 will be using SQL instead of Flux.

    https://www.influxdata.com/blog/the-plan-for-influxdb-3-0-open-source/

  17. 18 minutes ago, falconexe said:

     

    Yes, still running on InfluxDB 1.8. If/when I decide to move to InfluxDB 2.0+, I'll rebrand the UUD as "UUD 2.1". I may do this and reintegrate the new version of the UNRAID API-RE at that point. I will likely also update Grafana to version 9+ and take my unRAID 0S to the current release as well.

     

    If I take on this project, it will be out of necessity. Great question BTW. I may pick your brain at that point if you are open to assist.

     

     

    I might even start building this myself in FLUX and just pass it along to you to publish.

     

    There are a few things I'd change, such as making the influx bucket names variables, rather than hard coded.

     

    I think I got just about everything working with FLUX for UUD 1.6, but I did also butcher and change stuff, so I might have just removed some panels that I couldn't get to work.

     

    But yeah, happy to help how I can.

  18. I have considered one more thing that may/may not work. Try the following:

    1. Stop the container (if it's running).
    2. Remove the existing 'minio' share. Confirm the folder "/mnt/user/minio/" no longer exists on the host system.
    3. Recreate the minio share (even better, call it something else like 'minio-data'; you can change the name back later once you get the container working).
    4. Delete the persistent appdata folder "/mnt/user/appdata/minio/".
    5. Start the container again (if you changed the share name, make sure you correct the mapped directory).

     

    I suspect it is possible that something went wrong when the container was set up and this is causing permission errors. I had a similar experience the other day setting up a PostgreSQL db.

     

  19. Yeah, I don't know why it works fine for me. The last thing you could try would be to change the folder permissions in the unraid CLI using

    chmod 777 -R /mnt/user/minio

     

    Does the container install but just not run, or does it spit the error out when trying to install?
