Jufy111

  1. I got it to launch, but I can't say if it's doing any more than that. Change the volume mapping to drop the filename and map the directory instead; I was getting a weird directory/file type mismatch error. -v /mnt/user/appdata/alloy/config/config.alloy:/etc/alloy/ >>>>>> -v /mnt/user/appdata/alloy/config:/etc/alloy/ In the Post Arguments you need 'quotes for the whole expression' if you have spaces, and I think you need to add the alloy command at the start https://grafana.com/docs/alloy/latest/reference/cli/run/ 'alloy run --server.http.listen-addr=0.0.0.0:12345 --storage.path=/var/lib/alloy/data /etc/alloy/config.alloy' My setup below.
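Putting those settings together, a hedged sketch of the equivalent docker run command (the grafana/alloy image name and the port are assumptions based on Grafana's docs; whether the leading alloy command is needed depends on the image's entrypoint):

```shell
# Sketch only: image name, port and paths are assumptions; adjust to your template.
docker run -d \
  --name alloy \
  -p 12345:12345 \
  -v /mnt/user/appdata/alloy/config:/etc/alloy/ \
  grafana/alloy:latest \
  run --server.http.listen-addr=0.0.0.0:12345 \
      --storage.path=/var/lib/alloy/data \
      /etc/alloy/config.alloy
```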
  2. I see far too often, both here and on other forums, people asking how to get Docker applications that are not available on the Community Applications page. I know the answer is available in a few places on this forum, but honestly it's not that easy to find, so I thought I'd make a guide that I can link people to. This guide will hopefully provide enough information to not only get your container up and running, but also help you understand how containers are mapped to your unRAID system. I'm happy to admit that I don't know everything; there is a good chance that the way I have done things here is not the best way, so feel free to leave feedback to improve this guide.

To start off, I'll clarify the nomenclature, as I regularly see people confusing terms.

Nomenclature
host: in the case of this guide, the host refers to the unRAID server.
docker image: think of this as the template for the docker installation.
docker container: an instance of a docker image; multiple containers might run from the same docker image.
persistent data: data that is retained (not removed) when the container is updated or removed.

Interpreting the docker run or docker-compose information for unRAID
The image that you are trying to install may provide a docker run command or docker-compose config in its documentation. These often get a container up and running pretty quickly and are usually all that is needed, but I would recommend reading any documentation so that you are at least aware of what each part does. I have an example of each below, colour coded to make it easier:
red: host path/port
blue: container path/port
green: environment variables
purple: image source
grey: my comments, prefixed with #

Docker run command
You may also find this represented as a single line.

Docker-Compose

Adding a new docker container to unRAID
1. Scroll to the bottom of the docker tab page in the unRAID webUI and select "add a new container". You can leave many of the fields blank.
I'll go over the important ones.
2. Name: you can use any name here, or just use the name of the image.
3. Repository: if you are pulling the image from Docker Hub, you can just use: '<author>/<repository>:latest'. If the image you are trying to pull is from the GitHub Container Registry (ghcr), use: 'ghcr.io/<author>/<repository>'. There are also other registries, such as linuxserver's: 'lscr.io/<author>/<repository>'. If you want to use a specific release instead of the latest, specify that tag instead.
4. (These are not always necessary, but may improve the experience.)
4.1 Icon URL: not a necessity, but nice to have for the unRAID UI; just link the URL of an image you want.
4.2 WebUI: http://[IP]:[PORT:8080] # replace "8080" with the container port (not the host port). This is not necessary; it allows you to launch the webUI by right-clicking the icon in the unRAID webUI.
4.3 Extra Parameters: if there are extra parameters that are not paths/ports/variables, you can put them in here.
4.4 Post Arguments: you can most likely ignore this. It runs a command on container startup; you can use it to do things like run a script or update packages.
4.5 Network Type: Bridge is the default type here; just use it unless you have a reason to use something else.

Mapping Paths, Volumes and Environments for the container
5.1 Volumes
This maps a directory in the container to a share on your host machine. This data is persistent and remains if the container is updated or removed. One of the directory mappings shown in the docker config is: 'location/of/extraConfigs:/configs'. It shows the host path (red) and the container path (blue) separated by a colon. In this case the host path is shown as 'location/of/extraConfigs', but different authors will show this differently. You will need to change this to the appropriate share for your unRAID machine.
Typically, in unRAID, the 'appdata' share is used for persistent docker container data: '/mnt/user/appdata'. This can then be appended with a directory for each container; in this case I am going to call the directory 's-pdf', so the host path is: '/mnt/user/appdata/s-pdf'. On the right side of the colon we see the container path: '/configs'. This container path should usually remain unchanged from the example that you find for your image. If you are mapping to other shares that are not appdata (maybe it's a media folder, maybe it's a downloads folder), it's good practice to limit the access of the share to the minimum and not just give access to the root of the share. For example, you might have a downloads share '/downloads' that multiple apps have access to; for an application such as an FTP client, I would map it to a directory '/downloads/ftp-client/' and not '/downloads/'.
To add a mapped path to the container, scroll to the bottom of the add container page and click "Add another Path, Port, Variable, Label or Device". The Name field is used as an identifier; you can really put anything in here. I'm not sure if there is a proper convention for it, but I have commonly seen ALL_CAPS used, so that's what I will do here.
Config Type: Path
Name: CONFIG_DIR # can be called anything
Container Path: '/configs'
Host Path: '/mnt/user/appdata/s-pdf'
Access Mode: set as required
It should look something like this. Repeat for all the volumes that the image requires.

5.2 Ports
Ports are configured similarly to paths. The left side of the colon is the host port (e.g. the port you might access a webUI through); the right side is the container port. You can change the host port to anything that is not already used, unless other applications need to talk to your new application on that port, in which case you might have to change some configs. Leave the container port as it is.
Config Type: Port
Name: WEBUI_PORT # call this anything you want
Container Port: 8080 # the port number on the right side of the colon; might differ for you
Host Port: 8080 # you can change this if port 8080 is used by another container
Repeat for all ports that are required.

5.3 Environment Variables
Environment variables are values passed through to the container; this might be a username, a password, or a value the container uses which is decided by the user. Left of the equals sign is the key, right is the value.
Config Type: Variable
Name: DOCKER_ENABLE_SECURITY # you can just name this the same as the key
Key: DOCKER_ENABLE_SECURITY
Value: false
Repeat for all variables and then press Apply.

FAQ
Q: Help, I have two containers that both use the same port, but I can't change it as I have an application that talks to them over that port.
A: Set up a bridge connection in the unRAID network settings. This will allow you to use the custom bridge network type and set a unique IP address for the container.
Q: What if my image is a database that has heavy reads and writes?
A: Instead of the host path '/mnt/user/appdata/dbcontainername' you can use '/mnt/cache/appdata/dbcontainername'. This bypasses the overhead from the FUSE filesystem, which I have found increases performance.
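Since the colour-coded examples were originally images and didn't survive here, a plain-text sketch of how the pieces of this guide map onto a single docker run command (author/repository is a placeholder; the paths, port and variable are the same example values used above):

```shell
# Placeholder example tying the guide's values together.
docker run -d \
  --name s-pdf \
  -p 8080:8080 \                          # host port : container port
  -v /mnt/user/appdata/s-pdf:/configs \   # host path : container path
  -e DOCKER_ENABLE_SECURITY=false \       # environment variable, KEY=value
  author/repository:latest                # image source
```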
  3. The code above looks like it is the write-data-to-influxdb code from Varken. Sorry, my apologies, "_value" is an InfluxDB v2.x field; in v2, "_value" is the data column associated with the "hash" field. Unfortunately not all of the v1 queries were displayed in Grafana when I imported the dashboard using a v2 data source, so I'm not able to see how falconexe has structured the queries using InfluxQL. If you post the query here I'll see if I can work it out. For the stream history, I've done it the following way: I grouped by the "hash" value and picked the last value, with some data cleanup. My Flux (InfluxDB v2) query is below. Stream Log (Detailed)
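The query itself appears to have been lost when the post was archived; a minimal Flux sketch of the approach described (the "varken" bucket and "Tautulli" measurement names are assumptions based on a default Varken setup):

```flux
// Sketch only: bucket and measurement names are assumptions.
from(bucket: "varken")
  |> range(start: -24h)
  |> filter(fn: (r) => r._measurement == "Tautulli" and r._field == "hash")
  |> group(columns: ["_value"])  // one group per stream hash
  |> last()                      // keep only the most recent point per stream
```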
  4. Take the following with a grain of salt, because my DB is InfluxDB v2 and I have not used the queries from falconexe's dashboard; the following is based on my experience updating all the queries to InfluxDB v2. Unfortunately a lot of the queries are just missing, so I haven't been able to look at the InfluxQL query and convert it to Flux. I think the stream log misses entries because the filter is using the tag "session_id"; if you change this to "_value" it should fix it. I believe the session_id stays the same for a single session, even if different media is played (songs or consecutive episodes). This means that if a stream is stopped and restarted, it will still have multiple entries in the history. I agree, the percentage is kind of useless. This is because Varken (maybe Tautulli too) only updates at most every 30 seconds; if a song finishes between DB writes, the last percentage reading is left in the history. I don't believe there is any way around this at this stage (short of coding some logic into the query that updates to 100 percent if conditions are met).
  5. Thanks, I'm almost done, I think. Could definitely use your help with testing, though. I also have a few questions for @falconexe.
  6. Give it a few days. I'm currently working on updating all the queries to Flux for UUD1.7. I haven't been able to find a way around it.
  7. I'm running Influx v2 as well. Telegraf 1.20.2 was the last version before the devs locked down the permissions and made it impossible to install smartmontools and other packages in the container. With any release newer than this, you will have to remove the install argument for smartmontools and remove [inputs.smart] from the telegraf config; this sacrifices any SMART data. I don't use OPNsense, but I just have my pfSense configured to write to a bucket called 'pfsense'; pfSense has its own telegraf instance, and I assume OPNsense is the same. I'm still not quite sure why you need the newer version of telegraf. OPNsense should have its own telegraf instance pointed at a separate bucket in your InfluxDB 2 instance, which leaves the docker telegraf free to be installed as v1.20.2. See my post above for my telegraf:1.20.2 config that is configured for InfluxDB v2.
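For reference, the usual way to get smartmontools into the telegraf:1.20.2 container on unRAID is via the container template's Extra Parameters and Post Arguments fields; a hedged sketch (these field values are assumptions, adjust to your setup):

```shell
# Extra Parameters: run as root so apk is allowed to install packages
--user 0
# Post Arguments: install smartmontools at startup, then launch telegraf
/bin/sh -c 'apk update && apk add smartmontools && telegraf'
```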
  8. Hmm, not sure about the SSD data then. Here is my telegraf config file (although I'm using InfluxDB v2, so that section will look different for you). It works for me, but still may not work for you. Full disclosure too: I've not actually tested UUD1.7, as it's all InfluxDB v1 queries; I'm slowly changing them over to v2.
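The attached config didn't survive the archive, but the InfluxDB v2 section referred to normally looks like the stock [[outputs.influxdb_v2]] plugin block (the URL, token, org and bucket values below are placeholders):

```toml
# v2 output section, used in place of the v1 [[outputs.influxdb]] block.
[[outputs.influxdb_v2]]
  urls = ["http://192.168.1.100:8086"]  # placeholder InfluxDB 2 address
  token = "$INFLUX_TOKEN"               # placeholder API token with write access
  organization = "home"                 # placeholder org name
  bucket = "system"                     # placeholder bucket for system stats

# SMART input, only usable on telegraf <= 1.20.2 as described above.
[[inputs.smart]]
  attributes = true
```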
  9. Ah bummer, I might have a go at forking varken and updating it to be compatible with influx 2.0. There is an outstanding pull request on GitHub that apparently enables it and works, so I might just see if I can fork it and manually merge the PR.
  10. The dashboard variable just points to a database, not a bucket. The system stats are collected by telegraf and stored in the 'system' bucket; you will need to configure the telegraf config file to collect all the system stats. The UPS info is also configured in the telegraf config. Do you get HDD stats (SMART data, not disk usage and IO)? My guess is the telegraf config is not set up properly, or you haven't installed smartmontools in the telegraf container. The telegraf config is quite long, but most of it is just comments. Once I'm back at my computer, I'll post a shortened version for you.
  11. I have come across issues with Varken consistently writing data to the database. It is supposed to write points every 30s, but it does not. In the InfluxDB v2 version of UUD that I've been working on, I've modified the queries so that they get the most recent result from the last 10 minutes. You could try to do the same to the v1 queries.
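A minimal Flux sketch of that modification (bucket and measurement names are assumptions), widening the window to 10 minutes and keeping only the newest point:

```flux
// Sketch only: names are assumptions.
from(bucket: "varken")
  |> range(start: -10m)  // wide enough to tolerate missed 30s writes
  |> filter(fn: (r) => r._measurement == "Tautulli")
  |> last()              // most recent point in the window
```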
  12. Also, I don't think it is dead. Docker Hub shows a nightly build updated 18 hours prior to this post. https://hub.docker.com/r/boerderij/varken/tags
  13. It's possible to enable the InfluxDB v1 API on v2, but it requires some terminal work to set up the backwards-compatible bucket and endpoint. https://docs.influxdata.com/influxdb/v2/reference/api/influxdb-1x/ Hopefully the backwards compatibility remains with the release of v3. I'll have a look at it and see how hard it would be to modify the post functions for v2/SQL support, but I'm no dev, so I'm probably out of my depth. **edit It actually doesn't look too hard to modify the payload in Varken to make it compatible with InfluxDB 2 or SQL. If I find some spare time someday I might fork it and make a version that is natively compatible with both InfluxDB v1 and v2.
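The terminal work amounts to creating a DBRP mapping and v1-compatible credentials with the influx CLI, so that v1 clients like Varken can keep using the old endpoints; a hedged sketch (the bucket ID, names and password are placeholders):

```shell
# Map a v1 database name onto a v2 bucket (IDs and names are placeholders).
influx v1 dbrp create \
  --db varken \
  --rp autogen \
  --bucket-id 0123456789abcdef \
  --default

# v1 clients also need v1-style credentials.
influx v1 auth create \
  --username varken \
  --password 'changeme' \
  --write-bucket 0123456789abcdef
```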
  14. I've made a start on this already. More than happy to collaborate.
  15. Well it looks like I have no choice but to do it now