[Support] for atribe's repo Docker images


Recommended Posts

On 7/14/2023 at 6:13 PM, macronin said:

 

I'm a new Unraid user, so I was just starting my monitoring setup and ran into the same issue. Looks like the Guide may need an update.

 

But the good news is I was able to find a newer copy of the telegraf.conf file. The one I found is from February 2023:

 

https://web.archive.org/web/20230209014553/https://raw.githubusercontent.com/influxdata/telegraf/master/etc/telegraf.conf

The most up-to-date version, from just before telegraf stopped packaging this file the old way, is still in the git history. This is from the exact commit that removed the file: https://github.com/influxdata/telegraf/blob/d8db3ca3a293bc24a9120b590984b09e2de1851a/etc/telegraf.conf

 

Obviously, if telegraf gets some kind of major update this file will no longer work. Since the template uses the official image, the project can't add a startup shim that runs the telegraf config command to generate the file before launching telegraf when it doesn't already exist.

 

Meanwhile, you don't actually need to download the file. You can tweak the instructions and run the following before you launch telegraf for the first time, which avoids the chicken-and-egg problem you are hitting:

docker run --rm telegraf telegraf config > /mnt/cache/appdata/telegraf/telegraf.conf

 

Link to comment
  • 2 weeks later...

There has to be a better/smarter way to do this...

 

I am using the IPMI sensor with the Telegraf container.  Every time the container updates it breaks and I have to:

- edit telegraf.conf to comment out the [[inputs.ipmi_sensor]] section because the container won't start.

- start the container and run "docker exec -u 0 -it telegraf /bin/sh -c 'apt update && apt install -y ipmitool' " from the Unraid terminal to install ipmitool.

- edit telegraf.conf to enable the [[inputs.ipmi_sensor]] section.

- restart the container.

 

Any suggestions on how to automate this or do it better?
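One idea I've been toying with but haven't tried yet: move the ipmitool install into the template's Post Arguments field so it re-runs on every container start, something like the line below (a rough sketch for the official telegraf image; apt needs root, so the container user would also have to be 0):

/bin/sh -c 'apt update && apt install -y ipmitool && exec telegraf'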

 

Thanks

Link to comment
On 6/1/2023 at 3:42 PM, scubieman said:

Every time I open InfluxDB it goes to the initial setup. Am I doing something wrong?

 

edit: I went to the appdata directory... and the folder is empty?

 

chrome_2c0KfZiz4h.png

 

I too noticed my appdata folder was empty. When I dropped into a terminal on the Docker container I noticed there is an influxdb2 folder inside it. So I changed the Docker mapping to map that internal folder to /appdata/influxdb. Hopefully this means the database now gets backed up and isn't destroyed when the image is updated, which previously forced me to set it up from scratch again. I'm not sure if that is the correct solution, or why or when it even changed from influxdb to influxdb2. Happy to hear others' thoughts on whether this is the right or wrong solution.

 

 

https://docs.influxdata.com/influxdb/v2/reference/internals/file-system-layout/?t=Docker

 

 

This would suggest I have done the correct thing.
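For anyone else doing the same, expressed as a plain docker run the mapping I ended up with looks roughly like this (the data path is per that doc; my appdata location and the image tag are just examples, adjust them to your template):

docker run -d --name influxdb -p 8086:8086 \
  -v /mnt/user/appdata/influxdb:/var/lib/influxdb2 \
  influxdb:2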

Edited by unraid-user
Link to comment
  • 3 weeks later...
On 10/21/2022 at 12:43 AM, bclinton said:

Hi friends. Trying to get Grafana running on Unraid 6.11.1 by following the instructions at Unraid | Unraid Data Monitoring with Prometheus and Grafana. I am stuck at step 10. The Grafana container starts, but when I try to launch the web UI I get this error:

 

This page isn’t working right now

192.168.1.24 redirected you too many times.

To fix this issue, try clearing your cookies.

ERR_TOO_MANY_REDIRECTS

 

The log repeats this:

logger=context userId=0 orgId=0 uname= t=2022-10-20T18:37:34.404787385-05:00 level=info msg="Request Completed" method=GET path=/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/192.168.1.24/login status=302 remote_addr=192.168.1.118 time_ms=0 duration=407.592µs size=783 referer= handler=notfound
logger=cleanup t=2022-10-20T18:41:45.061524809-05:00 level=info msg="Completed cleanup jobs" duration=7.659615ms

 

Just looking for a suggestion. Thanks.
 

In case you didn't end up managing to fix this, or in case someone else needs a fix: add http:// to the start of the Key 1 field (where you write your Unraid server IP) in the Docker edit page for Grafana.
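If I remember right, that Key 1 field maps to Grafana's GF_SERVER_ROOT_URL environment variable, so the corrected value ends up looking something like this (IP and port are examples; 3000 is Grafana's default):

GF_SERVER_ROOT_URL=http://192.168.1.24:3000

Without the http:// Grafana keeps redirecting to what it thinks is a relative root URL, which would explain the repeated /192.168.1.24/ segments in the log above.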

Link to comment

Really having some issues trying to get Glances to show GPU information/stats. I have followed This Guide, but it seems that I still have no GPU stats.

I have also tried adding --runtime=nvidia (in the Extra Parameters: field),

and I added these as separate variables:

NVIDIA_VISIBLE_DEVICES = all (and a specific GPU-UUID)
NVIDIA_DRIVER_CAPABILITIES = All

Still I cannot get the GPU to show up.

I have changed the repo to nicolargo/glances:latest-full, but again, still no GPU.

I have been working on this for two days now and I still can't get it to show. I have read many posts on GitHub and other sites stating that I need the NVIDIA Container Toolkit and/or the py3nvml library installed. I installed py3nvml via an SSH terminal and it shows as working, and the output of nvidia-smi is this:
 

root@Pelican:~# nvidia-smi
Tue Jan  2 15:08:46 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06              Driver Version: 545.29.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     Off | 00000000:42:00.0 Off |                  N/A |
|  0%   38C    P8              19W / 275W |      2MiB / 11264MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+



I do have the Nvidia drivers installed and working with Plex, Emby, and Jellyfin.
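For reference, here is everything I have tried rolled into a single plain docker run instead of spread across the Unraid template fields (this is just my own sketch of the setup, not from the guide; I have used lowercase "all" for the capabilities value and added the usual Glances web-server option and Docker socket mount):

docker run -d --name glances --pid host --network host \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e GLANCES_OPT="-w" \
  -v /var/run/docker.sock:/var/run/docker.sock:ro \
  nicolargo/glances:latest-full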

Edited by Mr.N3rd
Link to comment
  • 1 month later...
On 1/2/2024 at 4:09 PM, Mr.N3rd said:

Really having some issues trying to get Glances to show GPU information/stats. I have followed This Guide, but it seems that I still have no GPU stats.

I have also tried adding --runtime=nvidia (in the Extra Parameters: field),

and I added these as separate variables:

NVIDIA_VISIBLE_DEVICES = all (and a specific GPU-UUID)
NVIDIA_DRIVER_CAPABILITIES = All

Still I cannot get the GPU to show up.

I have changed the repo to nicolargo/glances:latest-full, but again, still no GPU.

I have been working on this for two days now and I still can't get it to show. I have read many posts on GitHub and other sites stating that I need the NVIDIA Container Toolkit and/or the py3nvml library installed. I installed py3nvml via an SSH terminal and it shows as working, and the output of nvidia-smi is this:
 

root@Pelican:~# nvidia-smi
Tue Jan  2 15:08:46 2024       
+---------------------------------------------------------------------------------------+
| NVIDIA-SMI 545.29.06              Driver Version: 545.29.06    CUDA Version: 12.3     |
|-----------------------------------------+----------------------+----------------------+
| GPU  Name                 Persistence-M | Bus-Id        Disp.A | Volatile Uncorr. ECC |
| Fan  Temp   Perf          Pwr:Usage/Cap |         Memory-Usage | GPU-Util  Compute M. |
|                                         |                      |               MIG M. |
|=========================================+======================+======================|
|   0  NVIDIA GeForce GTX 1080 Ti     Off | 00000000:42:00.0 Off |                  N/A |
|  0%   38C    P8              19W / 275W |      2MiB / 11264MiB |      0%      Default |
|                                         |                      |                  N/A |
+-----------------------------------------+----------------------+----------------------+
                                                                                         
+---------------------------------------------------------------------------------------+
| Processes:                                                                            |
|  GPU   GI   CI        PID   Type   Process name                            GPU Memory |
|        ID   ID                                                             Usage      |
|=======================================================================================|
|  No running processes found                                                           |
+---------------------------------------------------------------------------------------+



I do have the Nvidia drivers installed and working with Plex, Emby, and Jellyfin.

@Mr.N3rd Did you ever get this to work?  I'd like to get it up and running too.

Link to comment
  • 2 weeks later...

How do you create Influx tokens?

According to the Influx docs, this is done from the CLI via the influxctl command, but that executable does not exist in the Docker image. Is there an alternate way of generating tokens in this image?
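For what it's worth, the influxdb 2.x image does appear to ship the older influx CLI (influxctl, I believe, is only for the cloud products), so something along these lines might work; the org and container names are examples, and it assumes the CLI inside the container is already authenticated or that you pass an existing operator token with --token:

docker exec influxdb influx auth create --org my-org --all-access

Tokens can also be created in the web UI, under Load Data > API Tokens, if I remember correctly.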

Link to comment

I am quite confused; I've been trying to set up Telegraf, InfluxDB, and Grafana for quite a while now, but I keep getting no data when I set up the data source in Grafana. Any help is appreciated. Telegraf also does this thing where it stops after I so much as touch Grafana or Influx.

 

Here is my setup:

Telegraf is 1.20.2

Influx is 1.8.4 (I don't get any web UI)

 

telegraf Post Arguments:

/bin/sh -c 'apt update && apt install -y smartmontools && apt install -y lm-sensors && apt install -y nvme-cli && apt install -y ipmitool && telegraf' -u 0

 

telegraf config

image.png.36f6a85a06ccb162266282c44d6822dd.png

 

My data source looks like this:

image.thumb.png.07c37b130f23191d99ee6e4673ec22b1.png

 

I'm just not really sure where to go from here. I have tried so many things to see if it will work, and Googled this a lot, but can't seem to find a guide that solves the problem.
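In case it helps with diagnosis, one thing I can still try is checking whether InfluxDB is reachable over the network at all, e.g. via its /ping endpoint (the IP is a placeholder; 8086 is the default InfluxDB port, and it should answer 204 No Content when healthy):

curl -i http://192.168.1.X:8086/ping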

 

Thanks for any help

Link to comment

Hello, with the recent update of Unraid to 6.12.9 I notice some strange behaviour with HDDTemp. I use Grafana, Influx, and Telegraf to monitor disk temps. Before the update HDDTemp always reported 0°C for spun-down disks, which was pretty nice because it made it easy to see which disks are up and which are not (see screenshot).

However, with the latest update HDDTemp always reports a temp even for spun-down disks. First I thought it was a problem with my Grafana setup, but I opened a console on the HDDTemp container and tested `hddtemp /dev/sdc`. It clearly reports a temp greater than 0°C although the disk is spun down.

Is this expected behaviour now? Could something be wrong on my end? I would be happy if someone could help, because the temps of spun-down disks are pretty irrelevant to me.
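For comparison, the power state can also be checked from the Unraid console itself; as far as I know hdparm queries it without waking the drive and should report "standby" for a spun-down disk:

hdparm -C /dev/sdc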

dsik_temps.jpg

Link to comment
  • 3 weeks later...

I keep finding Plex stopped; this is the last output of the log.

2024-04-19 07:57:05,372 CRIT uncaptured python exception, closing channel <POutputDispatcher at 22566202968144 for <Subprocess at 22566203126160 with name plexmediaserver in state RUNNING> (stderr)> (<class 'OSError'>:[Errno 28] No space left on device: '/config/supervisord.log' [/usr/lib/python3.11/site-packages/supervisor/supervisord.py|runforever|218] [/usr/lib/python3.11/site-packages/supervisor/dispatchers.py|handle_read_event|276] [/usr/lib/python3.11/site-packages/supervisor/dispatchers.py|record_output|210] [/usr/lib/python3.11/site-packages/supervisor/dispatchers.py|_log|189] [/usr/lib/python3.11/site-packages/supervisor/loggers.py|log|345] [/usr/lib/python3.11/site-packages/supervisor/loggers.py|emit|227] [/usr/lib/python3.11/site-packages/supervisor/loggers.py|doRollover|276])

 

I tried to add this to Extra Parameters but the container fails:
--restart-unless-stopped

 

EDIT: Weird, now it won't even start.

2024-04-19 10:05:34.759878 [info] Host is running unRAID
2024-04-19 10:05:34.771724 [info] System information Linux Unraid 6.1.74-Unraid #1 SMP PREEMPT_DYNAMIC Fri Feb  2 11:06:32 PST 2024 x86_64 GNU/Linux
2024-04-19 10:05:34.785155 [info] PUID defined as '99'
2024-04-19 10:05:34.800228 [info] PGID defined as '100'
2024-04-19 10:05:34.816307 [info] UMASK defined as '000'
2024-04-19 10:05:34.829559 [info] Permissions already set for '/config'
2024-04-19 10:05:34.845927 [info] Deleting files in /tmp (non recursive)...
2024-04-19 10:05:34.862603 [warn] TRANS_DIR not defined,(via -e TRANS_DIR), defaulting to '/config/tmp'
chmod: cannot access '/config/supervisord.log': No such file or directory

 


Any ideas? Thanks folks.

Edited by lightsout
Link to comment
32 minutes ago, lightsout said:

I keep finding Plex stopped; this is the last output of the log.

2024-04-19 07:57:05,372 CRIT uncaptured python exception, closing channel <POutputDispatcher at 22566202968144 for <Subprocess at 22566203126160 with name plexmediaserver in state RUNNING> (stderr)> (<class 'OSError'>:[Errno 28] No space left on device: '/config/supervisord.log' [/usr/lib/python3.11/site-packages/supervisor/supervisord.py|runforever|218] [/usr/lib/python3.11/site-packages/supervisor/dispatchers.py|handle_read_event|276] [/usr/lib/python3.11/site-packages/supervisor/dispatchers.py|record_output|210] [/usr/lib/python3.11/site-packages/supervisor/dispatchers.py|_log|189] [/usr/lib/python3.11/site-packages/supervisor/loggers.py|log|345] [/usr/lib/python3.11/site-packages/supervisor/loggers.py|emit|227] [/usr/lib/python3.11/site-packages/supervisor/loggers.py|doRollover|276])

 

I tried to add this to Extra Parameters but the container fails:
--restart-unless-stopped

 

EDIT: Weird, now it won't even start.

2024-04-19 10:05:34.759878 [info] Host is running unRAID
2024-04-19 10:05:34.771724 [info] System information Linux Unraid 6.1.74-Unraid #1 SMP PREEMPT_DYNAMIC Fri Feb  2 11:06:32 PST 2024 x86_64 GNU/Linux
2024-04-19 10:05:34.785155 [info] PUID defined as '99'
2024-04-19 10:05:34.800228 [info] PGID defined as '100'
2024-04-19 10:05:34.816307 [info] UMASK defined as '000'
2024-04-19 10:05:34.829559 [info] Permissions already set for '/config'
2024-04-19 10:05:34.845927 [info] Deleting files in /tmp (non recursive)...
2024-04-19 10:05:34.862603 [warn] TRANS_DIR not defined,(via -e TRANS_DIR), defaulting to '/config/tmp'
chmod: cannot access '/config/supervisord.log': No such file or directory

 


Any ideas? Thanks folks.

 

The "No space left on device" look to point to either a bad mount, or a disk or vdisk or image is full with no free space for the log file and that is causing errors.

 

Have you checked where stuff is mounting to, how full your Docker image is, and the permissions of the appdata folder?
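A couple of quick checks from the Unraid console might narrow it down (the cache path is just an example, point it at wherever your appdata actually lives):

df -h /mnt/cache       # free space on the pool that holds appdata
docker system df       # space used by Docker images, containers and volumes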

Link to comment
26 minutes ago, axipher said:

 

The "No space left on device" look to point to either a bad mount, or a disk or vdisk or image is full with no free space for the log file and that is causing errors.

 

Have you checked where stuff is mounting too, your docker image, and permissions of the appdata folder?

Thanks, I just came here to post this: music analysis fills up the tmp folder and never clears it. It was 1.7 TB on my 2 TB NVMe. I need to figure out how to avoid this. Not sure why this is posted in this thread; I was certain I was in the binhex/plex thread.

Link to comment
