[Support] Linuxserver.io - Scrutiny


54 posts in this topic


[BUG] So it seems like the Scrutiny API endpoint port is not updating with the docker container setting 'SCRUTINY_API_ENDPOINT'. When I entered the command manually it worked:

/usr/local/bin/scrutiny-collector-metrics run --api-endpoint http://172.17.0.23:8023

Whatever I put in 'SCRUTINY_API_ENDPOINT', nothing happened when I ran the command. I think this is a bug.

I think the people that are using this right now and got it working did not change the port on the API endpoint.

Edited by cagemaster

1 hour ago, cagemaster said:

[BUG] So it seems like the Scrutiny API endpoint port is not updating with the docker container setting 'SCRUTINY_API_ENDPOINT'. When I entered the command manually it worked:


/usr/local/bin/scrutiny-collector-metrics run --api-endpoint http://172.17.0.23:8023

Whatever I put in 'SCRUTINY_API_ENDPOINT', nothing happened when I ran the command. I think this is a bug.

I think the people that are using this right now and got it working did not change the port on the API endpoint.

Why are you changing the variable? It refers to inside the container, and mapping the port to something else on the host side does not affect that setting.



The container can be run as a collector, a webgui, or both. For most people the default settings, running both collector and webgui, are fine.

Just now, saarg said:

Why are you changing the variable? It refers to inside the container, and mapping the port to something else on the host side does not affect that setting.



The container can be run as a collector, a webgui, or both. For most people the default settings, running both collector and webgui, are fine.

I was using the default values at first, except for the port mapping in the container settings, since I have another container running on that port. Even then, when I had changed almost nothing, the command did not work. Changing the port setting inside the config file was only for testing whether that solved the problem for me, which it didn't.

5 hours ago, cagemaster said:

I'm also trying and cannot get this to work.

 

Container settings:

[screenshot: container settings]

 

 

 

Well, I can help you out (even though no one has responded to why mine is not working).

 

Apparently you can NOT change the API endpoint of the Scrutiny UI, which you have done. The port has to stay 8080.


Works for me

[screenshot: working container settings]

 

But set the API endpoint back to http://localhost:8080 and just change the port mapping. (That line, not quite sure why it's there, dictates where Scrutiny is listening. A docker container has no concept of port mapping and will always listen on its internal port, which is also always going to be on localhost.)
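To illustrate the point above, here's a minimal sketch of what that looks like on the command line. The host port 8023 and the stripped-down set of flags are illustrative only; your real run command will have more options (see the full docker run command posted later in this thread):

```shell
# Host port 8023 is mapped to the container's internal port 8080.
# Inside the container Scrutiny still listens on localhost:8080,
# so SCRUTINY_API_ENDPOINT stays at its default value.
docker run -d --name=scrutiny \
  -p 8023:8080 \
  -e SCRUTINY_API_ENDPOINT=http://localhost:8080 \
  linuxserver/scrutiny

# The web UI is then reached from the host at http://<host-ip>:8023,
# while everything inside the container keeps using port 8080.
```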


No one has any idea about this error still?

 

INFO[0000] Generating WWN                                type=metrics
INFO[0000] Sending detected devices to API, for filtering & validation  type=metrics
2020/10/01 12:54:07 ERROR: Post http://localhost:8080/api/devices/register: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

1 hour ago, CoZ said:

No one has any idea about this error still?

 

INFO[0000] Generating WWN                                type=metrics
INFO[0000] Sending detected devices to API, for filtering & validation  type=metrics
2020/10/01 12:54:07 ERROR: Post http://localhost:8080/api/devices/register: net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)

Maybe a dumb question, but did you type 'scrutiny' before the command?


Yup, I copied and pasted the term that shows up when you log in to the WebGUI the first time.

 

I can see it go down the list and detect all my HDDs, and then it fails at the end with that error message above. No idea what it means or why.

Edited by CoZ
Left out info
12 minutes ago, CoZ said:

Yup, I copied and pasted the term that shows up when you log in to the WebGUI the first time.

 

I can see it go down the list and detect all my HDDs, and then it fails at the end with that error message above. No idea what it means or why.

Did you leave the API endpoint at localhost:8080?


I have checked it here now, and running the container in privileged mode you shouldn't need to add any disks for device pass-through. But you have to run the

scrutiny-collector-metrics run

command if you want to see the data right after installation; otherwise you have to wait until midnight when the cron job runs.

 

The container is made to be either a web service, a metrics collector, or both. The default is both, and you do not change the API endpoint in that case. That is only needed if you use this container as a remote collector.

 

Edited by saarg

Hey,

This is AnalogJ, the developer of Scrutiny.

 

I've seen some confusion around the SCRUTINY_API_ENDPOINT. If you're using the "all-in-one" container, which runs both the API and the collector, you do not need to touch that variable. Set it to the default value (http://localhost:8080). Within the container, the API is listening on localhost:8080, even if you bind that port to something else on your host. It's primarily used by users who are deploying scrutiny in a hub/spoke model -- one API server & db with multiple collectors.
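The hub/spoke model described above can be sketched roughly as follows. This is an assumption-laden example, not an official recipe: the hub IP 192.168.1.10 is a placeholder, and the SCRUTINY_WEB/SCRUTINY_COLLECTOR values are borrowed from the docker run command posted later in this thread:

```shell
# Hub: one container running the web UI + API + database only
docker run -d --name=scrutiny-web \
  -p 8080:8080 \
  -e SCRUTINY_WEB=true \
  -e SCRUTINY_COLLECTOR=false \
  -v /path/to/config:/config \
  linuxserver/scrutiny

# Spoke: on each remote machine, a collector-only container that
# reports to the hub. This is where SCRUTINY_API_ENDPOINT matters.
docker run -d --name=scrutiny-collector \
  --privileged \
  -e SCRUTINY_WEB=false \
  -e SCRUTINY_COLLECTOR=true \
  -e SCRUTINY_API_ENDPOINT=http://192.168.1.10:8080 \
  -v /run/udev:/run/udev:ro \
  linuxserver/scrutiny
```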

 

Hopefully that fixes most of your issues. However, if any of you are still running into problems, feel free to open an issue on Github: https://github.com/AnalogJ/scrutiny

 

I can also be found on the Self-Hosted podcast discord & the LinuxServer.io discord: AnalogJ#3506

 

If you find Scrutiny useful, and you'd like to support my work, here's a link to my Github Sponsors page: https://github.com/sponsors/AnalogJ

 

Thanks for your interest & support!


Well, this is DAMN strange. I've changed nothing, but noticed an update to the docker container a few days ago. I updated the container and left it alone. I started it today to come back and try to figure out how to get past the error message I was getting... started the WebGUI and all my HDD data was there.

 

I hate and love when stuff like this happens lol

  • 3 weeks later...

I just replaced 2 drives and re-ran the scrutiny-collector-metrics run command, and it shows the new drives, BUT the old drive data still exists. Any way to remove those?

 

Edit: I also noticed it's not updating the powered-on area, which leads me to believe it's not properly keeping drive data updated.

Edited by SkinnySkelly
On 10/24/2020 at 7:56 AM, SkinnySkelly said:

I just replaced 2 drives and re-ran the scrutiny-collector-metrics run command, and it shows the new drives, BUT the old drive data still exists. Any way to remove those?



Edit: I also noticed it's not updating the powered-on area, which leads me to believe it's not properly keeping drive data updated.

 

Currently there's no way to delete old drives. It's on my to-do list.

It's not updating powered-on data for your new drives? By default the collector runs once a day, so after 24h you should see a change to that number. If it doesn't, can you open an issue on Github (with logs)?

https://github.com/AnalogJ/scrutiny/

 

 

  • 1 month later...

So this was working for me without issue previously (after not working for whatever reason). I just upgraded my cache drive from 240GB to a 1TB drive from a different manufacturer.

 

I tried to run the scrutiny command again in the docker's console and, once again, was greeted by this lovely message that no one seems to have an answer for:

 

INFO[0000] Sending detected devices to API, for filtering & validation  type=metrics
2020/12/09 22:07:35 ERROR: Post http://localhost:8080/api/devices/register: dial tcp 192.168.1.123:8080: connect: no route to host

 

  • 2 weeks later...
  • 1 month later...

How do I refresh it? It is showing me data from drives a day old. I have uninstalled it and reinstalled it; still old data.

I know it does it once a day, but when you have been changing drives, you want a refresh button, please.

Also it is seeing my WD SAS 3 TB and 4 TB drives as Seagate:
Parity 2: ST4000NM0023_Z1Z0C3DW - 4 TB (sdo), 32 C
Disk 1: ST33000650SS_Z293MTCQ - 3 TB (sdc), 25 C

 

 

Edited by viola2572
adding
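For what it's worth, until a refresh button exists, the collector can be triggered manually from the host instead of waiting for the daily cron run. This is a sketch: the container name 'Scrutiny' is taken from the docker run command posted later in this thread, so adjust it to whatever yours is called:

```shell
# Run the collector on demand inside the already-running container.
# This is the same command the daily cron job executes.
docker exec Scrutiny /usr/local/bin/scrutiny-collector-metrics run \
  --api-endpoint http://localhost:8080
```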
  • 4 weeks later...

Does the new version spam the unraid log with hundreds of lines?

 

Quote

Feb 20 19:36:00 Unraid crond[273]: user:root entry:*/15 * * * * run-parts /etc/periodic/15min
Feb 20 19:36:00 Unraid crond[273]: user:root entry:0 * * * * run-parts /etc/periodic/hourly
Feb 20 19:36:00 Unraid crond[273]: user:root entry:0 2 * * * run-parts /etc/periodic/daily
Feb 20 19:36:00 Unraid crond[273]: user:root entry:0 3 * * 6 run-parts /etc/periodic/weekly
Feb 20 19:36:00 Unraid crond[273]: user:root entry:0 5 1 * * run-parts /etc/periodic/monthly
Feb 20 19:36:00 Unraid crond[273]: user:root entry:0 0 * * * /usr/local/bin/scrutiny-collector-metrics run --api-endpoint http://localhost:8080 >> /config/log/scrutiny-collector-metrics.log 2>&1
Feb 20 19:36:00 Unraid crond[273]: wakeup dt=56
Feb 20 19:36:00 Unraid crond[273]: file root:
Feb 20 19:36:00 Unraid crond[273]: line run-parts /etc/periodic/15min
Feb 20 19:36:00 Unraid crond[273]: line run-parts /etc/periodic/hourly
Feb 20 19:36:00 Unraid crond[273]: line run-parts /etc/periodic/daily
Feb 20 19:36:00 Unraid crond[273]: line run-parts /etc/periodic/weekly
Feb 20 19:36:00 Unraid crond[273]: line run-parts /etc/periodic/monthly
Feb 20 19:36:00 Unraid crond[273]: line /usr/local/bin/scrutiny-collector-metrics run --api-endpoint http://localhost:8080 >> /config/log/scrutiny-collector-metrics.log 2>&1

 

Edited by L0rdRaiden
On 2/20/2021 at 11:50 PM, saarg said:

That should be in the unraid log. Not sure what is happening.

Could you post your docker run command?

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='Scrutiny' --net='host' --cpuset-cpus='3,7' --privileged=true -e TZ="Europe/Paris" -e HOST_OS="Unraid" -e 'TCP_PORT_8080'='8080' -e 'PUID'='1000' -e 'PGID'='1000' -e 'SCRUTINY_API_ENDPOINT'='http://localhost:8080' -e 'SCRUTINY_WEB'='true' -e 'SCRUTINY_COLLECTOR'='true' -v '/mnt/user/Docker/Scrutiny':'/config':'rw' -v '/run/udev':'/run/udev':'ro' -v '/dev':'/dev':'ro' --dns=10.10.50.5 --cap-add=SYS_ADMIN --cap-add=SYS_RAWIO 'linuxserver/scrutiny'

02278bda8e2da04c238c983c5e85df76703f16eaaacb45dd974f6451b5ba0fe3

3 hours ago, L0rdRaiden said:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='Scrutiny' --net='host' --cpuset-cpus='3,7' --privileged=true -e TZ="Europe/Paris" -e HOST_OS="Unraid" -e 'TCP_PORT_8080'='8080' -e 'PUID'='1000' -e 'PGID'='1000' -e 'SCRUTINY_API_ENDPOINT'='http://localhost:8080' -e 'SCRUTINY_WEB'='true' -e 'SCRUTINY_COLLECTOR'='true' -v '/mnt/user/Docker/Scrutiny':'/config':'rw' -v '/run/udev':'/run/udev':'ro' -v '/dev':'/dev':'ro' --dns=10.10.50.5 --cap-add=SYS_ADMIN --cap-add=SYS_RAWIO 'linuxserver/scrutiny'

02278bda8e2da04c238c983c5e85df76703f16eaaacb45dd974f6451b5ba0fe3

I don't see anything wrong there. Does it stop if you stop the container?

