ABEIDO

Posts posted by ABEIDO

  1. On 12/19/2023 at 7:59 PM, Rysz said:

     

    Hello!

     

    This is caused by the pfSense side and is the fallout of the following bug:

    https://github.com/networkupstools/nut/issues/2104

     

You cannot fix this from the UNRAID side; you need to update NUT on your pfSense to the latest version. The bug is already fixed in NUT 2.8.1 (or 2.8.2 on pfSense), so please check pfSense's NUT version again, because from the logs it looks like a version before 2.8.1! After updating to the latest version on the pfSense, you can then test it by running a manual battery self-test on the UPS and checking if it still shuts down the pfSense (it shouldn't anymore). On UNRAID itself no changes are needed as it's already fixed there. 🙂

     

Sorry for bringing back an old post.

     

I get this in the Unraid log:

     

    Mar 25 11:22:19 UNRAID usbhid-ups[6922]: ups_status_set: seems that UPS [qnapups] is in OL+DISCHRG state now. Is it calibrating (perhaps you want to set 'onlinedischarge_calibration' option)? Note that some UPS models (e.g. CyberPower UT series) emit OL+DISCHRG when in fact offline/on-battery (perhaps you want to set 'onlinedischarge' option).

     

But everything works as it should and nothing is being shut down when my APC UPS runs its biweekly self-test; it just logs the message above. The previous poster had problems with pfSense shutting down, which I don't have.

     

And according to the documentation, APC models are affected:

    Quote

     

    onlinedischarge_calibration

    If this flag is set, the driver will treat OL+DISCHRG status as calibration. Some UPS models (e.g. APC were seen to do so) report OL+DISCHRG when they are in calibration mode. This usually happens after a few seconds reporting an OFF state as well, while the hardware is switching to on-battery mode.

     

     

Can I add a flag/line somewhere to fix this, or is it not a "real" problem since it's only shown in the logs?
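
    For reference, I imagine the flag would end up in the driver section of ups.conf, something like this (just a sketch: the [qnapups] name and the usbhid-ups driver are taken from my logs above, port = auto is a guess, and I'm not sure where the Unraid NUT plugin actually exposes this):

        [qnapups]
            driver = usbhid-ups
            port = auto
            # treat OL+DISCHRG during the self-test as calibration instead of a discharge
            onlinedischarge_calibration

    If I read the documentation right, this should mostly just quiet that log message, since nothing is actually shutting down here anyway.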

  2. BoKKeR, is there any possible fix for getting rid of the nchan error that has been spamming the Unraid log like crazy since the last update?

     

    I also have a question: is there a possible fix for needing to restart the container whenever HA is restarted? This was working before the last update.

     

    To add to the above poster, I also have issues with the attributes on the status sensor.

     

    Example of nchan error:
    Jan 29 05:28:42 UNRAID monitor: Stop running nchan processes
    Jan 29 05:29:15 UNRAID monitor: Stop running nchan processes
    Jan 29 05:29:48 UNRAID monitor: Stop running nchan processes
    Jan 29 05:30:21 UNRAID monitor: Stop running nchan processes
    Jan 29 05:30:54 UNRAID monitor: Stop running nchan processes
    Jan 29 05:31:28 UNRAID monitor: Stop running nchan processes
    Jan 29 05:32:01 UNRAID monitor: Stop running nchan processes
    Jan 29 05:32:34 UNRAID monitor: Stop running nchan processes
    Jan 29 05:33:07 UNRAID monitor: Stop running nchan processes
    Jan 29 05:33:40 UNRAID monitor: Stop running nchan processes
    Jan 29 05:34:13 UNRAID monitor: Stop running nchan processes
    Jan 29 05:34:46 UNRAID monitor: Stop running nchan processes

  3. 20 hours ago, BoKKeR said:

     

    Did you update to the naming fork? 

     

     

    Yes, bokker/unraidapi-re:6.12-naming, but to be honest I'm not sure where to look for any errors. I remember there were supposed to be errors in the debug log regarding naming; I can't find any there now. So it seems to be working fine.

     

    An example from the core.entity_registry file looks OK:

            "original_name": "docker_bazarr_restart",
            "unique_id": "unraid_bazarr_restart",

     

     

    That said, I always rename my entities according to my own standard, which is why I need to look in the core.entity_registry file.

     

    Example:

    [screenshot]

     

     

  4. Updated and couldn't find any issues with the naming.

     

    I keep it kind of simple and set my entity icons to:

     

    docker on/off = mdi:power 

    docker restart = mdi:restart

     

    mover: mdi:folder-arrow-up-down-outline

    array on/off: mdi:power 

    parity check: mdi:sync-alert

    reboot: mdi:restart

    shutdown: mdi:power 

    status: mdi:checkbox-marked-outline

  5. 40 minutes ago, Rysz said:

     

    So I've been able to reproduce the problem on my test server at last, thanks for your patience and testing. The culprit is that the UPS seems to send an "OFF" event during the self-testing, which basically says "Hey, I'm offline and no longer providing power to your server", and NUT acts on that and starts a shutdown sequence because it requires at least one functional, online UPS.

     

    This is the line where this happens:

    Oct  8 17:30:11 UNRAID upsmon[26302]: UPS [email protected]: administratively OFF or asleep

     

    What's even stranger is the UPS seems to be "OL" (Online) and "OFF" (Offline) at the same time, these two events shouldn't be able to exist at the same time. I'm guessing this is an APC driver issue with NUT, so I'll have to run this up to the NUT backend developers, the UNRAID plugin is basically just a frontend for the NUT backend (which is developed for more systems than UNRAID). What is curious is that I've found no report from other APC users where this happens, this makes me curious if that is something that is just happening on your UPS or UPS series.

     

    What you can try in the meantime:
    Change the NUT backend to "legacy (2.7.4)" in NUT Settings

    Change the NUT backend to "stable (2.8.0)" in NUT Settings

    And please report back if the problem also happens on the different backends.

     

    If it doesn't work with the other backends, I might have one more idea you could try.

    It's not an ideal one, so I'll keep this as a last "solution" in case all else fails for now.

     

    Please also let me know the exact UPS vendor and model that you have!

     

     

    Tested everything now:

     

    Change the NUT backend to "legacy (2.7.4)" in NUT Settings Kill UPS Power off
    Worked perfect

     

    Change the NUT backend to "stable (2.8.0)" in NUT Settings, Kill UPS Power off
    Worked perfect

     

    Change the NUT backend to "stable (2.8.0)" in NUT Settings, Kill UPS Power on
    Worked perfect

     

    So it seems to have been something with the backend then. And I can run on stable, I guess.

     

    I REALLY want to thank you for the time you put into this. And for all the good info.


    UPS Model

    Back-UPS RS 900MI
    https://www.apc.com/se/sv/product/BR900MI/apc-backups-pro-900va-230v-avr-lcd-6-iecuttag/

     

    Not to be confused with the one below (mine has IEC outlets, the one below has Schuko outlets). There are other differences as well.
    https://www.apc.com/se/sv/product/BR900G-GR/powersaving-backups-pro-900-230-v-schuko/?%3Frange=61888-backups-pro&parent-subcategory-id=88975&selected-node-id=27590292604

     


     

  6. 1 hour ago, Rysz said:

     

    OK, this is super useful information, so thanks for that so far. UPS inverter clicking during self-tests is normal, so that in combination with the other information provided about the UPS hardware's state makes me think more in the direction of a configuration or software issue now.

     

    One major problem I've identified with the configuration is that you're using the same usernames for "NUT Monitor Username" and "NUT Slave Username", but those can never be the same because they come with a different set of permissions each. "NUT Monitor Username" is the one the NUT master will use, "NUT Slave Username" is the one your NUT clients will use. Please change your "NUT Monitor Username" to a different username, you won't need to change any settings with the clients. This will make sure the NUT services can distinguish between who is NUT master and who is NUT slave (client), that's really important for functionality.

     

    I've read about some UPS only reporting a limited set of variables to NUT during self-tests, so it's possible that during these self-tests the "Runtime Left" variable becomes either unavailable or zero for a few (milli-)seconds while the inverter switches between line power and battery power for testing (that's the clicking you hear).

     

    This could (in theory) lead to NUT seeing the UPS on battery power during the testing and at the same time the "Runtime Left" variable unavailable and/or below your configured 15 minutes - resulting in the false shutdown sequence with NUT thinking that it's a power loss scenario.

     

    So I'd suggest doing this:

    • Change "NUT Monitor Username" to something else.
    • Change "Shutdown Mode" to "Time On Battery" (for test purposes)
    • Set "Time on Battery before Shutdown (minutes):" to 5-6 minutes (for test purposes)
    • Set "Kill UPS Power" to "No"

     

    If you can, trigger a manual self-test of the UPS and see if the problem still occurs.

     

    You can also choose "Battery Charge" as "Shutdown Mode", it's still better than "Runtime Left", but "Time On Battery" is mostly independent from UPS-reported variables so it's best for testing now.

     

    In general I'd always advise using either "Battery Charge" or "Time on Battery", rather than "Runtime Left", because "Runtime Left" is the least predictable and trustworthy of the different "Shutdown Mode"s.

     

    The reason is reported UPS runtime can fluctuate a lot depending on UPS and UPS battery and many UPS vendors do not have clean implementations of this variable.

     

     

    I didn't know that about the slave and master users, good info. I have changed that now.

     

    The manual test acts the same as the scheduled one in terms of symptoms, which I didn't think was the case, so I now have a way of testing solutions. Btw, I tested without the QNAP running, as I didn't want to mess up the NAS. And the problem is the same even though the load is now only 34 W with the UPS output off.
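
    Since the manual self-test reproduces it, I can watch what the UPS actually reports while the test runs. Something like this from the Unraid console should do it (assuming NUT's upsc client is available there and the UPS is still named qnapups):

        # poll the status and the remaining runtime once a second during the self-test
        watch -n 1 'upsc qnapups@localhost ups.status; upsc qnapups@localhost battery.runtime'

    That should show whether ups.status flips to OFF (or OB LB) or battery.runtime drops out while the inverter is clicking.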

     

    Test 1

    - Did a test before, but with only Kill UPS Power changed to No.

    Only Unraid shuts down (cleanly though, as it always has), but it does so immediately. So it's not staying on until the configured rule is reached.

     

    Test 2

    - Tested with all the suggested changes. Unraid shuts down (cleanly though, as it always has), not staying on until the configured rule. So same as before.

     

    Test 3

    - Tested with all the suggested changes but with Shutdown Mode set to Battery Level. Unraid shuts down (cleanly though, as it always has), not staying on until the configured rule. So same as before.

     

    I saw the following just before it turned off, both times.

    1

    [screenshot]

     

    2

    [screenshot]

     

    In Home Assistant (just as extra info): the yellow status name is the same status as in screenshot 2 above; the black part is when Unraid is off.

    [screenshot]

     

     

    So still the same issue, but without the power loss for the connected devices. When I think about it, I couldn't have had this problem before on the old repo; I would remember my NAS/Unraid/server going down randomly every 2 weeks, since I can't disable the auto self-test (as far as I have found).

     

    The only other UI options I haven't touched are:

    [screenshot]

    and

    [screenshot]

     

    Are those of any importance in regard to the problem?

     

     

    Some log output from a test:

     

    Quote

    Oct  8 17:17:27 UNRAID ool www[15552]: /usr/local/emhttp/plugins/nut/scripts/stop
    Oct  8 17:17:28 UNRAID root: Writing NUT configuration...
    Oct  8 17:17:29 UNRAID flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
    Oct  8 17:17:30 UNRAID root: Updating permissions for NUT...
    Oct  8 17:17:30 UNRAID root: Checking if the NUT Runtime Statistics Module should be enabled...
    Oct  8 17:17:30 UNRAID root: Disabling the NUT Runtime Statistics Module...
    Oct  8 17:17:31 UNRAID root: Stopping the NUT services... 
    Oct  8 17:17:31 UNRAID upsd[8243]: mainloop: Interrupted system call
    Oct  8 17:17:31 UNRAID upsd[8243]: Signal 15: exiting
    Oct  8 17:17:31 UNRAID root: Network UPS Tools upsd 2.8.0.1
    Oct  8 17:17:31 UNRAID root: Network UPS Tools upsmon 2.8.0.1
    Oct  8 17:17:31 UNRAID upsmon[8247]: Signal 15: exiting
    Oct  8 17:17:31 UNRAID upsmon[8246]: upsmon parent: read
    Oct  8 17:17:31 UNRAID usbhid-ups[8212]: WARNING: send_to_all: write 34 bytes to socket 16 failed (ret=-1), disconnecting: Broken pipe
    Oct  8 17:17:33 UNRAID usbhid-ups[8212]: Signal 15: exiting
    Oct  8 17:17:34 UNRAID root: Network UPS Tools - UPS driver controller 2.8.0.1
    Oct  8 17:17:43 UNRAID ool www[15551]: /usr/local/emhttp/plugins/nut/scripts/stop
    Oct  8 17:17:44 UNRAID root: Writing NUT configuration...
    Oct  8 17:17:46 UNRAID root: Updating permissions for NUT...
    Oct  8 17:17:46 UNRAID root: Checking if the NUT Runtime Statistics Module should be enabled...
    Oct  8 17:17:46 UNRAID root: Disabling the NUT Runtime Statistics Module...
    Oct  8 17:17:47 UNRAID root: Stopping the NUT services... 
    Oct  8 17:17:49 UNRAID root: Network UPS Tools - UPS driver controller 2.8.0.1
    Oct  8 17:18:06 UNRAID ool www[17328]: /usr/local/emhttp/plugins/nut/scripts/stop
    Oct  8 17:18:07 UNRAID root: Writing NUT configuration...
    Oct  8 17:18:08 UNRAID root: Updating permissions for NUT...
    Oct  8 17:18:08 UNRAID root: Checking if the NUT Runtime Statistics Module should be enabled...
    Oct  8 17:18:08 UNRAID root: Disabling the NUT Runtime Statistics Module...
    Oct  8 17:18:09 UNRAID root: Stopping the NUT services... 
    Oct  8 17:18:11 UNRAID root: Network UPS Tools - UPS driver controller 2.8.0.1
    Oct  8 17:18:23 UNRAID ool www[17320]: /usr/local/emhttp/plugins/nut/scripts/start
    Oct  8 17:18:24 UNRAID root: Writing NUT configuration...
    Oct  8 17:18:26 UNRAID root: Updating permissions for NUT...
    Oct  8 17:18:26 UNRAID root: Checking if the NUT Runtime Statistics Module should be enabled...
    Oct  8 17:18:26 UNRAID root: Disabling the NUT Runtime Statistics Module...
    Oct  8 17:18:27 UNRAID root: Using subdriver: APC HID 0.100
    Oct  8 17:18:27 UNRAID root: Network UPS Tools - Generic HID driver 0.52 (2.8.0.1)
    Oct  8 17:18:27 UNRAID root: USB communication driver (libusb 1.0) 0.46
    Oct  8 17:18:27 UNRAID usbhid-ups[19923]: Startup successful
    Oct  8 17:18:27 UNRAID usbhid-ups[19923]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
    Oct  8 17:18:27 UNRAID root: Network UPS Tools - UPS driver controller 2.8.0.1
    Oct  8 17:18:28 UNRAID upsd[20052]: listening on 0.0.0.0 port 3493
    Oct  8 17:18:28 UNRAID upsd[20052]: Connected to UPS [qnapups]: usbhid-ups-qnapups
    Oct  8 17:18:28 UNRAID upsd[20052]: Found 1 UPS defined in ups.conf
    Oct  8 17:18:28 UNRAID usbhid-ups[19923]: sock_connect: enabling asynchronous mode (auto)
    Oct  8 17:18:28 UNRAID upsd[20053]: Startup successful
    Oct  8 17:18:28 UNRAID upsd[20053]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
    Oct  8 17:18:28 UNRAID upsmon[20056]: Startup successful
    Oct  8 17:18:28 UNRAID upsmon[20057]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
    Oct  8 17:18:28 UNRAID upsd[20053]: User [email protected] logged into UPS [qnapups]
    Oct  8 17:18:29 UNRAID flash_backup: adding task: /usr/local/emhttp/plugins/dynamix.my.servers/scripts/UpdateFlashBackup update
    Oct  8 17:19:49 UNRAID ool www[23606]: /usr/local/emhttp/plugins/nut/scripts/stop
    Oct  8 17:19:50 UNRAID root: Writing NUT configuration...
    Oct  8 17:19:52 UNRAID root: Updating permissions for NUT...
    Oct  8 17:19:52 UNRAID root: Checking if the NUT Runtime Statistics Module should be enabled...
    Oct  8 17:19:52 UNRAID root: Disabling the NUT Runtime Statistics Module...
    Oct  8 17:19:53 UNRAID root: Stopping the NUT services... 
    Oct  8 17:19:53 UNRAID upsd[20053]: mainloop: Interrupted system call
    Oct  8 17:19:53 UNRAID upsd[20053]: Signal 15: exiting
    Oct  8 17:19:53 UNRAID root: Network UPS Tools upsd 2.8.0.1
    Oct  8 17:19:53 UNRAID upsmon[20057]: Signal 15: exiting
    Oct  8 17:19:53 UNRAID root: Network UPS Tools upsmon 2.8.0.1
    Oct  8 17:19:53 UNRAID upsmon[20056]: upsmon parent: read
    Oct  8 17:19:53 UNRAID usbhid-ups[19923]: WARNING: send_to_all: write 34 bytes to socket 16 failed (ret=-1), disconnecting: Broken pipe
    Oct  8 17:19:55 UNRAID usbhid-ups[19923]: Signal 15: exiting
    Oct  8 17:19:56 UNRAID root: Network UPS Tools - UPS driver controller 2.8.0.1
    Oct  8 17:20:11 UNRAID ool www[23608]: /usr/local/emhttp/plugins/nut/scripts/start
    Oct  8 17:20:12 UNRAID root: Writing NUT configuration...
    Oct  8 17:20:14 UNRAID root: Updating permissions for NUT...
    Oct  8 17:20:14 UNRAID root: Checking if the NUT Runtime Statistics Module should be enabled...
    Oct  8 17:20:14 UNRAID root: Disabling the NUT Runtime Statistics Module...
    Oct  8 17:20:15 UNRAID root: Using subdriver: APC HID 0.100
    Oct  8 17:20:15 UNRAID root: Network UPS Tools - Generic HID driver 0.52 (2.8.0.1)
    Oct  8 17:20:15 UNRAID root: USB communication driver (libusb 1.0) 0.46
    Oct  8 17:20:15 UNRAID usbhid-ups[26260]: Startup successful
    Oct  8 17:20:15 UNRAID usbhid-ups[26260]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
    Oct  8 17:20:15 UNRAID root: Network UPS Tools - UPS driver controller 2.8.0.1
    Oct  8 17:20:16 UNRAID upsd[26294]: listening on 0.0.0.0 port 3493
    Oct  8 17:20:16 UNRAID upsd[26294]: Connected to UPS [qnapups]: usbhid-ups-qnapups
    Oct  8 17:20:16 UNRAID usbhid-ups[26260]: sock_connect: enabling asynchronous mode (auto)
    Oct  8 17:20:16 UNRAID upsd[26294]: Found 1 UPS defined in ups.conf
    Oct  8 17:20:16 UNRAID upsd[26295]: Startup successful
    Oct  8 17:20:16 UNRAID upsd[26295]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
    Oct  8 17:20:16 UNRAID upsmon[26301]: Startup successful
    Oct  8 17:20:16 UNRAID upsmon[26302]: upsnotify: failed to notify about state 2: no notification tech defined, will not spam more about it
    Oct  8 17:20:16 UNRAID upsd[26295]: User [email protected] logged into UPS [qnapups]
    Oct  8 17:21:36 UNRAID kernel: TCP: request_sock_TCP: Possible SYN flooding on port 8181. Sending cookies.  Check SNMP counters.
    Oct  8 17:30:11 UNRAID upsmon[26302]: UPS [email protected]: administratively OFF or asleep
    Oct  8 17:30:11 UNRAID upsd[26295]: Client [email protected] set FSD on UPS [qnapups]
    Oct  8 17:30:11 UNRAID upsmon[26302]: Executing automatic power-fail shutdown
    Oct  8 17:30:11 UNRAID upsmon[26302]: Auto logout and shutdown proceeding
    Oct  8 17:30:16 UNRAID shutdown[27242]: shutting down for system halt
    Oct  8 17:30:16 UNRAID init: Switching to runlevel: 0
    Oct  8 17:30:16 UNRAID flash_backup: stop watching for file changes
    Oct  8 17:30:16 UNRAID init: Trying to re-exec init

     

  7. 1 hour ago, Rysz said:

     

    OK this is quite an advanced setup you've got there, so let's see.

     

    Definitely before this happens, some kind of power problem occurs (due to mains power loss or mains power condition like voltage/frequency spike as seen in brownouts) OR your UPS is reporting such a state to NUT falsely (due to defective UPS, driver problem ...). So NUT is receiving information from your UPS that makes it consider your UPS to be in a critical state, triggering an immediate shutdown despite any of your configured NUT settings (e.g. "Runtime Left" as "Shutdown Mode").

     

    Interesting is, your UPS is only reporting a UPS "OFF" event to NUT here:

    Oct  8 02:35:51 UNRAID upsmon[27897]: UPS [email protected]: administratively OFF or asleep

     

    But are there any more log lines before this one indicating that your UPS is going on battery power at some point? Please post any NUT log lines before this one, if there are any. Because this line on its own makes no sense to trigger such a critical situation - your UPS would need to be on battery for a shutdown sequence as below to occur.

     

    Before proceeding to initiate a shutdown sequence (FSD) thinking your UPS is critical:

    Oct  8 02:35:51 UNRAID upsd[27893]: Client [email protected] set FSD on UPS [qnapups]


    We need to find out why NUT thinks your UPS is critical, this usually only happens when your UPS is both on battery and the UPS battery is almost depleted at the same time. But there's no indication from your logs (just from the inverter clicking sounds) that your UPS is in fact on battery, that's what I don't understand. Plus, for your UPS to reach such a low battery (critical) state faster than your UPS master & clients can shutdown would mean either the UPS battery is severely overloaded (too much power draw) or on the verge of dying (due to old age or defects). Both should be reported by your UPS somehow, so this wouldn't happen out of the blue (unless something was defective).

     

    In such a shutdown sequence NUT (by default) waits max. 15 seconds for your clients to shutdown. If your clients don't shutdown within those 15 seconds, it proceeds to continue to shutdown the UNRAID server. Normally this wouldn't have negative effects on your clients shutdown sequences, but you had "Kill UPS Power" set to "Yes". So after UNRAID shutdown is completed, it kills the power to all your clients even if they're not done with their own shutdown sequences yet - this is what caused your unclean shutdowns there.

     

    With "Kill UPS Power" set to "Yes" one needs to make sure that the clients start their shutdown sequences earlier than the master and also have sufficient time to shutdown before the master starts shutting down and cuts the power. This is, of course, not possible when your UPS suddenly goes critical for reasons unknown - so we'll need to set this to "No" for now.

     

    So where to go from all this information?

    It's not from updating the plugin, definitely, but rather something happening with your UPS. You hear the inverter clicking, so the UPS hardware is doing something at that stage (which is not NUT-caused).

     

    First of all, keep "Kill UPS Power" set to "No" until we figure out what is going on with your UPS. This should at least solve the unclean shutdowns for now, but you'll still see your devices shutting down (though gracefully) when your UPS is going into such a sudden critical state. We need to figure out what's happening with your UPS hardware.

     

     

    So if there's no power outage this could either be your UPS testing its battery, conditioning power (because of voltage/frequency spikes) or your UPS/UPS battery being defective. A battery test or power conditioning should not cause a NUT critical state, unless the battery is almost dead in the first place and does not even survive the short time on battery power in combination with the load connected. Did you see any lights flickering in your house when this was happening?

     

    First I would make sure your UPS is not somehow overloaded (too many watts for too little battery). If that's not the case, I'd attempt to switch out the UPS batteries, that being the cheapest solution. Especially if your UPS batteries are already older, changing them can be magic to solving very weird problems.

     

    If the problem persists even then I'd consider the UPS defective and have it checked by the vendor. If you're still in warranty for your UPS I'd definitely contact the vendor and get the UPS and UPS battery checked.

     

     

    Firstly, not many would take the time you took here, so a massive thanks for that :).

     

    I'm kind of torn on where the problem is, as explained below. But in short: the UPS is new, it tests OK, and the issue only happens during the biweekly auto self-test.

     

    Auto self-test:

    It seems that the problem only occurs during the auto self-test that runs every 2 weeks. I haven't been able to find any way to disable it. It also runs at startup, but there's no problem then with either blinking or power-offs (hard to tell, though, as the devices are off while the UPS is off as well).

     

    Blinking lights:

    When I got the UPS (first setup) I had some issues with a blinking monitor when the inverter kicked in; it was plugged into the same outlet area as the UPS. This went away when, I think, I changed Input Sensitivity to Low. No blinking during my own testing at all.

     

    Age/wear:

    My UPS is fairly new, less than a year old (still under warranty on both the batteries and the UPS), and it sits in a good location with normal temperatures. All self-tests report OK.

     

    Overload:

    Average usage is 80-100 W and it should be able to handle 540 W, so I haven't been close to that. My homelab is built around low power usage, so no fancy big machines (OptiPlex x2, RPi, router and a QNAP Intel Atom).

     

    Load history

    [screenshot]

     

    Test:

    I have done a full test before to make sure all devices and automations work fine, and they did; battery runtime was about 40-45 min before it was depleted. I did a quick test now: I unplugged power to the UPS with everything up and running as normal, and it worked as expected. Log below.

     

    QNAP power outage test
    Oct 8 15:11:04 NAS qlogd[9991]: event log: Users: System, Source IP: 127.0.0.1, Computer name: localhost, Content: Power loss detected on UPS. System would be shutdown after 10 minute(s).

    Oct 8 15:11:37 NAS qlogd[9991]: event log: Users: System, Source IP: 127.0.0.1, Computer name: localhost, Content: Power has returned to UPS. Canceling shutdown.

     

    UNRAID power outage test

    Oct 8 15:10:40 UNRAID upsmon[3260]: UPS [email protected] on battery

    Oct 8 15:11:35 UNRAID upsmon[3260]: UPS [email protected] on line power

     

     

    Full log:

    There's nothing close before that log; earlier there were only unrelated entries, imo. Regarding "Stop running nchan processes", I think it's related to other problems with a container, see the thread below. And that error is always getting logged, so I would have issues with NUT all the time if it were the cause. https://forums.unraid.net/topic/141974-support-fork-unraid-api-re/?do=findComment&comment=1314157

     

    Quote

    Oct  8 00:44:48 UNRAID webGUI: Successful login user root from XXXXX
    Oct  8 00:45:05 UNRAID monitor: Stop running nchan processes
    Oct  8 00:45:38 UNRAID monitor: Stop running nchan processes
    Oct  8 00:46:11 UNRAID monitor: Stop running nchan processes
    Oct  8 00:49:40 UNRAID kernel: br-58b51fbafe10: port 18(vethfe385d5) entered disabled state
    Oct  8 00:49:40 UNRAID kernel: veth3bdeac6: renamed from eth0
    Oct  8 00:49:40 UNRAID kernel: br-58b51fbafe10: port 18(vethfe385d5) entered disabled state
    Oct  8 00:49:40 UNRAID kernel: device vethfe385d5 left promiscuous mode
    Oct  8 00:49:40 UNRAID kernel: br-58b51fbafe10: port 18(vethfe385d5) entered disabled state
    Oct  8 00:58:14 UNRAID kernel: br-58b51fbafe10: port 18(veth52b59e3) entered blocking state
    Oct  8 00:58:14 UNRAID kernel: br-58b51fbafe10: port 18(veth52b59e3) entered disabled state
    Oct  8 00:58:14 UNRAID kernel: device veth52b59e3 entered promiscuous mode
    Oct  8 00:58:14 UNRAID kernel: eth0: renamed from vethe1b859f
    Oct  8 00:58:14 UNRAID kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth52b59e3: link becomes ready
    Oct  8 00:58:14 UNRAID kernel: br-58b51fbafe10: port 18(veth52b59e3) entered blocking state
    Oct  8 00:58:14 UNRAID kernel: br-58b51fbafe10: port 18(veth52b59e3) entered forwarding state
    Oct  8 00:58:17 UNRAID webGUI: Successful login user root from XXXXX
    Oct  8 01:00:18 UNRAID monitor: Stop running nchan processes
    Oct  8 01:00:51 UNRAID monitor: Stop running nchan processes
    Oct  8 01:01:24 UNRAID monitor: Stop running nchan processes
    Oct  8 01:01:57 UNRAID monitor: Stop running nchan processes
    Oct  8 01:02:30 UNRAID monitor: Stop running nchan processes
    Oct  8 01:03:03 UNRAID monitor: Stop running nchan processes
    Oct  8 01:03:41 UNRAID kernel: vethe1b859f: renamed from eth0
    Oct  8 01:03:41 UNRAID kernel: br-58b51fbafe10: port 18(veth52b59e3) entered disabled state
    Oct  8 01:03:41 UNRAID kernel: br-58b51fbafe10: port 18(veth52b59e3) entered disabled state
    Oct  8 01:03:41 UNRAID kernel: device veth52b59e3 left promiscuous mode
    Oct  8 01:03:41 UNRAID kernel: br-58b51fbafe10: port 18(veth52b59e3) entered disabled state
    Oct  8 01:06:03 UNRAID webGUI: Successful login user root from 10.20.30.41
    Oct  8 01:06:31 UNRAID kernel: br-58b51fbafe10: port 18(veth68a62f0) entered blocking state
    Oct  8 01:06:31 UNRAID kernel: br-58b51fbafe10: port 18(veth68a62f0) entered disabled state
    Oct  8 01:06:31 UNRAID kernel: device veth68a62f0 entered promiscuous mode
    Oct  8 01:06:32 UNRAID kernel: eth0: renamed from veth8e90f61
    Oct  8 01:06:32 UNRAID kernel: IPv6: ADDRCONF(NETDEV_CHANGE): veth68a62f0: link becomes ready
    Oct  8 01:06:32 UNRAID kernel: br-58b51fbafe10: port 18(veth68a62f0) entered blocking state
    Oct  8 01:06:32 UNRAID kernel: br-58b51fbafe10: port 18(veth68a62f0) entered forwarding state
    Oct  8 01:06:34 UNRAID webGUI: Successful login user root from XXXXX
    Oct  8 01:15:14 UNRAID ool www[20963]: Successful logout user root from XXXXX
    Oct  8 01:15:18 UNRAID webGUI: Successful login user root from XXXXX
    Oct  8 01:15:37 UNRAID monitor: Stop running nchan processes
    Oct  8 01:16:10 UNRAID monitor: Stop running nchan processes
    Oct  8 01:16:43 UNRAID monitor: Stop running nchan processes
    Oct  8 01:17:16 UNRAID monitor: Stop running nchan processes
    Oct  8 01:17:49 UNRAID monitor: Stop running nchan processes
    Oct  8 01:18:22 UNRAID monitor: Stop running nchan processes
    Oct  8 01:18:56 UNRAID monitor: Stop running nchan processes
    Oct  8 01:19:29 UNRAID monitor: Stop running nchan processes
    Oct  8 01:20:02 UNRAID monitor: Stop running nchan processes
    Oct  8 01:20:35 UNRAID monitor: Stop running nchan processes
    Oct  8 01:21:08 UNRAID monitor: Stop running nchan processes
    Oct  8 01:21:41 UNRAID monitor: Stop running nchan processes
    Oct  8 01:22:14 UNRAID monitor: Stop running nchan processes
    Oct  8 01:22:48 UNRAID monitor: Stop running nchan processes
    Oct  8 01:23:21 UNRAID monitor: Stop running nchan processes
    Oct  8 01:23:54 UNRAID monitor: Stop running nchan processes
    Oct  8 01:24:27 UNRAID monitor: Stop running nchan processes
    Oct  8 01:25:00 UNRAID monitor: Stop running nchan processes
    Oct  8 01:25:33 UNRAID monitor: Stop running nchan processes
    Oct  8 01:26:07 UNRAID monitor: Stop running nchan processes
    Oct  8 01:26:40 UNRAID monitor: Stop running nchan processes
    Oct  8 01:26:57 UNRAID webGUI: Successful login user root from XXXXX
    Oct  8 02:03:43 UNRAID monitor: Stop running nchan processes
    Oct  8 02:04:16 UNRAID monitor: Stop running nchan processes
    Oct  8 02:04:50 UNRAID monitor: Stop running nchan processes
    Oct  8 02:05:23 UNRAID monitor: Stop running nchan processes
    Oct  8 02:05:56 UNRAID monitor: Stop running nchan processes
    Oct  8 02:06:29 UNRAID monitor: Stop running nchan processes
    Oct  8 02:07:02 UNRAID monitor: Stop running nchan processes
    Oct  8 02:07:35 UNRAID monitor: Stop running nchan processes
    Oct  8 02:08:08 UNRAID monitor: Stop running nchan processes
    Oct  8 02:08:42 UNRAID monitor: Stop running nchan processes
    Oct  8 02:09:15 UNRAID monitor: Stop running nchan processes
    Oct  8 02:09:48 UNRAID monitor: Stop running nchan processes
    Oct  8 02:10:21 UNRAID monitor: Stop running nchan processes
    Oct  8 02:10:54 UNRAID monitor: Stop running nchan processes
    Oct  8 02:11:27 UNRAID monitor: Stop running nchan processes
    Oct  8 02:12:01 UNRAID monitor: Stop running nchan processes
    Oct  8 02:12:34 UNRAID monitor: Stop running nchan processes
    Oct  8 02:13:07 UNRAID monitor: Stop running nchan processes
    Oct  8 02:13:40 UNRAID monitor: Stop running nchan processes
    Oct  8 02:14:13 UNRAID monitor: Stop running nchan processes
    Oct  8 02:14:46 UNRAID monitor: Stop running nchan processes
    Oct  8 02:15:19 UNRAID monitor: Stop running nchan processes
    Oct  8 02:15:53 UNRAID monitor: Stop running nchan processes
    Oct  8 02:16:26 UNRAID monitor: Stop running nchan processes
    Oct  8 02:16:59 UNRAID monitor: Stop running nchan processes
    Oct  8 02:17:32 UNRAID monitor: Stop running nchan processes
    Oct  8 02:18:05 UNRAID monitor: Stop running nchan processes
    Oct  8 02:18:38 UNRAID monitor: Stop running nchan processes
    Oct  8 02:19:11 UNRAID monitor: Stop running nchan processes
    Oct  8 02:19:45 UNRAID monitor: Stop running nchan processes
    Oct  8 02:20:18 UNRAID monitor: Stop running nchan processes
    Oct  8 02:20:51 UNRAID monitor: Stop running nchan processes
    Oct  8 02:21:24 UNRAID monitor: Stop running nchan processes
    Oct  8 02:21:57 UNRAID monitor: Stop running nchan processes
    Oct  8 02:22:30 UNRAID monitor: Stop running nchan processes
    Oct  8 02:23:04 UNRAID monitor: Stop running nchan processes
    Oct  8 02:23:37 UNRAID monitor: Stop running nchan processes
    Oct  8 02:24:10 UNRAID monitor: Stop running nchan processes
    Oct  8 02:24:43 UNRAID monitor: Stop running nchan processes
    Oct  8 02:25:16 UNRAID monitor: Stop running nchan processes
    Oct  8 02:25:49 UNRAID monitor: Stop running nchan processes
    Oct  8 02:26:22 UNRAID monitor: Stop running nchan processes
    Oct  8 02:26:56 UNRAID monitor: Stop running nchan processes
    Oct  8 02:27:29 UNRAID monitor: Stop running nchan processes
    Oct  8 02:28:02 UNRAID monitor: Stop running nchan processes
    Oct  8 02:28:35 UNRAID monitor: Stop running nchan processes
    Oct  8 02:29:08 UNRAID monitor: Stop running nchan processes
    Oct  8 02:29:41 UNRAID monitor: Stop running nchan processes
    Oct  8 02:30:14 UNRAID monitor: Stop running nchan processes
    Oct  8 02:30:48 UNRAID monitor: Stop running nchan processes
    Oct  8 02:31:21 UNRAID monitor: Stop running nchan processes
    Oct  8 02:31:54 UNRAID monitor: Stop running nchan processes
    Oct  8 02:32:27 UNRAID monitor: Stop running nchan processes
    Oct  8 02:33:00 UNRAID monitor: Stop running nchan processes
    Oct  8 02:33:33 UNRAID monitor: Stop running nchan processes
    Oct  8 02:34:07 UNRAID monitor: Stop running nchan processes
    Oct  8 02:34:40 UNRAID monitor: Stop running nchan processes
    Oct  8 02:35:13 UNRAID monitor: Stop running nchan processes
    Oct  8 02:35:46 UNRAID monitor: Stop running nchan processes
    Oct  8 02:35:51 UNRAID upsmon[27897]: UPS [email protected]: administratively OFF or asleep
    Oct  8 02:35:51 UNRAID upsd[27893]: Client [email protected] set FSD on UPS [qnapups]
    Oct  8 02:35:51 UNRAID upsmon[27897]: Executing automatic power-fail shutdown
    Oct  8 02:35:51 UNRAID upsmon[27897]: Auto logout and shutdown proceeding
    Oct  8 02:35:56 UNRAID shutdown[3133]: shutting down for system halt
    Oct  8 02:35:57 UNRAID init: Switching to runlevel: 0

    and so on...

     

     

  8. 1 hour ago, Rysz said:

     

    This definitely shouldn't happen and doesn't happen with my NUT clients.

     

    Are your NUT clients somehow configured to shutdown when losing connection to the NUT master? The update routine of the plugin just stops the NUT service (on the master) before updating but definitely does not trigger a shutdown scenario.

     

    It has happened a couple of times and I'm not really sure why. I've been reading the logs a lot and it seems the plugin was updated around the time it happened. So maybe an incorrect deduction on my part.
     

    Setup:

    UPS: Back-UPS RS 900MI connected via USB to Unraid.

    Unraid: NUT controls Unraid shutdown normally, power via UPS

    QNAP: the built-in QNAP UPS feature controls its shutdown, power via UPS

    RPi (Home Assistant): NUT integration in HA and an automation that shuts down the server at 50% battery and then the RPi (itself) at 20% battery, power via UPS.

    Server (Windows): shutdown via the automation above, power via UPS.

    Router: No NUT, power via UPS

     

    After looking closer, the UPS kills power to all connected devices (router, Unraid, RPi Home Assistant, QNAP NAS, Windows server). It seems that Unraid gets the shutdown command and turns off fast enough, but the others show logs or messages indicating that they weren't shut down correctly.

     

    The Home Assistant automation above wasn't triggered by low battery, so the server lost power; the Windows logs confirm it too. That's why I was thinking the UPS kills the power outlets before the battery can even be drained by the machines and trigger the automation, and so on. I can also add that I heard the UPS tick (switch between battery and mains power) a couple of times during this (no household power outage though). It seems to run a self-test every 2 weeks, hence the ticking and switching between battery and mains power.

     

    Is there anything you can see that I'm doing wrong?

     

    Could the self-test affect NUT in some way (there doesn't seem to be a way to disable it)?

     

    Could setting Kill UPS Power to No help (and what does it actually do)?

     

    Logs/info

     

    QNAP log (showing that the shutdown was not done correctly):

    Oct  8 02:48:53 QNAP qlogd[10732]: event log: Users: System, Source IP: 127.0.0.1, Computer name: localhost, Content: The system was not shut down properly last time.
    Oct  8 02:48:53 QNAP qlogd[10732]: event log: Users: System, Source IP: 127.0.0.1, Computer name: localhost, Content: System started.
    Oct  8 02:48:53 QNAP qlogd[10732]: event log: Users: System, Source IP: 127.0.0.1, Computer name: localhost, Content: [UPS] USB UPS device plugged in.
    Oct  8 02:48:57 QNAP qlogd[10732]: event log: Users: System, Source IP: 127.0.0.1, Computer name: localhost, Content: [Volume DataVol1, Pool 1] The file system is not clean.

     

    NUT/UNRAID Log:
    Oct  8 02:35:51 UNRAID upsmon[27897]: UPS [email protected]: administratively OFF or asleep
    Oct  8 02:35:51 UNRAID upsd[27893]: Client [email protected] set FSD on UPS [qnapups]
    Oct  8 02:35:51 UNRAID upsmon[27897]: Executing automatic power-fail shutdown
    Oct  8 02:35:51 UNRAID upsmon[27897]: Auto logout and shutdown proceeding
    Oct  8 02:35:56 UNRAID shutdown[3133]: shutting down for system halt
    Oct  8 02:35:57 UNRAID init: Switching to runlevel: 0

    Oct  8 02:35:57 UNRAID init: Trying to re-exec init
    Oct  8 02:35:58 UNRAID kernel: mdcmd (36): nocheck cancel
    Oct  8 02:35:59 UNRAID emhttpd: Spinning up all drives...

     

    QNAP settings

    [screenshot]

     

    NUT settings (I changed Kill UPS Power to No this morning)

    [screenshot]

  9. @BoKKeR

     

    First, checking if I'm using the correct repo: I'm using bokker/unraidapi-re:6.12, and I'm again having the issue that after restarting HA all sensors go unavailable and I need to restart the UnraidAPI container to get them working again. I wrote about it here before, but no one else seemed to have this issue. It was fixed before the new restart/button update. So I'm wondering if I'm using the wrong repo or something?

     

    Also, a new issue that I've traced back to UnraidAPI: whenever this docker is running, my Unraid log gets spammed with the lines below. I've had this for a while but didn't know the source. Other people have talked about it (unknown if they were running UnraidAPI) and said it's most likely nothing more than log spam, but it would be nice not to have it.
     
    Example:
    Oct  7 14:14:01 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:14:34 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:15:07 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:15:40 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:16:13 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:16:46 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:17:20 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:17:53 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:18:26 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:18:59 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:19:32 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:20:05 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:20:39 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:21:12 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:21:45 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:22:18 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:22:51 SERVER-NAME monitor: Stop running nchan processes
    Oct  7 14:23:24 SERVER-NAME monitor: Stop running nchan processes
     


     

  10. 31 minutes ago, BoKKeR said:

    Good news everyone! I located the naming issue's cause: https://community.home-assistant.io/t/psa-mqtt-name-changes-in-2023-8/598099

     

    I couldn't reproduce it since I was running the older version of HA.

     

    Here is an example error you might have noticed in HA:

     

    2023-10-03 10:48:52.147 WARNING (MainThread) [homeassistant.components.mqtt.mixins] MQTT entity name starts with the device name in your config {'payload_available': 'True', 'payload_not_available': 'False', 'json_attributes_topic': 'homeassistant/unraidvm/home-assistant-container', 'name': 'unraidvm_docker_home-assistant-container_restart', 'unique_id': 'unraidvm_home-assistant-container_restart', 'payload_press': 'restart', 'device': {'identifiers': ['unraidvm'], 'name': 'unraidvm', 'manufacturer': 'ASUS ASUSTeK COMPUTER INC. , Version Rev 2802', 'model': 'Docker', 'connections': []}, 'command_topic': 'homeassistant/unraidvm/home-assistant-container/dockerState', 'encoding': 'utf-8', 'availability_mode': 'latest', 'qos': 0, 'enabled_by_default': True, 'retain': False}, this is not expected. Please correct your configuration. The device name prefix will be stripped off the entity name and becomes '_docker_home-assistant-container_restart'

     

    Meaning the friendly name can't start with the same string as the device name!

     

    I will have to look into proper naming with the new HA release. I am open to suggestions! 

     

    [screenshot]

     

     

    Yeah, there are more integrations with the same naming issue. I've never really understood the issue fully, beyond that the entity cannot have the device name in its name. And I rename everything to fit my naming scheme anyway. With that said: keep it simple and set it to unraidapi or unraid, short and simple.
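
    Just to illustrate the rule as I understand it, here's a made-up discovery payload using the bazarr names from my registry excerpt earlier (broker address and topic are placeholders, not what the container actually publishes): the entity "name" must not start with the device "name", because HA now adds the device name as a prefix itself.

        mosquitto_pub -h BROKER -t 'homeassistant/button/unraid_bazarr_restart/config' -m '{
          "name": "docker_bazarr_restart",
          "unique_id": "unraid_bazarr_restart",
          "command_topic": "homeassistant/unraid/bazarr/dockerState",
          "payload_press": "restart",
          "device": { "identifiers": ["unraid"], "name": "unraid" }
        }'

    With the device named "unraid" and the entity named "docker_bazarr_restart", HA would show it as "unraid docker_bazarr_restart" without complaining.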

     

    FYI: when you do update, I noticed that the parity-check button is misspelled. Now we're down to a really shallow level of problems :)

     

    And regarding the HA restart issue that is back for me (where you need to restart the UnraidAPI docker after an HA restart), did anyone else notice this, or is it something with my setup only?

     

     

  11. Hi, glad that you went with the restart and buttons, awesome to see the integration grow. Good work BoKKeR.

    I'm having 2 issues though:

     

    HA restart bug:
    It seems that the HA restart issue is back, which you fixed a couple of updates ago. When I restart HA, the sensors/switches/buttons go unavailable again.


    Doubled naming:
    Same as the poster above, I get strange naming on the 5 "new" buttons/switches: a doubled Unraid server name.
    button.unraidservername_unraidservername_mover
    button.unraidservername_unraidservername__partitycheck
    button.unraidservername_unraidservername__power_off
    button.unraidservername_unraidservername__reboot
    switch.unraidservername_unraidservername__array
     

  12. 19 hours ago, BoKKeR said:

    Thanks, will check on that, I think the problem is unrelated but I will create a fix for it. 

     

    Is this right, considering the HA developers page states that buttons are stateless?

     

    A button entity is an entity that can fire an event / trigger an action towards a device or service but remains stateless from the Home Assistant perspective. It can be compared to a real live momentary switch, push-button, or some other form of a stateless switch. 

     

    https://developers.home-assistant.io/docs/core/entity/button/

     

    First off, massive thanks to you BoKKeR for keeping Unraid-API alive and continuing to improve the functionality. The last fix for the HA reboot issues was really nice.

     

    Regarding buttons/switches:

    I think reboot/shutdown/parity check were seen as buttons before some updates, but I could be wrong. Also, everything works fine anyway with the switch as it is now: it does what it's supposed to, and the switch goes to on for a bit and then back to off (as it's not a static on or off activity).

     

    So it's just aesthetics in my mind, like below from another MQTT service I use. There, reboot is considered a button and gets a "Press" control in the UI. So nothing big on the user end, and it's really up to you.

     

    [screenshot]

     

    Restart Functionality for Unraid API

    I'm trying to find a way to restart the UnraidAPI container via HA. Stop/start won't work, because stop kills the container and then there's no more connection. All other containers I can work with via stop then start. Is there any way to add a restart specifically for unraid-api?
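
    In the meantime I've been thinking of just bypassing the stop/start switch entirely. A rough workaround sketch (the container name is an assumption, check docker ps for the real one; the remote variant assumes key-based SSH from the HA host):

        # on the Unraid console: restart in one step instead of stop + start
        docker restart unraid-api

        # or triggered remotely, e.g. from a shell command on the HA side
        ssh root@UNRAID-IP 'docker restart unraid-api'

    Not elegant, but a single restart avoids the "stop kills the connection" problem.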

     

  13. 5 hours ago, Orishas said:

    Hi,

     

    I tested this release; it fixes the problem of the entity becoming unavailable after an HA restart, but now it always retains the status on:true

     

    I tested it too, and the issue with entities being unavailable after an HA restart seems fixed. What do you mean by it retaining the status on:true?

  14. 1 hour ago, Orishas said:

    Hi,

    first of all thanks to @BoKKeR for supporting this API again.
    I just have a minor problem that when I shutdown UNRAID via the MQTT power_off switch it changes from the state on: true to on: false for a short time and directly back to on:true.
    Means that the last MQTT state in Home Assistant is on:true and my POWER ON / OFF script does not work because there is a condition that checks if UNRAID is on or off state. Does anyone else have this problem?

    Thanks
     

     

     

    Not happening for me, at least.

     

     

     

  15. @BoKKeR

    Some nice-to-haves, just if it's possible. I'm really glad that you fixed it as it is, so everything else is icing on the cake.

     

    Restart fix

    If I restart HA, the Unraid API devices go unknown and I need to restart the Unraid API docker to get it up and running again. An old problem and not a major issue. Is there something that can be done from the Unraid API side?

     

    Sorting

    The dockers and VMs are their own devices under the HA MQTT integration; is it possible to group them under the UNRAID MQTT device? I use HASS Agent for Windows machines and that adds all sensors and switches under the machine itself. So my machine called Server gets, for example, Plex on and off buttons under itself; it would be nice to get all the docker switches/sensors under Unraid. It's just less cluttered and easier to find stuff.

    [screenshots]

  16. 3 hours ago, MAM59 said:

    Hi! Great idea, I also need an API (mainly for waking up the drives).

     

    But being a newbie with this API stuff (never used the original one before), I already have some problems with the basic setup.

     

    It asks me for "Server IP", but my unraid is running on a different port. I've entered <ip>:<port> as usual and it seems to work a bit

    but I am getting constant errors like

    Jul 19 06:33:09 F nginx: 2023/07/19 06:33:09 [error] 13914#13914: *235440 limiting requests, excess: 20.753 by zone "authlimit", client: 172.17.0.5, server: , request: "GET /login HTTP/1.1", host: "192.168.0.4:800"
    Jul 19 06:33:14 F webGUI: Unsuccessful login user root from 172.17.0.5
    Jul 19 06:33:14 F nginx: 2023/07/19 06:33:14 [error] 13914#13914: *235463 limiting requests, excess: 20.229 by zone "authlimit", client: 172.17.0.5, server: , request: "GET /login HTTP/1.1", host: "192.168.0.4:800"
    Jul 19 06:33:14 F nginx: 2023/07/19 06:33:14 [error] 13914#13914: *235464 limiting requests, excess: 20.229 by zone "authlimit", client: 172.17.0.5, server: , request: "GET /login HTTP/1.1", host: "192.168.0.4:800"

     

    Username and password are correct, so I assume it needs some other setting in Unraid. Or should I use a different user (not root)?

     

     

    I'm not sure this helps, but a while back (on vanilla UnraidAPI) I had rate-limit issues as well, with errors similar to yours. Unraid has an authlimit on the UI, so it's not just UnraidAPI that can trigger it.

     

    https://github.com/ElectricBrainUK/UnraidAPI/issues/23

    https://forums.unraid.net/search/?&q=authlimit&search_and_or=or

     

    I tried some of those commands in the first link, but other issues occurred (so make sure you don't lock yourself out; my Unraid GUI got stuck and I needed to SSH in and reset the UI via the CLI).

     

    But what solved it for me in the end, as I remember, was to disable UnraidAPI, wait 24 hours (to clear the auth limit) and try again.
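
    If it helps, you can also watch whether the limit is still being hit while the API polls, something like this (assuming Unraid still writes its syslog to /var/log/syslog):

        # follow the syslog and only show rate-limit hits and failed logins
        tail -f /var/log/syslog | grep -E 'authlimit|Unsuccessful login'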

  17. I've been awaiting some kind of fix and was glad to see you fork it. So thank you for committing your time to it.

     

    I've replaced it now and I have problems with not being able to control anything.

    I'm using the latest Unraid and the latest Home Assistant. I also tried resetting Unraid-API after the fork change.

     

    When installing your fork I get back my old issue with VMs not being controllable. That's fine, as it was that way before the Unraid update and I don't need to control VMs (could be nice though).

    Get VM Details for ip: ip:port Failed
    Cannot read properties of undefined (reading '0')
    Get VM Details for ip: ip:port Failed
    Cannot read properties of undefined (reading '0')

     

    But I get errors controlling Unraid, stopping/starting the array from HA for example (something I could do with the vanilla unraid-api).

    In the unraid-api log I get:

    Received MQTT Topic: homeassistant/unraid/array and Message: Stopped assigning ID: MQTT-R-lk2e6ayi

    But nothing happens.

     

    When I try to start/stop a container from HA I get:

    Received MQTT Topic: homeassistant/unraid/organizr/dockerState and Message: stopped assigning ID: MQTT-R-lk2elcf8
    Part of MQTT-R-lk2elcf8 failed.

    And nothing happens

     

    When I do the same from the Unraid API web UI I first get a popup:

    Please Enter Your Password For ip:port

    And if I fill that in manually (even though it should be taken from my server setup in the Unraid API UI, which is correct),

    I get nothing in unraid-api logs and nothing happens to the container.

     

    Home Assistant can read the container state, so when I turn on a container within Unraid manually, the sensor in HA gets turned on as well.

     

    It seems there is something with the auth/write path, but I'm not sure what more you need specified, so just ask and I will do my best to provide logs and so on.
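
    One thing I can still try is to take HA out of the picture and publish the same command straight to the broker, roughly like this (broker address and credentials are placeholders; the topic and payload are the ones from the log above):

        mosquitto_pub -h BROKER_IP -u MQTT_USER -P MQTT_PASS \
          -t 'homeassistant/unraid/organizr/dockerState' -m 'stopped'

    If the container logs the message but the docker still doesn't stop, the problem should be on the auth/write side towards Unraid rather than in HA.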

  18. Hi

    I'm hoping I'm posting in the correct subforum. Looking for some input regarding the storage setup for my Unraid:

    As I'm awaiting deals on hardware, I'm using a Dell OptiPlex 7080 Micro (16 GB RAM / i5-10500T) in the meantime. So I will use my present QNAP NAS as storage until I've got new hardware, then move that data to Unraid and retire the QNAP (reuse it as backup).

    So Unraid will be used for Docker applications and 1 Home Assistant VM, as I'm moving from a Windows server and a Raspberry Pi to Docker/VM as well. And when I've got new hardware with more SATA ports and HDD slots I will move to that and then start using it as a NAS as well.


    Until I've got my new hardware, this is what I have available (from work, so no cost involved):
    3 x 256 GB SSD
    1 x 500 GB SSD
    2 x 256 GB NVMe

    2 x M.2 NVMe slots on the OptiPlex
    1 x 2.5" SATA slot on the OptiPlex

    I don't want to buy new drives until I've built the full hardware.

     

    Opt 1 (present setup as a test)
    1 x 500 GB SSD as array

    Used size in a full test with everything configured: 3.5 GB.
    - isos
    - downloads, temporary until rrs-apps dumps them to the remote QNAP media share (cache set to "no" so downloads don't affect I/O for dockers/VM)

    2 x 256 GB NVMe as cache, RAID 1

    Used size in a full test with all dockers installed: 22.6 GB.
    - appdata: "prefer" cache setting so this folder gets full speed, with RAID 1 for redundancy
    - domain: "prefer" cache setting so this folder gets full speed, with RAID 1 for redundancy
    - system: "prefer" cache setting so this folder gets full speed, with RAID 1 for redundancy

     

     

    Or

     

    Opt 2
    1 x 500 GB SSD as parity (reuse as cache in the real build when it's time)
    2 x 256 GB NVMe as part of the array (reuse as cache in the real build when it's time)
    I realize that TRIM is not supported there, or experimental at best.

     

    What's the best way to go until I've got the full setup? Are there other options I should consider?
