
Rysz

Community Developer

Posts posted by Rysz

  1. Thanks, the problem with the files not patching properly before a reboot should be fixed with the next update - it's likely caused by files being locked while NUT processes are still running during the upgrade. For now, if you're experiencing problems, a reboot should resolve them.

  2. It now seems quite likely to me that, for the users experiencing those problems, the relevant PHP files didn't get reloaded when upgrading the plugin (possibly because of files locked by services still running during patching), and that a reboot would eventually fix those display issues on the front page and the dashboard. I've submitted a pull request to reduce the chance of this happening again with future upgrades of the plugin.

     

     

  3. 40 minutes ago, CamCorp said:

    Hey Everyone,

    Can I ask a design question? I have an APC SMX3000RMLV2UNC that has a Network Management Card 2. I don't have any other communication cables attached because I can use the PowerChute clients on my Windows boxes. Question is: is there any way I can connect my UNRAID box (6.12.2) to communicate with the UPS via the network and then act as a NUT server? Or what would be the best configuration setup? I can use USB if that is the only way, but I was hoping it isn't.

     

    thanks

     

    If your UPS' network management card supports SNMP, you can connect NUT to the UPS over Ethernet using SNMP (via NUT's snmp-ups driver).
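
    A minimal sketch of what that could look like in ups.conf (the UPS name, IP address and SNMP community string below are assumptions for illustration - adjust them to match your NMC2 settings):

    [apc-smx]
        driver = snmp-ups
        port = 192.168.1.50       # IP address of the Network Management Card
        community = public        # SNMP community configured on the card
        snmp_version = v1         # or v2c/v3, depending on the card's configuration
        desc = "APC SMX3000 via NMC2"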

     

    1 hour ago, carthis said:

     

    I'm having the same issue even with 24a. Windows 11, both Firefox latest and MS Edge, Unraid 6.12.3.

    Screenshot 2023-07-24 232823.png

     

    I can't reproduce this. Could you screenshot the UPS' reported variables under "NUT Settings" => "NUT Details" so we can test with the variables your UPS is returning to NUT? I mean the table that lists all the UPS information.
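
    If it's easier, you can also dump the same variables from the command line on the server - for example (assuming your UPS is configured under the name "ups"):

    upsc ups@localhost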

     

    Did you try clearing your cache or pressing CTRL+F5 on the front page so the page gets fully refreshed?

  4. I've also added another PR removing the low battery event fail-safe setting (NUT does this by default, so it's obsolete as a separate setting). On top of that, a rare scenario with that setting in combination with some UPS devices could theoretically lead to a shutdown-reboot-shutdown loop, so it's best to switch the low battery event fail-safe setting to "No" until it's removed from the plugin with the next update. 🙂

  5. 1 minute ago, Masterwishx said:

     

    it can't be updated by auto update?

    The 2023.02.14 version (v2.8.0) was somehow auto-updated to the 2023.06.03 version (v2.7.0) ...


    I think this is the reason

    <!ENTITY pluginURL "&gitURL;/plugin/&name;.plg">

    should be

    <!ENTITY pluginURL "&gitURL;/plugin/&name;-2.8.0.plg">

    inside nut-2.8.0.plg

  6. On 7/21/2023 at 9:47 PM, yonesmit said:

     

    It's very strange because in the Unraid shutdown script (rc.6) you can find this command:

    /bin/umount -v -a

    the output of this command in my system is:

    /mnt                     : successfully unmounted
    /sys/fs/cgroup           : successfully unmounted
    /hugetlbfs               : successfully unmounted
    /sys/fs/fuse/connections : successfully unmounted
    /dev/shm                 : successfully unmounted
    /dev/pts                 : ignored
    /usr                     : successfully unmounted
    /usr                     : successfully unmounted
    /lib                     : successfully unmounted
    /lib                     : successfully unmounted
    /run                     : successfully unmounted
    /sys                     : ignored
    /proc                    : ignored

    so /usr is unmounted.

    it is impossible that the UPS can later be shut down with:

    /usr/sbin/upsdrvctl shutdown

     

    I think this command should be run before unmounting the drives.

    How can your UPS shut down?

     

    Best Regards,

     


    Just to check back here: I've tested this with 6.12.3 and you're right about your observations. I actually consider this unwanted behaviour and something UNRAID should fix, as it's likely causing problems for the system and other plugins as well. Hacking together a plugin-side workaround for this does not seem practical and would require re-mounting parts of the unmounted filesystems, something I think no plugin should do lightly and without a very good reason.

     

    I've submitted a relevant bug report in hopes of getting this fixed in newer versions of UNRAID.

     

  7. 3 hours ago, yonesmit said:

    Are you running unraid 6.12.x ?

    In both rc.0 and rc.6 i find:

    # Unmount local file systems:
    echo "Unmounting local file systems:"
    /bin/umount -v -a

    maybe this is a new addition in 6.12 (because, as I said, the UPS power-off was working OK with my UPS when I tested it a while ago).

    And the UPS also shuts down when I run this from the command line:

    upsdrvctl -u root shutdown

     

    I looked through my plugins and I can't find any of them adding the "umount -v -a" command, so it must come from Unraid 6.12.

     

    Please, can anyone running Unraid 6.12.x confirm whether "/bin/umount -v -a" is present in the file /etc/rc.d/rc.6 or not?

     

    Thanks in advance

     


    If this line really is there in all 6.12.x versions, it must've been added in a newer UNRAID version than mine, and you might be right in your observation that the plugin is attempting to call a binary which is no longer there due to the prior mass-unmounting of all filesystems, including the root filesystem. I'm unfortunately running an older version of UNRAID myself, so I can't investigate much further in that direction - in my version the root filesystem does not get unmounted as part of the shutdown.

     

    This new way of mass-unmounting all filesystems poses a more complex problem for killing the UPS power, however. We cannot simply move the NUT kill-power command above the mass-unmount command, because then the UPS could theoretically kill the power in the middle of unmounting more critical filesystems, leaving the actual shutdown in an unpredictable state - you'd never know if it was a 100% clean shutdown. The reason is that the UPS doesn't know and won't wait for what's going on; it only gets the command to kill power to the devices immediately.

     

    In my eyes, the only way to mitigate that risk would be to re-mount the root filesystem read-only after the mass-unmounts are already done, so that the necessary binary becomes available to the shutdown script again before the system halts - while still ensuring that all critical filesystems were successfully unmounted beforehand.
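
    Purely as an illustration of that idea (not something the plugin does today, and assuming upsdrvctl lives on the root filesystem that gets re-mounted), the relevant part of the shutdown sequence might look roughly like this:

    # sketch only: re-mount the root filesystem read-only after the mass-unmount,
    # then issue the UPS kill-power command once the binary is reachable again
    /bin/mount -o remount,ro /
    [ -x /usr/sbin/upsdrvctl ] && /usr/sbin/upsdrvctl shutdown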

     

    Judging by my rc.6 file, this was actually done in older versions of UNRAID, and that functionality was later commented out and deactivated because the root filesystem wasn't being unmounted (anymore) due to being in RAM anyway - until now, apparently.

     

    But generally, adding such a remount as part of a plugin would be, at least in my opinion, a major and particularly unpredictable change to the system's base shutdown sequence. Personally I'm out of my depth here at the moment, as I don't know how such a change would affect all other (older) versions of UNRAID.

     

    P.S. I'll test this with a newer version and report back here on my findings.

  8. 11 hours ago, yonesmit said:

     

    It's very strange because in the Unraid shutdown script (rc.6) you can find this command:

    /bin/umount -v -a

    the output of this command in my system is:

    /mnt                     : successfully unmounted
    /sys/fs/cgroup           : successfully unmounted
    /hugetlbfs               : successfully unmounted
    /sys/fs/fuse/connections : successfully unmounted
    /dev/shm                 : successfully unmounted
    /dev/pts                 : ignored
    /usr                     : successfully unmounted
    /usr                     : successfully unmounted
    /lib                     : successfully unmounted
    /lib                     : successfully unmounted
    /run                     : successfully unmounted
    /sys                     : ignored
    /proc                    : ignored

    so /usr is unmounted.

    it is impossible that the UPS can later be shut down with:

    /usr/sbin/upsdrvctl shutdown

     

    I think this command should be run before unmounting the drives.

    How can your UPS shut down?

     

    Best Regards,

     


    Not sure what you mean, but the relevant files are in RAM and remain accessible.

    Besides, I cannot find the command "/bin/umount -v -a" on its own anywhere in my rc.6 file.

    It does exist with additional parameters to unmount specific targets, but not "/bin/umount -v -a" on its own.

     

    Maybe your UPS does not support powering off - which one are you using?

    I know my Eaton 5P has a setting where you specifically have to allow a software-initiated poweroff.

     


  9. On 7/21/2023 at 8:02 PM, yonesmit said:

    Hi,

     

    Is this plugin really turning off the UPS for you when there is a power outage?

    In my tests the UPS is not turned off, but it was working in previous versions of Unraid or the plugin when I initially set it up.

     

    Why does it not turn off the UPS?

    Because when it tries to, rc.6 is running the command:

    [ -x /etc/rc.d/rc.nut ] && /etc/rc.d/rc.nut shutdown

    This runs rc.nut with the parameter shutdown, so that script executes:

    /usr/sbin/upsdrvctl shutdown

     

    The problem is: at this point in rc.6 the drives are unmounted, so upsdrvctl is not found.

     

    Please, can somebody confirm whether their UPS is turned off? Mine is not, and it can't be done.

     

    Thanks in advance

    Best Regards,

     

    My Eaton 5P powers off just fine (via USB). edit: on my old 6.8.3 system!

  10. 21 hours ago, rama3124 said:

    Hi, I just set up this plugin on Unraid and installed separate plugins on Home Assistant and OPNsense to use the Unraid NUT server. My understanding is that now all 3 devices will shut down as per the settings on the Unraid plugin (2 min time on battery in my case). Is that correct?

     

    When UNRAID shuts down as per your configured settings, the secondary devices (slaves) will also receive shutdown commands and shut down slightly before (usually 15 seconds) your primary device (master) does.

     

    Some clients allow you to configure separate shutdown settings for your client instance (e.g. time on battery), but to answer your question: yes, the primary instance of NUT on your UNRAID will always instruct the secondary devices to shut down with it.

     

    But you should test the shutdown sequence before blindly relying on your configuration. The command to execute on your primary NUT instance (in this case your UNRAID server) is:

    upsmon -c fsd
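
    For reference, a secondary/client device typically points at the primary with a MONITOR line like this in its upsmon.conf (the UPS name, IP address, user and password below are placeholders - use the values configured on your NUT server):

    MONITOR ups@192.168.1.10 1 upsmonuser mypassword slave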

  11. Dear fellow NUT-ters,

     

    I've fixed the problem with configuration files not persisting across a system reboot. Once these changes are incorporated into @SimonF's project, please make sure to add any configuration changes at the end of the respective configuration files.

     

    While I was at it, I've also added a feature which provides a fail-safe in case the UPS emits a LOWBATT event. This allows for a graceful shutdown in case of a prematurely dying battery (regardless of, and in addition to, any other shutdown conditions). One theoretical scenario would be a degraded battery not being able to last through the configured shutdown conditions (i.e. "Time on battery before shutdown"). Before this feature, a degraded battery could theoretically run completely empty before any other shutdown condition was met, resulting in a hard shutdown of the server. With this new feature activated, the server will immediately but gracefully shut down when the UPS emits a LOWBATT event (which usually happens around 20% remaining battery and is configurable directly on some UPS screens, e.g. on the Eaton 5P), regardless of other configured shutdown conditions. By default this setting will be DISABLED.
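
    Conceptually, this kind of LOWBATT-triggered fail-safe can be wired up through NUT's own notification hooks; the following is only a minimal sketch of the mechanism (not the plugin's actual implementation - the paths and the handler name are illustrative):

    # upsmon.conf: hand LOWBATT notifications to upssched
    NOTIFYCMD /usr/sbin/upssched
    NOTIFYFLAG LOWBATT SYSLOG+EXEC

    # upssched.conf: run the command script immediately on a LOWBATT event
    CMDSCRIPT /usr/local/bin/upssched-cmd
    PIPEFN /var/run/nut/upssched.pipe
    LOCKFN /var/run/nut/upssched.lock
    AT LOWBATT * EXECUTE lowbatt-shutdown

    #!/bin/sh
    # /usr/local/bin/upssched-cmd (sketch): map the event to a graceful shutdown
    case "$1" in
        lowbatt-shutdown) /sbin/shutdown -h now ;;
    esac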

     

    I cannot guarantee a long-term commitment to this project and particularly do not mean to hijack @SimonF's considerable efforts. I will therefore be submitting the respective pull requests for any fixes and additions so that they can be incorporated into his repository for the sake of continuity, rather than trying to push users towards a new fork of the project.

     

    However, for those urgently needing those changes, you can find the (obviously experimental) modified plugins here:

    Please make sure to uninstall any other NUT installations and ideally reboot your server before installing the experimental modified plugins. The same obviously applies if you're switching back to @SimonF's versions!

     

    I'd appreciate any errors being reported back here so I can work on fixing them before they make their way into the upstream repositories; so far, however, I've not encountered any on my UNRAID installation with a USB-connected Eaton 5P.

     

    Best regards!

     

  12. On 6/4/2023 at 5:02 AM, JudasD said:

    is anyone else seeing "/etc/nut/upssched.conf" get overwritten by the default file on each reboot?

     

    Thanks,

    JD

     

    After some investigation I've managed to find the cause for this issue and the other users' issues.

     

    Changes to the NUT configuration files (in /etc/nut/) made via the web interface or directly inside the files are currently lost because the script does not save the modified NUT configuration files onto the flash drive (USB) at all when the mode is set to "Enable Manual Config Only: No".

     

    The note "You can still edit the conf files to add extras not included on the NUT Settings Page." at the moment only applies to the LIVE/CURRENT running session of UNRAID. After a reboot the package basically gets reinstalled and the installation script pulls the default NUT configuration files from the package into /etc/nut, when it should actually be pulling the previously saved backups of the modified NUT configuration files from the flash drive (USB) into /etc/nut.

     

    This behaviour can only be changed inside the package by the developer; basically, the script would need to back up the NUT configuration files onto the flash drive (USB) regardless of the state of the "Enable Manual Config" setting.

     

    In any case, the script should save the NUT configuration files from /etc/nut onto the flash drive (USB) for re-population of /etc/nut after a system reboot, no matter whether it's using:

     

    1. only the settings from the web-interface
    2. a mixture of the web-interface settings plus custom extra settings written into the configuration files (either via web-interface or directly)
    3. only the configuration files without the web-interface settings

     

    for the configuration of the services; currently it does this only in the last scenario (number 3) - see the sketch below.
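
    A rough sketch of the backup/restore idea in shell (the flash path below is an assumption for illustration - the plugin's actual location and file set may differ):

    # on configuration changes: back up the live NUT config to the flash drive
    mkdir -p /boot/config/plugins/nut/custom
    cp /etc/nut/*.conf /boot/config/plugins/nut/custom/

    # on boot / plugin (re)install: restore the saved config instead of the package defaults
    cp /boot/config/plugins/nut/custom/*.conf /etc/nut/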

     

    @SimonF: Any chance we can get a fix for this in the next version?

  13. 6 hours ago, johnwhicker said:

     

    Thanks much. I've moved on already. Command line and scripting is king :) I tried several ways and they didn't work well. Perhaps too much data, who knows.

    RAID-1 actually was not my operating system or the entire system. It's basically an 8 TB drive that I used for a share named RAID-1. The high number of files was basically all the video and pictures that I store there.

    I didn't hurt the Unraid operating system during this process :)

     

    Thanks again for responding Sir

     

     

    Fair enough - still, exposing / to the Docker container exposes your whole operating system structure to the container.

    It's much better to go a few directories deeper and expose /mnt/user/<share name> rather than exposing / and picking the folder from the whole tree.
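
    For illustration, the narrower mapping could look something like this (the share name and the container-side path are placeholders):

    # map only the share into the container instead of the whole root filesystem
    docker run -v /mnt/user/RAID-1:/data ...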

     

  14. Hello everyone. I'd like to prepare for a USB-death scenario (mine is quite old) and hence perform backups of the USB through "Main->Flash->Flash Backup" whenever I make any changes in the UNRAID web interface whatsoever - so that the configuration is always the same on the backup and the actual machine. Now - in case my USB should die...

     

    https://wiki.unraid.net/UnRAID_6/Changing_The_Flash_Device#Using_the_Flash_Creator

    Here, using the "Local Zip" functionality of the Unraid USB Flash Creator to restore the configuration is explained.

    Basically, they describe it as being as easy as creating a USB from the ZIP and plugging it in - done.

     

    What I was wondering about is that many (older) guides describe it as absolutely crucial to delete super.dat from backups. Looking inside the flash backups created through the web interface, there always is a super.dat file, and the wiki does not seem to mention it either. Is removing super.dat only necessary for older backups where the array configuration might have changed, or does this also apply to backups taken a minute before, meaning I would need to remove this file from the ZIPs created through the web interface?

     

    What I want to know is whether I can just use the (up-to-date) ZIPs in combination with the USB Flash Creator, without any further modifications, to boot UNRAID into exactly the same configuration (including array/disk & Docker configuration) as with the previous (dead) USB stick.

     

    What's the deal with the super.dat then?

     

     

    Thanks a lot.

     

  15. 1 minute ago, itimpi said:

    I guess that might make it happen as that would create the folder.

     

    The yellow status is triggered by there being a folder corresponding to the share name.   The system does not look inside the folder to check its contents.

    Fair enough - thanks a lot for figuring this out with me. ;-)

    Definitely a valuable lesson learned regarding move-operations between user-shares.

     

     

  16. 9 minutes ago, itimpi said:

    I think the point is that if you were doing things over the network then this situation would never arise, so it has not been catered for (perhaps for efficiency reasons).

     

    I did just do this over the network, though, and the very same thing occurs.

     

    If you're interested you can reproduce it like this over the network:

    1. Invoke the mover to make sure that all your files are protected - the green light will be next to your cache-enabled user-share.

    2. Drag a single file onto the cache-enabled user-share (via Samba, etc.) - the light will turn yellow as the file will be on the cache drive.

    3. Delete the file and check the status of the light - it will remain yellow due to the empty user-share-named directory on the cache drive.

    4. Invoke the mover (which removes the empty user-share-named directory from the cache drive) - only then do you get a green light.

     

    What I am saying is that this behaviour is not limited to rare command-line operations. Basically, any time you delete the last cached file on a cache-enabled user-share, the empty folder on the cache drive will be counted as "unprotected files" until you invoke the mover and that directory gets removed - the same sequence can also be reproduced from a shell, see below.
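
    For reference, a minimal command-line reproduction of the same sequence (the share name is a placeholder; the paths assume a stock Unraid install with a single cache pool):

    touch /mnt/user/MyShare/testfile   # file lands on the cache, share status turns yellow
    rm /mnt/user/MyShare/testfile      # file is gone, but the empty share folder stays on the cache
    ls -la /mnt/cache/MyShare          # empty directory remains until the mover runs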

  17. I've just been able to replicate this behaviour on my Windows machine as well.

    Putting a file on a cache-enabled user-share puts the file on the cache drive where it belongs.

    The warning sign shows up next to the user-share as there is an unprotected file (in the cache) - this is correct.

     

    When deleting the file from the user-share (using Windows Explorer, accessing the Samba share) the protection status is not refreshed.

    The warning sign next to the user-share remains until the mover is triggered manually, and only then does it revert back to green.

    Despite there being no unprotected file left and the mover not moving anything (confirmed in the logs), the protection status only goes back to green at that point.

     

    So a new file triggers a protection status refresh but a deletion does not, weird.

     

     

  18. 2 minutes ago, itimpi said:

    It should auto-correct. The fact it has not suggests you still have a file or folder in the wrong location. Did you remember to also delete the containing folder and not just the file?

     

    You should be able to find the culprit by going to the Shares tab in the GUI and clicking the 'folder' location against the share. That will show which files/folders are on which drive - and you can drill down if required to find the culprit.

     

    The interesting part is that I actually did this.

    After deleting the test file I checked both user-shares and confirmed the file was not there.

    The warning remained though, despite all other files and directories being on the parity-protected disks.

     

    It was only after switching the cache on & off and triggering a manual mover operation that the status refreshed.
