
Rysz

Community Developer
Everything posted by Rysz

  1. Sorry, but "I'm not the only one with this problem" is not very helpful for pinpointing the cause. We would need to know what UPS you are using and which UPS driver your NUT is configured for. A screenshot of the NUT settings page could be helpful too (you can black out your serial number) to identify any problems with the driver. Again, this is not an error message; it's a status that the NUT driver apparently thinks it's receiving from the UPS. It's not the plugin that puts those messages into the syslog, it's the NUT daemon in combination with the NUT driver. By "the original plugin working", do you mean the built-in apcupsd, which uses a different protocol and driver? If that works for you, it's likely the NUT driver is not playing well with your UPS, but we'd need to know the details there.
  2. That's not an error of the script; that's what your UPS reports to the driver. Those messages are put into the syslog by the NUT services (the upsmon daemon). What UPS are you using, what UPS driver, and how is it connected (USB, serial) to your server? Is there any indication (beeps, LEDs, ...) that your UPS really is switching between line and battery here?
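If you want to see for yourself which status the driver is currently reporting, NUT's upsc tool can query it directly. A minimal sketch, assuming your UPS is configured under the name "ups" on the local host (substitute your own name from ups.conf):

```shell
#!/bin/bash
# Query the driver-reported status via NUT's upsc tool and translate the
# common ups.status flags. "ups@localhost" is an assumption - use the UPS
# name from your own NUT configuration.

classify_status() {
    case "$1" in
        OL*) echo "on line power" ;;   # mains power present
        OB*) echo "on battery" ;;      # running from battery
        *)   echo "unknown: $1" ;;
    esac
}

classify_status "$(upsc ups@localhost ups.status 2>/dev/null)"
```

If this flips between "on line power" and "on battery" while the UPS itself shows no reaction (beeps, LEDs), the driver rather than the UPS is the likely culprit.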
  3. You're not running the 2.8.0 package, which has those drivers included. Please uninstall the plugin, ideally restart the server, and manually install the 2.8.0 version: https://raw.githubusercontent.com/SimonFair/NUT-unRAID/master/plugin/nut-2.8.0.plg Then set it up as follows: it should detect it out of the box with the 2.8.0 version - please let us know here.
  4. What error is that - can you make a screenshot?
  5. If you say it like that, that does make sense. I'll also comment it into the pull request.
  6. Should be fixed in the next update too thanks to @Peuuuur Noel
  7. Wouldn't this work, for example?

```
/bin/umount -v -a -t no,proc,sysfs,devtmpfs,fuse.gvfsd-fuse,tmpfs,overlay
```

That way everything not on the "no,[...]" list would get unmounted cleanly, including the vfat /boot filesystem, while our local filesystems stay mounted. This was actually already done cleanly (unmounting everything but local filesystems) in 6.8.3, for example:

```
# Unmount local file systems:
# limetech - remove /boot, /lib/firmware, and /lib/modules from mtab first
/bin/umount --fake /boot
/bin/umount --fake /lib/firmware
/bin/umount --fake /lib/modules
echo "Unmounting local file systems:"
/bin/umount -v -a -t no,proc,sysfs,devtmpfs,fuse.gvfsd-fuse,tmpfs
# limetech - shut down the unraid driver if started
if grep -qs 'mdState=STARTED' /proc/mdstat ; then
  echo "Stopping md/unraid driver:"
  /usr/local/sbin/mdcmd stop
fi
# limetech - now unmount /lib/firmware, /lib/modules and /boot
/bin/umount -v /lib/firmware
/bin/umount -v /lib/modules
/bin/umount -v /boot
```

We'd just have to add the new "overlay" filesystem to the "no,[...]" list of local filesystems to persist.
  8. Thanks, the problem with the files not patching properly before a reboot should be fixed with the next update - it's likely due to locked files when NUT processes are still running while doing the upgrade. At the moment if you're experiencing problems a reboot should resolve them.
  9. Just to follow up on this, you also upgraded the plugin and didn't restart afterwards - right?
  10. It seems quite likely to me now that for the users experiencing those problems the relevant PHP files didn't get reloaded when upgrading the plugin (could be locked files due to still running services during patching) and that a reboot would fix those display issues on the front page and the dashboard eventually - I've submitted a pull request to reduce the chance of this happening again with future upgrades of the plugin.
  11. I'm not having the problem myself - it works here, but I've added a pull request for a function to be able to export a UPS diagnostics file to help us diagnose such and other problems better.
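Until that export function lands, the same information can be collected by hand. A rough sketch (this is not the plugin's actual export; the UPS name and output path are assumptions):

```shell
#!/bin/bash
# Dump the NUT daemon's view of the UPS into a single diagnostics file.
# "ups@localhost" and the output path are illustrative placeholders.

out=/tmp/nut-diagnostics.txt
{
    echo "== configured UPS devices (upsc -l) =="
    upsc -l 2>&1
    echo "== variable dump (upsc ups@localhost) =="
    upsc ups@localhost 2>&1
} > "$out"
echo "diagnostics written to $out"
```

The resulting file can then be attached to a forum post (after blacking out serial numbers).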
  12. If your UPS' network card supports SNMP, you can connect NUT to the UPS via ethernet utilizing SNMP. I can't reproduce this - could you screenshot the UPS' reported variables under "NUT Settings" => "NUT Details" so we can test this with the variables your UPS is returning to NUT? I mean the table with all the UPS information. Did you try clearing your cache or pressing CTRL+F5 on the front page so a full refresh of the page is done?
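As a starting point for the SNMP route, an entry in ups.conf along these lines should work with NUT's snmp-ups driver. The section name, IP address, and community string below are placeholders for your own values:

```
[eaton-snmp]
    driver = snmp-ups
    port = 192.168.1.50      # IP address of the UPS network card
    community = public       # SNMP community string configured on the card
```

After adding it, the usual `upsc eaton-snmp` query should return the variables read over SNMP.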
  13. Thanks for checking, the NUT plugin inserts the UPS inverter shutdown above the "# Now halt [...]" comment:

```
[ -x /etc/rc.d/rc.nut ] && /etc/rc.d/rc.nut shutdown

# Now halt (poweroff with APM or ACPI enabled kernels) or reboot.
if [ "$shutdown_command" = "reboot" ]; then
  echo "Rebooting."
  /sbin/reboot
else
  /sbin/poweroff
fi
```

The script then in turn calls the binary /usr/sbin/upsdrvctl shutdown (however, /usr/sbin at that stage is no longer available due to the unmounting) to power off the UPS inverter. So basically I think the local (RAM) filesystems should stay available until the system halts entirely, as in older versions where this still works. Keeping those filesystems mounted shouldn't have a negative effect on a graceful shutdown, since everything in RAM is lost on reboot regardless, if I'm not misunderstanding something here. The upside of keeping the filesystems mounted would be that all services (be it plugins or other core processes) could do everything needed to facilitate a clean shutdown right until the very end (system halt) without having any of their resources taken away beforehand. 🙂
  14. I've also added another PR removing the low battery event fail-safe setting (NUT does this by default so it's obsolete as a separate setting). Plus a rare scenario with that setting in combination with some UPS devices could theoretically lead to a shutdown-reboot-shutdown loop, so best switch that low battery event fail-safe setting to "No" until it's removed from the plugin with the next update. 🙂
  15. Thanks everyone for the combined efforts, glad everything's working now (except for the /usr bug). ☺️
  16. I think this is the reason:

```
<!ENTITY pluginURL "&gitURL;/plugin/&name;.plg">
```

should be

```
<!ENTITY pluginURL "&gitURL;/plugin/&name;-2.8.0.plg">
```

inside nut-2.8.0.plg
  17. Just to check back here, I've tested this with 6.12.3 and you're right about your observations. I actually consider this unwanted behaviour and something UNRAID should fix as it's likely causing problems for the system and other plugins as well. Hacking together a plugin-wise solution for this does not seem practical and would require re-mounting parts of the unmounted filesystems, something that I think no plugin should do lightly and without a very good reason. I've submitted a relevant bug report in hopes of getting this fixed in newer versions of UNRAID.
  18. In previous versions of UNRAID, /usr and part of the /lib folder were part of the root filesystem / and consequently were never unmounted, because the root filesystem / persisted in RAM throughout the shutdown sequence in /etc/rc.d/rc.6. This also meant that all the binaries in /usr/sbin and /usr/local/sbin remained available to services until the system halted entirely, ensuring - for example - that plugins could run their respective shutdown sequences gracefully by utilizing their binaries in /usr/sbin (e.g. shut down devices, stop their associated services, etc.).

In newer versions of UNRAID (e.g. the latest 6.12.3), /usr and the entire /lib folder are no longer part of the root filesystem / and, being separate mount-points, are unmounted by the mass-unmount-all command present early on in /etc/rc.d/rc.6:

```
# Unmount local file systems:
echo "Unmounting local file systems:"
/bin/umount -v -a
```

From that point on, anything in the /usr and /lib folders becomes unavailable, including anything in /usr/sbin or /usr/local/sbin. This is curious, because the shutdown script actually proceeds to reference such an unavailable binary directly below that mass-unmount-all command:

```
# Unmount local file systems:
echo "Unmounting local file systems:"
/bin/umount -v -a

# limetech - shut down the unraid driver if started
if /bin/grep -qs 'mdState=STARTED' /proc/mdstat ; then
  echo "Stopping md/unraid driver:"
  /usr/local/sbin/mdcmd stop
  if ! /bin/grep -qs 'mdState=STOPPED' /proc/mdstat ; then
    echo "Unclean shutdown - Cannot stop md/unraid driver"
  else
    # we have to mount /boot again
    if ! /sbin/mount -v /boot ; then
      echo "Unclean shutdown - Cannot remount /boot"
    else
      /bin/rm -f /boot/config/forcesync
      /sbin/umount /boot
      echo "Clean shutdown"
    fi
  fi
fi
```

This if-clause will never be able to stop the md/unraid driver (if it is started), because the binary mdcmd at that point is already unmounted and unavailable due to the command above the if-clause unmounting the entire /usr tree.
Another problem we're facing with this behaviour is - as an example - our NUT plugin (for UPS) being unable to shut down the UPS inverter, because the call to the binary /usr/sbin/upsdrvctl is made impossible by the premature unmounting of /usr - see further details here:

I therefore propose that the developers consider persisting the /usr and /lib mount-points until the system halts, as they should only be in RAM anyhow and their unmounting seems to be no requirement for a graceful shutdown, but rather complicates such a graceful shutdown for both the plugins and the core system itself. Perhaps reverting to specifying which types of filesystems to unmount (or not to unmount) utilizing "umount -v -a -t" instead of just "umount -v -a" could be considered, as was done in earlier versions of UNRAID. Overall, is unmounting the local filesystems on shutdown even necessary at all when they exist just in RAM?
  19. If this line really is there in all 6.12.x versions, it must have been added in a newer UNRAID version than mine, and you might be right in your observation that the plugin is attempting to call a binary which is no longer there due to the previous mass-unmounting of all filesystems, including the root filesystem. Unfortunately I'm running an older version of UNRAID myself, so I can't really investigate much further in that direction; in my version the root filesystem does not get unmounted as part of the shutdown. This new way of mass-unmounting all filesystems poses a more complex problem for killing the UPS power, however. We cannot simply move the NUT kill-power command above the mass-unmount command, because then the UPS could theoretically kill the power in the middle of unmounting some more critical filesystems... resulting in an unpredictable end result of the actual shutdown - you'd never know if it was a 100% clean shutdown. The reason is that the UPS doesn't know and won't wait for what's going on; it only gets the command to kill the power to the devices pronto. In my eyes, the only possibility to mitigate that risk would be re-mounting the necessary root filesystem as read-only after the mass-unmounts are already done, so that the necessary binary becomes available to the shutdown script once again before the system halts - while making sure all critical filesystems were successfully unmounted beforehand. Judging by my rc.6 file, this was actually done in older versions of UNRAID, and that functionality was later commented out and deactivated because the root filesystem wasn't unmounted (anymore) due to being in RAM anyway (until now, apparently). But generally, adding such a remount as part of a plugin would be, at least in my opinion, a major and particularly unpredictable change to the system's base shutdown sequence.
Personally, I'm out of my depth here at the moment, as I wouldn't know how such a change would affect all other (older) versions of UNRAID. P.S. I'll test this with a newer version and report back here on my findings.
  20. Not sure what you mean, but the relevant files are in RAM and remain accessible. Besides, I cannot find the command "/bin/umount -v -a" on its own anywhere in my rc.6 file. It does exist with additional parameters to unmount specific targets, but not "/bin/umount -v -a" on its own. Maybe your UPS does not support powering off - which one are you using? I know my Eaton 5P has a setting where you specifically have to allow a software-initiated poweroff.
  21. My Eaton 5P powers off just fine (via USB). edit: on my old 6.8.3 system!
  22. When UNRAID shuts down as per your configured settings, the secondary devices (slaves) will also receive shutdown commands and shut down slightly before (usually 15 seconds) your primary device (master) does. Some clients allow you to configure separate shutdown settings for your client instance (e.g. time on battery), but to answer your question: yes, the primary instance of NUT on your UNRAID will always instruct the secondary devices to shut down with it. But you should test the shutdown sequence before blindly relying on your configuration: "upsmon -c fsd" is the command to execute on your primary NUT instance (in this case your UNRAID server).
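For reference, the primary/secondary relationship is declared via MONITOR lines in upsmon.conf on each machine. A sketch with placeholder names, addresses, and passwords (older NUT versions use the keywords master/slave instead of primary/secondary):

```
# upsmon.conf on the UNRAID server (primary):
MONITOR ups@localhost 1 upsmon_user secret primary

# upsmon.conf on each client machine (secondary):
MONITOR ups@192.168.1.10 1 upsmon_user secret secondary
```

The secondaries connect to the upsd instance on the primary, which is why they receive the forced-shutdown (FSD) signal when the primary decides to go down.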
  23. Dear fellow NUT-ters, I've fixed the problem with configuration files not persisting across a system reboot. Once these changes are incorporated into @SimonF's project, please make sure to add any configuration changes at the end of the respective configuration files. While at it, I've also added a feature which provides a fail-safe in case the UPS emits a LOWBATT event. This allows for a graceful shutdown in case of a prematurely dying battery (regardless of, and in addition to, any other shutdown conditions). One theoretical scenario would be a degraded battery not being able to withstand the configured shutdown conditions (i.e. "Time on battery before shutdown"). Before this feature, a degraded battery could theoretically run completely empty before any other shutdown condition is met, resulting in a hard shutdown of the server. With this new feature activated, the server will shut down immediately but gracefully when the UPS emits a LOWBATT event (which usually happens around 20% remaining battery and is configurable directly on some UPS screens, like on the Eaton 5P), regardless of other configured shutdown conditions. By default this setting will be DISABLED. I cannot guarantee a long-term commitment to this project and particularly do not mean to hijack @SimonF's considerable efforts. I will therefore be submitting respective pull requests for any fixes and additions so that they can be incorporated into his repository for the sake of continuity, rather than trying to steer users toward a new fork of the project.
However, for those urgently needing those changes, you can find the (obviously experimental) modified plugins here:

nut 2.7.4 (previous stable, modified; now experimental but tested working): https://raw.githubusercontent.com/desertwitch/NUT-unRAID/testing/plugin/nut.plg
nut 2.8.0 (previous experimental, modified; now even more experimental and untested): https://raw.githubusercontent.com/desertwitch/NUT-unRAID/testing/plugin/nut-2.8.0.plg

Please make sure to uninstall any other NUT installations and ideally reboot your server before installing the experimental modified plugins. The same obviously applies if you're switching back to @SimonF's versions! I'd appreciate any errors being reported back here so I can work on fixing them before they make their way into the upstream repositories; so far, however, I've not encountered any on my UNRAID installation with a USB-connected Eaton 5P. Best regards!
  24. After some investigation I've managed to find the cause of this issue and the other users' issues. Changes to the NUT configuration files (present in /etc/nut/), whether made via the web-interface or directly inside the files, are currently lost because the script does not save those modified NUT configuration files onto the flash drive (USB) at all when the mode is set to "Enable Manual Config Only: No". The information "You can still edit the conf files to add extras not included on the NUT Settings Page." at the moment only applies to the LIVE/CURRENT running session of UNRAID. After a reboot, the package basically gets reinstalled and the installation script pulls the default NUT configuration files from the package into /etc/nut, when it should actually be pulling the previously saved backups of the modified NUT configuration files from the flash drive (USB) into /etc/nut. This behaviour can only be changed inside the package by the developer; basically, the script would need to back up the NUT configuration files onto the flash drive (USB) regardless of the state of the "Enable Manual Config" setting. In any case, the script should save the NUT configuration files from /etc/nut onto the flash drive (USB) for re-population of /etc/nut after a system reboot, no matter if it's using:

1. only the settings from the web-interface
2. a mixture of the web-interface settings plus custom extra settings written into the configuration files (either via the web-interface or directly)
3. only the configuration files, without the web-interface settings

for the configuration of the services; it currently does this only in the last scenario (number 3). @SimonF: Any chance we can get a fix for this in the next version?
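To illustrate what such a fix could look like, here is a minimal sketch of the backup step. The function name and flash path are assumptions for illustration, not the plugin's actual code:

```shell
#!/bin/bash
# Hypothetical sketch: persist the live NUT configuration files to the flash
# drive so a later reinstall can restore them. backup_nut_conf and the
# default paths are illustrative; the real plugin may organise this differently.

backup_nut_conf() {
    local src="${1:-/etc/nut}"                   # live NUT configuration
    local dst="${2:-/boot/config/plugins/nut}"   # flash-backed plugin folder
    mkdir -p "$dst" || return 1
    local f
    for f in "$src"/*.conf; do
        [ -e "$f" ] || continue                  # glob matched nothing
        cp -p "$f" "$dst/"                       # keep timestamps/permissions
    done
}
```

Run unconditionally on every settings change (and before shutdown), this would cover all three scenarios regardless of the "Enable Manual Config" setting.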