Everything posted by kubed_zero

  1. That is an Unraid-specific executable as far as I'm aware, so I think trying to get it showing up there would be a dead end. On the other hand, you don't actually need mdcmd to get disk temperature. That responsibility is held by smartctl, which is not Unraid-specific. You give it a device path (e.g. /dev/sdb) and it reports back temperature and SMART information: https://github.com/kubedzero/unraid-snmp/blob/main/source/usr/local/emhttp/plugins/snmp/disk_temps.sh#L120 So you could make a script that determines which /dev/sdX paths the non-pool drives are attached as, and then calls smartctl for each of those. If you then added that script to your SNMP config, its output would start showing up in SNMP. smartctl can also scan for attached devices via `smartctl --scan`, so you could always start there as well; a rough sketch is below. Hope that makes sense! Feel free to browse through the GitHub code for the disk temp script for further insight.
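     As a loose illustration (not the plugin's actual implementation, which lives at the link above), here's a minimal sketch that scans for devices with `smartctl --scan` and prints each drive's temperature. It assumes SMART-capable drives that expose a Temperature_Celsius attribute, which varies by drive:

       #!/bin/bash
       # Hedged sketch: enumerate attached devices, then query each one's temperature.
       smartctl --scan | awk '{print $1}' | while read -r device; do
           # -A prints the SMART attribute table; column 10 is the raw value
           temp=$(smartctl -A "$device" | awk '/Temperature_Celsius/ {print $10}')
           echo "${device}: ${temp:-unknown} C"
       done

     Hooking a script like this into snmpd is typically done with an `extend` line in snmpd.conf, which is how its output would surface over SNMP.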
  2. There is a separate standalone script that parses the output of `mdcmd status` to determine which disks are installed: https://github.com/kubedzero/unraid-snmp/blob/main/source/usr/local/emhttp/plugins/snmp/disk_temps.sh#L52C18-L52C30 Here is some more information on the output of the mdcmd command. A rough sketch of that parsing is below. Note that the SNMP configuration will only adjust the free/used disk space output and won't add disk temperature.
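     For illustration only (the real script is at the link above), `mdcmd status` emits key=value lines; assuming keys of the form rdevName.N holding device names (hedged: exact keys can vary across Unraid versions), pulling the device paths might look like:

       #!/bin/bash
       # Hedged sketch: extract device names from mdcmd status output,
       # e.g. a line like "rdevName.1=sdb" becomes "/dev/sdb"
       mdcmd status | grep '^rdevName' | cut -d'=' -f2 | while read -r name; do
           # Skip empty slots (unassigned array positions)
           [ -n "$name" ] && echo "/dev/$name"
       done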
  3. Ah, I found the true root cause of everyone's issue. https://linux.die.net/man/8/snmpd notes the different flags. Looking more closely at the output from ps, I see snmpd is called by default with the -a flag, which "logs the source addresses of incoming requests", hence the huge log files.

       root@Test:~# ps -ef | grep snmp
       root      2070     1  0 18:54 ?        00:00:00 /usr/sbin/snmpd -A -p /var/run/snmpd -a -c /etc/snmp/snmpd.conf
       root      2933  2715  0 18:55 pts/0    00:00:00 grep snmp

     So the root cause is twofold: I didn't have the doinst.sh script built in a way that was compatible with net-snmp 5.9.3, and that allowed the default -a flag to remain and fill up people's disks with logged addresses. This is fixed in the version I just released, 2023.04.20 (blaze it): https://github.com/kubedzero/unraid-snmp/releases/tag/2023.04.20 Now the ps output should look the same as what I have below:

       root@Test:~# ps -ef | grep snmp
       root      4749     1  0 19:20 ?        00:00:00 /usr/sbin/snmpd -A -p /var/run/snmpd -LF 0-5 /var/log/snmpd.log -c /etc/snmp/snmpd.conf
       root      5247  5167  0 19:21 pts/0    00:00:00 grep snmp

     One can also verify the correct behavior by looking at the log output in the plugin update popup window. This is wrong, because the desired flags don't match the actual flags:

       Editing SNMP startup options in rc.snmpd to be [-LF 0-5 /var/log/snmpd.log -A -p /var/run/snmpd -a]
       Restart SNMP daemon now that we've adjusted how rc.snmpd starts it
       Starting snmpd:  /usr/sbin/snmpd -A -p /var/run/snmpd -a -c /etc/snmp/snmpd.conf

     This is the correct output:

       Writing extra SNMPD_OPTIONS into /etc/default/snmpd to configure logging
       Start SNMP daemon back up now that snmpd.conf and /etc/default/snmpd modifications are done
       Starting snmpd:  /usr/sbin/snmpd -A -p /var/run/snmpd -LF 0-5 /var/log/snmpd.log -c /etc/snmp/snmpd.conf

     Thanks all, and let me know if you have issues!
  4. OK, I've figured out the issue and will have a fix out soon. The newest version of the SNMP plugin (2023.02.19) declares a dependency on net-snmp-5.9.3-x86_64-1.txz, while the old version (2021.05.21) used net-snmp-5.9-x86_64-1.txz. The problem is that the net-snmp package's /etc/rc.d/rc.snmpd file differs between 5.9 and 5.9.3, so the doinst.sh file in my own unraid-snmp plugin fails to modify the file to adjust the logging.

     Old /etc/rc.d/rc.snmpd snippet:

       OPTIONS="-A -p /var/run/snmpd -a"

       start() {
         if [ -x /usr/sbin/snmpd -a -f /etc/snmp/snmpd.conf ]; then
           echo -n "Starting snmpd: "
           /usr/sbin/snmpd $OPTIONS -c /etc/snmp/snmpd.conf
           echo " /usr/sbin/snmpd $OPTIONS -c /etc/snmp/snmpd.conf"
         fi
       }

     New /etc/rc.d/rc.snmpd snippet:

       [ -r /etc/default/snmpd ] && . /etc/default/snmpd
       SNMPD_OPTIONS=${SNMPD_OPTIONS:-"-A -p /var/run/snmpd -a"}

       start() {
         if [ -x /usr/sbin/snmpd -a -f /etc/snmp/snmpd.conf ]; then
           echo -n "Starting snmpd: "
           /usr/sbin/snmpd $SNMPD_OPTIONS -c /etc/snmp/snmpd.conf
           echo " /usr/sbin/snmpd $SNMPD_OPTIONS -c /etc/snmp/snmpd.conf"
         fi
       }

     My modification code, compatible with 5.9 but not 5.9.3:

       new_flags="-LF 0-5 /var/log/snmpd.log "
       options=$(grep "OPTIONS=" /etc/rc.d/rc.snmpd | cut -d'"' -f 2)
       if [[ $options != *"-L"* ]]; then
         options=$new_flags$options
         echo "Editing SNMP startup options in rc.snmpd to be [$options]"
         sed --in-place=.bak --expression "s|^OPTIONS=.*|OPTIONS=\"$options\"|" /etc/rc.d/rc.snmpd
       else
         echo "SNMP logging flag already present in rc.snmpd, skipping modification"
       fi

     I need to change this code to something that can handle the new syntax. Some thoughts I'll be exploring:
     - I saw /etc/default/snmpd included in the new net-snmp package but not the old one, and perhaps that could be used
     - I need to check the syntax, but I might be able to just set an environment variable SNMPD_OPTIONS, as the startup script now seems to merge whatever it's set to with the values declared in /etc/rc.d/rc.snmpd
     - I could update my script to do the text modification inside /etc/rc.d/rc.snmpd, the same as before but with updated find-and-replace logic

     A rough sketch of the first option is below. Hopefully I'll have a plugin update out in the next week. Thanks @Uncore for the details!
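     For illustration, here's a minimal sketch of the /etc/default/snmpd route (hedged: this is the direction described above, not necessarily the exact shipped fix). Note that the ${SNMPD_OPTIONS:-...} syntax substitutes the whole value rather than appending, so the written options must restate every flag we still want:

       #!/bin/bash
       # Sketch: write SNMPD_OPTIONS into /etc/default/snmpd, which the 5.9.3
       # rc.snmpd sources before falling back to its built-in defaults.
       # We restate -A and -p, add file logging, and drop -a so source
       # addresses are no longer logged.
       echo 'SNMPD_OPTIONS="-A -p /var/run/snmpd -LF 0-5 /var/log/snmpd.log"' > /etc/default/snmpd

       # Restart the daemon so the new options take effect (assumes the usual
       # Slackware-style rc script arguments)
       /etc/rc.d/rc.snmpd restart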
  5. You didn't need to do this; I explicitly noted that in an earlier message. Anyway... odd. I'm running 6.9.2 and have no issues:

       root@Unraid:~# cat /etc/rc.d/rc.snmpd
       #!/bin/sh
       #
       # rc.snmpd      This shell script takes care of starting and stopping
       #               the net-snmp SNMP daemon
       OPTIONS="-LF 0-5 /var/log/snmpd.log -A -p /var/run/snmpd -a"
       root@Unraid:~# ps -ef | grep snmp
       root      3744     1  0 Apr15 ?        00:01:35 /usr/sbin/snmpd -LF 0-5 /var/log/snmpd.log -A -p /var/run/snmpd -a -c /etc/snmp/snmpd.conf
       root     19665 19643  0 17:33 pts/0    00:00:00 grep snmp

     Yes, this is expected. A clean install of SNMP would be (command-line equivalent sketched below):
     - Uninstall the SNMP plugin, and ensure it fully completes and removes all the SNMP-related files in /boot/config/plugins
     - Reboot Unraid
     - Reinstall SNMP after the reboot

     That's great news. That means the issue people are seeing seems to come from SNMP's logging modification not installing properly. Again, odd that I can't reproduce this and haven't seen it myself. What version of Unraid?
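     For reference, a hedged sketch of approximating that clean slate from a shell, based on the debugging commands mentioned later in this thread (a reboot is still the surest reset):

       # Remove the packages from the running OS (no-op if they aren't installed)
       removepkg net-snmp unraid-snmp

       # Wipe the plugin's directory from the USB drive, including customizations
       rm -rf /boot/config/plugins/snmp/

       # Then reboot and reinstall the plugin through Unraid's UI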
  6. Still no luck. I tried the following on 6.12-rc3 with macOS 13.3.1. I added these settings to /boot/config/smb-fruit.conf and did an array stop and start each time, checking /etc/samba/smb-shares.conf to confirm the settings were applied.

     No luck with:

       vfs objects = acl_xattr fruit streams_xattr
       fruit:encoding = native
       fruit:metadata = stream
       fruit:posix_rename = yes

     No luck with:

       vfs objects = catia fruit streams_xattr
       fruit:encoding = native
       fruit:metadata = stream
       fruit:posix_rename = yes

     No luck with the settings from https://wiki.samba.org/index.php/Configure_Samba_to_Work_Better_with_Mac_OS_X:

       vfs objects = fruit streams_xattr
       fruit:metadata = stream
       fruit:model = MacSamba
       fruit:posix_rename = yes
       fruit:veto_appledouble = no
       fruit:nfs_aces = no
       fruit:wipe_intentionally_left_blank_rfork = yes
       fruit:delete_empty_adfiles = yes

     In all cases, I can't mount the sparsebundle, the Time Machine backup does not complete, etc. Back to 6.9.2 🤦‍♀️

     Edit: One more note, since I only saw one comment on this that I now can't find. Per https://www.samba.org/samba/security/CVE-2021-44142.html, versions of Samba prior to 4.13.17 had a vulnerability when vfs_fruit was used with fruit:metadata=netatalk or fruit:resource=file. Considering 6.9.2 used Samba 4.12.14, it was affected by this, and it's possible the fix has affected functionality.
  7. So more people are having this issue, odd. What's in your logs, the same thing as @irishjd? We were chatting in PMs, but I guess this is applicable to more people. I've never had luck getting any log output myself.
     - What version of Unraid are you using?
     - Did you apply any customizations to the SNMP config?
     - Have you rebooted recently?
     - Has uninstalling and reinstalling the SNMP plugin made any difference?

     I suspect I know the line that is causing this. Inside the unraid-snmp TXZ file there is the doinst.sh script that configures and modifies everything. Specifically, there is a line that sets up the logging: https://github.com/kubedzero/unraid-snmp/blob/main/source/install/doinst.sh#L58 Right now it outputs levels 1=a=alert, 2=c=crit, 3=e=err, 4=w=warn, 5=n=notice into /var/log/snmpd.log. I'd suspect that changing this line to output a narrower range (0-4, 1-4, 1-3, etc.) would cut down on your log output; an example is sketched below. That said, I've never gotten ANYTHING to show up in /var/log/snmpd.log on my own installations, so I've never been able to test. I need reproduction steps to get a better idea of what's happening.
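     For illustration, the -LF flag takes a priority range and a file, so narrowing the range (hedged: how much this quiets things depends on what snmpd actually emits) would look like:

       # Current: log priorities 0 (emergency) through 5 (notice) to a file
       /usr/sbin/snmpd -LF 0-5 /var/log/snmpd.log -c /etc/snmp/snmpd.conf

       # Quieter: only priorities 0 through 3 (up to error), dropping warn/notice
       /usr/sbin/snmpd -LF 0-3 /var/log/snmpd.log -c /etc/snmp/snmpd.conf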
  8. What is it filled with? The file has always been empty whenever I checked or did development on it.
  9. I stand by my decision to stick with 6.9.2 until the DEFAULT settings for Unraid's SMB configs work correctly with the BUILT-IN functionality of Time Machine. 6.9.2 did that just fine, and there's no valid excuse in my mind for 6.11 and onwards not to include it, especially given people have been able to hack their way to something functional. I still challenge Unraid to raise the bar and get Time Machine backups working by default on a fresh install of 6.11/6.12 with zero additional manual config necessary. AKA:
     - Install Unraid to a flash drive
     - Boot, create a one-disk, no-parity array
     - Create a Time Machine compatible share after enabling macOS optimization in the SMB settings
     - Perform a successful backup
     - Perform successful incremental backups (the part that I've not yet seen working correctly)

     Bonus points if Time Machine shares created on 6.9 and earlier can continue to work with 6.11 and onwards.
  10. Thanks for that info @MVLP. I realized I could optimize the way I'm grabbing CPU MHz so I applied that change and stopped using cat, hopefully it won't happen anymore. @cjhammel I also added the change to read the Community string from the snmpd.conf file, so changing the Community should no longer fail the SNMP plugin installation after boot. These changes are a part of 2023.02.19 😄
  11. Thanks all for the info. I just released an update, 2023.02.18, that should fix this, now reading from /proc/cpuinfo instead of lscpu. While testing Unraid 6.12, I found that my VMs with an updated version of lscpu (2.38 vs 2.36) were no longer outputting CPU MHz at all, much less min and max. Granted, my implementation only reads the speed of CPU core 0, but I figure that's better than nothing; a rough sketch of the idea is below. Technically someone could submit a PR that grabs the CPU MHz from all the different cores and averages them out, or takes the max (or min) of all of them. Let me know how install/update and testing goes @mattie112 @MVLP @irishjd
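      As a loose illustration of that approach (the plugin's actual script is in the repo), reading core 0's speed from /proc/cpuinfo, plus the all-core averaging variant mentioned above:

        # Core 0 only: take the first "cpu MHz" line from /proc/cpuinfo
        grep -m1 'cpu MHz' /proc/cpuinfo | awk '{print $4}'

        # Average across all cores (the PR idea mentioned above)
        awk '/cpu MHz/ {sum += $4; n++} END {if (n) printf "%.0f\n", sum / n}' /proc/cpuinfo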
  12. Thanks for sharing, what a beast of a CPU! I'll be interested in seeing what @MVLP posts as well. If it's the same, I might have to modify the script to attempt to grab "CPU max MHz" if "CPU MHz" doesn't output anything.
  13. CPU speed is manually fetched with a shell script that calls `lscpu` and then parses the results: https://github.com/kubedzero/unraid-snmp/blob/main/source/usr/local/emhttp/plugins/snmp/cpu_mhz.sh#L11 Can you run `lscpu` and share the results with me, either here or by PM? I should be able to modify the script to parse other lines if your system (and others' too, I imagine) reports CPU speed differently; a rough sketch of what a fallback could look like is below.
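      For illustration, a hedged sketch of lscpu parsing with the "CPU max MHz" fallback idea (assumes English-locale lscpu output; the plugin's real script is at the link above):

        #!/bin/bash
        # Try "CPU MHz" first; newer lscpu versions (2.38+) may omit it,
        # so fall back to "CPU max MHz" if the first parse comes up empty.
        mhz=$(lscpu | awk -F: '/^CPU MHz/ {gsub(/ /, "", $2); print $2}')
        if [ -z "$mhz" ]; then
            mhz=$(lscpu | awk -F: '/^CPU max MHz/ {gsub(/ /, "", $2); print $2}')
        fi
        echo "${mhz:-unknown}"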
  14. I added this info to the Recommended post for this topic, but a good debugging step is to do a sanity check that SNMP is fully removed from the OS (not from the boot drive) by running `removepkg net-snmp unraid-snmp`. @iball used this to get back to a good state, without the need for a reboot! I have no plans to change the default at the moment. You can also use the Settings->SNMP page to modify the SNMP configuration. One of the lines by default is `rocommunity public`, which I assume is the change you're looking for. Be warned that the plugin install script (the PLG file at /boot/config/plugins/snmp.plg) expects certain config values such as the "public" community, and may require modification. You're also welcome to make a PR that automatically picks up the community name from the SNMP config; that functionality is not yet built in (a rough sketch of the idea is below).
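      For anyone interested in that PR, a hedged sketch of pulling the community string out of the installed config (assumes a single rocommunity line in /etc/snmp/snmpd.conf):

        # Grab the community name from the first rocommunity line,
        # defaulting to "public" if none is found
        community=$(grep -m1 '^rocommunity' /etc/snmp/snmpd.conf | awk '{print $2}')
        echo "Using community: ${community:-public}"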
  15. Simple Network Management Protocol (SNMP) is a widely used protocol for monitoring the health and welfare of electronics. This Unraid plugin installs SNMP and its dependencies and configures them for basic use with tools such as Observium and Grafana. For example, it adds extensions into SNMP to output Unraid share and disk size, disk temperature, memory usage, and CPU speed.

      Install link (6.11 and newer): https://raw.githubusercontent.com/kubedzero/unraid-snmp/main/snmp.plg
      Install link (6.7 through 6.10.3): https://raw.githubusercontent.com/kubedzero/unraid-snmp/14cf6e875860526a25148bd86c2f812bbfddb590/snmp.plg

      Unraid 6.10.3 and older use older versions of libc/glibc (2.33 in 6.10.3, 2.30 in 6.9.2), while the latest versions of net-snmp and perl require libc 2.34 and newer.

      GitHub: https://github.com/kubedzero/unraid-snmp

      Other notes:
      - Instructions for building and/or contributing features can be found in the GitHub repo's README
      - There is a Settings page for users to tweak the SNMP configuration to their liking. Be warned that the plugin install script expects certain config values, and may also require modification depending on the changes to the SNMP config (such as the community, permissions, etc.)

      Good-to-know debugging: If you don't want to reboot and are experiencing install issues, perhaps it's because Unraid thinks the package is already installed. Run `removepkg net-snmp unraid-snmp` to uninstall both packages if they're installed (it does nothing otherwise). You can also run `rm -rf /boot/config/plugins/snmp/` to remove the plugin install directory from the USB drive, wiping out any customizations. Both of these steps are performed when uninstalling the plugin through Unraid's UI.
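      Once installed, a quick way to confirm SNMP is answering (hedged: assumes the default `public` community, querying from the Unraid box itself, and that the plugin registers its scripts via snmpd's extend mechanism, as the "extensions" wording above suggests):

        # Walk the extension output over SNMP v2c with the default community;
        # nsExtendOutputFull holds each extend script's full output
        snmpwalk -v 2c -c public localhost 'NET-SNMP-EXTEND-MIB::nsExtendOutputFull'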
  16. I'm still on 6.9.2 and all is still working fine here with macOS Ventura 13.1. Now that 6.11.5 is out I might try upgrading and giving it a shot at some point.
  17. Right on. The install script does a sanity check to ensure it's working correctly, and fiddling with the snmpd.conf file can absolutely get it into a state where the install script no longer works. I can't think of a good way of validating the install was successful while also allowing for custom snmpd.conf files. For those adventurous enough to modify the config, I'm hoping they're also confident in editing the PLG to disable the sanity test and allow installs to proceed. And as you noted, if one gets into a bad state, deleting the SNMP files from /boot and rebooting is a surefire way of starting from scratch. The logs posted by @dgaglioni showing "unraid-snmp-2021.05.21-x86_64-1 (already installed)" indicate that this was not a fresh install.
  18. Got it. Well, if there's anything I can help with (different settings, additional logs on the macOS or Unraid side, reverting to 6.9.2 and diffing outputs, etc) let me know!
  19. Still no luck getting Time Machine to work with Unraid 6.11.1. Nothing shows up in the syslog to indicate a crash of SMB. The typical "Operation not supported by device" UserInfo={DIErrorVerboseInfo=Failed to initialize IO manager: Failed opening folder for entries reading} error still shows up on the macOS (12.6) side. I've got zero extra Samba custom configurations/files, so this is running with vanilla Unraid as far as I'm aware. testparm -sv shows the following: https://pastebin.com/mmYPTz9n Specifically, here is the share for Time Machine:

        [TimeMachine]
        path = /mnt/user/TimeMachine
        valid users = backup
        vfs objects = catia fruit streams_xattr
        write list = backup
        fruit:time machine max size = 1200000M
        fruit:time machine = yes
        fruit:encoding = native

      @limetech As requested, I've attached diagnostics from Unraid as well, output immediately after a reboot and after failing to back up from two different computers: pbox-diagnostics-20221017-1212.zip
  20. Great to see. Hopefully this will help correct the Time Machine incompatibilities many people are seeing after 6.10. More details in and
  21. If you use SNMP without adjusting any of the default settings, it should work without a hitch. Those who have had issues still haven't been able to tell me, as the maintainer, exactly what to do to reproduce them, so as far as I know there shouldn't be any problems with running this plugin. At the end of the day, this plugin is just a wrapper script that installs the SNMP Slackware package, so in the worst case you could just default to installing SNMP manually.
  22. My guess, in conjunction with your Docker logs, is that somehow the localhost address isn't configured/accessible. There's some discussion earlier in the thread, though I don't think we ever collectively got reproduction steps to isolate whether or not this was the case. The thinking with this SNMP install script is that localhost should be available since Unraid would just be pinging itself, and that /boot exists on all Unraid installations and should be a good baseline. If either of those fails to be true, the installation script fails. That said, SNMP might have actually been installed, and perhaps these tests aren't passing for some other reason. I suspect that Docker networking might be fiddling with localhost, but I don't use Docker myself so I've been unable to reproduce. I'd say you have two options here:
      - Try to figure out if localhost is working, or if SNMP is working at all. Mess with Docker networking, install SNMP manually, and just generally deep-dive on your system (some quick checks are sketched below)
      - Remove the validation lines from your local copy of the PLG file, which on the next install would just skip the validation and treat it as a successful install. You could strip it all out, or just remove the "exit 1" line: https://github.com/kubedzero/unraid-snmp/blob/main/snmp.plg#L195
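      For the first option, a couple of hedged sanity checks (assumes the default `public` community; substitute yours if you've changed it):

        # Is localhost resolving and reachable at all?
        ping -c 1 localhost

        # Is snmpd running, and does it answer locally?
        ps -ef | grep [s]nmpd
        snmpget -v 2c -c public localhost sysUpTime.0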
  23. Yes! I think this is all you need.
  24. Another option is just to downgrade to 6.9.2. Time Machine has been working flawlessly on all my machines for the past few weeks since performing the downgrade!