About kubed_zero


  1. I don't know what "check_snmp" is, but if you use "snmpwalk" remotely it works fine. Sorry, I can't help you here beyond saying:

     - The same command I posted earlier, with localhost swapped out for the IP address and executed from a Mac on the network, runs fine.
     - You might want to consider using the same SNMP implementation, i.e. "snmpwalk." Otherwise, you should hunt through the documentation for the SNMP tool you're using to figure out how to pass in the arguments correctly.
     - It could also be that whatever host OS you're using is interpreting quote characters differently.

     Good luck!
  2. Wrap your identifier in a set of single quotes so the double quotes don't get interpreted by the CLI:

     root@myUnraid:~# snmpwalk -v 2c -c public localhost 'NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree"'
     NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".1 = STRING: CoolShare1: 446446297088
     NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".2 = STRING: RandomShare: 446446297088
     NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree".3 = STRING: SharedShare: 392926117888
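     A minimal shell sketch of the quoting behavior described above (the variable names are illustrative, not part of any tool): the shell consumes unprotected double quotes before snmpwalk ever sees the OID, while single quotes preserve them.

     ```shell
     # Assign the OID with and without single-quote protection to show what
     # the shell actually passes along. Variable names are illustrative only.
     unprotected=NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree"
     protected='NET-SNMP-EXTEND-MIB::nsExtendOutLine."sharefree"'
     echo "$unprotected"   # the shell has stripped the double quotes
     echo "$protected"     # the double quotes survive for snmpwalk
     ```

     The first echo shows the OID with its double quotes already removed, which is why the unprotected form fails to match the extend entry.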
  3. The log says /boot can't be found. It's failing on https://github.com/kubedzero/unraid-snmp/blob/master/snmp.plg#L229

     When I run the command "snmpwalk -v 1 localhost -c public hrFSMountPoint" on my host, I get:

     root@myUnraid:~# snmpwalk -v 1 localhost -c public hrFSMountPoint
     HOST-RESOURCES-MIB::hrFSMountPoint.1 = STRING: "/mnt/disk1"
     HOST-RESOURCES-MIB::hrFSMountPoint.2 = STRING: "/mnt/disk2"
     HOST-RESOURCES-MIB::hrFSMountPoint.3 = STRING: "/mnt/disk3"
     HOST-RESOURCES-MIB::hrFSMountPoint.24 = STRING: "/dev/shm"
     HOST-RESOURCES-MIB::hrFSMountPoint.25 = STRING: "/var/log"
     HOST-RESOURCES-MIB::hrFSMountPoint.26 = STRING: "/boot"

     It's also possible to run this replacing "localhost" with your host's IP address. If you can run the SNMP command manually and show us what you see, that could be helpful. Perhaps snmpd just isn't running and you need to reboot/reinstall. Or perhaps /boot truly is missing, but I can't see how that would be possible.
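     For anyone scripting the check above, here's a hedged sketch that greps a hrFSMountPoint walk for /boot. The sample text stands in for live snmpwalk output, since the real command needs snmpd answering on the host; on a live system you would pipe snmpwalk itself into grep instead.

     ```shell
     # Sample lines standing in for real `snmpwalk -v 1 -c public localhost
     # hrFSMountPoint` output; substitute the live command on a real host.
     walk_output='HOST-RESOURCES-MIB::hrFSMountPoint.1 = STRING: "/mnt/disk1"
     HOST-RESOURCES-MIB::hrFSMountPoint.26 = STRING: "/boot"'
     if printf '%s\n' "$walk_output" | grep -q 'hrFSMountPoint.*"/boot"'; then
         result="boot-present"
     else
         result="boot-missing"
     fi
     echo "$result"
     ```

     If this prints "boot-missing" against real output, the plugin's /boot check would fail for the same reason.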
  4. Wild. A quick Google search for "realtek invalid ocp" turned up https://lists.openwall.net/netdev/2012/07/09/143, which makes me think it's something in the Realtek software stack, although I can't pinpoint exactly what's wrong. I wonder if the SNMP Slackware package is trying to query/bind the different network interfaces in the system, and since the Realtek interface isn't disabled in the BIOS, it finds that one as well. This is just speculation, though. I'd suggest BIOS-disabling the unused Realtek NIC that's causing you issues, but I'd be interested in hearing anything else you find. Assuming the motherboards or network cards are different, maybe the working Realtek NIC is a different model than the one causing issues. Or maybe the single Realtek NIC doesn't have problems because it's actively bound to an IP and in use.
  5. Ha! Sorry. I run unRAID within a VM under ESXi. Thus, I only use it for its NAS functionality, with only a couple of plugins: Nerd Tools, Open Files, SNMP, and a couple Dynamix ones. No need for CA with a setup as simple as this! Not to mention this is all on a 1GB USB key with very little free space.
  6. Just created a new repo for this: https://github.com/kubedzero/unraid-community-apps-xml I don't use CA myself, but I can spin up a test instance of unRAID later to confirm this works, assuming there's a way to reference the new XML file manually.
  7. Created a pull request for the Coppit package on GitHub, https://github.com/coppit/unraid-snmp/pull/6, but they don't seem to have been active since August 2018, either here or on GitHub. With that in mind, I've made a version compatible with unRAID 6.7.0 in a forked repository that I'll keep updated for myself and anyone else who wants to use it: https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg
  8. Thanks for pointing me to the SMB Extras section. While that wasn't the issue (I don't use any SMB Extras parameters), it did alert me to the separate SMB settings page, where I saw the "Enhanced macOS interoperability" setting. I had this set to No because I've primarily used this unRAID server for Windows clients, but I stopped the server, enabled it, and then started the server back up (no reboot needed).

     Immediately after enabling Enhanced macOS interoperability, I saw two new SMB Export dropdown options: Yes/TimeMachine and Yes/TimeMachine (hidden). I enabled that, set the 2.5TB volume size limit I had on my old AFP share, and then mounted the share on my Mac. Upon opening the Time Machine Select Disk settings, I now see it listed. Great!

     So if anyone else is struggling, make sure the global SMB setting on the Settings page for "Enhanced macOS interoperability" is set to "Yes" and that the SMB "Export" option for the share you want to use is set to "Yes/TimeMachine." I personally set the security to Private just so I have a clear user/password to log in. The Export option won't show up (I didn't even know it existed) until the global SMB setting is turned on. The only requirement here is to have the array in a stopped state while you change the global setting. Hope this helps!
  9. I can guarantee that did not happen for me when I tested it multiple times. I created a new share called TMBackup with private access for a user called "backup." Then, on the Mac, I used Connect to Server with "smb://ip.ip.ip.ip/TMBackup" and entered the user and password. I could see the empty share, then went to the Time Machine settings. Under Select Disk, only the "Other Airport Time Capsule" is listed. If you got it working, can you list the steps you took?
  10. Strange. I've got two machines running 10.14.4 that connect via AFP to the Time Machine share running on UnRAID 6.7.0, and they have been backing up without issue. I will experiment some more with the SMB share for Time Machine backup, but it hasn't been obvious so far. Is it the case that we'll need to create a volume on the SMB share and then back up to that volume, or should it just show up as a destination once we create the share? Ah, and you're running the AFP version if you followed the guide.
  11. Have you gotten TM working with SMB or AFP? I had it working fine for years with AFP (still working fine) but wanted to migrate to SMB and ran into issues there.
  12. +1 seeing the same behavior. As a data point, my computer worked fine with the AFP time machine backup beforehand.
  13. Just wanted to say I had the same issue. I updated a couple days ago to 6.5.0 and my script stopped getting scheduled. I had it under /boot/plugins/mycron/myscript.cron and the file looked like:

      #will need to call command `update_cron` to load these in without reboot
      * * * * * /boot/myScripts/hdStats.php &> /dev/null

      I moved it to /boot/plugins/dynamix/myscript.cron. After running update_cron, it showed up in /etc/cron.d/root just like it used to. Thanks for the tip!
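      As a quick sanity check on a .cron line like the one above, you can split it into its five-field schedule and the command, just to eyeball that nothing is missing. This is a sketch assuming the five-field crontab format shown in the post; the line is copied from it verbatim.

      ```shell
      # The crontab line from the post, split into schedule and command parts
      # for a quick eyeball check (assumes the five-field schedule format).
      line='* * * * * /boot/myScripts/hdStats.php &> /dev/null'
      schedule=$(printf '%s\n' "$line" | awk '{print $1, $2, $3, $4, $5}')
      job=$(printf '%s\n' "$line" | cut -d' ' -f6-)
      echo "schedule: $schedule"
      echo "job:      $job"
      ```

      Five asterisks means "every minute"; anything other than five schedule fields before the command would be a malformed entry.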
  14. As I understand it, "doing things behind the scenes" can occur in two different ways. First, the TRIM/garbage-collection command is issued by the OS, and the drive moves blocks around to get to an ideal performance state. If the OS doesn't support this, or the commands are blocked by an intermediate layer like hardware RAID or unRAID, TRIM can't occur and the drive's performance will decay over time. The second option is that the drive does garbage collection behind the scenes without informing the OS. In that case, I don't think the system observes any changes before/after a garbage collection occurs, since it's abstracted away entirely into the SSD's hardware. In this second scenario, then, it should be safe to run SSDs alongside HDDs, since they behave identically from the perspective of the OS. Am I understanding that right?
  15. Thanks for the link! That was a rabbit hole. What I got out of it was that if the SSD supports automatic, behind-the-scenes TRIM/garbage collection, it's safe to use in the array. Then it's just a matter of storing files on that drive and moving them to slower storage when I run out of space.