kubed_zero

Members
  • Content Count: 39
  • Joined
  • Last visited

Community Reputation: 3 Neutral

About kubed_zero
  • Rank: Advanced Member
  • Gender: Undisclosed

  1. Wild. A quick Google for "realtek invalid ocp" gave me https://lists.openwall.net/netdev/2012/07/09/143 which makes me think it's something to do with the Realtek software stack, although I can't pinpoint exactly what's up. I wonder if it's the SNMP Slackware package trying to query/bind the different network interfaces in the system, and since the Realtek interface isn't disabled in the BIOS it finds that as well. This is just speculation though. I'd suggest BIOS-disabling the unused Realtek NIC that's causing you issues, but would be interested in hearing anything else you find. Assuming the motherboards or network cards are different, maybe the working Realtek NIC is a different model than the one that's causing issues. Maybe the single Realtek NIC doesn't have problems because it's actively bound to an IP and is in use.
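     If you want to dig deeper before disabling it, something like the following from the unRAID console should show which kernel driver is bound to the Realtek NIC and whether those OCP errors are still being logged. A rough sketch only; interface names and the driver module (r8169 vs. r8168) will vary by system.

         # Identify Realtek NICs and the kernel driver bound to them
         lspci -nnk | grep -iA3 realtek
         # Look for Realtek driver messages and OCP errors in the kernel log
         dmesg | grep -iE 'r8169|r8168|ocp'
         # Quick view of which interfaces are up and have addresses bound
         ip -br addr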
  2. Ha! Sorry. I run unRAID within a VM under ESXi, so I only use it for its NAS functionality with just a couple of plugins: Nerd Tools, Open Files, SNMP, and a couple of Dynamix ones. No need for CA with a setup as simple as this! Not to mention this is all on a 1GB USB key with very little free space.
  3. Just created a new repo for this: https://github.com/kubedzero/unraid-community-apps-xml I don't use CA myself, but I can spin up a test instance of unRAID later to confirm this works, assuming there's a way to reference this new XML file manually.
  4. Created a pull request for the Coppit package on GitHub (https://github.com/coppit/unraid-snmp/pull/6), but it doesn't seem like they've been active since August 2018, either here or on GitHub. With that in mind, I've made a version compatible with unRAID 6.7.0 in a forked repository that I'll keep updated for myself and anyone else who wants to use it: https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg
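     For anyone who hasn't installed a plugin from a URL before: paste that raw link into Plugins > Install Plugin in the web UI, or run it from a terminal. The CLI helper name here is from memory, so treat it as a sketch rather than gospel:

         # Install the forked SNMP plugin straight from the raw GitHub URL
         plugin install https://raw.githubusercontent.com/kubedzero/unraid-snmp/master/snmp.plg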
  5. Thanks for pointing me to the SMB Extras section. While that wasn't the issue (I don't use any SMB Extras parameters), it did alert me to the separate SMB settings page, where I saw the "Enhanced macOS interoperability" setting. I had this set to No because I've primarily used this unRAID server with Windows clients, but I stopped the array, enabled it, and started the array back up (no reboot needed).

     Immediately after enabling Enhanced macOS interoperability, I saw two new SMB Export dropdown options: Yes/TimeMachine and Yes/TimeMachine (hidden). I enabled that, set the 2.5TB volume size limit I had on my old AFP share, and mounted the share on my Mac. Upon opening Time Machine's Select Disk settings, I now see it listed. Great!

     So if anyone else is struggling: make sure the global SMB setting on the Settings page for "Enhanced macOS interoperability" is set to "Yes," and that the SMB "Export" option for the share you want to use is set to "Yes/TimeMachine." I personally set the security to Private just so I have a clear user/password to log in. The export option won't show up (I didn't even know it existed) until the global SMB setting is turned on, and the only requirement is that the array be stopped while you change the global setting. Hope this helps!
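     For the curious: my understanding is that the "Enhanced macOS interoperability" toggle just enables Samba's vfs_fruit module, so the share ends up with a stanza roughly like the one below. This is a sketch of the relevant Samba options, not necessarily what unRAID literally writes; the share name, path, and size limit are from my setup.

         [TMBackup]
             path = /mnt/user/TMBackup
             # Apple-compatibility VFS modules that make Time Machine discovery work
             vfs objects = catia fruit streams_xattr
             # Advertise the share as a Time Machine destination
             fruit:time machine = yes
             # Mirrors the 2.5TB volume size limit set on the share
             fruit:time machine max size = 2500G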
  6. I can guarantee that did not happen for me when I tested it multiple times. I created a new share called TMBackup with Private access for a user called "backup," then used Connect to Server on the Mac ("smb://ip.ip.ip.ip/TMBackup") and entered the user and password. I could see the empty share, but when I went to the Time Machine settings and opened Select Disk, only the "Other Airport Time Capsule" was listed. If you got it working, can you list the steps you took?
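     One more thing worth trying if the share refuses to appear under Select Disk: macOS can set the destination from the command line, which sometimes works even when the GUI won't list the share. Substitute your own user, password, and IP; I haven't verified this against unRAID myself, so consider it a sketch.

         # Point Time Machine directly at the SMB share
         sudo tmutil setdestination "smb://backup:PASSWORD@ip.ip.ip.ip/TMBackup"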
  7. Strange. I've got two machines running 10.14.4 that connect via AFP to the Time Machine share on unRAID 6.7.0, and they have been backing up without issue. I will experiment some more with the SMB share for Time Machine backup, but it hasn't been obvious so far. Is it the case that we'll need to create a volume on the SMB share and then back up to that volume, or should it just show up as a destination once we create the share? Ah, then you're running the AFP version if you followed the guide.
  8. Have you gotten TM working with SMB or AFP? I had it working fine for years with AFP (still working fine) but wanted to migrate to SMB and ran into issues there.
  9. +1 seeing the same behavior. As a data point, my computer worked fine with the AFP time machine backup beforehand.
  10. Just wanted to say I had the same issue. Updated a couple days ago to 6.5.0 and my script stopped getting scheduled. I had it under /boot/plugins/mycron/myscript.cron and the file looked like:

         # will need to call command `update_cron` to load these in without reboot
         * * * * * /boot/myScripts/hdStats.php &> /dev/null

      I moved it to /boot/plugins/dynamix/myscript.cron. After running update_cron it showed up in /etc/cron.d/root just like it used to. Thanks for the tip!
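      In case it saves someone a minute, the whole fix from the console boiled down to the following (paths are from my setup):

         # Move the cron file to the location 6.5.0 picks up, then reload cron
         mv /boot/plugins/mycron/myscript.cron /boot/plugins/dynamix/myscript.cron
         update_cron
         # Confirm the entry landed in the generated crontab
         grep hdStats /etc/cron.d/root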
  11. As I understand it, "doing things behind the scenes" can occur in two different ways. First, the TRIM/garbage collection command is issued by the OS, and the drive moves blocks around to get to an ideal performance state; if the OS doesn't support this, or the commands are blocked by an intermediate layer like hardware RAID or unRAID, TRIM can't occur and the drive's performance will decay over time. Second, the drive does garbage collection behind the scenes without informing the OS. In that case, I don't think there are any changes observed by the system before or after a garbage collection occurs, since it's abstracted away entirely into the SSD's hardware. With this second scenario, then, it should be safe to run SSDs alongside HDDs since they will behave identically from the perspective of the OS. Am I understanding that right?
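      If it helps anyone check their own hardware: you can see whether the kernel thinks a drive accepts TRIM with the commands below (replace sdX with your SSD). A sketch only, and note that even if the drive advertises support, an intermediate layer can still block the commands from ever reaching it.

         # Show discard (TRIM) capabilities per block device; all zeros means no support
         lsblk --discard
         # Ask the drive itself whether it advertises TRIM
         hdparm -I /dev/sdX | grep -i trim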
  12. Thanks for the link! That was a rabbit hole. What I got out of it was that if the SSD supports automatic/behind-the-scenes TRIM/garbage collection, it's safe to use in the array. Then it's just a matter of storing files on that drive and moving them to slower storage when I run out of space.
  13. Luckily there was no data on the drive; it was gifted to me, and I plugged it in only to find it clicking. Shame, I was hoping to make something of it! Oh well. Thanks for the info!
  14. Hi friends, I have a variety of things on my shelf that I'm looking to get rid of.

      ***Hard drives and Storage related
      --1 WD20EFRX 3TB Western Digital Red NASware 2.0 - Used in a NAS - 90
      --1 WD2002FAEX 2TB Western Digital Caviar Black HDD - Used in a NAS, ~30k POH - 75
      --1 WD2001FASS 2TB Western Digital Caviar Black HDD - Used in a NAS, ~30k POH - 75
      --500GB WD5000AAKB IDE drive - 15
      --ICY DOCK FatCage MB155SP-B 5 in 3 HDD cage - 80
      -Old SATA drives, but fully functional and good for experimenting if you don't want to partition your main drive
      --Hitachi Deskstar 160GB - 5
      --Seagate Barracuda 80GB (I have two of these) - 5
      --Seagate Momentus 2.5" 160GB - 5
      --500GB 2.5" Hitachi - 20

      ***Peripherals
      --Massdrop Hall Effect Bamboo RGB Mechanical keyboard - Feels like MX Reds, great animations - 100
      --Westone UM Pro 30 headphones, brand new and sealed in packaging - 375
      --Logitech Anywhere MX mouse - 30
      --Orico USB 3.0 HDD Docking station - 10
      --HP full size keyboard - cost of shipping, free to someone that needs it
      --Logitech M315 wireless optical mouse - 5
      --Two PS3 Sixaxis controllers, one with L2/R2 trigger mods - 25 each
      --Logitech Quickcam Pro 900 - Looks like there's a speck on the lens, may be able to be removed with cleaning - 10

      ***Main components (motherboards, CPUs, PSUs)
      --Intel Core i3 4170 LGA1150 Haswell processor - 75
      --Thermaltake Toughpower XT 675W modular PSU - 50
      --H61MGC LGA1155 motherboard with Ivy Bridge G1610 processor and DDR3 RAM - 75
      --ASUS B150M-A D3 motherboard (brand new), Skylake/Kaby Lake support (confirmed flashed to latest BIOS) with DDR3 RAM - 50
      --G41T-M7 LGA775 motherboard with 4C/4T Intel Xeon E5420 processor and DDR3 RAM - 40

      ***Other Components
      --ARK IPC-3U380 3U Rackmount Chassis - 3x 5.25" bays, ATX motherboard support, ATX PSU, 5 PCIe slots, 8 3.5" internal bays (if small motherboard used) - 40
      --Dell Powerconnect 2708 8 port managed Gigabit Ethernet switch - 30
      --LSI 9210-8i RAID card with top-mounted SAS ports and full height bracket - 60
      --Bluray Burner drive - 35
      -Indigo Xtreme Thermal compound for LGA 1155/1156 - $15
      --EK-FC Terminal DUAL Parallel 3-Slot - https://www.ekwb.com/shop/ek-fc-terminal-dual-parallel-3-slot - 10
      --EK-FC Terminal TRIPLE Parallel - https://www.ekwb.com/shop/ek-fc-terminal-triple-parallel - 15
      --GoPro Hero 4 black with a bunch of extras - 350
      -Hero 4 Black
      -32GB microSD card and microSD to SD adapter
      -2 original and 1 extended battery
      -WiFi remote
      -Suction Windshield/Window mount (cars, time lapses, etc)
      -Chest strap
      -Head Strap
      -Handle grip
      -Dive housing (deeper waterproof rating)
      -Regular housing
      -Low profile housing
      -Connection cables
      -90 degree and extension mounts
      -Extra backdoors

      Timestamp and picture of accessories:
      http://i.imgur.com/l6uDqUQ.jpg
      http://imgur.com/a/Q8kH9
      http://i.imgur.com/lLRRLyC.jpg
      http://imgur.com/a/21gHF
  15. Is there any way for the cache pool to be a read cache as well as a write cache, or even just a read cache? I read the same or predictable/sequential data off the array over long periods of time, and it would be great to spin down the HDDs and keep the last-accessed files in a cache on the SSD cache pool, or even in memory. I think this is what "bcache" does in some Linux distros. Has this ever been discussed? Thanks!