Everything posted by AgentXXL

  1. FYI - I have been seeing this issue (the disappearing user shares) since upgrading to 6.10 RC4. I'm using NFS for my shares as the majority of my systems are Linux or Mac. Macs in particular have speed issues with SMB shares, but NFS works great. The gotcha is that I don't use tdarr... in fact I don't use any of the *arr apps. I grabbed diagnostics just now as it happened again. I will send them via PM if anyone wants to look at them, but I prefer not to post them here. Although I use anonymize, going through the diagnostics reveals they still contain information that I consider private. I'll be taking my own stab at the diagnostics shortly, but I've disabled hard links as suggested and will see if that helps - a quick test is sketched below.
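     In case it helps anyone else debugging this, here's a quick way to confirm whether hard links are actually the trigger - try creating one over the NFS mount (the mount path below is just a placeholder for wherever your share lands):

         # On an NFS client, inside the mounted share (path is a placeholder):
         cd /mnt/remotes/tower_media
         touch original
         ln original hardlink      # this is the operation hard-link support governs
         stat -c %h original       # prints a link count of 2 if the hard link was created

     If the ln starts failing once hard links are disabled but the shares stop disappearing, that would point squarely at hard-link handling.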
  2. Thank you for the updates! That worked, and now I'm just left with the 'logger' message, but I do think it's from using a remote syslog server.
  3. @ich777 I just had a small realization about why there are two such messages - I'm now using 2 of your plugins, Nvidia and ZFS. That makes it likely it's the same code in both plugins, and as such it could be related to the Plugin Update Helper if both use it. Note that the messages go away if I boot without those plugins installed, so it's certainly looking like they're the culprits. I'll be sending you my fresh diagnostics from yesterday shortly. I may have tracked down the logger message as something related to using a remote syslog server. The rsyslog functions are initialized before the network is fully up, so it fails with a few errors during bootup. That's a native unRAID issue, so I'll follow up with a bug report for them. Regardless, thank you again for your plugins! I'm very pleased with the functionality of both of the ones I'm using.
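     For anyone else who forwards to a remote syslog server, the usual stock-rsyslog workaround for the network-not-up-yet problem is a disk-assisted queue on the forwarding action. This is only a sketch (the target IP and port are placeholders), not what unRAID itself generates:

         # /etc/rsyslog.d/remote.conf - forward everything to a remote syslog
         # server, queuing to disk until the network comes up (IP/port placeholders):
         *.* action(type="omfwd" target="192.168.1.10" port="514" protocol="udp"
                    queue.type="LinkedList" queue.filename="fwdq"
                    queue.saveOnShutdown="on" action.resumeRetryCount="-1")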
  4. Thank you... I'm not seeing any issues that I can trace to the job 3/job 4 messages, but my OCD prefers to see a clean boot. I'm in the process of doing some minor hardware maintenance today, so I'll capture a fresh set of diagnostics and send them to you. It's odd that I can't see anything in syslog that mirrors those messages, and the same goes for the 'logger' message that I'm also trying to track down. I haven't seen anything that would point at the Plugin Update Helper, but I'm using both your Nvidia and ZFS plugins. If you need me to check anything on my system, let me know. It's obviously not a rush, but hopefully we can track down the source of these messages. Even if there's no simple way to prevent them, knowing what actually causes them will help satisfy my OCD.
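     For reference, this is roughly how I've been hunting for them, in case I'm missing an obvious spot (the patterns are just guesses at the exact wording):

         # Search the persistent log and the kernel ring buffer for the messages:
         grep -iE 'job [0-9]+|send message failed' /var/log/syslog
         dmesg | grep -iE 'job [0-9]+|bad file descriptor'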
  5. I will grab a clean set after my next reboot, but I prefer to send them directly rather than post them publicly. I do use the anonymize function, but having looked through them, there's still info captured that I'd prefer not be made public. That said, it's more about the job 3/job 4 messages, which seem very similar to the message reported by N385 last year. Has the issue that caused those messages been fixed? I find no evidence of those messages in syslog, and haven't found any logs specific to the Nvidia plugin that I can look through. I understand wanting a full set of diagnostics, but it would be nice to get tips from more advanced users like yourself so that we can do our own troubleshooting.
  6. I just had to do a flash drive replacement and everything is back up and running, but I noticed that an old reported issue is apparently still occurring and now appears twice. @ich777's reply: The attached picture of my normal boot now shows 2 of those messages. I fixed the /var/temp/go issue (ZFS plugin related), but I'm also still trying to find what causes the logger: send message failed: Bad file descriptor message. Any thoughts on the job 3/4 messages?
  7. 6.10 RC4, using NFS4. But this also occurred when I was using Samba shares, so it's likely not related to NFS.
  8. No, I tried, but I had issues with lots of packet retransmit requests and collisions, so I turned jumbo frames off.
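     If anyone wants to experiment with jumbo frames themselves, the usual sanity check is below; the interface name and target IP are placeholders, and every device on the path (switch included) has to accept 9000-byte frames:

         ip link set eth0 mtu 9000       # enable jumbo frames on one interface
         # 8972 = 9000 - 20 (IP header) - 8 (ICMP header); -M do forbids
         # fragmentation, so the ping only succeeds if the whole path passes
         # jumbo frames end to end:
         ping -M do -s 8972 -c 4 192.168.1.10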
  9. That's the first thing I usually check, and it's always been green. Both servers are connected using 10Gbps NICs on a Mikrotik CRS305 switch. That switch is linked to my Brocade ICX6610 with a passive DAC cable. Even other systems connected to the Brocade don't see any server disconnection issues, so I'm pretty sure my network and the systems themselves are OK. Just a weird glitch, especially since there are no messages in syslog, even when re-trying the Mount button. Regardless, I'll grab diagnostics the next time it happens.
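     For completeness, this is the kind of check I mean (the interface name is a placeholder):

         ethtool eth0                                 # link state and negotiated speed
         ethtool -S eth0 | grep -iE 'err|drop|crc'    # per-NIC error/drop counters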
  10. Yes, that's what I suspected. But the issue occurs when the servers are still online. I've been watching the logs from my network switches and so far there's no evidence of a dropped connection when the issue occurs. As it doesn't happen often and I've been used to rebooting to resolve it, I've never bothered grabbing full diagnostics. I've checked syslog before rebooting, but as mentioned I see no messages, even when I click the Mount button. I'll keep reminding myself to grab full diagnostics the next time it happens.
  11. I'm encountering an odd issue with shares on my 2 unRAID servers. Every so often the NFS shares that I mount via UD will 'drop' and show 0 bytes for size and no info (red bar using the color theme) for free/used space. This also used to happen with SMB shares. When these 'drops' occur, clicking the Mount button does nothing and, even odder, no errors or messages are reported in syslog. When it does occur, I usually have to stop all running Docker containers and reboot to be able to re-mount the shares. Any idea what might be causing this, and any suggestions for preventing it? I'll be sure to grab diagnostics the next time it happens, but thought I'd ask to see if others have seen this issue.
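     Next time it happens I'll also run the usual NFS checks from the client side before rebooting - something like this, where the server IP and mount path are placeholders:

         showmount -e 192.168.1.20          # is the export still offered by the server?
         mount -t nfs4                      # is the mount still listed on the client?
         df -h /mnt/remotes/tower_media     # a stale mount often hangs or shows 0 here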
  12. I see this question was asked back in 2019, but UD has changed a lot since then, so I'll ask again: is there a way to hide devices so that they don't show up under UD? I'm using the ZFS plugin along with ZFS Companion (Dashboard widget) and ZFS Master (Main tab addition). As the ZFS array is monitored by those 2 plugins, it would be nice if UD had a toggle to hide a drive entirely (or disable its Mount button) once it's been passed through.
  13. The rule I'm trying to implement is 115 characters long. If you could increase it to that or higher, I'd appreciate it. Thanks!
  14. I'm trying to set up my NFS rule for my UD-mounted disks but have run into an issue. There appears to be a 100-character limit on that field in UD settings, so I can't enter the complete rule. Please advise whether this was done for a specific reason or is just something that was overlooked and can be changed. Thanks!
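     For context, a rule only slightly more elaborate than the default blows past 100 characters. Something along these lines (the subnets and options are purely illustrative, in standard exports(5) syntax) is already around 116 characters:

         192.168.1.0/24(sec=sys,rw,insecure,anongid=100,anonuid=99,all_squash) 10.10.0.0/24(sec=sys,ro,insecure,all_squash)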
  15. FYI, I've migrated my main unRAID system to a Chenbro RM41300, essentially a full-size rackmount that uses the same drive cages as the RM42300. I finally added a 90mm fan to the middle 4-drive cage (not shown in attached pics) and one more 90mm at the rear of the case. There's room for one more, but I didn't have another 90mm fan on hand, and so far the temps inside the case are quite nice: 30-40°C max. I'm holding onto my RM42300 as I have plans to migrate my 2nd unRAID system out of the Fractal Design Define 7XL.
  16. The non-hotswap option is perfectly fine, but since I already have the iStarUSA rack, I might as well use it. If I remember correctly, you can install a slim 80mm fan at the front panel in front of the 4-bay fixed drive cage. I'll check again when I next have the unit opened up. The drive I'm using is a Matshita BD-MLT UJ240AS, which is 12.7mm. It's this one: https://www.amazon.com/Notebook-Internal-Panasonic-Tray-Loading-Replacement/dp/B0822XG6G1 I've thought about getting one that also handles BDXL and is UHD-friendly, but I have an external USB drive that is UHD-friendly, so I'm content to use it when necessary.
  17. Thanks for the feedback on my previous posts. Glad to know they're helping others. And yes, the RM42300 can hold 16 drives - 2 x 5-bay hotswap modules for 10 drives, 4 in the middle cage behind the front panel USB ports/LEDs/switches, and 2 more in the 3.5" bays below the left and right drive bays. You will have to remove the fan module from the left side of the case and purchase the extra 3 x 5.25" cage for the Chenbro. The part number for the 2nd cage is 84H342310-003. It's the same cage that Chenbro provides for the right side and cost me $25 CAD from a local shop that carries the Chenbro line. Note that if you do put another cage in and remove the fan module, you'll definitely want a decent 80mm fan at the rear of the case that pulls air from the front over the drive bays and motherboard compartment and exhausts it out the rear. I haven't been able to install the 2nd cage on the left as my video card is too long, preventing the cage from being installed correctly. As shown in the picture in my previous post, I used one of the bottom 3.5" bays for a front panel USB module. The RM42300-F1 has USB 2.0 ports in the center front section, but the RM42300-F2 has two USB 3.0 ports; I added the 3.5" USB module as I got the F1 version. When I get the RM41300 with its increased length (full-size 4U, like the Supermicro CSE-847), I'll be able to install the left-side cage that I've got waiting, also with 1 of the iStarUSA 5-bay hot-swap units. So my RM41300 will have 10 hot-swap bays at the front, plus the 36 hot-swap bays in my Supermicro CSE-847.
  18. I use the Chenbro RM42300 with my main unRAID server. I have an iStar 5-bay hot-swap enclosure and slimline Blu-ray drive installed, and a front panel USB module with USB-A and USB-C ports. It's connected to my Supermicro CSE-847 DAS conversion by an LSI 2308-series HBA. Here it is sitting on top of the Supermicro. I've been having issues with the onboard Aquantia AQC111C 5Gbps NIC on the Asus Prime X299 Deluxe II motherboard, and Asus has been no help at all. It likely just needs a firmware update, as I found many other users with similar issues that were resolved once they updated the firmware. Alas, the official Marvell firmware updater only works for certain USB, add-in card and onboard versions of the Aquantia AQC-series NICs. Some in the community have been able to update their Asus onboard NICs by adding the specific device IDs for their motherboard to the XML config file that determines which NICs are eligible for the update. Asus claims there are no problems and no firmware updater available, but I provided evidence links in their own ROG forums to show otherwise. As I'm unwilling to live without my main unRAID system for the time an RMA would take, I decided to purchase a used X299 motherboard - the one I actually wanted when I first built the system, but which was out of stock everywhere at the time. I found the used one on eBay: an Asus WS X299 SAGE/10G with dual onboard 10Gbps NICs from an Intel X550 controller. It is unfortunately an SSI-CEB-sized board, and the Chenbro RM42300 won't accept that size. So, as I've been happy with the Chenbro, I purchased one of its full-size 4U rackmount brothers, the RM41300. The advantage of going with the RM41300 is that the front panel is essentially the same as the RM42300's, so I'll be able to transplant the hot-swap bays, USB module and slimline Blu-ray drive into it fairly easily. Both the case and motherboard have been shipped. I'll take some pics during the transfer/build and post them here.
  19. @ich777 Is the unraid-kernel-helper container usable to try and create a custom kernel based on 5.17 (for better Alder Lake support)? I would also like to add the latest drivers for the Aquantia 5GbE and 10GbE NICs (ver. 2.4.15, released 2022-02-22 - i.e. yesterday or today depending on your time zone). I've tried the latest Windows drivers (3.1.6.0) on Win10 x64 and they did improve my connection with far fewer packet retransmits, so I'm hoping the Linux driver does the same.
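     If the container can't handle it directly, my fallback is the generic out-of-tree module build against the new kernel's headers. This is only the standard pattern, not anything specific to the container (the source directory name is a placeholder for wherever Marvell's driver package unpacks):

         cd aquantia-atlantic-2.4.15     # placeholder for the unpacked driver source
         make -C /lib/modules/$(uname -r)/build M=$(pwd) modules
         insmod ./atlantic.ko            # quick test; a permanent install means copying
                                         # the .ko into /lib/modules and running depmod -a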
  20. Here's a snapshot of the wiring for the main disk mount area. I actually upgraded my system yesterday - went from a 6th-gen i7-6700K to a 12th-gen i9-12900K. unRAID has some issues with the 12th gen, but I'll live with them until unRAID moves to a newer kernel that better supports all of Alder Lake's features - specifically the iGPU; for now I'm using a GTX 970. As you can see, using the breakout cables with the LSI HBA means you don't have a horrid mess of regular SATA cables. The gotcha is that these thinner wires are more prone to breakage, but they're inexpensive, so I always keep spares on hand. While I was upgrading the system, I also made a custom power cable for the top 8 drives. The PSU (EVGA 1200W 80+ Gold) has a high-power rail that feeds these, so no worry about killing the PSU.
  21. There's not a lot to complain about with Synology - certainly I think they still provide a decent product line for those that don't want to 'roll their own' with unRAID (or other OSes). That said, I ran FreeNAS (now TrueNAS Core) for many years before jumping ship to unRAID in 2019, and I couldn't be more pleased with how it's worked out. As for the LSI HBA info: they are the most reliable and proven way to add SATA ports to an unRAID system. Using them with breakout cables for discrete drives, or with miniSAS SFF-8087 to SFF-8087 cables for SAS/SATA expanders, makes them a great choice. The most common model is the LSI 9207-8i, which can use 2 breakout cables for a total of 8 SATA ports. If you were thinking you might want to maximize the storage in a Define 7XL, then an LSI 9201-16i is the better choice, as it can do 16 SATA ports with 4 breakout cables. Here's an example link to one on eBay. Also check out 'The Art of Server' website and their eBay store. Note that some have had issues with the OEM models from China, but for the most part they're OK. I bought a couple from China and one didn't work properly, so I returned it and ordered a retail boxed version from the US. https://www.ebay.ca/itm/133882350555 Also see this section of the unRAID documentation on PCI (and PCIe) controllers. The table may be out of date, but the LSI products are still the recommended way to go. https://wiki.unraid.net/Hardware_Compatibility#PCI_SATA_Controllers As for needing more storage, it's all about what you intend to use the system for. I've been collecting media since I was 8 years old - almost 50 years' worth of TV/movies that's all been digitized or ripped from disc. There's a lot of family pics and video, as well as music, ebooks and more. My media server is in my signature below, as well as the backup server that resides in my Define 7XL. Personally, I find the case impressive in its capability, but it's too nice to use purely for storage, so I'm planning to repurpose it for a new video editing/flight sim setup. I already have a Supermicro CSE-847 (36 x 3.5" bays) for the main media server, but am watching for a CSE-846 (24 x 3.5" bays) to transplant the backup server into. I might get a chance to take some pics of the system cabling and will post them if/when I do. If you have any other questions, feel free to ask.
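     One quick check worth doing on any of these cards once installed - sas2flash is Broadcom/LSI's own flash utility for the SAS2 generation:

         lspci | grep -i lsi     # confirm the HBA shows up on the PCIe bus
         sas2flash -list         # shows the firmware version; for unRAID you want the
                                 # IT (initiator-target) firmware, not the IR (RAID) firmware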
  22. I have a total of 23 drives installed: 20 x 3.5" drives, with room for 2 more, plus 3 x 2.5" SSDs. I used all the brackets I ordered and was able to add 1 SSD mounted upside down at the top, along with the 3 x 3.5" drives that are part of the normal 18. I also added 1 more 3.5" drive at the bottom of the motherboard compartment, which is where I'll place the other 2 x 3.5" drives if/when needed. Cabling isn't as bad as you might imagine, but it definitely is a candidate for custom cables so it can look a lot cleaner and help with airflow. I use 3 x 140mm fans at the front of the case to pull air in over the hard drives and through the motherboard compartment, where it's exhausted out the rear. As for cabling, most of my SATA ports come from my LSI 9201-16i, which has 4 x SFF-8087 miniSAS connectors; I connect those to 4 x SATA forward breakout cables, giving me 16 SATA ports. The other SATA ports come from my motherboard and an M.2-to-5-port SATA controller. I haven't done any serious cleanup of the cables, as I eventually plan to move the entire setup into another enclosure like a Supermicro CSE-846. The Define 7XL is a great case, but it's also too nice to use purely as a storage server (mine is my 2nd unRAID system, used primarily for backups).
  23. COVID has forced many of us to build out our homelabs so that we can work from home. unRAID certainly isn't what I'd use if I was back at work daily. I just find it odd that this started after one of the last 2 or 3 updates to UD. I have done this same capacity upgrade procedure a couple of times since running rc2, and this is the first time I've encountered this problem. What's really annoying is that a drive that has been completely wiped by the manufacturer's tools (SeaTools in this instance) still shows up with the 'ARRAY' tag. That obviously points to something tracking the drive by s/n or some other method in either unRAID or UD. Very frustrating.
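     One thing I still plan to try is ruling out leftover on-disk signatures, even though tracking by serial number would survive this. A sketch - the device name is a placeholder, and -a is destructive:

         wipefs /dev/sdX          # list any filesystem/RAID signatures still present
         wipefs -a /dev/sdX       # erase them all - triple-check the device first!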
  24. I assume you mean the 'refresh' symbol? Tried that numerous times and it hasn't resolved it. Now waiting for users to respond to my reboot notification, but that could be hours. And as for rc3, who knows when that will come. Guess I get a forced break from my tasks until then. Do you know which version of UD I could try rolling back to so I can resolve it in the interim?
  25. Thanks for the quick response. It's definitely bothersome, and until the last few UD updates it wasn't an issue. In my case the drives were not dropped from the array or disabled - they were replaced while still fine, just upgraded to higher-capacity models. Perhaps you might consider adding some code to UD that lets users 'override' this scenario? At least the option to wipe all partitions from the disk and remove it from historical devices. I'm running 6.10.0-rc2 and have been for a few months, and this is the first time I've encountered this. I've used the same pro-active replacement of drives before they fail or get dropped from the array a couple of times since I've been on rc2; it's only after the last 2-3 UD updates that the issue started. I don't use USB drives in the array, and so far I can't recall any time where a SATA-connected drive was dropped except when it completely failed. Stopping and restarting the array or rebooting unRAID may work for some, but it isn't a great option when you have numerous users actively accessing data on the server.