SirChillsAlot


  1. Here are the diagnostics. I was just trying some other solutions (none of which worked), so this file is from a clean reinstall done just earlier. I haven't even attached my key, installed plugins, or tried to start the array. The logs should pretty clearly show what's going on as soon as the server boots. You should be able to see where I booted with all my drives attached, then unplugged half the connections as I outlined previously to see whether the problem was still the same. I then plugged them back in so the errors would pop back up for a couple of minutes before I captured the diagnostics file. Thank you! tower-diagnostics-20220625-1259.zip
  2. Thanks for the reply! Should I obtain the diagnostics while the problem is occurring (both controllers plugged in and throwing errors), or will that file show the past events?
  3. I recently began migrating the data on my array to a new set of disks, using hardware that has all worked previously; I just cleaned everything up and configured it differently. I also formatted my USB drive with a clean Unraid install (6.10.3) to "start fresh".

     The array hangs badly when trying to start, and even using Unassigned Devices with the array stopped is almost impossible. Looking at the logs, the server continuously displays the following error for every disk connected to my system through the StorageWorks enclosures I use (some 50 or so drives across 3 enclosures):

         Tower emhttpd: device /dev/sdae problem getting id

     Just substitute "sdae" with the corresponding device name for each drive connected.

     After a lot of troubleshooting, I was able to work around the problem by unplugging the SAS cable running to one of each enclosure's two controllers. I then get a string of messages in the log for each drive in the enclosure. Again, the small details in the following example change for each drive that was affected:

         Tower kernel: sd 5:0:13:0: device_unblock and setting to running, handle(0x0018)
         Tower kernel: mpt2sas_cm1: mpt3sas_transport_port_remove: removed: sas_addr(0x5000cca030155aea)
         Tower kernel: mpt2sas_cm1: removing handle(0x0018), sas_addr(0x5000cca030155aea)
         Tower kernel: mpt2sas_cm1: enclosure logical id(0x50014380367ac8c0), slot(26)

     After this, Unraid stops freaking out and I can use the drives, but the speed at which I can move files has obviously dropped. Before now, all of my hardware seemed to "work its magic" and used the multiple controllers for extra speed and redundancy. It all just worked as advertised; the system always seemed to understand that the two connections led to the same drives and how to handle them.

     The enclosures are attached to PCI cards with two SAS ports each. I've tried replacing the cards, moving them to different PCI slots, and routing the SAS connections to the enclosures in different configurations across the multiple cards: for example, connecting both controllers on enclosure 1 to card 1 and enclosure 2 to card 2, or instead connecting each enclosure's first controller to card 1 and each second controller to card 2. Both of these layouts have worked in the past with no other input from me.

     It seems to me that Unraid is trying to see each physical disk as two devices, with a different ID for each controller in the enclosure that contains the disk. I'm really at a loss as to what changed; I can't think of any configuration or installation change I've made that hasn't worked for me before. Please help!

     *Edit* By "different ID for each controller" in the last paragraph, I mean a different device name (what I called a "mount point"); that's probably more accurate. The device name (e.g. sdae) that shows up for each drive in the errors does not match the final one that's used when I only hook up one controller. As if the system is trying to mount each drive twice, once for each controller?
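
One way to sanity-check the "each drive shows up twice" theory is to compare the WWIDs the kernel reports for the sd nodes. The sketch below is only an illustration, not an Unraid feature: it assumes the kernel exposes /sys/block/sdX/device/wwid (as it typically does for SAS disks behind mpt2sas/mpt3sas HBAs) and groups device nodes by that value, noting which SCSI host each node sits behind.

    #!/usr/bin/env python3
    # Minimal sketch (not an Unraid tool): group /dev/sd* nodes by their SCSI
    # WWID to see whether a dual-ported SAS disk is being presented once per
    # enclosure controller. Assumes the kernel exposes .../device/wwid in sysfs.
    import glob
    import os
    import re
    from collections import defaultdict

    def read_wwid(name):
        """Return the kernel-reported WWID for a block device, or None."""
        try:
            with open(f"/sys/block/{name}/device/wwid") as f:
                return f.read().strip()
        except OSError:
            return None

    def scsi_host(name):
        """Return the 'hostN' component of the device's sysfs path (the HBA port)."""
        real = os.path.realpath(f"/sys/block/{name}")
        match = re.search(r"/(host\d+)/", real)
        return match.group(1) if match else "unknown"

    def main():
        by_wwid = defaultdict(list)
        for sys_dev in glob.glob("/sys/block/sd*"):
            name = os.path.basename(sys_dev)
            wwid = read_wwid(name)
            if wwid:
                by_wwid[wwid].append((name, scsi_host(name)))

        for wwid, nodes in sorted(by_wwid.items()):
            nodes.sort()
            listing = ", ".join(f"/dev/{n} ({h})" for n, h in nodes)
            flag = "  <-- two paths to one disk" if len(nodes) > 1 else ""
            print(f"{wwid}: {listing}{flag}")

    if __name__ == "__main__":
        main()

If two device nodes share a WWID but hang off different hosts, each disk really is being presented once per controller path rather than as a single multipath device, which would match the duplicate-device behaviour described above.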