Gorby

Everything posted by Gorby

  1. This just happened to me on my ML350 G6, and I was able to work around the issue by installing an HP / Intel 82571EB/82571GB dual-port NIC I had lying around. I was just wondering if anyone knows whether the corruption issue has been fixed, or how to tell if your system is affected by it and it's safe to re-enable the tg3 driver (a few commands I used to check which driver is in play are below). ripley-diagnostics-20220528-2218.zip
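     For anyone comparing notes, this is roughly how I checked whether the tg3 driver was still in use on my box (eth0 is just an example; your interface name may differ):

     # Confirm whether the tg3 kernel module is currently loaded
     lsmod | grep tg3

     # Show which driver is bound to a given network interface
     ethtool -i eth0

     # List NICs along with the kernel driver in use for each
     lspci -k | grep -A 3 -i ethernet

     # Check for an existing tg3 blacklist entry
     grep -r tg3 /etc/modprobe.d/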
  2. I replaced all the cables and the hot-swap trays and the issue persisted, so I installed both drives in an external SATA-to-USB enclosure on my desktop PC and found that the drives still had a ZFS pool on them from a previous FreeNAS test array. After deleting the ZFS pool and wiping the drives, I reinstalled them in my Unraid test server and everything worked. Just to see if I could recreate the issue, I removed the drives, put them back in my FreeNAS eval server, and recreated the ZFS pool. Then I reinstalled the drives in my Unraid server and the problem reappeared. After wiping the drives again, the problem went away and everything is working great. The commands I used to spot and clear the leftover ZFS labels are below.
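     In case it helps anyone else, these are roughly the commands I used to find and clear the stale ZFS labels (/dev/sdX is a placeholder; double-check the device name before wiping anything):

     # List any filesystem/RAID signatures still on the disk (read-only)
     wipefs /dev/sdX

     # Print any ZFS labels present on the first partition
     zdb -l /dev/sdX1

     # Erase all detected signatures from the disk (destructive!)
     wipefs -a /dev/sdX

     # Alternatively, clear the ZFS label directly from the old pool member
     zpool labelclear -f /dev/sdX1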
  3. Here is the diagnostic file you requested. tower-diagnostics-20191126-0729.zip
  4. I have an issue with drives disappearing when I attempt to add them as data or parity drives, but not as cache drives. I am running a trial version of Unraid 6.7.2 stable on an HP ML360 Gen 6 with (2) Intel 2.4 GHz CPUs, 64 GB ECC RAM, one four-bay 3.5" hot-swap chassis holding (2) 3 TB 3.5" Seagate drives and (1) 4 TB 3.5" Seagate drive, and one eight-bay 2.5" hot-swap chassis holding (1) 2.5" PNY 240 GB SSD and (2) 2.5" Seagate FireCuda 2.0 TB drives. The 3.5" drives are running off the built-in HP SATA controller and the 2.5" drives are running on an LSI SAS controller with IT firmware.

     Unraid sees all the drives and I was able to preclear all of them. When I go to build the array, Unraid again sees all the drives, but when I add the (2) 2.5" Seagate FireCuda drives to the array, they disappear. Another drive in the same hot-swap assembly on the same LSI controller is set up as my cache drive, and I am able to add it as a cache drive with no issues.

     If I build the array without the (2) 2.5" 2 TB drives, everything works fine and the two drives show up as unassigned. When I stop the array and attempt to add one of those two drives, all of the drives disappear, including the PNY SSD that I have set up as a cache drive. The drives will not show back up until I reboot the server, at which point the cache drive works fine but the two other drives no longer show up as unassigned or in the array.

     If I take the two drives out, format them in another PC, and put them back in the server, they again show up as unassigned. I can preclear them again with no issues, and if I install the CA plugin I can mount the drives and access them without any issues, but if I clear them and attempt to add them to the array, they again disappear. After they disappear they still show up as SCSI devices under System Devices, and I can access them from the CA Application Disk Locator plugin (a few commands I ran to confirm this are below). It appears the issue has something to do with the array subsystem rather than being a hardware issue, since the drives still show up under the devices section.

     I also don't know whether having drives on separate controllers is supported, but the cache drive works. I can also set up either of the (2) 2 TB drives as a cache drive and the array works fine; there is only an issue when I try to add them as data or parity drives. Any ideas or assistance would be greatly appreciated.
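     For reference, these are roughly the commands I ran to confirm the drives were still visible to the kernel after they vanished from the array page (sdX and sdY are placeholders for the two FireCuda devices):

     # Confirm the block devices are still registered with the kernel
     cat /proc/partitions

     # List drives by stable ID to see whether the two 2 TB drives are present
     ls -l /dev/disk/by-id/

     # Look for recent kernel messages about the drives dropping out
     dmesg | grep -i -e 'sd ' -e sdX -e sdY

     # Pull basic identity/SMART info directly from one of the drives
     smartctl -i /dev/sdX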