Nhatch411

Members
  • Posts: 13
  • Joined
  • Last visited

Nhatch411's Achievements

Noob (1/14)

Reputation: 2

  1. I'm having the same issue with SSH since upgrading to 6.9.1. I can telnet to the console without issue and the GUI works fine; however, any SSH connection fails with "Access Denied" (even on a test account without a password). Does anyone have a suggestion? (See the SSH debugging sketch after this list.)
  2. USB hardware RAID enclosure issues: I'm having trouble attaching a large (30TB RAID 50) USB hardware RAID array (Sans Digital TR8UT, USB 3.0) to my 6.7.0 array. The enclosure works as intended on a Mac as well as on my Windows laptop. My UD plugin is up to date, as is my server's (Supermicro X10DRH-iT) firmware. When I initially plug in the unit, lsusb shows a JMicron Tech Corp device attached and lsusb -t lists it as Class=Mass Storage, Driver=usb-storage, 5000M. After a few moments the array disappears from lsusb and the Unraid log shows: kernel: usb 4-1: device not accepting address 4, error -62. I've enabled and disabled EHCI Hand-Off in the BIOS with no visible effect. If I plug into a "legacy" USB 2.x port, the error is similar but reports error -71. I'm attaching some lsusb screen caps as well as my diagnostics zip (see the USB diagnostics sketch after this list). I know this HW RAID unit is not top of the line, but it's for a secondary copy of Plex media: we have limited cell service at the lake, so I'm setting up a Shield Pro with this unit as local storage for a secondary Plex server. I've already vetted the overall solution with an old Drobo I have, but it limits my volumes to 16TB, which would be a PITA to split and sync the media files across. I was also considering the Mediasonic H8R2-SU3S2, if anyone has had a better experience with those units. I have a hunch that either the JMicron chipset is missing something, there is a USB schema problem, or the 30TB volume size is an issue. Any ideas would be greatly appreciated! hatchnas-diagnostics-20190521-1651.zip
  3. I have an array with 15 data drives (3-8TB), 2 parity drives (8TB, mirrored), and 2 cache drives (256GB, mirrored), running v6.4.1. I've purchased 2 new 10TB drives with the intent of swapping them in for the existing 8TB parity drives. Sifting through the forums, I've been able to piece together the process (thanks SSD and others!), but I haven't been able to locate any posts describing the upgrade process with mirrored parity drives. Any help (or pointers to an appropriate post) would be greatly appreciated! Thanks all!
  4. Quick and final update: I rebooted the array, all drives were recognized, and the array auto-started without issue! The original problem with the 4 newest HDDs listing as "No Device" in their disk slots after a reboot has been resolved, as has the "Wrong" device error that appeared after the upgrade. Thanks all!
  5. Running the New Config utility (with Retain All) corrected the naming issue, and all drives are now online with a running array! Once back in the Main view, I selected "Parity is Valid" and started the array; everything seems to be back to normal. Later tonight I will reboot the array to see if the original issue is resolved, and I'll update this thread after verification. Thanks all (especially bonienl)!
  6. So I'm a little confused. If the identification on the data slots is where I'm having an issue, why should I choose "Retain all"? Won't that just keep the same device/identification naming problem that I'm currently seeing on those 4 data slots? I'm definitely not a guru on what "New Config" actually re-writes (frankly, on anything), but reading the utility's notes seems to imply that I should retain the slot groups that are "known good" (thus the Parity and Cache). My apologies for questioning your advice; I just want to make sure I understand the process/utility.
  7. I've pulled the diagnostics to submit with a defect report. Any recommendations on my "New Config" question? What are the ramifications of running "New Config" while preserving the Parity and Cache slot assignments, then reassigning the disks to the same slots they were in previously (with WWDN set to automatic)? Thoughts?
  8. Here is the controller: Serial Attached SCSI controller: LSI Logic / Symbios Logic SAS2008 PCI-Express Fusion-MPT SAS-2 [Falcon] (rev 03). I'm hesitant to add the plugin...
  9. Help! I was previously running v6.3.4, with 2 parity disks (8TB), 2 cache disks (256GB SSD), and 13 data disks (a mix of 3 & 4 TB SAS). Months back, I added 4 new disks to expand the array. After that, whenever I rebooted, those disks would show as "Missing" / "No Device". I would then select the appropriate disk for each device and start the array. A PITA, but I hadn't been able to spend the time to troubleshoot it. Jump to this morning: I verified my nightly CA_Backup and upgraded to 6.4 through the plugin. The upgrade completed successfully, and I went into Settings\Disk Settings and disabled Auto Start prior to rebooting the array (since I assumed I would need to re-assign the "problem" devices). Sure enough, after the reboot the same 4 disks are marked as "Missing" / "No Device". I selected the appropriate drives, at which point they change from "Missing" to "Wrong" - unlike in 6.3.4, which recognized the identification and allowed me to start the array. The names do appear different in Identification; for example, a drive now reports as H7240AS60SUN4.0T_001402E60HRX_PBH60HRX - 4 TB when it's looking for H7240AS60SUN4.0T_001402E60HRX - 4 TB. I believe the LSI controller is contributing to the name difference, but in 6.3.4 I was able to manually assign the drives (through the GUI) and start the array. So my big question is: should I use Tools\New Config to reassign the disk-to-device mappings (preserving the Parity and Cache slots), or is there a better way to resolve this? (I can always revert back to 6.3.4.) There is a disk-identifier sketch after this list showing how I've been comparing the names. I've included my last syslog and some images of the Array Devices screen from 6.3.4 and now 6.4. Thanks all, any help would be greatly appreciated! hatchnas-syslog-20180117-1313.zip
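
A few debugging sketches for the issues above. For the SSH failure in item 1, a minimal sketch assuming a stock OpenSSH client and server; "tower" and the file paths are the usual OpenSSH defaults used as placeholders, not values from the original post.

    # Client side: capture the verbose handshake to see which auth methods
    # the server offers and exactly where the exchange is rejected.
    ssh -vvv root@tower 2>&1 | tee ssh-debug.log

    # Server side (from the telnet/console session that still works):
    # confirm sshd is running and look at what it logged for the attempt.
    ps aux | grep [s]shd
    grep sshd /var/log/syslog | tail -n 20

    # Check the effective sshd settings; these paths may differ on a
    # given Unraid release.
    grep -Ei 'PermitRootLogin|PasswordAuthentication|AllowUsers' /etc/ssh/sshd_config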
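For the enclosure drop-outs in item 2, a rough capture-and-mitigate sketch; the autosuspend tweak is a guess at a common mitigation for flaky USB-SATA bridges, not a confirmed fix for this JMicron unit.

    # Follow the kernel log while re-plugging the enclosure to catch the
    # full enumeration sequence and the exact failure (-62 / -71).
    dmesg -wH | grep -i usb

    # In a second shell, record the vendor:product ID and port topology
    # while the device is still visible.
    lsusb
    lsusb -t

    # Disable USB autosuspend host-wide (-1 = never suspend) and re-test;
    # some USB-SATA bridges drop off the bus when the port is suspended.
    echo -1 > /sys/module/usbcore/parameters/autosuspend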
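And for the device-naming mismatch in items 8 and 9, a short sketch for comparing what the LSI controller presents against the drive's own model and serial; /dev/sdX is a placeholder for whichever disk is showing as "Wrong".

    # List every identifier udev generated for the attached disks; the same
    # physical drive can appear under an ata-/scsi-style name, a wwn- name,
    # or a controller-specific name depending on the HBA it sits behind.
    ls -l /dev/disk/by-id/ | grep -v part

    # Read the model and serial straight from the drive to compare against
    # the identification string Unraid is expecting.
    smartctl -i /dev/sdX | grep -Ei 'model|serial'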