
pete69

Members
  • Posts

    20
  • Joined

  • Last visited

Posts posted by pete69

  1. 1 hour ago, UtahDeLorean said:

    Is there a way to speed up the process? Looking to get more and bigger drives as time keeps moving along

     

    If you are recycling NetApp or other modified-sector-size drives, then you will need to go through the same process each time.

     

    If you are buying new drives, the sector size should be fine as Unraid handles most standard sector sizes out of the box.

    • Like 1
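For reference, the usual fix for recycled NetApp shelf drives is reformatting their 520-byte sectors down to 512 with sg_format from the sg3_utils package. A hedged sketch only; the /dev/sg* names are placeholders for your own drives, and DRY_RUN=1 just prints the commands instead of running this destructive operation:

```shell
# Reformat 520-byte-sector drives to 512 bytes with sg_format (sg3_utils).
# WARNING: destructive on real hardware. /dev/sg3 and /dev/sg4 are
# hypothetical placeholders; DRY_RUN=1 prints commands instead of running them.
DRY_RUN=1
for dev in /dev/sg3 /dev/sg4; do
  cmd="sg_format --format --size=512 $dev"
  if [ "$DRY_RUN" = "1" ]; then
    echo "would run: $cmd"
  else
    $cmd    # a full low-level format can take many hours per drive
  fi
done
```

Check the current sector size first (e.g. with `sg_readcap` or `smartctl -i`) so you only reformat drives that actually need it.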
  2. On 12/3/2021 at 7:42 AM, almulder said:

    So I am thinking about getting another one, but all the cable guides show using 2 controller cards and 2 stacks, cabling them all together in a special way; I am assuming for redundancy. I only have one controller, with 2 ports on it. Is this how I would set it up?

     

    I only have the one shelf currently connected, but had 2 in the past.

     

    Since you have 2 ports on your controller, just connect each port to the square SAS port on both your IOMs; that would give you 2 independent SAS paths for each shelf, and they would be treated as separate entities by Unraid.

     

    I think that would be better than trying to daisy-chain the shelves, which would have no benefit in Unraid.

     

    • Like 1
  3. 12 minutes ago, sendas said:

    What could I be doing wrong, or is there an issue with WD Reds and the NetApp PM80xx series cards?

     

    I had similar issues on start up running 6.8.3, with some drives not showing (sometimes reds sometimes others).

     

    Usually pulling and reseating the drives, or moving them to other bays, got them online.

     

    Once online it was rock solid until I needed to restart unraid and go through the process again.

     

    I initially thought it might be a backplane issue with the disk shelf, but didn't care too much as I don't need to restart unraid very often.

     

    When I upgraded Unraid to 6.9.0, none of the drives were recognised due to the kernel issue.

     

    Rather than wait for a fix or remain on 6.8.3, I purchased an LSI 9212-4i4e 6Gb SAS controller card set to IT mode, and a QSFP SFF-8436 to Mini-SAS SFF-8088 cable.

     

    Since then it works fine on 6.9.x and the drives are all online at each restart without any fiddling.

     

    Pete.

    • Thanks 1
  4. 22 minutes ago, CrashMedia said:

    Just an update to others who may have similar issues.

     

    I bought a server, intended as another storage server, that came with 2 Dell H200E external HBAs.

     

    Flashing them to IT mode seems to have resolved the issues. I had tried updating to 6.9.2, and the PM8003 still wasn't discovered; I was also having a ton of reliability issues with drives dropping out.

     

    You will also need a different DAC: an SFF-8436 to SFF-8088 cable.

     

    I had the same problem using the PM8003 with v6.9.0 stable. The PM8003 also used to drop drives occasionally on 6.8.3, but only when Unraid was started, requiring drives to be pulled and re-seated to refresh the array. It was rock solid once running.

     

    Rather than wait for a fix or remain on 6.8.3, I purchased an LSI 9212-4i4e 6Gb SAS controller card set to IT mode, and a QSFP SFF-8436 to Mini-SAS SFF-8088 cable.

     

    I have not seen any issues with the LSI controller on 6.9.2.
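If you want to sanity-check that a replacement LSI card really is running IT-mode firmware, something like the rough sketch below can help. sas2flash is LSI's flash utility; the "IT" grep pattern is a crude assumption about its listing output, and the count simply stays 0 on machines without the tool or the card, so the script is safe to run anywhere:

```shell
# Hedged check for IT-mode firmware on an LSI SAS2 HBA. The "IT" pattern
# is a rough match against sas2flash's listing output; the count stays 0
# when the tool or card is absent, so nothing fails hard.
it_count=$(sas2flash -listall 2>/dev/null | grep -c "IT")
if [ "$it_count" -gt 0 ]; then
  echo "possible IT-mode firmware listed"
else
  echo "sas2flash unavailable or no IT-mode firmware found"
fi
```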

  5. On 3/9/2021 at 8:43 AM, pete69 said:

    I had copied 'config/disk.cfg.bak' to 'config/disk.cfg' as suggested here.

     

    I then manually took a backup of unraid.

     

    I downgraded back to 6.8.3, but no drives were shown in the Cache. 

     

    I can add both drives back, but they both show the blue square meaning new device.

     

    Other drives have appeared correctly with the green circle.

     

    I have not restarted the array yet, as I am concerned I may lose data on the Cache drives.

     

    Any suggestions?

     

    Answering my own question.

     

    After the above did not work, I recopied 'config/disk.cfg.bak' to 'config/disk.cfg' after the downgrade had been applied.

     

    And the Cache worked again.

     

    Hope that helps someone else.
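The ordering above is the key point: the copy has to happen after the downgrade is applied. A sketch of the fix, demonstrated on a scratch directory standing in for the flash drive (on a real flash drive the path is /boot/config; the cfg contents here are invented):

```shell
# Demonstrate the downgrade fix on a scratch copy of the flash layout.
# On a real Unraid flash drive the directory is /boot/config.
flash=$(mktemp -d)
mkdir -p "$flash/config"
printf 'cacheId="example-cache-drive"\n' > "$flash/config/disk.cfg.bak"  # invented contents
# After the 6.8.3 downgrade has been applied, restore the cache assignment:
cp "$flash/config/disk.cfg.bak" "$flash/config/disk.cfg"
grep cacheId "$flash/config/disk.cfg"
```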

  6. On 3/2/2021 at 8:23 AM, limetech said:

    Reverting back to 6.8.3

    If you have a cache disk/pool it will be necessary to either:

    • restore the flash backup you created before upgrading (you did create a backup, right?), or
    • on your flash, copy 'config/disk.cfg.bak' to 'config/disk.cfg' (restore 6.8.3 cache assignment), or
    • manually re-assign storage devices assigned to cache back to cache

     

    This is because to support multiple pools, code detects the upgrade to 6.9.0 and moves the 'cache' device settings out of 'config/disk.cfg' and into 'config/pools/cache.cfg'.  If you downgrade back to 6.8.3 these settings need to be restored.

     

     

     

    I reverted back to 6.8.3 because 6.9.0 does not support my NetApp SAS 4-port 3/6Gb QSFP PCIe 111-00341+B0 controller (PMC Sierra PM8003).

     

    I originally upgraded via the link on the GUI, without looking here first.

     

    I had assumed that the GUI upgrade process took a backup of the current image, so I did not take a separate one myself.

     

    I had copied 'config/disk.cfg.bak' to 'config/disk.cfg' as suggested here.

     

    I then manually took a backup of unraid.

     

    I downgraded back to 6.8.3, but no drives were shown in the Cache. 

     

    I can add both drives back, but they both show the blue square meaning new device.

     

    Other drives have appeared correctly with the green circle.

     

    I have not restarted the array yet, as I am concerned I may lose data on the Cache drives.

     

    Any suggestions?

     

     

    I have Unraid running on an HP Z800 workstation.

     

    It has a NetApp SAS 4-port 3/6Gb QSFP PCIe 111-00341+B0 controller (PMC Sierra PM8003)

     

    connected to a NetApp DS4243 24-bay disk shelf.

     

    I have just installed 6.9.0 and now none of my diskshelf drives appear.

     

    Everything had been running fine for around 2 years, and I have done system updates before without any issues.

     

    The only drives that appear are the Unraid USB stick and the DVD-ROM, both of which are directly connected to the Z800.

     

    I have tried rebooting both the shelf and Z800 without any difference.

     

    I have attached the log, and would appreciate some assistance in either fixing this issue or rolling back unraid.

     

    Thanks,

     

    Peter.

    tower-diagnostics-20210308-1125.zip
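For anyone triaging the same symptom, a couple of read-only checks can confirm whether the kernel even sees the PM8003 before digging into the diagnostics zip. The grep patterns are assumptions about how the card shows up, and the counts simply stay 0 where the tools are unavailable:

```shell
# Read-only checks: does the PCI bus / kernel log see the PM8003 at all?
# Nothing here writes to the system; counts stay 0 if tools are absent
# or nothing matches.
hba_lines=$(lspci 2>/dev/null | grep -c -i -e "PMC" -e "pm80")
drv_lines=$(dmesg 2>/dev/null | grep -c -i "pm80")
echo "PMC/pm80 PCI entries: $hba_lines, pm80xx kernel messages: $drv_lines"
```

If the card shows on the PCI bus but there are no driver messages, the problem is the kernel/driver side rather than cabling or the shelf.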

    I am just starting out with Unraid, coming from DrivePool.

     

    Basic setup seems to have gone OK with a pair of 10TB HDDs.

     

    I then created a bunch of shares and copied around 5TB of data across from my DrivePool.

     

    This went well, and I could access the shares across the network.

     

    I then added 2 10TB parity drives one at a time.

     

    Each took about a day to build, then completed fine.

     

    But now I have no shares showing on the shares tab, and no shares other than the flash drive showing across the network.

     

    The shares are showing on Main/Browse?dir=/mnt/disk1, but not on /Shares or across the network.

     

    Any ideas?

     

    Peter.

     

     

  9. I am using preclear for the first time.

     

    I installed 2 Seagate drives that I knew had issues and 1 brand-new 10TB WD Red.

     

    The 2 Seagate drives eventually both failed the pre-read, and the new WD passed.

     

    (screenshot of the preclear results attached)

     

    The WD continued into zeroing as far as 21%.

     

    The logs look good for 5-20%.

    (screenshot of the preclear log attached)

     

    Each 5% has generally taken less than an hour.

     

    Preclear has now been stuck on 21% for over 3 hours, and there is no activity on the drive LED.

     

    Before I cancel it and retry, is there anything I can grab to assist the author with checking why it froze without showing any error message?

     

    Pete.

    Hi, thanks for the reply.

     

    I wasn't expecting to access the data on the removed drives when I reinstalled them after the rebuild; I just wanted to add them back as new drives to expand the array again.

     

    But Unraid was not allowing them to be added, even though the settings data was still showing for those drives in Unraid.

     

    With the array stopped the option to assign was not available on these drives.

     

    The issue turned out to be the internal HP P410i RAID controller tagging these drives as bad after I removed and reinstated them a couple of times, even though it is running in HBA mode.

     

    Thanks for answering the 2 questions though. 

     

    As I understand it, in this case the parity rebuild wouldn't have been rebuilding 2 drives, as the drives had been removed without replacements. It would be rebuilding parity across the entire array with the remaining drives, so that 2-drive redundancy is maintained with 2 fewer drives?
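On the single-parity side of that question, the arithmetic is just XOR, which is the easiest way to see why one parity drive can rebuild exactly one missing drive. A toy sketch with one made-up byte per "drive":

```shell
# Toy single-parity illustration: parity is the XOR of all data bytes,
# so any one missing byte can be recovered from the survivors plus parity.
d1=170; d2=85; d3=51             # made-up data bytes, one per "drive"
p=$(( d1 ^ d2 ^ d3 ))            # parity byte written to the parity drive
rebuilt=$(( d1 ^ d3 ^ p ))       # "drive 2" fails; rebuild it from the rest
echo "parity=$p rebuilt=$rebuilt expected=$d2"   # rebuilt equals d2 (85)
```

The second parity drive uses a different calculation from plain XOR, which is what makes two simultaneous failures recoverable; a rebuild already in progress is consuming that margin, as discussed above.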

    I am new to Unraid, so I am trying a few things to see how it all works before moving my data across.

     

    I am using an old HP DL380 G7 with 8 147GB SAS drives for the testing.

     

    Install went well, I set up 2 of the drives as parity drives and the rest as storage.

     

    I wanted to test a 2 drive failure, so removed 2 drives.

     

    This seemed to work with the data being emulated at first.

     

    Then I did a parity rebuild which also worked correctly.

     

    -1st question: during the parity rebuild, I assume there is no protection if another drive fails as we are already rebuilding from 2 drives?

    -2nd question: if only a single drive failed and I started a parity rebuild, is the array safe from another drive failing because it has 2 parity drives? Or am I in the same position as Q1: since both parity drives are being rewritten, is there no protection from a 2nd drive failing while the rebuild takes place?

     

    So now I wanted to move on and re-add the 2 drives I removed back to the array, but Unraid did not see them; it just shows "Not installed" on the Main page.

     

    The disks were labeled as Disk 5 & 6 in my case. Even though they were not showing as installed, I could click on the disks and see their settings.

     

    They also do not appear in unassigned devices.

     

    Stopping and restarting the array made no difference, nor did restarting the server.

     

    (screenshot of the Main page attached)

     

    If I look at Tools > System Devices, it shows all 8 disks in the SCSI Devices list.

     

    Reading through various threads it seemed I might need to re-define the array.

     

    I did a Tools > New Config and preserved current assignments for all.

     

    This shrank the array to just 6 disks showing out of the 8.

     

    I rebuilt the parity again after the new config and the 6 drives are fine, but I still can't find a way to add the 2 removed drives back to the array.

     

    Now on the Main page, Disk 5 & 6 are just showing as unassigned, but there is no option to assign them in the drop-down.

     

    (screenshot of the Main page attached)

     

    If I look at Tools > System Devices, it now only shows 6 of the 8 disks in the SCSI Devices list.

     

    The HP BIOS shows all 8 drives.

     

    Q3: Any suggestions?

     

    Thanks,

     

    Pete.

     

    PS: I don't have any other SAS drives to test whether the problem is just with the serials of the 2 I removed.

    unraid1-diagnostics-20190817-0630.zip
