Everything posted by pete69

  1. If you are recycling NetApp or other drives with modified sector sizes, then you will need to go through the same reformatting process each time (a sketch is below). If you are buying new drives, the sector size should be fine, as Unraid handles standard sector sizes out of the box.
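     For anyone new to this, a minimal sketch of the reformat, assuming the common case of a NetApp drive shipped with 520-byte sectors and the sg3_utils tools installed; /dev/sg3 is an example device path, and the format pass can take many hours on large drives:

         sg_scan -i                  # list /dev/sg* devices with vendor/model info
         sg_readcap /dev/sg3         # NetApp-formatted drives report a 520-byte logical block length

         # Low-level format back to 512-byte sectors -- this DESTROYS all data on the drive
         sg_format --format --size=512 /dev/sg3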
  2. I only have the one shelf currently connected, but had 2 in the past. Since you have 2 ports on your controller, just connect each port to the square SAS port on both of your IOMs; that gives you 2 independent SAS paths, one per shelf, and the shelves are treated as separate entities by Unraid. I think that is better than trying to daisy-chain the shelves, which would have no benefit in Unraid.
  3. I had similar issues on start-up running 6.8.3, with some drives not showing (sometimes the Reds, sometimes others). Usually pulling the drives and reseating them, or moving drives to other locations, got them online. Once online, it was rock solid until I needed to restart Unraid and go through the process again. I initially thought it might be a backplane issue with the disk shelf, but didn't care too much, as I don't need to restart Unraid very often. When I upgraded Unraid to 6.9.0, none of the drives were recognised due to the kernel issue. Rather than wait for a fix or remain on 6.8.3, I purchased an LSI 9212-4i4e 6Gb SAS controller card set to IT mode, and a QSFP SFF-8436 to Mini SAS SFF-8088 cable. Since then it works fine on 6.9.x, and the drives are all online at each restart without any fiddling. Pete.
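     If the card you buy is not already in IT mode, here is a rough sketch of the usual crossflash with LSI's sas2flash tool (the 9212-4i4e is SAS2008-based, so sas2flash covers it). The firmware image name below is a placeholder, not the exact file for this card; flashing the wrong image can brick the controller, so treat this as an outline only:

         sas2flash -listall                # confirm the adapter and whether it runs IR or IT firmware
         sas2flash -o -e 6                 # advanced mode: erase the existing flash
         sas2flash -o -f 9212it.bin        # write the matching IT-mode image (placeholder filename)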
  4. You will still need the QSFP SFF-8436 to Mini SAS SFF-8088 cable. The PM8003 had a QSFP SFF-8436 connector (the same as the disk shelf); the new card has a Mini SAS SFF-8088 connector. They are not the same.
  5. Good stuff, I'll take a look at that channel. Spaceinvader One (https://www.youtube.com/channel/UCZDfnUn74N0WeAPvMqTOrtA) is good for all things Unraid on the software side. Make sure you order a new QSFP SFF-8436 to Mini SAS SFF-8088 cable as well; the old one will not work, as the connectors are different.
  6. I purchased an LSI 9212-4i4e 6Gb SAS controller card set to IT mode, and a QSFP SFF-8436 to Mini SAS SFF-8088 cable. Less than A$100 (US$80) all up. Works fine on 6.9.2.
  7. I had the same problem using the PM8003 with v6.9.0 stable. The PM8003 also used to drop drives occasionally on 6.8.3, but only when Unraid was started, requiring drives to be pulled and reseated to refresh the array; it was rock solid once running. Rather than wait for a fix or remain on 6.8.3, I purchased an LSI 9212-4i4e 6Gb SAS controller card set to IT mode, and a QSFP SFF-8436 to Mini SAS SFF-8088 cable. I have not seen any issues with the LSI controller on 6.9.2.
  8. I had the same problem using the PM8003 with v6.9.0 stable. The PM8003 also used to drop drives occasionally on 6.8.3, but only when Unraid was started, requiring drives to be pulled and reseated to refresh the array; it was rock solid once running. I have not seen this issue with the LSI controller. I purchased an LSI 9212-4i4e 6Gb SAS controller card set to IT mode, and a QSFP SFF-8436 to Mini SAS SFF-8088 cable. It all works well again on 6.9.1 now.
  9. Answering my own question: after the above did not work, I re-copied 'config/disk.cfg.bak' to 'config/disk.cfg' after the downgrade had been applied, and the cache worked again. Hope that helps someone else (a sketch of the restore is below).
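     A minimal sketch of that restore from the Unraid console, assuming the flash drive is mounted at /boot as usual:

         # Keep a copy of the current file, then restore the backup
         cp /boot/config/disk.cfg /boot/config/disk.cfg.old
         cp /boot/config/disk.cfg.bak /boot/config/disk.cfg
         # Reboot so the restored drive assignments are read on start-up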
  10. I reverted to 6.8.3 because 6.9.0 does not support my NetApp SAS 4-Port 3/6 GB QSFP PCIE 111-00341+B0 controller (PMC Sierra PM8003). I originally upgraded via the link in the GUI, without looking here first. I had assumed that the GUI upgrade process took a backup of the current image, so I did not take a separate one myself. I had copied 'config/disk.cfg.bak' to 'config/disk.cfg' as suggested here, and then manually took a backup of Unraid. I downgraded back to 6.8.3, but no drives were shown in the cache. I can add both drives back, but they both show the blue square meaning a new device. Other drives have appeared correctly with the green circle. I have not restarted the array yet, as I am concerned I may lose data on the cache drives. Any suggestions?
  11. Thanks. Placed an order for a new card and cable. Will try to roll back today.
  12. The upgrade to 6.9.0 did not go well for me: no drives visible. Based on other people with the same issue, the drivers in 6.9.0 do not work with the NetApp SAS 4-Port 3/6 GB QSFP PCIE 111-00341+B0 controller (PMC Sierra PM8003). Peter.
  13. I have Unraid running on an HP Z800 workstation. It has a NetApp SAS 4-Port 3/6 GB QSFP PCIE 111-00341+B0 controller (PMC Sierra PM8003) connected to a NetApp DS4243 24-bay disk shelf. I have just installed 6.9.0, and now none of my disk-shelf drives appear. Everything had been running fine for around 2 years, and I have done system updates before without any issues. The only drives that appear are the USB stick for Unraid and the DVD-ROM, both of which are directly connected to the Z800. I have tried rebooting both the shelf and the Z800 without any difference. I have attached the log and would appreciate some assistance in either fixing this issue or rolling back Unraid. Thanks, Peter. tower-diagnostics-20210308-1125.zip
  14. Apparently stopping and restarting the array was not enough; shutting down the server and restarting brought back the shares. It also fixed another issue I was having where one of the cache drives was showing 0 GB. Thanks, Pete.
  15. I am just starting out with Unraid, coming from DrivePool. Basic setup seems to have gone OK with a pair of 10 TB HDDs. I then created a bunch of shares and copied around 5 TB of data across from my DrivePool. This went well, and I could access the shares across the network. I then added two 10 TB parity drives, one at a time. Each took about a day to build, then completed fine. But now I have no shares showing on the Shares tab, and no shares other than the flash drive showing across the network. The shares are showing on Main/Browse?dir=/mnt/disk1, but not on /Shares or across the network. Any ideas? Peter.
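     For anyone hitting the same symptom, a quick console check, assuming the default Unraid mount layout: shares live on the individual disks under /mnt/diskN, while the Shares tab and network exports come from the merged /mnt/user view, so comparing the two shows whether the user-share (shfs) layer is the problem:

         ls /mnt/disk1      # shares as they exist on the data disk itself
         ls /mnt/user       # merged user-share view; empty output points at shfs
         df -h /mnt/user    # confirms whether the shfs fuse mount is present at all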
  16. I am using Preclear for the first time. I installed two Seagate drives that I knew had issues and one brand-new 10 TB WD Red. Both Seagate drives eventually failed the pre-read, and the new WD passed. The WD continued into zeroing as far as 21%. The logs look good for 5-20%, and each 5% has generally taken less than an hour. Preclear has now been stuck on 21% for over 3 hours, and there is no activity on the drive LED. Before I cancel it and retry, is there anything I can grab to assist the author with checking why it froze without showing any error message? Pete.
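     Before cancelling, it is worth capturing the standard logs; a minimal sketch using Unraid's built-in diagnostics command (the preclear plugin's own log location varies by plugin version, so the syslog snapshot below is the safe common denominator):

         diagnostics        # writes a full diagnostics zip to /boot/logs on the flash drive

         # Snapshot the end of the syslog around the stall for the plugin author
         tail -n 200 /var/log/syslog > /boot/preclear-stall-syslog.txt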
  17. Hi, thanks for the reply. I wasn't expecting to access the data on the removed drives when I reinstalled them after the rebuild; I just wanted to add them back as new drives to expand the array again. But Unraid was not allowing them to be added, even though the settings data was still showing for those drives in Unraid. With the array stopped, the option to assign these drives was not available. The issue turned out to be the internal HP P410i RAID controller tagging these drives as bad after I removed and reinstated them a couple of times, even though it is running in HBA mode. Thanks for answering the 2 questions, though. As I understand it, in this case the parity rebuild wouldn't have been rebuilding 2 drives, as the drives had been removed without replacements; it would be rebuilding parity across the entire array with the remaining drives, so that 2-drive redundancy is maintained with 2 fewer drives?
  18. I am new to Unraid, so I am trying a few things to see how it all works before moving my data across. I am using an old HP DL380 G7 with eight 147 GB SAS drives for the testing. The install went well; I set up 2 of the drives as parity drives and the rest as storage. I wanted to test a 2-drive failure, so I removed 2 drives. This seemed to work, with the data being emulated at first. Then I did a parity rebuild, which also worked correctly.
      Q1: During the parity rebuild, I assume there is no protection if another drive fails, as we are already rebuilding from 2 drives?
      Q2: What if only a single drive failed and I started a parity rebuild? Is the array safe from another drive failing because it has 2 parity drives, or am I in the same position as Q1: since both parity drives are being rewritten, there is no protection from a 2nd drive failing while the rebuild takes place?
      So now I wanted to move on and re-add the 2 drives I removed back to the array, but Unraid did not see them; it just shows "Not installed" on the Main page. The disks were labelled as Disk 5 & 6 in my case. Even though they were not showing as installed, I could click on the disks and see their settings. They also do not appear in Unassigned Devices. Stopping and restarting the array made no difference, nor did restarting the server. If I looked at Tools > System Devices, it showed all 8 disks in the SCSI Devices list.
      Reading through various threads, it seemed I might need to re-define the array. I did a Tools > New Config, preserving current assignments for all. This shrank the array to just 6 disks showing out of the 8. I rebuilt the parity again after the new config and the 6 drives are fine, but I still can't find a way to add the 2 removed drives back to the array. Now on the Main page, Disk 5 & 6 are just showing as unassigned, but there is no option to assign them in the drop-down. Tools > System Devices now only shows 6 of the 8 disks in the SCSI Devices list, while the HP BIOS shows all 8 drives.
      Q3: Any suggestions? Thanks, Pete. PS: I don't have any other SAS drives to test whether the problem is just with the serials of the 2 I removed. unraid1-diagnostics-20190817-0630.zip
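     One generic thing worth trying from the console in a case like this (a plain Linux technique, not Unraid-specific): ask the kernel to rescan every SCSI host so drives that were pulled and reinserted get re-detected. Host numbers vary, so the loop covers all of them:

         cat /proc/scsi/scsi                 # what the kernel currently sees
         for host in /sys/class/scsi_host/host*; do
             echo "- - -" > "$host/scan"     # rescan all channels, targets and LUNs
         done
         cat /proc/scsi/scsi                 # compare: re-detected drives appear here

     As the follow-up in item 17 notes, the root cause here turned out to be the P410i tagging the drives as bad, which a rescan alone will not clear.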