Matheew

Members • 12 posts

  1. FYI - upgrading the BIOS on the HBA from 07.29.00.00 to 07.39.02.00 solved the issue.
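     For reference, a rough sketch of how the installed firmware and boot-BIOS versions on an LSI SAS2008 card such as the M1015 are typically checked and flashed with LSI's sas2flash utility. This is only an outline; the firmware/BIOS file names are placeholders for whatever the 07.39.02.00 package actually ships with:

        # list all SAS2008 controllers with their current firmware and BIOS versions
        sas2flash -listall
        # flash new firmware and boot BIOS in advanced mode (file names are placeholders)
        sas2flash -o -f 2118it.bin -b mptsas2.rom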
  2. Well... I currently have two of the six SAS backplanes connected to the M1015, with disks attached to each of them without issue, and moving the new HDD from one backplane to the other does not fix the problem. It seems very unlikely that both backplanes would be broken while none of the currently installed disks are affected.
  3. Hmm, it is a WD Red, so no obscure brand or anything. There are a number of reasons, but the primary one is that my chassis uses SAS backplanes to install disks, which is how I want it for expansion possibilities. There is no way to mount a disk permanently and connect it directly to the MB.
  4. Thanks for the reply! My M1015 is now updated to firmware 20.00.07.00, but the issue is still the same. I connected the HDD directly to the motherboard (obviously not a feasible long-term solution) and the issue is gone. What conclusion can we draw from this? I would not rush to the conclusion that the M1015 is broken, since I have four drives that have been working perfectly on it for more than a year. The difference here is that I have never before connected a drive larger than 8TB to the HBA. I've googled but could not find anything indicating that there is a maximum HDD size for the card in question. Any thoughts?
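     (As a sanity check, one way to compare what the system reports for the drive behind the HBA versus on the onboard SATA ports is something like the sketch below; /dev/sdh is just a placeholder for the new drive's device name.)

        # raw size, model and serial for every disk, to spot a truncated capacity
        lsblk -b -d -o NAME,SIZE,MODEL,SERIAL
        # identity and capacity details as reported for the new 12TB drive
        smartctl -i /dev/sdh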
  5. Thanks for the replies, guys. See the attached Diagnostics file for a syslog without the SSH attempts. Also, I switched the HDD to another drive bay; no difference. unraid-diagnostics-20220818-1856.zip
  6. Hello unRAID Community! I've bought a brand new WD Red 12TB disk which I will use to replace my current 8TB parity drive. I've read about a lot of different ways to do this, but I decided to go with this method:
        1. Stop the array.
        2. Install the new drive.
        3. In Main, assign the new 12TB drive to parity slot 2.
        4. Start the array and allow parity to rebuild.
        5. Stop the array.
        6. In Main, unassign the old 8TB parity drive from parity slot 1.
        7. Start the array.
     However, when I assign the new 12TB drive to parity slot 2 and start the array, things go south. The parity rebuild pauses almost instantly and unRAID gives me the error messages you can see in the attached file. The new 12TB drive reports errors and unRAID tells me that the drive is in an error state. To get back to normal I:
        1. Cancel the paused parity build.
        2. Stop the array.
        3. Remove the 12TB drive from the parity 2 slot.
        4. Start the array.
     unRAID then tells me everything is fine. If I repeat the process above, the error occurs again in the same way, and that is where I am right now. My first thought was obviously a broken disk, but that doesn't seem too likely since the disk is brand new. So I ran an extended SMART test on the disk yesterday and it reported no errors at all; you can find the SMART report in the attached Diagnostics file.
     System: unRAID version 6.8.3, Logic 24-bay Hot Swap chassis with IBM M1015.
     Thanks in advance! unraid-diagnostics-20220818-0848.zip
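     (For completeness: the extended SMART test mentioned above can be started and read back from the command line roughly as shown below; /dev/sdh is a placeholder for the new drive's device name.)

        # start an extended (long) self-test in the background
        smartctl -t long /dev/sdh
        # once it has finished, review the result and the full attribute/error log
        smartctl -a /dev/sdh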
  7. I'm sorry if I'm slow here, but I'm not following. Even if I can't restore the data from backups, the NEW Disk 3 is still corrupted and needs formatting either way? I'm quite confused about how to proceed here without messing up more than I already have.
  8. I see, so what is the recommended action here? Reformat the NEW Disk 3?
  9. Hi! How would a new config work in this scenario? Why would Disk 3 have corruption after running an XFS repair? Also, if I did a new config, I would have to do a parity check again, correct? That would put strain on the disks as well?
  10. Thanks for the reply, then it is as I feared. I suspect there is no reason to rebuild Disk 2 since the file system is corrupted? If I accept the loss of the data on Disk 2 and wish to replace it with a new disk, what is the best way to go about it? I do not wish to simply replace the disk in the array with a new one and put further strain on the other disks by rebuilding Disk 2, only to find it corrupted and then empty just as before. Thanks once again in advance!
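     (One hedged idea for anyone reading later: before committing to a rebuild, the emulated Disk 2 can be checked read-only from maintenance mode, assuming the usual Unraid convention that array disk 2 is exposed as /dev/md2; device naming may differ between versions.)

        # with the array started in maintenance mode, dry-run check the emulated disk 2
        xfs_repair -n /dev/md2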
  11. Certainly, see attached file. Thanks for the insanely quick reply. unraid-diagnostics-20211108-1559.zip
  12. Hello unRAID Community! I had the misfortune of having one of my disks fail during the weekend. The series of events:
        1. OLD Disk 3 shows up as disabled in the unRAID GUI with a red cross and is reported as failed by SMART.
        2. The array is stopped and OLD Disk 3 is replaced with NEW Disk 3 in the chassis.
        3. The array is started and the data rebuild begins. Shortly after, unRAID reports read errors on Disk 2, but everything still looks fine: the array is OK and the rebuild keeps going.
        4. Some time later during the rebuild, Disk 2 shows up both as a green disk in the array and as an Unassigned Device with the ability to mount it. At this stage I touch nothing and let the rebuild finish. See the link for how it looked with the Unassigned Disk; note that this picture was taken at a later stage, where Disk 2 had already failed and NEW Disk 3 had already been rebuilt. Disk 2 was green during the rebuild. (And yes, I know my disks are too hot at the moment, but that has nothing to do with this topic; I don't question the drive failure itself or the reason behind it.) https://imgur.com/a/1aknzNP
        5. When the rebuild of NEW Disk 3 is done, it turns green in the array. I SSH in, go to /mnt/disk3/movies/, and ls gives me "Structure needs cleaning". The unRAID forums tell me this is caused by XFS corruption, so I start the array in maintenance mode and do an XFS repair.
        6. After stopping the array from maintenance mode, the Start array button is gone and the only buttons visible are shutdown and reboot. This seems to be a bug; I reboot the server without first saving the diagnostics logs, so I do not have them.
        7. After the reboot it is possible to start the array. Running ls in /mnt/disk3/movies/ now works, but all data previously on OLD Disk 3 is gone; the disk is practically empty.
        8. Shortly after this, Disk 2 gets disabled with a red cross, although SMART shows Pass. XFS repair gets stuck on:
            Phase 1 - find and verify superblock...
            couldn't verify primary superblock - not enough secondary superblocks with matching geometry !!!
            attempting to find secondary superblock...
            ...found candidate secondary superblock...
            unable to verify superblock, continuing...
            ....found candidate secondary superblock...
            unable to verify superblock, continuing...
     I'm left with loss of data on two disks. I basically have three questions here:
        1. Why did I lose all the data on Disk 3? I rebuilt it and repaired XFS, so why is the data gone?
        2. Did the weird behavior shown by Disk 2 actually mean that I lost it during the rebuild, and did having only one parity disk lead to the data loss? What do I do with the currently disabled Disk 2? Should I rebuild it? If it actually failed during the rebuild of NEW Disk 3, I guess the data on it is gone as well?
        3. I still have the OLD Disk 3 (I do not know whether it is broken or not); can I add it as an Unassigned Disk and decrypt it in an attempt to retrieve data from it?
     Thanks in advance!
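     (On the last question: if the old Disk 3 is LUKS-encrypted XFS, as Unraid's encrypted format usually is, a read-only attempt to see what is still on it might look like the sketch below. Device and mapper names are placeholders, and nothing here writes to the disk.)

        # identify the old disk and its partition (placeholder: /dev/sdf)
        lsblk -o NAME,SIZE,FSTYPE /dev/sdf
        # unlock the encrypted partition read-only under a temporary mapper name
        cryptsetup luksOpen --readonly /dev/sdf1 old_disk3
        # mount it read-only and check what is still readable
        mkdir -p /mnt/old_disk3
        mount -o ro /dev/mapper/old_disk3 /mnt/old_disk3
        ls /mnt/old_disk3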