redbear

  1. Yes, exactly, the idea is to remove the smaller disks and not replace them. That wasn't the original plan, but now I'm worried about another one of the drives becoming unmountable and not having a recent copy of the data.
  2. Fair point: removing disk4 frees up a slot to bring back the old disk3 for the restore. pretzel-diagnostics-20210908-1509.zip
  3. Thanks for the tip to set the filesystem explicitly. I did that, and I was able to run xfs_repair -L. The disk mounts now, but it's down about 6TB of data, oddly similar to the amount lost on the first disk that became unmountable. There were eight 8TB data drives in the machine originally; my goal is to end up with five 16TB data drives. I have upgraded the parity drive and four of the data drives, and parity is currently valid. My current thought is:
     1. Shut down Docker and the mover.
     2. Use Unbalance to copy the data from disk4 up to disk1.
     3. Remove disk4.
     4. Use its port to attach my old disk3.
     5. Use Unassigned Devices and MC to copy the lost data back to the new disk3 (rough sketch of the copy below).
     6. Use Unbalance to copy data from my remaining 8TB drives to the new 16TB drives.
     7. Remove the remaining 8TB drives.
     8. Add the fifth 16TB disk.
     9. Rebuild parity.
     Thoughts?
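     For step 5, a minimal sketch of the copy, assuming Unassigned Devices mounts the old disk3 at /mnt/disks/old_disk3 (the mount name is a placeholder, check yours):

       # dry run first to see what would be copied back (placeholder paths)
       rsync -avn /mnt/disks/old_disk3/ /mnt/disk3/
       # then the real copy, preserving attributes, with progress
       rsync -avP /mnt/disks/old_disk3/ /mnt/disk3/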
  4. Ok, I've removed the Marvell card (swapped it with an LSI/Broadcom board) and changed the Intel controller to AHCI mode (@JorgeB, just to be safe). The array is back online, but disk3 is still unmountable. I no longer have the option to run an xfs_repair, since the disk's format now reads as "auto". At this point I'm willing to move forward and replace or rebuild the disk. Based on the diags (new set attached), should I rebuild it from parity, replace it and rebuild it, or something else? pretzel-diagnostics-20210907-2241.zip
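     In case it helps anyone following along: even with the format showing "auto", a check can still be run by hand, assuming disk3 maps to /dev/md3 and the array is started in maintenance mode (the device name is an assumption, verify before running):

       # read-only check via the md device so parity stays in sync
       xfs_repair -n /dev/md3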
  5. The memtest ran overnight with three passes and no errors. I figured out the Startech card is also Marvell based, so I'll swap in the LSI as soon as I get a new breakout cable. I'm not sure I want to turn the array back on until I get rid of the Marvell based card.
  6. Ok, kicked off the memtest a couple of hours ago. So far so good. Also found a couple of controllers lying about (Startech and LSI). I'll check them against the recommended list and potentially swap one in for the Marvell tonight after the memtest.
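     A quick way to confirm which controllers are actually in the box (plain lspci, nothing Unraid specific):

       # list storage controllers with vendor and device IDs
       lspci -nn | grep -i -E 'sata|sas|raid'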
  7. Thanks for your time and response. The first unmountable disk issue, with disk 8, occurred on the Marvell controller. The second (current) unmountable disk issue, with disk 3, is occurring on the Intel controller. I can reboot the machine and place the controllers into AHCI mode instead of RAID mode. If I can stabilize the machine, I can move all of the data to four 16TB data drives while I look into replacing the Marvell based controller. For what it's worth, the machine has been rock solid for three years.
  8. Hi, my server has 8 data disks and 1 parity disk. A couple of weeks ago I started the process of upgrading my data disks from 8TB to 16TB. The parity disk, disk 1, disk 2, and disk 3 went fine. Just after finishing the rebuild on disk 3, disk 8 became unmountable. Since I was upgrading anyway, I went ahead and pulled disk 8, installed a new disk, and kicked off the rebuild. It finished, but the new 16TB disk 8 was unmountable. I ran xfs_repair with no luck, then ran xfs_repair -L. The disk became mountable, but I lost 6TB of data. I paused adding new drives while I tried to figure out which data were lost and how to recover; a significant amount was in the lost+found directory.
     All was stable for a couple of days, but now disk 3 has become unmountable. I've run both short and extended SMART self-tests and they come back clean. An xfs_repair without parameters comes back with:
     I do have the old 8TB disk 3, so if need be I can reformat and restore from the old disk. Since this error has occurred on two different disk types, on two different cables, and on two different controllers, I'm concerned that something needs correcting or this behavior may continue with other disks. Any advice appreciated, diagnostics attached. Many thanks, Redbear pretzel-diagnostics-20210903-2016.zip
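     For anyone landing here later, the repair sequence referenced above looks roughly like this, assuming the affected disk maps to /dev/md8 and the array is started in maintenance mode (the device number is a placeholder):

       # read-only check: reports problems, changes nothing
       xfs_repair -n /dev/md8
       # actual repair; refuses to run if the log is dirty
       xfs_repair /dev/md8
       # last resort: zero the log, which can orphan recent files into lost+found
       xfs_repair -L /dev/md8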
  9. Sorry for the lack of details; I hope the following helps clarify. For my test I wrote the same 4 GB file to, and copied it from, each of the following:
     • unRAID Pro 4.5-beta8: a user share comprised of 3 disks plus the parity disk (I know the file just writes to one disk plus the parity disk, just letting you know the setup).
     • 2k8: a share on a software RAID 5 array comprised of 3 disks, each disk with one partition taking up the entire drive.
     I run an Abit 9 Pro using all 9 internal SATA ports plus a Promise tx-300 using 3 of its ports. All drives are Western Digital Caviar Green 1TBs, with the exception of one lone Hitachi E7K500. Background: I picked up 4 new 1TB drives, so I thought it would be a good time to have a Windows Home Server / 2k8 / unRAID / Ubuntu / OpenSolaris shootout. I've been really itching to consolidate servers/services since moving, and I wanted something that I could easily run VMs on. With the release of beta8, I thought: let's do it.
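     For reference, a comparable sequential test can be scripted with dd; this is just a sketch, and the share path is a placeholder, not my actual test path:

       # 4 GiB sequential write; fdatasync flushes to disk before dd reports throughput
       dd if=/dev/zero of=/mnt/user/test/4g.bin bs=1M count=4096 conv=fdatasync
       # drop caches so the read comes from disk, then time a sequential read
       sync; echo 3 > /proc/sys/vm/drop_caches
       dd if=/mnt/user/test/4g.bin of=/dev/null bs=1M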
  10. The great news: my write speed into my array went from 15 MB/sec with beta7 to 30 MB/sec with beta8. On the other hand, I got write speeds of 100 MB/sec with the same hardware using Win 2008 R2.
  11. I'm running 4.3.1 and disabling user shares fixed this issue for me as well. [Edit] So I went ahead and upgraded to 4.5-beta2 and I'm able to get past this issue without disabling user shares.