uaktags Posted December 6, 2021

My setup has 8 array drives (4 connected directly to the motherboard, 4 connected via a PCIe adapter), plus 2 NVMe drives attached to a PCIe adapter. I attempted to add a 3rd NVMe drive (which probably blew through my PCIe lanes) and tried to access it. When a VM accessed the mounted NVMe drive, the VM glitched out with a "pci" error that I did not capture, and I also lost one of my array drives in the process. The drive became disabled and does not want to come back online. SMART looks clean to me, and I was able to access the drive from another machine to confirm the data is still intact and the drive is operational. I'm currently running a read-check, but honestly I'm not sure that will do anything (it's an 18-hour process, so if not, I've wasted 18 hours). Any help is appreciated.

tower-diagnostics-20211206-1054.zip
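As a side note on "SMART appears clean": the attributes that usually matter when deciding whether a disabled disk is physically healthy are the reallocated, pending, and offline-uncorrectable sector counts. Below is a minimal sketch of checking those from `smartctl -A`-style text; the sample output is fabricated for illustration, and `parse_attrs` / `looks_healthy` are hypothetical helper names, not part of any tool.

```python
# Hedged sketch: pull a few health-critical attributes out of
# `smartctl -A`-style output. SAMPLE_SMART_OUTPUT is fabricated
# sample text; the attribute names are standard SMART fields.

SAMPLE_SMART_OUTPUT = """\
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Always       -       0
"""

WATCH = {"Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable"}

def parse_attrs(text):
    """Return {attribute_name: raw_value} for the attributes we watch."""
    values = {}
    for line in text.splitlines():
        fields = line.split()
        # smartctl attribute rows have 10+ whitespace-separated fields;
        # field 1 is the attribute name, field 9 is the raw value.
        if len(fields) >= 10 and fields[1] in WATCH:
            values[fields[1]] = int(fields[9])
    return values

def looks_healthy(attrs):
    # All raw counts zero is the usual "physically fine" signal.
    return bool(attrs) and all(v == 0 for v in attrs.values())

attrs = parse_attrs(SAMPLE_SMART_OUTPUT)
print(attrs)
print(looks_healthy(attrs))  # True for the sample above
```

On a live system the input would come from `smartctl -A /dev/sdX`; a clean result here is consistent with the disk having been disabled by a transient bus/controller event rather than a failing drive.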
JorgeB Posted December 6, 2021

Diags are from after rebooting, so we can't see what happened, but once a disk gets disabled it needs to be rebuilt: https://wiki.unraid.net/Manual/Storage_Management#Rebuilding_a_drive_onto_itself
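For a rough sense of how long the rebuild will take: like a read-check, a rebuild must touch every sector of the disk, so the duration is approximately capacity divided by sustained throughput. A minimal sketch of that arithmetic follows; the 10 TB capacity and 160 MB/s average speed are illustrative assumptions, not values taken from the poster's hardware.

```python
# Hedged back-of-envelope: rebuild time ~ capacity / sustained speed.
# The inputs below (10 TB, 160 MB/s) are assumptions for illustration.

def rebuild_hours(capacity_tb, avg_mb_per_s):
    capacity_mb = capacity_tb * 1_000_000  # TB -> MB (decimal, as drives are marketed)
    return capacity_mb / avg_mb_per_s / 3600  # seconds -> hours

print(round(rebuild_hours(10, 160), 1))  # ~17.4 hours
```

With those assumed numbers the estimate lands near the 18-hour figure the poster saw for the read-check, which is expected since both operations do a full-disk pass.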
uaktags Posted December 6, 2021 (Author)

Stupid me. But thank you, Jorge! I'll get to rebuilding.