
[6.12.10] 1 of 2 Cache Pool SSDs died, but both SSDs show "unmountable: unsupported or no file system" (was btrfs)


Solved by JorgeB


Posted

Yesterday 1 of 2 Cache Pool SSDs died (SMART shows a failure), but both SSDs show up as "unmountable: unsupported or no file system" (the pool was btrfs, set up as a mirrored pair - I don't remember the exact setup 😅, I'm pretty sure I followed a SpaceinvaderOne tutorial video).

 

I have the appdata and nextcloud folders that are/were on these drives backed up, so I am quite calm. Nonetheless, I am at a loss on how to proceed from here. I thought I would be able to use the other drive when one eventually failed, but ... here I am with both being unmountable.

 

The diagnostics were unfortunately taken after a reboot (because of course ... sorry). But since the Samsung 970 EVO Plus shows a SMART failure and I have the data backed up to the array, that probably does not matter? Either way, I'll know better in the future. 😅

 

My Cache Pool consisted of these drives:

Cache 1: Samsung 970 EVO Plus

Cache 2: WD Blue SN570

 

I have since removed the Samsung SSD from the pool and tried just the WD SN570. I even tried formatting the drive through Unraid, but that would start and stop immediately.

 

Any help from a kind stranger to get the probably okay WD SN570 working again would be much appreciated.

unraid-nas-diagnostics-20240415-1406.zip
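
For completeness, this is roughly how the SMART status of both NVMe drives can be double-checked from the Unraid console (the device names are assumptions, match them against the Main page or the diagnostics before relying on them):

smartctl -a /dev/nvme0n1    # assumed to be the Samsung 970 EVO Plus
smartctl -a /dev/nvme1n1    # assumed to be the WD Blue SN570

The "SMART overall-health self-assessment" and "Critical Warning" lines in that output should make it clear which drive is actually failing.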

Posted

Because of the backups I "recklessly" tried formatting the pool; that did not work. Then I removed the drive and tried formatting just the SN570, which didn't succeed either. Thank you very much for looking into my mess!

 

root@UNRAID-NAS:~# btrfs fi show
Label: none  uuid: d1776af5-1bda-45f7-a369-70e42d89cf09
        Total devices 2 FS bytes used 153.51GiB
        devid    1 size 931.51GiB used 215.06GiB path /dev/nvme0n1p1
        devid    2 size 931.51GiB used 82.03GiB path /dev/nvme1n1p1

 

Posted

Looking at that output, the pool wasn't redundant, since one of the devices has much more data allocated than the other, so if one failed the data may not be recoverable.

 

Try:

 

mkdir /x
mount -v -o rescue=all,ro /dev/nvme0n1p1 /x

 

If there are errors, post them.
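
If the mount does succeed, a quick hedged check on whether the data was actually mirrored (the uneven usage above suggests it may not have been):

btrfs filesystem df /x

A "Data, RAID1" line means the data was mirrored across both devices; "Data, single" means it only ever existed on one of them.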

Posted
mount -v -o rescue=all,ro /dev/nvme0n1p1 /x
mount: /x: /dev/nvme0n1p1 already mounted on /temp.
       dmesg(1) may have more information after failed mount system call.

I mounted the NVMe to /temp yesterday while working through the Unraid forums and manual. There were a few occasions where Midnight Commander was unable to copy some files (msg: Stalling ....).

 

That's why I stopped bothering with the drives and went nuclear on the pool/drives trying to format everything. 😁
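
As an aside, for pulling whatever is still readable off a rescue mount, a non-interactive copy such as rsync usually copes better than Midnight Commander, since it reports read errors and moves on to the next file (the destination path below is just an assumption):

rsync -av /temp/ /mnt/user/backup/cache-rescue/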

Posted
34 minutes ago, JorgeB said:

so if one failed it may not be recoverable.

I do have backups on my array, so recovery is not a must. The data is probably no good anyway with some of it being unreadable, and starting fresh would be best.

  • Solution
Posted
8 minutes ago, YoHoNoMo said:

That's why I stopped bothering with the drives and went nuclear on the pool/drives trying to format everything. 😁

You must unmount that first:

 

umount /temp

 

You can then try reformatting just the good drive.
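
If the GUI format still stops immediately after unmounting, one possible next step is to clear the stale btrfs signature from the good drive first (a sketch only; /dev/nvme1n1p1 is assumed to be the WD SN570, verify the device before wiping anything):

umount /temp                  # make sure the old pool is no longer mounted anywhere
wipefs -a /dev/nvme1n1p1      # remove the leftover btrfs signature (assumed device)

After that, Unraid should treat the device as blank and offer to format it normally.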

Posted
5 minutes ago, JorgeB said:

You must unmount that first:

 

umount /temp

 

You can then try reformatting just the good drive.

Oh my ... *facepalm*

 

Now it worked, thank you!

