"UNMOUNTABLE DISK PRESENT" on cache pool disks, are the disks dead? (Solved)



Hi,

I noticed today that my cache pool disks were not being written to and there were errors in the disk logs.

This has happened a few times in the past month, but usually a server restart fixed the issue and the error did not return for days or weeks at a time.

 

Today I noticed it again and restarted my server, but when the server booted and the array started, both cache disks in the pool showed the error "Unmountable: No file system".

All the shares that were on those disks (mainly appdata) are empty, and the Docker and VM services show that no containers or VMs are present.

 

I tried multiple ways to get Unraid to recognize the disks again, to no avail.

Mounting the drives always returns "No file system".
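
(For anyone else hitting this: before doing anything destructive, a couple of read-only checks can confirm what btrfs still sees of the pool. This is a sketch; /dev/sdX1 below is a placeholder for your actual cache device, not what I ran verbatim.)

# list the btrfs filesystems and devices the kernel can still see
btrfs filesystem show

# read-only consistency check of one pool member; changes nothing on disk
btrfs check --readonly /dev/sdX1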

 

I followed the guide below and was able to copy the contents of the cache disks to another disk in the array.
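
For reference, the heart of that guide is btrfs restore, which copies files off an unmountable pool without writing to it. A minimal sketch of the kind of invocation involved, assuming /dev/sdX1 is one of the pool members and disk1 has enough free space (follow the guide itself for the exact steps):

# create a target folder on an array disk
mkdir -p /mnt/disk1/restore

# copy everything btrfs can still read from the broken pool (-v = verbose)
btrfs restore -v /dev/sdX1 /mnt/disk1/restore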

Now the questions are:

1. Can I trust the validity of the contents of those disks, or is it better to restore a backup? (I have a backup that is a few days old, so the data loss would not be too painful.)

2. Can I reformat those disks and use them again, or are they dead and I'd be better off buying new ones? (They are barely a year old and have only ever been used in Unraid, so why did they degrade so fast?)

 

 

Attachments: cahce_disk_file_sytem_check_1PNG.PNG, cahce_pool_Erroe1.PNG, cahce_pool_Erroe2.PNG, nomad-smart-20210914-1706.zip, nomad-smart-20210914-1700 (1).zip

Hi,

So to update,

My appdata backup (from CA Backup/Restore) was sadly corrupt; trying to restore it threw an "unexpected EOF" error when unpacking appdata.tar.gz.
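
A tip in hindsight: a tar.gz backup can be integrity-tested without extracting it, which would have caught this much earlier. A sketch, with the path as an example rather than my actual backup location:

# verify the gzip stream is intact
gzip -t /mnt/user/backups/appdata.tar.gz

# walk the whole tar archive without writing any files
tar -tzf /mnt/user/backups/appdata.tar.gz > /dev/null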

I am writing up the steps of what I ended up doing, so if someone else has the same issue this can help them.

 

1. Used btrfs restore to copy the data to another disk, following the amazing guide @JorgeB wrote and that I mentioned in my previous comment.

2. Disabled the cache pool.

3. Configured my 2 SSDs as two single cache disks with XFS instead of btrfs. (I don't see the point of a mirrored cache pool if corruption of the filesystem kills both disks at once; might as well use the full storage and dedicate one cache disk to appdata only and the other to VMs and fast read/write for other applications.)

4. Restored libvirt.img from CA Backup.

5. Recreated docker.img.

6. Copied all the appdata files from step 1 to the reformatted cache disk using rsync (see the sketch after this list).

7. Reinstalled all container apps from "Previous Apps".

8. Tested the containers one by one, inspecting logs and functionality.

9. Found that the following container apps were corrupted: authelia (Postgres error), swag (certificate chain file error), komga (don't remember the error), lazylibrarian (the container started fine, but the logs showed database errors).

10. Removed the corrupted container apps and deleted the appdata folders for those containers.

11. Reinstalled authelia, swag, komga, lazylibrarian.

12. Manually reconfigured those containers; luckily all my swag nginx configurations were saved on my PC as well.
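
For step 6, a sketch of the rsync copy, assuming the btrfs restore target from step 1 was /mnt/disk1/restore and the reformatted appdata disk is mounted at /mnt/cache (your paths will differ):

# -a preserves permissions/ownership/timestamps, -v verbose, -h human-readable sizes
rsync -avh /mnt/disk1/restore/appdata/ /mnt/cache/appdata/

Note the trailing slashes: they make rsync merge the contents of the source appdata into the destination appdata instead of nesting appdata/appdata.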

 

 
