GlennCottam Posted April 1, 2022

Came home today to find that the server can no longer mount the cache drives, all four of them in the pool. I booted into safe mode and started the array: every other disk mounts without a problem, but the cache disks will not, giving the error "Unmountable: No File System". I came across this article and attempted to mount a drive under /x, but got the following error:

mount: /x: can't read superblock on /dev/sdd1

This happens for all four drives I attempt to mount. There is a decent amount of data stored on these drives that I do need back. The drives should be fine; they have not been spitting out any errors over the last few weeks, until this happened. Let me know if you require any more information. Any help is much appreciated!

unraid-diagnostics-20220331-2157.zip
GlennCottam Posted April 1, 2022 (Author)

After re-reading the article, I apparently missed the command I was meant to use for 6.10+. After using the proper command, I was able to mount the SSDs. I take it from here I just need to copy the data onto a drive, format the cache pool, and copy the data back? Thanks!
ChatNoir Posted April 1, 2022

5 hours ago, GlennCottam said:
"I take it from here, I just need to copy the data onto a drive, format the cache pool, and copy the data back?"

Yes.
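For anyone following the same recovery, the copy-out / copy-back confirmed above can be sketched as below. All paths are examples (assumptions, not from this thread): the damaged pool is assumed to be mounted read-only at /x, and the backup target is assumed to be an array disk with enough free space.

```shell
# Sketch only: adjust the paths for your system.
SRC=/x                          # where the recovery mount landed
DEST=/mnt/disk1/cache_backup    # example array location with enough free space
mkdir -p "$DEST"
cp -a "$SRC/." "$DEST/"         # -a preserves ownership, modes, timestamps
# After stopping the array and reformatting the pool in the Unraid GUI:
# cp -a "$DEST/." /mnt/cache/
```

rsync -av "$SRC"/ "$DEST"/ does the same job and has the advantage of being resumable if the copy is interrupted partway through on a flaky drive.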
GlennCottam Posted April 2, 2022 (Author)

I have attempted to re-format the SSDs several times, but I am unable to format the NVMe SSD I have installed. I first tried formatting the cache in the configuration it was in (all the SSDs in one pool), then decided to try separating the NVMe from the pool. I keep getting the error "Unmountable: Wrong or no file system" when I attempt to format the drive. The syslog shows the following entries when I attempt the format:

Apr 2 12:50:27 Unraid emhttpd: shcmd (37542): /sbin/wipefs -a /dev/nvme0n1
Apr 2 12:50:27 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Apr 2 12:50:28 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Apr 2 12:50:28 Unraid root: /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (dos): 55 aa
Apr 2 12:50:28 Unraid root: /dev/nvme0n1: calling ioctl to re-read partition table: Success
Apr 2 12:50:28 Unraid kernel: nvme0n1: p1
Apr 2 12:50:28 Unraid emhttpd: shcmd (37543): /sbin/wipefs -a /dev/nvme0n1p1
Apr 2 12:50:28 Unraid kernel: nvme0n1: p1
Apr 2 12:50:28 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Apr 2 12:50:28 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Apr 2 12:50:28 Unraid root: /dev/nvme0n1p1: 8 bytes were erased at offset 0x00010040 (btrfs): 5f 42 48 52 66 53 5f 4d
Apr 2 12:50:28 Unraid emhttpd: shcmd (37544): mkfs.btrfs -f /dev/nvme0n1p1
Apr 2 12:50:28 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 2048 op 0x3:(DISCARD) flags 0x800 phys_seg 1 prio class 0
Apr 2 12:50:28 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Apr 2 12:50:28 Unraid root: ERROR: error during mkfs: Operation not permitted
Apr 2 12:50:28 Unraid root: btrfs-progs v5.15.1
Apr 2 12:50:28 Unraid root: See http://btrfs.wiki.kernel.org for more information.
Apr 2 12:50:28 Unraid root:
Apr 2 12:50:28 Unraid root: Performing full device TRIM /dev/nvme0n1p1 (447.13GiB) ...
Apr 2 12:50:28 Unraid root: NOTE: several default settings have changed in version 5.15, please make sure
Apr 2 12:50:28 Unraid root: this does not affect your deployments:
Apr 2 12:50:28 Unraid root: - DUP for metadata (-m dup)
Apr 2 12:50:28 Unraid root: - enabled no-holes (-O no-holes)
Apr 2 12:50:28 Unraid root: - enabled free-space-tree (-R free-space-tree)
Apr 2 12:50:28 Unraid root:
Apr 2 12:50:28 Unraid emhttpd: shcmd (37544): exit status: 1
Apr 2 12:50:28 Unraid emhttpd: shcmd (37545): mkdir -p /mnt/nvme
Apr 2 12:50:28 Unraid emhttpd: shcmd (37546): blkid -t TYPE='xfs' /dev/nvme0n1p1 &> /dev/null
Apr 2 12:50:28 Unraid emhttpd: shcmd (37546): exit status: 2
Apr 2 12:50:28 Unraid emhttpd: shcmd (37547): blkid -t TYPE='btrfs' /dev/nvme0n1p1 &> /dev/null
Apr 2 12:50:28 Unraid emhttpd: shcmd (37548): mount -t btrfs -o noatime,space_cache=v2 /dev/nvme0n1p1 /mnt/nvme
Apr 2 12:50:28 Unraid kernel: BTRFS info (device nvme0n1p1): flagging fs with big metadata feature
Apr 2 12:50:28 Unraid kernel: BTRFS info (device nvme0n1p1): using free space tree
Apr 2 12:50:28 Unraid kernel: BTRFS info (device nvme0n1p1): has skinny extents
Apr 2 12:50:28 Unraid root: mount: /mnt/nvme: wrong fs type, bad option, bad superblock on /dev/nvme0n1p1, missing codepage or helper program, or other error.
Apr 2 12:50:28 Unraid emhttpd: shcmd (37548): exit status: 32
Apr 2 12:50:28 Unraid emhttpd: /mnt/nvme mount error: Wrong or no file sysem
Apr 2 12:50:28 Unraid emhttpd: shcmd (37549): umount /mnt/nvme
Apr 2 12:50:28 Unraid kernel: BTRFS error (device nvme0n1p1): devid 3 uuid 09a2f355-c5d8-46dc-94ad-3316672eda04 is missing
Apr 2 12:50:28 Unraid kernel: BTRFS error (device nvme0n1p1): failed to read the system array: -2
Apr 2 12:50:28 Unraid kernel: BTRFS error (device nvme0n1p1): open_ctree failed

I have attached a more recent diagnostics file. I can only assume there is a physical issue with this SSD. I just hope that when I pulled the data from the pool, the data on that drive was included.

unraid-diagnostics-20220402-1257.zip
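As an aside, a quick way to tally those medium errors per device from a saved syslog is a grep/uniq pipeline. The here-doc below reuses two kernel lines from the excerpt above purely so the sketch is self-contained; on a live server you would point grep at /var/log/syslog instead.

```shell
# Count 'critical medium error' kernel messages per device name.
# Replace the here-doc with e.g.:  grep -o '...' /var/log/syslog | sort | uniq -c
grep -o 'critical medium error, dev [[:alnum:]]*' <<'EOF' | sort | uniq -c
Apr 2 12:50:27 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
Apr 2 12:50:28 Unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 0 op 0x1:(WRITE) flags 0x800 phys_seg 0 prio class 0
EOF
# prints:       2 critical medium error, dev nvme0n1
```

A count that keeps climbing against a single device is a strong hint that the problem is that one drive rather than the pool as a whole.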
itimpi Posted April 2, 2022 (Solution)

Those 'critical medium error' messages almost certainly mean that the SSD is faulty.
GlennCottam Posted April 2, 2022 (Author)

That's what I was afraid of. Thank you for the reply; I will see what data I am missing after restoring the data to the other three SSDs (hopefully none).
GlennCottam Posted April 3, 2022 (Author)

All is good; from what I can tell I was able to pull the data off the SSD. It took a while to finish working on a couple of corrupt databases, but other than that everything went smoothly. Guess I need a new SSD. Thanks for the help!