Cannot delete directory after removing docker container - Critical medium error - Unable to verify superblock



Hi There,

 

I'm fairly new to Unraid so if there's any details I have missed in this explanation please let me know and I will try my best to provide them!

 

Unraid Version: 6.10.3 2022-06-14

 

Hardware

 

- Motherboard: Gigabyte C246M-WU4 s1151 XEON

- Processor: Intel Xeon E-2246G 3.60GHz 1151

- RAM: 2 x Kingston KSM32ED8/32ME 32GB DDR4 3200MT/s ECC Unbuffered

- Cache Drives: 2 x Samsung 1TB 970 EVO PLUS

- HDDs: 4 x 12TB Seagate IW ST12000VN0008

 

Plugins

 

- My Servers

- CA Cleanup Appdata

- Community Applications

- Dynamix File Manager

- Nerd Tools

- Recycle Bin

- Unassigned Devices

- Unassigned Devices Plus

- Unassigned Devices Preclear

 

Recently I removed the Organizr V2 docker container from my server, as I was running into issues with Composer's vendor directory not containing the packages it needed in order to fully initialise. This led me down a rabbit hole trying to figure out what was going on, and I noticed that even though I had uninstalled the container, the `organizrv2/www/organizr/api/vendor` directory still remained in my `appdata` share. It was completely empty, and this was the only chain of directories that existed.

 

When I try to delete it I get the following error:

root@Vivec:/mnt/user/appdata#: rm -rf organizrv2/
rm: cannot remove 'organizrv2/www/organizr/api/vendor': Directory not empty
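
I also wanted to rule out Docker itself still holding a reference to that path. I believe a check along these lines would show whether any remaining container still bind-mounts anything under the old appdata folder (the inspect template is just my best guess at the right fields, so treat it as a sketch):

docker ps -a
docker ps -aq | xargs docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}' | grep -i organizr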

 

So I navigated to that directory and tried listing what's inside of it:
 

root@Vivec:/mnt/user/appdata/organizrv2/www/organizr/api/vendor# ls -lah
/bin/ls: reading directory '.': No data available
total 0
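
Since `/mnt/user` is Unraid's FUSE (shfs) view over the cache and array disks, I assume checking the same path directly on the underlying mounts would narrow down which device the stuck directory actually lives on, something like:

ls -d /mnt/cache/appdata/organizrv2/www/organizr/api/vendor 2>/dev/null
ls -d /mnt/disk*/appdata/organizrv2/www/organizr/api/vendor 2>/dev/null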

 

I've tried giving myself full permissions, changing ownership, and deleting with `sudo`, only to get the same error. I had a feeling that Composer's vendor directory might have been mounted somewhere, so with my limited knowledge I checked for this:

 

root@Vivec:~# cat /proc/mounts
rootfs / rootfs rw,size=32771260k,nr_inodes=8192815,inode64 0 0
proc /proc proc rw,relatime 0 0
sysfs /sys sysfs rw,relatime 0 0
tmpfs /run tmpfs rw,nosuid,nodev,noexec,relatime,size=32768k,mode=755,inode64 0 0
/dev/sda1 /boot vfat rw,noatime,nodiratime,fmask=0177,dmask=0077,codepage=437,iocharset=iso8859-1,shortname=mixed,flush,errors=remount-ro 0 0
/dev/loop0 /lib/firmware squashfs ro,relatime,errors=continue 0 0
overlay /lib/firmware overlay rw,relatime,lowerdir=/lib/firmware,upperdir=/var/local/overlay/lib/firmware,workdir=/var/local/overlay-work/lib/firmware 0 0
/dev/loop1 /lib/modules squashfs ro,relatime,errors=continue 0 0
overlay /lib/modules overlay rw,relatime,lowerdir=/lib/modules,upperdir=/var/local/overlay/lib/modules,workdir=/var/local/overlay-work/lib/modules 0 0
hugetlbfs /hugetlbfs hugetlbfs rw,relatime,pagesize=2M 0 0
devtmpfs /dev devtmpfs rw,relatime,size=8192k,nr_inodes=8192817,mode=755,inode64 0 0
devpts /dev/pts devpts rw,relatime,gid=5,mode=620,ptmxmode=000 0 0
tmpfs /dev/shm tmpfs rw,relatime,inode64 0 0
fusectl /sys/fs/fuse/connections fusectl rw,relatime 0 0
cgroup_root /sys/fs/cgroup tmpfs rw,relatime,size=8192k,mode=755,inode64 0 0
cpuset /sys/fs/cgroup/cpuset cgroup rw,relatime,cpuset 0 0
cpu /sys/fs/cgroup/cpu cgroup rw,relatime,cpu 0 0
cpuacct /sys/fs/cgroup/cpuacct cgroup rw,relatime,cpuacct 0 0
blkio /sys/fs/cgroup/blkio cgroup rw,relatime,blkio 0 0
memory /sys/fs/cgroup/memory cgroup rw,relatime,memory 0 0
devices /sys/fs/cgroup/devices cgroup rw,relatime,devices 0 0
freezer /sys/fs/cgroup/freezer cgroup rw,relatime,freezer 0 0
net_cls /sys/fs/cgroup/net_cls cgroup rw,relatime,net_cls 0 0
perf_event /sys/fs/cgroup/perf_event cgroup rw,relatime,perf_event 0 0
net_prio /sys/fs/cgroup/net_prio cgroup rw,relatime,net_prio 0 0
hugetlb /sys/fs/cgroup/hugetlb cgroup rw,relatime,hugetlb 0 0
pids /sys/fs/cgroup/pids cgroup rw,relatime,pids 0 0
tmpfs /var/log tmpfs rw,relatime,size=131072k,mode=755,inode64 0 0
cgroup /sys/fs/cgroup/elogind cgroup rw,nosuid,nodev,noexec,relatime,xattr,release_agent=/lib64/elogind/elogind-cgroups-agent,name=elogind 0 0
rootfs /mnt rootfs rw,size=32771260k,nr_inodes=8192815,inode64 0 0
tmpfs /mnt/disks tmpfs rw,relatime,size=1024k,inode64 0 0
tmpfs /mnt/remotes tmpfs rw,relatime,size=1024k,inode64 0 0
tmpfs /mnt/rootshare tmpfs rw,relatime,size=1024k,inode64 0 0
/dev/md1 /mnt/disk1 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/md2 /mnt/disk2 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/md3 /mnt/disk3 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/nvme0n1p1 /mnt/cache xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/nvme1n1p1 /mnt/cache_files xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
shfs /mnt/user0 fuse.shfs rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
shfs /mnt/user fuse.shfs rw,nosuid,nodev,noatime,user_id=0,group_id=0,default_permissions,allow_other 0 0
/dev/loop2 /var/lib/docker btrfs rw,noatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
/dev/loop2 /var/lib/docker/btrfs btrfs rw,noatime,ssd,space_cache=v2,subvolid=5,subvol=/ 0 0
nsfs /run/docker/netns/17456fb7326a nsfs rw 0 0
nsfs /run/docker/netns/f8624d281ea7 nsfs rw 0 0
/dev/loop3 /etc/libvirt btrfs rw,noatime,space_cache=v2,subvolid=5,subvol=/ 0 0
nsfs /run/docker/netns/f37fe19bf85e nsfs rw 0 0
nsfs /run/docker/netns/5c2171aff11c nsfs rw 0 0
nsfs /run/docker/netns/ec7cb29dd72b nsfs rw 0 0
nsfs /run/docker/netns/1d05c3c92002 nsfs rw 0 0
nsfs /run/docker/netns/default nsfs rw 0 0
nsfs /run/docker/netns/4c8b7fe055db nsfs rw 0 0
nsfs /run/docker/netns/b4cb6a31a5df nsfs rw 0 0
nsfs /run/docker/netns/cda0fa4b923a nsfs rw 0 0
nsfs /run/docker/netns/27b47e6ad080 nsfs rw 0 0
nsfs /run/docker/netns/9751c6b9b4c1 nsfs rw 0 0

 

I couldn't see any reference to it, but I tried `umount` just in case, only to be told that the directory was not mounted:

 

root@Vivec:/mnt/user/appdata# umount organizrv2/www/organizr/api/vendor/
umount: organizrv2/www/organizr/api/vendor/: not mounted.
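
In case eyeballing the full mount list missed something, I understand `findmnt` can report which filesystem actually backs a given path (assuming it's available on Unraid), roughly:

findmnt --target /mnt/user/appdata/organizrv2/www/organizr/api/vendor
findmnt --target /mnt/cache/appdata/organizrv2/www/organizr/api/vendor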

 

I then checked the Unraid logs after attempting to `rm -rf` the `./vendor` directory and discovered the following, which I'm not sure how to interpret:
 

Sep 18 16:06:10 unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 272000 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
Sep 18 16:06:10 unraid kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_imap_to_bp+0x4e/0x6a [xfs]" at daddr 0x41e80 len 32 error 61
Sep 18 16:06:10 unraid kernel: blk_update_request: critical medium error, dev nvme0n1, sector 272000 op 0x0:(READ) flags 0x1000 phys_seg 4 prio class 0
Sep 18 16:06:10 unraid kernel: XFS (nvme0n1p1): metadata I/O error in "xfs_imap_to_bp+0x4e/0x6a [xfs]" at daddr 0x41e80 len 32 error 61
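
If I'm reading it right, the `error 61` in those XFS messages corresponds to ENODATA ("No data available"), i.e. the same error `ls` reported above, so the filesystem seems to be failing to read that inode from the NVMe device. Assuming python3 is installed (e.g. via Nerd Tools), a quick way to confirm the errno mapping would be something like:

python3 -c 'import errno, os; print(errno.errorcode[61], "-", os.strerror(61))'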

 

After reading other posts on the forums I ran a SMART report on the drive, which gave me the following results:

 

root@Vivec:~# smartctl -a /dev/nvme0n1p1
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-5.15.46-Unraid] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Model Number:                       Samsung SSD 970 EVO Plus 1TB
Serial Number:                      S6P7NG0R712023M
Firmware Version:                   3B2QEXM7
PCI Vendor/Subsystem ID:            0x144d
IEEE OUI Identifier:                0x002538
Total NVM Capacity:                 1,000,204,886,016 [1.00 TB]
Unallocated NVM Capacity:           0
Controller ID:                      6
NVMe Version:                       1.3
Number of Namespaces:               1
Namespace 1 Size/Capacity:          1,000,204,886,016 [1.00 TB]
Namespace 1 Utilization:            85,334,732,800 [85.3 GB]
Namespace 1 Formatted LBA Size:     512
Namespace 1 IEEE EUI-64:            002538 5711502ef7
Local Time is:                      Sun Sep 18 16:43:13 2022 BST
Firmware Updates (0x16):            3 Slots, no Reset required
Optional Admin Commands (0x0017):   Security Format Frmw_DL Self_Test
Optional NVM Commands (0x0057):     Comp Wr_Unc DS_Mngmt Sav/Sel_Feat Timestmp
Log Page Attributes (0x0f):         S/H_per_NS Cmd_Eff_Lg Ext_Get_Lg Telmtry_Lg
Maximum Data Transfer Size:         128 Pages
Warning  Comp. Temp. Threshold:     82 Celsius
Critical Comp. Temp. Threshold:     85 Celsius

Supported Power States
St Op     Max   Active     Idle   RL RT WL WT  Ent_Lat  Ex_Lat
 0 +     7.54W       -        -    0  0  0  0        0       0
 1 +     7.54W       -        -    1  1  1  1        0     200
 2 +     7.54W       -        -    2  2  2  2        0    1000
 3 -   0.0500W       -        -    3  3  3  3     2000    1200
 4 -   0.0050W       -        -    4  4  4  4      500    9500

Supported LBA Sizes (NSID 0x1)
Id Fmt  Data  Metadt  Rel_Perf
 0 +     512       0         0

=== START OF SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

SMART/Health Information (NVMe Log 0x02)
Critical Warning:                   0x00
Temperature:                        36 Celsius
Available Spare:                    96%
Available Spare Threshold:          10%
Percentage Used:                    0%
Data Units Read:                    1,858,641 [951 GB]
Data Units Written:                 1,653,169 [846 GB]
Host Read Commands:                 17,918,106
Host Write Commands:                48,378,253
Controller Busy Time:               1,622
Power Cycles:                       55
Power On Hours:                     2,295
Unsafe Shutdowns:                   25
Media and Data Integrity Errors:    17,630
Error Information Log Entries:      17,630
Warning  Comp. Temperature Time:    0
Critical Comp. Temperature Time:    0
Temperature Sensor 1:               36 Celsius
Temperature Sensor 2:               40 Celsius

Error Information (NVMe Log 0x01, 16 of 64 entries)
Num   ErrCount  SQId   CmdId  Status  PELoc          LBA  NSID    VS
  0      17630    12  0x4395  0xc502  0x000       193760     1     -
  1      17629     3  0x2129  0xc502  0x000       193760     1     -
  2      17628     4  0x50d9  0xc502  0x000       272016     1     -
  3      17627    10  0x8388  0xc502  0x000       354832     1     -
  4      17626     3  0x410f  0xc502  0x000       272016     1     -
  5      17625     8  0x13ad  0xc502  0x000       272016     1     -
  6      17624    11  0x6021  0xc502  0x000       354832     1     -
  7      17623     5  0x2033  0xc502  0x000       272016     1     -
  8      17622     7  0x400c  0xc502  0x000       272016     1     -
  9      17621     3  0x111e  0xc502  0x000       354832     1     -
 10      17620     9  0x3094  0xc502  0x000       272016     1     -
 11      17619    11  0x3006  0xc502  0x000       272016     1     -
 12      17618     3  0x1125  0xc502  0x000       354832     1     -
 13      17617     7  0x102b  0xc502  0x000       272016     1     -
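
The 17,630 media and data integrity errors look like the worrying part to me. Since the drive reports Self_Test in its optional admin commands, I believe a recent smartctl can also kick off an NVMe self-test when pointed at the device rather than the partition (the result should then show up in the self-test section of the full report), for example:

smartctl -t short /dev/nvme0
smartctl -a /dev/nvme0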

 

I then attempted to run `xfs_repair`, but was given the following error:

 

root@Vivec:~# xfs_repair -v -n /dev/nvme0n1
Phase 1 - find and verify superblock...
bad primary superblock - bad magic number !!!

attempting to find secondary superblock...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...
.found candidate secondary superblock...
unable to verify superblock, continuing...
................................................................................Sorry, could not find valid secondary superblock
Exiting now.

 

I'm a little out of my depth at this point, but I have a feeling this is either an issue with the XFS filesystem itself or potentially a failing NVMe drive?

 

I have attached an export of the diagnostics in case they are useful.

 

Thank you for taking the time to read this and let me know if you need any more information.

vivec-diagnostics-20220921-1732.zip

24 minutes ago, JorgeB said:

It should be:

xfs_repair -v /dev/nvme0n1p1

 

 

Thanks for the correction. Running the corrected command gives me the following:
 

root@Vivec:~# xfs_repair -v /dev/nvme0n1p1
Phase 1 - find and verify superblock...
        - block cache size set to 3068192 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 165116 tail block 165116
        - scan filesystem freespace and inode maps...
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
xfs_repair: read failed: No data available
cannot read inode 269952, disk block 269952, cnt 32
Aborted
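
In the meantime I'm assuming the sensible move is to copy whatever is still readable off the cache drive before reformatting or replacing it, something along these lines (the destination path is just an example, and I'd expect any unreadable files to simply error out and be skipped):

rsync -avh /mnt/cache/ /mnt/disk1/cache-backup/

Does that sound right, or is there a better way to handle a cache drive in this state?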

 

