
Upgraded to 6.10.3 - Cache disk Unmountable: No file system



Just upgraded to 6.10.3 on my primary system and the cache drive fails to mount. Pertinent log:

 

Jun 26 22:49:12 QUAZAR emhttpd: shcmd (1455): mkdir -p /mnt/cache
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache uuid: 9e26bfb5-cb8e-4261-afed-937a83aa6106
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache ERROR: system chunk array too small 34 < 97
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache ERROR: system chunk array too small 34 < 97
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache ERROR: superblock checksum matches but it has invalid members
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache ERROR: superblock checksum matches but it has invalid members
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache ERROR: cannot scan /dev/sdi1: Input/output error
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache ERROR: cannot scan /dev/sdi1: Input/output error
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache Label: none  uuid: 9e26bfb5-cb8e-4261-afed-937a83aa6106
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache Total devices 2 FS bytes used 80.77GiB
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache devid    3 size 223.57GiB used 92.48GiB path /dev/sdg1
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache devid    5 size 223.57GiB used 92.48GiB path /dev/sdk1
Jun 26 22:49:12 QUAZAR emhttpd: /mnt/cache mount error: Invalid pool config
Jun 26 22:49:12 QUAZAR emhttpd: shcmd (1456): umount /mnt/cache
Jun 26 22:49:12 QUAZAR root: umount: /mnt/cache: not mounted.

 

If, however, I mount it manually:

 

root@QUAZAR:~# mkdir -p /mnt/cache
root@QUAZAR:~# mount /dev/disk/by-uuid/9e26bfb5-cb8e-4261-afed-937a83aa6106 /mnt/cache
root@QUAZAR:~# mount | grep cache
/dev/md1 on /mnt/disk1 type btrfs (rw,noatime,space_cache=v2,subvolid=5,subvol=/)
/dev/md2 on /mnt/disk2 type btrfs (rw,noatime,space_cache=v2,subvolid=5,subvol=/)
/dev/md3 on /mnt/disk3 type btrfs (rw,noatime,space_cache=v2,subvolid=5,subvol=/)
/dev/md4 on /mnt/disk4 type btrfs (rw,noatime,space_cache=v2,subvolid=5,subvol=/)
/dev/md5 on /mnt/disk5 type btrfs (rw,noatime,space_cache=v2,subvolid=5,subvol=/)
/dev/md6 on /mnt/disk6 type btrfs (rw,noatime,space_cache=v2,subvolid=5,subvol=/)
/dev/md8 on /mnt/disk8 type btrfs (rw,noatime,space_cache=v2,subvolid=5,subvol=/)
/dev/sdg1 on /mnt/cache type btrfs (rw,relatime,ssd,space_cache=v2,subvolid=5,subvol=/)
root@QUAZAR:~# ls -al /mnt/cache/
total 16
drwxrwxrwx  1 nobody users  52 Jun 26 04:40 ./
drwxr-xr-x 15 root   root  300 Jun 26 22:58 ../
drwxrwx---  1 nobody users   0 Jun 26 20:55 Svc.nextcloud/
drwxrwxrwx  1 nobody users 874 Nov  9  2021 appdata/
drwxrwxrwx  1 nobody users  26 Feb  1  2019 system/
root@QUAZAR:~#

 

It mounts fine. The strange part is that /dev/sdi1 is a parity disk and should not have a btrfs superblock. Even if I clear the superblock ( wipefs -a /dev/sdi1 ), it comes back like a bad habit on the next array mount attempt.
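For anyone following along, the stray signature can be inspected non-destructively before wiping; a sketch (the device path is my parity partition, adjust for yours):

```shell
# wipefs with no options only LISTS signatures; it is read-only.
# Only -a / -o actually erase anything.
wipefs /dev/sdi1

# If a btrfs superblock is present, dump it to see which pool
# UUID it claims to belong to.
btrfs inspect-internal dump-super /dev/sdi1
```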

 

I have sort of cripple-started my system by starting the array, manually mounting the cache, then restarting Docker, but that's a bit of a kludge. Any ideas on how to resolve this without rebuilding the cache? The pool config looks accurate; it just keeps thinking /dev/sdi1 is a member for some reason.
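For reference, the per-boot workaround looks roughly like this (a sketch; /etc/rc.d/rc.docker is the Docker init script on my Unraid box, and the UUID is my cache pool's):

```shell
# Re-create the mount point and mount the cache pool by UUID,
# since emhttpd refuses to mount it at array start
mkdir -p /mnt/cache
mount /dev/disk/by-uuid/9e26bfb5-cb8e-4261-afed-937a83aa6106 /mnt/cache

# Only bring Docker back up if the mount actually succeeded
mountpoint -q /mnt/cache && /etc/rc.d/rc.docker start
```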

 

Thanks!

 

 


More detail: sdi1 may be a red herring as far as the original issue goes (though it is still strange, IMO, for a parity disk to have a btrfs superblock):

 

root@QUAZAR:~# btrfs dev stat /mnt/cache/
[/dev/sdg1].write_io_errs    0
[/dev/sdg1].read_io_errs     0
[/dev/sdg1].flush_io_errs    0
[/dev/sdg1].corruption_errs  1
[/dev/sdg1].generation_errs  0
[/dev/sdk1].write_io_errs    0
[/dev/sdk1].read_io_errs     0
[/dev/sdk1].flush_io_errs    0
[/dev/sdk1].corruption_errs  0
[/dev/sdk1].generation_errs  0
root@QUAZAR:~# btrfs scrub start /mnt/cache/
scrub started on /mnt/cache/, fsid 9e26bfb5-cb8e-4261-afed-937a83aa6106 (pid=5914)

 

Waiting to see where this takes us. I'm thinking the corruption error is what fails the cache mount at array start, and hoping the scrub clears it so I can start the array as normal.
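While it runs I'm just polling it; something like this works since scrub status is read-only (a sketch):

```shell
# Loop until the scrub is no longer reported as running, then
# print the final status; "btrfs scrub status" is read-only.
while btrfs scrub status /mnt/cache/ | grep -q 'Status: *running'; do
    sleep 30
done
btrfs scrub status /mnt/cache/
```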


Scrub completed with no errors found (the status snapshot below was captured while it was still running), and I zeroed the counters:

root@QUAZAR:~# btrfs scrub status /mnt/cache/
UUID:             9e26bfb5-cb8e-4261-afed-937a83aa6106
Scrub started:    Sun Jun 26 23:12:38 2022
Status:           running
Duration:         0:00:20
Time left:        0:05:26
ETA:              Sun Jun 26 23:18:29 2022
Total to scrub:   227.32GiB
Bytes scrubbed:   13.11GiB  (5.77%)
Rate:             671.50MiB/s
Error summary:    no errors found
root@QUAZAR:~# btrfs dev stat -z /mnt/cache/
[/dev/sdg1].write_io_errs    0
[/dev/sdg1].read_io_errs     0
[/dev/sdg1].flush_io_errs    0
[/dev/sdg1].corruption_errs  0
[/dev/sdg1].generation_errs  0
[/dev/sdk1].write_io_errs    0
[/dev/sdk1].read_io_errs     0
[/dev/sdk1].flush_io_errs    0
[/dev/sdk1].corruption_errs  0
[/dev/sdk1].generation_errs  0

 

Stopped Docker, unmounted /mnt/cache, then stopped the array; it stopped cleanly.

 

Now the MAIN > Array Operation tab only gives me the option to reboot or shut down, with no option to start the array, even though the array and cache configs look fine. Guess I am rebooting!

 

No glory; the same issue persists, though now with no corruption errors :(


OK, found the following thread:

 

Sounds very familiar. Unfortunately, these drives have been in use in this array configuration for at least a year, and I have snapshot tasks that run against them, which is why they are btrfs. I guess reboots will require manual intervention until this gets resolved (if it ever does). Bummer.

 

EDIT: Actually, about 3 months ago I edited the array to have 8 disks and moved disk 7 into slot 8 to more accurately reflect my physical layout, then rebuilt the array parity, leaving slot 7 empty for the future. FYI.

 

Edited by neurocis
