Cache share becomes read-only. Already tried reformatting.



Hello!

 

I'm running into an issue where the cache pool becomes read-only until I restart the array. I first noticed it when trying to download Docker containers.

I found errors like these in my log (these are the new errors after my troubleshooting below, but they looked similar before):

Aug 18 17:29:19 Canteen kernel: BTRFS error (device loop3): bad tree block start, want 31358976 have 72013368191603779
Aug 18 17:29:19 Canteen kernel: BTRFS error (device loop3): bad tree block start, want 31358976 have 3106939744589177635
Aug 18 17:29:19 Canteen kernel: BTRFS: error (device loop3) in __btrfs_free_extent:3188: errno=-5 IO failure
Aug 18 17:29:19 Canteen kernel: BTRFS info (device loop3): forced readonly
Aug 18 17:29:19 Canteen kernel: BTRFS: error (device loop3) in btrfs_run_delayed_refs:2150: errno=-5 IO failure

 

I thought this might just be a filesystem issue, so I copied everything off the cache pool onto the array and reformatted it:

  • Stopped the array
  • Unassigned the cache drives
  • Used blkdiscard:
root@Canteen:~# blkdiscard -f /dev/sdd
blkdiscard: Operation forced, data will be lost!
root@Canteen:~# blkdiscard -f /dev/sde
blkdiscard: Operation forced, data will be lost!
root@Canteen:~# blkdiscard -f /dev/sdf
blkdiscard: Operation forced, data will be lost!
root@Canteen:~# blkdiscard -f /dev/sdg
blkdiscard: Operation forced, data will be lost!

 

  • Re-added cache drives to array
  • Reformatted cache pool
  • Copied data from array back onto cache drives. It all appears to have copied fine (verification sketch below)
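
If it helps, I can run a read-back check to confirm the pool itself has no checksum errors after the copy. A minimal sketch, assuming the pool is mounted at the usual /mnt/cache:

btrfs scrub start -B /mnt/cache    # -B stays in the foreground and prints an error summary when done
btrfs scrub status /mnt/cache      # check progress/error counts if started without -B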

 

As seen above, I still ran into the same error. Any ideas? Do I have a bad drive/cable/port?

It seems odd that copying all the data back to the pool was fine.
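
Is there a quick way to rule out a bad drive or cable? I was thinking of checking the btrfs per-device error counters and the SMART data on each cache device, roughly like this (sdd through sdg are my cache drives, and /mnt/cache is the pool mount):

btrfs device stats /mnt/cache    # read/write/flush/corruption/generation error counters per device
smartctl -a /dev/sdd             # repeat for sde/sdf/sdg; reallocated/pending sectors and CRC errors point at drive or cable trouble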

 

Any next steps or other help is greatly appreciated.

 

Best,

Greg

canteen-diagnostics-20220818-1839.zip


loop3 is your docker.img. If you copied the corrupt docker.img back to your reformatted cache, then you would still have a corrupt docker.img.
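
You can confirm that with losetup, which lists each loop device together with its backing file:

losetup -l    # the BACK-FILE column shows which file each loop device (including loop3) is backed by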

 

Why do you have 100G for docker.img anyway? Have you had problems filling it? 20G is usually more than enough. Making it larger won't fix filling it; it will only make it take longer to fill. The usual reason for filling docker.img is an application writing to a path that isn't mapped.

 

When you recreate it, set it to 20G:

https://wiki.unraid.net/Manual/Docker_Management#Re-Create_the_Docker_image_file
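
Rough outline of what that procedure boils down to, assuming the stock image location and that you remove the file from the command line rather than with the delete checkbox in the GUI:

# 1. Settings -> Docker -> set Enable Docker to No, then Apply
rm /mnt/user/system/docker/docker.img    # remove the old (corrupt) image; path assumes the default location
# 2. Set the Docker vDisk size to 20G, re-enable Docker, Apply; a fresh image is created automatically
# 3. Reinstall your containers from Apps -> Previous Apps; your templates and settings are kept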

