dmtalon - btrfs errors on device loop2



How do you know loop2 is the docker image? (Is this just the default?)

 

I'm getting these errors for loop3

 

Mar 24 16:27:16 NAS1 kernel: BTRFS error (device loop3): bdev /dev/loop3 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
Mar 24 16:27:16 NAS1 kernel: loop: Write error at byte offset 91684864, length 4096.
Mar 24 16:27:16 NAS1 kernel: print_req_error: I/O error, dev loop3, sector 179072
Mar 24 16:27:16 NAS1 kernel: BTRFS error (device loop3): bdev /dev/loop3 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
Mar 24 16:27:16 NAS1 kernel: BTRFS: error (device loop3) in btrfs_commit_transaction:2257: errno=-5 IO failure (Error while writing out transaction)
Mar 24 16:27:16 NAS1 kernel: BTRFS info (device loop3): forced readonly
Mar 24 16:27:16 NAS1 kernel: BTRFS warning (device loop3): Skipping commit of aborted transaction.
Mar 24 16:27:16 NAS1 kernel: BTRFS: error (device loop3) in cleanup_transaction:1877: errno=-5 IO failure
Mar 24 16:27:16 NAS1 kernel: BTRFS info (device loop3): delayed_refs has NO entry
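The wr/rd/flush/corrupt/gen numbers in those lines are btrfs's persistent per-device error counters, and they can be read without digging through the syslog. A sketch, assuming btrfs-progs is installed and the filesystem is the one mounted at /var/lib/docker:

```shell
# Show the persistent error counters for every device in the filesystem
btrfs device stats /var/lib/docker

# After the underlying problem is fixed, the counters can be zeroed
btrfs device stats --reset /var/lib/docker
```

Nonzero write errors (wr) on a loop device usually point at the file or the filesystem underneath the image rather than the image contents themselves.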

 


Loop devices can change from boot to boot, and across service starts and restarts.

 

From your original logs

/dev/loop2             5.0G  3.1G  1.3G  71% /var/lib/docker
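Since the numbering isn't stable, the mapping has to be checked each time. A quick way to see what file backs each loop device (a sketch; `losetup` is part of util-linux, and the docker.img path shown is only a typical Unraid default, not taken from these diagnostics):

```shell
# List every loop device with its backing file
losetup --list

# Or query a single device; on Unraid the docker image commonly lives at
# /mnt/user/system/docker/docker.img (your path may differ)
losetup --list /dev/loop3
```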

Nothing particularly wrong with the cache drive's SMART, and no errors leading up to the docker.img bitching and complaining.

 

Without new diags showing what loop3 is, can't really say for sure.

Edited by Squid

I might have a couple things going on... <sigh>

 

I have what looks like a dorked-up cache drive (xfs).

 

root@NAS1:~# xfs_repair -v /dev/sdi1
Phase 1 - find and verify superblock...
        - block cache size set to 2277800 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 182199 tail block 181661
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.
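What that message is asking for, in order (a sketch; the device and mountpoint here come from this thread, and -L stays the last resort because it throws away the journal):

```shell
# 1. Try to mount, so the kernel replays the XFS log
mount /dev/sdi1 /mnt/cache

# 2. Unmount cleanly, which leaves the log empty
umount /mnt/cache

# 3. Re-run the repair against the now-clean log
xfs_repair -v /dev/sdi1

# Only if the mount itself fails: destroy the log, accepting possible corruption
# xfs_repair -L /dev/sdi1
```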

 

And since this was a cache drive I didn't care enough to troubleshoot a LOT, so I just tried to clear the log.

 

root@NAS1:~# xfs_repair -L /dev/sdi1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being
destroyed because the -L option was used.
Invalid block length (0x0) for buffer
Log inconsistent (didn't find previous header)
empty log check failed

fatal error -- failed to clear log

 

I guess next I'm just going to attempt to reformat it.
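For the record, reformatting in place looks like this (a sketch; it destroys everything on the partition, and on Unraid you'd normally do this from the GUI by setting the filesystem type and formatting rather than by hand):

```shell
# Recreate the XFS filesystem on the cache partition.
# -f forces mkfs over the old, damaged filesystem. DESTROYS ALL DATA.
mkfs.xfs -f /dev/sdi1
```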

 

 

 

25 minutes ago, Dmtalon said:

probably a party foul

Most definitely. Please take all of this to another thread and don't do it again. It should be obvious from what has happened here why hijacking is discouraged. It is very possible that someone could actually lose data because of confusion caused by hijacking.

