Dmtalon Posted March 24, 2018

How do you know loop2 is the docker image? (Is this just the default?) I'm getting these errors for loop3:

Mar 24 16:27:16 NAS1 kernel: BTRFS error (device loop3): bdev /dev/loop3 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
Mar 24 16:27:16 NAS1 kernel: loop: Write error at byte offset 91684864, length 4096.
Mar 24 16:27:16 NAS1 kernel: print_req_error: I/O error, dev loop3, sector 179072
Mar 24 16:27:16 NAS1 kernel: BTRFS error (device loop3): bdev /dev/loop3 errs: wr 4, rd 0, flush 0, corrupt 0, gen 0
Mar 24 16:27:16 NAS1 kernel: BTRFS: error (device loop3) in btrfs_commit_transaction:2257: errno=-5 IO failure (Error while writing out transaction)
Mar 24 16:27:16 NAS1 kernel: BTRFS info (device loop3): forced readonly
Mar 24 16:27:16 NAS1 kernel: BTRFS warning (device loop3): Skipping commit of aborted transaction.
Mar 24 16:27:16 NAS1 kernel: BTRFS: error (device loop3) in cleanup_transaction:1877: errno=-5 IO failure
Mar 24 16:27:16 NAS1 kernel: BTRFS info (device loop3): delayed_refs has NO entry
Squid Posted March 24, 2018

Loop devices can change from boot to boot, and from service start and restart. From your original logs:

/dev/loop2      5.0G  3.1G  1.3G  71%  /var/lib/docker

Nothing particularly wrong with the cache drive's SMART, and no errors leading up to the docker.img bitching and complaining. Without new diags showing what loop3 is, I can't really say for sure.
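For anyone following along: a quick way to see what each loopN maps to on a live system is with standard util-linux tools. This is a generic sketch, not taken from this system's diagnostics; the loop3 path in the comment is just an example of the kind of output you'd see.

```shell
# List every active loop device and the file backing it,
# e.g.  /dev/loop2: []: (/mnt/cache/system/docker/docker.img)
losetup -a

# Show where loop-backed filesystems are mounted (empty if none are mounted)
df -h | grep '/dev/loop' || true

# Query one specific device via sysfs; guarded in case loop3 isn't in use
cat /sys/block/loop3/loop/backing_file 2>/dev/null || echo "loop3 not in use"
```

Since the mapping can change across boots, this is worth re-running (or grabbing fresh diagnostics) each time an error appears, rather than assuming loop3 is still the same image as last boot.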
Dmtalon Posted March 24, 2018

Sorry, I'm not the OP, so I kind of hijacked his post; probably a party foul.
Squid Posted March 24, 2018

13 minutes ago, Dmtalon said:
Sorry, I'm not the OP, so I kind of hijacked his post; probably a party foul.

No one (especially my wife) has ever said that I was that attentive.
Dmtalon Posted March 24, 2018

I might have a couple things going on... <sigh> I have what looks like a dorked-up cache drive (XFS):

root@NAS1:~# xfs_repair -v /dev/sdi1
Phase 1 - find and verify superblock...
        - block cache size set to 2277800 entries
Phase 2 - using internal log
        - zero log...
zero_log: head block 182199 tail block 181661
ERROR: The filesystem has valuable metadata changes in a log which needs to be replayed.
Mount the filesystem to replay the log, and unmount it before re-running xfs_repair.
If you are unable to mount the filesystem, then use the -L option to destroy the log
and attempt a repair. Note that destroying the log may cause corruption -- please
attempt a mount of the filesystem before doing this.

And since this was a cache drive, I didn't care enough to troubleshoot a LOT, and tried to just clear the log:

root@NAS1:~# xfs_repair -L /dev/sdi1
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed
because the -L option was used.
Invalid block length (0x0) for buffer
Log inconsistent (didn't find previous header)
empty log check failed
fatal error -- failed to clear log

I guess next I'm just going to attempt to reformat it.
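For reference, the order xfs_repair itself recommends in that ERROR message is: mount to replay the journal, unmount, then repair, and only reach for -L if the mount fails. A minimal sketch, wrapped in a function so nothing runs until you call it (the device path matches the post above; the function name and mount point are my own):

```shell
# Hedged sketch of the XFS log-replay-then-repair sequence; destructive
# operations on real hardware, so nothing executes until called explicitly.
replay_then_repair() {
    dev="$1"
    mnt="$(mktemp -d)"
    # Mounting an intact XFS filesystem replays its journal automatically...
    mount "$dev" "$mnt"
    umount "$mnt"
    # ...after which xfs_repair should run without the "valuable metadata
    # changes in a log" warning.
    xfs_repair "$dev"
    rmdir "$mnt"
}

# Usage (against the partition, not the whole disk):
# replay_then_repair /dev/sdi1
#
# Only if the mount itself fails is -L the fallback; it destroys the log
# and can lose the most recent metadata changes:
# xfs_repair -L /dev/sdi1
```

The "fatal error -- failed to clear log" above after going straight to -L is one reason the man page pushes the mount attempt first: if the log itself is unreadable, -L can fail outright, and at that point a reformat (with data restored from backup) is often the pragmatic path for a cache drive.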
trurl Posted March 24, 2018

25 minutes ago, Dmtalon said:
probably a party foul

Most definitely. Please take all of this to another thread and don't do it again. It should be obvious from what has happened here why hijacking is discouraged. It is very possible that someone could actually lose data because of confusion caused by hijacking.
trurl Posted March 24, 2018

I have split your posts to their own thread.
Dmtalon Posted March 24, 2018

Thanks @trurl... sorry for the trouble. My initial issue matched the existing post, which put me in the other thread, and I was just trying to find out if my docker was the problem.
trurl Posted March 24, 2018

Even if you have the same problem, the details will usually be different for different users, so it is better to keep support separate to avoid giving bad advice to people. And of course, hijacking has been an internet etiquette no-no since before the world wide web.