Matt_G Posted December 15, 2019
Yesterday, I started having issues with Docker. I was trying to log in to CrashPlan and it wouldn't accept my password, even though it was correct. Fix Common Problems (FCP) is reporting two issues: 1) "Unable to write to Docker Image" and "The Docker image is full or corrupted", and 2) "Unable to write to cache" and "Drive mounted read-only or completely full."
Troubleshooting steps attempted so far: I disabled Docker altogether and tried to delete the docker.img file over SSH. No joy; it reports a read-only filesystem and will not let me delete /mnt/cache/docker.img. I then tried changing the image size, which I could do, but when I attempt to enable Docker it states the service failed to start.
Not sure where to go from here. What should my next step be? BTW, I am on version 6.7.2. The cache consists of two Samsung 850 EVO SSDs in a btrfs mirror.
Squid Posted December 15, 2019
Post the diagnostics
Matt_G Posted December 15, 2019
Diagnostics attached. unraid-diagnostics-20191215-2120.zip
Squid Posted December 15, 2019
It's your cache drive that's been remounted as read-only. I'm not comfortable with btrfs, so I'll let someone else (@johnnie.black) advise on this, but to me it appears that it's fully allocated:

Overall:
    Device size:                 465.77GiB
    Device allocated:            465.77GiB
    Device unallocated:            2.05MiB
    Device missing:                  0.00B
    Used:                        219.50GiB
    Free (estimated):            122.30GiB  (min: 122.30GiB)
    Data ratio:                       2.00
    Metadata ratio:                   2.00
    Global reserve:               61.23MiB  (used: 3.36MiB)

             Data      Metadata   System
Id Path      RAID1     RAID1      RAID1     Unallocated
-- --------- --------- ---------  --------  -----------
 1 /dev/sdg1 231.85GiB   1.00GiB  32.00MiB      1.02MiB
 2 /dev/sdh1 231.85GiB   1.00GiB  32.00MiB      1.02MiB
-- --------- --------- ---------  --------  -----------
   Total     231.85GiB   1.00GiB  32.00MiB      2.05MiB
   Used      109.55GiB 199.52MiB  48.00KiB
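For readers hitting the same symptom, a minimal sketch of how to spot the "fully allocated" condition in `btrfs filesystem usage` output. The sample line below is copied from the report above as stand-in input; on a live system you would pipe `btrfs filesystem usage /mnt/cache` into the same awk filter.

```shell
# Sketch: detect chunk exhaustion from `btrfs filesystem usage` output.
# Sample line taken from the report above (assumption: cache at /mnt/cache).
sample='    Device unallocated:            2.05MiB'
unalloc=$(printf '%s\n' "$sample" | awk '/Device unallocated/ {print $3}')
# A value in MiB or KiB while "Free (estimated)" is still large means btrfs
# has no room left to allocate new chunks, even though data space remains.
echo "$unalloc"
```

When unallocated space hits zero, btrfs cannot create the new metadata chunk a write may need, and the filesystem flips to read-only exactly as seen here.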
JorgeB Posted December 16, 2019
12 hours ago, Squid said: "but to me it appears that it's fully allocated."
It is; this should help: https://forums.unraid.net/topic/62230-out-of-space-errors-on-cache-drive/?do=findComment&comment=610551
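The fix in the linked thread boils down to running balance with increasingly aggressive `-dusage` filters, so btrfs repacks partially used data chunks and returns the freed space to the unallocated pool. A sketch, with the commands echoed as a dry run since balance is slow and needs a writable pool; the mount point is the Unraid default and an assumption here:

```shell
# Sketch: reclaim unallocated space on a fully allocated btrfs pool.
# -dusage=N only rewrites data chunks that are less than N% full, so start
# small and escalate. Echoed as a dry run; drop the echo to run for real.
MOUNT=/mnt/cache   # assumption: default Unraid cache mount point
for pct in 0 25 50 75; do
  echo "btrfs balance start -dusage=${pct} ${MOUNT}"
done
```

The `-dusage=0` pass is nearly free, since completely empty chunks are just released without copying any data.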
Matt_G Posted December 16, 2019
Thanks for the reply @johnnie.black. I read that thread and tried balancing the cache pool. No joy; it errors out because the filesystem is read-only. I rebooted the server and immediately tried to re-balance, and it seemed to run for about 30 seconds. Then it threw the read-only filesystem error again. I assume at this point I need to copy the data off the cache, format the drives, and re-create the cache pool? Basically follow the instructions here? https://wiki.unraid.net/Replace_A_Cache_Drive
Matt_G Posted December 17, 2019
So I started the array in maintenance mode and checked the btrfs filesystem. I like the line that says the cache appears valid but isn't. WTH?
Matt_G Posted December 17, 2019
dmesg | tail shows this:

[ 9140.519458] BTRFS info (device loop2): forced readonly
[ 9140.519460] BTRFS: error (device loop2) in btrfs_sync_log:3168: errno=-5 IO failure
[ 9140.519727] loop: Write error at byte offset 13172736, length 4096.
[ 9140.519729] print_req_error: I/O error, dev loop2, sector 25728
[ 9140.519731] BTRFS error (device loop2): bdev /dev/loop2 errs: wr 2, rd 0, flush 0, corrupt 0, gen 0
[ 9140.519754] loop: Write error at byte offset 12914688, length 4096.
[ 9140.519755] print_req_error: I/O error, dev loop2, sector 25224
[ 9140.519758] BTRFS error (device loop2): bdev /dev/loop2 errs: wr 3, rd 0, flush 0, corrupt 0, gen 0
[ 9140.519780] BTRFS error (device loop2): pending csums is 12288
[ 9140.522486] BTRFS error (device sdg1): pending csums is 1572864

Are these drives toast? They are 3.5 years old with over 25,000 hours on them.
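To separate loop-device noise from genuine SSD trouble, the per-device btrfs error counters (the same wr/rd/flush/corrupt/gen counters dmesg prints) and SMART data are the things to look at. A sketch, echoed as a dry run because both commands need the actual hardware; the device names come from earlier in the thread and may differ on another system:

```shell
# Sketch: check whether the physical partitions, not just loop2 (docker.img
# on the read-only pool), are accumulating errors. Echoed as a dry run.
for dev in /dev/sdg1 /dev/sdh1; do
  echo "btrfs device stats ${dev}"   # wr/rd/flush/corrupt/gen counters
done
echo "smartctl -a /dev/sdg"          # SMART health and wear attributes
```

If the physical devices show zero errors and SMART looks clean, the read-only flip was a filesystem-level (allocation) problem rather than failing drives.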
JorgeB Posted December 17, 2019
The loop device errors are a consequence of the filesystem being read-only. If the balance keeps failing, best to re-format.
Matt_G Posted December 18, 2019
How can I force a format on those two drives? I am not seeing a way to do that via the GUI. I thought XFS might be an option there, but it isn't.
JonathanM Posted December 18, 2019
51 minutes ago, Matt_G said: "How can I force a format on those two drives? I am not seeing a way to do that via the GUI. I thought XFS might be an option there, but it isn't."
The GUI route is a little tricky. With the array STOPPED, change the number of visible cache slots to 1. The option to format the single assigned drive as XFS will then be available. Stop the array again, assign the other cache drive, format it, stop once more, set the slots back to 2, change the filesystem to btrfs, and assign both drives.
Or do it the easy way: at the command line, run blkdiscard /dev/sd? twice, substituting the correct sd designation for each of the two drives as shown on the Main page. It was sdg and sdh in the info posted earlier, but drive letters can change, so verify immediately before running the command. That will wipe them out and allow a fresh format.
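The command-line route above can be sketched as follows. blkdiscard irrevocably wipes the device, so everything is echoed as a dry run here; remove the echoes to execute. The sdg/sdh letters are from the earlier output and are an assumption that must be re-verified on the day:

```shell
# Sketch: wipe both cache SSDs so Unraid offers a fresh format.
# DESTRUCTIVE when run for real; echoed as a dry run.
echo "lsblk -o NAME,MODEL,SIZE"   # confirm which letters are the cache SSDs
for dev in /dev/sdg /dev/sdh; do
  echo "blkdiscard ${dev}"
  echo "blkdiscard ${dev}"        # run twice, per the advice above
done
```

On SSDs, blkdiscard issues a TRIM over the whole device rather than writing zeros, so it is fast and also resets the drive's view of which blocks are in use.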
Matt_G Posted December 18, 2019
Gentlemen, everything is back up and running just dandy, thanks to all your help. A big thank you to all three of you. I learned a few things as well, which is always a good thing. Wishing you and yours a very merry Christmas!