flokason Posted March 29

Like the topic says, I updated from 6.12.8 to 6.12.9 and now I get this message on my dockers: "Docker Service failed to start." Attached are my diagnostics. I would appreciate any help.

tower-diagnostics-20240329-2118.zip
itimpi Posted March 29

The syslog in the diagnostics is full of errors relating to the cache drive, which is probably why the docker service is not starting. I would suggest starting by running a filesystem check on that pool.
flokason Posted March 29

I can only do it on Cache, not Cache 2 (they are mirrored). I get this:

Opening filesystem to check...
Checking filesystem on /dev/nvme0n1p1
UUID: 7928d4a8-c447-4027-8952-2e27c43e51a9
[1/7] checking root items
[2/7] checking extents
[3/7] checking free space tree
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 511549218816 bytes used, no error found
total csum bytes: 401640012
total tree bytes: 1649328128
total fs tree bytes: 993787904
total extent tree bytes: 166559744
btree space waste bytes: 372308360
file data blocks allocated: 1470746066944
 referenced 495950946304
flokason Posted March 29

I do have the Appdata Backup plugin, and the last backup was on the 25th of March. Should I restore that?
itimpi Posted March 29

Just now, flokason said:
"I do have the Appdata Backup plugin, and the last backup was on the 25th of March. Should I restore that?"

Not sure. It might be worth waiting to see if @JorgeB has any ideas.
flokason Posted March 29

I think I have to format and start from scratch on my cache drives:

root@Tower:~# btrfs dev stats /mnt/cache
[/dev/nvme0n1p1].write_io_errs 296591154
[/dev/nvme0n1p1].read_io_errs 454910
[/dev/nvme0n1p1].flush_io_errs 1525783
[/dev/nvme0n1p1].corruption_errs 16434475
[/dev/nvme0n1p1].generation_errs 13247
[/dev/nvme1n1p1].write_io_errs 0
[/dev/nvme1n1p1].read_io_errs 0
[/dev/nvme1n1p1].flush_io_errs 0
[/dev/nvme1n1p1].corruption_errs 0
[/dev/nvme1n1p1].generation_errs 0
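For reference, counters like these can be checked in a script rather than by eye. A minimal sketch (a hypothetical helper, not part of Unraid) that sums the error counters from `btrfs dev stats` output and flags a non-zero total; here it parses the sample captured above, since in real use you would pipe in the live command output instead:

```shell
#!/bin/sh
# Hypothetical helper: sum the error counters from `btrfs dev stats`
# output. Parses a captured sample here; in real use, replace the
# variable with:  stats=$(btrfs dev stats /mnt/cache)
stats='[/dev/nvme0n1p1].write_io_errs 296591154
[/dev/nvme0n1p1].read_io_errs 454910
[/dev/nvme0n1p1].flush_io_errs 1525783
[/dev/nvme0n1p1].corruption_errs 16434475
[/dev/nvme0n1p1].generation_errs 13247
[/dev/nvme1n1p1].write_io_errs 0'

# Sum the second column; any non-zero total means a counter has tripped.
total=$(printf '%s\n' "$stats" | awk '{sum += $2} END {print sum}')

if [ "$total" -gt 0 ]; then
    echo "btrfs errors detected: $total"
else
    echo "no btrfs errors recorded"
fi
```

Note that these counters are cumulative: they persist across reboots until explicitly reset with `btrfs dev stats -z <mountpoint>`, so old errors keep showing even after the underlying problem is fixed.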
flokason Posted March 29

I got kind of depressed and googled a bit. I thought I should delete everything from both of my cache drives and then restore it with the Appdata Backup plugin. I ran the commands shown in the screenshot, and the dockers started working again, but now there is heavy read/write activity on my cache drive. I have literally no idea what is happening.
JorgeB Posted March 30

11 hours ago, flokason said:
"root@Tower:~# btrfs dev stats /mnt/cache
[/dev/nvme0n1p1].write_io_errs 296591154
[/dev/nvme0n1p1].read_io_errs 454910
[/dev/nvme0n1p1].flush_io_errs 1525783
[/dev/nvme0n1p1].corruption_errs 16434475
[/dev/nvme0n1p1].generation_errs 13247"

This means one of the devices dropped offline in the past. Post new diagnostics.
flokason Posted March 30

Thank you, JorgeB. I would appreciate any advice so I can prevent this from happening again, and perhaps how to fix it if it does.

Best regards
tower-diagnostics-20240330-1052.zip
JorgeB Posted March 30

The pool looks OK now. It was doing a balance, which would explain all the activity; I assume there is no more now? See here for better pool monitoring, so you'd get notified if a device drops again:
https://forums.unraid.net/topic/46802-faq-for-unraid-v6/?do=findComment&comment=700582
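The monitoring the linked FAQ describes works along these lines: periodically run `btrfs dev stats` and raise an Unraid notification when any counter is non-zero. A minimal sketch, assuming the usual Unraid `notify` helper location (verify the path on your own install, and refer to the FAQ post for the supported version):

```shell
#!/bin/sh
# Sketch of a pool-monitoring cron script (assumptions: the pool is
# mounted at /mnt/cache and the Unraid notify helper lives at the
# path below -- verify both before relying on this).
MOUNTPOINT="/mnt/cache"
NOTIFY="/usr/local/emhttp/webGui/scripts/notify"

# Sum every error counter reported for the pool's devices; if the
# btrfs command is unavailable or fails, the total defaults to 0.
errors=$(btrfs dev stats "$MOUNTPOINT" 2>/dev/null \
    | awk '{sum += $2} END {print sum + 0}')

if [ "${errors:-0}" -gt 0 ] && [ -x "$NOTIFY" ]; then
    # Raise a warning-level notification in the Unraid web UI.
    "$NOTIFY" -i warning -s "btrfs errors on $MOUNTPOINT" \
        -d "btrfs dev stats reports $errors accumulated errors"
else
    echo "pool $MOUNTPOINT: no errors recorded"
fi
```

Scheduled hourly (for example via the User Scripts plugin), a script like this would have flagged the dropped device long before the docker service failed to start.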
flokason Posted March 30

10 minutes ago, JorgeB said:
"The pool looks OK now. It was doing a balance, that would explain all the activity, I assume no more now?"

Thank you for the help. Yes, that activity has stopped; it looks normal now. I will look into that pool monitoring, thank you.