TimTaylor Posted August 2, 2023

Hello all, since yesterday the Docker system on Unraid has suddenly been acting up. The containers start but are only partially accessible. I already restarted Unraid last night; it worked briefly, and now the same thing is happening again. I don't understand what the problem suddenly is. Does anyone see the issue here?

Quote:
Aug 2 13:07:05 NAS kernel: BTRFS: error (device sdc1: state A) in __btrfs_free_extent:3067: errno=-2 No such entry

Quote:
root@NAS:~# btrfs check /dev/sdc1
Opening filesystem to check...
Checking filesystem on /dev/sdc1
UUID: f13af22b-683e-4211-8b10-19a3913b2398
[1/7] checking root items
[2/7] checking extents
data extent[23743852544, 4096] referencer count mismatch (root 9004 owner 5940 offset 0) wanted 0 have 1
data extent[23743852544, 4096] bytenr mimsmatch, extent item bytenr 23743852544 file item bytenr 0
data extent[23743852544, 4096] referencer count mismatch (root 8521300293055423276 owner 4294936705 offset 0) wanted 1 have 0
backpointer mismatch on [23743852544 4096]
ERROR: errors found in extent allocation tree or chunk allocation
[3/7] checking free space tree
[4/7] checking fs roots
[5/7] checking only csums items (without verifying data)
[6/7] checking root refs
[7/7] checking quota groups skipped (not enabled on this FS)
found 130698866688 bytes used, error(s) found
total csum bytes: 84459360
total tree bytes: 1169408000
total fs tree bytes: 976371712
total extent tree bytes: 79200256
btree space waste bytes: 231943849
file data blocks allocated: 186390126592, referenced 135480156160

It seems the cache drive has problems?

Attachment: nas-diagnostics-20230802-1321.zip
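For readers hitting similar errors, one way to triage whether this is a filesystem-level or a hardware-level problem is to check btrfs's per-device error counters and the drive's SMART log. This is a hedged sketch, not from the thread: the device name `/dev/sdc` and mount point `/mnt/cache` are assumptions for this system, and the guard makes the script a harmless no-op on machines where they don't exist.

```shell
# Triage sketch: DEV and MNT are assumed names for this system -- adjust.
DEV="${DEV:-/dev/sdc}"
MNT="${MNT:-/mnt/cache}"

if [ -b "$DEV" ] && command -v btrfs >/dev/null 2>&1 && mountpoint -q "$MNT"; then
    # Cumulative per-device I/O error counters kept by btrfs; rising
    # write/flush/corruption counts usually point at hardware or cabling.
    btrfs device stats "$MNT"
    # The drive's own health log (needs smartmontools installed).
    smartctl -a "$DEV" | grep -iE 'reallocated|pending|crc' || true
else
    echo "skipping: $DEV or $MNT not available on this machine"
fi
```

Non-zero `corruption_errs` with clean SMART attributes tends to indicate a software or RAM issue rather than a failing disk.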
TimTaylor Posted August 2, 2023 (Author)

Edited, because NGINX was not the problem.
TimTaylor Posted August 2, 2023 (Author)

I tried to revert to 6.11.5, but that didn't work, so I'm back on 6.12.3. I can't start any Docker container any more; the cache drive seems to have errors, apparently because of the 6.12 version, and is now in read-only mode. I really need help now. Here are the current diagnostic files:

Attachment: nas-diagnostics-20230802-1709.zip
JorgeB Posted August 2, 2023 (Solution)

The pool is in the middle of a balance and it's crashing because the filesystem is corrupt. See if you can copy the data off as-is; if not, see here for some recovery options: try mounting read-only and backing up the data, then recreate the pool and restore the data.
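The steps above (mount read-only, copy the data off, recreate the pool, restore) can be sketched as a script. The device and backup paths are assumptions for this system, and the whole thing is gated behind an explicit `RUN_RECOVERY=1` flag so the sketch is a no-op unless deliberately enabled.

```shell
# Recovery sketch for the procedure described above. DEV and BACKUP are
# assumed paths; set RUN_RECOVERY=1 on the real system to actually run it.
DEV="${DEV:-/dev/sdc1}"
BACKUP="${BACKUP:-/mnt/disk1/cache_backup}"

if [ "${RUN_RECOVERY:-0}" = 1 ] && [ -b "$DEV" ]; then
    mkdir -p /mnt/recovery "$BACKUP"
    # Read-only mount with the kernel's most tolerant recovery options;
    # rescue=all needs kernel 5.11+, older kernels can try -o ro,usebackuproot.
    mount -o ro,rescue=all "$DEV" /mnt/recovery
    # Copy everything off; rsync preserves attributes and can be re-run
    # if it stops partway.
    rsync -aH /mnt/recovery/ "$BACKUP"/
    umount /mnt/recovery
    # Now recreate the pool from the Unraid GUI and restore the data.
else
    echo "dry run: set RUN_RECOVERY=1 on the real system to execute"
fi
```

Mounting read-only first matters because every write to a corrupt btrfs filesystem risks making the damage worse.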
TimTaylor Posted August 2, 2023 (Author)

I'm copying all the data off the cache right now with Midnight Commander; it has been running for two hours now. It seems to be only sdc1, the cache drive, that is affected. Can I reformat it, or convert it to a ZFS drive (btrfs seems unreliable, and I read in another post that the error comes back), and then copy the data back?
JorgeB Posted August 2, 2023

Quote (TimTaylor): Can I reformat it or make it a ZFS drive?

You can. If issues continue, there is possibly some hardware problem, like bad RAM.
TimTaylor Posted August 2, 2023 (Author)

I don't think it's the RAM; the NAS worked for years without problems, and for the last year on Unraid. So I guess it's the update itself. But I will try to repair it like this.
TimTaylor Posted August 2, 2023 (Author)

I repaired the cache with that btrfs command, so it's working again. I will keep an eye on it. Thank you for the help.
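The post doesn't name the exact command, but the repair in question is presumably `btrfs check --repair`, which upstream documents as a last resort: run it only on an unmounted device, and only after the data is safely backed up. A guarded sketch follows; the device name is an assumption for this system, and the commands are gated behind `RUN_REPAIR=1` so the sketch is inert as written.

```shell
# Repair sketch (assumed device; last resort only, back up the data first).
DEV="${DEV:-/dev/sdc1}"

if [ "${RUN_REPAIR:-0}" = 1 ] && [ -b "$DEV" ] && ! grep -qs "$DEV" /proc/mounts; then
    # --repair rewrites metadata in place on the unmounted device.
    btrfs check --repair "$DEV"
    # Re-run a read-only check afterwards; it should report no errors.
    btrfs check "$DEV"
else
    echo "dry run: set RUN_REPAIR=1 (device unmounted, data backed up) to execute"
fi
```

If the same corruption returns after a repair on a freshly created filesystem, that strengthens the case for a hardware cause such as bad RAM, as suggested above.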