Sptz87 Posted March 2

Mar 2 13:50:25 AWE kernel: BTRFS: error (device nvme0n1p1: state A) in btrfs_run_delayed_refs:2149: errno=-28 No space left
Mar 2 13:50:25 AWE kernel: BTRFS info (device nvme0n1p1: state EA): forced readonly
Mar 2 13:50:25 AWE kernel: BTRFS warning (device nvme0n1p1: state EA): Skipping commit of aborted transaction.
Mar 2 13:50:25 AWE kernel: BTRFS: error (device nvme0n1p1: state EA) in cleanup_transaction:1992: errno=-28 No space left
Mar 2 13:56:04 AWE kernel: BTRFS warning (device nvme0n1p1: state EA): checksum verify failed on logical 2169117310976 mirror 1 wanted 0xb123218b found 0xb40b712a level 0
Mar 2 13:56:04 AWE kernel: BTRFS error (device nvme0n1p1: state EA): parent transid verify failed on logical 2169117310976 mirror 2 wanted 10415 found 10411

So I got a warning that my cache drive was full due to downloads. I tried running the mover but it didn't work. I also tried deleting all the files in qBittorrent (it was a large torrent, and it showed as ERRORED). I did a delete, but the files are still there. Running "rm" in a terminal says the files are read-only. The above is the log from the first drive in the pool. No idea what to do now. I thought that when the cache got full it would just start downloading to the array? That was my understanding of the logic there. Any help? Thanks!
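For reference, the errno=-28 in those kernel lines is just the negated POSIX ENOSPC code, which is why the filesystem forced itself read-only; Python's standard library confirms the mapping:

```python
import errno
import os

# The BTRFS kernel messages report the negated errno value:
# errno=-28 corresponds to ENOSPC.
print(errno.ENOSPC)               # 28
print(os.strerror(errno.ENOSPC))  # No space left on device
```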
Frank1940 Posted March 2

To give the GURUs a bit more information, please attach the diagnostics file to your next post in this thread.
itimpi Posted March 2

Probably need to wait for @JorgeB to give you the best advice on how to recover from this.

2 minutes ago, Sptz87 said:
I thought when it got full it would just start downloading to the array? Thought that was the logic there.

That is the basic idea, but you have to specify the cutover point. Set the Minimum Free Space value for the pool to tell Unraid when to switch to bypassing the pool, before it gets filled up too far.
Sptz87 Posted March 2 (Author)

Attached diagnostics! AWE Diagnostics Mar 2.zip

1 minute ago, itimpi said:
Probably need to wait for @JorgeB to give you best advice on how to recover from this. That is the basic idea, but you have to specify the cutover point.

I noticed that I can't set a Minimum Free Space value for the pool while the array is running, so I thought setting it on the shares would be enough. Obviously I was wrong. I don't care about recovering the files that caused this (the torrent downloads); all I care about is that nothing in appdata etc. is broken!
itimpi Posted March 2

10 minutes ago, Sptz87 said:
I noticed that I can't set a minimum free space when the array is running. So I thought in shares would be enough.

Correct - the array has to be stopped. The value on User Shares is primarily intended for the case where a User Share spans multiple array drives, so Unraid knows the threshold at which to switch to the next drive used by the share.
Sptz87 Posted March 2 Author Share Posted March 2 (edited) 38 minutes ago, itimpi said: Correct - the array had to be stopped. The value on User Shares is primarily intended to control when you have a User Share that spans multiple array drives so Unraid knows the threshold for switching to the next drive to be used by the share. Yep, makes sense. Still I wouldn't expect 2.55TB out of 3TB to be considered full? Mainly when I have preallocate disk space in qbit. Luckily I have an appdata / vms backup from this morning. So should be good in that front whatever happens. Edited March 2 by Sptz87 Quote Link to comment
Kilrah Posted March 2

You have mismatched disk sizes, presumably in a RAID1 setup. Btrfs free space reporting is basically meaningless in that case: it can run out of blocks to allocate at unexpected times, and it is likely to corrupt when that happens.
Sptz87 Posted March 2 (Author)

38 minutes ago, Kilrah said:
You have mismatched disk sizes, presumably in a RAID1 setup. Btrfs free space is basically meaningless in that case.

It's actually RAID0, because I wanted as large a cache as possible, and since I back up appdata and VM data every day I didn't really care if any of that data was lost. It's just downloads anyway, which eventually get moved to the array. Hence me finding it weird that it shows full at 2.55TB when it's a 2TB + 1TB pair of NVMe drives.
Sptz87 Posted March 2 (Author)

Any help with the best course of action here, please? And what are the best options to avoid this in the future? If I could save appdata and make sure it's not corrupted, that'd be great; otherwise I'll suck it up, as I have a backup. The main thing is generating all the preview thumbnails again 😩
Sptz87 Posted March 2 (Author)

Anyone, please? I have no idea what to do...
Sptz87 Posted March 2 Author Share Posted March 2 (edited) Stopped all dockers, VMS. Turned docker off in Settings. Trying to stop the array results in this: Just stays like this.... Don't know what to do... When I click on any drive or share I get this. Not sure if it's because it's "stopping" the array... dmesg -T: Edited March 2 by Sptz87 Quote Link to comment
Kilrah Posted March 2

You'll have to force a shutdown, then delete the pool entirely and recreate it. Before that, if you have enough space on the array, you can try starting the array again after rebooting and copying everything off first (copying, not moving) while the pool is mounted read-only.
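The copy-versus-move distinction matters here: a move deletes each source file as it goes, so an interrupted move off a failing pool can leave you with neither copy intact. A minimal illustration, using throwaway mktemp directories as stand-ins for the pool mount and an array share (the real paths on an Unraid box, e.g. /mnt/cache as the source, are assumptions to adjust):

```shell
# Stand-in directories; on the server these would be the pool mount
# (source) and a share on the array (destination).
SRC=$(mktemp -d)
DST=$(mktemp -d)
echo "important" > "$SRC/appdata.bak"

# cp -a copies recursively and preserves attributes/timestamps,
# and leaves the source untouched - nothing is deleted from the pool.
cp -a "$SRC"/. "$DST"/

ls "$SRC" "$DST"   # the file now exists in both places
```

On the server itself, `rsync -a` from the pool mount to an array share (paths assumed) does the same job with resumability; just leave `--remove-source-files` off so nothing is deleted from the failing pool.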
JorgeB Posted March 3

Not sure how it managed to get to 2.5TB with RAID0; it should only be able to use 2TB, since only the single profile can use the full 3TB. Stop the Docker and VM services so nothing writes to the pool, reboot, and post new diags. If the pool doesn't immediately go read-only again, it may be possible to remove some data, or to convert the pool to the single profile.
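JorgeB's capacity numbers follow from how btrfs allocates chunks per profile, and a rough greedy simulation reproduces them for a 2TB + 1TB pool. This is a sketch only: real btrfs allocates data in ~1 GiB chunks and also reserves metadata chunks, all of which this ignores.

```python
def btrfs_usable(devices, profile, chunk=1):
    """Rough usable capacity of a btrfs pool, ignoring metadata overhead.

    Greedy chunk-allocation model: "raid0" stripes each allocation equally
    across every device that still has free space (minimum two devices);
    "raid1" mirrors each chunk on the two devices with the most free space;
    "single" simply uses every byte of every device.
    """
    devs = list(devices)
    if profile == "single":
        return sum(devs)
    usable = 0
    while True:
        live = [i for i, d in enumerate(devs) if d >= chunk]
        if len(live) < 2:          # both raid0 and raid1 need >= 2 devices
            return usable
        if profile == "raid0":
            for i in live:         # one stripe element per device with space
                devs[i] -= chunk
            usable += chunk * len(live)
        else:  # raid1: one chunk on the two devices with the most free space
            a, b = sorted(live, key=lambda i: devs[i])[-2:]
            devs[a] -= chunk
            devs[b] -= chunk
            usable += chunk


# 2TB + 1TB pool, in GB units:
print(btrfs_usable([2000, 1000], "raid0"))   # 2000 - last 1TB unallocatable
print(btrfs_usable([2000, 1000], "raid1"))   # 1000 - limited by smaller disk
print(btrfs_usable([2000, 1000], "single"))  # 3000 - full 3TB usable
```

So with RAID0 the pool can stripe only until the 1TB drive is exhausted; the remaining 1TB of the larger drive can never be allocated, which is why writes hit ENOSPC well before the reported 3TB is used.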