Chukwuka13 Posted July 4, 2023

When my ZFS pool got to around 90% capacity, I started receiving the "[Errno 28] No space left on device" error when writing to it. I tried moving files off the pool, reducing usage to 88.5%, yet the error persists. I later noticed the Minimum free space setting was set to 10%, which could explain the original error. However, I have since tried setting it to values below 5%, and the No space left error still persists. I'm not sure what the cause could be at this point.

This pool is configured with my Array as secondary storage, so that mover moves files from the ZFS pool into the Array. Here is a screenshot with more detail on my share configuration.

I'm wondering whether this error may be a result of my moving content from the pool into the Array outside of the mover task; I don't know if that is supported. I did this because mover seems to either be broken since the 6.12 upgrade or very slow when running against the ZFS pool: I had the task running for almost 2 days and saw no sign of anything actually being moved.

Any suggestion on how to resolve this would be greatly appreciated.
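(In case it helps anyone debug the same thing, here's a rough sketch of the minimum-free-space check with made-up numbers matching the ~90% usage described above. On a real server the figures would come from `zfs list -o name,used,avail poolname` or the Unraid dashboard; the pool size here is purely hypothetical.)

```shell
# Sketch: reproduce the minimum-free-space floor check by hand.
# Real numbers would come from `zfs list`; these are illustrative only.
total_kb=$((10 * 1024 * 1024 * 1024))   # hypothetical 10 TiB pool, in KiB
used_kb=$(( total_kb * 90 / 100 ))      # ~90% used, as in the post above
free_kb=$(( total_kb - used_kb ))
floor_pct=10                            # "Minimum free space: 10%"
floor_kb=$(( total_kb * floor_pct / 100 ))
if [ "$free_kb" -le "$floor_kb" ]; then
    echo "below floor: new writes to this pool will fail with ENOSPC"
else
    echo "above floor: writes allowed"
fi
```

With 10% free and a 10% floor the check trips, which matches the symptom: the pool has space physically, but the floor makes writes fail.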
JorgeB Posted July 4, 2023

Please post the diagnostics.
Chukwuka13 Posted July 4, 2023

@JorgeB here you go: diagnostics-20230704-0825.zip
JorgeB Posted July 4, 2023

Logs are spammed with nchan errors; please reboot to clear them and post new diags after array start and a pool write attempt.
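(For reference, a quick way to see how badly the log is being spammed is to count the nchan lines in the syslog. A sketch with a stand-in sample file — the message text below is only an example of the sort of nchan errors Unraid's nginx logs, not the exact wording from these diags:)

```shell
# Count nchan error lines; a here-doc sample stands in for /var/log/syslog.
cat > /tmp/sample_syslog <<'EOF'
Jul  4 08:00:01 Tower nginx: 2023/07/04 08:00:01 [crit] nchan: Out of shared memory
Jul  4 08:00:02 Tower kernel: eth0: link becomes ready
Jul  4 08:00:03 Tower nginx: 2023/07/04 08:00:03 [error] nchan: error publishing message
EOF
grep -c 'nchan' /tmp/sample_syslog    # -> 2
```

On the server itself that would be `grep -c nchan /var/log/syslog`; a count in the thousands explains why a reboot is needed before the diags are readable.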
Chukwuka13 Posted July 4, 2023

I'm not able to reboot at the moment; I'm currently running a parity check on the array and a scrub on the zpool to see if that helps anything. The parity check probably won't finish for about another 48 hours.
Chukwuka13 Posted July 4, 2023

While I wait for the parity sync, can you recommend anything to prevent the nchan errors? I've been seeing these for a while and don't understand the cause. My server has 256 GB of memory, so it should have more than enough to prevent any out-of-memory issues.
JorgeB Posted July 5, 2023

If you don't need it, try disabling IPv6; there are some reports that it helps.
Llamrei Posted July 5, 2023

Hi, I've got the same problem with a pool that is ZFS-formatted. It seems to ignore the Minimum Free Space setting and enforce 10% free space. That behaviour started for me with the update to 6.12.2.
JorgeB Posted July 5, 2023

I cannot reproduce this. The share was at the default 10% minimum free space; after that value was reached, the transfer overflowed to the array. I then changed the minimum free space to 5% and new writes went to the pool. I assume you are still writing to the user share?
Llamrei Posted July 5, 2023

Yes, I write to /mnt/user/Downloads
JorgeB Posted July 5, 2023

45 minutes ago, Llamrei said: Yes, I write to /mnt/user/Downloads

Please post your diags after a write attempt where you get the error.
Chukwuka13 Posted July 5, 2023

@JorgeB My scrub finished with a few permanent errors on some files, which I've gone ahead and deleted. I then paused the parity sync, rebooted my system, and attempted to write to my share, but I still got the insufficient-storage error. Here is the diagnostic after the write attempt: diagnostics-20230705-0840.zip

Keep in mind the ZFS pool is now at 86.4% capacity and the minimum free space is set to 4% (screenshots attached).
Chukwuka13 Posted July 5, 2023

5 hours ago, JorgeB said: If you don't need it try disabling IPv6, there are some reports that it helps.

It looks like IPv6 was never enabled; my network protocol is currently set to IPv4.
Chukwuka13 Posted July 5, 2023

On 7/3/2023 at 10:08 PM, Chukwuka13 said: I'm wondering if this error may have been a result of my moving content from my pool into the Array outside of the mover task. I don't know if that is supported. I did this because the mover task seems to either be broken since the 6.12 upgrade or is very slow when running against the ZFS pool. I had the task running for almost 2 days and saw no sign of anything actually being moved.

I turned logging on during a mover scan, and I see a lot of errors about files failing to move due to the "No space left on device" error. This is probably why nothing was moved after 2 days of leaving that task running. My array is also not at full capacity.
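(On the question of moving files off the pool outside of mover: what mover does is essentially copy each file to an array disk and then delete the source copy, so doing it by hand is generally fine as long as the share's directory layout is preserved. A sketch below, with temp dirs standing in for hypothetical `/mnt/poolname/ShareName` and `/mnt/disk1/ShareName` paths; on a real server the common one-liner is something like `rsync -avh --remove-source-files /mnt/poolname/ShareName/ /mnt/disk1/ShareName/`.)

```shell
# Sketch: move files out of a share by hand the way mover does
# (copy to destination, then delete the source copy).
src=$(mktemp -d) && dst=$(mktemp -d)            # stand-ins for pool/array paths
mkdir -p "$src/sub" && echo data > "$src/sub/file1"

# Walk every file under the source, recreating the directory layout.
( cd "$src" && find . -type f ) | while read -r f; do
    mkdir -p "$dst/$(dirname "$f")"
    cp -p "$src/$f" "$dst/$f" && rm "$src/$f"   # delete source only if copy succeeded
done

# Prune the now-empty source directories.
find "$src" -mindepth 1 -type d -empty -delete
```

The `&& rm` guard matters: if the destination is full and the copy fails, the source copy is kept, which is the safe failure mode.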
JorgeB Posted July 5, 2023 (marked as Solution)

33 minutes ago, Chukwuka13 said: Here is the diagnostic after the write attempt:

According to the diags you are hitting the share floor, not the pool floor. Which share are you writing to?
Chukwuka13 Posted July 5, 2023

The share is configured to use the zpool as primary storage and the Unraid Array as secondary storage. I did notice that even though the zpool was configured with a minimum free space of 5%, the share itself was somehow configured with a minimum free space of 50%. I don't know if that setting was recently changed, since my server has been well over 50% capacity for a while now. Reducing the value in the share settings did resolve the issue.
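(A footnote for anyone else chasing this: Unraid keeps per-share settings in flat files under `/boot/config/shares/`, and — treat the key name as an assumption on my part — the minimum-free-space value is the `shareFloor` entry, so you can check the floor without clicking through each share. A sketch with a stand-in file and a made-up `Downloads` share:)

```shell
# Sketch: inspect a share's minimum-free-space floor from its config file.
# A sample file stands in for /boot/config/shares/Downloads.cfg.
cat > /tmp/Downloads.cfg <<'EOF'
shareComment="downloads share"
shareFloor="50%"
shareUseCache="yes"
EOF
grep '^shareFloor' /tmp/Downloads.cfg
```

On a real server, `grep -H shareFloor /boot/config/shares/*.cfg` would list the floor for every share at once, which would have surfaced the stray 50% value immediately.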
darkslyde Posted July 6, 2023

EDIT: After restarting the server so I could get a clean diagnostics file, the issue miraculously resolved itself. I no longer have to transfer the folders in smaller batches, and it has also stopped writing duplicate empty folders on the origin drive.

I'm having the same issue: there isn't enough space unless I transfer a smaller batch rather than the whole folder. Also, the transfer (using Krusader) is re-creating the share's folders on the origin pool. Origin: Rockerboys. Destination: Netrunners. Screenshots of the share config and computed share settings attached.
Llamrei Posted July 6, 2023

I, too, can't replicate the problem now after a reboot.