Report Comments posted by Carlos Talbot
-
So a single BTRFS cache drive doesn't resolve the issue. What's the easiest way to reformat the drive to XFS? Thanks.
-
As I stated in my original post, I'm on 6.8.0-rc7. As for my particular issue (cache pool/high load), it was first reported in 2017, so it has been open for over two years now.
-
18 minutes ago, trurl said:
In case you still need something to fill in your understanding
Array = disks in the parity array
/mnt/user/subfolder = a user share named subfolder
User shares always include cache. Unless the share is set to Use cache: No, all new writes go to cache.
Got it. I'm in the process of switching from 2 drives in the cache pool to 1 and keeping it on BTRFS. I'm just surprised this issue is still ongoing, as it's very easy to reproduce.
-
21 minutes ago, johnnie.black said:
That doesn't really answer my question, is that share set to use cache?
Sorry, yes, it's set to Yes for cache.
This got me thinking. I tried the same copy command to another share that is not using cache. Sure enough, the load held steady at 5 and never got higher (this was with a Plex transcode running in the background). Containers were accessible without issue.
So it does appear to be the cache that is affecting this.
-
8 minutes ago, johnnie.black said:
Is the copy going to the array or the cache pool?
Array - /mnt/user/subfolder
-
I recently upgraded to rc7 thinking this problem was behind me. It still persists, and it's very easy to reproduce. I copy several GB of files from an unassigned disk to a /mnt/user path. After the memory buffer fills and writes are flushed to disk, I start seeing the I/O wait shoot the load up to 45, disrupting all running dockers. It takes at least 5 minutes for the load to subside and the system to return to normal.
I have a cache pool setup with 2 SSDs (no Samsung drives at this point).
Is BTRFS the culprit?
I'll have to go back to rc5, as the lack of Nvidia drivers is killing my performance as well.
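For anyone who wants to reproduce this, a minimal sketch of the copy-and-watch test. The real paths would be an unassigned disk (e.g. under /mnt/disks) and a user share under /mnt/user; those are replaced here with temp directories so the sketch runs anywhere, and the file size is scaled down (bump `count` to a few thousand for a real multi-GB test):

```shell
#!/bin/sh
# Sketch of the reproduction: write a large file, copy it to the
# destination, force a flush, then check the 1-minute load average.
# SRC/DST are stand-ins for /mnt/disks/<unassigned> and /mnt/user/<share>.
SRC=$(mktemp -d)
DST=$(mktemp -d)

# Create a 64 MiB test file (use count=4096 or more for a multi-GB test).
dd if=/dev/zero of="$SRC/testfile" bs=1M count=64 status=none

# Copy, then force the writeback flush that triggers the I/O wait spike.
cp "$SRC/testfile" "$DST/testfile"
sync

# 1-minute load average; on an affected system this climbs far above normal.
cut -d' ' -f1 /proc/loadavg
```

Watching `iostat -x 5` or `top` in a second terminal during the copy shows whether the time is going to I/O wait rather than CPU.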
-
I do see a negative impact. I experience two scenarios. The first is when I try to shut down a VM and it hangs. During the shutdown I see continuous stale NFS messages in a tcpdump between the vSphere host and Unraid, and I eventually have to power off the VM forcibly.
The second is when all the VMs are shut down; I can see this from the vCenter GUI. I still have access to vCenter itself, as it's the only VM not using an NFS datastore from Unraid.
[6.7.x] Very slow array concurrent performance
in Stable Releases
Posted
Thanks. I'm back on XFS and things are noticeably different. I tried the same copy process to a share that has cache enabled and the load looked fine. I will avoid BTRFS until I hear otherwise.
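For the record, a quick way to confirm which filesystem a cache path is actually on after a conversion like this. The /mnt/cache mount point is an assumption (substitute your pool's actual mount point); the sketch falls back to /tmp so it runs anywhere:

```shell
#!/bin/sh
# Print the filesystem type backing a mount point.
# /mnt/cache is a placeholder for your cache pool; /tmp is the fallback
# so this sketch can run on any Linux box.
CACHE=${CACHE:-/tmp}

# GNU stat in filesystem mode reports the fs type (e.g. "btrfs", "xfs").
stat -f -c %T "$CACHE"

# Same information via df's Type column.
df -T "$CACHE" | awk 'NR==2 {print $2}'
```

If the first command still prints "btrfs" after the reformat, the share is being written to a different pool or device than expected.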