allanp81 Posted April 23, 2020
What changed in Unraid itself, though?
itimpi Posted April 23, 2020
1 hour ago, allanp81 said: What changed in Unraid itself, though?
I suspect it is a side effect of moving onto later revisions of the Linux kernel and packages rather than Unraid-specific code. I very much doubt that Limetech are ignoring this issue, but we will see whether they have found anything when the next beta arrives.
Allram Posted April 23, 2020
Mine actually got better when I turned off NCQ (set it to No), changed the scheduler to None, and changed md_num_stripes to 8192. I don't know which one gave the best effect as I changed them all at the same time, but now everything works even though I have big transfers to the btrfs cache drives while the mover is running.
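For anyone wanting to try the same tweaks from the command line rather than the GUI, a rough sketch of roughly equivalent commands is below. The device name sdX is a placeholder, and mdcmd is Unraid's own tunable helper, so treat the exact syntax as an assumption and prefer the Disk Settings page where possible.

```bash
# Rough command-line equivalents of the tweaks above (sketch only).
# Replace sdX with your actual cache device; these settings do not persist across reboots.

# Disable NCQ on the drive by forcing its queue depth to 1
echo 1 > /sys/block/sdX/device/queue_depth

# Switch the I/O scheduler to "none"
echo none > /sys/block/sdX/queue/scheduler

# Raise the md_num_stripes tunable via Unraid's mdcmd helper (an array tunable)
mdcmd set md_num_stripes 8192
```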
thrroow Posted May 3, 2020
On 4/23/2020 at 2:43 PM, johnnie.black said: The problem is that it doesn't affect everyone; I have a pool of MX500s that has been working for more than a year without any issues.
I actually used to have this problem, then it went away for months, and now it's back. No significant hardware changes in that time.
JorgeB Posted May 3, 2020
Anyone having this issue should check whether the docker image is on the array; if it is, move it to the cache or just outside the array. There are reports this can help, though probably mostly when the I/O wait happens while copying to the array, e.g.:
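If you're not sure where your docker image actually lives, a quick check like the sketch below works, assuming the default system share path (adjust the path if your image is stored elsewhere):

```bash
# If the image shows up under /mnt/cache it is already on the cache pool;
# if it shows up under any /mnt/diskN it is sitting on the array.
ls -lh /mnt/cache/system/docker/docker.img 2>/dev/null
ls -lh /mnt/disk*/system/docker/docker.img 2>/dev/null
```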
thomast_88 Posted May 3, 2020
On 4/23/2020 at 8:43 PM, johnnie.black said: The problem is that it doesn't affect everyone; I have a pool of MX500s that has been working for more than a year without any issues.
Which makes it a bigger concern, since it's inconsistent.
JorgeB Posted May 3, 2020
1 minute ago, thomast_88 said: Which makes it a bigger concern, since it's inconsistent.
More importantly, it makes it difficult for LT to fix if they can't reproduce it.
thrroow Posted May 3, 2020
7 hours ago, thomast_88 said: Which makes it a bigger concern, since it's inconsistent.
It has started and stopped happening to me at seemingly random intervals.
Iceman24 Posted May 3, 2020
12 hours ago, johnnie.black said: Anyone having this issue should check whether the docker image is on the array; if it is, move it to the cache or just outside the array. There are reports this can help, though probably mostly when the I/O wait happens while copying to the array.
Should the docker image be on the cache anyway?
JorgeB Posted May 4, 2020
11 hours ago, Iceman24 said: Should the docker image be on the cache anyway?
Yes, same for appdata, but not everyone is doing it.
aptalca Posted May 4, 2020 (Author)
4 hours ago, johnnie.black said: Yes, same for appdata, but not everyone is doing it.
Doesn't Unraid default the appdata location to /mnt/user/appdata? Is that share created as a cache-only share by default? I don't remember because I manually set all of mine a long time ago.
trurl Posted May 4, 2020
2 minutes ago, aptalca said: Doesn't Unraid default the appdata location to /mnt/user/appdata? Is that share created as a cache-only share by default? I don't remember because I manually set all of mine a long time ago.
Based on looking at diagnostics, I think it is probably cache-prefer, along with domains and system, but I haven't actually tested that myself since I set all that up maybe even before cache-prefer was implemented.
JorgeB Posted May 4, 2020
34 minutes ago, trurl said: I think it is probably cache-prefer, along with domains and system.
It is, but if, for example, someone starts without a cache drive it will be created on the array, and it won't be moved unless the services are disabled by the user.
trurl Posted May 4, 2020
2 minutes ago, johnnie.black said: It is, but if, for example, someone starts without a cache drive it will be created on the array, and it won't be moved unless the services are disabled by the user.
I have certainly written a lot about that.
Rand Posted May 4, 2020
I also experienced this same issue with my two Samsung_SSD_840_EVO_250GB drives that I had in BTRFS RAID1 for my cache pool. Large file transfers would cause insane iowait, system load would shoot up to 60-80, and Docker containers and the Unraid web UI would become entirely unresponsive. Since moving the pool from raid1 to single, the issue seems to be gone. With 6.9 reportedly having the option for multiple cache pools, I would really like to revisit the option of a mirrored cache pool.
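For reference, converting an existing pool from raid1 to single profiles can be done with a btrfs balance rather than recreating the pool. The sketch below assumes the pool is mounted at /mnt/cache; make sure you have backups before converting:

```bash
# Convert both data and metadata profiles from raid1 to single (back up first)
btrfs balance start -dconvert=single -mconvert=single /mnt/cache

# Confirm the resulting profiles afterwards
btrfs filesystem df /mnt/cache
```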
Vargink Posted May 30, 2020
Thought I'd throw my five cents in. I had this issue with some Crucial 512 GB NVMe sticks that I had in a mirror, and when I was doing a bit of an upgrade to my Unraid box I decided to try some different SSDs to see if I could get past this btrfs problem. I got some Silicon Power 512 GB SSDs and threw them in a mirror, and sadly it's the same problem. So I guess I'm just getting really unlucky with my choice of NAND in these things, or yeah, it's just bloody random lol.
grobian Posted June 7, 2020
@limetech please look into this issue. I have a single MX500 SSD installed, btrfs encrypted, and everything comes to a halt when concurrent file streams are handled. The UI shows that the SSD is reading 500 MB/s continuously, and the average load is 27. This major issue is at least two years old.
Vipa Posted July 5, 2020
Bump. Same problem with 2x Kingston A400 480 GB SSDs, btrfs encrypted.
JorgeB Posted July 5, 2020
Tom already mentioned that the next beta will have partitions on SSDs aligned on the 1MiB boundary. This will hopefully help with this issue, though it will require re-formatting them.
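To see whether a pool device is already aligned that way, you can check where its first partition starts; with 512-byte sectors a start of 2048 corresponds to the 1MiB boundary (sdX below is a placeholder for your own device):

```bash
# Show the partition table, including the start sector of partition 1
fdisk -l /dev/sdX

# Or read the start sector directly from sysfs
cat /sys/block/sdX/sdX1/start   # 2048 = 1MiB boundary with 512-byte sectors
```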
dansonamission Posted July 11, 2020
Added two Samsung EVO SSDs to a RAID pool yesterday; performance drops right off after 20-30 seconds of copying a large file. This thread is nearly 4 years old and it doesn't look like it's going to be fixed any time soon. My trial is almost up and I'm not sure I'll be continuing.
allanp81 Posted July 11, 2020
It's so annoying; I had to re-architect my server to get around this problem (and obviously there was expense involved).
JonathanM Posted July 11, 2020
35 minutes ago, dansonamission said: it doesn't look like it's going to be fixed any time soon.
On 7/5/2020 at 3:11 AM, johnnie.black said: the next beta will have partitions on SSDs aligned on the 1MiB boundary
dansonamission Posted July 11, 2020
Which only says it 'may help' with the issue. Can anyone running this beta confirm this is fixed?
allanp81 Posted July 11, 2020
Sorry, I'm only running the beta on my backup server currently, which has no cache drives.
dansonamission Posted July 11, 2020
Is there a way to format the cache drive(s) in the current release via the command line with this 1MiB boundary?