Superdean56 Posted December 17, 2019
Unraid 6.8 was just released a couple of days ago. Does anyone know if this update fixes this issue? I have been having it with version 6.7.2.
JorgeB Posted December 17, 2019
4 minutes ago, Superdean56 said: Unraid 6.8 was just released a couple of days ago. Does anyone know if this update fixes this issue?
It fixes the array writes starve reads issue.
Superdean56 Posted December 17, 2019
22 minutes ago, johnnie.black said: It fixes the array writes starve reads issue.
Thank you!!
geeksheikh Posted July 25, 2020
Sorry to resurrect this older thread, but did we ever find a solution to this? It's happening to me too. It appears that the mover kicks off and starves the pool of I/O; the parity disks are at max throttle (write). I do have an NVMe cache disk. I'd like to be able to cap the mover's throughput at something like 85 MB/s, rather than the 140 MB/s that crushes my pool to the point that Plex can't read a single bit. I saw the mover priority plugin; is that still the best route for this? I do have reconstruct write ("turbo write") enabled as well. Perhaps I need to put turbo write on a schedule and/or tune my reconstruct write settings? Thanks.
trurl Posted July 25, 2020
Or perhaps you need to schedule mover for idle time.
Crad Posted October 29, 2020
Putting my hat in the ring. When the mover runs it locks everything up (not just Plex). I'm testing a few things now, but mostly just commenting to say "me too" (using Unraid 6.8.3).
tjb_altf4 Posted October 29, 2020
24 minutes ago, Conrad Allan said: Putting my hat in the ring. When the mover runs it locks everything up (not just Plex).
What time does your mover run? Mine runs at 3:30am daily, when the array is not otherwise active (faster moving) and the mover is unlikely to impact me.
Crad Posted October 29, 2020
I manually trigger my mover quite often, as I'm moving data in and out of the cache for work (I deal with hundreds of thousands of high-res images). However, I have it set to run at 1am by default, so it normally doesn't interfere with anything. It's just that once or twice a week I need to trigger it manually.
trurl Posted October 29, 2020
8 hours ago, Conrad Allan said: I'm moving data in and out of the cache for work
Since these are important files I assume you have backups. So why not just keep them on cache?
Crad Posted November 14, 2020
@trurl sorry I missed your reply. At any one time I want to work through a few TB of data, so keeping it all on cache isn't an option. Generally I've struck a balance: whatever I think I'll need the next day gets moved to the cache drive overnight, and anything I'm done with gets moved back to the array. Mover runs at 11pm every night.
adminmat Posted December 31, 2020
I ran an rsync backup script today and immediately got a call from a Plex user that streaming had stopped. I assume this is just the expected behavior? Any workaround? Running version 6.9.0-rc2.
itimpi Posted December 31, 2020
1 hour ago, adminmat said: I ran an rsync backup script today and immediately got a call from a plex user that it stopped streaming.
This is not expected behavior, unless the script stopped your Plex container as part of backing up the data.
adminmat Posted December 31, 2020
8 hours ago, itimpi said: This is not expected behavior, unless the script stopped your Plex container as part of backing up the data.
I don't think this script stops the Plex container. I noticed CPU usage was very high, ~90% at times. I have limited knowledge in this area, but my theory is that it was tying up some hardware (CPU, or the motherboard disk controller, etc.) while reading and comparing data for the rsync copy. It continued to tie up Plex even when it was only reading a disk used for photos and documents (I was watching the disk's transfer rate on the Main tab). Here is the script: BackupOne.sh. It syncs the backup share (disks 1 & 2) and my Documents and Photos shares (disk 3). I assume this is normal when the mover is running, as others have said in this thread, so I schedule the mover for 5:00AM.
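The attached script isn't reproduced here, but one generic way to keep a long rsync from saturating a disk is rsync's own --bwlimit option (the paths below are placeholders, not taken from BackupOne.sh):

```shell
# Throttle rsync to roughly 80 MB/s so concurrent reads (e.g. Plex)
# still get a slice of the disk. Size suffixes need rsync >= 3.1.0.
rsync -a --bwlimit=80m /mnt/user/Documents/ /mnt/disks/backup/Documents/
```

This only limits rsync's transfer rate; it doesn't change the kernel writeback behaviour discussed later in the thread.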
mgutt Posted May 22, 2021
There are several threads with the same problem:
https://forums.unraid.net/topic/27009-buffering-issues-while-running-parity-check/
https://forums.unraid.net/topic/90488-parity-check-cause-plex-network-issue/
https://forums.unraid.net/topic/95676-plex-stuttering-while-moving-files-from-one-hdd-to-another/
I really wonder why the impact is so huge, even though the write speed is under 10 MB/s, as you can see in this video (the movie is located on Disk 1, which is in the 2nd row): unraid plex judder 720p.mp4
This happens for parity checks, parity builds, mover actions and ordinary file writes. I did not test reconstruct write, but I think it could be related to the read/modify/write process interfering with reading the movie through Plex. The strange thing is that even transcoding does not help. I thought it would, because Plex transcodes the movie in advance and fills a buffer (I'm using RAM transcoding), but the behaviour is exactly the same with and without transcoding. Is it possible to limit the write speed of the parity disk (to, say, 50%)? I think this could solve the issue, as the other disks would throttle too. My idea was to start a dd read process to /dev/null from the parity disk every time the parity disk has open write transfers. Or would it be possible with /etc/security/limits.conf?
metabubble Posted June 13, 2021
I am certain I have pinned down the problem. If you have a large amount of RAM, then because vm.dirty_ratio is 20%, the mover is first writing to RAM, and for a long time, since you have lots of RAM. When that buffer fills, the OS flushes it to the disk in a blocking manner. Blocking means no other I/O on that drive until it is done, which starves all other processes of reads. What I have found is that when you reduce vm.dirty_ratio to 1%, you get more frequent short reads in between the writes, giving Plex a chance to serve the content. You will, however, hurt write performance if you set it too low. RIP RAM. Unraid needs to find a way to bypass the write cache for the mover, or find a different solution to the blocking I/O problem. Maybe even try a different scheduler, one that always prioritises reads over writes, if that is possible.
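To see where a box currently sits before changing anything, the writeback thresholds can be read straight from /proc; the commented sysctl line is the tweak this post proposes (it needs root and reverts on reboot):

```shell
#!/bin/sh
# Current writeback thresholds, expressed as percentages of RAM:
echo "vm.dirty_ratio=$(cat /proc/sys/vm/dirty_ratio)"
echo "vm.dirty_background_ratio=$(cat /proc/sys/vm/dirty_background_ratio)"

# The change suggested in this post (root only, not persistent):
#   sysctl -w vm.dirty_ratio=1
```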
madejackson Posted January 16, 2023
Sorry to resurrect this thread, but I'm quite surprised that this issue still persists exactly as described by @mgutt more than 1.5 years ago. I am adding to my storage at about 2 MB/s, so I run into this exact issue once or twice per week when my 2 TB SSDs fill up. Workaround I did, for anyone interested: https://www.reddit.com/r/unRAID/comments/10fwzin/stop_mover_on_plex_playback/
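The linked workaround boils down to asking Plex whether anything is streaming before letting the mover run. A minimal sketch of the core check: PLEX_URL and PLEX_TOKEN are placeholders, and the `mover stop` call assumes a mover script that supports it, as on recent Unraid versions.

```shell
#!/bin/sh
# Plex's /status/sessions endpoint returns XML whose MediaContainer
# element carries the number of active streams in a size="N" attribute.
session_count() {
    grep -o 'size="[0-9][0-9]*"' | head -n1 | grep -o '[0-9][0-9]*'
}

# Real usage would look something like (placeholders, untested here):
#   n=$(curl -s "$PLEX_URL/status/sessions?X-Plex-Token=$PLEX_TOKEN" | session_count)
#   [ "${n:-0}" -gt 0 ] && /usr/local/sbin/mover stop

# Demonstrate the parsing on a canned payload:
echo '<MediaContainer size="2"></MediaContainer>' | session_count   # -> 2
```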
MarcelCliff Posted February 7, 2023
On 6/13/2021 at 3:41 AM, metabubble said: reduce vm.dirty_ratio to 1%
I had the same issue, and this seems to solve it for me as well.
mgutt Posted May 15, 2023
Enabling turbo write works: set Disk Settings > Tunable (md_write_method) > reconstruct write. With that, the affected HDD no longer hits its read limit. Sadly, this has a huge negative impact on power consumption. For the moment I will try the Mover Tuning plugin and its option to enable turbo write only while the mover is running. But this won't help with other parallel reads and writes. For example, today I ripped multiple Blu-rays directly to the array and it caused judder too. 😩 I think the only real solution would be for Limetech to add an option to throttle the parity creation process.
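For reference, the same tunable can be flipped from a script, which is presumably how a plugin would schedule turbo write. This is a sketch: the mdcmd path is as on stock Unraid and the commands are not verified here.

```shell
#!/bin/sh
# Enable reconstruct ("turbo") write before a large transfer...
/usr/local/sbin/mdcmd set md_write_method 1
# ...and restore read/modify/write afterwards to save power:
/usr/local/sbin/mdcmd set md_write_method 0
```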
fry_the_solid Posted May 20, 2023
I just have to say thank you to @metabubble for identifying the cause of this issue. I've had a problem with this for as long as I've used Unraid, and this is certainly it. With 64 GB of RAM and a dirty_ratio of 20%, Plex and other processes were only able to access the disk being written to briefly every 12.8 GB, if I'm understanding correctly. I tested his solution, reducing the dirty_ratio to 1, and sure enough it fixed it: the mover, rsync, and any other continuous write to a disk no longer bottlenecked my system. Although it probably wouldn't affect my system much, I didn't want to keep it at 1 all the time, so I wrote a two-line script that lowers the dirty_ratio to 1 and another that raises it back to 20, and used the CA Mover Tuning plugin to run them before/after the mover. Thanks again.
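The 12.8 GB figure follows directly from the ratio. A quick sketch of the arithmetic (dirty_buffer_gib is just an illustrative helper, not an Unraid command):

```shell
#!/bin/sh
# Data that can pile up as dirty pages before the kernel forces a
# blocking flush: RAM size x vm.dirty_ratio.
dirty_buffer_gib() {
    awk -v ram="$1" -v pct="$2" 'BEGIN { printf "%.1f\n", ram * pct / 100 }'
}

dirty_buffer_gib 64 20   # 64 GB at the default 20%  -> 12.8
dirty_buffer_gib 64 1    # after lowering the ratio  -> 0.6
```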
jch Posted July 28, 2023
For others who find this thread: using the Mover Tuning plugin, I set up a "run after mover" script. Replace the value with the vm.dirty_ratio you normally use (probably 20): sysctl -w vm.dirty_ratio=20. I noticed significant improvements in concurrent file usage while the mover is running with these changes.
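Assuming the same before/after pairing described earlier in the thread, the two Mover Tuning hook scripts would look like this (filenames are illustrative):

```shell
#!/bin/sh
# Illustrative pair of Mover Tuning hooks; save as two separate files.
#
# before_mover.sh -- set as the "script to run before mover":
#   sysctl -w vm.dirty_ratio=1
#
# after_mover.sh -- set as the "script to run after mover"
# (use whatever your normal value is, probably 20):
#   sysctl -w vm.dirty_ratio=20
```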
furling Posted August 5
I can't believe this is still an issue now, in August 2024. Thanks to the people above who solved it.