Mover making Plex server unresponsive?



If a drive has to spin up, then pauses in all I/O streams are to be expected and normal. But in other circumstances I would think that part of the problem could possibly be the Atom CPU itself.


On 6/25/2019 at 9:32 AM, jpowell8672 said:

Unraid 6.7.1 stable

I'm having the same issue: Plex stream(s) stop when mover is running. If you try to start a stream while mover is running, the Plex spinning circle is all you will get until mover is done.

Unraid 6.7.2 with the plugin set to low priority and idle: same problem.


Any data access to the array causes all Docker and VM activity to become unresponsive. Copying a file via SMB or moving data off the cache manually causes Docker to become unresponsive - this isn't just during mover activity. Say I'm copying a file via SMB to a share that is not part of the cache pool: copying that single file causes the Docker containers and/or VMs to become unresponsive, perhaps 500 MB into the file.

 

I've tried different parity drives (SATA and SAS), different controllers (SAS and SATA), different setups of the SATA controller, and different network cards.


Similar problem here (and totally unrelated to Plex). Mover is currently running, CPU usage is very high, and browsing shares is very slow. It's moving data at around 10 MB/s (I've used the CA plugin to lower mover's priority, which had a major impact on this). However, something still doesn't seem right: it's taking 70-80% of 8 cores to do a move at 10 MB/s, yet I can copy to the array at 150 MB/s (directly to the array, no cache drive, turbo-write on) and CPU usually won't spike above 10%.
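For anyone who wants to try this by hand, lowering mover's priority is roughly what the CA Mover Tuning plugin does under the hood. A minimal sketch, assuming the stock Unraid mover lives at /usr/local/sbin/mover and that pgrep, renice and ionice are available; it is a no-op if mover isn't running:

```shell
# Deprioritize a running mover process (assumed path; adjust if yours differs).
pid=""
if command -v pgrep >/dev/null 2>&1; then
    pid=$(pgrep -f '/usr/local/sbin/mover' | head -n 1)
fi
if [ -n "$pid" ]; then
    renice -n 19 -p "$pid"   # lowest CPU scheduling priority
    ionice -c 3 -p "$pid"    # idle I/O class: only gets disk time when nothing else wants it
fi
echo "adjusted pid: ${pid:-none}"
```

Note this only helps with scheduling contention; it won't fix a problem that lives in the filesystem layer itself.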

 

Putting it another way: I suspect I could create two shares, one on the cache drive and one at the protected (no-cache) destination, and do the copy over the network at 10X the speed with 1/10th the CPU. That makes no sense, because copying from the cache drive (which I can read at 150 MB/s) back to the array (where I can also write at 150 MB/s with turbo-write on) via the network stack should be more costly than doing the same operation internal to the array: it has to move all the same data, but also do all the TCP work, NFS handles, etc. There's no reason the internal copy should be 100X more computationally intensive than the same copy over the network.

 

It's no big deal for me; I rarely use this functionality. I just copied in tons of data yesterday and wanted to stage it on the cache drive because I was streaming lots of data in the other direction from the protected shares at the time. But something doesn't feel right about whatever process the mover uses to pick files up from the cache and put them on the protected array. Has anyone tried copying a huge file to the cache drive, then logging into the server and doing a simple "cp" into a parity-protected user mount point? That would be interesting.
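That experiment is easy to script. A hedged sketch, pointed at /tmp here so it is safe to run anywhere; on Unraid you would set SRC to a file on /mnt/cache and DST to a path under a parity-protected /mnt/user share, then compare the throughput against an SMB copy of the same file:

```shell
# Time a plain cp of a test file (64 MiB here; use something much larger on real hardware).
SRC="${SRC:-/tmp/mover-test-src.bin}"
DST="${DST:-/tmp/mover-test-dst.bin}"
dd if=/dev/zero of="$SRC" bs=1M count=64 2>/dev/null   # create the test file
start=$(date +%s)
cp "$SRC" "$DST"
sync                                                   # make sure the write actually hit disk
elapsed=$(( $(date +%s) - start ))
size=$(stat -c %s "$DST")
echo "copied $size bytes in ${elapsed}s"
rm -f "$SRC" "$DST"
```

Watching CPU and iowait while this runs, once internally and once over the network, would show whether the internal path really is the expensive one.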

1 hour ago, Overtaxed said:

hopefully it's related

Maybe different from the problems others are having in this thread.

 

Some of your disks are pretty full, and they are still using ReiserFS. That filesystem is known to perform poorly as disks get full.

 

Also, your appdata and system shares have some files on disk1, which is one of those nearly full ReiserFS disks. Those shares should be entirely on the cache to avoid performance issues like the ones you are seeing.

 


I think I'm having a similar issue. I checked my logs and they show mover error 397: the /mnt/cache/appdata/plex/transcode temp files cannot be moved because they cannot be found. This is repeated hundreds of times. My system is extremely laggy. Manually running the mover seemed to fix it. Any idea what is going on? Should I put my transcode folder somewhere else?

11 minutes ago, RubiksCube said:

Right now it is set to Yes. Should it be set to Only?

Yes is almost certainly the wrong setting, as it means new files are created on the cache and then moved from cache to array when mover runs!

 

What you probably want to do is:

  • Stop the docker service
  • Set the Use Cache setting for ‘appdata’ to Prefer
  • Manually run mover so that any appdata files on the array are moved to the cache
  • Change the setting for the ‘appdata’ share to ‘Only’ so that new appdata files are created on the cache and mover will not attempt to move them to the array
  • Re-enable the docker service.
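Roughly the same sequence from the command line; a sketch under the assumption that the stock Unraid paths below exist (the "Use Cache" changes themselves are still GUI steps under Shares -> appdata), with guards so it is a harmless no-op on any other system:

```shell
# 1. Stop the docker service
if [ -x /etc/rc.d/rc.docker ]; then /etc/rc.d/rc.docker stop; fi
# 2. In the GUI: set appdata's "Use Cache" to Prefer
# 3. Run mover manually so appdata files on the array move to the cache
if [ -x /usr/local/sbin/mover ]; then /usr/local/sbin/mover; fi
# 4. In the GUI: set appdata's "Use Cache" to Only
# 5. Re-enable the docker service
if [ -x /etc/rc.d/rc.docker ]; then /etc/rc.d/rc.docker start; fi
finished=1
```

Stopping docker first matters: open container files can't be moved, which is exactly the "cannot be found / cannot be moved" noise people see in the mover log.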
Edited by itimpi

So I'm still experiencing the issue of Plex buffering while the mover service is running. I did notice, when running the iotop command, that shfs is using ~95% I/O while mover is running. Is this to be expected?
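For anyone without iotop handy, system-wide iowait can be sampled straight from /proc/stat (Linux-generic; field 6 of the cpu line is cumulative iowait time), which gives a single number to compare with mover running vs. stopped:

```shell
# Sample system-wide iowait percentage over a 1-second window.
read_cpu() { awk '/^cpu /{print $2+$3+$4+$5+$6+$7+$8, $6}' /proc/stat; }
set -- $(read_cpu); t1=$1; w1=$2
sleep 1
set -- $(read_cpu); t2=$1; w2=$2
total=$((t2 - t1))
pct=0
if [ "$total" -gt 0 ]; then pct=$(( 100 * (w2 - w1) / total )); fi
echo "iowait over the last second: ${pct}%"
```

A high shfs share of I/O plus high iowait would point at the user-share (FUSE) layer rather than the disks themselves.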

 

My Plex app directory is located on its own separate Unassigned Devices share, and the transcoding directory is set to /tmp, BTW.

 

*** Update ***

Once the mover process finished, I/O usage dropped to negligible levels.

Edited by chad4800
update

I'm trying 6.6.7 and there are none of the issues described above. On 6.7.x I couldn't even install or update a docker image, let alone go to the Docker page in the GUI, while copying data via SMB or while mover was running. On 6.6.7 I can happily do all of that and have Plex running as it should. So what's broken in 6.7.x?

On 7/28/2019 at 3:55 PM, jpowell8672 said:

Unraid 6.7.2 with plugin set to low priority and idle same problem.

So I just built an entirely new server with brand-new beefy hardware, reusing only my original hard drives, cache drive & Unraid USB drive. I went from an older quad-core Xeon with 16 GB dual-channel DDR memory to a new X399 Threadripper 2920X 12-core with 32 GB quad-channel DDR4. I also added a Quadro P2000 & LSI 9207-8i HBA card, and Plex is still unresponsive when mover is running. There is definitely something wrong with the I/O in Unraid at this point. I'm not bashing Unraid in any way; I love Unraid, it is great. I am just hoping, seeing that so many people are having the same problem, that this will be figured out and fixed in the near future.

Edited by jpowell8672
typo

Mover was originally intended to run during idle time; the default schedule is daily, in the middle of the night. I'm not saying this shouldn't be investigated, but it might be useful for some users to simply reconsider how they are using the cache feature.

 

Different people will have different needs, of course, but I sometimes wonder if enough thought has been given to how and whether to cache. Here is a recent post I made about how I use cache:

 

https://forums.unraid.net/topic/82329-why-i-wont-be-using-unraid/?tab=comments#comment-764006

 

9 hours ago, jpowell8672 said:

So I just built an entirely new server with brand-new beefy hardware, reusing only my original hard drives, cache drive & Unraid USB drive. I went from an older quad-core Xeon with 16 GB dual-channel DDR memory to a new X399 Threadripper 2920X 12-core with 32 GB quad-channel DDR4. I also added a Quadro P2000 & LSI 9207-8i HBA card, and Plex is still unresponsive when mover is running. There is definitely something wrong with the I/O in Unraid at this point. I'm not bashing Unraid in any way; I love Unraid, it is great. I am just hoping, seeing that so many people are having the same problem, that this will be figured out and fixed in the near future.

This is most likely related to this:

 

  • 1 month later...

Try disabling turbo-write and see if that solves the issue. Turbo-write needs to read from ALL drives when writing, so even if the data you're streaming is on a drive that is NOT being written to, that drive is still being utilized. Also, download the netdata docker and check whether the CPU usage is mostly I/O wait, which as I understand it is just the system waiting on your HDDs.
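Turbo-write can also be toggled from the command line; a sketch, with the caveat that mdcmd and the md_write_method value are Unraid-specific assumptions on my part, and Settings -> Disk Settings is the safer place to change this:

```shell
# Guarded so this does nothing on a non-Unraid system.
MDCMD=/usr/local/sbin/mdcmd
if [ -x "$MDCMD" ]; then
    "$MDCMD" set md_write_method 0   # assumed: 0 = read/modify/write, i.e. turbo-write off
else
    echo "mdcmd not found; not an Unraid box, nothing changed"
fi
checked=1
```

With turbo-write off, a write only touches the target disk and parity, so a stream reading from an untouched data disk should no longer be competing for it.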

  • 1 month later...

I have the same problem with UNRAID v6.8 rc5 and before. I don't use plex, but all the shares become completely unresponsive when the mover is running.

I have two Samsung 860 EVO 1TB in RAID 1 (btrfs) assigned to cache.

Changing the priority in CA Mover Tuning to Low didn't change a thing.

Iowait is really high. Up to 84 sometimes.

Turbo write is disabled, but didn't make a difference when enabled.

13 hours ago, Racer said:

I have the same problem with UNRAID v6.8 rc5 and before. I don't use plex, but all the shares become completely unresponsive when the mover is running.

I have two Samsung 860 EVO 1TB in RAID 1 (btrfs) assigned to cache.

Changing the priority in CA Mover Tuning to Low didn't change a thing.

Iowait is really high. Up to 84 sometimes.

Turbo write is disabled, but didn't make a difference when enabled.

 

Did you have the same issue on 6.8-rc1 through rc3? When I upgraded to rc1 it seemed to fix my issue, and I'm now running rc3 with no issues.

