
Mover making Plex server unresponsive?


If a drive has to spin up, then pauses in all I/O streams are to be expected and normal. But in other circumstances, I would think that part of the problem could be the Atom CPU itself.

On 6/25/2019 at 9:32 AM, jpowell8672 said:

Unraid 6.7.1 stable

I'm having the same issue: Plex streams stop when mover is running. If you try to start a stream while mover is running, the Plex spinning circle is all you will get until mover is done.

Unraid 6.7.2 with the plugin set to low priority and idle: same problem.

8 hours ago, jpowell8672 said:

Unraid 6.7.2 with the plugin set to low priority and idle: same problem.

Same...


Why not just write data directly to the array instead of using the cache drive and having mover do it? You might get a little buffering while that's going on, but it shouldn't become unresponsive.


Any data access to the array causes all Docker and VM activity to become unresponsive. Copying a file via SMB or moving data from the cache manually makes Docker unresponsive, so this isn't just during mover activity. Say I copy a file via SMB to a share that is not part of the cache pool: that single file copy causes the Docker containers and/or VMs to become unresponsive, maybe 500MB into the file.

 

I've tried different parity drives (SATA and SAS), different controllers (SAS and SATA), different configurations of the SATA controller, and different network cards.
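
One way to reproduce this without SMB in the picture (a minimal sketch; the share name is hypothetical and assumed to bypass the cache):

    # Write 1GB directly to a parity-protected share while watching
    # whether Docker containers and VMs stall
    dd if=/dev/zero of=/mnt/user/media/stall-test.bin bs=1M count=1024 conv=fsync status=progress
    # Clean up afterwards
    rm /mnt/user/media/stall-test.bin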


Similar problem here (and totally unrelated to Plex). Mover is currently running, CPU usage is very high, and browsing shares is very slow. It's moving data at around 10MB/s (I've used the CA plugin to lower mover's priority, which had a major impact on this). Still, something doesn't seem right: it's taking 70-80% of 8 cores to do a move at 10MB/s, yet I can copy to the array at 150MB/s (directly to the array, no cache drive, turbo write on) with the CPU rarely spiking above 10%.

 

Put another way: I suspect I could create two shares, one on the cache drive and one on a protected (no-cache) destination, and do the same copy over the network at 10x the speed with 1/10th the CPU. That makes no sense, because copying from the cache drive (which I can read at 150MB/s) and writing back to the array (where I can also write at 150MB/s with turbo write on) via the network stack should be more costly than doing the same operation internally; it has to move all the same data, plus do all the TCP work, NFS handles, etc. There is no reason an internal copy should be 100x more computationally intensive than the same copy over the network.

 

It's no big deal for me; I rarely use this functionality. I just copied in tons of data yesterday and wanted to stage it on the cache drive because I was streaming tons of data in the other direction from the protected shares at the time. But something doesn't feel right about whatever process the mover uses to pick files up from the cache and move them to the protected array. Has anyone tried copying a huge file to the cache drive, then logging into the server and doing a simple "cp" into a parity-protected user mount point? That would be interesting.
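
A sketch of that test (the share name and source file are hypothetical; /mnt/cache is the cache pool and /mnt/user0 is the array-only view of user shares on Unraid):

    # Stage a large file on the cache pool
    cp /path/to/bigfile.mkv /mnt/cache/media/
    # Copy it internally to the parity-protected array, bypassing the
    # network stack entirely, and time it
    time cp /mnt/cache/media/bigfile.mkv /mnt/user0/media/
    # In another shell, watch CPU and iowait with top (the %wa field)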

1 hour ago, Overtaxed said:

But something doesn't feel right

Diagnostics? 

1 hour ago, Overtaxed said:

hopefully it's related

This may be different from the problems others are having in this thread.

 

Some of your disks are pretty full, and they are still using ReiserFS. That filesystem is known to perform poorly as disks fill up.

 

Also, your appdata and system shares have some files on disk1, which is one of those nearly full ReiserFS disks. Those shares should be entirely on the cache to avoid performance issues like the ones you are seeing.
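
To check disk fill levels and filesystem types from the command line (a minimal sketch using stock tools):

    # Fill level of every array disk
    df -h /mnt/disk*
    # Filesystem type (reiserfs vs. xfs/btrfs) of each mounted array disk
    mount | grep '/mnt/disk'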

 


Using unBALANCE to move the appdata and system shares off the user (parity) drives as we speak. I'll update with success/failure after I've had a little time to test.


I think I'm having a similar issue. When I view my logs, mover reports error 397: the /mnt/cache/appdata/plex/transcode temp files cannot be moved because they cannot be found. This message is repeated hundreds of times, and my system is extremely laggy. Manually running the mover seemed to fix it. Any idea what is going on? Should I have my transcode folder somewhere else?
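
For the last question, one common approach is to put the Plex transcode directory in RAM by mapping /tmp into the container so that mover never sees transcode files. A sketch using the official plexinc/pms-docker image (the host paths here are examples, not your actual setup):

    # Map host /tmp as the transcoder temp directory
    docker run -d --name plex \
      --network=host \
      -e TZ="America/New_York" \
      -v /mnt/cache/appdata/plex:/config \
      -v /tmp:/transcode \
      -v /mnt/user/media:/data \
      plexinc/pms-docker
    # Then, inside Plex: Settings -> Transcoder ->
    # "Transcoder temporary directory" = /transcode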


What is the Use Cache setting for the ‘appdata’ share? Normally it would be set to a value that means mover ignores it.

51 minutes ago, itimpi said:

What is the Use Cache setting for the ‘appdata’ share? Normally it would be set to a value that means mover ignores it.

Right now it is set to Yes. Should it be set to Only?

11 minutes ago, RubiksCube said:

Right now it is set to Yes. Should it be set to Only?

Yes is almost certainly the wrong setting, as it means new files are created on the cache and then moved from cache to array when mover runs!

 

What you probably want to do is (a quick verification sketch follows the list):

  • Stop the docker service
  • Set the Use Cache setting for ‘appdata’ to Prefer
  • Manually run mover so that any appdata files on the array get moved to the cache
  • Change the Use Cache setting for ‘appdata’ to Only, so new appdata files are created on the cache and mover will not attempt to move them to the array
  • Re-enable the docker service
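
A quick way to verify the mover step worked before switching to Only, assuming the standard Unraid mount points (/mnt/user0 is the user-share view that excludes the cache, so it should show nothing left for appdata):

    # List any appdata files still on the array (cache excluded);
    # no output means everything now lives on the cache pool
    ls -R /mnt/user0/appdata
    # Or check each data disk individually
    du -sh /mnt/disk*/appdata 2>/dev/null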


So I'm still experiencing the issue of Plex buffering while the mover service is running. I did notice, while running iotop, that shfs is using ~95% of I/O while mover is running. Is this to be expected?

 

My Plex app directory is located on its own separate Unassigned Devices share, and the transcoding directory is set to /tmp, BTW.

 

*** Update ***

Once the mover process finished, I/O usage became negligible.
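
For reference, the iotop invocations that make this easy to see (assuming iotop is available on your server, e.g. via the Nerd Pack plugin):

    # Show only processes actually doing I/O, with accumulated totals,
    # so shfs and the mover's processes stand out
    iotop -o -a
    # Non-interactive batch mode, ten one-second samples, for logging
    iotop -o -b -n 10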



The only way I managed to fix my issue was rolling back to before 6.7.x, and everything is working smoothly again.

I tried dropping my mover priority; whatever is causing it seems to be more than just a priority setting.


I'm trying 6.6.7 and there are no issues like the above. On 6.7.x I couldn't even install or update a docker image, let alone go to the Docker page in the GUI, while copying data via SMB or while mover was running. On 6.6.7 I can happily do that and have Plex running as it should. So what's broken in 6.7.x?

On 7/28/2019 at 3:55 PM, jpowell8672 said:

Unraid 6.7.2 with the plugin set to low priority and idle: same problem.

So I just built an entirely new server with brand-new beefy hardware, reusing only my original hard drives, cache drive, and Unraid USB drive. I went from an older quad-core Xeon with 16GB of dual-channel DDR to a new X399 Threadripper 2920X 12-core with 32GB of quad-channel DDR4. I also added a Quadro P2000 and an LSI 9207-8i HBA card, and Plex is still unresponsive when mover is running. There is definitely something wrong with the I/O in Unraid at this point. I'm not bashing Unraid in any way; I love Unraid, it is great. I'm just hoping, with this many people having the same problem, that it will be figured out and fixed in the near future.



Mover was originally intended to run during idle time; the default schedule is daily, in the middle of the night. I'm not saying this shouldn't be investigated, but it might be useful for some users to simply reconsider how they are using the cache feature.

 

Different people have different needs, of course, but I sometimes wonder if enough thought is given to how, and whether, to cache. Here is a recent post I made about how I use the cache:

 

https://forums.unraid.net/topic/82329-why-i-wont-be-using-unraid/?tab=comments#comment-764006

 

9 hours ago, jpowell8672 said:

So I just built an entirely new server with brand-new beefy hardware, reusing only my original hard drives, cache drive, and Unraid USB drive. I went from an older quad-core Xeon with 16GB of dual-channel DDR to a new X399 Threadripper 2920X 12-core with 32GB of quad-channel DDR4. I also added a Quadro P2000 and an LSI 9207-8i HBA card, and Plex is still unresponsive when mover is running. There is definitely something wrong with the I/O in Unraid at this point. I'm not bashing Unraid in any way; I love Unraid, it is great. I'm just hoping, with this many people having the same problem, that it will be figured out and fixed in the near future.

This is most likely related to this:

 


Try disabling turbo write and see if that solves the issue. Turbo write needs to read from ALL drives when writing, so even a drive that is NOT being written to is still being utilized. Also, download the netdata docker and check the CPU usage; it's probably I/O wait, which as I understand it is just the system waiting on your HDDs.
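
A quick way to check the I/O-wait theory with stock tools (a minimal sketch):

    # Sample system stats once per second for ten seconds; the 'wa'
    # column under 'cpu' is the percentage of time spent waiting on disk I/O
    vmstat 1 10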


I have the same problem with Unraid 6.8-rc5 and earlier. I don't use Plex, but all the shares become completely unresponsive when the mover is running.

I have two Samsung 860 EVO 1TB drives in RAID 1 (btrfs) assigned to the cache.

Changing the priority in CA Mover Tuning to Low didn't change a thing.

I/O wait is really high, up to 84 sometimes.

Turbo write is disabled, but it didn't make a difference when enabled.


Same problem here... there is definitely something wrong with the mover...

The whole system lags while mover is running.


13 hours ago, Racer said:

I have the same problem with Unraid 6.8-rc5 and earlier. I don't use Plex, but all the shares become completely unresponsive when the mover is running.

I have two Samsung 860 EVO 1TB drives in RAID 1 (btrfs) assigned to the cache.

Changing the priority in CA Mover Tuning to Low didn't change a thing.

I/O wait is really high, up to 84 sometimes.

Turbo write is disabled, but it didn't make a difference when enabled.

 

Did you have the same issue on 6.8-rc1 through rc3? When I upgraded to rc1 it seemed to fix my issue, and I'm now running rc3 with no issues.


I didn't test rc1. I tested rc3, rc4, and rc5; on rc4 and rc5 it's definitely an issue.

On rc3 I wasn't really aware of the problem, so I'm not sure whether it existed or not.

Maybe I can roll back to test it.

