[Support] Linuxserver.io - Plex Media Server


Recommended Posts

Parity rebuild and 2 streams going at the same time? Not only is that going to slow down reading by plex, it is going to slow down your parity rebuild. You do know that rebuilding parity must read all the disks, don't you? And in the meantime, 2 different plex streams are asking for data on different sectors than those the parity rebuild is asking for (and even each other, if they are on the same disk), causing the disks to thrash as they try to move the heads all over the place and wait for the platters to spin around to the correct place. Which specific stream (including the parity rebuild) gets the most noticeable performance degradation may not be predictable without a lot more information than we will know.


Thanks, trurl. Yes, I do realize that the parity sync (it's the second parity drive being built) will slow things down... but I actually cancelled the 2nd parity sync at some point and it still had to buffer! Again, it seems to buffer only the local streams, not the remote ones. Shouldn't the remote stream also be affected then? I should probably mention that I have Sab running in the background during all of this as well, but the processor should still be able to handle all of this, especially during direct play, no? Again, the remote stream is transcoding using hardware and that's also smooth.

 

Should I attach the plex.log file? Will that be helpful? Not sure if it's a good idea to post it in a public space.

 

Oh I see what you're saying: perhaps the remote stream is using a file on a disk that gets priority at that specific time. But I'm pretty sure that during the most recent buffering I had cancelled the parity sync, and it still buffered.

Edited by maxse

Of course the whole purpose of SAB is downloading, which is not only reading but also writing. Is it writing to the same disk plex is streaming from?

 

I don't know if that plex log will be useful to someone on this forum or not. We are mostly just plex users here. Even the plex docker authors are just packaging software someone else wrote. The reason I said

1 hour ago, trurl said:

without a lot more information than we will know

was because of the details of what other processes were doing with which disks on which sectors and so on. Literally more than we will know.

 

I suppose it's also possible that you have some disk I/O problem you haven't mentioned or even noticed. That might show up in your Unraid diagnostics.


Having a couple of issues with Plex, maybe related to each other or not.  I was able to create the Photos library, and it behaved as expected, finding all the files.  However, it cannot see the Video or Music shares.  It's as if Plex sees a different set of folders than UNRAID does?  The Photos folder in the plex screenshot is the one that works, but why do I not see one for Videos and Music?

[screenshots: Plex library folders and UNRAID shares]

 

 

Not sure this is related, but should I be seeing two Plex folders like this:

 

[screenshot: two Plex folders]

 

And finally, camera upload stopped working months ago and I have tried on multiple iOS and Android phones.  I'm betting that's a Plex problem, but thought I would check here.

 

Edited by btrcp2000
better detail and clarity
30 minutes ago, btrcp2000 said:

It's as if Plex sees a different set of folders than UNRAID does?

Yes, that's the way it works.  Dockers have their own "file system" and this has to be mapped to the host (unRAID) file system.

 

You need to create volume mappings in the docker that map host paths to container paths.

 

Your screen shot only shows partial information.  What are the container paths that correspond to your host paths?

 

In the Plex Server when creating a library, you browse to the corresponding container path to select the folder (/movies, /tv, etc.) not to the host path (/mnt/user/Movies).

 

Here are my mappings, for example:

 

[screenshot: trurl's Plex volume mappings]
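In case the screenshot doesn't come through, mappings like those can also be expressed as docker run volume flags. This is only an illustrative sketch; the share names and container paths below are examples, not anyone's actual configuration:

```shell
# Illustrative volume mappings only -- substitute your own share names.
# Each -v flag maps a host (Unraid) path to a path inside the container.
docker run -d --name=plex --net=host \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Movies:/movies \
  -v /mnt/user/TV:/tv \
  -v /mnt/user/Music:/music \
  linuxserver/plex
```

Inside Plex you would then browse to /movies, /tv and /music when creating libraries, not to the /mnt/user/... host paths.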


Damn it, it stuttered again 10 minutes ago while watching a 720p file, no parity check going on, just sab, sonarr, radarr running in the background, nothing crazy. And also this time it was the only stream going.

 

I am attaching diagnostics with this post.  I did have a power outage this morning, during which the UPS worked perfectly.

 

I did notice some pro key errors around midnight, not sure if that's related or what that even means. And it looks like there are some memory issues around midnight also? Maybe I do have I/O issues? I never had any problems before, but I'm really desperate now; I just upgraded to unraid, upgraded my processor, motherboard, RAM and internet speed, and got rid of my mac mini to consolidate into this one unraid system. Anyway, please help me out guys.

sabertooth-diagnostics-20190102-1723.zip

Edited by maxse
8 minutes ago, maxse said:

I did notice around midnight there are some pro key errors, not sure if that's related or what that even means.

Unimportant. When you start the array Unraid looks for your license .key file. An annoying side effect of Apple ecosystems is putting a bunch of hidden (.) files everywhere. I don't know why. The message is because Unraid finds ._Pro.key, decides it is no good and then it finds the actual Pro.key file.
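If the messages bother you, those AppleDouble files are easy to find and delete. A small sketch below uses a temp directory as a stand-in for /boot/config (the usual flash mount), so the commands are safe to try anywhere:

```shell
#!/bin/sh
# Stand-in for /boot/config so this demo is safe to run anywhere
dir=$(mktemp -d)
touch "$dir/Pro.key" "$dir/._Pro.key"   # real key file + macOS AppleDouble junk

find "$dir" -name '._*'          # lists only the ._ junk file
find "$dir" -name '._*' -delete  # removes it; Pro.key is untouched
ls "$dir"                        # -> Pro.key
rm -r "$dir"
```

On a real system you would point the find commands at /boot/config instead of the temp directory.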

 

Not sure what memory message you were seeing, but it does suggest the question: have you done a memtest recently?

 

I don't see any mention of I/O errors in syslog. As you may have noticed if you were looking at syslog, sometimes hours go by with nothing to report.

 

Do you see anything in the errors column on Main?

 

I notice your disk1 is 100% full. Don't know if there might be any performance issues associated with that or not.


No errors in the errors column on Main at all. It's full because I was encrypting drives and moved things with Krusader. I just freed up space, so it now has 85 gigs free. Also, I noticed this in the docker log, anything significant?

 

time="2019-01-02T00:43:40-05:00" level=warning msg="failed to load plugin io.containerd.snapshotter.v1.zfs" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 
time="2019-01-02T00:43:40-05:00" level=info msg="loading plugin "io.containerd.metadata.v1.bolt"..." type=io.containerd.metadata.v1 
time="2019-01-02T00:43:40-05:00" level=warning msg="could not use snapshotter aufs in metadata plugin" error="modprobe aufs failed: "modprobe: FATAL: Module aufs not found in directory /lib/modules/4.18.20-unRAID\n": exit status 1" 
time="2019-01-02T00:43:40-05:00" level=warning msg="could not use snapshotter zfs in metadata plugin" error="path /var/lib/docker/containerd/daemon/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter" 

 

Sorry, I thought that was a memory error. I have not run memtest; I just got brand new RAM when I upgraded the server. Corsair, 16 gigs.

 

Also, I have been using 2 TP-Link switches, one next to the Apple TV to connect my SONOS and Apple TV. I tried playing the same file now connected directly to a FIOS G1100 router and it seems to be playing smoothly! No stuttering! Could it be a router issue? Or is this just a coincidence with the timing of my upgrade?

If it turns out to play smoothly now without issues, any suggestions for a router? Do I just get one router and run 2 more ethernet cables behind my TV: one for SONOS, one for the Apple TV, and one extra in case I get another device like an Xbox?

 

 


Guys, help! I don't know what's going on. It's not the switch; the family was watching TV and boom, it froze for everyone, even a remote stream this time.

I checked unraid and the processor was pegged at like 92%, all 6 cores! I just have sab, sonarr, radarr running, that's all. I think that's what's causing it! A file must have been moving from the flash drive to the array? But this shouldn't cause the processor to spike over 90% and freeze the streams, right? I mean, I used to run unraid 5 with 8 gigs of RAM, same services, on an AMD Athlon processor. No way this processor isn't enough. It's an i5-8400. What should I do?

 

So I just noticed that when Sab unpacks, the processor spikes to 100%! ALL 6 CORES!

Edited by maxse

Something is wrong guys, but I think I am getting close! I pinned sab, radarr, sonarr to use core 5 only. Now I can see only core 5 spiking to 100%.

 

BUT, for some reason I am having cores 1, 2, 3 MAXED OUT at 100%! Under the docker tab for utilization, all dockers (I only have 4: sab, radarr, sonarr, plex) were below 2% CPU usage, yet pretty much everything was again pegged at 100%.

 

What's causing this? I don't know. All dockers are pinned to core 5, except plex, which is not pinned to a CPU.
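For what it's worth, pinning can also be checked or changed from the command line with docker update; the container name "sab" below is just an example, and Unraid's GUI pinning does the same thing under the hood:

```shell
# Pin a running container to core 5 without recreating it
# (container name 'sab' is an example)
docker update --cpuset-cpus="5" sab

# Confirm which cores the container is allowed to use
docker inspect --format '{{.HostConfig.CpusetCpus}}' sab
```

This is handy for verifying that the pinning you set in the GUI actually took effect.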


Might shut down the plex docker for a bit, and maybe un-pin the others, just to verify that it's really plex that's the cause.

 

Running memtest overnight on your new memory couldn't hurt either, just to rule it out (if you haven't already). I've bought memory that was bad straight out of the box.


Guys, thank you sooo much! I narrowed it down to SAB; it would spike the CPU usage to 100%!

 

I switched to nzbget today and pinned all the dockers to 1 core, and it looks like it's using so much less CPU and everything has been fine so far. Thank you all for helping me narrow this down, it was so frustrating. Thank you!

 

*EDIT* Ahh, it seems even with nzbget, when it is unpacking, even though the docker is pinned to 1 core, I see 3 cores go all the way to 100% CPU usage. I wasn't streaming plex, but I suspect this would freeze up any streams that were transcoding. What can I do here guys? I specifically got an i5-8400 with hardware transcoding, and wanted to make sure my system was powerful enough for a few transcoded streams while running these few dockers, which I've seen people run on much less powerful hardware. What should I do? This makes plex pretty much unusable.

 

BTW, I didn't have anything streaming at all this time; the cores spiked just during the unpacking phase of nzbget.

I am also using the CACHE drive ONLY for downloads, and did not enable cache for any of the other shares; I want files to go directly into the array when copied to it. Not sure if this info helps.

 

*EDIT 2*

Damn it, 3-4 cores are still getting pegged at 100% for some reason, and my LOCAL plex stream froze when this happened. This makes plex unusable. Guys, what do I do and why is this happening? All dockers except plex are pinned to 1 core. I don't know what's going on. Should I start a separate thread on this?

 

Also, the docker CPU utilization (on the docker page) doesn't add up anywhere close to the 100% the CPU is hitting. And is it right that it's showing most of my 16 gigs of RAM in use? Not sure if that's an issue; figured I would post this screen shot.

Screen Shot 2019-01-04 at 4.32.49 PM.png

Edited by maxse

Plex on ATV has been very unstable for me. The Plex developers have replied and it seems to be an issue with the docker or how I've set it up. Please see the reply from the developer below:

Quote

 

Is this directory structure on a mounted volume ?
/config/Library/Application Support/Plex Media Server/
If it is, you need to make sure the /etc/fstab has exec option and no noexec setting - this is because the EasyAudioEncoder binary gets executed from within that directory structure

If it is not this, check the /tmp permissions - but it is most likely the other

 

 

I have asked for clarification, but I'd also like to check on this forum, as this may be easier to answer from a docker perspective. Thanks for your help as always!

5 hours ago, steve1977 said:

Plex on ATV has been very unstable for me. The Plex developers have replied and it seems to be an issue with the docker or how I've set it up. Please see the reply from the developer below:

 

I have asked for clarification, but I'd also like to check on this forum, as this may be easier to answer from a docker perspective. Thanks for your help as always!

 

I don't think unraid mounts any disk with the noexec flag, but type mount on the command line and you will get the mount flags used on each device. 

Inside the container, fstab isn't used. 
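To act on that suggestion from the Unraid shell, something like this prints a path's mount options and flags noexec; the target path at the bottom is just an example, you'd point it at your /config host path:

```shell
#!/bin/sh
# Show the mount options for the filesystem containing a given path and
# flag whether 'noexec' is among them (uses util-linux findmnt)
check_exec() {
  opts=$(findmnt -no OPTIONS --target "$1")
  echo "$1 options: $opts"
  case ",$opts," in
    *,noexec,*) echo "$1 is mounted noexec" ;;
    *)          echo "$1 allows executing binaries" ;;
  esac
}

check_exec /    # example target; use your /config host path instead
```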


Great, thanks. Will do and send. What do you mean by upload? Put it in a Word document and attach the link?

 

I don't think I map any disk in unraid. I'm mapping them individually in plex. I forgot why, but there was some advantage to mapping them individually rather than through one user share. Are you suggesting a user share may be better?

 

On a separate note: I use RAM to transcode, but realized this draws quite a lot of RAM. When transcoding two streams, this impacts my VM and leads to crashes. I wanted to change it back to transcoding via SSD. How do I do that?

3 minutes ago, steve1977 said:

Great, thanks. Will do and send. What do you mean by upload? Put it in a Word document and attach the link?

 

I don't think I map any disk in unraid. I'm mapping them individually in plex. I forgot why, but there was some advantage to mapping them individually rather than through one user share. Are you suggesting a user share may be better?

 

On a separate note: I use RAM to transcode, but realized this draws quite a lot of RAM. When transcoding two streams, this impacts my VM and leads to crashes. I wanted to change it back to transcoding via SSD. How do I do that?

 

Do not use a Word document. A simple text file is enough. 

Normally you map the shares to a container. One of the few reasons to use a disk instead is for the appdata folder for some apps, like plex. 


Let me give it another try. Attached is the mount output from the Unraid command line. Also the docker run command below:

 

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex1' --net='host' -e TZ="Asia/Shanghai" -e HOST_OS="Unraid" -e 'PUID'='99' -e 'PGID'='100' -e 'VERSION'='latest' -v '/mnt/':'/data':'rw' -v '/tmp':'/transcode':'rw' -v '/mnt/cache/Docker/PlexMediaServer/':'/config':'rw' 'linuxserver/plex' 

<removed>

Unraid mount.txt
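Side note on the earlier question about moving transcoding off RAM: in the docker run command above, /tmp on the host (which on Unraid is effectively RAM) is mapped to /transcode. To transcode on the SSD instead, the usual approach is to point that mapping at a cache path; /mnt/cache/transcode below is only an example path:

```shell
# Example only: change the transcode volume mapping from RAM-backed /tmp
# to a directory on the cache SSD (path is an example, create it first)
-v '/mnt/cache/transcode':'/transcode'
```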


I see. You gave plex access to all of your data. I just had not looked at the mounts inside the container when you do it that way.

 

The "advantage" of giving plex access to all your data is it makes it so you don't have to bother with understanding volume mapping.😉 Actually, I do that myself, and I understand volume mapping very well.

 

However, instead of giving it access to /mnt as you have done, I give it /mnt/user. In other words, I am giving it all my user shares, whereas you are giving it all your disks and all your user shares. Plex is unlikely to break anything like that, but in general I wouldn't give anything direct access to my disks, mainly because you should never mix disks and user shares when managing files.

 

Don't change it though or plex won't be able to find the media it has already scanned.

