
[Support] Linuxserver.io - Plex Media Server

4710 posts in this topic


11 minutes ago, djhunter67 said:

Is this accurate?

[attached screenshot: Plex006.PNG]

The correct answer depends on how your other Plex docker was configured. The Container Path is where the application sees the files. The existing metadata was created by your other Plex docker, so that metadata expects to find the files wherever that other Plex saw them.
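In docker terms (share names here are made up purely for illustration), the rule looks like this:

```shell
# Hypothetical mappings: the old container saw media at /data, so Plex's
# database stores paths like /data/Movies/... The new container must map
# its media to the same container-side path, even if the host share differs.

# Old container (what the metadata was built against):
docker run -d --name=plex-old -v /mnt/user/Media:/data linuxserver/plex

# New container: the host-side path may change, the container-side path must not:
docker run -d --name=plex-new -v /mnt/user/NewMedia:/data linuxserver/plex
```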

4 hours ago, Ascii227 said:

Fair enough. When I created the docker from the template in Community Apps, the tag field was prefilled with latest, so I assumed that was the default and that I was doing the right thing by leaving it like that.

 

I have just been through the linuxserver.io docker image info in the unraid community apps plugin and there is no mention of tags whatsoever.

The only reference I can find is in the template overview, where it states 'VERSION Set to either latest,public or a specific version e.g. "1.2.7.2987-1bef33a"'. Nothing to indicate that latest is a beta build and public should be used for stability.
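For anyone else confused by that wording, the VERSION values from the template overview look like this, shown as docker run flags purely for illustration (in the unRAID template it is the VERSION field):

```shell
# Stable public release (the value this thread recommends for stability):
docker run -d --name=plex -e VERSION=public linuxserver/plex

# Other values the overview lists:
#   -e VERSION=latest                 # Plex Pass / beta builds
#   -e VERSION=1.2.7.2987-1bef33a     # pin an exact build (the overview's own example)
```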

 

For users like me who are just installing the Docker straight through Community Apps, there is little in the way of explanation as to what the different tags mean. In fact, in all my weeks posting in the forums, this is the first I have ever heard of the public tag.

 

I have now changed the tag in my docker to public and all is well. Apologies for polluting this issue with misunderstanding the tag system.

https://github.com/linuxserver/docker-plex#application-setup

 

This is linked to in the first post here

11 minutes ago, aptalca said:

https://github.com/linuxserver/docker-plex#application-setup

 

This is linked to in the first post here

If you want to go down this path of pointing out where I should have looked, then that's fine, but I did check the first post of this thread several times and had no reason to go to the GitHub link, as I had no interest in the code and already had a path of downloading the docker directly from Community Apps.

 

If you want to avoid other people making these same mistakes, may I suggest you make it much more obvious everywhere that the readme and possible troubleshooting options are all on the GitHub site, and to check that first before posting.

 

Better still, copying the entire readme into the first post, as the thread is listed as the place to get support, not the GitHub repo.

Edited by Ascii227

1 hour ago, Ascii227 said:

If you want to go down this path of pointing out where I should have looked, then that's fine, but I did check the first post of this thread several times and had no reason to go to the GitHub link, as I had no interest in the code and already had a path of downloading the docker directly from Community Apps.

If you want to avoid other people making these same mistakes, may I suggest you make it much more obvious everywhere that the readme and possible troubleshooting options are all on the GitHub site, and to check that first before posting.

Better still, copying the entire readme into the first post, as the thread is listed as the place to get support, not the GitHub repo.

I am not trying to be mean. You said you couldn't find the info anywhere, so I pointed you to it. Just making sure you have the info.

 

We only update one place, GitHub (the readme), which is the ultimate source of both the code and the documentation. That automatically gets synced to Docker Hub with each image push, and Docker Hub in turn is the source of our images.

 

Everything else links to those two places because we don't have the time to update so many different sources every time something changes.

Edited by aptalca

57 minutes ago, Ascii227 said:

If you want to avoid other people making these same mistakes, may I suggest you make it much more obvious everywhere that the readme and possible troubleshooting options are all on the GitHub site, and to check that first before posting.

 

I agree that a lot of useful information for most docker containers is not necessarily obvious without checking the Docker Hub/Github sites.  Fortunately, I learned this long ago (because something was not working as expected and I could not figure out why) and I always check them before or immediately after I install a new container.

 

Most users are just going to install the container with the defaults and perhaps play with a few obvious settings on the container Edit screen.

 

There is a lot of very useful and often necessary information in the readme and other docs on the Docker Hub or Github sites for most containers and I suspect many users never read that.

 

It would not hurt to have that information more easily exposed, or at least recommended in the first post.  The links are there, but many users don't know that more than code is available on those sites.

 

The default tag of "latest" on this docker is going to put users on the bleeding edge who may not want to be there.  On the other hand, the whole UniFi kerfuffle was caused because some users were upset they could not be on the bleeding edge as soon as they wanted to be.

 

If a docker has multiple repository tags or version variables, perhaps at least that information ought to be in the first post of each container.

 

The popup when installing UniFi-Controller is very informative and is referenced in the first posts for the container.  Perhaps other containers need this information presented in a more apparent fashion.

 

[screenshot of the UniFi-Controller install popup]

 

36 minutes ago, Hoopster said:

 

The popup when installing UniFi-Controller is very informative and is referenced in the first posts for the container.  Perhaps other containers need this information presented in a more apparent fashion.

 

[screenshot of the UniFi-Controller install popup]

 

I would love it if all the maintainers took advantage of that feature of CA. It's one of those features which I thought was going to be a game changer, but wound up with very little adoption. That popup can actually change any default on the template (paths, ports, environment variables, etc.)

1 hour ago, aptalca said:

I am not trying to be mean. You said you couldn't find the info anywhere, so I pointed you to it. Just making sure you have the info.

 

1 hour ago, Hoopster said:

I agree that a lot of useful information for most docker containers is not necessarily obvious without checking the Docker Hub/Github sites.

Thanks for the calm and considered responses from you guys. I appreciate that documentation is hard to keep updated in many different locations; I was just pointing out that the signposting to it in this instance expects a lot of the user, assuming they will go to the Docker Hub or GitHub repo.

28 minutes ago, Squid said:

It's one of those features which I thought was going to be a game changer, but wound up with very little adoption.

Squid's feature in CA looks awesome; this would have nailed my issue with tag confusion upfront, before it even appeared :)

2 hours ago, Ascii227 said:

If you want to go down this path of pointing out where I should have looked, then that's fine, but I did check the first post of this thread several times and had no reason to go to the GitHub link, as I had no interest in the code and already had a path of downloading the docker directly from Community Apps.

If you want to avoid other people making these same mistakes, may I suggest you make it much more obvious everywhere that the readme and possible troubleshooting options are all on the GitHub site, and to check that first before posting.

Better still, copying the entire readme into the first post, as the thread is listed as the place to get support, not the GitHub repo.

I'm sorry, but every initial post has links to both GitHub and Docker Hub, both of which have the readme on them.  Even mild curiosity or a Google search would have got you to the readmes.  If we had everyone banging on our door saying the same thing, then you might have a point, but we don't; there's only so much spoon feeding we can do.  Unraid already makes installing docker containers about the "easiest" of pretty much any host OS.

 

It's very easy to suggest extra things. If you wish to pop by our Discord and discuss becoming the documentation guy/girl, then I'm all for contributions. I mean this in all seriousness, please pop in.

2 hours ago, CHBMB said:

I'm sorry, but every initial post has links to both GitHub and Docker Hub, both of which have the readme on them.  Even mild curiosity or a Google search would have got you to the readmes.

I'm sorry that you seem to have taken it this way. Due to the way both unRAID and dockers have developed over time, the standard way now advised in all tutorials and YouTube videos to get these systems set up is to use automated setup scripts and plugins. Unfortunately, in-depth documentation, or even signposts to the correct locations of the docs, has not kept up with this pattern and remains on developer-centric sites such as GitHub.

 

As Squid himself has said, he foresaw this as a potential issue and specifically developed a feature in his app store to combat it.

 

2 hours ago, CHBMB said:

there's only so much spoon feeding we can do

If you look at my post history you will see I have spent quite some time researching how to use unraid properly, and never just whined and asked for people to solve the problems for me.

2 hours ago, CHBMB said:

It's very easy to suggest extra things

All I suggested was that on the first post of this thread you mention that the readme and all potential troubleshooting info can be found on the github page.

No spoonfeeding necessary, just a simple signpost to where the docs are so people can find them.

 

Considering this thread is the official support for this docker and system, I think that's quite a constructive suggestion, and not really one that requires a condescending reply about how little I am doing to help myself.

Edited by Ascii227


Hello,

 

I have a question. Everything is working on Plex, except when I choose the WebUI it will not open or load the page. It tries to load IP:32400\web, but it just freezes trying to load; only a white screen shows up and it fails. I can only access the Plex settings through app.plex.tv.

 

What would cause this? I turned off all firewalls just to see if it was being blocked, but I just can't seem to figure it out. It's not affecting anything else; movies play and can be accessed as normal. But if I didn't have the app.plex access, I would not be able to access it at all.

 

Any ideas?

 

Thanks

25 minutes ago, cpluse said:

when I choose the WebUI it will not open or load the page. It tries to load IP:32400\web

Is it the right IP address?

9 minutes ago, trurl said:

Is it the right IP address?

Yes, it's the correct IP, same one as in the settings. Also, before, when I hit WebUI it would work. Plus I would assume that if it was not the correct one, Plex would not run from the docker. At least that's what I assume.

 

It's weird; everything is working except when trying to access it from the docker or typing the URL!

51 minutes ago, cpluse said:

IP:32400\web

Are you actually using a backslash (\) character there? That is wrong.


Lol, sorry, typed the wrong slash.

 

Sorry, you're right. It looks like this: 32400/web.

 

LinuxPlex:host

 

 


I need some assistance troubleshooting poor transcoding performance.

I've been struggling with this since I built my new server late last year. I figured the issue was a lack of understanding on the capabilities of the components I selected for the server. Plex was an afterthought, so I wasn't too upset with making a mistake with component selection.


Server specs are as follows:
SuperMicro X10DRL-IO
2x Xeon E5-2658v3
64GB DDR4-2133 Fully Buffered ECC quad channel (4x 16GB sticks - in the correct slots, board reports quad channel)
1x Nvidia Quadro P2000 added recently for transcoding via the new nvidia plugin.
24x 8TB Western Digital RED drives connected to an HP SAS Expander at 3Gb/s (not 6Gb/s, limitation of this strange SAS Expander)
RAID controller is a 9211-8i
2x Intel 1TB 660p NVMe SSDs for cache

Since this is a dual socket system, paying attention to CPU/PCI-E Lane associations is important. Thankfully this board only has ONE slot on CPU2.
Slot arrangement is as follows:

660p SSD's are on a carrier card in the TOP PCI-E x8 slot, which is CPU1 Slot 6
Port Expander is in the lowest x8 slot which is a PCI-E 2.0x4 slot, on the chipset (not on either CPU) - but should only be pulling power from the slot.
Raid Controller is in the slot above that, which is PCI-E 3.0x8 and is CPU1 Slot 2
Graphics card is in the only x16 slot on the board which is CPU1 Slot 5

Any time I go to transcode certain 4k files to any resolution the client reports that the server is not powerful enough to transcode for smooth playback. This seems bonkers since I have 48 CPU Threads and a dedicated GPU for transcoding. I figured originally since the CPU has no QuickSync Video that this was the problem, and that I'd need some sort of extra acceleration to get things done. So I threw the P2000 in, tossed unraid-nvidia in it, configured the container to use the new GPU and also added the nvdec parameter to the Plex Transcoder so that I could use the GPU for decoding and encoding. Video files are stored on the rotational media, fwiw though it shouldn't matter at this bitrate even.

I get 15 seconds into this file:

[screenshot: media info for the file]
And then it starts buffering and says the server isn't powerful enough.

Checking nvidia-smi shows that the card is being used for encoding:
[nvidia-smi screenshot: encoder in use]

And checking further shows that it is also being used for decoding:
[nvidia-smi screenshot: decoder in use]

CPU Usage in this circumstance is quite low, suggesting that the audio stream shouldn't be bottlenecking. The video quality also suffers substantially, colors are washed out and there is significant banding.

I'm at a loss for what to try to solve this. The target transcode is 720p, for reference, though streaming 1080p at 1080p works just fine on this same machine, even before adding the GPU for transcoding.
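Roughly, the container settings described above amount to the following docker run sketch (host paths are placeholders, not my actual template, and the nvdec tweak to the Plex Transcoder is separate from this):

```shell
# Sketch of a linuxserver/plex container using the unraid-nvidia runtime;
# paths are placeholders for illustration only.
docker run -d --name=plex \
  --runtime=nvidia \
  -e NVIDIA_VISIBLE_DEVICES=all \
  -e NVIDIA_DRIVER_CAPABILITIES=all \
  -e VERSION=latest \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/Media:/data \
  linuxserver/plex
```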

Edited by Xaero


Check the network logs at the player. I use Android TV, and that option is under settings. Then at the server website, look for the logs when the errors come up. That should give you a better idea of what's going on. That was happening to me, and my solution was to set remote streaming to Maximum, because I do want the max resolution.

Sent from my Pixel 2 XL using Tapatalk

3 hours ago, Xaero said:

I need some assistance troubleshooting poor transcoding performance. ... Any time I go to transcode certain 4k files to any resolution the client reports that the server is not powerful enough to transcode for smooth playback. ... The video quality also suffers substantially, colors are washed out and there is significant banding. ... I'm at a loss for what to try to solve this.

When you transcode HDR, the colors will be washed out; no way around it. Google "Plex HDR tone mapping" and you'll see plenty of articles and posts about it.

With regards to buffering, you must have a bottleneck somewhere. You are working with a high-bitrate file, so it could be IO or network related as well.

14 minutes ago, aptalca said:

When you transcode HDR, the colors will be washed out; no way around it. Google "Plex HDR tone mapping" and you'll see plenty of articles and posts about it.

With regards to buffering, you must have a bottleneck somewhere. You are working with a high-bitrate file, so it could be IO or network related as well.

Strange, it seems to be encode-bottlenecked from any probing I do.
Disk read speed according to iotop for the process toggles between ~200KB/s and ~1.5MB/s, with the main page unable to even show which disk is being hit (though I know which disk it is by finding it on the disk).
IO tests on the disk show it capable of reading at ~150-200MB/s sequential, and no other activity is on that disk. So the disk and interface should be good.
The transcode directory is in tmpfs, so DDR4-2133 shouldn't be the bottleneck either. The connection *is* slow, but I am streaming using a preset that works on other video files (720p, 2Mb/s).

I'll have to dig at the logs some more to see if I can find a "real reason" for it not to transcode this just fine, as it should be able to according to the specs on paper.
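For anyone wanting to replicate the RAM-backed transcode directory, one illustrative way to do it (size and paths are placeholders):

```shell
# Map the container's transcode directory onto tmpfs so transcodes never
# touch disk; size and paths here are illustrative, not a recommendation.
docker run -d --name=plex \
  --mount type=tmpfs,destination=/transcode,tmpfs-size=8g \
  linuxserver/plex

# Verify from inside the container:
#   docker exec plex df -h /transcode    # should report a tmpfs filesystem
```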


What is the hourly sas expander that you are using?

I'd like to know, since I just bought a used one off eBay for my move from 10 disks to 24.


Thank you

Sent from my SM-N960U using Tapatalk

1 hour ago, ijuarez said:

What is the hourly sas expander that you are using?

I'd like to know, since I just bought a used one off eBay for my move from 10 disks to 24.


Thank you

Sent from my SM-N960U using Tapatalk
 

HP 487738-001 / 468405-001 (they are the same part)
There are some 24 port HBAs on the market now that are just a tad pricey, but the allure of only utilizing a single slot is quite nice.  There are also a couple that have 2x external ports as well, which would be nice as eventually I'll add a disk shelf to expand (many years away, I hope)

3 hours ago, Xaero said:

HP 487738-001 / 468405-001 (they are the same part)
There are some 24 port HBAs on the market now that are just a tad pricey, but the allure of only utilizing a single slot is quite nice.  There are also a couple that have 2x external ports as well, which would be nice as eventually I'll add a disk shelf to expand (many years away, I hope)

Damn autocorrect. I bought the same card. I don't have a 4K TV, so no 4K vids.

 

I also got the same LSI controller in IT mode, but I thought that even with the expander, one RAID card could only do 16 drives?


I'm having a bit of a problem. I paid for the Plex Pass and enabled hardware transcoding. I have no GPU, but my processor can handle HW transcoding. Plex automatically updated to version 1.15 and I can't watch some of my movies now. It doesn't make sense; all I did was pay for the Plex Pass.

 

From the network error I'm getting:

 

[FFMPEG] - No VA display found for device: /dev/dri/renderD128.

Edited by gacpac

I'm having a bit of a problem. ... [FFMPEG] - No VA display found for device: /dev/dri/renderD128.


Problems playing media seem to be quite common with the 1.15.0.659 version included with latest Plex Pass. Many have resolved it by rolling back to the prior version 1.14.1.5488.

Now your error seems to indicate that no iGPU supported by i915 drivers is found. Is your iGPU set as the primary graphics adapter in your BIOS?

What is your CPU? Intel i9 iGPUs are not currently supported by i915 drivers in the Linux kernel used in UnRAID.
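A quick way to check for the render node (run on the unRAID host, not inside the container; a troubleshooting sketch, not official docs):

```shell
# If the i915 driver is loaded, /dev/dri should exist and contain a render
# node (typically renderD128) that the Plex container needs access to.
check_igpu() {
    if [ -d /dev/dri ]; then
        ls /dev/dri
    else
        echo "/dev/dri missing: i915 not loaded (check BIOS primary display)"
    fi
}
check_igpu
```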


Sent from my iPhone using Tapatalk

5 minutes ago, Hoopster said:

 


Problems playing media seem to be quite common with the 1.15.0.659 version included with latest Plex Pass. ... Is your iGPU set as the primary graphics adapter in your BIOS?

 

Okay, the part with the version I got fixed. Now it's the transcoding. I don't see my render node when I go to /dev/dri.

/dri -- doesn't exist 

 

The processor I'm using is an i5-4570 with Intel Quick Sync Video, which is supposed to be supported.



The processor I'm using is an i5-4570 with Intel Quick Sync Video, which is supposed to be supported.



Yes, that CPU/iGPU should work. I have an i5-4590 in one of my servers and it works fine for hw transcoding.

Again, is your iGPU the primary graphics adapter in BIOS? Auto will not work.
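For reference, the additions to unRAID's /boot/config/go that are commonly suggested for this (a community convention, not official LSIO documentation):

```shell
#!/bin/bash
# /boot/config/go additions for Intel iGPU hardware transcoding:
# load the i915 driver so /dev/dri/renderD128 exists before docker starts
modprobe i915
sleep 4
# let the container's plex user open the render node
chmod -R 777 /dev/dri
```

The container then also needs the device passed through, e.g. `--device=/dev/dri` as an extra parameter on the template.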


Sent from my iPhone using Tapatalk

