[Support] Plex from Linuxserver


48 posts in this topic


"We have discovered an issue; I will look into it and get back to you soon. :)"

 

Good to hear...I look forward to the fix!

 

EDIT:

 

Tested out the new nzbget and it is working now without changing the owner/permissions!  Awesome!

 

Also tested out the new plex, but it has a slight issue:

 

*** Running /etc/my_init.d/10_dbus.sh...

*** Running /etc/my_init.d/11_new_user.sh...

*** Running /etc/my_init.d/20_update_plex.sh...

  % Total    % Received % Xferd  Average Speed  Time    Time    Time  Current

                                Dload  Upload  Total  Spent    Left  Speed

 

0    0    0    0    0      0      0 --:--:-- --:--:-- --:--:--    0

1  100    21    0    0    117      0 --:--:-- --:--:-- --:--:--  117

*** Running /etc/my_init.d/99_chown_plex_owned_files.sh...

find: `/config/Library/Application Support': No such file or directory

*** /etc/my_init.d/99_chown_plex_owned_files.sh failed with status 1

 

It keeps on getting stuck here and repeats until I kill the container...

 

I know all the permissions on the config folder are correct, so maybe that script is running too soon?

 

Thanks!
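If the timing guess is right, the init script just needs to tolerate a config tree that hasn't been created yet. A minimal sketch of such a guard (the /tmp path below is invented for the demo; the real script checks "/config/Library/Application Support"):

```shell
# Guarded version of the failing step: skip the chown pass when the
# directory hasn't been created yet.  The /tmp path is a stand-in
# for "/config/Library/Application Support".
cfg="/tmp/demo_config/Library/Application Support"
if [ -d "$cfg" ]; then
    find "$cfg" -maxdepth 0
else
    echo "skipping chown - directory not created yet"
fi
```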

Link to post
  • 2 weeks later...

I just tried switching from Needo's plex to Linuxserver's.  Looking in the log I see this error:

  /etc/my_init.d/99_chown_plex_owned_files.sh: line 4: [: missing `]'

and then Plex has problems due to permission issues.

 

I poked around here:

  https://github.com/linuxserver/plex/blob/master/init/99_chown_plex_owned_files.sh

And I think the problem is that the path in the first test isn't quoted; it should look like this:

  if [ -f "/config/Library/Application Support"]; then
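Both issues turn out to matter: `[` is an ordinary command that requires its closing `]` as a separate final argument, and an unquoted path containing a space gets word-split. A quick sketch (the path is invented):

```shell
# `[` is a regular command whose last argument must be a lone `]`,
# so "...Support"] breaks parsing; quoting keeps the embedded space
# from splitting the path into two words.
dir="/tmp/space demo.$$"
mkdir -p "$dir"
if [ -d "$dir" ]; then          # note the space before the closing ]
    echo "directory found"
fi
rm -rf "$dir"
```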

Link to post

The space between " and ] is important.

 

Thanks!  I was focused on the missing quotes around the path, didn't actually know about that :)

 

 

So I docker exec'd into the container and manually ran the next line in 99_chown_plex_owned_files.sh:

 find "/config/Library/Application Support" -user plex -exec chown abc:abc {} \;

but it didn't do anything. 

 

All of my files were owned by user 999, so I ran this instead:

 

find "/config/Library/Application Support" -user 999 -exec chown abc:abc {} \;

It changed a lot of files, but when I run this:

 

find "/config/Library/Application Support" -user 999

it continues to find more.  I'm guessing maybe the files are open in Plex?  Not sure how to shut down Plex without shutting down the docker.
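One thing worth checking before blaming open files: `-user plex` only matches if the container's /etc/passwd maps the files' numeric owner to the name plex, so matching on the raw UID is more reliable. A small sketch of the difference:

```shell
# `-user plex` matches only if /etc/passwd maps the files' numeric
# owner to the name "plex"; matching the raw UID avoids that
# dependency entirely.
f=$(mktemp)
uid=$(stat -c %u "$f")                      # numeric owner of the file
matches=$(find "$f" -user "$uid" | wc -l)
echo "matched $matches file(s) by numeric UID"
rm -f "$f"
```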

 

 

Link to post


/config......

 

Is that mapped locally?

 

If it is, stop the docker, go to the unRAID command line at that path, and run the chown there, substituting nobody:users for abc:abc.

Link to post


 

Hmm, so "abc" inside the container is "nobody" outside the container.  Would love to understand why that is important :)

 

But I see now, the remaining files are actually symbolic links so they throw "chown: cannot dereference" errors outside the container.  Should I run "chown -h" inside the container instead?
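On the abc/nobody question: the kernel stores ownership as a numeric UID, and the name you see is just whatever the local /etc/passwd maps that number to. The container maps the UID to abc while unRAID maps it to nobody (UID 99, going by the PUID used elsewhere in this thread). A quick illustration:

```shell
# Ownership lives on disk as a number; "abc" vs "nobody" is just the
# name each environment's /etc/passwd gives that number.
f=$(mktemp)
echo "numeric owner: $(stat -c %u "$f")"    # what the kernel stores
echo "resolved name: $(stat -c %U "$f")"    # what /etc/passwd says
rm -f "$f"
```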

Link to post


 

I don't know enough about symbolic links to say either way and would defer to someone else on that issue.
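For what it's worth, the behavioral difference is easy to demonstrate: plain chown follows the link to its target (and fails when the target is missing), while chown -h operates on the link itself. A sketch with a deliberately dangling link (path invented):

```shell
# Plain chown dereferences the symlink, so it fails on a link whose
# target doesn't exist; chown -h changes the link itself.
link="/tmp/dangling_link_$$"
ln -s /nonexistent/target "$link"
chown "$(id -un)" "$link" 2>/dev/null || echo "plain chown: cannot dereference"
chown -h "$(id -un)" "$link" && echo "chown -h: ok"
rm -f "$link"
```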

Link to post

OK, that did the trick.  An improved 99_chown_plex_owned_files.sh would be:

 

if [ -f "/config/Library/Application Support" ]; then
    find "/config/Library/Application Support" \! -user abc -exec chown -h abc:abc {} \;
    find "/config/Library/Application Support" \! -group abc -exec chown -h abc:abc {} \;
fi

 

 

Link to post


 

fork the repo and make a pull request.

Link to post

fork the repo and make a pull request.

 

Good idea.  For anyone following along, I also changed the -f to a -d.  Thanks for all the help sparkly!

 

Thanks for accepting the patch lonix, I am no longer having errors with 99_chown_plex_owned_files.sh

Link to post

Just installed this docker and I'm attempting to move from the Lime Technology docker but I've hit a snag. All of my movies have been scanned and imported, but Plex isn't finding the artwork for any of them. All it shows is a random thumbnail from the middle of the movie resized and squished to fit the artwork icon. What's up with this? Is anyone else having this problem or did I do something wrong?

Link to post

Yes, and everything worked when using the plugin in version 5, and it all came up fine when using the Lime Tech docker. But just in case here is an example of my movie file structure:

 

/Media/Theatrical Movies/101 Dalmatians (1961)/101 Dalmatians (1961)

/Media/Theatrical Movies/Anna and the King (1999).m4v

/Media/Theatrical Movies/Avater (2009)/Avater (2009) part 1.m4v

/Media/Theatrical Movies/Avater (2009)/Avater (2009) part 2.m4v

/Media/Theatrical Movies/Batman Begins (2005).m4v

 

As you can see, some of them are in folders named after the movie and some are just in the root Theatrical Movies/ folder. I started without the folder named after the movie, but then started using it later and just haven't converted all of the files to follow that convention.

 

 

Link to post

Ok, so here's what I've uncovered so far. I restarted the docker and that didn't change anything that I can tell, but I let it sit for a while to see if it was just really slow. Now the covers show up, but only if I manually go into the details of a movie. It doesn't seem to be matching the movies automatically without me going into each and every one. It's better than nothing, but it's going to be a tedious, slow process to go through everything.

 

Anyway, I've attached the screenshots of the settings as requested in case we can fix this little bug.

[Attached: LimetechPlex.png, LinuxServerPlex.png]

Link to post

Why do you have different config directories for the two dockers?  Typically, if you are switching between Plex dockers, you would shut down/delete the old one, then set up the new one to point at the same config directory the old one was using.  That way you keep all of your settings.  See:

  https://lime-technology.com/forum/index.php?topic=41562.msg394565#msg394565

  https://lime-technology.com/forum/index.php?topic=41609.0

 

I don't know why the new one is downloading data slower, perhaps the Plex agents throttle how many requests you can make in a single day?

Link to post

A few hours ago a patch was submitted to solve a problem some people experienced trying to access their plugins directory. This patch had some problems, though.

 

1.  It delayed the start of Plex by around 3 minutes.

2.  It changed permissions on a lot of things, and Plex uses permissions to deal with a few things.

3.  Plex also uses the Changed timestamp to determine whether or not to do certain things (like validating data, backups, etc.).

 

Due to a lack of communication and quality assurance, this patch was pushed to the general public (it was reverted within an hour), and I would like to sincerely apologize for that. This should normalize itself within a week or so.

 

Now, to anyone having issues accessing their plugins folder: this is not an issue with this container in any way, but rather a result of something in your old container/plugin. With the container stopped, you should SSH into your server and issue a command something like:

chmod -R ug+rw /path/to/plex/appdata 

This should fix the problem once and for all, and it does not delay the startup of the container.
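A quick way to verify the chmod took effect everywhere is a find that prints any entry still missing ug+rw; sketched here against a throwaway directory rather than the real appdata path:

```shell
# After the recursive chmod, this find should print nothing: it
# lists entries still missing user or group read-write.  A throwaway
# directory stands in for /path/to/plex/appdata.
appdata=$(mktemp -d)
touch "$appdata/Preferences.xml"
chmod 400 "$appdata/Preferences.xml"        # simulate bad perms
chmod -R ug+rw "$appdata"
find "$appdata" ! -perm -u+rw,g+rw
rm -rf "$appdata"
```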

Link to post

 

/Media/Theatrical Movies/Avater (2009)/Avater (2009) part 1.m4v

/Media/Theatrical Movies/Avater (2009)/Avater (2009) part 2.m4v

 

 

Avatar

 

Good catch, but that is my bad from typing it on the forum.

 

 

I didn't have time over the weekend to mess with it, but it looks like just about everything showed up. So I guess it needed to take its sweet time. Thanks for the help, y'all.

Link to post

I'm having problems with this docker.  I get command failed.

 

IMAGE ID [latest]: Pulling from linuxserver/plex.
IMAGE ID [e206d5fe2a5b]: Pulling fs layer.
IMAGE ID [cfd284ec74f2]: Pulling fs layer.
IMAGE ID [e6624efa03db]: Pulling fs layer.
IMAGE ID [d680890e907c]: Pulling fs layer.
IMAGE ID [26a00a9ab589]: Pulling fs layer.
IMAGE ID [8172f09f7095]: Pulling fs layer.
IMAGE ID [4b5ccf93ca2a]: Pulling fs layer.
IMAGE ID [cf788be675fd]: Pulling fs layer.
IMAGE ID [dad4ea3f0ebd]: Pulling fs layer.
IMAGE ID [91bcfb40161d]: Pulling fs layer.
IMAGE ID [05c5b8ad1f8a]: Pulling fs layer.
IMAGE ID [29af19c672e1]: Pulling fs layer.
IMAGE ID [1945f71177cf]: Pulling fs layer.
IMAGE ID [6c64bfaf5450]: Pulling fs layer.
IMAGE ID [031fcb0b8e62]: Pulling fs layer.
IMAGE ID [511136ea3c5a]: Already exists.
IMAGE ID [53f858aaaf03]: Already exists.
IMAGE ID [837339b91538]: Already exists.
IMAGE ID [615c102e2290]: Already exists.
IMAGE ID [b39b81afc8ca]: Already exists.
IMAGE ID [8254ff58b098]: Already exists.
IMAGE ID [ec5f59360a64]: Already exists.
IMAGE ID [2ce4ac388730]: Already exists.
IMAGE ID [2eccda511755]: Already exists.
IMAGE ID [5a14c1498ff4]: Already exists.
IMAGE ID [031fcb0b8e62]: Layer already being pulled by another client. Waiting..
IMAGE ID [26a00a9ab589]: Downloading 100% of 32 B. Verifying Checksum. Download complete.
IMAGE ID [e206d5fe2a5b]: Downloading 100% of 32 B. Verifying Checksum. Download complete.
IMAGE ID [cfd284ec74f2]: Downloading 100% of 207 B. Verifying Checksum. Download complete.
IMAGE ID [e6624efa03db]: Downloading 100% of 32 B. Verifying Checksum. Download complete.
IMAGE ID [29af19c672e1]: Downloading 100% of 324 B. Verifying Checksum. Download complete.
IMAGE ID [cf788be675fd]: Downloading 100% of 32 B. Verifying Checksum. Download complete.
IMAGE ID [d680890e907c]: Downloading 100% of 32 B. Verifying Checksum. Download complete.
IMAGE ID [4b5ccf93ca2a]: Downloading 100% of 32 B. Verifying Checksum. Download complete.
IMAGE ID [1945f71177cf]: Downloading 100% of 324 B. Verifying Checksum. Download complete.
IMAGE ID [dad4ea3f0ebd]: Downloading 100% of 2 KB. Verifying Checksum. Download complete.
IMAGE ID [05c5b8ad1f8a]: Downloading 100% of 938 B. Verifying Checksum. Download complete.
IMAGE ID [91bcfb40161d]: Downloading 100% of 736 B. Verifying Checksum. Download complete.
IMAGE ID [031fcb0b8e62]: Downloading 100% of 514 B. Verifying Checksum. Download complete.
IMAGE ID [6c64bfaf5450]: Downloading 100% of 937 B. Verifying Checksum. Download complete.
IMAGE ID [e206d5fe2a5b]: Extracting. Pull complete.
IMAGE ID [8172f09f7095]: Downloading 0% of 130 MB.
IMAGE ID [cfd284ec74f2]: Extracting. Pull complete.
IMAGE ID [8172f09f7095]: Downloading 5% of 130 MB.
IMAGE ID [e6624efa03db]: Extracting.
IMAGE ID [8172f09f7095]: Downloading 6% of 130 MB.
IMAGE ID [e6624efa03db]: Extracting.
IMAGE ID [8172f09f7095]: Downloading 6% of 130 MB.
IMAGE ID [e6624efa03db]: Pull complete.
IMAGE ID [8172f09f7095]: Downloading 10% of 130 MB.
IMAGE ID [d680890e907c]: Extracting. Pull complete.
IMAGE ID [8172f09f7095]: Downloading 12% of 130 MB.
IMAGE ID [26a00a9ab589]: Extracting. Pull complete.
IMAGE ID [8172f09f7095]: Downloading 100% of 130 MB. Verifying Checksum. Download complete. Extracting. Pulling repository linuxserver/plex.
IMAGE ID [031fcb0b8e62]: Pulling image (latest) from linuxserver/plex. Pulling image (latest) from linuxserver/plex, endpoint: https://registry-1.docker.io/v1/. Pulling dependent layers.
IMAGE ID [511136ea3c5a]: Download complete.
IMAGE ID [53f858aaaf03]: Download complete.
IMAGE ID [837339b91538]: Download complete.
IMAGE ID [615c102e2290]: Download complete.
IMAGE ID [b39b81afc8ca]: Download complete.
IMAGE ID [8254ff58b098]: Download complete.
IMAGE ID [ec5f59360a64]: Download complete.
IMAGE ID [2ce4ac388730]: Download complete.
IMAGE ID [2eccda511755]: Download complete.
IMAGE ID [5a14c1498ff4]: Download complete.
IMAGE ID [e206d5fe2a5b]: Download complete.
IMAGE ID [cfd284ec74f2]: Download complete.
IMAGE ID [e6624efa03db]: Download complete.
IMAGE ID [d680890e907c]: Download complete.
IMAGE ID [26a00a9ab589]: Download complete.
IMAGE ID [8172f09f7095]: Pulling metadata. Pulling fs layer. Error downloading dependent layers.
IMAGE ID [031fcb0b8e62]: Error pulling image (latest) from linuxserver/plex, endpoint: https://registry-1.docker.io/v1/, Driver btrfs failed to create image rootfs 8172f09f709548ceae7708b245cceab6776eeab3d5fb0f8c0107bd4a4d464c00: Failed to create btrfs snapshot: no space left on device. Error pulling image (latest) from linuxserver/plex, Driver btrfs failed to create image rootfs 8172f09f709548ceae7708b245cceab6776eeab3d5fb0f8c0107bd4a4d464c00: Failed to create btrfs snapshot: no space left on device.

TOTAL DATA PULLED: 130 MB



Command:
root@localhost:# /usr/bin/docker run -d --name="plex" --net="host" -e PUID="99" -e PGID="100" -e PLEXPASS="1" -e TZ="America/Halifax" -v "/mnt/user/appdata/plex/":"/config":rw -v "/mnt/user0/":"/media":rw -v "/tmp":"/transcode":rw linuxserver/plex
Unable to find image 'linuxserver/plex:latest' locally
latest: Pulling from linuxserver/plex
8172f09f7095: Pulling fs layer
4b5ccf93ca2a: Pulling fs layer
cf788be675fd: Pulling fs layer
dad4ea3f0ebd: Pulling fs layer
91bcfb40161d: Pulling fs layer
05c5b8ad1f8a: Pulling fs layer
29af19c672e1: Pulling fs layer
1945f71177cf: Pulling fs layer
6c64bfaf5450: Pulling fs layer
031fcb0b8e62: Pulling fs layer
031fcb0b8e62: Pulling fs layer
511136ea3c5a: Already exists
53f858aaaf03: Already exists
837339b91538: Already exists
615c102e2290: Already exists
b39b81afc8ca: Already exists
8254ff58b098: Already exists
ec5f59360a64: Already exists
2ce4ac388730: Already exists
2eccda511755: Already exists
5a14c1498ff4: Already exists
e206d5fe2a5b: Already exists
cfd284ec74f2: Already exists
e6624efa03db: Already exists
d680890e907c: Already exists
26a00a9ab589: Already exists
031fcb0b8e62: Layer already being pulled by another client. Waiting.
4b5ccf93ca2a: Verifying Checksum
4b5ccf93ca2a: Download complete
cf788be675fd: Verifying Checksum
cf788be675fd: Download complete
05c5b8ad1f8a: Verifying Checksum
05c5b8ad1f8a: Download complete
91bcfb40161d: Verifying Checksum
91bcfb40161d: Download complete
Pulling repository linuxserver/plex
29af19c672e1: Verifying Checksum
29af19c672e1: Download complete
1945f71177cf: Verifying Checksum
1945f71177cf: Download complete
6c64bfaf5450: Verifying Checksum
6c64bfaf5450: Download complete
031fcb0b8e62: Verifying Checksum
031fcb0b8e62: Download complete
031fcb0b8e62: Download complete
031fcb0b8e62: Pulling image (latest) from linuxserver/plex
031fcb0b8e62: Pulling image (latest) from linuxserver/plex, endpoint: https://registry-1.docker.io/v1/
031fcb0b8e62: Pulling dependent layers
511136ea3c5a: Download complete
53f858aaaf03: Download complete
837339b91538: Download complete
615c102e2290: Download complete
b39b81afc8ca: Download complete
8254ff58b098: Download complete
ec5f59360a64: Download complete
2ce4ac388730: Download complete
2eccda511755: Download complete
5a14c1498ff4: Download complete
e206d5fe2a5b: Download complete
cfd284ec74f2: Download complete
e6624efa03db: Download complete
d680890e907c: Download complete
26a00a9ab589: Download complete
8172f09f7095: Pulling metadata
8172f09f7095: Pulling fs layer
8172f09f7095: Error downloading dependent layers
031fcb0b8e62: Error pulling image (latest) from linuxserver/plex, endpoint: https://registry-1.docker.io/v1/, Driver btrfs failed to create image rootfs 8172f09f709548ceae7708b245cceab6776eeab3d5fb0f8c0107bd4a4d464c00: Failed to create btrfs snapshot: no space left on device
031fcb0b8e62: Error pulling image (latest) from linuxserver/plex, Driver btrfs failed to create image rootfs 8172f09f709548ceae7708b245cceab6776eeab3d5fb0f8c0107bd4a4d464c00: Failed to create btrfs snapshot: no space left on device
time="2015-07-25T18:13:32-03:00" level=fatal msg="Error pulling image (latest) from linuxserver/plex, Driver btrfs failed to create image rootfs 8172f09f709548ceae7708b245cceab6776eeab3d5fb0f8c0107bd4a4d464c00: Failed to create btrfs snapshot: no space left on device" 
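The btrfs "no space left on device" error means the Docker image store itself is full, not the array. Before re-pulling, it's worth checking free space where the layers land; /var/lib/docker is the stock location (on unRAID the store lives inside docker.img, so the exact path is an assumption here):

```shell
# Check free space where Docker stores image layers; the failed
# layer was ~130 MB.  /var/lib/docker is the stock location - on
# unRAID the store lives inside docker.img, so adjust accordingly.
store="/var/lib/docker"
[ -d "$store" ] || store="/var"             # fall back for the demo
avail_kb=$(df -Pk "$store" | awk 'NR==2 {print $4}')
echo "free space at $store: ${avail_kb} KB"
if [ "${avail_kb:-0}" -lt 204800 ]; then    # < ~200 MB
    echo "likely too small - grow docker.img or remove unused images"
fi
```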

Link to post
This topic is now closed to further replies.