Plex Media Server interferes with WiFi while processing library



After a few challenges, I finally got Plex up and running, was able to access it remotely, and started processing my library.  Problem is, now my WiFi network is down.  I'm on Fios with the Fios router set to bridge mode, using a Netgear Nighthawk R8000 as the primary router.  Everything worked until I had Plex start processing my libraries.  Any thoughts?

Thanks gang


Hi -

 

Hmm.  It doesn't really make sense that these two things would be directly related.  I assume your server is on a hard-wired connection?  Plex will be downloading metadata as it processes your library, but that shouldn't kill your WiFi.  Do your WiFi devices have any connection to the R8000?


That's what I said: it doesn't make sense, but my connected devices didn't go down until Plex started processing.  Yes, my server is hard wired.  I rechecked and it's still green with remote connectivity.  None of my smart home devices or Amazon Echos/Dots will connect.  My smartphone won't even connect.  Completely baffled, as I've tried rebooting the router and turning the radios off and on again... nothing.  Not sure if there's another setting that could be causing the issue; other than port forwarding, nothing should impact this either.  I talked to another Plex owner who has had it set up for years, and he has never come across anything like this.

 

Thanks


SOLVED:  It was an overload problem!  If you have a huge movie collection (around 3,000 titles) and set up one folder to move all at once, the data transfer plus the metadata lookups will overwhelm the router.  WiFi was inoperative, and then I had trouble with hardwired internet connections too.  I had to reset the router to factory defaults and start over.  Now I transfer in batches of 200, and it works!  I also keep remote connectivity while the transfers are happening.  Everything is now working as it should!
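If anyone wants to script the batching, here's a minimal sketch of the approach, assuming a staging folder and a Plex library folder (the paths, batch size, and pause length are my own placeholders, so adjust them for your setup):

```python
# Rough sketch: move a large movie collection into the Plex library
# folder in batches, pausing between batches so the scanner (and the
# router) can catch up.  Paths and numbers below are assumptions.
import shutil
import time
from pathlib import Path

SOURCE = Path("/mnt/staging/movies")   # where the collection sits now (assumed)
LIBRARY = Path("/mnt/plex/movies")     # the folder Plex watches (assumed)
BATCH_SIZE = 200                       # the batch size that worked for me
PAUSE_SECONDS = 15 * 60                # give Plex time to fetch metadata

files = sorted(p for p in SOURCE.iterdir() if p.is_file())
for i in range(0, len(files), BATCH_SIZE):
    batch = files[i:i + BATCH_SIZE]
    for f in batch:
        shutil.move(str(f), LIBRARY / f.name)
    print(f"Moved batch {i // BATCH_SIZE + 1} ({len(batch)} files)")
    # Skip the pause after the final batch.
    if i + BATCH_SIZE < len(files):
        time.sleep(PAUSE_SECONDS)
```

The pause between batches is the important part: it gives Plex time to finish its metadata downloads before the next wave of files arrives, so the router never sees the full load at once.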

12 hours ago, tdallen said:

Good to hear, but I'm surprised the R8000 fell over on you in that situation - thought it was more robust.

Robust is one thing; the maximum number of concurrent connections etc. is a different matter.

 

Most probably, the Linux kernel ran out of RAM (or hit a configured hard limit) while growing some critical table, in which case it could not handle more requests. There is a rather large number of parameters that can be tweaked in the /proc file system, and it isn't easy to find the optimum for a given usage case.
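For instance, one common culprit on Linux-based routers is the netfilter connection-tracking table. As a rough illustration (assuming nf_conntrack is loaded, as it is on most consumer routers doing NAT), you can compare the current entry count against the limit via the standard /proc sysctls:

```python
# Minimal sketch: check how close the Linux connection-tracking table
# is to its limit.  These /proc paths are standard netfilter sysctls,
# but whether they exist depends on the kernel build (assumption:
# nf_conntrack is loaded).
from pathlib import Path

def read_int(path: str) -> int:
    return int(Path(path).read_text().strip())

count = read_int("/proc/sys/net/netfilter/nf_conntrack_count")
limit = read_int("/proc/sys/net/netfilter/nf_conntrack_max")

print(f"tracked connections: {count}/{limit} ({100 * count / limit:.1f}% used)")
if count > 0.9 * limit:
    # Raising the limit costs kernel RAM, which is exactly the
    # trade-off described above on a memory-constrained router.
    print("Table nearly full -- new connections may be dropped.")
```

When that table fills up, the kernel starts dropping packets for new connections, which matches the symptoms in this thread: existing sessions limp along while new devices can't connect at all.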

