XisoP
-
Posts: 28
Posts posted by XisoP
-
7 hours ago, Hexenhammer said:
If you run both VMs at the same time, you'd better split the cores 50/50.
Otherwise there is no point in core isolation, since both use them.
That's not really an issue in my case. Yes, the VMs share the cores and run in parallel, but we don't actually use them at the same time.
-
On 1/14/2021 at 1:23 PM, skois said:
Click Finish Setup (DON'T CLOSE the page on the 504 error). With the page still displaying the 504 error, go to /mnt/user/appdata/nextcloud/www/nextcloud/config/ and edit config.php.
The trusted_domains section should look like this (don't forget the comma after each entry); of course, change the IPs to match yours.
You should only need to add line 1; line 0 should already be there.
'trusted_domains' =>
array (
0 => '10.0.0.72:444',
1 => '10.0.0.*',
),
After saving config.php, go back to your browser and refresh the page. It should now ask you to log in.
If it still shows the 504 error, try restarting Docker WITHOUT CLOSING the page, and when Docker restarts, refresh the page once more.
My install keeps sending me back to the login page. I updated config.php with my own IP range/domain.
I did a full clean install after a live install died this week. Fortunately, all data is stored on external storage.
Yesterday I did manage to get it somehow going on a donor database, but shortly after migrating to the live one, MariaDB died.
I was hoping to get everything going again, but no luck for the past 16 hours.
I'm getting the same issue with SQLite, by the way 😭
-
Another resurrection. Sorry. 😅
Read the thread, but I do have one question.
Let's say I have 2 VMs (one for the wife and one for myself) with shared cores (6C/6T). When I isolate the corresponding cores, will they be exclusive to both of the VMs or just to one?
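For context on what I'm asking: as I understand it, isolation only keeps the host off those cores, while each VM's own pinning decides which cores it may use, so nothing seems to stop two VMs from pinning the same cores. A sketch of the libvirt XML behind Unraid's CPU-pinning page (core numbers below are placeholders, not my actual layout):

```
<!-- Both VMs can carry the same pinning; isolated cores are shared by
     every VM that pins them, not reserved for one. -->
<vcpu placement='static'>4</vcpu>
<cputune>
  <vcpupin vcpu='0' cpuset='4'/>
  <vcpupin vcpu='1' cpuset='10'/>
  <vcpupin vcpu='2' cpuset='5'/>
  <vcpupin vcpu='3' cpuset='11'/>
</cputune>
```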
-
47 minutes ago, garydapogi said:
I see your nextcloud also uses port 80. Every container/app needs a unique port.
Also, you have to forward the WAN ports to your LAN ports in your router 😉 Port 80 directs to the assigned NPM port for HTTP; 443 directs to the assigned NPM port for HTTPS.
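To make that concrete, a hedged sketch of the mapping (the host ports 1880/18443 are placeholders; NPM's internal container ports really are 80/443):

```
Router port forwards (illustrative):
  WAN 80  -> unraid-ip:1880   (NPM container port 80,  HTTP)
  WAN 443 -> unraid-ip:18443  (NPM container port 443, HTTPS)

Docker port mappings on the NPM container:
  -p 1880:80 -p 18443:443
```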
-
7 hours ago, Squid said:
..... Odds are excellent that you'll never even notice that you migrated over.
That's the message I was hoping for. I'll have something to do this weekend. And of course: always keep a backup 😉
Thanks
-
Can anyone provide an answer? I currently have a docker.img somewhere on my server and I would like to migrate to a share.
I'm running Unraid 6.9.2. The docker.img is currently on the array. A share for Docker has been created and set to cache-only (1TB SSD, mirrored).
I hope someone can help me keep all data and settings; I'm running some apps and databases that need to be preserved.
Thanks in advance.
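Not an official guide, but the rough procedure on 6.9.x as I understand it (the share name "docker" is from my setup above; container settings live in the templates, and appdata isn't touched):

```
1. Settings -> Docker -> set "Enable Docker" to No (stops the service).
2. Switch the Docker vDisk setting from the docker.img file to
   "directory" and point it at the share, e.g. /mnt/user/docker/.
3. Set "Enable Docker" back to Yes; image layers get re-downloaded.
4. Reinstall containers via Apps -> Previous Apps; the templates
   restore all settings, and data in appdata survives untouched.
5. Once everything runs, delete the old docker.img from the array.
```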
-
4 hours ago, GhostKnock said:
Could someone walk me through the process of rolling back to a previous version? Specifically 1.23.3.4706
Thanks!
Take a look at this thread:
You need the full tag in order for it to work (amd64-1.23.3.4707-ebb5fe9f3-ls58)
Google is your friend as always
I'm rolling back as well, just to try something. If prerolls are working, I'll make a post on the Plex community forums in the hope that the issue gets resolved at some point. Keep in mind that fixing issues isn't a priority for Plex; the LG buffer issue took over a year.
Please keep in mind that the version you asked for does not exist on Docker Hub. Enter a correct value to downgrade your image:
https://hub.docker.com/r/linuxserver/plex/tags?page=3&ordering=last_updated
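For reference, pinning a version with this container just means putting the full tag in the repository field of the Unraid template (tag taken from the reply above):

```
Unraid template, "Repository" field:
  before: linuxserver/plex:latest
  after:  linuxserver/plex:amd64-1.23.3.4707-ebb5fe9f3-ls58
(the full tag, including the amd64- prefix and -lsXX suffix, is required)
```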
-
-
Prerolls..
Does anyone else have issues with them? Mine just stopped working a couple of weeks ago. I currently have a playlist of 50 (more in total, but some are seasonal prerolls), randomly playing (or at least played, in the past) one before each movie. All MP4, placed in a separate folder on my Plex SSD for instant access.
Is this a container issue or a common plex issue?
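For anyone comparing setups: as far as I know, the pre-roll list is just one server setting, so it's worth checking it survived an update. A sketch of mine (paths are placeholders for my real ones):

```
Plex: Settings -> Server -> Extras -> "Cinema Trailers pre-roll video":
  /prerolls/a.mp4;/prerolls/b.mp4;/prerolls/c.mp4
Semicolons = Plex picks one at random per movie;
commas     = every listed file plays in order.
```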
-
5 hours ago, mattie112 said:
Yes
Simply forward to ip.of.your.vm:portofyourapp; it does not need to be on the same host.
Figured it out.
It turns out that the MAC address of a VM can somehow change. I had everything configured to look for IP .24, which was bound to that MAC address. When the MAC no longer exists in my scope, the machine gets assigned an address from the DHCP pool 😩
I set a static IP in my VM and everything is working like a charm. Thanks for your reply.
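For anyone hitting the same thing: the MAC lives in the VM's XML, so the alternative to a static IP inside the guest is pinning the MAC there and keeping the DHCP reservation. A sketch (MAC address and bridge name are placeholders):

```
<!-- Edit VM -> XML view: a fixed <mac> keeps DHCP reservations matching -->
<interface type='bridge'>
  <mac address='52:54:00:12:34:56'/>
  <source bridge='br0'/>
  <model type='virtio'/>
</interface>
```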
-
This might have been asked before. In that case: sorry 😅
I'm running NPM on my Unraid server. I'm also running a VM which serves the access control for my house. This platform is web-based. Is there a way to point <sub.domain.ext> to a site that is not running on my custom Docker network? It's HTTP traffic on a fixed IP for the VM. I've already tried some things, pointing to the IP or hostname, pushing buttons and sliding sliders, but all I get is errors.
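In case it helps anyone answering: what I'm trying to reproduce in NPM, written out as the equivalent nginx-style config (IP, port and domain are placeholders for my real ones):

```
# NPM "Proxy Host" equivalent (values are placeholders):
server {
    listen 443 ssl;
    server_name sub.domain.ext;
    location / {
        proxy_pass http://192.168.1.50:8080;  # the VM's fixed IP:port
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
    }
}
```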
-
1 hour ago, happyfuntime said:
I recently changed my routers IP from 192.168.1.1 to a random IP.......
Are you using DHCP for your Plex with remote access, or internal only? I don't recommend DHCP because of port forwarding. Plex really needs 32400 (if not manually changed).
-
7 hours ago, jmmrly said:
Hey,
I've added plex into my SWAG reverse proxy docker, and added my SWAG certs within network settings of plex, and also my plex domain name in there. When I access https://plex.domain.co.uk/web, I just get the plex logo and nothing else. Is this as expected or should I be able to login?
You should. I migrated from SWAG to Nginx Proxy Manager a couple of months ago because, in my opinion, management is much easier that way. As soon as I added Plex (and made the DNS entry on my domain, of course) everything worked like a charm.
-
1 hour ago, Kaizac said:
Plex currently has issues with HW decoding, especially with HDR (their forums are full of it). They are aware and working on it. But nothing can be done on the Docker side; Plex has to solve it. Rolling back to older versions, like you did, often solves it.
Let's hope they fix it a bit faster than last time. Took them about a year 😒
-
Hi all,
Last year we, the LG TV owners, ran into an issue with playback while transcoding with subtitles. There were extreme buffering issues; the only option for a while was running an old server version (1.16.x or older).
Earlier this year the issue was solved.
Today I noticed that the issue might have returned, in a worse way. Playing a 1080p H.264 file via direct play just froze a couple of minutes into the movie. Forcing a transcode didn't change a thing. Upgrading to 4K HDR made things worse.
After downgrading the Docker image from 1.22.3.4392 to 1.22.2.4282 the issue cleared. 4K HDR H.265 playback with subs was smooth as butter.
Could something have gone wrong (a bad line of code or so) while compiling the Docker image?
I'm running Unraid 6.9.2.
Currently Plex 1.22.2.4282.
Transcoding is handled by a Quadro M4000.
-
I've done some testing, and it does indeed seem to be some kind of Plex issue. Direct play freezes even more 🤔
Thanks for your quick response. I'll take this issue to linuxserver
-
Does anyone have issues with GPU memory? My M4000 floods and won't empty. Transcoding movies freeze for a couple of seconds and then resume, re-freeze within 20 seconds, and repeat.
I just stopped my movie and it took about a minute before the GPU started to flush its memory.
In my opinion this shouldn't happen when there's only one stream playing, especially when you realize the M4000 can handle at least 20 streams.
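For reference, this is how I'm watching it fill and drain (assumes the NVIDIA driver plugin is installed, so nvidia-smi is available on the host):

```
watch -n 2 nvidia-smi --query-gpu=memory.used,memory.total --format=csv
```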
-
16 minutes ago, dlandon said:
If you think there is a preclear issue, post on that forum. I don't think this is a UD issue.
I'm leaving that open for now. Preclear was doing a post-read; the pre-read was error free. The data rebuild is running at this moment, and I'm leaving a Main tab open on one of my machines to track what happens. I'll keep you posted 👍
-
On 4/26/2021 at 5:38 PM, XisoP said:
The whole server was unresponsive. Had to do a PTP (FTS, Flip The Switch) in my case.
The server booted without issues; the SSD for Plex is visible.
Attached is the diagnostics report. The log only covers today, so it's not that helpful I'm afraid.
penny-diagnostics-20210426-1732.zip 93.67 kB · 0 downloads
I'm starting a new parity check as I'm typing this. Hope it will survive. Unraid is booted in GUI safemode at this moment.
edit:
Parity check is still running, getting close to 14 hours. Still booted in safe mode
Final verdict:
The parity check finished successfully in safe mode, and the server has been restarted in normal mode. All works fine now. I don't know what the issue was. Maybe the dead drive? It is strange, though, that the unassigned disk for Plex was dropped during the parity check.
My server froze again. All went well for a couple of days with no real load, just preclearing 14TB and serving some Plex. I didn't leave the Main tab open, but I did have a preclear progress screen open on one of my machines. Preclear indicates that the server froze 61:33:11 into the job. Unfortunately I can't get any logs (other than those posted earlier); the complete server is unresponsive. No SSH, no GUI terminal. Total uptime according to the Docker GUI screen has been 2 days and a couple of hours (about the same as the preclear job).
-
I've got a disabled 8TB Seagate SMR drive. I've been meaning to get rid of my 3 SMR drives for a while, because this type of drive isn't the best for arrays. A few days ago it got disabled during a parity check. Yesterday a new 14TB WD drive arrived; I shucked it and it's preclearing now. I really hope nothing goes to sh*t in the coming week, as the preclear alone takes 3 days for a single run 😱
As soon as the preclear is done, the new disk will be adopted by the array and the rebuild can start. Another 1.5 days. Let's hope the SSD won't mess up the rebuild; it will be removed from the array when the rebuild is complete.
It made me realize I need a hot-swap unit in my server. I've got two 5-bay cages, but I need to remove the complete cage in order to swap out a drive, which is a pain in the butt to do. Luckily I keep track of all my drive locations.
-
The whole server was unresponsive. Had to do a PTP (FTS, Flip The Switch) in my case.
The server booted without issues; the SSD for Plex is visible.
Attached is the diagnostics report. The log only covers today, so it's not that helpful I'm afraid.
penny-diagnostics-20210426-1732.zip
I'm starting a new parity check as I'm typing this. Hope it will survive. Unraid is booted in GUI safemode at this moment.
edit:
Parity check is still running, getting close to 14 hours. Still booted in safe mode
Final verdict:
The parity check finished successfully in safe mode, and the server has been restarted in normal mode. All works fine now. I don't know what the issue was. Maybe the dead drive? It is strange, though, that the unassigned disk for Plex was dropped during the parity check.
-
Is it possible there is a bug in the latest update? Yesterday (or the day before) I installed an update, and last night my 3-month parity check started. This morning the web UI was not available, and it still isn't. All services are down; apps are installed on unassigned devices and on pool/array disks.
The reason I'm asking here is that a lot of Google hits refer to UD.
edit:
I left a browser open on one of my machines earlier today; the UI froze 8 hours into the parity check. My unassigned disk (Plex) seems to be missing.
Yes, there is a dead drive in my array and yes, there is an SSD in my array. New disks are on the way 😉
-
I'm joining this conversation. Slightly different setup.
I plan on running a VM for home assistant and would like to run it through my docker reverse proxy.
-
7 minutes ago, wgstarks said:
Run a check for updates and that should go away.
I never would have thought of that. Worked like a charm. Thanks!
-
[Support] Linuxserver.io - Nextcloud
in Docker Containers
Posted
I fixed my issue. Unfortunately I couldn't save my live database, so I have to add everyone again.
It turned out to be a networking issue. I got new networking hardware some time ago and put all the VLANs on my main NIC. It turned out this caused some serious issues.
I got an extra NIC, removed the VLANs from the main NIC and put them on the second one.
I then did a fresh install of nextcloud:latest with a new database (the old one was corrupted after trial and error). I just left the gateway timeout sitting on initialization for a couple of minutes and then had a successful login.