Earendur

Members
  • Posts: 21
  • Joined
  • Last visited

Earendur's Achievements

Noob (1/14)

Reputation: 4

  1. You should be able to copy the key data folders over by following Plex's own guide: https://support.plex.tv/articles/201370363-move-an-install-to-another-system/ It's odd that it's giving you issues changing containers. I'd try stopping the old container, then either moving the folder or pointing the new container at a new folder. Start the new one, confirm it boots, then shut it down and copy over the relevant data. It might be worth purging the Plex cache as well. Then see if it starts. (There's a rough sketch of this after the list below.)
  2. I've successfully done the exact same thing here, except I'm using a Traefik docker container for automatic certificate provisioning of my services rather than SWAG. I created a User Script that extracts the certs using jq, concatenates the cert and key into a .pem bundle, then uses openssl verify to check that the cert is valid before issuing the command to reload the web UI. Here's the script:

        #!/bin/bash
        traefik_path=/mnt/user/appdata/traefik
        acme_json=$traefik_path/acme.json
        domain=mydomain.com
        domain_cert=$traefik_path/certs/$domain.crt
        domain_key=$traefik_path/certs/$domain.key
        unraid_cert=/boot/config/ssl/certs/tower_unraid_bundle.pem

        # Use jq to extract the cert and the key, decode them from base64, and store them in files for later use
        jq -r '.[].Certificates[] | select(.domain.main=="'${domain}'") | .certificate' $acme_json | base64 -d > $domain_cert
        jq -r '.[].Certificates[] | select(.domain.main=="'${domain}'") | .key' $acme_json | base64 -d > $domain_key

        # Concatenate the cert and the key into a .pem file at the location Unraid expects
        cat $domain_cert $domain_key > $unraid_cert

        # If openssl can verify the cert as valid, recycle the web UI
        openssl verify -untrusted $domain_cert $unraid_cert 2>/tmp/err
        if [ -s /tmp/err ]
        then
            echo "Certificate failed to verify."
        else
            echo "Certificate verified successfully - recycling Unraid Web UI..."
            # Reload the web UI to accept the new cert
            /etc/rc.d/rc.nginx reload
        fi

     I believe jq comes with the NerdPack plugin, so you'll need to install that first. I haven't found a lot of info for people who use the Traefik proxy, so hopefully this helps others who do. A possible enhancement would be to check that acme.json (or the specific domain cert) has actually been updated before running the script - a watch on the acme.json file, or inotify, could do that (see the sketch after this list). This version works for me, though, and reloading the web UI doesn't seem to cause any issues: I don't have to log back in, and running it via the User Scripts UI is fine. To be honest, I'm not entirely sure the way I used openssl to verify the cert is the correct way to do it.
  3. I'd like to chime in here to add to the list and confirm that I have no hang or crashing issues on 6.11.1 with an i5-12600K running on an MSI PRO Z690-A board. I have Windows VMs, about 20 docker containers, and both Plex and Jellyfin transcoding confirmed working with no crashes or hangs. I have a monitor hooked up to the server directly at all times; I've never tried to run the transcoder with it unplugged. At the start of all of this, I was experiencing the hangs as described in the original report - it just took time for Unraid to release an update and for Plex to fix their transcoder. Before I re-enabled transcoding, I was also getting crashes that logged to syslog randomly, and ich777 was kind enough to help and suggested I switch from macvlan to ipvlan, which fixed those. Then I re-enabled Plex and Jellyfin transcoding and it all works now. I've been crash- and hang-free for over 3 months.
  4. I am running macvlan. Pi-hole is running on br0 with its own IP address, but all the rest are on a custom docker network. What's the risk if I switch to ipvlan?
  5. Okay that's fine. I'll leave my issue out until we get more releases. I do have a kernel panic that randomly happens - sometimes multiple times a week, other times it'll go 40 days - and I strongly suspect it's the docker networking. BUT I will start another thread if that issue gets to be more than just a minor annoyance like it is now. I figure the kernel updates should solve this issue over time. Thanks
  6. I understand that, however I was experiencing the exact same issue on the reported versions of Unraid here - a system hang with no information in syslog and I confirmed it was caused by the iGPU. So while I am on Alder Lake, I was getting the exact same issue reported by Tristankin. Could the issue be different due to us being on different CPUs? Maybe. But I'm not convinced it's not the same issue.
  7. I actually have this issue and had to uninstall Intel GPU Top. I do have a monitor connected, and I believe I took a photo of the issue. I leave the monitor connected in my rack, and it's on, so I don't think it's the dongle issue. I'm running Alder Lake. At the time of the issues I had the most up-to-date BIOS, was running stock clocks, and tried with and without XMP profiles. I'm pretty sure in one of the threads (probably this one) I attached a syslog (from USB) and a screenshot, but it's been quite a while since then. I have next week off of work, so if ich777 wants me to, I can follow his suggestions to debug this then.
  8. My file has now more than doubled: it's 977,920 lines long and 35 MB in size, and all but 25 or so of those lines are sessions. Something is definitely not functioning correctly here - these sessions shouldn't be accumulating like this, and they should be getting cleaned up.
  9. Interesting. Mine is 12 megabytes. When you say you deleted the sessions, you went into the JSON and removed these records: Of course, there are hundreds or thousands of these in the file. I wonder if deleting the file itself would cause an issue, or if it would just be recreated? I may leave them for now, since the web UI is running, and watch to see if the application cleans them up at all. (There's a rough jq sketch for pruning them after the list below.)
  10. It appears I'm having this exact same issue. It hangs at the same point and the WebUI doesn't load.

        2022-05-07 11:09:10,133 DEBG 'watchdog-script' stdout output:
        [info] Deluge key 'listen_interface' currently has a value of '10.67.228.49'
        [info] Deluge key 'listen_interface' will have a new value '10.67.228.49'
        [info] Writing changes to Deluge config file '/config/core.conf'...
        2022-05-07 11:09:10,211 DEBG 'watchdog-script' stdout output:
        [info] Deluge key 'outgoing_interface' currently has a value of 'wg0'
        [info] Deluge key 'outgoing_interface' will have a new value 'wg0'
        [info] Writing changes to Deluge config file '/config/core.conf'...

      Sometimes, after leaving it for a few hours, it seems to move to the next step:

        2022-05-07 11:09:10,133 DEBG 'watchdog-script' stdout output:
        [info] Deluge key 'listen_interface' currently has a value of '10.67.228.49'
        [info] Deluge key 'listen_interface' will have a new value '10.67.228.49'
        [info] Writing changes to Deluge config file '/config/core.conf'...
        2022-05-07 11:09:10,211 DEBG 'watchdog-script' stdout output:
        [info] Deluge key 'outgoing_interface' currently has a value of 'wg0'
        [info] Deluge key 'outgoing_interface' will have a new value 'wg0'
        [info] Writing changes to Deluge config file '/config/core.conf'...
        2022-05-07 14:06:11,257 DEBG 'watchdog-script' stdout output:
        [warn] Deluge config file /config/web.conf does not contain valid data, exiting Python script config_deluge.py...
        2022-05-07 14:06:11,592 DEBG 'watchdog-script' stdout output:
        [info] Deluge process started
        [info] Waiting for Deluge process to start listening on port 58846...
        2022-05-07 14:06:11,802 DEBG 'watchdog-script' stdout output:
        [info] Deluge process listening on port 58846
        2022-05-07 14:06:13,029 DEBG 'watchdog-script' stdout output:
        [info] No torrents with state 'Error' found
        2022-05-07 14:06:13,029 DEBG 'watchdog-script' stdout output:
        [info] Starting Deluge Web UI...
        [info] Deluge Web UI started

      However, the WebUI is not responsive and won't load. I've made no changes to my configuration, and it was working for many months before yesterday, when this issue popped up.

      Edit: I was planning to follow the steps listed here today, but it seems to have started on its own overnight and is running fine now. I'll update if that changes.
  11. Originally, I simply turned off hardware transcoding in Plex itself, but the crashes would still occur, which is consistent with the problem reported here - the drivers don't even need to be actively in use to cause the issue. I tried a number of troubleshooting steps to figure it out, from Memtest (to rule out a RAM issue) to updating the BIOS. It wasn't until I deleted the /dev/dri device from the docker container configuration - effectively making it impossible for Plex to access the drivers for hardware transcoding - that the crashes stopped. There is definitely an instability in the iGPU drivers. I tested hardware transcoding and was sometimes using it for several hours before it would crash the system; other times it would happen three times in an hour, even when no one was streaming from the Plex server at all. I knew hardware transcoding was working because the Plex dashboard was showing (hw), and so was Tautulli. Since removing the device, no transcoding operations show (hw).
  12. I believe I'm experiencing this same issue. I have been discussing it in the 12th gen Alder Lake thread here: I touched the i915.conf file and have Intel GPU Top and GPU Statistics installed. The hangs lock up the system entirely, though the terminal is still accessible directly on the machine itself. Shutdown commands don't shut the system down, and syslog says nothing. Crashes would happen whether people were streaming or not - sometimes several times an hour, other times 4-6 hours apart. I'm on 6.10.0-rc2. The fix for me was to remove the /dev/dri device from the Plex container (there's a rough docker sketch of this after the list below). I still have i915.conf, Intel GPU Top, and GPU Statistics installed, but the crashes stopped immediately once I removed /dev/dri from the container. I was about to test whether pinning the container to the performance cores would help, but based on the people experiencing the same issue here, it appears the problem is not limited to 12th gen Intel CPUs.
  13. Dang, that looks to be the exact issue I'm experiencing. The transcoding wasn't always running when I was getting full system hangs, and the syslog absolutely did not have any details about the crash. I'll comment there and indicate I'm experiencing the same thing.
  14. I can report that my Unraid machine has now been online for 4 days with no crashes. The culprit definitely seems to have been the /dev/dri device added to the Plex container. I can confirm that I have HDR tone mapping disabled: So I don't think HDR tone mapping was the cause of the crashes, at least not for me, because it was not on when I had the /dev/dri device added to the container. I still suspect it has something to do with docker or the container moving threads across the efficiency cores and the performance cores. I'd like to re-enable hardware transcoding and test it by pinning the container to only the performance cores of the CPU, but before I do, I want to ask whether anyone else has had similar problems. It seems that others have had no crashes with the device added to the container as long as HDR tone mapping is disabled, but that was not my experience.
  15. I can report that I have now been online for 1 day and 20 hours with no crashes since I removed the /dev/dri device from the Plex container. If it remains stable for a few more days, I may re-add the device and pin the Plex container to the efficiency cores of the CPU. It's possible that threads moving across the different types of cores is what causes the crash, but that's pure speculation on my part, so I'll have to test it out. Thanks for this tip! I actually worked late last night, so I never attempted the memtest. I'll have to do it this way to save me the trouble of switching to legacy boot. I've never even tried GUI mode - I've only ever connected remotely or through the command line directly on the server.
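
For post 1, here's a minimal sketch of the container-to-container move. The container names (plex-old, plex-new), the appdata paths, and the linuxserver.io-style /config layout are all assumptions - adjust them to your actual setup, and treat this as an illustration of the steps rather than a tested procedure:

    # Stop the old container so the Plex database isn't being written to
    docker stop plex-old

    # Copy the key data folders to the new container's appdata path
    # (per the Plex "move an install" guide, the important part is
    # Library/Application Support/Plex Media Server)
    rsync -a /mnt/user/appdata/plex-old/ /mnt/user/appdata/plex-new/

    # Optionally purge the cache so Plex rebuilds it cleanly on first start
    rm -rf "/mnt/user/appdata/plex-new/Library/Application Support/Plex Media Server/Cache"

    # Start the new container and watch the logs to confirm it boots
    docker start plex-new
    docker logs plex-new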
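
For post 2, a rough sketch of the suggested enhancement: only re-running the cert extraction when acme.json has actually changed, using a stored checksum (the state-file name and the path to the extraction script are placeholders, and this assumes the script is scheduled via User Scripts or cron):

    #!/bin/bash
    # Re-run the cert extraction only when acme.json has changed,
    # by comparing a stored checksum against the current one.
    traefik_path=/mnt/user/appdata/traefik
    acme_json=$traefik_path/acme.json
    state_file=$traefik_path/.acme_json.md5   # placeholder state file

    new_sum=$(md5sum "$acme_json" | awk '{print $1}')
    old_sum=$(cat "$state_file" 2>/dev/null)

    if [ "$new_sum" != "$old_sum" ]; then
        echo "$new_sum" > "$state_file"
        # Run the extraction/verify/reload script from the post above
        # (path is a placeholder - point it at your actual User Script)
        /path/to/extract-traefik-cert.sh
    fi

An inotify-based watch on acme.json, as the post also mentions, would work too; the checksum approach just avoids needing inotify-tools on the host.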
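
For posts 8 and 9, a heavily hedged sketch of pruning the accumulated session records with jq instead of editing the JSON by hand. The posts don't identify the application or its config layout, so the file path and the "sessions" key name here are pure assumptions - inspect the real file first, adjust the filter to its actual structure, and keep a backup:

    #!/bin/bash
    # Hypothetical path and key name - adjust to the actual file and structure
    config_json=/mnt/user/appdata/someapp/config.json

    # Back up first, then empty the (assumed) sessions array and write the result back
    cp "$config_json" "$config_json.bak"
    jq '.sessions = []' "$config_json.bak" > "$config_json"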
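
For posts 12, 14, and 15, both the /dev/dri passthrough and the core pinning come down to settings on the container. On Unraid these live in the container template and the CPU pinning page rather than on the command line, but the plain docker equivalent looks roughly like this - the image, container name, volume path, and the 0-7 core range are assumptions (check your own P-core/E-core layout with lscpu):

    # With hardware transcoding enabled, the iGPU is passed through with --device,
    # and --cpuset-cpus pins the container to a fixed set of cores
    # (0-7 is an assumed P-core range - verify with lscpu on your CPU).
    docker run -d --name=plex \
      --device=/dev/dri:/dev/dri \
      --cpuset-cpus=0-7 \
      -v /mnt/user/appdata/plex:/config \
      lscr.io/linuxserver/plex

    # Dropping the --device=/dev/dri line is the change described in the posts
    # above that stopped the crashes: Plex can no longer reach the iGPU drivers.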