horridwilting

Everything posted by horridwilting

  1. All you have to do to downgrade SHOULD be (if I'm incorrect, anyone please correct me):
     1) Read the release notes for the version you're on and the one you want to go back to (and any release in between) to make sure there are no irreversible changes in the version you're on now that would prevent a downgrade.
     2) Stop your Plex docker container.
     3) Find the exact release tag you want to use - I use this all the time without issue: https://hub.docker.com/r/binhex/arch-plexpass/tags Copy the tag, e.g. "binhex/arch-plexpass:1.40.2.8395-1-01", for the next step.
     4) Edit your docker container, find the REPOSITORY line, and paste the full tag into it.
     5) Save the docker container - it should pull the changed version and auto start. If you see no errors, it should have been successful.
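The steps above can be sketched as plain docker CLI commands - Unraid's GUI does the equivalent when you edit the REPOSITORY line. The container name "plex" and the tag here are just example values, so substitute your own:

```shell
#!/bin/sh
# Sketch of the GUI downgrade steps as docker CLI commands.
# "plex" and the tag below are example values - use your own.
TAG="binhex/arch-plexpass:1.40.2.8395-1-01"

docker stop plex || true     # step 2: stop the running container
docker pull "$TAG" || true   # steps 3-4: fetch the exact pinned version
echo "pinned image: $TAG"    # step 5: recreate the container from this tag
```

Pinning the full tag (rather than "latest") is what makes the downgrade stick across container updates.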
  2. Just an FYI, I upgraded as follows in case anyone else was wondering what happens if you're a few versions behind - the answer...nothing! My server upgraded fully within about 2 minutes after the docker container started. Granted, it has a lot of extra processor to throw behind the DB upgrade with v1.40, but it didn't get stuck. I use specific tags from their Docker repo to choose my upgrade path as well: https://hub.docker.com/r/binhex/arch-plexpass/tags
     Unraid OS: 6.12.10
     Old: binhex/arch-plexpass:1.32.5.7349-1-01
     New: binhex/arch-plexpass:1.40.2.8395-1-01
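If you want to confirm the upgrade actually took, Plex's /identity endpoint reports the running server version. The XML below is a canned sample response (the version suffix hash is made up for illustration) so the snippet is self-contained; on a live server you'd fetch it with curl instead:

```shell
#!/bin/sh
# Extract the "version" attribute from Plex's /identity XML response.
# RESPONSE here is a canned sample; on a live server you would use:
#   RESPONSE=$(curl -s http://localhost:32400/identity)
RESPONSE='<MediaContainer size="0" machineIdentifier="abc123" version="1.40.2.8395-c67dce28e"/>'

# sed pulls out whatever sits between version="...".
echo "$RESPONSE" | sed -n 's/.*version="\([^"]*\)".*/\1/p'
```

If the printed version matches the tag you pinned, the DB upgrade finished and the server is on the new release.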
  3. Oh! You might temporarily disable SCHEDULED TASKS or the MAINTENANCE tasks in Plex just to see if that activity is causing the issue, as it normally happens overnight.
  4. Ah, that really sucks. Sorry. I would check the UNRAID SYSTEM LOG (TOOLS>SYSTEM LOG). Scroll down to the bottom, then work your way up, and see if you find any errors, especially hardware, CPU, or RAM ones. They'll be in RED for easy ID. That hopefully will tell you what is faulting and narrow it down.
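The GUI highlights those errors in red; from a terminal you can surface the same lines with grep. The sample log below is made up so the snippet is self-contained - on a live Unraid box you'd point it at /var/log/syslog instead:

```shell
#!/bin/sh
# Build a small sample syslog so this runs anywhere; on Unraid, grep /var/log/syslog.
cat > /tmp/sample-syslog <<'EOF'
Apr 10 02:11:04 Tower kernel: mce: [Hardware Error]: Machine check events logged
Apr 10 02:11:05 Tower emhttpd: spinning down /dev/sdb
Apr 10 02:11:09 Tower kernel: ata3.00: failed command: READ FPDMA QUEUED
EOF

# Case-insensitive match on the usual trouble keywords (hardware/CPU/RAM faults).
grep -iE 'error|fail|mce|panic' /tmp/sample-syslog
```

Here the grep keeps the MCE and the failed ATA command while skipping the routine spin-down line - the same triage the red highlighting does for you in the GUI.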
  5. Hey sarus, sorry I didn't reply sooner - since my last post on March 9 I haven't had any more crashes after updating my Plex. I am on 1.31.2.6783 and it's been stable ever since.
  6. What version of Plex were/are you running? I was on 1.31.2.6739 using BINHEX-PLEXPASS and now I just forced an update to Version 1.31.2.6783 since it was available, though I can see this is a BETA version so I may need to roll that back to something earlier. I am going to keep my Plex Docker OFF for a while longer to see if it crashes or not though, just to make sure this isn't some other obscure hardware problem and the Plex thing was a red herring. Seems the PUBLIC version is .6782 https://www.reddit.com/r/PleX/comments/11mmq4y/new_public_pms_version_available_1311678277dfff442/
  7. Hey! I also have had two odd crashes in the last week with my previously-rock-solid-for-6+-months server. Only things that have changed:
     1) Updated the firmware on my cache pool's 2x 980 PRO SSDs (not using this cache pool for anything Plex, but it DOES have the appdata on it for Docker...)
     2) Updated Plex to the newest version as of about a week ago (I noticed there is an "update available" but I need to get the version numbers - currently at work, so I'll get those as soon as I can).
     Both crashes were very strange. Like, the server didn't fully reboot, but the array was stopped and it noted UNCLEAN SHUTDOWN. After the first one, I started everything back up and did a parity check. It ran for about 20hrs then randomly crashed again...but weirdly, it sent a notice that "parity check finished" despite still having about 8hrs left on it. So, something is doing a weird..."soft" reset of some kind? I did see an MCE in the SYSTEM LOG but need to look at it more and probably make my own post with my DIAGNOSTICS. For anyone to help on here, they'll ask for your DIAGNOSTICS, so you should add that to your OP when you have time.
  8. Not hijacking this thread but adding my voice - this same thing happened to me today out of the blue. My server seemed to reboot (though I don't recall hearing it despite sitting right next to it) somehow a few hours ago, and I also got MCE alerts. The array was stopped but the configuration was valid, and an unclean shutdown was detected. It's currently doing a Parity Check. The server has been completely rock solid since I built it a few months ago. Been on 6.11.5 for a long time, too. I am concerned that something else may be going on, as there are more than a few reports in the last 24hrs of similar behavior.
  9. I stopped my array to isolate the disks and make sure nothing was actively accessing them - giving the update as few possible reasons not to work is all. Sounds like you can do it either way (which makes sense, as the Windows utility can do it on the SSD that the OS is on, etc...).
  10. Cheers! I didn't have any issues updating my two 980 PRO's with the process described by OP. I had to reboot my UNRAID server after they updated but everything went totally smoothly.
  11. Just to add some positives, I upgraded from 6.11.1 to 6.11.5 with no apparent issues.
     - Basic dockers like Plex working (no VLANs).
     - W10 VM with NVIDIA 3060Ti passthrough and some SATA SSDs attached - no issues.
     Hardware info (as it is always nice to see people posting good or bad things who have the same or similar hardware to me when I'm browsing!):
     - MOBO: ASUSTeK COMPUTER INC. Pro WS WRX80E-SAGE SE WIFI, Rev 1.xx, BIOS: American Megatrends Inc., Version 1003
     - CPU: AMD Ryzen Threadripper PRO 5965WX
     - RAM: (8x16GB) Micron 16GB DDR4-3200 RDIMM 1Rx8 CL22 (MTA9ASF2G72PZ-3G2R)
     - USB/FLASH: SanDisk Cruzer Blade CZ50 16GB USB 2.0 Flash Drive SDCZ50-016G-AFFP (working since early 2019 with no issues, moved across 3 different server builds with no problems)
  12. I most appreciate the lenient licensing - for those of us new to Unraid, knowing I had time to tinker with the trial license was invaluable while I got my hardware configured. Really solid peace of mind, and at the end I bought a Pro license! Wishlist: perhaps some native disk preclear options (or just integrate the gfjardim plugin), Unassigned Devices (dlandon's plugin), and native IPMI support for hardware stats like temperature, etc. I'm a huge fan of how granular I can get - or I can go hands-off and know I'll have it all running without needing to keep on top of anything except my notifications! Great work, y'all.