ez009

Members · 19 posts

Achievements: Noob (1/14) · 2 reputation

  1. Hi, I've had this recurring problem for several months now. Plex is the only thing running, other than Unassigned Devices, which has 2 disks mounted for schlepping files. I'm running Unraid 6.12.0, but this was happening in prior versions as well. Every 2 days Plex stops functioning and I get lots of these in the Plex log:

         Traceback (most recent call last):
           File "/usr/lib/plexmediaserver/Resources/Python/python27.zip/logging/handlers.py", line 76, in emit
             if self.shouldRollover(record):
           File "/usr/lib/plexmediaserver/Resources/Python/python27.zip/logging/handlers.py", line 157, in shouldRollover
             self.stream.seek(0, 2)  #due to non-posix-compliant Windows feature
         IOError: [Errno 107] Socket not connected
         Logged from file agentservice.py, line 786

     It requires a reboot to fix. Stopping and starting any dockers immediately after the issue results in "Execution error, Server error". In a prior session where this happened I did see "Transport endpoint is not connected", but today I just see "IOError: [Errno 107] Socket not connected" in the Plex log. I've looked at similar threads (like these, and this) and disabled NFS, which was only enabled on the Unassigned Devices disks, but that did not fix it. Here are diagnostics from today's event, saved right before the reboot. Any help appreciated, thx. tower-diagnostics-20230624-0734.zip
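Edit, for anyone searching later: a quick way to see how often those two error signatures are showing up is to count them in a log file. A minimal sketch (the signatures are the ones quoted above; a throwaway sample log is generated here so it runs anywhere, since the real Plex log path varies by install):

```shell
#!/bin/sh
# Count the two error signatures quoted in the post within a log file.
count_errors() {
    grep -c -e 'Socket not connected' \
            -e 'Transport endpoint is not connected' "$1"
}

# Demo against a throwaway sample log; point it at your real Plex log instead.
log=$(mktemp)
cat > "$log" <<'EOF'
IOError: [Errno 107] Socket not connected
Logged from file agentservice.py, line 786
Transport endpoint is not connected
EOF
count_errors "$log"   # prints 2
rm -f "$log"
```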
  2. So it was Plex transcoding. After setting it to use a disk share on the cache drive, I could see the following day that it was using over 200 GB for transcoding (4K movies). All has been well in the GUI since. Thx for your help.
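Edit: the runaway transcode usage is easy to keep an eye on with du. A sketch, with the limit in KB; the real transcode share path (e.g. /mnt/cache/plex_transcode) is an assumption, so the demo runs against a temp dir instead:

```shell
#!/bin/sh
# Flag a transcode directory once it grows past a size limit (in KB).
check_transcode_size() {
    dir="$1"; limit_kb="$2"
    used_kb=$(du -sk "$dir" | awk '{print $1}')
    if [ "$used_kb" -gt "$limit_kb" ]; then
        echo "WARN: $dir uses ${used_kb} KB (limit ${limit_kb} KB)"
    else
        echo "OK: $dir uses ${used_kb} KB"
    fi
}

# Demo against a temp dir; point it at your real transcode share instead.
demo=$(mktemp -d)
dd if=/dev/zero of="$demo/clip.ts" bs=1024 count=2048 2>/dev/null   # ~2 MB dummy file
check_transcode_size "$demo" 1024    # 1 MB limit, so this warns
rm -rf "$demo"
```

Dropping something like this into a cron job (User Scripts plugin) gives an early warning before the GUI starts misbehaving.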
  3. Hi, thanks for the reply. I did see the previous post and didn't see anything out of the ordinary when running that command. Currently, after a recent reboot, I'm seeing this:

         Filesystem  Size  Used  Avail  Use%  Mounted on
         rootfs       12G  1.1G    11G   10%  /

     I have Plex transcoding set to TMP (RAM). However, the last time I was having the issue I did check RAM usage in the GUI and never saw it above 40%. But it must be Plex, considering it's the only docker running. So I will set up a disk share on the cache pool and hope that was the cause. Will report back either way. Thx again.
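Edit: since rootfs filling up is the suspect, here's a small sketch that pulls the Use% number out of a df output line and compares it against a threshold (the sample line is the one from my output above; the 90% threshold is just an example):

```shell
#!/bin/sh
# Extract the Use% column (field 5) from a "df" output line.
pct_from_df_line() {
    echo "$1" | awk '{ gsub("%", "", $5); print $5 }'
}

line="rootfs 12G 1.1G 11G 10% /"
pct=$(pct_from_df_line "$line")
if [ "$pct" -ge 90 ]; then
    echo "rootfs nearly full: ${pct}%"
else
    echo "rootfs ok: ${pct}%"
fi
```

Feeding the data line of `df /` through this is enough for a periodic check that logs or alerts before rootfs runs out.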
  4. Bump. This is happening every week and it's a major problem. I'm about to leave town and I really need my server to be stable. Can anyone provide an explanation or advice to help solve this issue? Thx.
  5. Hi, I started having this problem a month or so ago. I'm currently on 6.11.5. If I stop the array and start it again, things begin to function again; however, the GUI summary on the right of the Dashboard no longer lists the drives. The only docker I'm running is Plex-Media-Server and occasionally binhex-krusader. To get full functionality in the GUI I have to reboot the server. I copied the errors from the log while the issue was taking place. I was unable to save diagnostics until after I stopped and restarted the array, so I'm attaching the syslog w/ the errors and the diagnostics from after the array restart. Any help appreciated, thanks. Unraid_SysLogErrors_.txt tower-diagnostics-20221206-0828_GUIprobs.zip
  6. Sorry for the delayed response, but for some closure: I narrowed the issue down to Plex and/or mapped shares in Windows 11 on my main client machine. The disks ARE spinning down, even with the Plex docker running, as long as I'm not accessing the server from the Win 11 machine. I haven't had time to figure out exactly what on the Win 11 machine is keeping all drives active yet, but I'm satisfied Unraid is functioning correctly.
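Edit, for anyone chasing the same thing: one hedged way to see which processes are holding files open under a share (and so keeping disks awake) is to scan the fd symlinks in /proc. A sketch; in practice you'd pass something like /mnt/user, but the demo uses a temp dir with a background process holding a file open so it runs anywhere:

```shell
#!/bin/sh
# List processes holding files open under a directory tree by scanning
# /proc/<pid>/fd symlinks (only fds you have permission to read are seen).
open_under() {
    target="$1"
    for fd in /proc/[0-9]*/fd/*; do
        link=$(readlink "$fd" 2>/dev/null) || continue
        case "$link" in
            "$target"*) echo "${fd%/fd/*} -> $link" ;;
        esac
    done
}

# Demo: hold a file open in the background, then scan for it.
d=$(mktemp -d)
tail -f /dev/null > "$d/held.txt" &
holder=$!
sleep 1
open_under "$d"        # prints a /proc/<pid> -> .../held.txt line
kill "$holder"
rm -rf "$d"
```

This won't show SMB clients directly (the open files belong to the smbd process), but it narrows down which server-side process is touching the disks.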
  7. I was having this issue recently and have since upgraded to 6.10.1 and now 6.10.2. My disks are still NOT spinning down. I have the timeout set to 15 mins. Reading another thread: these disks are attached to a SAS backplane and read as SAS, so I downloaded the Spin Down SAS Drives app. No change. I've now disabled all dockers (Plex etc.) and still no spindown. This server typically pulls about 220-280 watts, but with all disks active I'm seeing 450 watts plus. Ouch. Can someone pls advise what might be causing this? Thx.
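Edit: to confirm whether a drive has actually spun down (rather than trusting the GUI icon), you can check its reported power state. A sketch that parses "hdparm -C /dev/sdX" style output; note hdparm -C is an ATA command, and for SAS drives behind a backplane you'd likely need sdparm or sg_start instead (an assumption worth verifying for your controller). The demo feeds a canned sample string so it runs without real hardware:

```shell
#!/bin/sh
# Parse the state word out of "hdparm -C" style output, e.g.:
#   /dev/sdb:
#    drive state is:  standby
parse_drive_state() {
    echo "$1" | awk -F': *' '/drive state is/ {print $2}'
}

sample="/dev/sdb:
 drive state is:  standby"
parse_drive_state "$sample"    # prints: standby
```

On a live box you'd do something like `parse_drive_state "$(hdparm -C /dev/sdb)"` per drive and log any that never reach standby.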
  8. So, rebooted without the disk and {relief} ...parity is intact. I can see all shares and data. I think from here it should be okay; I just need to do the rebuild. Thanks much for your helpful insight.
  9. After powering down last night, I pulled the drive in question to try scanning it with UFS Explorer. It's about halfway through the scan. I have another large drive on the way and would prefer to rebuild onto that one. In the meantime, would it be safe to boot with no drive in slot #8 (and without the unassigned drives), just to see if parity shows the missing disk8 data correctly?
  10. Thx. Here 'tis. tower-diagnostics-20210610-2009.zip
  11. Hi, I've created a compounded problem here and could use some assistance. These things happened while adding some new drives to the server:

      1. I accidentally pulled a mounted drive while the array was running (thinking it was an empty tray) and quickly popped it back in.
      2. I installed 2 drives in the bay: a new unformatted drive, and an old drive which I planned on wiping.
      3. Back in the WebGUI I see the drive I accidentally unplugged has a red X: "Device is disabled, contents emulated". (The new drives I added are in the Unassigned Devices section.)

      After reading through the forum - and this post: https://forums.unraid.net/topic/51469-can-i-add-a-quotdevice-is-disabled-contents-emulatedquot-disk-back-in-to-the-array/ - it sounded like this was typical for a drive connection issue, and at this point I'd need to rebuild either parity or the data disk. Since I don't have another drive that large (14TB) on hand and was pretty sure the drive was fine, I decided to follow the procedure to unassign, start the array, stop the array, re-assign, and rebuild the data from parity.

      Upon starting the rebuild, I noticed that none of the shares from that disk were visible, in the GUI or on the network. My recollection from prior migrations was that the emulated contents would be preserved by parity and at least readable while the rebuild is in progress? Then I realized: the old drive I'd added (which still had data on it) had mount info, and in fact this was the old drive (2TB) that I migrated FROM, long ago, TO the current disk in question (14TB). (They both occupied the same slot #8.) Is it possible that parity got confused during the removal/install of drives and caused the disk to be overwritten?

      About 2% into the rebuild process I decided to pause, and eventually cancel, and take the array offline until I figured things out. So, in a bit of a panic here; I can't afford to lose this data. Can someone verify that I SHOULD be seeing the shares during the rebuild/emulation? If that is true, then I'm assuming I have screwed up parity...? And having started a data rebuild on top of that, how much damage has been done? I'm assuming not as bad as a reformat, but that I'd need to use something like UFS Explorer to retrieve the data and copy it onto a new disk? Any advice on how to safely proceed is much appreciated. Thx.
  12. Hi, I'm having trouble unpacking password-protected rars. Basically it sits in "testing archive" mode and there is no indication that it's waiting for a password, nor any dialog box. Is there a tutorial or info on how to configure settings for this? My Google searches turned up empty.
  13. I see the cache slots now, so that should be fine. However, I guess copying the super.dat doesn't work for this Trial version I'm playing with, so I will need to either copy the new install to my old USB drive or maybe buy a 2nd license. In either case, is there anything else that would be considered "safe" to copy over from my old flash directories in order to preserve as many settings as possible? For example (in config): shares, pools, disk assignments, various other .cfg files?
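Edit, to make the "selective copy" idea concrete: a sketch of copying only chosen items from an old flash config to a new one. The item list (shares/, pools/, ident.cfg, network.cfg) is my assumption of commonly-portable bits based on what's in the config directory; super.dat is deliberately left out since it carries disk assignments. The demo uses temp dirs; the real paths would be the old flash backup and /boot/config:

```shell
#!/bin/sh
# Copy selected settings from an old flash config dir to a new one,
# leaving super.dat (disk assignments) alone.
copy_settings() {
    old="$1"; new="$2"
    for item in shares pools ident.cfg network.cfg; do
        if [ -e "$old/$item" ]; then
            cp -r "$old/$item" "$new/" && echo "copied $item"
        else
            echo "skipped $item (not present)"
        fi
    done
}

# Demo with temp dirs standing in for the old backup and the new flash.
old=$(mktemp -d); new=$(mktemp -d)
mkdir -p "$old/shares"
echo "NAME=tower" > "$old/ident.cfg"
copy_settings "$old" "$new"
rm -rf "$old" "$new"
```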
  14. OK, I started to try that, but it seemed I was only able to add one drive, so I unassigned it again; I wasn't sure if it would be reformatted. Also, the old config was crashing so fast I didn't have much of a chance to do anything. So, should I copy the super.dat anyway? And would it be OK to copy the one from my 6.9.2 config into this fresh 6.8.3?
  15. The saga continues... I eventually got the new motherboard up and running. It'd been years since I'd been in the BIOS, but I think at this point I have the settings correct. (I am dealing with some RAM not being recognized, so for now I am only using CPU2, which recognizes all DIMMs on that side.)

      Unfortunately, I am back where I left off with 6.9.2 freezing. In fact it was freezing within 15 mins of logging into the GUI. At first I wasn't sure if it had to do with the hardware, the BIOS, or several configuration changes. The syslog would sometimes show lots of these before freezing:

          May 24 01:05:39 Tower kernel: w83795 0-002f: Failed to read from register 0x03c, err -6
          May 24 01:05:39 Tower kernel: w83795 0-002f: Failed to read from register 0x034, err -6

      Note that the freeze won't happen until I actually log into the GUI. I only started the array twice(?) during testing and have since opted to make sure Unraid will stay alive on its own before I move any further.

      Speaking of moving further, I realized that my prior stable build was in fact 6.8.3(!), so I made a backup of the current 6.9.2 flash drive and downgraded to 6.8.3... but same issue: freezing within the amount of time it takes to peruse the previous syslog, and I am seeing similar errors. One note of concern: my cache pool is no longer recognized after the downgrade; all 4 of those SSDs are now unassigned.

      Next, I created a fresh Trial install of 6.8.3 on a new USB drive. I disconnected my SAS card and booted up. After logging in, it's been up and running for over 3 hours now.

      I'm wondering how to proceed. I was planning on copying super.dat to the fresh install, installing the SAS card, and testing the array, but with my cache pool not being recognized after the 6.8.3 downgrade, I'm concerned (it was recognized fine in 6.9.2). If I copy super.dat from the 6.9.2 flash into this fresh 6.8.3 install, would that work??? Or should I try a clean install of 6.9.2? The Plex docker is the main thing of importance I'd like to preserve, in addition to the array.