
Duggie264

Members

  • Content Count: 26
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Duggie264

  • Rank: Member


  1. Hi, just started getting this crash today, any ideas?

     2020-04-19 15:23:48.353687 [info] System information Linux ################# 4.19.107-Unraid #1 SMP Thu Mar 5 13:55:57 PST 2020 x86_64 GNU/Linux
     2020-04-19 15:23:48.383161 [info] PUID defined as '99'
     2020-04-19 15:23:48.415129 [info] PGID defined as '100'
     2020-04-19 15:23:48.918075 [info] UMASK defined as '000'
     2020-04-19 15:23:48.945899 [info] Permissions already set for volume mappings
     2020-04-19 15:23:48.972882 [warn] TRANS_DIR not defined,(via -e TRANS_DIR), defaulting to '/config/tmp'
     2020-04-19 15:23:49.004719 [info] Deleting files in /tmp (non recursive)...
     2020-04-19 15:23:49.030933 [info] Starting Supervisor...
     2020-04-19 15:23:49,197 INFO Included extra file "/etc/supervisor/conf.d/plexmediaserver.conf" during parsing
     2020-04-19 15:23:49,197 INFO Set uid to user 0 succeeded
     2020-04-19 15:23:49,199 INFO supervisord started with pid 6
     2020-04-19 15:23:50,202 INFO spawned: 'plexmediaserver' with pid 55
     2020-04-19 15:23:50,202 INFO reaped unknown pid 7
     2020-04-19 15:23:50,437 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 139734334410080 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stdout)>
     2020-04-19 15:23:50,438 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 139734334410368 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stderr)>
     2020-04-19 15:23:50,438 INFO exited: plexmediaserver (exit status 255; not expected)
     2020-04-19 15:23:50,438 DEBG received SIGCHLD indicating a child quit
     2020-04-19 15:23:51,441 INFO spawned: 'plexmediaserver' with pid 60
     2020-04-19 15:23:51,731 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 139734335288080 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stdout)>
     2020-04-19 15:23:51,731 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 139734333605152 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stderr)>
     2020-04-19 15:23:51,731 INFO exited: plexmediaserver (exit status 255; not expected)
     2020-04-19 15:23:51,732 DEBG received SIGCHLD indicating a child quit
     2020-04-19 15:23:53,736 INFO spawned: 'plexmediaserver' with pid 65
     2020-04-19 15:23:54,021 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 139734334119840 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stdout)>
     2020-04-19 15:23:54,021 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 139734334410368 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stderr)>
     2020-04-19 15:23:54,021 INFO exited: plexmediaserver (exit status 255; not expected)
     2020-04-19 15:23:54,021 DEBG received SIGCHLD indicating a child quit
     2020-04-19 15:23:57,027 INFO spawned: 'plexmediaserver' with pid 70
     2020-04-19 15:23:57,323 DEBG fd 8 closed, stopped monitoring <POutputDispatcher at 139734335288080 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stdout)>
     2020-04-19 15:23:57,323 DEBG fd 10 closed, stopped monitoring <POutputDispatcher at 139734333605056 for <Subprocess at 139734334410224 with name plexmediaserver in state STARTING> (stderr)>
     2020-04-19 15:23:57,324 INFO exited: plexmediaserver (exit status 255; not expected)
     2020-04-19 15:23:57,324 DEBG received SIGCHLD indicating a child quit
     2020-04-19 15:23:58,325 INFO gave up: plexmediaserver entered FATAL state, too many start retries too quickly
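
     PS: since Supervisor only reports the wrapper's exit status (255), I assume the real failure reason is in Plex's own log inside the container. A quick sketch for pulling the last lines (the path is an assumption based on the usual /config mapping to appdata - adjust for your own setup):

         from pathlib import Path

         # Assumed location under the container's /config mapping; adjust as needed.
         PLEX_LOG = Path("/mnt/user/appdata/binhex-plexpass/Plex Media Server"
                         "/Logs/Plex Media Server.log")

         # Print the last 40 lines, where the startup failure is usually logged.
         lines = PLEX_LOG.read_text(errors="replace").splitlines()
         print("\n".join(lines[-40:]))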
  2. Hi, I am currently running: Unraid 6.8.3, Sonarr 2.0.0.5344 (Mono 5.20.1.34), Ombi 3.0.4892. I have searched for this issue, but can't seem to find a directly related issue or resolution. I have series that are not marked available. For this example - Friends (1994). Upon pressing the Request radio button, then selecting the Select option, I see that the final episodes from S09 and S10 are marked as missing. I then opened Sonarr, and both episodes are there (and watchable); however, after a little more digging, it appears that the final episodes of both these seasons are range-marked episodes, e.g. S10E17E18. My question is: is there a simple way to get Ombi to recognise these episodes, and thus mark the entire series as available? Many thanks, Duggie
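
     PS: in case it helps anyone scripting around this in the meantime, here is a minimal sketch (my own illustration - the filename and helper are hypothetical, not Ombi or Sonarr code) of how a range-marked name like S10E17E18 expands into the individual episode numbers:

         import re

         # Matches a season tag followed by one or more episode tags, e.g. S10E17E18.
         PATTERN = re.compile(r"S(\d{2})((?:E\d{2,3})+)", re.IGNORECASE)

         def expand_episodes(name: str):
             """Return (season, [episodes]) for a range-marked filename, else None."""
             m = PATTERN.search(name)
             if not m:
                 return None
             season = int(m.group(1))
             episodes = [int(e) for e in re.findall(r"E(\d{2,3})", m.group(2), re.IGNORECASE)]
             return season, episodes

         print(expand_episodes("Friends.S10E17E18.The.Last.One.mkv"))  # (10, [17, 18])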
  3. trurl - many thanks for your replies. Yes, memory is fine, and no, I couldn't get any diagnostics, as I lost the system completely. You are right about the log server though - I should really get one up and running! Cheers, Duggie
  4. So, to add to my misery: after three attempted restarts and three failures, I left to put a load of washing in, came back for restart number 4, and it appears to be good, with everything back to normal and a parity check currently running. Grrr... but yay!
  5. I have been running 6.8 stable for a couple of weeks with no problems. Last weekend, after adding some nzbs to sabz, I noticed that they were not being added. I then restarted the container; however, it failed to come back up. Then the dashboard hung and would not load. Main would load only the top half (disk info), or, if I scrolled to the bottom of the page quickly enough, it would load the reboot and mover options (but not the disk info at the top of the page). I rebooted the server, but the issues remained. I then downgraded to the previous version, and all seemed well.

     Today I decided to upgrade to 6.8.1-rc1, and everything appeared to go well - for about 2 hours! For that entire time I was in a Ubuntu VM with a passed-through keyboard, monitor and mouse, administering the server. Without any indication or notification, the monitor screen went blank. I then jumped on a separate PC, but could not open an ssh session nor ping it. The server still appeared to be running (i.e. it hadn't power cycled). I manually power-cycled the server, and it will not reboot. It hangs at the screenshot below (prior to this, the only other thing is that an unclean shutdown was detected, and "dirty bit set, clearing dirty bit", as I would expect on a system crash), then once the timeout is complete it stalls completely after the pps3 worker is killed. Is there going to be anything retrievable on the flash before I flatten and rebuild (obviously no logs, due to the crash and restart/hang)? unraid.mp4
  6. So I emailed the maintainers from the SF link you provided, and this was their response:

     "Hi Duggie, from a quick glance at the UNRAID changelog, it looks like unRAID 6.6.7 is using the 4.18.20 kernel. Do you know what kernel is in the newer RC versions? Feel free to submit the diagnostic logs to us. We will take a look. Contrary to their comments, we do maintain the kernel driver as well and have some patches staged to go upstream soon. Thanks, Scott"

     And on sending my diag logs and the Unraid change-logs for the 6.7.0-rcX family, I received this from Don:

     "I see a DMAR error logged. That may be what caused the controller lockup, which caused the OS to send down a reset to the drive. Are you able to update the driver and build it for a test? If so, there is a structure member in the scsi_host_template called .max_sectors. It is set to 2048; wondering if you can change it to 1024 for a test? If not, I would have to know what OS I could do the build for you on. Not real sure about unraid."

     Feb 25 23:03:09 TheNewdaleBeast kernel: DMAR: DRHD: handling fault status reg 2
     Feb 25 23:03:09 TheNewdaleBeast kernel: DMAR: [DMA Read] Request device [81:00.0] fault addr fe8c0000 [fault reason 06] PTE Read access is not set
     Feb 25 23:03:40 TheNewdaleBeast kernel: hpsa 0000:81:00.0: scsi 14:0:7:0: resetting physical Direct-Access SEAGATE ST4000NM0023 PHYS DRV SSDSmartPathCap- En- Exp=1
     Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Leaving mDNS multicast group on interface br0.IPv6 with address fe80::1085:73ff:fedb:90d4.
     Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Joining mDNS multicast group on interface br0.IPv6 with address fd05:820d:9f35:1:d250:99ff:fec2:52fb.
     Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Registering new address record for fd05:820d:9f35:1:d250:99ff:fec2:52fb on br0.*.
     Feb 25 23:03:57 TheNewdaleBeast avahi-daemon[4764]: Withdrawing address record for fe80::1085:73ff:fedb:90d4 on br0.
     Feb 25 23:03:58 TheNewdaleBeast ntpd[3173]: Listen normally on 6 br0 [fd05:820d:9f35:1:d250:99ff:fec2:52fb]:123
     Feb 25 23:03:58 TheNewdaleBeast ntpd[3173]: new interface(s) found: waking up resolver
     Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: Controller lockup detected: 0x00130000 after 30
     Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: controller lockup detected: LUN:0000000000800601 CDB:01030000000000000000000000000000
     Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: Controller lockup detected during reset wait
     Feb 25 23:04:33 TheNewdaleBeast kernel: hpsa 0000:81:00.0: scsi 14:0:7:0: reset physical failed Direct-Access SEAGATE ST4000NM0023 PHYS DRV SSDSmartPathCap- En- Exp=1
     Feb 25 23:04:33 TheNewdaleBeast kernel: sd 14:0:7:0: Device offlined - not ready after error recovery

     @limetech would you be able to assist, as I am currently about 8000 feet below sea level with only a snorkel for comfort!
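
     PS: for anyone following along, my understanding is that .max_sectors is counted in 512-byte sectors, so Don's suggested change halves the largest single request the driver will accept. A quick back-of-envelope check (my own illustration, not driver code):

         # .max_sectors caps the largest request the HBA driver will accept,
         # expressed in 512-byte sectors.
         SECTOR_BYTES = 512
         for max_sectors in (2048, 1024):  # current value vs. Don's suggested test value
             print(f".max_sectors = {max_sectors} -> "
                   f"{max_sectors * SECTOR_BYTES // 1024} KiB per request")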
  7. @johnnie.black @limetech Thanks for your response - I guess I'll just have to stick with 6.6.7 for the foreseeable future. In the meantime I have emailed HP to try and hasten a solution; I will update you if I get a response. Regards, Duggie
  8. Hi, I still have the same problem with this update that I have had with every 6.7.0-rcX version, namely that once I reboot, multiple drives fail to mount or are disabled. On reversion to 6.6.7 all the drives are fine. I have tried:

     Install --> Reboot
     Install --> Power down all VMs and containers --> Turn off Docker and VM manager --> Reboot
     Install --> Power down all VMs and containers --> Turn off Docker and VM manager --> Stop array --> Reboot

     Regardless of the process I follow, loads of drives appear failed/corrupt/disabled after reboot once the array is started; reversion to 6.6.7 returns the system to normal operations, except disks 2 and 5, which remain disabled. See the attached diagnostics, taken after upgrade and reboot, followed by reversion: thenewdalebeast-diagnostics-20190501-0848.zip. If I get some time this weekend, I will do another upgrade and pull diags at every step.
  9. Ahh cool, I'll keep waiting for a release that works then, cheers Johnnie!
  10. Is that an Unraid kernel issue, or a controller driver issue?
  11. Thanks. I just find it strange that I had no issues under 6.6.7 and below. Now I have reverted, it is all working fine, as it was before the upgrade.
  12. Was working fine on the latest stable (6.6.7). Closed down all of my VMs and containers, ensured the mover had finished, backed up the flash, stopped the array and then updated to 6.7.0-rc6. Better than previous attempts to upgrade, in that on reboot all drives were detected in the correct slots; however, on starting the array, errors everywhere. The log is full of errors:

     Mar 30 23:26:05 TheNewdaleBeast emhttpd: error: get_filesystem_status, 6474: Input/output error (5): scandir
     Mar 30 23:26:05 TheNewdaleBeast kernel: XFS (md2): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x15d508f48 len 32 error 5
     Mar 30 23:26:05 TheNewdaleBeast kernel: XFS (md2): xfs_imap_to_bp: xfs_trans_read_buf() returned error -5.
     Mar 30 23:26:06 TheNewdaleBeast emhttpd: error: get_filesystem_status, 6474: Input/output error (5): scandir
     Mar 30 23:26:06 TheNewdaleBeast kernel: XFS (md2): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x15d508f48 len 32 error 5
     Mar 30 23:26:06 TheNewdaleBeast kernel: XFS (md2): xfs_imap_to_bp: xfs_trans_read_buf() returned error -5.
     Mar 30 23:26:07 TheNewdaleBeast emhttpd: error: get_filesystem_status, 6474: Input/output error (5): scandir
     Mar 30 23:26:07 TheNewdaleBeast kernel: XFS (md2): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x15d508f48 len 32 error 5
     ............................

     The screenshot shows five drives appearing as unmountable (they were mounted and functioning with no problem in stable 6.6.7). Also attached is the diagnostics log. Going to revert back again to a functioning system! thenewdalebeast-diagnostics-20190330-2333.zip
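
     PS: for what it's worth, the "error 5" in those XFS lines appears to be the kernel's EIO (generic I/O error), which would point at the device/controller layer underneath XFS rather than filesystem corruption as such. A one-liner to confirm the errno name:

         import errno, os
         print(errno.errorcode[5], "->", os.strerror(5))  # EIO -> Input/output error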
  13. Generating some 4096-bit RSA certs - should more than one thread be getting allocated? Cheers, Duggie
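
     PS: from what I can tell, OpenSSL generates each RSA key single-threaded, so one 4096-bit generation will only ever load one core. To use more cores, the usual trick is to run several generations in parallel - a sketch, assuming the openssl CLI is on the PATH and using hypothetical output names:

         from concurrent.futures import ProcessPoolExecutor
         import subprocess

         def gen_key(path: str) -> str:
             # Each openssl process generates one 4096-bit key on one core.
             subprocess.run(["openssl", "genrsa", "-out", path, "4096"], check=True)
             return path

         if __name__ == "__main__":
             with ProcessPoolExecutor() as pool:
                 # One process per key keeps all cores busy.
                 for done in pool.map(gen_key, [f"key{i}.pem" for i in range(4)]):
                     print("generated", done)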
  14. Hi, I installed this plugin on 6.6.7, and after changing the settings to what I require, I am unable to get a client to connect. I will continue fault-finding, but in the meantime: if you set LZO compression to No on the Server Config page, then whenever you create files, line 17 is simply a 0. Should it be "comp-LZO No" or "comp-LZO 0"?
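
     PS: for reference, my understanding is that OpenVPN's comp-lzo directive takes yes, no or adaptive (or no argument at all), so a bare 0 on that line would not be a valid setting. A tiny sketch of that check (my own illustration, not plugin code):

         VALID_MODES = {"yes", "no", "adaptive"}

         def comp_lzo_line_ok(line: str) -> bool:
             """True if the line is a valid comp-lzo directive."""
             parts = line.split()
             if not parts or parts[0] != "comp-lzo":
                 return False
             return len(parts) == 1 or parts[1] in VALID_MODES

         print(comp_lzo_line_ok("comp-lzo no"))  # True
         print(comp_lzo_line_ok("0"))            # False: what the plugin writes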