Duggie264

Members
  • Content Count: 27
  • Joined
  • Last visited

Community Reputation

1 Neutral

About Duggie264

  • Rank: Newbie


  1. Another upvote from me; I've been having this problem for a couple of years now, and with 14 4TB SAS drives spinning day and night, I've paid a few quid more than I needed or wanted to! I currently run 14 Seagate ST4000NM0023 (SAS) and 2 ST4000DM005 (SATA) drives through 2 LSI 9207-8i HBAs in IT mode. Would love a simple way to put the drives into standby!
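
     A minimal sketch of one way to do this, assuming the sdparm utility is installed; the device names are placeholders (not from the original post), and Unraid's own disk activity may well wake the drives straight back up:

         # Hypothetical sketch: spin SAS drives down with sdparm, which
         # sends a SCSI START STOP UNIT command. Device names are
         # placeholders -- substitute your own.
         import subprocess

         SAS_DRIVES = ["/dev/sdb", "/dev/sdc"]  # placeholder device list

         def standby(dev: str) -> None:
             # "--command=stop" spins the drive down; "--command=start"
             # would wake it again.
             subprocess.run(["sdparm", "--command=stop", dev], check=True)

         for dev in SAS_DRIVES:
             standby(dev)
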
  2. Hi, just started getting this crash today, any ideas?

         2020-04-19 15:23:48.353687 [info] System information Linux ################# 4.19.107-Unraid #1 SMP Thu Mar 5 13:55:57 PST 2020 x86_64 GNU/Linux
         2020-04-19 15:23:48.383161 [info] PUID defined as '99'
         2020-04-19 15:23:48.415129 [info] PGID defined as '100'
         2020-04-19 15:23:48.918075 [info] UMASK defined as '000'
         2020-04-19 15:23:48.945899 [info] Permissions already set for volume mappings
         2020-04-19 15:23:48.972882 [warn] TRANS_DIR not defined (via -e TRANS_DIR), defaulting to '/config/tmp'
         2020-04-19 15:23:49.0
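
     The [warn] line above suggests the directory can be pinned by passing TRANS_DIR explicitly. Purely as a hedged illustration via the docker-py SDK; the image name and host paths are placeholders, not details from the post:

         # Hypothetical sketch using the docker-py SDK: start the container
         # with TRANS_DIR set explicitly so it does not fall back to
         # '/config/tmp'. Image and host paths are placeholders.
         import docker

         client = docker.from_env()
         client.containers.run(
             "example/image:latest",  # placeholder image name
             detach=True,
             environment={"PUID": "99", "PGID": "100",
                          "TRANS_DIR": "/config/tmp"},
             volumes={"/mnt/user/appdata/example":
                      {"bind": "/config", "mode": "rw"}},
         )
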
  3. Hi, I am currently running Unraid 6.8.3, Sonarr 2.0.0.5344 (Mono 5.20.1.34), and Ombi 3.0.4892. I have searched for this issue but can't find a directly related issue or resolution. I have series that are not marked available. For this example, Friends (1994): on pressing the Request button and then selecting the Select option, I see that the final episodes of S09 and S10 are marked as missing. I then opened Sonarr, and both episodes are there (and watchable); however, after a little bit more digg
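
     One way to cross-check what Sonarr itself reports is to query its v2 REST API directly; in this sketch the URL, API key, and series ID are placeholders:

         # Hypothetical sketch: list S09/S10 episodes for one series from
         # Sonarr's v2 API and print whether Sonarr thinks each has a file,
         # for comparison with what Ombi shows.
         import requests

         SONARR_URL = "http://localhost:8989"  # placeholder URL
         API_KEY = "your-sonarr-api-key"       # placeholder key
         SERIES_ID = 1                         # placeholder series id

         resp = requests.get(
             f"{SONARR_URL}/api/episode",
             params={"seriesId": SERIES_ID},
             headers={"X-Api-Key": API_KEY},
         )
         resp.raise_for_status()

         for ep in resp.json():
             if ep["seasonNumber"] in (9, 10):
                 print(ep["seasonNumber"], ep["episodeNumber"],
                       "has file" if ep["hasFile"] else "MISSING")
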
  4. trurl - many thanks for your replies. Yes, the memory is fine, and no, I couldn't get any diagnostics as I lost the system completely. You are right about the log server though; I should really get one up and running! Cheers, Duggie
  5. So, to add to my misery: after three attempted restarts and three failures, I left to put a load of washing in, came back, and restart number 4 appears to be good, with everything back to normal and a parity check currently running. Grrr... but yay!
  6. I have been running 6.8 stable for a couple of weeks with no problems. Last weekend, after adding some NZBs to SABnzbd, I noticed that they were not being added. I restarted the container, but it failed to come back up. Then the dashboard hung and would not load; Main would load only the top half (disk info), or, if I scrolled to the bottom of the page quickly enough, the reboot and mover options (but not the disk info at the top of the page). I rebooted the server, but the issues remained. I then downgraded to the previous version, and all seemed well. Today I decided to upgrade to
  7. So I emailed the maintainers via the SF link you provided, and this was their response:

         Hi Duggie, a quick glance at the UNRAID changelog suggests that unRAID 6.6.7 is using the 4.18.20 kernel. Do you know what kernel is in the newer RC versions? Feel free to submit the diagnostic logs to us. We will take a look. Contrary to their comments, we do maintain the kernel driver as well and have some patches staged to go upstream soon. Thanks, Scott

     AND ON SENDING MY DIAG LOGS AND THE UNRAID CHANGE-LOGS FOR THE 6.7.0-rcX FAMILY RCVD TH
  8. @johnnie.black @limetech Thanks for your response; I guess I'll just have to stick with 6.6.7 for the foreseeable future. In the meantime I have emailed HP to try to hasten a solution, and I will update you if I get a response. Regards, Duggie
  9. Hi, I still have the same problem with this update that I have had with every 6.7.0-rcX version, namely that once I reboot, multiple drives fail to mount or are disabled. On reverting to 6.6.7, all the drives are fine. I have tried:
     • Install --> Reboot
     • Install --> Power down all VMs and containers --> Turn off Docker and the VM manager --> Reboot
     • Install --> Power down all VMs and containers --> Turn off Docker and the VM manager --> Stop the array --> Reboot
     Regardless of the process I follow, loads of drives appear failed/corrupt/disabled after reboot
  10. Ahh cool, I'll keep waiting for a release that works then. Cheers, johnnie!
  11. Is that an Unraid kernel issue, or a controller driver issue?
  12. Thanks. I just find it strange that I had no issues under 6.6.7 and below. Now that I have reverted, it is all working fine, as it was before the upgrade.
  13. Was working fine on the latest stable (6.6.7). I closed down all of my VMs and containers, ensured the mover had finished, backed up the flash drive, stopped the array, and then updated to 6.7.0-rc6 (next branch). Better than previous attempts to upgrade, in that on reboot all drives were detected in the correct slots; however, on starting the array, errors everywhere. The log is full of errors:

         Mar 30 23:26:05 TheNewdaleBeast emhttpd: error: get_filesystem_status, 6474: Input/output error (5): scandir
         Mar 30 23:26:05 TheNewdaleBeast kernel: XFS (md2): metadata I/O error in "xfs_trans_read_buf_map" at daddr 0x15d
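
     A standard read-only next step (not something from the original post) would be an xfs_repair check in no-modify mode, with the array started in maintenance mode; the device name below just matches the md2 errors in the log:

         # Hypothetical sketch: run xfs_repair in no-modify mode ("-n") so
         # it only reports problems and writes nothing to the disk. Only
         # rerun without "-n" after taking a backup.
         import subprocess

         check = subprocess.run(
             ["xfs_repair", "-n", "/dev/md2"],  # device matching the log
             capture_output=True, text=True,
         )
         print(check.stdout)
         print(check.stderr)
         print("exit status:", check.returncode)  # non-zero: problems found
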
  14. Generating some 4096-bit RSA certs - should more than one thread be getting allocated? Cheers, Duggie
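
     For what it's worth, generating a single RSA key is effectively single-threaded in most tooling, so the usual way to use more cores is one key per process. A minimal sketch with Python's cryptography package (the key count is arbitrary):

         # Hypothetical sketch: generate several independent 4096-bit RSA
         # keys in parallel, one per worker process, returning each as PEM
         # bytes (the key objects themselves do not pickle across processes).
         from concurrent.futures import ProcessPoolExecutor
         from cryptography.hazmat.primitives import serialization
         from cryptography.hazmat.primitives.asymmetric import rsa

         def gen_key_pem(_: int) -> bytes:
             key = rsa.generate_private_key(public_exponent=65537,
                                            key_size=4096)
             return key.private_bytes(
                 encoding=serialization.Encoding.PEM,
                 format=serialization.PrivateFormat.PKCS8,
                 encryption_algorithm=serialization.NoEncryption(),
             )

         if __name__ == "__main__":
             with ProcessPoolExecutor() as pool:
                 pems = list(pool.map(gen_key_pem, range(4)))
             print(f"generated {len(pems)} keys")
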