mdoom

Members
  • Posts: 42
  • Gender: Undisclosed

mdoom's Achievements
  • Rank: Rookie (2/14)
  • Reputation: 0

  1. Just to add, I also had md_write_method set to reconstruct. Switching it to auto helped: now not every disk spins up whenever a write occurs, but I'm still dealing with the constant SMART reads (see the first sketch after this list for checking that tunable). I've had some issues with Plex hanging on scans before, but I've experienced this SMART issue with all my dockers stopped too.
  2. Thanks. Yeah, I've already been going down that path, disabling nearly everything. Haven't figured out the culprit yet! Will keep messing with it. The interesting thing is that if I force a spin-down of all disks, it's always shortly afterwards that they all wake up with that READ SMART message.
  3. I just happened to stumble onto this thread... I've been having this problem forever! Admittedly, I never dug into it much, but I was always annoyed that my disks were constantly spun up. Even if I forced them to spin down, they always seemed to come back up. I finally checked the logs today and I'm seeing frequent SMART reads on all drives. Now I'm trying to figure out what is triggering those SMART reads (see the smartctl sketch after this list for a way to watch without waking the drives). I will say, though, that I'm using LSI 2008 HBAs for my controllers. EDIT: I'm also on RC2. I can't say whether this is unique to RC2, but I'm on it currently and I'm currently seeing these log entries. I can't speak to why I had issues with disks always spun up prior to RC2.
  4. Sort of a two-part question here... I'm using this Plex docker on unRAID, latest versions of everything. Whenever a large movie is added to my Media folder and Plex finds it, Plex seems to fully lock up for a while until it's done importing. I've dealt with this for way longer than I care to admit... but searching this forum, I found recommendations to others with similar issues to make sure the appdata path in the docker is set to /mnt/cache/ instead of /mnt/user/. Sure enough, I checked, and I'm currently set up with /mnt/user. So my first question: since the appdata share is set to cache-only, what impact does this really have? Second question: I tried updating the path from /mnt/user/ to /mnt/cache/ (see the volume-mapping sketch after this list), and now Plex won't start and continuously crashes, with "unraid failed to create uuid file" errors over and over. It says it keeps writing crash reports too, but when I check the crash-reports folder in appdata, nothing new is there. I also deleted and recreated the image just to try that. No luck. For now I've switched back to /mnt/user, since it 'works'. I welcome any advice, though.
  5. Hey, I just recently started using this specific container, and I'm having an issue that I hope someone can point me in the right direction on. My goal: use an Nvidia GPU in the docker with OpenCL. I've set up my docker appropriately to pass in the GPU UUID and all that, and confirmed that the docker itself has visibility of the GPU (confirmed with nvidia-smi). However, when running a project with tasks that expect Nvidia OpenCL processing (I'm running PrimeGrid in particular), I'm getting computational errors. When I check the stderrgpudetect.txt file, I'm seeing: "beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware", which tells me it's trying to use Intel OpenCL. Is there something additional that needs to be installed to use Nvidia (see the OpenCL sketch after this list for one possibility)? Any guidance is greatly appreciated.
  6. Thanks. Yep, I'm already working on getting those cards set up first. Then I'll just rebuild disk 2 back onto itself, and disk 4 onto a new drive I have ready to go. Thank you!
  7. Well, I don't know for sure that they're the problem, but yeah. I just haven't gotten around to replacing them, since the ones I have will need a new BIOS flashed to them first, and I've just been busy. I did go ahead and get the server rebooted. Disks 2 and 4 both still had their disabled status, but disk 2 is good from what I can tell. Disk 4 is throwing some legitimate SMART errors, and I'd like to replace it. Are there any issues with leaving disk 2 disabled and emulated while replacing disk 4 (and replacing it with a larger disk, i.e. a 3 TB with an 8 TB)? Once I get disk 4 replaced, I'm confident I can 'trust' disk 2 and rebuild parity from there. Although, if I can get another replacement disk this week, I may rebuild that one too and do further testing on the old drive in the meantime. This is the first time I've really had any issues since switching to dual parity, so it's a new world for me. 🙂 EDIT: I see you said it was the SAS controller that crashed. So yes, that will also motivate me to get those cards replaced ASAP.
  8. Hello, I came home from work today to find quite the surprise: alerts for two disks being disabled, while three in total had read errors (one had just recovered). I didn't panic, as I've had issues in the past with my current SAS cards occasionally having flukes and kicking multiple drives offline. (I have replacement cards, I just haven't gotten them installed yet.) Anyway, before I did anything, I figured I should stop all my dockers just to prevent anything from making it worse. When I attempted that, everything started hanging, and one CPU core is maxed at 100% by the process "kworker/1:0+events" (see the kworker sketch after this list for how to peek at it). So before doing anything else, I thought I'd download diagnostics and get some extra sets of eyes to help me find the best way forward. disk2 and disk4 are the disabled ones currently; disk6 also had read errors but seems okay, in that it's 'green'. Looking at the diagnostics, though, disks 2, 4 and 6 all failed to produce SMART reports, which is also why I think it has to be something with my card/cables. Any guidance is much appreciated. As I sit here, the CPU is still spinning and nothing is happening. :-) tiger-diagnostics-20181204-1439.zip
  9. So, now that I'm home and able to play with it properly, I wanted to provide an update. First and foremost, my apologies: it turns out I wasn't even using the linuxserver container. I thought I had switched everything to linuxserver, but I was still running a version by needo, which apparently hasn't been updated in a very long time. I switched to linuxserver, Mono is on 5.10, and all is well!
  10. That's good to know. I did a "check for updates" this morning before I left home, and it said there were no updates for the container at all. I'll just purge my image tonight after work and grab a fresh copy; I'm reassured that you said it'll pull the latest version. The odd part is that I know I've re-set up Sonarr (as a new container) many times since Mono 3.10 has been out, but oh well. Hopefully anyone else having the same problem can be helped by this too!
  11. Good question. I guess I don't know for sure, but I know mine currently has 3.10. I'm away from home at the moment, but I'll try a clean install of the container to see what happens. It may just be that it pulls the latest Mono when it's first installed, and then never gets updated over time without an explicit update to the container.
  12. I was curious how to go about upgrading the version of Mono in this docker. There's a bug in Mono 3.10 that is preventing NZBs from being grabbed successfully; details at https://forums.sonarr.tv/t/xmlexception-syntax-error/18599/11. So far, the solution of upgrading Mono has been working for others; I'm just not sure what to do for the docker version (see the Mono sketch after this list for checking what a container is running).
  13. Agreed, truly the safest option. Thanks again!
  14. Well, @johnnie.black, thank you very much!! That executed successfully; I restarted the array (still in emulation mode for the disk) and it comes up perfectly now. I'll probably go ahead and move all the data off to other drives and then rebuild back to the original drive as 'empty', or I'll just rebuild onto it now. Either way, I think I'm good now, more or less.
  15. Restarted the array in maintenance mode, tried the command, and got this fun warning. Wanted to double-check before I did anything else: should I proceed with adding the -L flag (see the last sketch after this list for why that flag warrants caution)?
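
Sketch for item 1: a minimal look at the md_write_method tunable from the unRAID console. This is a sketch only; the mdcmd utility is standard on unRAID, but the numeric value mapping in the comments is an assumption, so verify it against your release (the GUI equivalent lives under Settings → Disk Settings).

    # Show the current write method as the md driver reports it.
    mdcmd status | grep md_write_method

    # Switch the write method; 0 is assumed here to be the default
    # read/modify/write mode and 1 the reconstruct ("turbo") write mode.
    mdcmd set md_write_method 0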
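
Sketch for items 2-3: watching for the SMART reads without waking the drives yourself. smartctl's -n standby option makes it skip the query if the disk is already spun down; /dev/sdb is a placeholder device name.

    # Poll SMART attributes, but bail out rather than wake a spun-down disk.
    smartctl -n standby -A /dev/sdb

    # Watch the syslog live for whatever is issuing the SMART reads.
    tail -f /var/log/syslog | grep -i smart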
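
Sketch for item 4: the difference between the two appdata mappings. /mnt/user paths go through unRAID's user-share (shfs/FUSE) layer, while /mnt/cache addresses the cache drive directly, which is why the former is often blamed for slow Plex database access. The image name, container name, and media path below are stand-ins, not the exact setup from the post.

    # Recreate the container with appdata pointed directly at the cache drive.
    docker run -d --name=plex \
      -v /mnt/cache/appdata/plex:/config \
      -v /mnt/user/Media:/media \
      plexinc/pms-docker
    # versus the user-share form that goes through the shfs layer:
    #   -v /mnt/user/appdata/plex:/config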
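
Sketch for item 5: one possible fix, assuming a Debian/Ubuntu-based image and that the Nvidia container runtime already mounts libnvidia-opencl into the container (the package names are assumptions). The idea is to register Nvidia's OpenCL ICD so the loader stops falling back to beignet.

    # Install the vendor-neutral OpenCL ICD loader, plus clinfo to verify.
    apt-get update && apt-get install -y ocl-icd-libopencl1 clinfo

    # Register Nvidia's OpenCL implementation with the loader.
    mkdir -p /etc/OpenCL/vendors
    echo "libnvidia-opencl.so.1" > /etc/OpenCL/vendors/nvidia.icd

    # Should now report an Nvidia platform instead of the beignet error.
    clinfo

It may also be necessary to start the container with NVIDIA_DRIVER_CAPABILITIES including "compute", since that is what causes the runtime to mount the OpenCL libraries at all.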
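
Sketch for item 8: peeking at what the spinning kworker thread is doing. The thread name comes from the post, and reading /proc/PID/stack requires root.

    # Find the PID of the busy kernel worker thread.
    pid=$(pgrep 'kworker/1:0' | head -n 1)

    # Dump its kernel stack a few times to see where it keeps sitting.
    for i in 1 2 3; do cat "/proc/$pid/stack"; echo ---; sleep 1; done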
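
Sketch for items 10-12: Mono ships inside the image, so upgrading it means pulling a newer image rather than updating anything in place. This assumes the container is named sonarr.

    # Check which Mono the container is actually running.
    docker exec sonarr mono --version

    # Pull the current image, then recreate the container; settings live in
    # the mapped /config volume, so they survive the recreate.
    docker pull linuxserver/sonarr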
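
Sketch for items 13-15: the command isn't quoted in these excerpts, but the -L warning reads like xfs_repair refusing to run against a dirty log, so that's what is assumed here. -L zeroes the log and can discard the most recent metadata updates, hence the double-checking. /dev/md1 stands in for whichever md device holds the emulated disk.

    # With the array in maintenance mode, first do a no-modify check.
    xfs_repair -n /dev/md1

    # Only if it refuses because of a dirty log that cannot be replayed by
    # mounting the disk: zero the log and repair. Destructive, last resort.
    xfs_repair -L /dev/md1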