mdoom

Everything posted by mdoom

  1. Just to add, I also had md_write_method set to reconstruct. Switching it to auto helped: now not every disk spins up whenever any write occurs, but I'm still dealing with the constant SMART reads. I have had some issues with Plex hanging on scans before, but I've experienced this SMART issue with all my dockers stopped too.
  2. Thanks. Yeah, I've already been going down that path, disabling nearly everything. Haven't figured out the culprit yet! Will keep messing with it. The interesting thing is that if I force a spin-down of all disks, it's always shortly after doing that that they all wake up with that READ SMART message.
  3. I just happened to stumble onto this thread... I've been having this problem forever! Admittedly, I never dug into it much, but I was always annoyed that my disks were constantly spun up. Even if I forced them to spin down, they always seemed to come back up. I finally checked the logs today and I'm seeing frequent SMART reads on all drives. Still trying to figure out what is triggering the SMART reads, though (a quick way to watch the log for this is sketched below). I will say that I am using LSI 2008 HBAs for my controllers. EDIT: I am also on RC2. I can't say whether this is unique to RC2, but that's what I'm running currently, and it's where I currently see these log entries. I can't speak to why I previously had issues with disks always spun up (prior to RC2).
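      A quick way to watch for the trigger in real time is just to follow the syslog; a minimal sketch, assuming the standard unRAID log location, and the exact message text may differ on your system:

          # Follow the unRAID syslog and show only SMART / spin-related lines.
          # Adjust the pattern to whatever wording your log actually uses.
          tail -f /var/log/syslog | grep -iE 'smart|spin'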
  4. Sort of a two-part question here... I'm using this Plex docker on unRAID, latest versions of everything. Whenever a large movie is added to my Media folder and Plex finds it, Plex seems to fully lock up for a while until it is done importing it. I've dealt with this for way longer than I care to admit... but in searching this forum I found recommendations to others with similar issues to make sure the appdata path in the docker is set to /mnt/cache/ instead of /mnt/user/, and sure enough, I checked and I am currently set up with /mnt/user. So my first question: since the appdata share is set to cache-only, what impact does this really have? Second question: I tried updating the path from /mnt/user/ to /mnt/cache/ and now Plex won't start and continuously crashes, giving "unraid failed to create uuid file" errors over and over. It says it keeps outputting crash reports too, but when I check the crash reports folder in appdata, nothing new is there. I also deleted and re-created the image just to try that. No luck. For now I switched back to /mnt/user since it 'works'. I welcome any advice, though.
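      For reference, the change being discussed is only the host side of the appdata volume mapping; a minimal sketch as a plain docker run, where the image name, the /config container path, and the appdata folder name are assumptions (use whatever your unRAID template actually defines):

          # Point the appdata mapping straight at the cache disk instead of the
          # /mnt/user FUSE layer. Paths and image name below are assumptions.
          docker run -d --name plex \
            --net=host \
            -v /mnt/cache/appdata/plex:/config \
            -v /mnt/user/Media:/media \
            plexinc/pms-docker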
  5. Hey, I just recently started using this specific container, and I'm having an issue that I'm hoping someone can point me in the right direction on. My goal: use an Nvidia GPU in the docker with OpenCL. I've set up my docker appropriately to pass in the GPU UUID and all that, and confirmed that the container itself has visibility of the GPU (confirmed with nvidia-smi). However, when running a project with tasks that expect Nvidia OpenCL processing (I'm running PrimeGrid in particular), I'm getting computational errors. When I check the stderrgpudetect.txt file, I'm seeing: "beignet-opencl-icd: no supported GPU found, this is probably the wrong opencl-icd package for this hardware" which tells me it's trying to use Intel OpenCL. Is there something additional that would need to be installed to use Nvidia? Any guidance is greatly appreciated.
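      One way to see which OpenCL ICDs the container has registered, plus a common (but assumed, Debian/Ubuntu-based) way of adding the Nvidia entry; none of this is specific to this particular container:

          # Inside the container: list the registered OpenCL ICDs.
          # Seeing only beignet here matches the "wrong opencl-icd package" error.
          ls /etc/OpenCL/vendors/

          # Assumption: Debian/Ubuntu base image. Install the generic ICD loader and
          # point an nvidia.icd entry at the driver library injected by the Nvidia runtime.
          apt-get update && apt-get install -y ocl-icd-libopencl1 clinfo
          echo 'libnvidia-opencl.so.1' > /etc/OpenCL/vendors/nvidia.icd
          clinfo   # should list an NVIDIA platform once the ICD is in place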
  6. Thanks. Yep, I'm already working on getting those cards set up first. Then I'll just rebuild disk 2 back onto itself, and disk 4 onto a new drive I have ready to go. Thank you!
  7. Well, I don't know for sure that they are a problem, but yeah. I just haven't gotten around to replacing them, as the ones I have will require flashing a new BIOS to them first, and I've just been busy. I did go ahead and get the server rebooted. Disks 2 and 4 both still had their disabled status, but disk 2 is good from what I can tell. Disk 4 is throwing some legitimate SMART errors and I'd like to replace it. Are there any issues with leaving disk 2 disabled and emulated while replacing disk 4 (and replacing disk 4 with a larger disk, going from 3 TB to 8 TB)? Once I get disk 4 replaced, I'm confident I can 'trust' disk 2 and rebuild parity from there. Although if I can get another replacement disk this week, I may just rebuild that one too and do further testing on the old drive in the meantime. This is the first time I've really had any issues since switching to dual parity, so it's a new world for me. 🙂 EDIT: I see you said it was the SAS controller that crashed. So yes, that will also motivate me a bit now to get those cards replaced ASAP.
  8. Hello, I came home from work today to quite the surprise. I had alerts for two disks being disabled, while three in total had read errors; one just recovered from it. I didn't panic, as I know I've had issues in the past with my current SAS cards occasionally having flukes and kicking multiple drives off. (I have replacement cards, I just haven't gotten them installed yet.) Anyway, before I did anything, I figured I should stop all my dockers just to prevent anything from making it worse, and when I attempted that, everything started hanging. I have one CPU core maxed at 100%; the process is "kworker/1:0+events". So before I did anything else, I thought I'd download diagnostics and get some extra sets of eyes helping me on the best way to move forward. disk2 and disk4 are the disabled ones currently. disk6 is the one that also had read errors but seems okay, as in it's 'green'. Although, looking at the diagnostics, disk2, disk4, and disk6 all failed to produce SMART reports, which is also why I think it has to be something with my card / cables. Any guidance is much appreciated. As I sit here, the CPU is still spinning, and nothing is happening. :-) tiger-diagnostics-20181204-1439.zip
  9. So, now that I'm home and able to play with it properly, I wanted to provide an update. First and foremost, I apologize: it turns out I wasn't even using the linuxserver container. I thought I had switched everything to linuxserver, but I was still running a version by needo, which apparently hasn't been updated in a very long time. I switched to linuxserver, Mono is on 5.10, and all is well!
  10. That's good to know. I did do a "check for updates" this morning before I left home and it said there were no updates for the container at all. I'll just purge my image tonight after work and grab a fresh copy; I'm reassured that you said it'll pull the latest version. The odd part is that I know I have re-set up Sonarr (as a new container) many times since Mono 3.10 has been out, but oh well. Hopefully anyone else having the same problem can be helped by this too!
  11. Good question. I guess I don't know for sure, but I know mine currently has 3.10. I'm away from home at the moment, but I will try a clean install of the container to see what happens. It may just be that it pulls the latest Mono when it's installed, and then it never gets updated over time without an explicit update in the container.
  12. I was curious how to go about upgrading the version of Mono in this docker. There is a bug with Mono 3.10 that is preventing NZBs from being grabbed successfully; details can be found at: https://forums.sonarr.tv/t/xmlexception-syntax-error/18599/11 So far the solution of upgrading Mono has been working for others. I'm just not sure what to do for the docker version.
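      For reference, checking which Mono a running container actually has is a one-liner; the container name here is an assumption, so substitute whatever yours is called:

          # Print the Mono version inside the running Sonarr container.
          # "sonarr" is an assumed container name.
          docker exec -it sonarr mono --version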
  13. Agreed. Truly the safest option. Thanks again!
  14. Well @johnnie.black, thank you very much!! That executed successfully; I restarted the array (still in emulation mode for the disk) and it comes up perfectly now. I'll probably go ahead and move all the data off to other drives and then rebuild back to the original drive as 'empty'. Or I'll just rebuild onto it now. Either way, I think I'm good now, more or less.
  15. Restarted the array in maintenance mode, tried the command, and got this fun warning. Wanted to double-check before I did anything else. Should I proceed with adding the -L flag?
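      For context on the kind of command being discussed (the exact command isn't quoted in the post, and /dev/md7 for disk 7 is an assumption; -L zeroes the XFS log and can discard recent metadata updates, so it's a last resort when a normal repair refuses to run):

          # Check only, no modifications (assumed device name for disk 7).
          xfs_repair -n /dev/md7

          # If the log is dirty and cannot be replayed, -L zeroes it as a last resort.
          xfs_repair -L /dev/md7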
  16. It was mounted. Oops! I unmounted it and stopped/started the array again, and it still says the same thing, but it isn't mounted in UD now. Here are the latest diagnostics. tiger-diagnostics-20180301-1742.zip
  17. So I did go ahead and just start the array with it emulating that disk. However, it still shows "Unmountable: No file system" for disk 7, even while it's being emulated.
  18. It's been in my array for over two years. It's a basically full 8 TB drive, and I haven't had any issues with it at all. Just after the power outage yesterday that led to the unclean shutdown (apparently my UPS backup is dead too...), it now shows up as unmountable when the array starts. If it comes to rebuilding the filesystem, I guess I don't entirely know what that all entails.
  19. With the array stopped (this is the drive in the disk 7 slot), I change the drop-down for disk 7 from the hard drive to "no", and then in the GUI under "Unassigned Devices" I am able to click "Mount". Then via a terminal I was just navigating to /mnt/disks/... etc. and could browse all the data without issue.
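      The terminal check is just browsing the Unassigned Devices mount point; a minimal sketch, where the mount name is a placeholder for whatever UD called it:

          # Confirm the data on the UD-mounted disk is readable.
          # <mount_name> is a placeholder; use the name shown under Unassigned Devices.
          ls -lah /mnt/disks/<mount_name>/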
  20. Good call. :-) Thank you! I just added it.
  21. So, I had some crazy events yesterday. Long story short, I had an unclean shutdown of my unRAID server. Last night I quickly got it back up and running, but only just enough to click start and go; I didn't pay much attention to anything. This morning I noticed it was doing a parity check, which is expected after an unclean shutdown, but then I noticed that one of the data drives in my array said "Unmountable: No file system". Now I'm home from work and trying to fix this. I've searched the forums a decent amount and see that several people had similar issues specific to cache drives when upgrading to the 6.4 release candidates, but I'm on the full 6.4 version, and this is a data drive. When I stop the array, I am able to mount the drive, explore it, and see all my data via a terminal. But even after that, starting it back up in the array still gives the same error. I'm just looking for guidance on next steps to correct this so I can get everything working. I thought about maybe trying New Config, but wasn't sure if that'd do it or not. I'm also not sure what state my parity is in, since it ran overnight doing a parity check with one drive in this state before I cancelled it. I am running a dual-parity setup as well. I haven't yet, but I also thought about starting the array with the drive "missing" to test whether I could see all of the data via parity. Thank you in advance. EDIT: added diagnostics zip tiger-diagnostics-20180301-1657.zip
  22. Cheapest I've seen these in a while! Amazon - Seagate 3 TB
  23. So I have been wanting to do a backup of my app data for quite some time now, and I stumbled upon this thread. After reading everything here, seeing all the awesome work by dirtysanchez, looking into the expansion by StevenD, and then taking gundamguy's idea of looking at this link: Time Machine for every Unix out there, I put it all together into a script that works great for me and my needs, and I thought I'd share.
      I liked the idea of the "time machine" approach, where files that did not change are just linked rather than copied, so I get a truly incremental backup and don't need to worry about wasting a lot of space. The other major concern I had was Plex. Plex downloads and stores TONS of metadata, and if you have a library even half the size of mine, you know it's a pain to copy/move/do anything with all of those thousands and thousands of folders and files. I wanted to exclude all Plex metadata from my backups. I did that successfully and tested restoring to a new Plex installation; sure enough, Plex restored with no metadata and instantly started downloading it again. :-)
      Here is my script:

          #!/bin/bash
          # Set date variable for today
          time_stamp=$(date +%Y-%m-%dT%H_%M_%S)

          # Create backup folder for today
          backup_path="/mnt/disk1/CacheBackup"
          mkdir -p "${backup_path}/$time_stamp"

          # Stop plugins
          /etc/rc.d/rc.Couchpotato stop

          # Stop docker apps
          docker stop $(docker ps -a -q)

          # Back up apps dir via rsync
          date >/var/log/cache_backup.log
          /usr/bin/rsync -azP \
            --delete \
            --delete-excluded \
            --exclude-from=$backup_path/excludes.txt \
            --link-dest=$backup_path/current /mnt/cache/apps/ $backup_path/$time_stamp \
            >>/var/log/cache_backup.log

          # Start docker apps
          /etc/rc.d/rc.docker start

          # Start plugins
          /etc/rc.d/rc.Couchpotato start

          # Create symbolic link to the current backup
          rm -f $backup_path/current
          ln -s $backup_path/$time_stamp $backup_path/current

      One thing you'll notice is that it uses an "--exclude-from=" option in rsync. This lets you point at a file listing all of the items you want it to ignore. I excluded the following:

          Metadata/
          Cache/
          MediaCover/

      The way excludes work for directories:
        • /dir/ means exclude the root folder /dir
        • /dir/* means keep the root folder /dir but not its contents
        • dir/ means exclude any folder anywhere whose name contains dir/ (examples excluded: /dir/, /usr/share/mydir/, /var/spool/dir/)
        • /dir means exclude any folder anywhere whose name contains /dir (examples excluded: /dir/, /usr/share/directory/, /var/spool/dir/)
        • /var/spool/lpd//cf means skip files that start with cf within any folder within /var/spool/lpd
      I hope this helps someone!
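      In case it saves someone a step, the excludes file the script points at is just a plain text file; a minimal sketch of creating it, using the same backup_path as the script above:

          # Create the excludes file referenced by --exclude-from in the script above.
          # These entries skip the regenerable metadata folders called out above.
          printf '%s\n' 'Metadata/' 'Cache/' 'MediaCover/' > /mnt/disk1/CacheBackup/excludes.txt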
  24. Hello, I'm trying to get the reverse proxy set up, and I am running into an issue. I know I have to be overlooking something simple. I'm trying everything on port 80 for now, to get it working there before even going into SSL and 443. Anyway, I started with my router and port forwarding: I have port 80 forwarding to my unRAID server at 192.168.1.3. I have the docker set up with container port 80 and host port 81, and I have the simplest config file. Right now I'm just trying to point to Sonarr, CouchPotato, and PlexWatch. I followed your recommendation for a free DNS subdomain just to test things out before I get my own domain too. The problem I have now is that if I just go to my.domain.com, it takes me to my unRAID dashboard. That makes sense to me, since that is plain HTTP on port 80 and my router is forwarding all port 80 traffic to my unRAID server. If I try something like my.domain.com/sonarr, I just get a 404 error, which also makes sense because that page doesn't exist in the unRAID GUI. This happens with or without my Apache container running. So basically, it seems that unRAID is trumping Apache and they're competing for port 80 traffic. What simple step might I be missing?
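      To make the port relationship concrete, a quick check from another machine on the LAN (the IP and ports are the ones described above; the suggestion in the comments is the usual resolution for this kind of conflict, not something specific to this container):

          # Port 80 on the host is answered by the unRAID WebGUI, not Apache.
          curl -I http://192.168.1.3:80/
          # Port 81 on the host is where the Apache container is actually published.
          curl -I http://192.168.1.3:81/

          # So a router rule "WAN 80 -> 192.168.1.3 port 80" lands on the WebGUI.
          # Either forward WAN port 80 to host port 81 instead, or move the unRAID
          # WebGUI off port 80 so Apache can take it.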