kaiguy

Everything posted by kaiguy

  1. Updated on Saturday. Started noticing that all my drives were pretty much always spun up. Read in this thread that it could be a combo of nerd pack and cache_dirs, so removed all nerd pack items and uninstalled the plugin, and updated cache_dirs to the latest unofficial version. Still experiencing it. Is there anything else in this release that could potentially cause this issue? Thanks!
  2. Is there any way to access the phantom branch (sonarr v3) with this container? I know develop is an option, but my gut tells me that the container itself needs to be able to support the given branch for me to be able to just change a tag and update... Edit: Nevermind. I just changed the repository within the container to lsiodev/sonarr-preview and it started right up!
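     (For anyone else trying this: the change amounts to pointing the container template's Repository field at the preview image, which is roughly equivalent to this pull — image name as used above, tags may vary:)

         # pull the linuxserver preview image in place of linuxserver/sonarr
         docker pull lsiodev/sonarr-preview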
  3. Thanks all. I'm comfortable with the flashing process (did the crossflash with my M1015 years ago), but I just hadn't found much info related to the 9201-8i as it relates to flashing firmware (especially since there seems to be no official 9201-8i firmware). I'll give it a go!
  4. Can anyone please confirm that I can flash 9211-8i firmware on a 9201-8i card? And in the process remove the BIOS? Thanks!
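     (For reference, the sas2flash sequence that crossflash guides for these SAS2008 cards typically describe — a sketch only, assuming the DOS/EFI sas2flash utility and the 2118it.bin IT firmware image from the 9211-8i package; verify against a current guide before erasing anything:)

         sas2flash -listall                      # confirm the controller is detected; note its SAS address
         sas2flash -o -e 6                       # erase the flash region -- do NOT reboot until new firmware is written
         sas2flash -o -f 2118it.bin              # write 9211-8i IT firmware; omitting -b mptsas2.rom leaves the BIOS off
         sas2flash -o -sasadd 5000xxxxxxxxxxxx   # restore the SAS address noted earlier (placeholder value)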
  5. Bumping this topic, since I'm getting the itch to upgrade yet again. My Atlas clone hasn't seen any upgrades at all (aside from HDDs and SSDs) since I built it in 2014. Truth be told, while it has been running great, I think the processor is starting to show its age with all the Docker containers and the VM I run. The case, PSU, SAS card, and expander should be good to go, but I know I'll need to get a new mobo, CPU, and DDR4 RAM... I have enjoyed the stability and quality of the Xeon and Supermicro mobo (including IPMI, since I run headless), but I wonder if I'd get more utility switching to consumer-grade hardware with a modern Core processor (for Intel QSV support, for example, for Plex hardware transcoding). Does anyone have a suggestion on a good upgrade path for me? I haven't researched hardware in years and could really use some advice. I've been reading through the build and mobo subforums, but I don't feel like I'm making much progress. I have fairly heavy Plex use (Docker), about 7 running Docker containers total, 1 Windows 10 Pro VM... Low-ish power would also be a plus! Thanks in advance!
  6. Thanks for this! The only reference to port 443 I could find was:

         tcp6       0      0 :::443      :::*      LISTEN      5814/docker-proxy

     Which I believe would be the LE container. Not sure why it won't let me start the container with 443... especially since it's been working fine for months.
  7. I'm suddenly running into a problem where it appears the LetsEncrypt container won't load because 443 is already in use. But I don't even have SSL enabled on my server, and prior to disabling it, I set the HTTPS port to 444. No changes to my server config, but I did have an unclean shutdown and a parity check is running (though that really shouldn't have any effect).

         /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint letsencrypt (7c3343119f45bcf4276a0xxxxxxxxxf6791f5be978ae5): Bind for 0.0.0.0:443 failed: port is already allocated.

     I can't figure this one out! Any thoughts? THANKS!
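     (A few commands that can help narrow down who owns the port — a sketch; the rc.d path is the unRAID default, adjust as needed:)

         netstat -tlnp | grep ':443'         # show the PID/name listening on 443 (e.g. docker-proxy)
         docker ps -a --filter publish=443   # any container, running or not, that publishes 443
         /etc/rc.d/rc.docker restart         # restarting the Docker service can clear an orphaned port binding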
  8. Yeah, quite a few. I guess a dirty shutdown it is.
  9. Hello! Woke up this morning and tried to access the webgui, and sure enough, no luck. Shares can be accessed; I can ssh into the server... I was running a hacked preclear_disk.sh in a screen session, and that too seemed to be hung... maybe because of my once-a-week mover job? So I went ahead and control-c'd the job and exited out of screen. Tried kicking off a shutdown command, but nothing happened. Figuring there's something keeping the array from unmounting, I tried executing a fuser command (fuser -mv /mnt/disk* /mnt/user/*), but it just hangs. lsof, too. I've tried to kill whatever PIDs looked like they might be holding things up, but that doesn't seem to be doing anything. I've exhausted all of my knowledge and forum searching at this point. Not sure what else I can do to get a clean shutdown. Any suggestions?
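     (For the archives, the usual checks for a stuck unmount — a sketch; note that fuser and lsof can themselves hang when a mount is wedged, as happened here:)

         fuser -mv /mnt/disk* /mnt/user/*   # list PIDs with open files on each mount
         lsof /mnt/user 2>/dev/null         # alternative view: all open files on that filesystem
         umount -l /mnt/cache               # last resort: lazy unmount, detach now and clean up when refs close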
  10. Thanks for continuing to maintain this plugin! Just FYI -- I updated and the NUT service never started back up. I had to manually toggle the NUT service off then back on within the plugin settings. Now it appears to be working as expected.
  11. Hmm... I get that if it's coming in from outside, the port forward from 443 to the correct internal host port would work as expected. On the LAN, the host port is listening on 8062, and you're connecting over https (443)... so unless you're leaving your LAN to make that connection, I don't see how this would work without specifying the port (https://plexpy.mydomain:8062)... But I could very well be missing something. Does your plexpy/reverse proxy container have a different IP than your unRAID box?
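     (If the connection really does stay on the LAN, split DNS is the usual trick — e.g. on Asuswrt-Merlin, a dnsmasq entry like this points the public name straight at the box so no NAT hairpin is needed; hostname and IP here are hypothetical:)

         # /jffs/configs/dnsmasq.conf.add (Merlin) -- hypothetical name and LAN IP
         address=/plexpy.mydomain.com/192.168.1.100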
  12. If I want to continue using this container for reverse proxy, combined with the new RC with LetsEncrypt support, I'm going to need to use my second NIC and assign all my Docker containers their own IPs in order to not have a port 443 conflict, right? I'm having some trouble visualizing how best to move forward...
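     (Roughly the setup being described — a sketch, assuming the second NIC is eth1 and a 192.168.1.0/24 LAN; network name and addresses are made up:)

         # create a macvlan network on the second NIC so containers get their own LAN IPs
         docker network create -d macvlan \
           --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
           -o parent=eth1 dockerlan
         # give the LetsEncrypt container its own address, freeing 443 on the host
         docker run -d --name=letsencrypt --network=dockerlan --ip=192.168.1.201 linuxserver/letsencrypt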
  13. $169 again. I think I'm finally going to pick up one or two.
  14. Is there any reason the delete.ds_store script wouldn't actually delete all the .DS_Store files? Tried it last night after Fix Common Problems found a bunch of duplicate .DS_Store files spanning pretty much all of my shares. I disabled the creation of .DS_Store files on my Macs, ran the user script, then ran the Fix Common Problems extended test again. Still had many, many dupes. So I went through and manually deleted them. Just curious if I did something wrong, or if that script perhaps needs an update. Thanks!
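     (One theory on why the script missed them: duplicates flagged by Fix Common Problems are the same relative path present on more than one disk, and the user-share view only exposes one copy, so a cleanup pass over /mnt/user can leave the shadowed copies behind. A sweep over the disk mounts directly — a sketch; dry-run first — would catch all of them:)

         find /mnt/disk* -iname '.DS_Store' -print    # dry run: list every copy on every data disk
         find /mnt/disk* -iname '.DS_Store' -delete   # then remove them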
  15. Mine is set up the same way as above, but instead of nobody / users I went with 99 / 100. Same result.
  16. For whatever reason, I've had to increase the number of connections in nzbget to get to the same download speed as sab. No clue why that is. I've got a 300 Mbps connection and am able to saturate my download. You may also want to take a look at the performance tips on the nzbget wiki. Though my server is pretty beefy, I reduced the logging level, as I just don't need it writing to my SSD constantly for every little thing.
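     (The specific knobs, for anyone curious — nzbget.conf option names from memory, so verify against your version's settings pages:)

         # Settings -> News-Servers: more parallel connections to the provider
         Server1.Connections=20
         # Settings -> Logging: stop writing per-article detail messages to the log
         DetailTarget=none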
  17. Thanks, all, for clarifying. dmacias, I was curious whether I'd have to manually add a dnsmasq entry in Merlin, so I appreciate you specifically mentioning that. Last question, hopefully: I installed the letsencrypt container, exposed the appropriate ports, and got the certs for my domain and subdomain... https://internal.ip.of.unraid works fine (albeit the cert doesn't like the URL), but I can't get into http://internal.ip.of.unraid:81 ... Does the http not matter, since the whole point is that we're using https now? Just want to make sure this is by design before I continue on... Edit: Nevermind. I went ahead and set it up even though I can't access port 80 (actually 81) internally. As long as I reference https locally, I can access all the server apps. Forcing an 80-to-443 redirect externally. Thanks again for the writeup!
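     (That external 80-to-443 redirect amounts to an nginx server block like this — a sketch of what the LetsEncrypt container's bundled config does, with a hypothetical domain:)

         server {
             listen 80;
             server_name mydomain.com;
             # send every plain-HTTP request to the HTTPS listener
             return 301 https://$host$request_uri;
         }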
  18. I completely understand the concept and appeal to be able to securely access your services remotely, but how would this impact local use? I assume you wouldn't be going to your domain (I feel like my router wouldn't allow that) but still access your various services via the traditional methods when on your internal network. If some of the apps (like sonarr, for instance) require changing the URL base, would that just mean I'd have to change the way I locally access sonarr on my network (e.g., internalip:8989/sonarr)?
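     (For context: a URL base pairs with a reverse-proxy location block along these lines — a sketch, backend address hypothetical, assuming Sonarr's URL Base is set to /sonarr — after which local access does indeed become internalip:8989/sonarr:)

         # nginx: route /sonarr on the proxy to the Sonarr container
         location /sonarr {
             proxy_pass http://192.168.1.100:8989;
             proxy_set_header Host $host;
             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         }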
  19. So over the past hour unRAID is still trying to unmount, but /mnt/cache is keeping it from doing so. 2 of my remaining 3 SSDs in the cache pool are showing consistent activity, so I have no idea if it's just reading and trying to unmount, or if there's some sort of rebalancing going on (or whatever... I'm not all that familiar with BTRFS). I'm not going to try to power down the server until someone can chime in with a good option for how to deal with this... ugh. Edit: OK, shortly after this post, the webgui returned. Not sure how BTRFS works, but when it was finally able to unmount, I assigned the new drive and now it's rebalancing. Data appears to be safe. Strange how it took so long to be able to do this, but I'm glad all is well. Just posting this in case someone else runs into a similar experience.
  20. Hello! I recently wanted to upgrade one of my 4 SSD cache pool drives with a larger one. I shut down the array, removed the old drive, replaced it with the new, and restarted my server. I forgot to enter maintenance mode, however, so unRAID went ahead and mounted the array. When I tried to shut down the array to replace the drive in the webgui, it's just sitting there trying to unmount. The syslog is reporting:

         Jun 8 16:54:51 titan emhttp: shcmd (3277): umount /mnt/cache |& logger
         Jun 8 16:54:51 titan root: umount: /mnt/cache: target is busy
         Jun 8 16:54:51 titan root: (In some cases useful info about processes that
         Jun 8 16:54:51 titan root: use the device is found by lsof(8) or fuser(1).)
         Jun 8 16:54:51 titan emhttp: Retry unmounting disk share(s)...
         Jun 8 16:54:53 titan kernel: BTRFS info (device sdd1): found 10 extents
         Jun 8 16:54:53 titan kernel: BTRFS info (device sdd1): relocating block group 1859598286848 flags 17

     I really hope I didn't muck things up. Any suggestions on what I can do to get the array stopped and this drive replaced? THANKS!
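     (Two read-only status queries that would show what the pool is doing at that point — a sketch:)

         btrfs balance status /mnt/cache   # reports a running balance, if any
         btrfs fi show /mnt/cache          # devices in the pool and space used on each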
  21. The unRAID version of Plex is meant to run natively as a plugin on Slackware, and Docker containers don't run Slackware.
  22. Thanks, sparkly. Can confirm that Plex corrected the issue and the container now updates as expected.
  23. I wrapped up my conversion from RFS to XFS last week. Since I have only one parity disk, I was able to do the drive reassignments with the "new config" tool and assignment swaps as detailed, without mucking with my user shares. I followed the tutorial on the wiki, and I'm happy to report that a parity check after all 10 drives were converted came back with zero sync errors! Looks like it even slightly improved my parity check speed as well. But most importantly, I'm just glad my array doesn't spin up now when I update or stop a Docker container, as it was doing with my RFS-formatted drives. Not sure if you'd like to reflect in the wiki that it's now confirmed that the process works... Thanks to all who contributed!