kaiguy

Members
  • Content Count

    580
  • Joined

  • Last visited

Community Reputation

1 Neutral

About kaiguy

  • Rank
    Advanced Member


  1. Happy Pro-license user since 2010 (you were, what, 5 at the time?). First build was an i3 in a tower case. I actually just recently retired one of the last remaining drives from that build--not even due to errors, just wanted to rid my system of all old Green 2TB drives for speed purposes. Then, like many people here, I did a JohnM Atlas-clone build around 2012. Aside from adding another HBA and many, many disks, my server has pretty much remained the same since then, happily chugging away 24/7. Unraid is great for many reasons: the drive aggregation, the continual evolution (VMs, Dockers FTW), the reliability, the community... I've probably turned 10-12 people on to Unraid, and I don't even have many tech-inclined friends. Happy Birthday, Unraid!!
  2. This issue popped up for me last night and it's been bugging me ever since. I have Plex data on an unassigned SSD, was direct streaming/direct playing a file, and I started getting constant hangs while watching a video. Perhaps it was because I was trying to watch a 4K remux, but I still experienced this issue with the file on the cache pool. Cache is connected via mobo SATA3, whereas my parity and data drives are connected via an IBM M1015/LSI expander... so I'd think the mover process wouldn't interfere with the cache at all (a quick way to check is sketched after this list).
  3. Just purchased this app. Excellent job! Question: have you ever revisited the request to implement unassigned devices? Even if it's limited, like simply showing the devices and free space on the disks view?
  4. Headless 6.6.7 upgrade to rc6 here. No issues at all.
  5. I'd like some advice on reconfiguring my cache/SSD drives. I've been using a 4-SSD cache pool for quite some time (~500 GB total), housing my appdata, Dockers, and a single Windows 10 VM. I've noticed that when files are transferred to or unpacked on the cache pool, my Dockers appear to hang. I figure it may help performance to separate some of these SSDs out for specific purposes. In preparation I've purchased 2 additional 500 GB SSDs, and I plan on sunsetting 2 of the 4 existing 250 GB SSDs. My initial thought is to remove the pool altogether and set up the following:
       • 500 GB SSD for cache (maybe x2 for a pool, but I'm just not sure if it's necessary). I used to have the mover run once per week, but if I removed the redundancy benefits of the pool, I'd just run it nightly to limit the potential for loss to one day.
       • 250 GB SSD for the VM
       • 250 GB SSD for Dockers/appdata
     I'd need to set up a backup script to write the data on these other SSDs to the array... maybe a combination of the CA appdata backup plugin and some other script for the VM (a rough sketch is after this list). Does anyone have any suggestions? Or perhaps you have your system configured in an optimal way to minimize apparent Docker/VM hangs on write? Thanks so much!!
  6. I purchased a couple OEM licenses from a somewhat questionable online source, and I've had no issues activating them on both unRAID and VMWare Fusion for my Mac.
  7. I just copied my whole sonarr appdata directory as a backup and upgraded in place. It's been working great. My guess is it changes the database, so I wouldn't assume going back to stable will be possible. But if you try running it for a few days and don't like it, you can always restore your backed-up folder and won't be too far out of date (the backup/restore flow is sketched after this list).
  8. Updated on Saturday. Started noticing that all my drives were pretty much always spun up. I read in this thread that it could be a combo of Nerd Pack and cache_dirs, so I removed all Nerd Pack items, uninstalled the plugin, and updated cache_dirs to the latest unofficial version. Still experiencing it. Is there anything else in this release that could potentially cause this issue? (One way to trace what's touching the disks is sketched after this list.) Thanks!
  9. Is there any way to access the phantom branch (Sonarr v3) with this container? I know develop is an option, but my gut tells me that the container itself needs to support the given branch for me to be able to just change a tag and update... Edit: Nevermind. I just changed the repository within the container to lsiodev/sonarr-preview and it started right up! (The CLI equivalent is sketched after this list.)
  10. Thanks all. I'm comfortable with the flashing process (I did the crossflash with my M1015 years ago), but I just hadn't found much info on the 9201-8i as it relates to flashing firmware (especially since there seems to be no official 9201-8i firmware). I'll give it a go! (The command sequence I have in mind is sketched after this list.)
  11. Can anyone please confirm that I can flash 9211-8i firmware on a 9201-8i card? And in the process remove the BIOS? Thanks!
  12. Bumping this topic, since I'm getting the itch to upgrade yet again. My Atlas clone hasn't seen any upgrades at all (aside from HDDs and SSDs) since I built it in 2014. Truth be told, while it has been running great, I think the processor is starting to show its age with all the Dockers and the VM I run. The case, PSU, SAS card, and expander should be good to go, but I know I'll need a new mobo, CPU, and DDR4 RAM... I have enjoyed the stability and quality of the Xeon and Supermicro mobo (including IPMI, since I run headless), but I wonder if I wouldn't get more utility switching to consumer-grade hardware with a modern Core processor (for Intel QSV support, for example, for Plex hardware transcoding). Does anyone have a suggestion on a good upgrade path for me? I haven't researched hardware in years and could really use some advice. I've been reading through the build and mobo subforums, but I'm not feeling like I'm making much progress. I have fairly heavy Plex use (Docker), about 7 running Docker containers total, and 1 Windows 10 Pro VM... Low-ish power would also be a plus! Thanks in advance!
  13. Thanks for this! The only reference to port 443 I could find was:
        tcp6  0  0  :::443  :::*  LISTEN  5814/docker-proxy
      which I believe would be the LE container. Not sure why it won't let me start the container with 443... especially since it's been working fine for months.
  14. I'm suddenly running into a problem where it appears the LetsEncrypt container won't load because 443 is already in use. But I don't even have SSL enabled on my server, and prior to disabling it, I set the HTTPS port to 444. No changes to my server config, but I did have an unclean shutdown and a parity check is running (though that really shouldn't have any effect).
        /usr/bin/docker: Error response from daemon: driver failed programming external connectivity on endpoint letsencrypt (7c3343119f45bcf4276a0xxxxxxxxxf6791f5be978ae5): Bind for 0.0.0.0:443 failed: port is already allocated.
      I can't figure this one out (some diagnostics to narrow it down are sketched after this list)! Any thoughts? THANKS!
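
A quick way to check the mover theory in item 2: a minimal sketch, assuming sysstat/iostat is installed (e.g. via the Nerd Pack plugin) and that the cache SSD shows up as sdc (a placeholder; confirm the device name first):

    # Identify the cache SSD, then watch its utilization while mover runs
    # and a stream is playing.
    lsblk -o NAME,SIZE,MODEL
    iostat -x sdc 5      # %util near 100 means the SSD itself is saturated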
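
For the VM/appdata backup script mentioned in item 5, a rough sketch of a nightly cron job. The mount points /mnt/disks/vm_ssd and /mnt/disks/docker_ssd, the backups share, and the "win10" domain name are all placeholders, not anything Unraid sets up for you:

    #!/bin/bash
    # Nightly copy of the standalone SSDs to the array.
    DEST=/mnt/user/backups/ssd

    # Shut the VM down cleanly so the vdisk is consistent before copying.
    virsh shutdown win10
    sleep 60   # crude; polling `virsh domstate win10` in a loop would be safer

    rsync -a --delete /mnt/disks/vm_ssd/     "$DEST/vm/"
    rsync -a --delete /mnt/disks/docker_ssd/ "$DEST/appdata/"

    virsh start win10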
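
The backup/restore flow from item 7, sketched out. The container name "sonarr" and the appdata path are assumptions; adjust to match your template:

    # Before upgrading: stop the container so the database isn't mid-write.
    docker stop sonarr
    cp -a /mnt/user/appdata/sonarr /mnt/user/appdata/sonarr.bak
    docker start sonarr    # now switch the branch/repo and update

    # To roll back to stable later:
    docker stop sonarr
    rm -rf /mnt/user/appdata/sonarr
    mv /mnt/user/appdata/sonarr.bak /mnt/user/appdata/sonarr
    docker start sonarr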
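
One way to trace the always-spun-up drives in item 8, assuming inotify-tools is installed (e.g. via Nerd Pack). Note that setting up a recursive watch on a large disk can take a while:

    # Log whatever opens or reads files on a given data disk; repeat per disk.
    inotifywait -m -r -e open -e access /mnt/disk1 2>/dev/null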
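
The CLI equivalent of the repository swap in item 9. On Unraid you'd normally just edit the Repository field in the container template; the port and volume mappings below are assumptions based on the usual linuxserver.io layout:

    docker pull lsiodev/sonarr-preview
    docker stop sonarr && docker rm sonarr
    docker run -d --name=sonarr \
      -e PUID=99 -e PGID=100 \
      -p 8989:8989 \
      -v /mnt/user/appdata/sonarr:/config \
      -v /mnt/user:/media \
      lsiodev/sonarr-preview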
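
The flashing sequence I have in mind for items 10-11, per the usual M1015-style crossflash guides. This is a sketch, not a verified 9201-8i procedure; 2118it.bin is the IT-mode firmware file from the 9211-8i package, and the SAS address is a placeholder (read yours off the card's sticker first):

    sas2flash -listall                       # note the adapter and SAS address
    sas2flash -o -e 6                        # erase flash (don't reboot after this!)
    sas2flash -o -f 2118it.bin               # flash IT firmware; omitting -b skips the BIOS
    sas2flash -o -sasadd 500605bXXXXXXXXX    # restore the SAS address (placeholder)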
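
Some diagnostics for the port 443 conflict in items 13-14; nothing here is specific to the LetsEncrypt container:

    # What is actually listening on 443?
    netstat -tlnp | grep ':443'

    # Which containers publish 443?
    docker ps --format '{{.Names}}\t{{.Ports}}' | grep 443

    # If a docker-proxy is orphaned (container gone but the port still held),
    # restarting the Docker service usually clears it. On Unraid:
    /etc/rc.d/rc.docker restart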