About TyantA


  1. Thanks for the advice / tips! So no need to race off and RMA then, and in theory they should be fine to re-deploy as replacements for smaller drives?
  2. My 8TB Toshiba parity drive that I got about a year ago threw 5 CRC errors pretty close together, then nothing for a while. Within the last two months, it inched up to 8. Doesn't seem like much, but I don't like the trend. So I ordered a new drive, which finally showed up; I replaced it with an 8TB RED and the rebuild finished this morning. During the rebuild, I saw that another drive, a Seagate 4TB, now shows an error count of 314! That seems far more concerning. I also happen to have ordered a 4TB RED, so I'll get to subbing that one in today. My question is: what's the threshold for concern? If they're still under warranty for a while, is it OK to let them go, or is it best practice to replace a drive as soon as it shows signs of CRC errors? I intend to replace these under warranty and re-deploy them to replace smaller disks. Thoughts?
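For reference, the count in question is SMART attribute 199 (UDMA CRC error count), which usually points at the cable or backplane rather than the disk itself. A minimal sketch of pulling the value out of `smartctl`-style output - the sample line below is invented for illustration, and the exact attribute name can vary by vendor:

```shell
# Hypothetical single line of `smartctl -A /dev/sdX` output (values invented):
sample="199 UDMA_CRC_Error_Count 0x003e 200 200 000 Old_age Always - 314"

# The raw value is the last field. CRC errors are cumulative and never reset,
# so the trend (is it still climbing?) matters more than the absolute number.
crc_raw=$(echo "$sample" | awk '/UDMA_CRC_Error_Count/ {print $NF}')
echo "CRC error count: $crc_raw"
```

On a real system you would run `smartctl -A /dev/sdX | grep -i crc` against the actual device, reseat or replace the SATA cable, and then watch whether the count keeps moving.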
  3. Ok, so some further reading reminded me that I can't use GPU acceleration with Plex unless PMS is running on a Windows VM with the GPU passed through. Hrmph. I suppose that's an option; however, I'm not sure what impact gaming on the same VM that is running PMS would have. I really thought I had solved this, as the temp hardware I just pulled out was a Z68 board & 2600K CPU. Plex could in theory have used QuickSync with that setup... except the K CPU apparently doesn't support IOMMU (VT-d). Sigh. So now I'm getting more horsepower (and power consumption) but losing QuickSync for Plex, which means my issue with playing H.265 won't be resolved unless I go the VM route. I suppose it's not a bad option, and it answers the question of getting Plex to use the best GPU alongside the same gaming VM - again, assuming I can set the primary card in slot X16_1 as passthrough for the VM rather than having it assigned to Unraid. I just feel like relying on a VM for something as "critical" in our house adds another point of failure; I liked how the docker was unaffected by VMs being spun up or down. Also, how hard would it be to move from a PMS docker to a Windows VM install? I can't afford (the time) to start from scratch.
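For comparison, on a host whose CPU does have QuickSync, the linuxserver.io Plex container can do hardware transcoding without any VM by exposing the kernel's VA-API device. A sketch, with illustrative names and paths (and note this requires Plex Pass):

```shell
# Sketch: hardware transcoding in a Plex docker on an Intel iGPU host.
# Container name, volume paths, and media share are examples, not from the post.
# The key part is passing /dev/dri (the VA-API render device) into the container:
docker run -d \
  --name=plex \
  --net=host \
  --device=/dev/dri:/dev/dri \
  -v /mnt/cache/apps/plex:/config \
  -v /mnt/user/media:/data \
  linuxserver/plex
```

In the Unraid docker template UI the same thing is done by adding `/dev/dri` as a device rather than editing a `docker run` line by hand.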
  4. Got parts to upgrade my HEDT workstation to Ryzen, and no one seems to want my old i7 3970X / Asus P9X79 Pro / 32GB setup, so today it gets transplanted into my primary Unraid box. I'm hoping my goal of getting Unraid with several dockers (including Plex) and 2x Windows 10 VMs out of it will be more of a reality now than ever, with 6c/12t @ 3.5GHz (4.0 boost) at my disposal (up from 4/8) and 32GB of RAM to play with. I'm also looking to set up 3x GPUs, as there's no onboard video. All of them are AMD cards:

  • Radeon RX 470 8GB for my "casual gaming" and primary VM, which also functions as my HTPC hooked up to a projector.

  • Radeon HD 7950 3GB for my secondary VM, which would probably only be used for split-screen gaming on my projector (retro titles like Portal). I'm toying with running a long HDMI cable upstairs to replace the main-floor HTPC setup with this.

  • An old AMD single-slot Radeon HD 3470 256MB card for the server's own purposes. (Alternatively, I have a Quadro FX570, but I like the low profile of the AMD card for cooling purposes.)

The question is: which cards in which slots? The manual shows that triple GPUs would mean using PCIe X16_1, PCIe X16_2 @ x8 and PCIe X16_4 @ x8. Edit: I forgot to mention I have to sandwich a Dell controller card in there somewhere too. I wonder why X16 slot 3 is excluded from the above - maybe there are not enough lanes for it to be used, or just not for a GPU? Physically speaking, I can fit the RX 470 in slot 1, the 3470 in slot 2, the Dell controller in slot 3 and the 7950 in slot 4. It's tight, but with extra cooling??

I've read a bit about passing the primary GPU through to Unraid for the purposes of Plex, but I'm confused about whether it's PMS or the Plex client that needs the GPU horsepower. I partially picked up the RX 470 for its better decoding abilities, as I've been having trouble with H.265 etc. Also, if I can use it for Plex, presumably I can't use it for a VM. Finally, to me it makes the most sense to put the 470 in the X16_1 slot, but reading has indicated that primary-card passthrough can be tricky. Can a 'primary' card (i.e. the one used by Unraid) live in a slot other than PCIe_1? This is more confusing than I was hoping it would be, and the more I read, the more lost I feel. Could someone shed some light on the best multi-GPU setup given this hardware? TIA.
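On the primary-card passthrough question, one common approach (a sketch, not a guaranteed recipe for this board) is to stub the VM-bound GPUs so the Unraid console never claims them, by binding their vendor:device IDs to `vfio-pci` at boot:

```shell
# Step 1: find the vendor:device IDs of the cards to pass through.
# Each GPU usually shows a VGA function plus an HDMI audio function.
lspci -nn | grep -iE 'vga|audio'

# Step 2: add the IDs to the kernel append line in /boot/syslinux/syslinux.cfg.
# The IDs below are examples only - use the ones from your own lspci output:
#
#   append vfio-pci.ids=1002:67df,1002:aaf0 initrd=/bzroot
#
# After a reboot, those devices are held by vfio-pci and are available
# to assign to a VM, while Unraid boots its console on the remaining card.
```

Remember to stub the GPU's audio function along with the VGA function, since they typically sit in the same IOMMU group.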
  5. TyantA

    3970X a bad idea?

    The other option is to see if I can find a buyer for the 3970X and pick up a Xeon E5-2697 v2, which this motherboard should support. 12c/24t is a decent upgrade in cores, with a 20W drop in TDP but also an 800MHz drop in base clock per core (500MHz boost differential). In theory, that would set up the server for multiple VMs, potentially allowing me to sell another computer (replaced by a VM), which *should* cover the cost of the upgrade. The only thing is, I had originally been planning to sell the entire LGA2011 setup and start from scratch (while it was still worth *something*). If I go this route, I'm pretty much cementing it into server-hood for the remainder of its life. Then there's still the challenge of finding an LGA2011 cooler that will fit in a 4U case!
  6. TyantA

    3970X a bad idea?

    Interesting idea I hadn't considered - thank you! Most of the time this will be under-utilized but it would be nice to have the power if/when needed. If this chip goes in my server, I'd consider making it render videos for me instead of on my workstation. I suppose with it currently in my workstation I could get a sense of what it consumes at idle as well to help inform the decision.
  7. From a purely TDP standpoint @ 150W, it seems a less than ideal choice - but I have one - is it workable? I have a small server case currently in an enclosed room, and that seems like a lot of heat to be dumping in there. I guess I have to weigh the climate management against the cost of going a different direction. Also, I don't have a cooler that would fit this processor in this case, so I'd have to buy one. Otherwise, 6 cores / 12 threads is an upgrade. There's 32GB of RAM in there (it's my current workstation) and room for twice that. The Pro board has 8 SATA ports, and at one point I was using the mobo (by itself) in the server, so I know it virtualizes well. On paper, everything looks good except heat and power. Budget is super tight right now. Thoughts?
  8. Actually, the thing that prompted this was a motherboard that was misbehaving. I was trying to build a PC with it but it had onboard audio issues, wouldn't save BIOS settings with the power unplugged even with a new battery and sometimes the NIC would drop out. I ended up using parts from the systems listed above to get this other system up and running. I assumed I'd then have to upgrade one of the other two systems. Then I thought about it: Audio isn't an issue in a server. It's always connected to a UPS and if the BIOS settings reset, NBD. As for the NIC, if it does flake out (seemed to only happen when a PCI sound card was installed) I could potentially use an add-in card. Not to mention the P8Z68-V Pro in question has QuickSync which I'm hoping might come in handy for Plex (even though it's paired with a 2nd gen i7) and still has 8 SATA ports on board. All in all, it's only a few passmarks shy of the 3820 I had in there. Oh, and the PCI(-e) slot config makes it more realistic to get two GPUs in there (not to mention having onboard video freeing one up too!) I dropped it in last night and the server (so far) is behaving just fine! Not sure how much life it has in it with those gremlins... but in some ways this almost feels like a little "upgrade".
  9. Quick version:

Current Unraid server: P9X79 Pro, i7 3820, 16GB DDR3, Arctic Freezer i11 compact cooler
Current desktop: P9X79 (vanilla), i7 3970X, 32GB DDR3, Noctua D14 cooler

I have the opportunity to sell my i7 3820, 16GB RAM & the P9X79. I need to decide whether to upgrade my main desktop to Ryzen (and move the 3970X to the server) or focus on a lower-power, lower-heat server upgrade that will handle 4K. The current mobo doesn't support CPUs with QuickSync, so there's the draw of moving to something that does for its encoding abilities. Prior to this, my plan was to pick up a Radeon RX 470 to better handle transcoding over the 7950 I have in there currently. I don't have a 4K projector yet, but I have started collecting content and plan to make the move one day.

I currently only run one VM, a W10 install that handles my Plex client and other HTPC functions. I wanted to run a 2nd VM for the potential of split-screen gaming on my projector one day, but that's not a priority. 3970X + RX 470 + 32GB RAM would handle all this well, I think, but it would be a bit of a power-hungry beast for 24/7 operation. The other trick is finding a cooler that will fit in my 4U case but still cool the 150W CPU. The Arctic Freezer i11 I'm using currently would need to go with the parts I'm selling; it is, however, rated for 150W. Its successor, the Freezer 12, is only rated for 130W?? But I love the idea of upgrading my main desktop if this is the route I go. The flip side is that it could turn into a much more expensive upgrade, and the budget is tight. Maybe it makes the most sense to look at an Intel QuickSync-capable CPU, mobo, memory and an adequate cooler, then call it a day. Thoughts?
  10. So I just threw a URL in for Transcode and this time the reinstall appears to have worked. Will actually have to test it later though. Also... can I use a disk mount for an unassigned disk for the transcode path? How do I export an unassigned disk as a share so I can access it to actually create the directory I specified in that path again?
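On the unassigned-disk question, a sketch of how this usually fits together (the disk label and paths below are examples, not taken from the post):

```shell
# The Unassigned Devices plugin mounts disks under /mnt/disks/<label>,
# e.g. a disk labelled "scratch" appears at:
#   /mnt/disks/scratch
#
# To use it as the Plex transcode directory, map it in the docker template:
#   host path:      /mnt/disks/scratch/transcode
#   container path: /transcode
# (UD mount points generally need the "RW/Slave" access mode in the template
# so the mapping survives the disk being remounted.)
#
# To browse/create the directory from another machine, the UD plugin has a
# per-disk share toggle that exports the mounted disk over SMB.
```

With the disk shared, the directory can be created remotely first and then referenced in the container's transcode setting.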
  11. Hrmm. It's a thought, however, with a firstborn due to pop in a few weeks and an endless list of things to try to get done before then, I don't think now's the time to embark on that adventure. Especially since I'm trying to get a backup build in place before then too. Not a bad idea though.
  12. Whelp, it's been this way since the 4.x days, so for now I'm going to leave it. Just manually grabbing the Plex docker folder now. There's only the .ssh and (thankfully) Library folder left in there. Looks like I'll be letting that run overnight at its current pace.
  13. And is it much different if I'm not using the default appdata share? I created a share on my cache drive simply called "apps".
  14. I only use direct play on my local network. Internet's too slow for anything else. I've always used the linuxserver.io version for no particular reason. Is it possible / advisable to switch without too much hassle? I didn't realize Plex had their own docker; I'm generally a fan of stock. I had started a manual copy to back things up (mind you, I was after the whole appdata folder) and it was going to take forever. Is there a better way to do it other than copying the folder(s)?
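A faster alternative to a file-by-file copy is a single tar archive, since appdata is typically thousands of tiny files. A self-contained sketch using stand-in paths (on the server you would point SRC at the real appdata folder, e.g. the "apps" share, after `docker stop plex`):

```shell
# Build a tiny stand-in "appdata" folder so this sketch is runnable anywhere;
# on a real server SRC would be something like /mnt/cache/apps/plex.
SRC=$(mktemp -d)/plex
mkdir -p "$SRC/Library"
echo demo > "$SRC/Library/db"
DEST=$(mktemp -d)

# One tar stream is much faster than copying many small files individually,
# and it preserves permissions and ownership in the archive.
tar -C "$(dirname "$SRC")" -czf "$DEST/plex-appdata.tar.gz" plex

# Verify the archive actually contains the expected files:
tar -tzf "$DEST/plex-appdata.tar.gz" | grep -q 'plex/Library/db' && echo "backup OK"
```

Stopping the container first matters because Plex keeps SQLite databases open; archiving them mid-write can produce a corrupt backup.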
  15. Might actually be an issue with the release. I'm not the only one it's happened to, apparently: