Iceman24

Members
  • Content Count: 23
  • Joined
  • Last visited

Community Reputation: 1 Neutral

About Iceman24
  • Rank: Member

  1. They're similar; all very shuckable. There are videos galore on doing so.
  2. Thanks. Are there particular NVMe drives that are free of issues? I've also read in one place that performance is barely different from SATA, since sequential reads/writes aren't really used; the workload is mostly random I/O, which has similar speeds on both. SSDs have become cheap enough that I wanted the latest and greatest for a minor cost difference, plus a cleaner install just in case.
  3. Reading through that thread more carefully has me thinking I should reevaluate my SSD choice, but I'm unsure which to get. The Samsung ones are way more expensive, so I didn't want to just rule out all the others and pay that much more for two of them.
  4. Lol. Funny you picked that one; I was the last poster in that thread. I figured it may have had to do with using an add-in card for connecting M.2 drives. The motherboard I'm looking at has the slots built in. I read a similar post as well that reported issues only with M.2 drives connected via an add-in card rather than a built-in slot.
  5. Thanks. Were there any specific issues with NVMe? I've done some research, and nothing stood out to me regarding NVMe problems.
  6. I'm about ready to order parts for my unRAID build, but I have a question about PCIe lane usage with dual M.2 NVMe cache drives. I'm looking at the Supermicro X11SCH-F motherboard, which has 2 M.2 NVMe ports, each wired PCIe 3.0 x4, with no mention of lane sharing. I won't have more than 2 USB drives plugged in and 3 HDDs at most, most likely. Perhaps a GPU for passthrough at a later date, but that's not planned at this time. From what I've read, the southbridge has its own PCIe lanes, so that seems okay from that perspective. The thing I'm really wondering about, since each of the 2 M.2 slots should run at full speed with a drive installed, is that the uplink from the C246 chipset to the CPU is only x4, not the full x24 lanes the chipset offers. Does this matter for M.2 cache drives? They will be in RAID 1, if that matters.
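The uplink concern in that post can be put into rough numbers. A minimal sketch, assuming the C246's DMI 3.0 uplink is equivalent to PCIe 3.0 x4 and taking ~0.985 GB/s per PCIe 3.0 lane (8 GT/s with 128b/130b encoding); the exact figures are assumptions, not from the thread:

```python
# Rough bandwidth arithmetic for the chipset-uplink question.
# Assumption: DMI 3.0 uplink behaves like PCIe 3.0 x4.
PCIE3_PER_LANE_GBS = 0.985  # ~GB/s per PCIe 3.0 lane after encoding overhead

uplink_gbs = 4 * PCIE3_PER_LANE_GBS     # chipset -> CPU: ~3.94 GB/s
per_drive_gbs = 4 * PCIE3_PER_LANE_GBS  # each M.2 slot: PCIe 3.0 x4

# Two NVMe drives going flat out would want ~7.9 GB/s combined, but the
# shared uplink caps total chipset throughput at ~3.94 GB/s.
combined_demand = 2 * per_drive_gbs
bottleneck = min(combined_demand, uplink_gbs)
print(f"uplink {uplink_gbs:.2f} GB/s, demand {combined_demand:.2f} GB/s, "
      f"effective cap {bottleneck:.2f} GB/s")
```

Even capped at the uplink, that is still several times SATA's ~0.55 GB/s, and a RAID 1 cache pool rarely saturates both drives simultaneously anyway.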
  7. Follow-up question to this. I know the file downloads to cache and then unpacks to cache. Does that mean that, if data is moved to the array daily, you would need close to twice the download size in free cache space for a particular download? Even though the downloaded file gets deleted after it's unpacked, that space was still in use on cache at the same time as the unpacked files, right?
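The arithmetic behind that question can be sketched directly. The sizes below are illustrative assumptions, but the logic follows the post: during extraction the archive and the extracted files coexist on cache, so peak usage is their sum:

```python
# Peak cache usage for a download that is unpacked in place (sizes assumed).
download_gb = 40   # size of the downloaded archive
unpacked_gb = 40   # extracted size (roughly equal for typical archives)

# While unpacking, archive AND extracted files both occupy cache.
peak_gb = download_gb + unpacked_gb
# After the archive is deleted, only the unpacked data remains until the
# mover shifts it to the array.
after_cleanup_gb = unpacked_gb
print(f"peak cache use: {peak_gb} GB; after cleanup: {after_cleanup_gb} GB")
```

So yes, under these assumptions you briefly need roughly twice the download size free on cache, even though the steady-state footprint is only the unpacked data.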
  8. I was considering this SSD for my RAID 1 cache drives. Is this only an issue when you want to use encryption on the SSD?
  9. Thanks. Chipset would be Intel C246. Motherboard would be one in the Supermicro X11 line, such as X11SCH-F, X11SCM-F, or X11SCA-F.
  10. Would the P630 integrated graphics in the Xeon E-2176G work for passthrough to a VM? And what if I also wanted to pass it through to the Plex Docker container for transcoding (which I don't do often)? I assume it couldn't be passed through to both simultaneously?
  11. I'm having trouble figuring out exactly how this works. First, I'm unsure whether I'll use VMs on the new unRAID server I'll be building soon, but I'd like the option. I understand that you can pass a GPU through to a VM, physically connect a monitor to the server, and use the VM that way. But can you pass a GPU through to a VM so that it runs better (smoother frame rate, better visuals) when you only connect to the VM over the network, rather than being plugged into the passed-through GPU's video output port?
  12. I noticed PCIe 3.0 x2 and PCIe 2.0 x4 M.2 slots. This sounds less than ideal for dual cache drive usage.
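A quick comparison shows why those slot widths look underwhelming for NVMe cache drives. Assumed per-lane rates (not from the post): PCIe 2.0 ~0.5 GB/s (5 GT/s, 8b/10b encoding) and PCIe 3.0 ~0.985 GB/s (8 GT/s, 128b/130b):

```python
# Comparing the two M.2 slot widths mentioned above (assumed lane rates).
slot_a = 2 * 0.985  # PCIe 3.0 x2 -> ~1.97 GB/s
slot_b = 4 * 0.5    # PCIe 2.0 x4 -> ~2.0 GB/s

# Both slots top out around 2 GB/s, well under the ~3.5 GB/s a fast
# PCIe 3.0 x4 NVMe drive can sustain, so either slot would throttle it.
print(f"PCIe 3.0 x2: {slot_a:.2f} GB/s, PCIe 2.0 x4: {slot_b:.2f} GB/s")
```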
  13. That can't be true, as I never changed any Docker settings.
  14. Piggybacking off the last post: how would an encrypted unRAID drive be read on another computer? Does it need a certain OS, or do you need certain software?
  15. Thanks, but from prior research, NAT reflection isn't the recommended way to handle such routing. The recommendation is to leave it off and use split DNS, so I'm determined to keep it configured that way. Even if nothing else worked, I'd rather use port 443 both internally and externally for NPM and work around that port being unavailable for the unRAID GUI, which I don't access remotely anyway.