jmcguire525

Everything posted by jmcguire525

  1. With ZFS and the many options for special VDevs, can we please have the 30 drive limit removed or increased for ZFS pools? I plan on moving a 30 drive array over to Unraid that includes 5 additional drives (3 for metadata and 2 for SLOG). It seems that is currently not possible. I'd rather not go the route of using TrueNAS Scale since Unraid is a bit more straightforward when it comes to VMs and Docker. I would have put this as a feature request, but removing an arbitrary limit isn't so much a feature as it is a roadblock for creating/migrating large ZFS pools.
  2. Will the 30 drive limit include special vdevs?
  3. Will special VDev devices be excluded from the 30 drive limit?
  4. Another question regarding the 30 drive limit... Will special VDEV devices be allowed in addition to the 30 drives or will that exceed the limit? I'm planning a 3x10 Raidz2 Pool with a mirrored Metadata VDEV and mirrored SLOG.
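
For what it's worth, a quick tally of the layout I described above (just counting devices, nothing Unraid-specific) shows why the question matters:

```python
# Device tally for the planned pool: 3x10 RAIDZ2 plus mirrored
# special (metadata) and SLOG vdevs. Pure arithmetic, to show how
# quickly support vdevs push a pool past a 30-device cap.
data_drives = 3 * 10       # three RAIDZ2 vdevs, ten drives each
special_mirror = 2         # mirrored metadata (special) vdev
slog_mirror = 2            # mirrored SLOG vdev
total_devices = data_drives + special_mirror + slog_mirror
print(total_devices)  # 34 if special/SLOG devices count toward the limit
```

So if special and SLOG devices count, the pool is over the limit before a single data drive is added beyond the 3x10.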
  5. I want to mirror two SSDs for temp/cache files that will never be written to the array (Plex and Channels DVR transcodes, incoming NZB downloads). This seems pretty easy to do if I only want to use a single disk, but to do it with a mirror I've only found one solution... This is a pretty old post; is it currently the only method of achieving my goal?
  6. Thanks for the info. In my defense, I did read up on this before assuming it needed further separation, and this post made me believe the bridge would require ACS. Knowing I don't need it saves me from buying additional hardware!
  7. I've looked at the IOMMU groups with 2 intel motherboards (EVGA Stinger Z170 and ASRock Z270M-ITX) both have the GPU and PCI bridge in the same group. Right now I'm considering a Ryzen ITX build, 3600 with Asus STRIX B450-I or ASRock Fatal1ty B450. I do not know what the grouping for those boards are. If the PCI bridge is in a group with only the GPU does it require the ACS override? Any idea what the IOMMU groupings are for the Ryzen boards I'm looking at?
  8. Unfortunately the one I have lying around is an exception to that, and it's a decent ASRock board.
  9. Can anyone recommend a good ITX board that can easily do GPU passthrough without the ACS override?
  10. I'm new to Unraid and VMs in general, and I'm curious if my expectations were too high. My system consists of an i7 7700 with 16GB DDR4, running a Deepin VM on an SSD using the UD plugin. The on-board Intel graphics are set to primary and a GTX 750 Ti is passed through to the VM. I installed TigerVNC and I'm able to remote into the VM; things work, but the graphics performance is a bit slow even on my local network. Even using it as a standard desktop I have audio issues (which I'm sure I can read up on and solve). Going in, I was expecting to be able to remotely access the VM with performance on par with a local desktop, since I have a fiber 250/250 connection. Is that possible?
  11. So it should be transparent to the system, not acting as its own RAID card? Sounds like it's worth trying out! I would add a full-sized card for more SATA connections, but my only x16 slot is taken up by a GPU. This is a secondary NAS and won't need any more drives, but hopefully this will do what I'm looking for.
  12. I was looking for a way to add a few more SATA ports to a mITX build; the on-board chipset only has six, and if the NVMe slot is occupied by a SATA device then one of the SATA ports is disabled. Would this card work for me... I haven't been able to find any specific information on it, and I'm not sure if it acts as its own controller and would pass through as PCIe, or if it would still be using the on-board SATA controller. m.2 sata expansion card
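
One way I know to answer that on a live system is to look at `lspci` output: if the card carries its own SATA controller chip it shows up at its own PCI address, which is what you'd need for clean passthrough; if no new controller appears, it's just routing the chipset's ports. A quick sketch of that check (the sample lines below are made up for illustration, not from the card in question):

```python
def sata_controllers(lspci_output):
    """Pick out SATA controller lines from `lspci`-style output,
    returning (pci_address, description) pairs."""
    found = []
    for line in lspci_output.splitlines():
        if "SATA controller" in line:
            addr, _, desc = line.partition(" ")
            found.append((addr, desc.strip()))
    return found

# Illustrative sample: one on-board Intel controller plus a discrete
# ASMedia chip, the kind some M.2 expansion cards carry.
sample = """\
00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
02:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller"""
for addr, desc in sata_controllers(sample):
    print(addr, "->", desc)
```

If the second kind of entry shows up after installing the card, it's its own controller and sits at its own PCI address.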
  13. Thanks... If I'm understanding correctly it will automatically set it as a mirror as long as I have Data1+Parity1. In this situation Turbo Write would be of no benefit to me, but if I chose to go with 4x4TB it would speed up write times a good bit?
  14. If expansion isn't a concern, how will a 2x10TB configuration compare to a 4x4TB? Will the two 10TBs act as a RAID1 mirror where there is no parity calculation, meaning that if the first drive fails it will not need to be rebuilt with calculations and will instead be a complete copy, or is the parity drive always calculated with expansion in mind? Would a mirror have faster write speeds without a cache drive than a 4 drive array?
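
To put rough numbers on the capacity side of that question (a sketch of Unraid's single-parity layout as I understand it: the biggest drive becomes parity, the rest just add up, no striping):

```python
# Usable capacity under single-parity Unraid: the largest drive is
# parity, the remaining drives are independent data disks.
def usable_tb(drive_sizes_tb):
    sizes = sorted(drive_sizes_tb, reverse=True)
    return sum(sizes[1:])  # everything but the parity drive

two_big = usable_tb([10, 10])        # one data drive + one parity
four_small = usable_tb([4, 4, 4, 4])  # three data drives + one parity
print(two_big, four_small)  # 10 12
# With a single data drive, XOR parity of one drive is just a copy of
# it, so the 2x10TB case behaves like a mirror in practice.
```

So the 4x4TB layout gives more usable space (12TB vs 10TB), while the 2x10TB case effectively degenerates to a mirror.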