About KBlast

  1. I am using an ASRock EP2C612 WS motherboard with the ASUS Hyper V2 card: a 1TB 970 EVO Plus in the first slot and 2x 480GB NVMe drives in slots 2 and 3. I have the latest BIOS and I have the card in PCIe Slot 1, with PCIe Slot 1 set to x4/x4/x4/x4. Unfortunately only the 1TB drive shows up and the others don't. Any ideas on how to solve this?
  2. Thanks for the further feedback and follow up. I probably should get a UPS. Any recommendations on a brand, specific product, or the amount of VA for this system? I am looking at this one right now: https://www.newegg.ca/cyberpower-cp1500pfclcd-nema-5-15r/p/N82E16842102134 Also, it looks like a UPS will give 5-15 minutes of runtime. Great for short blips in power, but not great for riding out a blackout that lasts 30 minutes+. How do people handle this if they are remote when this happens? Do they have a script running that checks if the server is running on battery power and if so
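A minimal sketch of that kind of watchdog, assuming apcupsd is installed and its `apcaccess status` tool is on the PATH (the field names match apcupsd's output; the 50% threshold and the helper functions are illustrative, not a tested setup):

```python
import subprocess

def on_battery(status_text: str) -> bool:
    # apcaccess prints lines like "STATUS   : ONLINE" or "STATUS   : ONBATT"
    for line in status_text.splitlines():
        if line.startswith("STATUS"):
            return "ONBATT" in line
    return False

def battery_charge(status_text: str) -> float:
    # e.g. "BCHARGE  : 100.0 Percent"
    for line in status_text.splitlines():
        if line.startswith("BCHARGE"):
            return float(line.split(":")[1].split()[0])
    return 100.0

if __name__ == "__main__":
    # Poll the UPS; shut down cleanly once we're on battery and below 50%.
    out = subprocess.run(["apcaccess", "status"],
                         capture_output=True, text=True).stdout
    if on_battery(out) and battery_charge(out) < 50.0:
        subprocess.run(["shutdown", "-h", "+1"])  # one-minute warning
```

In practice apcupsd can run its own shutdown hooks, so a hand-rolled poller like this is only needed if you want custom behaviour (e.g. pausing VMs first).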
  3. Proxmox looks interesting. One person said they use Proxmox as the hypervisor and Turnkey Linux FileServer in a VM for their NAS (https://www.turnkeylinux.org/fileserver). Another said they use OpenMediaVault in a VM for their NAS (https://www.openmediavault.org/). I have an 8-port card I could pass through (https://www.amazon.ca/gp/product/B0085FT2JC/). Is this a workable way to do what I want, or am I adding complexity or doing something wrong? It looks like I'd have my VMs for my projects, then I'd have a VM for the NAS / media server. Would the NAS HDDs be expose
  4. Thanks. I ordered all the parts. I got a killer deal on RAM but it's a long ship time, so I have some time to think through unraid vs FreeNAS. I'm not so worried about data integrity. It's important, but not something I'm worried about. Our business data isn't huge and it's all in the cloud and backed up. The main data I'll store on the server will be media, plus the VMs running some code. All that code is stored in my personal GitHub repo, so no worry there. My interest in ZFS would be performance, not data integrity. I'll read that article you linked on ZFS as soon as I ca
  5. Thanks for the tips, Ford Prefect! Those are very helpful. I also got some feedback recently I should avoid unraid and go with FreeNAS for speed. Is there a thread or video that compares the pros and cons of each system? I don't know enough yet to know what will work best for what I am doing so I am trying to get feedback from pros like you. Do you know how difficult it would be to switch from unraid to FreeNAS or FreeNAS to unraid?
  6. 100% agree. I want to go for a nice system, but I also don't want to end up with expensive parts that don't really fit what I need. Either I'd need different expensive parts, or I'd way overspec and end up running a Rube Goldberg machine without all the pizzazz. So are you saying run 1 VM per NVMe, or can I run multiple on "virtual disks" (like a partition?), but they may be IOPS limited? Win 10 is 20GB of install space. The Win 10 based project that requires the GT 710s doesn't require a ton of space or IOPS. If I can partition, I would run both off that Corsair Force 240GB you recom
  7. Thanks, that information is very helpful. I don't know what kind of projects I'll get into in the future, but for now none of my VMs pass data back and forth. Is it possible to partition an NVMe drive and pass through partitions, or just share the NVMe and give the VMs their own folder to write to? I know it depends on my use case, but any general rules on how many VMs per NVMe if they can be shared or partitioned? If they can be shared in some fashion, I might do one larger NVMe. How many GB would you recommend for a normal Windows 10 VM or Linux VM? Maybe multiple
  8. Thanks for the feedback. I haven't worked on that project in a while (one reason I am building this unraid box). I believe it's actually 300-500GB/day. I don't need all that data to be permanently stored; rather, I need to process through it and then it can be deleted. I think we'd need 4TB for our own storage needs, 4TB for temporary storage, then 2x 4TB would be parity. I could look into 2 x 8TB if you think having 1 storage and 1 parity is better? 500 GB is a lot to download in a day, but I don't max out my 1Gbps connection (speedtest: ~600Mbps) so I don't think dual 1Gbps or a 10Gbps
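As a quick sanity check on whether the existing link keeps up, the daily transfer against the measured line rate works out to well under two hours a day of sustained downloading (assuming decimal units for the 500 GB and a steady ~600 Mbps):

```python
daily_bytes = 500 * 1000**3            # 500 GB/day at the high end, decimal units
link_bps = 600 * 1000**2               # ~600 Mbps as measured by speedtest
seconds = daily_bytes * 8 / link_bps   # bits to transfer / bits per second
hours = seconds / 3600
print(f"{hours:.2f} hours/day of sustained transfer")  # ~1.85 hours
```

So even the worst-case day fits comfortably in a single gigabit link, which supports the post's conclusion that dual 1Gbps or 10Gbps networking isn't needed for ingest alone.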
  9. After further research I settled on the general idea of my build. Trying to keep it under $2000 if possible. It's currently about that much. Part list:
  Motherboard: ASRock EP2C612 WS SSI EEB Dual-CPU LGA2011-3 Motherboard ($414.60 @ Amazon)
  CPU: Intel Xeon E5-2670 V3 2.3 GHz 12-Core Processor (found 'em for ~$100/ea @ eBay) x 2
  CPU Cooler: Noctua NH-U12DXi4 55 CFM CPU Cooler ($64.95 @ Amazon) x 2
  Memory: Samsung 32 GB Registered DDR4-2133 CL15 Memory ($92.00 @ Amazon) x 4
  Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive ($89.99 @ Adorama) x 4
  Cac
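Totting up just the parts quoted above (quantities included, the CPU price being the approximate eBay figure) shows the headroom left under the $2000 target before the cache drive and remaining components:

```python
# (price_each, quantity) for each listed part, as quoted in the post
parts = {
    "Motherboard (ASRock EP2C612 WS)": (414.60, 1),
    "CPU (Xeon E5-2670 V3)":           (100.00, 2),  # ~eBay price
    "CPU Cooler (Noctua NH-U12DXi4)":  (64.95, 2),
    "Memory (Samsung 32 GB RDIMM)":    (92.00, 4),
    "HDD (WD Blue 4 TB)":              (89.99, 4),
}
total = sum(price * qty for price, qty in parts.values())
print(f"Listed parts total: ${total:.2f}")  # $1472.46
```

That leaves roughly $530 for the cache SSD, PSU, case, and fans before breaking the budget.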
  10. I am also interested in taking 1 GPU, virtualizing it into multiple vGPU resource pools, then sharing those pools with different VMs. For example: a 10GB card split into 10 x 1GB vGPUs shared across 10 Windows VMs, each thinking it has its own discrete 1GB GPU. Is this possible now on unraid?
  11. I read somewhere that I can pass through 1 GPU to 1 VM, but I can't split up 1 GPU into multiple "GPU resource pools" and then pass those through to different VMs simultaneously. If that's the case, then I couldn't use an RTX 3000 in this build - I'd need to take the 2-box approach: make one server build, then one daily driver AI/ML build. Some of the projects I want to run require their own GPU per Win 10 VM, but it doesn't need to be a super powerful GPU; 2GB is sufficient. If it's true you cannot virtualize your GPU and pass it through to multiple VMs, then I have come up with a on
  12. Background: I am an "experienced beginner" Linux user: I know all the basics pretty well. I am an intermediate Python programmer: self-taught over 3 years. I have no real networking, NAS, or RAID experience, but I am patient, so I believe I can start picking it up. I have some idea what I want to do with my unraid server. I am not sure what specific parts / build to go with yet. I am also uncertain whether I should do an unraid server as one box, then a separate daily driver for work/play as another box. I want to: Play FPS games @ 144-240 fps. I am looking at an NVIDIA RTX 3000