Everything posted by KBlast
-
I am using an ASRock EP2C612 WS motherboard with the ASUS Hyper M.2 V2 card: a 1TB 970 EVO Plus in the first slot and 2x 480GB NVMe drives in slots 2 and 3. I have the latest BIOS and I have the card in PCIe slot 1, with slot 1 set to x4/x4/x4/x4. Unfortunately only the 1TB drive shows up and the others don't. Any ideas on how to solve this?
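A quick way to tell whether the two missing drives are a bifurcation problem (not enumerated on the PCIe bus at all) or an OS/driver problem is to count the NVMe controllers the kernel can see. A minimal diagnostic sketch, assuming a standard Linux/Unraid console with lspci available:

```python
#!/usr/bin/env python3
"""Quick check: are all NVMe drives on the Hyper card visible to the OS?

A minimal diagnostic sketch -- it only wraps `lspci` and /sys/class/nvme,
both standard on a Linux/Unraid shell. If only one NVMe controller shows
up here, the missing drives are not being enumerated at the PCIe level,
which usually points at the slot's bifurcation setting rather than the OS.
"""
import os
import subprocess

def pcie_nvme_controllers():
    # Each NVMe SSD appears as its own "Non-Volatile memory controller" in lspci.
    out = subprocess.run(["lspci"], capture_output=True, text=True).stdout
    return [line for line in out.splitlines()
            if "Non-Volatile memory controller" in line]

def nvme_kernel_devices():
    # Controllers the kernel has actually bound the nvme driver to.
    path = "/sys/class/nvme"
    return sorted(os.listdir(path)) if os.path.isdir(path) else []

if __name__ == "__main__":
    ctrls = pcie_nvme_controllers()
    print(f"{len(ctrls)} NVMe controller(s) on the PCIe bus:")
    for c in ctrls:
        print("  " + c)
    print("Kernel nvme devices:", ", ".join(nvme_kernel_devices()) or "none")
```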
-
Thanks for the further feedback and follow-up. I probably should get a UPS. Any recommendations on a brand, specific product, or the amount of VA for this system? I am looking at this one right now: https://www.newegg.ca/cyberpower-cp1500pfclcd-nema-5-15r/p/N82E16842102134 Also, it looks like a UPS will give 5-15 minutes of runtime. Great for short blips in power, but not great for riding out a blackout that lasts 30+ minutes. How do people handle this if they are remote when it happens? Do they have a script running that checks if the server is on battery power and, if so, gracefully shuts it down? If battery power can be detected, it's just a matter of scripting each VM/container/etc. to be powered down within that 5-15 minute window. Any generally applicable approaches to do this?
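For the scripted-shutdown idea: Unraid has built-in UPS support (apcupsd) and a community NUT plugin that can trigger a clean shutdown without any custom code, but the logic would look roughly like the sketch below, assuming NUT's `upsc` and libvirt's `virsh` are available. The UPS name and grace period are placeholders.

```python
#!/usr/bin/env python3
"""Rough sketch: poll the UPS via NUT and gracefully stop VMs on battery.

Assumes NUT is configured (so `upsc` works) and libvirt/virsh manages the
VMs; "ups@localhost" and the grace period are placeholders. Unraid's own
UPS settings (apcupsd) or the NUT plugin can do this for you -- this only
shows the underlying logic.
"""
import subprocess
import time

UPS_NAME = "ups@localhost"   # placeholder: your NUT UPS identifier
GRACE_SECONDS = 120          # stay on battery this long before shutting down

def on_battery() -> bool:
    out = subprocess.run(["upsc", UPS_NAME, "ups.status"],
                         capture_output=True, text=True).stdout
    return "OB" in out.split()   # "OB" = on battery, "OL" = on line power

def shutdown_vms_and_host():
    # Ask each running VM to shut down cleanly, then power off the host.
    vms = subprocess.run(["virsh", "list", "--name"],
                         capture_output=True, text=True).stdout.split()
    for vm in vms:
        subprocess.run(["virsh", "shutdown", vm])
    time.sleep(60)                     # give guests time to stop
    subprocess.run(["poweroff"])

if __name__ == "__main__":
    battery_since = None
    while True:
        if on_battery():
            battery_since = battery_since or time.time()
            if time.time() - battery_since > GRACE_SECONDS:
                shutdown_vms_and_host()
                break
        else:
            battery_since = None
        time.sleep(10)
```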
-
Proxmox looks interesting. One person said they use Proxmox as the hypervisor and Turnkey Linux Fileserver in a VM for their NAS (https://www.turnkeylinux.org/fileserver). Another said they use OpenMediaVault in a VM for their NAS (https://www.openmediavault.org/). I have an 8-port card I could pass through (https://www.amazon.ca/gp/product/B0085FT2JC/). Is this a workable way to do what I want, or am I adding complexity or doing something wrong? It looks like I'd have my VMs for my projects, then a VM for the NAS / media server. Would the NAS HDDs be exposed to my VMs? Could that be toggled? Also, if I wanted them to have access, would traffic route directly between the VMs through the motherboard, or would it go from the server to the router and back to the server because it's exposed as network-attached storage?
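On the routing question: VMs on the same host normally sit on a shared virtual bridge (vmbr0 in Proxmox), so traffic between a project VM and the NAS VM is switched in software on the host and never goes out to the physical router. A minimal sketch of a project VM mounting the NAS VM's export over that bridge; the address and export path are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: mount the NAS VM's export from another VM on the same host.

Assumes the NAS VM exports /export/media over NFS and sits at 192.168.1.50
on the same virtual bridge (both the address and the path are placeholders).
Because both VMs hang off the host's software bridge, this traffic stays
inside the host rather than taking a trip out to the physical router.
"""
import subprocess

NAS_EXPORT = "192.168.1.50:/export/media"   # placeholder NAS VM address/export
MOUNT_POINT = "/mnt/media"

subprocess.run(["mkdir", "-p", MOUNT_POINT], check=True)
subprocess.run(["mount", "-t", "nfs", NAS_EXPORT, MOUNT_POINT], check=True)
print(f"{NAS_EXPORT} mounted at {MOUNT_POINT}")
```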
-
Any update on this?
-
Thanks. I ordered all the parts. I got a killer deal on RAM but it has a long ship time, so I have some time to think through unraid vs FreeNAS. I'm not so worried about data integrity. It's important, but not something I'm worried about. Our business data isn't huge and it's all in the cloud and backed up. The main data I'll store on the server will be media and the VMs running some code. All that code is stored in my personal GitHub repo, so no worry about that. My interest in ZFS would be performance, not data integrity. I'll read that article you linked on ZFS as soon as I can.
-
Thanks for the tips, Ford Prefect! Those are very helpful. I also got some feedback recently I should avoid unraid and go with FreeNAS for speed. Is there a thread or video that compares the pros and cons of each system? I don't know enough yet to know what will work best for what I am doing so I am trying to get feedback from pros like you. Do you know how difficult it would be to switch from unraid to FreeNAS or FreeNAS to unraid?
-
100% agree. I want to go for a nice system, but I also don't want to end up with expensive parts that don't really fit what I need. Either I need different expensive parts, or I way overspec and end up running a Rube Goldberg machine but without all the pizzazz.
So are you saying run 1 VM per NVMe, or that I can run multiple on "virtual disks" (like a partition?), but they may be IOPS limited? Win 10 needs about 20GB of install space. The Win 10 based project that requires the GT 710s doesn't require a ton of space or IOPS. If I can partition, I would run both off that Corsair Force 240GB you recommended.
Could you explain more what you mean by "Use PCIe-x8 Adapters for two NVMe-Pcie x4 each (some x16 slot pairs of that MB will either work in x16/x0 or x8/x8 mode)"? This is the adapter I'm looking at right now: https://www.amazon.ca/YATENG-Controller-Expansion-Card-Support-Converter/dp/B07JJTVGZM/ I believe I can only use 1 per PCIe slot, correct? Or are you saying the adapter is x8 and it can take two NVMe drives that run at x4 each? Something like this: https://www.amazon.ca/CERRXIAN-Adapter-Controller-Expansion-Profile/dp/B07VRY6FPH/ If it can still pass through easily, I'm all for saving PCIe slots. I've got 7, but they are going fast and I want to leave a few open for future opportunities. At the same time, I want to avoid headaches. I've installed 1 GPU in a PCIe slot, a wifi card... and that's my experience with PCIe slots so far =D
I might start with 2 x 32GB RAM sticks, then add more later if needed, per your suggestion. This would help with the budget for going to the 16TB HDDs. I think I will go for 2 x 16TB. The Seagate Expansion Desktop Hard Drive 16TB contains a 16TB Seagate Exos. This was the suggestion I got on reddit: "Personally I prefer the 16TB Seagate as they contain EXOS x16 drives which register for warranty with the drive serial once outside the enclosure. They're Enterprise drives and pretty speedy too... As mentioned, they registered for warranty exactly as all my other drives did. Once they're out of their enclosure you can't distinguish them from any other EXOS enterprise drive.... Amusingly, the warranty on the drive once taken out of its enclosure is a year longer than on the enclosure itself..." In this video, https://www.youtube.com/watch?v=qvIC-z_MBgs, someone shucks one and reveals the Exos 16TB. He already has an Exos 16TB in his box and claims to have done it with numerous Seagate 16TB external HDDs. Looks like a good option. In Canada, I can save ~$100 CAD per drive by shucking. Mind boggling. I guess they have some great margins and the price point for external HDDs is competitive.
I think I'll stick with the 1TB 970 EVO Plus NVMe for budget reasons, but then it gives me a good upgrade opportunity in the future if I really blow it out in the next few years. At that point a 2-4TB will be much cheaper.
-
Thanks, that information is very helpful. I don't know what kind of projects I'll get into in the future, but for now none of my VMs pass data back and forth. Is it possible to partition an NVMe drive and pass through partitions, or just share the NVMe and give each VM its own folder to write to? I know it depends on my use case, but are there any general rules on how many VMs per NVMe if they can be shared or partitioned? If they can be shared in some fashion, I might do one larger NVMe. How many GB would you recommend for a normal Windows 10 VM or Linux VM? Maybe multiple NVMe drives are the way. TeamGroup's 256GB is only $34 USD.
In terms of which SSD, it looks like the Plus has better performance than the Pro from what Newegg shows. Same sequential read, but max sequential write for the Pro is 2700 MB/s and for the Plus it's 3300 MB/s. 4KB random read and write both favor the Plus as well. I guess the Pro is for longevity?
I've reached the upper end of the budget for this build; I'm not sure I can squeeze another $100 in. Would it really make a big difference in the next 3-5 years? We'll only have 1 workstation, so there won't be others to transfer between. My wife does all her media creation/editing on her own computer currently. So I think we'll pass on 10GbE for now. In the future, if I had an array of GPUs for AI/ML work, maybe 10GbE could make sense so she could use them for video editing.
One more question: what are your thoughts on shucking? The WD Easystore 16TBs are (or at least used to be) 16TB Red drives. Do you know if that's still the case, and whether those are still helium or are now made with air? It seems like a decent way to get HDD costs down.
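On sharing one NVMe between several VMs without passing through raw partitions: the usual route is to keep the NVMe as the cache/VM pool and give each VM its own vdisk image on it. A rough sketch using qemu-img (the tool libvirt/QEMU-based VM managers use under the hood); the paths, VM names, and sizes below are placeholders:

```python
#!/usr/bin/env python3
"""Sketch: carve per-VM vdisk images out of one shared NVMe pool.

Assumes the NVMe is mounted as the cache/VM pool and vdisks live under
/mnt/user/domains (Unraid's default domains share -- adjust as needed).
Sizes are placeholders; qcow2 images are thin-provisioned, so they only
consume space as the guest actually writes data.
"""
import subprocess
from pathlib import Path

DOMAINS = Path("/mnt/user/domains")   # assumed vdisk location
VMS = {                               # placeholder VM names and sizes
    "win10-projectA": "64G",
    "win10-projectB": "64G",
    "linux-dev":      "30G",
}

for name, size in VMS.items():
    vdisk = DOMAINS / name / "vdisk1.img"
    vdisk.parent.mkdir(parents=True, exist_ok=True)
    subprocess.run(["qemu-img", "create", "-f", "qcow2", str(vdisk), size],
                   check=True)
    print(f"created {vdisk} ({size}, qcow2)")
```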
-
Thanks for the feedback. I haven't worked on that project in a while (one reason I am building this unraid box). I believe it's actually 300-500GB/day. I don't need all that data to be permanently stored; rather, I need to process through it, and then it can be deleted. I think we'd need 4TB for our own storage needs, 4TB for temporary storage, then 2x 4TB would be parity. I could look into 2 x 8TB if you think having 1 storage and 1 parity drive is better?
500GB is a lot to download in a day, but I don't max out my 1Gbps connection (speedtest: ~600Mbps), so I don't think dual 1Gbps or a 10Gbps NIC will help, or would it? I don't know much about networking. I've never done anything other than 1Gbps Ethernet (no 2.5GbE, no 10GbE NIC, etc.).
Ideally, everything could exist in memory while it's being processed so it's fast. Speed is king for me. If I lost one glob of downloaded data, I could always just redownload it. Is it possible to download straight to memory to save hitting my disks so much? I download 30-40GB chunks of video at a time, process them, then delete them. I plan to do about 10 of these a day. It would be great to download to and work directly from RAM.
I know conceptually how the cache drive works, but I've never used one. I thought it was for speeding up smaller writes: write to the NVMe first, then it gets moved to the HDD. Feel free to correct me if I am misunderstanding something.
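On downloading straight to memory: with plenty of RAM, the working directory for a chunk can live on /dev/shm (a RAM-backed tmpfs present on stock Linux), so a 30-40GB chunk never touches the array or the cache drive. A rough sketch of that loop; the URL list, download tool, and processing step are placeholders for the real pipeline:

```python
#!/usr/bin/env python3
"""Rough sketch: download a chunk into RAM, process it, delete it.

/dev/shm is a tmpfs (RAM-backed) filesystem, so a 30-40GB chunk can be
downloaded, processed, and discarded without touching the array or the
cache drive -- provided enough RAM stays free. The URL list and the
process_chunk() body are placeholders.
"""
import shutil
import subprocess
from pathlib import Path

WORKDIR = Path("/dev/shm/video-work")   # RAM-backed scratch space

def process_chunk(chunk_dir: Path):
    # Placeholder for the real OCR / computer-vision pass.
    for f in sorted(chunk_dir.iterdir()):
        print("processing", f.name)

def handle(url: str):
    chunk = WORKDIR / "chunk"
    chunk.mkdir(parents=True, exist_ok=True)
    # Download straight into RAM (wget/curl/yt-dlp -- whatever the project uses).
    subprocess.run(["wget", "-q", "-P", str(chunk), url], check=True)
    process_chunk(chunk)
    shutil.rmtree(chunk)                # free the RAM immediately

if __name__ == "__main__":
    for url in ["https://example.com/video1.mp4"]:   # placeholder URLs
        handle(url)
```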
-
After further research I settled on the general idea of my build. I'm trying to keep it under $2000 if possible. It's currently about that much.
Part List
Motherboard: ASRock EP2C612 WS SSI EEB Dual-CPU LGA2011-3 Motherboard ($414.60 @ Amazon)
CPU: Intel Xeon E5-2670 V3 2.3 GHz 12-Core Processor (found them for ~$100/ea @ eBay) x 2
CPU Cooler: Noctua NH-U12DXi4 55 CFM CPU Cooler ($64.95 @ Amazon) x 2
Memory: Samsung 32 GB Registered DDR4-2133 CL15 Memory ($92.00 @ Amazon) x 4
Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive ($89.99 @ Adorama) x 4
Cache Drive: Samsung 970 EVO Plus M.2 2280 1TB PCIe Gen 3.0 x4 ($150 @ Amazon)
Video Card: MSI GeForce GT 710 2 GB Video Card ($49.99 @ Amazon) x 2
Case: Phanteks Enthoo Pro 2 ATX Full Tower Case ($146.98 @ Newegg)
Power Supply: Rosewill Capstone 1000 W 80+ Gold Certified Semi-modular ATX Power Supply ($124.00 @ Newegg)
PCIe to NVMe Adapter: M.2 NVMe to PCIe 3.0 x4 Adapter with Aluminum Heatsink Solution ($16 @ Amazon)
SATA Controller: SAS9211-8I 8-port internal 6Gb SATA+SAS PCIe 2.0 ($55 @ Amazon)
SATA Data Cables: CableCreation Mini SAS 36Pin (SFF-8087) Male to 4x Angle SATA 7Pin Female Cable, Mini SAS Host/Controller to 4 SATA Target/Backplane Cable, 3.3FT ($10 @ Amazon)
SATA Power Daisy Chain Cable: StarTech.com 15.7-Inch (400mm) SATA Power Splitter Adapter Cable - M/F - 4x Serial ATA Power Cable Splitter (PYO4SATA) Black ($5 @ Amazon)
USB Stick: SanDisk Cruzer Fit CZ33 32GB USB 2.0 Low-Profile Flash Drive
Reasoning
My wife and I run our own business, mostly from home. We always wanted a NAS for ease of sharing files and as a central place to store data. I also have some side projects I want to run on my own that require a VM. One project requires 1-2GB of GPU memory, and I want to run 2 versions of the project at the same time. After looking at vGPU options, it just made sense to grab 2 cheap GT 710s. They are only $50 and 19W TDP. Another project I have requires downloading 300-800GB/day of video and involves a lot of OCR / computer vision. All Python. I am still learning, so I am not sure if adding more cores would help or not (see the parallel-OCR sketch below). Beyond that I'll probably build out these projects and explore other ideas that would require a VM or docker container. We're not huge media junkies, but since we're going to set up our own unraid, it makes sense to run a Plex server too so we have that option available. I wanted to do an unraid + gaming all-in-one build, but after much research, it just doesn't make sense. I want to explore AI and machine learning, and it makes sense to do that on the gaming box with an RTX 3000 GPU.
This motherboard has 7 PCIe 3.0 x16 slots and 8 RAM slots. I found a great deal on 32GB RAM sticks, so I figured go for 4 sticks now for a solid 128GB. If I really need more I can go up to 256GB later. It allows for dual CPUs, and the E5-2670s hit a real sweet spot: 12 cores / 24 threads for only $100. Not blazing fast, but plenty of cores.
PCIe slots: SATA controller, NVMe to PCIe adapter, and 2 GT 710 cards, so I'd have 3 PCIe 3.0 x16 slots left for a beefier GPU if I want to add one later.
Storage: 4 x 4TB is perfect for us right now. No M.2 slot on the mobo, but the adapter is PCIe 3.0 x4, so it should be fast enough. I am guessing I will hit close to max speeds.
Case: Large, plenty of space. Has enough PCIe slots, 3.5" bays, etc. It just so happens to come with 4 brackets for 3.5" drives, which is exactly the number of HDDs we will start with. No fans included.
PSU: Found a great deal on a 1kW PSU. It's a 600W build now, but we have room to grow.
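The parallel-OCR sketch referenced above: whether 24 cores actually help depends on whether the Python pipeline farms frames out to worker processes, since per-frame OCR parallelizes naturally. pytesseract and OpenCV are stand-ins here for whatever the real project uses, and the frame directory is a placeholder:

```python
#!/usr/bin/env python3
"""Sketch: spread per-frame OCR across all cores with a process pool.

pytesseract and OpenCV are assumptions standing in for the real pipeline;
the point is only that per-frame OCR is embarrassingly parallel, so a
2x12-core box only beats a desktop CPU if the code hands frames to worker
processes like this (single-threaded Python would leave 23 cores idle).
"""
from concurrent.futures import ProcessPoolExecutor
from pathlib import Path

import cv2                # assumed: opencv-python
import pytesseract        # assumed: pytesseract + the tesseract binary

def ocr_frame(path: Path) -> str:
    img = cv2.imread(str(path))
    return pytesseract.image_to_string(img)

if __name__ == "__main__":
    frames = sorted(Path("/dev/shm/video-work/frames").glob("*.png"))  # placeholder
    with ProcessPoolExecutor() as pool:            # one worker per core by default
        for frame, text in zip(frames, pool.map(ocr_frame, frames)):
            print(frame.name, text[:60].replace("\n", " "))
```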
Questions
What fans, how many, and in what configuration do you recommend for this build in the Phanteks Enthoo Pro 2?
1,000W is enough for this 600W build, but if we put in an RTX 3080 later (320W TDP), that'd put us at 920W. Thoughts? Buy an 800W PSU and upgrade later when needed? Keep the 1kW since it'd be fine for a 320W GPU later? Or go for a 1100-1200W PSU? Any recommendations on specific PSUs/deals?
Can this board and these CPUs take non-ECC RAM? I didn't find non-ECC RAM on the official memory QVL. I found a great deal on some 32GB non-ECC sticks, but I've heard ECC RAM can give you problems if you don't really need it.
Am I missing anything? Any potential part conflicts? Any suggestions?
From seeing other builds on this board with these coolers, it looks like one CPU fan will blow into the other CPU fan, causing one CPU to run a little hotter. Is there a good fan option that won't do this? Maybe I get one Noctua for 1 CPU and then another cooler that blows upward (is there such a thing?) for the other one?
I appreciate ANY and ALL feedback/answers. Pick 1 or a few of the above questions and take your best shot. I'm new to unraid, so I have a lot to learn. I want to make sure I am learning on a solid box that won't give me too many problems, and one that lets me dink around with the fun projects I am trying to get working.
-
Sharing GPU with multiple VMs at the same time possible ?
KBlast replied to batesman73's topic in VM Engine (KVM)
I am also interested in taking 1 GPU, virtualizing it into multiple vGPU resource pools, then sharing those pools with different VMs. For example: a 10GB card split into 10 x 1GB vGPUs shared across 10 Windows VMs, each thinking it has its own discrete 1GB GPU. Is this possible now on unraid?
-
I read somewhere that I can pass through 1 GPU to 1 VM, but I can't split up 1 GPU into multiple "GPU resource pools" and then pass those through to different VMs simultaneously. If that's the case, then I couldn't use an RTX 3000 in this build; I'd need to take the 2-box approach: make one server build, then one daily driver AI/ML build. Some of the projects I want to run require their own GPU per Win 10 VM, but it doesn't need to be a super powerful GPU; 2GB is sufficient.
If it's true you cannot virtualize your GPU and pass it through to multiple VMs, then I have come up with one option: a dual Xeon CPU setup on an ASRock server board. The board has 6 PCIe x8 slots. I could fit GT 710 2GB GPUs in those slots to pass through (see the IOMMU-group sketch below the part list). No M.2 NVMe slot, but I could still use an SSD as a cache. Here is the build. I've only listed 1 GPU, but I'd start with 2-3. This is also a good option because I found a great deal on the CPUs: $150 USD for a batch of 4. Any feedback?
PCPartPicker Part List
CPU: Intel Xeon E5-2670 V3 2.3 GHz 12-Core Processor
CPU: Intel Xeon E5-2670 V3 2.3 GHz 12-Core Processor
CPU Cooler: Noctua NH-U12DXi4 55 CFM CPU Cooler ($64.95 @ Amazon)
CPU Cooler: Noctua NH-U12DXi4 55 CFM CPU Cooler ($64.95 @ Amazon)
Motherboard: ASRock EP2C612D16SM SSI EEB Dual-CPU LGA2011-3 Narrow Motherboard ($291.72 @ Amazon)
Memory: Samsung 32 GB (1 x 32 GB) Registered DDR4-2133 CL15 Memory ($92.00 @ Amazon)
Memory: Samsung 32 GB (1 x 32 GB) Registered DDR4-2133 CL15 Memory ($92.00 @ Amazon)
Memory: Samsung 32 GB (1 x 32 GB) Registered DDR4-2133 CL15 Memory ($92.00 @ Amazon)
Memory: Samsung 32 GB (1 x 32 GB) Registered DDR4-2133 CL15 Memory ($92.00 @ Amazon)
Storage: Samsung 860 Evo 1 TB 2.5" Solid State Drive ($109.99 @ Adorama)
Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive ($89.99 @ Amazon)
Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive ($89.99 @ Amazon)
Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive ($89.99 @ Amazon)
Storage: Western Digital Blue 4 TB 3.5" 5400RPM Internal Hard Drive ($89.99 @ Amazon)
Video Card: MSI GeForce GT 710 1 GB GT LP Video Card ($34.99 @ Amazon)
Case: Phanteks Enthoo Pro 2 ATX Full Tower Case ($146.98 @ Newegg)
Power Supply: Rosewill Capstone 750 W 80+ Gold Certified Semi-modular ATX Power Supply ($99.99 @ Amazon)
Total: $1541.53
Prices include shipping, taxes, and discounts when available
Generated by PCPartPicker 2021-01-06 20:12 EST-0500
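The IOMMU-group check mentioned above: for the multi-GT 710 plan, each card has to land in its own IOMMU group (or share one only with its own audio function) to be passed to a separate VM. A small sketch that walks the standard sysfs tree; Unraid shows the same information on its system devices page.

```python
#!/usr/bin/env python3
"""Sketch: list PCIe devices per IOMMU group.

Walks the standard /sys/kernel/iommu_groups tree and labels each device
with `lspci`. Each GPU intended for passthrough should appear in its own
group (or one shared only with its own HDMI audio function).
"""
from pathlib import Path
import subprocess

GROUPS = Path("/sys/kernel/iommu_groups")

if not GROUPS.is_dir():
    raise SystemExit("No IOMMU groups found -- is VT-d/IOMMU enabled in the BIOS?")

for group in sorted(GROUPS.iterdir(), key=lambda p: int(p.name)):
    print(f"IOMMU group {group.name}:")
    for dev in sorted((group / "devices").iterdir()):
        # `lspci -s <addr>` gives the human-readable device name.
        desc = subprocess.run(["lspci", "-s", dev.name],
                              capture_output=True, text=True).stdout.strip()
        print("  " + (desc or dev.name))
```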
-
Background: I am an "experienced beginner" Linux user: I know all the basics pretty well. I am an intermediate Python programmer, self-taught over 3 years. I have no real networking, NAS, or RAID experience, but I am patient, so I believe I can start picking it up. I have some idea what I want to do with my unraid server; I am not sure what specific parts / build to go with yet. I am also uncertain if I should do an unraid server as one box, then a separate daily driver for work/play as another box.
I want to:
Play FPS games @ 144-240 fps. I am looking at an NVIDIA RTX 3000 series GPU. Not sure if Xeons would work here for a game like Apex Legends, or if I would need a consumer AMD/Intel CPU. I lean heavily AMD for consumer CPUs.
Work on AI/ML projects.
Run a Python project (involves a LOT of video downloading + computer vision + OCR) -- since I am downloading 300-800GB/day, a paid VPS option is unworkable. I'll need a workhorse drive for this. I might need to do it all on NVMe to spare the HDDs so much I/O.
Run a few Windows VMs (Win 7 or Win 10). I would like to be able to split/share the GPU as a resource across multiple VMs. Example: take a 10GB graphics card and assign 500MB to each VM, and each VM thinks it has its own discrete 500MB card. Not sure which RTX 3000 cards are capable of this, or if this is feasible.
Have a NAS for our small business. We would want 8TB of usable storage, so I am thinking of starting with 4 x 4TB HDDs running 2 disks as parity, a 500GB-1TB NVMe for the cache drive, and a 2TB NVMe as the main drive.
Run a Plex media server in the future -- don't need this now but will later this year.
Thoughts and questions:
I have no idea how much RAM I need and whether I should go ECC or not.
I have considered the old-server-gear-via-eBay route. It looks like old server boards can be found for $100 and 6-core Xeon E5s can be found for $20-30 each. This seems like a great value, but I have read that you can't really compare a 10-year-old CPU core-to-core, clock-to-clock with modern CPUs, due to architecture differences having a big impact (32nm vs 7nm). Older server CPUs are also energy hogs. It seems like a cheap cost upfront, but then you'll ultimately pay for it in the long run in electricity costs. My understanding is that old server gear usually gets you more lanes than a modern consumer mobo/CPU. However, with the above, I am not sure whether I need that many or not. Is the old server gear option actually cheaper when energy costs are considered?
Could a dual Xeon E5 do what I need for the above? I am thinking high frames per second in FPS games would be the biggest question.
Another question in my mind: when do I need to spin up a VM versus when do I just use a docker container?
If I went the old server gear route for an all-in-one box, what specs would you recommend for the above needs?
If I went the consumer PC route, I'd want something beefy that could handle my gaming / AI & ML needs, but could also run as a NAS/VM host. I am leaning towards a Ryzen 5000 CPU and an RTX 3000 GPU. What would you recommend as a build?
That about sums it up. I know just enough to know there is a lot I don't know, haha. Thank you for any feedback!