Everything posted by rollieindc

  1. Heyas, Welcome to the T310 club. Feel free to ask/share info. Sounds like you have a good start on a nice system. I'd see about adding a redundant power supply, if you are able. Not a mandatory thing, but I like the reliability bump I get with mine. And yes, I have mine hooked into a UPS brick, just in case. I had the H700, but went to an H200 card so I could use the SMART info to monitor my drive health more closely. But the H700 is a nice card too. One piece of advice for the H700: copy your RAID config file settings, so that if you have a drive fail under unRAID while running parity, you can still rebuild your drive array. And yes, I use SAS/SATA splitter cables (4 channels per connector, 2 connectors per card, for a total of 8 drives), and yes, I get 6Gb/s (or the drive's max speed) on them. eBay sells them for less than $10. Look for "SAS SFF-8087 4 SATA Hard Drive Cable". If I were you, I'd consider setting up all three of your 3TB drives as (single-drive) RAID 0 volumes, and then making one of them a parity drive. That would give you 6TB of parity-"covered" storage. Then for every drive you add, you add 3TB of "parity covered" storage. I went with 4TB drives, based on price point, but if you stuck with the 3TB drives and went up to 8 drives, you'd sit at 21TB maxed out. My only reason to go back to a RAID 5/10 disk configuration now would be improved read speeds. And since I am using my system mostly for a few VMs (on the SSD) and as a NAS, I don't see the need. You might want to read up on how parity is implemented in the unRAID system versus RAID 1. There were compelling reasons that I went that way, for my needs. With unRAID you'd likely want to use the SSD as a cache drive, which works well for VMs. For me, I'd keep the SSD on the PCIe card; you'll probably get speeds as good as the H700, and it will probably run faster on a separate PCIe slot.
You might want to go to a 500GB SSD, and move to an NVMe-type drive, but to start with, 250GB is good (2-3 VMs plus a docker app). And sure, I can grab a pic of my drive layout and will post it later. There's enough room in the T310 to get 8x 3.5" drives inside the box if you think it out carefully. If you decide to go with an external drive cage, you might need an external connector card, like an H200e. FYSA, I do not recommend a USB HDD box on the system for anything other than file transfers. The USB ports on the T310 are s-l-o-w. But for temporary file use or transfers, they work well enough. With unRAID, I think I read that you can run the Windows 2016 image as a virtual machine. Then you can assign the unRAID "shares" as virtual drives, as you want. Best of both worlds. But for me, I am able to attach my unRAID (NAS) shares directly to PCs (Win 7, 10, Macs) on my network easily. Looks & works like a regular network mapped drive. More later...
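The parity math above is easy to sanity-check. A quick sketch (my own arithmetic, not output from any unRAID tool; drive sizes are the ones discussed in the post):

```python
# Rough unRAID-style capacity math: one parity drive (the largest)
# protects the array, and every other drive contributes its full size.
def usable_tb(drive_sizes_tb):
    """Return (usable, parity) capacity in TB for a single-parity array."""
    sizes = sorted(drive_sizes_tb)
    parity = sizes.pop()          # parity must be the largest drive
    return sum(sizes), parity

# Three 3TB drives, one used for parity -> 6TB of protected storage.
print(usable_tb([3, 3, 3]))       # (6, 3)

# Maxed out at 8x 3TB: 7 data + 1 parity -> 21TB protected.
print(usable_tb([3] * 8))         # (21, 3)
```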
  2. Moving on (June 15 update) Still happy with the Dell T310 server as an unRAID platform. Very few down days. Installed the Samsung 1TB SSD; definite performance bump with it over the Patriot 240GB one. Still working on the ASUS nVidia 1030 card, and tonight my next step is to make it the primary video card at boot-up. (Nope, that didn't work either. Will need to remove the card and dump the BIOS/firmware on another machine.) Starting to enjoy the Plex docker. Still haven't got SickChill working well. Also tried to get the Deluge-VPN docker working, but got confused by the config file locations; the interface in unRAID 6.7 seems to have changed significantly. Uninstalled the DarkTable docker; the interface was just too clunky and difficult to use in a docker. Even on my bare-metal laptop, it's still clunky. Noted that there is now a docker for GIMP, but honestly, I'd rather run it through a Windows VM on the server.
  3. Moving on (June update) Updated/maxed out the RAM in my server to 32GB from 16GB. (16GB for $49 on eBay.) Added a Plex docker. Tried BinHex's SickChill docker, but it's not working well (yet). Also bought a 1TB SSD (Samsung, $89 at MicroCenter) but have yet to install it. Also still need to debug my ASUS nVidia 1030 card issues, but the system is working well without that working fully, at present. Loaded up the DarkTable docker app for photo cataloging; not sure yet it's something I will keep.
  4. That and disabling the motherboard graphics chip are all I have left to try, but that will have to wait a bit. (Family life calls!)
  5. Just the one (Asus 1030). There is the onboard graphics chip, but not using it - although I have not tried disabling it ... yet.
  6. So, I'm about out of ideas. (This is on my Dell T310 server; specs in my signature.) Still trying to get an Asus 1030 card to pass through, and having no joy. I've watched SpaceInvaderOne's videos a dozen times. (Thank you!) Made over a dozen VMs with various settings. The unRAID server boot mode is set to Legacy. Downloaded the Asus 1030 BIOS from techpowerup and edited it, as shown. Am able to get Win 10 Pro to boot, video to show up, and fully load up (login works). Can even get to where the card is recognized as a 1030. But any driver tried only leaves me with "Windows has stopped this device because it has reported problems. (Code 43)"

Within the VM: Doing passthrough of the CPU (Intel Xeon, from 1 to 6 CPUs), or emulated. Using OVMF BIOS to boot (will not boot under SeaBIOS). Machine type is i440fx-2.8, 2.9, or 2.11 (will not boot under i440fx-3.0). Hyper-V is set to no. Using either SSD or HDD to boot makes no difference (although IDE seems to work best, occasionally). Am passing through the card and the HDMI sound lane as well (and sometimes a second sound card). The video ROM is included or not (and including it does seem to make the booting process more stable). Also tried a similar setup with Windows 7, but it crashed on boot-up. Most of the time I can only use the VM with the MS basic (800x600) video display driver. Do ATI Radeon cards have the same issues? From some posts elsewhere, it looks like they are having similar issues with VMs too.
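One thing often suggested for Code 43 on nVidia passthrough generally (I haven't verified it on the T310) is hiding the hypervisor from the guest driver in the VM's XML. Roughly, in libvirt terms - the `vendor_id` value here is an arbitrary placeholder, not a required string:

```xml
<features>
  <hyperv>
    <!-- spoof a vendor id so the nVidia driver doesn't see Hyper-V -->
    <vendor_id state='on' value='whatever123'/>
  </hyperv>
  <kvm>
    <!-- hide the KVM signature from the guest -->
    <hidden state='on'/>
  </kvm>
</features>
```

In unRAID these edits go in the VM's XML view (Edit > XML); worth a shot before swapping cards.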
  7. Another day, another issue... Still having issues with the VM using the Asus nVidia 1030 card. I did pull out a monitor, keyboard, and mouse to make a proper go of it. And I don't think it's the card, it's the VM. So I continued tinkering with it, and more will be required. I am using direct video out to a monitor over the DVI interface. So yes, the card is recognized, but it still refuses to accept the drivers (new or old) and reverts back to the Microsoft Standard Display Adapter settings. This included removing the old driver with DDU (Display Driver Uninstaller) and reinstalling only the recommended one. At this point, I am going to start over and build a brand new VM, checking everything twice. As for the Plex docker, that has gotten a lot more stable and it's beginning to grow on me. I did buy the iOS app for my phone ($4.99), and it's good at connecting to my library (music & video). I'm not a big media dog by any means, but I have some tunes that I do like to listen to, and a few movies that I like to watch occasionally. I'm also trying to set up the binhex DelugeVPN docker, but that's far more complex; as I run NordVPN, I couldn't even determine which port was being used in the OpenVPN files & servers to get past the initial install stage. (And I have other issues at home that are a much higher priority.)
  8. Except that the 1050 draws 75 watts, and the PCIe bus (on my Dell T310 server) only supplies about 60W. For now, I think I will be able to make this 1030 work, at around half the price. (Did I mention that I am Scottish? Thrifty + Stubborn 😃 )
  9. Video Card Dump - SWAP & Plex Docker So, tired of dealing with the nVidia GT610 card in any VM builds, I pulled it and installed an ASUS Phoenix 1030 OC 2GB card ($75). Needed to place it in slot 2 because of the fan shield, and moved the SAS/SATA controller to slot 3 (leaving slot 1 empty). System and the VM seem stable enough when the VM is booted. Still can't access a VM that uses the 1030 card yet (can't get TeamViewer to come up; probably a video driver issue that I still need to work through). I might need to go into the VM through VNC at first using the QXL driver, make the video card and driver something VGA-simple at first, change the card in the VM settings, and then load the new drivers from within the VM in TeamViewer. If someone else has a better idea, I am all ears. Mostly hunt and poke at this point. And yes, I've watched @SpaceInvaderOne's videos. Problem is, I am currently running the system headless, so I'd have to pull out a monitor, keyboard and mouse to make a proper go of it. Also rather torqued off to find out that, apparently, the 1030 card can't do much for transcoding videos using NVENC/YUV or 4K. Well, that's not the whole reason I bought it, as I really wanted the physics engines on it for some graphics and scientific computing. Just surprised that it can't do much to transcode a 4K video, apparently. Still frustrated with the VM builds, I changed tack and installed a Plex media server via a docker. Somewhat unimpressed. At least it's a small app. Took me a while to realize that when I went into the Web UI, I had to change the URL to start with https: - as it wouldn't start otherwise. And it seems that everything I want to do with it requires a PlexPass or payments to the "Creators"; functionality is pretty basic too, although it did sort out my library without much excessive thrashing.
(Once I realized that I could make a path in Plex named "nasmusic" that resolved to a path on my server shares, I had my library added. Oy! Half the time I was trying to get it to work, it threw me errors saying the path was illegal because I used a capital letter or hyphen.) Might be dumping Plex and looking at other media server systems.
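For anyone hitting the same path confusion: the docker volume mapping is what makes a share like "nasmusic" visible inside the Plex container, and it's the container-side path (kept lower case) that you point the library at. A rough sketch of the idea - the paths, ports, and image name here are my illustration, not an exact recipe:

```shell
# Map an unRAID share into the container; inside the Plex web UI you then
# add /music (the container-side path, all lower case) as the library folder.
docker run -d --name plex \
  -p 32400:32400 \
  -v /mnt/user/nasmusic:/music \
  plexinc/pms-docker
```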
  10. Could you better define your nexus of a "first hiccup"?
  11. Ok, yes, my apologies - I way over-simplified. It's really a failed-disk recovery method. Thanks for doing a better explanation than I did at the time (I blame a restless night and not enough sleep). But also to be fair, I've gotten repeated recommendations to add a parity drive for every eighth drive. Maybe it's to reduce the time required by the algorithms to compute a parity value and then write it (either in parity creation or in recovery), but that was the recommendation. And I've never seen where the algorithm can have one value for data on up to 28 drives - if it can, great! I'll have to go back and re-read that part of the documentation. And well, there still seems to be a lot of talk in the forum about pre-clearing new drives, perhaps to avoid "start up" deaths. I got quite a few admonitions to pre-clear any and all new drives being added to my arrays. But if it's not necessary, then ok... good to know that too! Thanks.
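On the "one parity value for 28 drives" question: single parity is just a bitwise XOR across the data drives, so in principle it isn't limited to any particular drive count. A toy sketch of the idea (my own illustration, not unRAID's actual code):

```python
from functools import reduce

# Each "drive" is a block of bytes; parity is the byte-wise XOR of all of them.
drives = [bytes([1, 2, 3]), bytes([4, 5, 6]), bytes([7, 8, 9])]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*drives))

# Lose any one drive: XOR the parity with the survivors to rebuild it.
lost = drives.pop(1)
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(parity, *drives))
assert rebuilt == lost
print("rebuilt drive matches the lost one")
```

The same XOR works whether the list holds 3 drives or 28; what grows with drive count is the time to read every survivor during a rebuild, which may be where the "parity drive per 8 drives" rule of thumb comes from.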
  12. Looks like a solid build to me. Which HD Controller and video card are you using?
  13. Yeah, the onboard SATA is only 3Gb/s. I'd be looking at a new SATA adapter first thing. Also probably an NVMe card as an SSD/cache & VM drive. It reminds me a lot of my DELL T310 in size and configuration. Does it have any spare Molex or PCIe power outlets/plugs? If not, you'll be limited in video card selection like I was. Dang x4 PCIe slots are very low wattage (like 25 watts) on most servers. Few were built for the power any nVidia-type video passthrough needs.
  14. Sounds like a nice buy on a nice rig, Knipster. First, I'd add a UPS battery backup, if you don't already have one. Which disk controller is in it? For me, I'd probably be thinking about adding an NVMe M.2/SSD cache drive for the VMs (500GB-1TB), and maybe some removable drive bay slots. The new drive bay systems don't even require a sled; just slap the 2.5" or 3.5" SATA/SAS drive in the slot and close the door. Are you going to make one of the drives a parity drive? I would if it was my machine. 🙃
  15. Sounds like a good match for unRAID. Ok, let's start with my understanding of unRAID disks. (Happy to have others chime in with their experiences or opinions!) You can read up more on this in the unRAID manuals online. There are some great tutorials by @SpaceInvaderOne on YouTube. I recommend watching a few of them first before building your system. When you install unRAID, you'll have a "drive array" that can have parity-based disk repair capability. One thing to note: any drive going into the "drive array" will need to be pre-cleared and reformatted for unRAID to use effectively, so a backup is a "REALLY GOOD IDEA (tm)". The drives will be formatted into Linux formats, like xfs. The "parity drive" is a separate drive (or two) that can cover up to seven other "combined" drives with drive-recovery capability, such that if there is a drive failure in the array, you can rebuild the (entire) array by simply adding a new replacement drive for the one that failed. No muss, no fuss, just a rebuild and you're back in business. This is probably one of the biggest selling points for unRAID. And as mentioned below, any drive showing errors is best replaced promptly. For my system, I keep a "hot swappable" 4TB drive sitting in standby (in an anti-static wrapper, in a drive bay drawer) should that ever be the case. The parity drive needs to be specifically assigned, and must be the largest drive in the system (or array). The only job of the parity drive is data protection; it cannot be used for other file storage. If you have more than seven drives, the suggested practice I've heard is to add another parity drive as you go past seven. You can even add extra insurance with two parity drives covering the same (7-10 drive) "array".
Having two parity drives has also been recommended to me by some unRAID users if your drives are getting older, are of "questionable" lifespan, or you need to ensure the data is not vulnerable to being lost. There is a lot more to learn about parity protection, but it seems a valuable feature that makes unRAID unique compared to many other systems available. After that, the "drive array" you build/create essentially merges the multiple hard drive spaces (even of varied sizes) into one "virtual drive", but in such a way that you span drives yet can still assign specific areas of the drives for specific tasks. For my system, I have 3x 4TB drives with 2x 600GB drives, totaling 13.2TB (with one 4TB drive in parity). After the array is built, you can then have separate share folders in that array that you can set size limits on and access as network drives (and assign as Windows networked drives in VMs), just like a Network Attached Storage (NAS) device. For example, I have my photos in my array in a protected user folder named "Photos", and have that folder assigned on my laptop and on my VMs as drive "P:" with my (retained) login information. And as long as I am on the network, I have access to those files, but no one else on the same network would. You could also run each hard drive as an "unassigned device" (note: this requires an added plugin for unRAID) that you could then reassign inside your VMs as needed. It's a little tricky to do, but not hard either. Any "unassigned device" is also outside of the parity protection scheme, and for hard drives it is then treated like any other standard hard drive. The unRAID cache drive is used for NAS file ingestion (uploads) to speed up the upload, and also contains the VMs and the Docker apps that you decide to install. This is typically made up of one or two SSDs, and it's not protected by any parity protection like the drive array is.
These drives are also not "spun down", as I understand it, in order to help with file upload times. The uploaded cache files are moved off automatically by unRAID at an interval you specify. You can also move them "manually" if you need to. The files are not protected by the array's parity drive until they are moved. There are also "unassigned devices" that are like separate drives, which can be either hard drives or USB drives. They can still be used for storage and backups, but are meant more for specific uses where parity protection isn't needed and you might want them for only specific purposes (like a VM system backup drive). Now, let's get back to your potential setup. From what I sense, you could run the Win 7 WMC VM in the background, with an Emby or Plex Docker server app running from unRAID. Plex could pull from the drive in your VM and add the new WMC-recorded video to your library, serving it back to the Xbox. And you could also run a Win10 VM at the same time, doing other things. You can even load-balance those on the cache with just the 500GB SSD - or with the VMs on the 1TB M.2 SSD - and still have plenty of room left over. (And in reality, you could run everything off one SSD and not be too cramped, so I'd try it first, then decide. For me, I'd want the speed from the M.2, especially if it's an NVMe, for my VMs.) You might want to add a little more RAM to your system if you can. If you run multiple VMs at the same time, it gets a little tight on 16GB. Not really bad - I've had a couple of 6GB VMs running at the same time. You'll just want to watch how you build them, and leave about 2GB of RAM for unRAID to do its thing. If you have multiple VMs and Docker apps running, that would also be a good reason for 24-32GB total. And the 1050Ti should let you do any video transcoding (to mp4 format?) you need with Plex, and still have headroom for your VMs with passthrough. Plenty of good tutorials on setting those up.
If you do decide to run Plex or Emby as a docker app, it will all go on the SSD cache drive, so very low overhead with those running. There are Docker apps for capturing networked camera feeds (like you do with iSpy) as well. Note: Dockers, Apps, VMs, and Tools are all separate items in unRAID, which allows for more flexibility and sometimes better (tool) options to be considered. You can have the external drive added as an external device any time you want, and even assign specific USB ports to the VMs for hot plugging if you need to. It's also easy to have a Win10 VM running any time you need it, as a "front cover" on the machine, while running unRAID in a "headless" configuration ("headless" meaning without the need for a monitor or keyboard). You can then VNC into your Windows 7 or 10 VMs via any standard browser on the network, and your Xbox 360 can access the Plex or Emby services and access your disk array as if it were a secure NAS system. So... did that help?
  16. Heyas Lance, fellow Dell server user here (T310). Looks like a nice big-data rig. Is it an 8-bay or 12-bay server? And let me know how the PERC H700 works for you. I decided to move to a PERC H200 (reflashed to 9211 IT mode) so I could let unRAID build my array, have it be covered with an unRAID parity drive, and use the SMART (drive) status data to monitor my drives (for example, the drive temps show up in the dashboard). When I first did the unRAID build, I had all the drives set up as RAID 0 on the H700, then let unRAID build the array. But I realized (with some help) that if I lost a drive, it would be a lot easier to rebuild the entire data drive array with unRAID with the H200 in IT mode, than to have to remove the drive from the H700 RAID array (you might want to read up on how to do that) and then add the new one back into the array. Doable, but not as easy as it might first appear. And you know you could also run the SSD SATA drives off the PERC H700 or H200 too, although in hindsight, I'd probably want to use a couple of NVMe drives on a PCIe card. (Looks like you have three slots on the riser; I have four on my motherboard.) Also, if you mount the SSD drives internally, do you have any Molex or SATA power cords that are unused? If not, that can be an issue with Dell servers too; they were very "thrifty" with power internally on their servers. Wait... what about the two internal SATA drive slots that the R510 is supposed to already have?! (See figure 3-2 on page 85 of the R510 manual (manual_en-us.pdf) - there is another way to mount two internal drives in the internal bay-array.) Speaking of that riser: are you going to add a graphics card for video passthrough on your VMs? (And do you think you want to game on this system?) If so, look through my thread on "My Tower", as Dell does some power limiting in their servers that can affect which video cards you might want to consider - if you haven't already.
The riser PCIe slots look to be x4s, and if so, any card will be limited to about 25 watts, which will make it hard to find any kind of "gamer" graphics card. (You can see the headaches I am going through at present in this regard.) If you have an H700 in slot one on the riser (maybe not, as it looks like it's in its own slot on the R510 - lucky!), and a graphics card for passthrough (like I am doing), then you might be down to your last card slot for a SATA/NVMe card holder... just a thought to consider. Power inside my T310 is limited (just 400 watts), and I don't have any PCIe power cords for video cards, so that means I am on a real power budget. In the end, I pulled the optical (CD) drive and the tape drive that were installed and used the Molex power for additional hard drives. Anyway, sounds like you've got fun ahead of you... I enjoy unRAID on my Dell server. I couldn't beat the price or nature of the beast.
  17. From what I have gathered, the cache drive is typically used for disk file ingestion, docker apps, and VMs in unRAID. And for me, based on what I know of unRAID, the NVMe should show up as just another device that you will need to pre-clear and format, as is the standard practice for unRAID. Depending on how it is installed, you can use it as a second cache device, but if you already have a decent-sized SSD, then you might want to instead mount it with the Unassigned Devices app and have it just be for VMs. Then when you build your VMs, just point to that unassigned drive/device instead of the cache drive, and use it as the "boot drive" when building the VM. The advantage of this approach should be that you won't be competing for bandwidth with the SSD cache drive or with other docker apps. Plus your VMs will likely have the fastest seek times of any device on the system. I have my VMs on another (fast HDD) drive in my array, and it's still pretty fast. About the only disadvantage I can think of is that the VMs wouldn't be covered by the parity drive, and you might want to back them up to the drive array from time to time, just in case you lose the NVMe. Well, that and your cache SSD is likely slightly slower than the NVMe M.2 drive, right? But unless you're talking a large NVMe (>1TB) and you're working with moving large files (videos, digital photography, or scientific files), you're not going to notice much difference.
  18. Welcome, Frode. Think you'll find unRAID pretty stable and well supported, even for those of us who are "Over 50". And the forum here is pretty helpful. I have a similar build, but am using a used Dell T310 server as my main. Am working with Plex in a Docker app, but am a newbie with that app. But it runs. Mostly doing digital photography and some video work on the side as a hobby.
  19. For bandwidth, you are 100% right. Especially since this x16 slot has only x8 routing. And I am not looking at adding, or expecting to use this build for, any multi-GPU acceleration features that the additional lanes would help with anyway. (This is a notable downside of the T310 architecture compared to other modern server mobo designs, but something I can probably live with given the price I paid.) But it also "might" make a slight difference for power, since a full-sized x16 graphics card (without additional 6- or 8-pin connectors) can typically draw up to 5.5A at +12V (66W) from those slots. Dell may have limited it to 40 watts, so that makes GPU card selection a little trickier. And I didn't see any specific documentation on the x8, but noted that x4 cards are typically limited to 25 watts. Seems from what I've read in other forums that the x8 slots are potentially limited to 25W in the same way. Again, given that the T310 server's power supply is 400 watts, I don't want to even fool with Molex-to-6/8-pin PCIe power plug adapters.
  20. I'm going to confirm everything that pappaq said, and offer that the nVidia 1030 card should be a good one to use in your build as a passthrough graphics card for a VM. (I'm looking at that one too!) And I use VNC to get into multiple VMs. It works pretty well. And using unRAID makes that setup pretty easy. I have 16GB of RAM, but sometimes think I'd be better off with 32GB for some of the photo and video work I do. Most of my VMs are around 6GB of RAM, and they work well. And you might like TeamViewer if you want to remote into your VMs from outside your own network. Otherwise, you might want to review some of SpaceInvaderOne's YouTube videos, as he does a really good step-by-step process for building VMs, remote access, and graphics card passthrough. You can read about my build and see some of the things I have gone through. I do boot mine headless, but I used the internal graphics card for the initial system build. And you can use any basic video card for the initial build/headless config; it doesn't need to be an onboard video card for unRAID. For me, I didn't need a lot of CPU power. In fact, since most of my work is more of a NAS-type file service, I am happier with my parity-protected disk build. (Yes, you can mix disks of different sizes and makes in the array, but the parity disk needs to be the largest drive in the system.) If you want to add extra disks (like your 3TB movie drive) outside of the disk array, you can. You just won't get the value of any "cache" or parity protection that unRAID offers on them. Oh, and I currently use just one SSD as my cache drive. But for most things, I don't use the cache drive at all. I'd recommend a 250-500GB SSD if you can get one. The docker apps and VMs by default get installed on the cache drive, so a second cache drive will improve performance, but isn't totally necessary. And it's good to plan two additional CPU cores for unRAID beyond what you need for your daily driver.
I have a 4-core/8-thread Xeon CPU in mine, and I keep 2 cores (2 threads) reserved for the OS/unRAID to chew on things. One thing to note: if you do run parity, you should look at when the parity check runs, because on my system (~16TB) it takes 6-8 hours to complete, and I do it weekly. I do recommend considering another 3TB drive (2 for drive space, 1 for parity) if you plan to continue expanding your library. But the good news is that you can add the parity drive later, or rebuild the system's parity when adding a larger drive later. You can even move files from one drive to another without taking up too much CPU/RAM. My current parity drive is 4TB, and at some point I might go up to a set of 6 or 8TB drives, which is really easy to do: you just plug them in, format, assign as you want, and move files if needed. If you're just adding to the array, it's basically one click to add storage. For me, if my array gets to more than 5 drives, I'll likely add a second parity drive for additional protection.
  21. Oh... well that's a bugger (of a video card). First, just to get it written down: the T310 implements Intel® Virtualization Technology with Directed I/O (Intel VT-d) based on the Intel 3420 chipset, and the built-in video is based on the Matrox G200eW w/ 8MB memory integrated in the Nuvoton® WPCM450 (BMC controller), and will do up to 1280x1024@85Hz and 32-bit color for KVM. And it has a 3D PassMark of 42... which is squat-nothing. And am also just going to bookmark the following specs for reference: PCI 2.3 compliant • Plug and Play 1.0a compliant • MP (Multiprocessor) 1.4 compliant • ACPI support • Direct Media Interface (DMI) support • PXE and WOL support for on-board NICs • USB 2.0 (USB boot code is 1.1 compliant) • Multiple Power Profiles • UEFI support. So, in working with the nVidia GT610 I picked up, I am learning that the T310 slot layout (from top to bottom) is:
Slot 1: PCIe 2.3 (5GT/s) x8 (x8 routing)
Slot 2: PCIe 2.3 (5GT/s) x16 (x8 routing) <- likely best for a graphics card
Slot 3: PCIe 2.3 (2.5GT/s) x8 (x4 routing)
Slot 4: PCIe 2.0 (2.5GT/s) x1
Slot 5: PCIe 2.0 (2.5GT/s) x1
Disabling the integrated video controller seems to make sense given the basic nature of the video out, and slot 1 has the SAS/HD controller, so the best spot for a graphics card is slot 2. What is not clear to me is whether there is a real need to change the BIOS "Enable Video Controller" setting to Disabled in order to pass a second graphics card through to a VM. I don't think that's necessary, but more experimentation will tell in time. I did find that due to cooling issues, Dell limits the power draw to 25W on slots 4 & 5. Also it was noted that the x16 slot was probably not originally designed for graphics cards, and thus power draw may be limited to 40W max. Given the two power supplies add up to 800 watts max, perhaps this isn't the most surprising find.
The PCIe slots were likely all intended only for Ethernet or SAS cards for external expansion disk arrays. Based on this, finding a low-power (25-40W) graphics card with any performance might be tough. I found out that the GT610 has a max draw of 29 watts. Buggers! No wonder it's a popular card for these systems: cheap and low power. Then, before doing more research, I managed to snag a Radeon R9 290, but given the power draw limits, I doubt I can get the 290 to work in the server. It draws 300 watts alone. (Oops. =( Well, at $80, I think I snagged a good buy... eBay is listing the same cards at $120. 😃 ) I did find a comparison that might be really useful. From that, I could maybe go with the Gigabyte Radeon RX 550 Gaming OC 2G card (at about $100), as it draws just 50 watts. Although I might need to consider the ZOTAC (or Gigabyte/EVGA) GeForce GT 1030 2GB GDDR5, since it only draws 30 watts! (And only $85 at NewEgg.) And I might have either an old GeForce 8800 or an HD7570 (60 watts) available in some of my spare parts that would work too. I was really hoping to do some transcoding, so the 1030 might be the right call for all the "wants" I have for a VM. I just wish I knew it would work well as a VM card with unRAID systems without a lot of hassles. Oh, and Plex doesn't support the AMD RX 550... so that's part of an answer too.
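Since the slot budget keeps coming up, here's the quick elimination I was doing on paper, as a sketch (wattages as quoted above; the 40W x16 figure is the unofficial one, so treat it as an assumption):

```python
# Candidate cards vs. the T310's (assumed) 40W budget on the x16 slot.
slot_budget_w = 40
cards = {
    "GT 610":    29,
    "GT 1030":   30,
    "RX 550":    50,
    "HD 7570":   60,
    "R9 290":   300,
}
for name, draw in cards.items():
    verdict = "fits" if draw <= slot_budget_w else "over budget"
    print(f"{name}: {draw}W -> {verdict}")
```

Which is why, on power alone, the field narrows to the GT 610 and GT 1030.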
  22. April 24, 2019 - VM with nVidia GT610/1GB card - or why I have "no love" video card post pains. So, for an update on this build: am up to 6.6.7, no real issues... and my Dell T310 server has been running rock solid for 55+ days. Mostly having to update apps and dockers. Parity checks run regularly and are not showing any signs of errors. Disks (still 4TB IronWolf drives) are all running reasonably cool (always less than 38C/100F), and I continue to add to my NAS build as I am able to. Still struggling with making a choice between a larger SSD cache drive (500GB=$55) and more RAM (+16GB=$67), but now I may need to consider a video card replacement instead. The GT610 was all of $25, so it's not really a loss to me... just bummed I can't get it to work. 😎 In that regard, this VM build for Windows 10/64-bit Pro has me at an impasse. And I guess I need to add myself to the "No Love for the nVidia GT610 video card" community (GeForce GT610/1GB low-noise version). I've never been able to dial in this card since first installation. Not really sure why. Again, just so others following can check what I've got done so far: am working with unRAID 6.6.7 on a DELL T310 (with Intel VT-d confirmed) and booting the Windows 10/64-bit VM using Machine: i440fx-2.7 or 2.8, with either SeaBIOS or OVMF, with either CPU passthrough or emulated (QEMU64), and am building the VM with the nVidia GT610 in its own IOMMU group (13); it's in slot 4 of the PCIe slots available. But it is giving me the same headache as others have had with video passthrough. VNC/QXL video types work fine, and I am able to use the VirtIO drivers from RedHat without much of an issue. And often the GT610 card shows up as a Microsoft basic video card (ugh), so it at least "posts" rather than locks everything up. Note: for my VMs I am using TeamViewer (v.14), as I can access it from behind my firewalls over my Verizon DSL link without an issue.
(No, there is no fiber where we live, and I refuse to get ComCrap service... no, not even a dry drop! And yes, I have AT&T/Verizon/DirecTV, so suck it, ComCast/NBCUniversal Media, LLC!) I also followed the excellent GPU ROM BIOS edits video that SpaceInvaderOne had done, made sure that there was no added header - and I still can't get the nVidia card to work reliably, even after multiple attempts. I have other VMs that are working fine with TeamViewer, and multiple with VNC/QXL, but nothing seems to work reliably with the nVidia GT610. I might try one last shot at it with Windows 7/64, but I'm not holding my breath for that one either. Mostly I just want a card for video and audio transcoding and video acceleration (Virtual Reality/3D MMO gaming), and "maybe" some home-lab stuff. I thought the 610 would have worked well, since I have it in another HP DC7900 desktop (quad-core Intel Core 2 Q8400 CPU), where it works rather solidly. After three nights of tinkering, I think I am just going to find another video card to try, off eBay or from a local PC reseller.
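For anyone trying the same ROM-edit route, the usual first step is pulling the card's vBIOS off the host through sysfs so you can inspect it for the extra vendor header and hand a clean copy to the VM. Here's a minimal sketch of how I understand that dump step works - the PCI address 0000:04:00.0 and the output filename gt610.rom are my assumptions, so substitute your own card's address from lspci:

```shell
#!/bin/sh
# Hedged sketch: dump a GPU's video BIOS from sysfs on the unRAID host
# (run as root). The PCI address 0000:04:00.0 is an assumption - find
# your card's address with `lspci | grep -i vga` and substitute it.
GPU="${1:-0000:04:00.0}"
DEV="/sys/bus/pci/devices/$GPU"

if [ ! -d "$DEV" ]; then
    # No such device on this box - just report and stop cleanly.
    echo "PCI device $GPU not found; check lspci for your GPU's address"
    exit 0
fi

echo 1 > "$DEV/rom"            # make the ROM readable through sysfs
cat "$DEV/rom" > gt610.rom     # copy out the vBIOS image
echo 0 > "$DEV/rom"            # disable ROM access again

echo "Dumped $(wc -c < gt610.rom) bytes to gt610.rom"
```

A proper vBIOS image should begin with the 55 AA signature bytes; if a hex dump shows extra vendor data sitting before that signature, that's the "added header" the SpaceInvaderOne video walks through removing before the ROM is usable for passthrough.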
  23. I guess I need to add myself to the "No Love for the GT610" community. For me, working with unRAID 6.6.7 on a Dell T310 (with Intel VT-d confirmed), with either SeaBIOS or OVMF, with CPU passthrough or emulated, building a Windows 10/64-bit VM with the nVidia GT610 in its own IOMMU group (13) gives me the same headache in video passthrough as well. Note: I am using TeamViewer (v.14). And I followed the GPU ROM BIOS edits that SpaceInvaderOne had suggested, made sure that there was no added header - and I still can't get the nVidia card to work reliably, after multiple attempts. I have other VMs that are working fine with TeamViewer, and with VNC/QXL, but nothing seems to work reliably with the nVidia GT610. I might try one last shot at it with Windows 7/64, but I'm not holding my breath for that one either. After three nights, I think I am just going to find another video card to try.
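For anyone double-checking the VT-d side of this before blaming the card: the IOMMU grouping that unRAID shows in its Tools page can also be verified by hand from the host shell. A quick sketch that walks /sys/kernel/iommu_groups and prints each group's PCI devices (on a box where VT-d isn't enabled it just reports that no groups exist):

```shell
#!/bin/sh
# Sketch: list every IOMMU group and the PCI devices inside it.
# A GPU that shares its group with other devices generally can't be
# passed through cleanly on its own.
found=0
for group in /sys/kernel/iommu_groups/*; do
    [ -d "$group" ] || continue
    found=1
    echo "IOMMU group ${group##*/}:"
    for dev in "$group"/devices/*; do
        # Print the bare PCI address; pipe it through `lspci -nns`
        # if you want the human-readable device names.
        echo "  ${dev##*/}"
    done
done
[ "$found" -eq 1 ] || echo "No IOMMU groups found - is VT-d enabled in the BIOS?"
```

In my case the GT610 shows up alone in group 13, which is the clean grouping you want - so the passthrough trouble above isn't a grouping problem.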
  24. You'll like the flexibility and capabilities of unRAID. I've been impressed with all the tools available (as a semi-pro/amateur photographer). I run multiple OSes (Win 7, Win 10, macOS, Linux, etc.). So I like the specs, but I'd probably suggest adding a separate LSI SATA controller with cache on PCIe. I've never been happy with onboard SATA controllers, but that's just me. If the add-in card fails, you can simply replace it; but if your onboard SATA fails, you're in deep trouble. If you're going to be running 8x 3.5" HDDs but only moderate CPU loads, you should be OK with 550 watts. A lot of people are saying that 500GB is enough cache, but given the prices, and since you're talking about 20-40TB of media, I'd consider going to a 1TB drive, or maybe running 2x 500GB SSDs if you already have them - especially if you are working with a lot of large (4K) video files. And I like that compact Fractal Design Node 804 case design, but I would make sure the fans provide sufficient airflow for both the drives and the CPU. Last point: make sure that 8TB parity drive is new and server/enterprise quality.