rollieindc

Members - 104 posts

Posts posted by rollieindc

  1. 31 July 2019 - Upgrade to 6.7.2 issue?

    Upgraded the OS from 6.7.1 to 6.7.2, but something new happened. I had been able to access the system at https://tower (actually https://nastyfox), but now I have to use https://tower.local/ or the direct IP address http://192.168.1.119 to reach the server GUI. This was not an issue in 6.7.1. I tried a few fixes, including purging the network (DNS) entries in the router - without any success.
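    (Side note for anyone chasing the same thing: a quick sanity check you can run from a client on the LAN to see which names actually resolve. Just a sketch - the hostnames are the ones from my setup, and whether the .local name resolves this way depends on the client's mDNS support.)

    ```python
    # Quick resolver check - run from a client on the same LAN.
    # Hostnames/IP are the ones from my setup above; substitute your own.
    import socket

    for name in ("tower", "tower.local", "192.168.1.119"):
        try:
            print(f"{name:20} -> {socket.gethostbyname(name)}")
        except socket.gaierror as err:
            print(f"{name:20} -> does not resolve ({err})")
    ```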

     

    Plus, all of my drive maps on my Win7/64 laptop had to be remapped and logged in again. Another issue is that when I try to log in with any user name other than "root", the system doesn't recognize the user/password combination correctly. "root" works fine, but my other usernames (admin and peter) are not working very well.

  2. Update: June 23, 2019 - The continuing saga that is the nVidia - ASUS 1030 GT OC card.

     

    I did manage to get the video card to display from the display port in both boot and a Win10/64 VM configuration with OVMF, i440fx-2.12. (Yea, some success!) I had to make the card the primary display in the T310, essentially disabling the onboard Intel video chip in the server's BIOS. The boot screen now shows up through that display.

     

    To get this far with the VMs, I used the downloaded and edited BIOS from TechPowerUp in the VM's XML, and set the sound card to an ich9 model. So far it was looking good - until the machine rebooted while installing the nVidia drivers. (UGH!) At that point I got the dreaded "Error 43" code in the Windows driver interface box, and was stuck in 800x600 SVGA mode - unable to correct it.
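    (For anyone curious, the "edit" is basically trimming the extra NVIDIA header off the ROM so it starts at the 0x55AA option-ROM signature just ahead of the "VIDEO" marker - the same thing the usual hex-editor method does. Below is a rough sketch of that idea, not a verified tool, and the filenames are placeholders.)

    ```python
    # Rough sketch of the vBIOS header trim normally done by hand in a
    # hex editor: keep everything from the 0x55AA option-ROM signature
    # that sits just before the "VIDEO" marker. Filenames are placeholders.
    rom = open("GP108_original.rom", "rb").read()

    video = rom.find(b"VIDEO")                 # locate the marker text
    if video == -1:
        raise SystemExit("No VIDEO marker found - wrong ROM?")

    start = rom.rfind(b"\x55\xaa", 0, video)   # last 0x55AA before it
    if start == -1:
        raise SystemExit("No 0x55AA signature found - wrong ROM?")

    with open("GP108_trimmed.rom", "wb") as out:
        out.write(rom[start:])
    print(f"Trimmed {start} header bytes, kept {len(rom) - start}")
    ```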

     

    I will likely remove the card, dump its BIOS on another machine, and then use that in a new VM build to see if it works. I am unsure whether I need to go back to SeaBIOS and try that option to make it workable - but that's another path I could pursue. Also unclear if i440fx-3.1 is an option or not.

     

    In some regards, I am just encouraged to know that the 1030 GT video card is indeed working in the Dell T310, and that I can have it drive a display - even if it's "hobbled" at present.

  3. 16 hours ago, BuHaneenHHHH said:

    did you remove the backplane? If yes, then how did you get power to all the drives, since the server has only 2 SATA power cables? And I read that it is not good to have a splitter for more than 2 drives on one SATA power cable. (That's my main reason for asking for pictures :P)


    Heyas - Happy to share what I know.

     

    Just to be clear, I am running my T310 as a "headless" system, with no keyboard, mouse or video display. If you intend to use it as a desktop-type system to run games, then you might want to consider how the VMs and the other components are installed. I wanted something that could run 24/7 as a NAS, host some virtual machines (VMs) that I could remote-desktop into via a VPN, run some Docker apps (Plex), and otherwise house my digital photo library.

     

    Attached are some photos of my system. Excuse the pink floor; the house is in a state of "pre-remodeling". 

     

    Let's start with your power question. The 4x HDDs in the array have a power tap already.

     

    For my system - in one of the photos you can see the side cover panel removed and all the drive bays. I added a StarTech 3x 3.5" removable HDD kit in the top/front of the system, in place of the Dell RD (removable hard drive) and the Dell DVD drive that I pulled, and reused the Molex power tap from the RD and the SATA power tap from the DVD drive for it. Essentially, what I originally had to work with was a 2x 5.25" half-height (or one full-height) bay. I also included a view inside, so you can see the "inner workings" of the server - the power, video, and drive/cabling layout.

     

    You can see that new drive bay I installed in the other photos too. So I have the one Molex split into two SATA power outlets, plus the one existing SATA power outlet - for a total of three SATA power taps to work with. Two go into the new three-HDD drive bay (yes, it only needs two SATA taps for three 3.5" drives) and the remaining split tap powers the SSD.

     

    Overall, I like this setup and think I am good on power - but I would have liked it better if I had been able to find a 3x or 4x bay system that didn't use removable trays, and just accepted bare SATA/SAS drives by sliding them into a SATA port and locking them in with a cover. And yes, I currently have the SSD hanging from the cords ("ethereally mounted") - it will get hard-mounted later (or duct-taped, if enough people complain about it!)

     

    I also included a shot of the redundant removable power supplies. I really like this power supply feature, since I can swap out a bad PS and the system can run on the remaining PS in the interim. So "no," as you can see - I did not remove the backplane, and I wouldn't recommend it. If you install the redundant power bricks, you should be able to pull the existing power supply, replace it with the new ones, and add the redundant distribution board. You can see the distribution board just to the left of the power supplies in the overall "guts" view. The one existing Molex for the drive and the one existing SATA power connector came from that distribution board and are unchanged. The Molex and SATA power cables from the distribution board looked "beefy" enough, so I think I am OK for power consumption, given what I am using the system for and the way the power is distributed to the drives.

     

    CAVEAT EMPTOR: I WOULD NOT RECOMMEND THIS SETUP FOR A VIDEO PRODUCTION ARRAY. IF I WERE BEATING THIS ARRAY WITH VIDEO EDITS, I WOULD GO WITH SOMETHING MUCH MORE ROBUST AS A SERVER! (Besides, I really hate debugging this nVidia card in a VM. If that were my goal, I think I'd rather pay for a SuperMicro server. But - I am a cheap Scotsman with a mustache.)

     

    You can also see where I have my unRAID USB memory stick plugged into the motherboard. And trust me when I say this: booting unRAID from USB is not a speed issue. It's very compact, and it "unfolds" itself very quickly into a full-blown server. My unRAID total boot time is about 90 seconds to 2 minutes, and I leave it running 24/7. Now, just to be clear - you will want that SSD set up for the apps, Dockers, and virtual machines (VMs) in order to get something that is speedy/responsive. And the part I really like is that all the VMs can run independently, 24/7, as long as you don't try to use the same resource (e.g. the same graphics card) at the same time. And most VMs can run a virtual "display" and output it through VNC. I've already had Ubuntu, Win7 and Win10 images running simultaneously on my T310 over VNC.

     

    (Although I am still fighting with the VMs using an nVidia 1030GT graphics card - ARGH!)

     

    If someone just wants a single machine that is not up/on 24/7 - then I suggest they consider installing a Win10 image on a T310, slapping in a good SSD and video card, and going with it. But if they want a NAS that is on for more than an hour or two while they work with photos or watch videos (but not a production system), and that can also run VMs and apps (like Plex) - I am pretty much convinced this (unRAID) is the best way to go. If they wanted a production system that serves more than a few (5?) users, and did audio or video production - I'd be looking at a higher-class dual Xeon or dual Threadripper machine with a good PCIe backplane/video card compatibility track record.

     

    Did I miss anything?

    Questions?

    Comments?

    Argumentative Speculation?

    Philosophical Diatribes? 😃

    Attachments: IMG_2066A.jpg, IMG_2067A.jpg, IMG_2068A.jpg, IMG_2070A.jpg, IMG_2073A.jpg, 613B8s6fkrL._AC_UL654_QL65_.jpg

  4. On 6/21/2019 at 12:57 PM, BuHaneenHHHH said:

    did you use a SAS to SATA splitter to connect your drives to the controller to get 6Gb/s speed?

    can you take a picture of the inside of your server showing your drive arrangement?

    do you think I should change my OS from Windows Server to unRAID or something else?

     

    Heyas,

     

    Welcome to the T310 club. Feel free to ask/share info. Sounds like you have a good start on a nice system. I'd see about adding a redundant power supply, if you are able. Not a mandatory thing, but I like the reliability bump I get with mine. And yes, I have mine hooked into a UPS brick - just in case. 

     

    I had the H700, but went to an H200 card so I could use the SMART drive info to monitor my drive health more closely. But the H700 is a nice card too. One piece of advice for the H700: copy your RAID config settings, so that if you have a drive fail while running unRAID with parity, you can still rebuild your drive array. 

     

    And - yes, I use a SAS/SATA splitter cable (4 drives per channel, and 2 channels per card, for a total of 8 drives), and yes, I get 6Gb/s (or the drive's max speed) on them. eBay sells them for less than $10. Look for "SAS SFF-8087 4 SATA Hard Drive Cable". 

     

    If I were you, I’d consider making all three of the 3TB drives you have as RAID 0, and then make one of them a parity drive. That would give you 6TB of parity “covered” storage. Then for every drive you add, you add 3TB of “parity covered” storage. I went with 4TB drives, based on price point, but if you stuck with the 3TB and went up to 8 drives, you’d sit at 21TB maxed out.

     

    My only reason that I’d go back to RAID 5/10 disk configuration now, would be just for improved read speeds. And since I am using my system mostly for a few VMs (on the SSD) and as a NAS, I don’t see the need. 

     

    You might want to read up on how parity is implemented in unRAID versus RAID 1. There were compelling reasons that I went that way - for my needs. 

     

    With unRAID you’d likely want to use SSD drive as a cache drive, which works well for VMs. For me, I’d keep the SSD on the pcie card, you’ll probably get speeds as good as the H700, and it will probably run faster on a separate PCIe slot. You might want to go to a 500gb SSD- and move to an NVME type drive- but to start with 250GB is good (2-3 VMs plus a docker app) 

     

    And - sure, I can grab a pic of my drive layout and will post it later. There's enough room in the T310 to get 8x 3.5" drives inside the box if you think it out carefully. If you decide to go with an external drive cage, you might need an external connector card - like an H200e.

     

    FYSA - I do not recommend a USB HDD box on the system for anything other than file transfers. The USB ports on the T310 are s-l-o-w. But for temporary file use or transfers, they work well enough. 

     

    With unRAID, I think I read that you can run a Windows Server 2016 image as a virtual machine. Then you can assign the unRAID "shares" as virtual drives - however you want. Best of both worlds.

     

    But for me, I am able to attach my unRAID (NAS) shares directly to the PCs (Win 7, Win 10, Macs) on my network easily. It looks and works like a regular mapped network drive.

     

    More later...

     

  5. Moving on (June 15 update)

     

    Still happy with the Dell T310 server as an unRAID platform. Very few down days.

    Installed the Samsung 1TB SSD - definite performance bump with it over the Patriot 240GB one.

    Still working on the ASUS nVidia 1030 card - and tonight my next step is to make it the primary video card at boot-up.

    (Nope, that didn't work either. Will need to remove the card and dump the bios/firmware on another machine)

    Starting to enjoy the Plex docker. Still haven't got SickChill working well. Also tried to get the Deluge-VPN docker working, but I'm confused by the config file locations - the interface in unRAID 6.7 seems to have changed significantly.

    Uninstalled the DarkTable docker; the interface was just too clunky and difficult to use in a docker. Even on my bare-metal laptop, it's still clunky.

    Noted that there is now a docker for GIMP, but honestly - I'd rather run it through a Windows VM on the server.

  6. Moving on (June update)

     

    Updated/Max'ed out the RAM in my server to 32Gigs from 16Gigs. (16GB for $49 on eBay)

     

    Added a Plex docker. Tried BinHex's SickChill docker, but it's not working well (yet.)

    Also bought a 1TB SSD (Samsung, $89 at MicroCenter) but have yet to install it.

    Also still need to debug my ASUS nVidia 1030 card issues, but the system is working well even without that fully sorted - at present.

    Loaded up the DarkTable docker app for photo cataloging; not sure yet whether it's something I will keep.

  7. 4 hours ago, steve1977 said:

    How many GPUs do you have? I have the same issue. Everything works as long as I have only one GPU. I can pass it through. Once I add a second GPU to my mobo (not even using it for anything), the first one no longer passes through.

    Just the one (the Asus 1030). There is the onboard graphics chip, but I'm not using it - although I have not tried disabling it ... yet.

  8. So, I'm about out of ideas. (This is on my Dell T310 Server, specs in my signature)

    Still trying to get an Asus 1030 card to pass through, and having no joy.

    I've watched S-I-1's videos a dozen times. (Thank you!)

    Made over a dozen VMs with various settings.

    The unRAID server boot mode is set to Legacy.

    Downloaded the Asus 1030 BIOS from TechPowerUp and edited it, as shown.

    Am able to get Win 10 Pro to boot, video to show up, and fully load up. (login works)

    Can even get to where the card is recognized as a 1030.

    But any driver I try only leaves me with

    "Windows has stopped this device because it has reported problems. (Code 43)"

    -----------------------

    Within the VM:

    Doing passthrough of the CPU (Intel Xeon, from 1 to 6 CPUs), or emulated.

    Using the OVMF BIOS to boot. (Will not boot under SeaBIOS.)

    Machine is i440fx-2.8, -2.9 or -2.11. (Will not boot under i440fx-3.0.)

    Hyper-V is set to no.

    Using either SSD or HDD to boot makes no difference (although IDE seems to work best, occasionally).

    Am passing through the card and the HDMI sound lane as well. (And sometimes a second sound card.)

    The video ROM is included or not (and including it does seem to make the booting process more stable).
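    (One diagnostic that helped me reason about this: a little sketch run from the unRAID console that walks the standard /sys/kernel/iommu_groups tree and shows which PCI devices share a group with the GPU and its HDMI audio. Nothing unRAID-specific, and only useful if IOMMU is actually enabled.)

    ```python
    # List IOMMU groups and the PCI devices in each one - handy for
    # checking that the GPU and its HDMI audio sit in a clean group.
    import os

    base = "/sys/kernel/iommu_groups"   # absent if IOMMU is disabled
    for group in sorted(os.listdir(base), key=int):
        devices = os.listdir(os.path.join(base, group, "devices"))
        print(f"IOMMU group {group}: {', '.join(sorted(devices))}")
    ```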

     

    Also tried a similar setup with Windows 7, but it crashed on boot-up.

    Most of the time I can only use the VM with the MS basic (800x600) video display driver.

    Do ATI Radeon cards have the same issues?

    From some posts elsewhere, it looks like they are having similar issues with VMs too.

  9. Another day, another issue...

     

    Still having issues with the VM using the Asus nVidia 1030 card. I did pull out a monitor, keyboard, and mouse to make a proper go of it. And I don't think it's the card - it's the VM. So I continued tinkering with it, and more will be required. I am using direct video out to a monitor over the DVI interface. So yes, the card is recognized, but it still refuses to accept the drivers (new or old) and reverts back to the Microsoft Standard Display Adapter settings. This included removing the old driver with DDU (Display Driver Uninstaller) and reinstalling only the recommended one. At this point, I am going to start over and build a brand new VM, checking everything twice.

     

    As for the Plex docker, that has gotten a lot more stable and it's beginning to grow on me. I did buy the iOS app for my phone ($4.99), and it's good at connecting to my library (music & video). I'm not a big media dog by any means, but I have some tunes that I do like to listen to - and a few movies that I like to watch occasionally.

     

    I am also trying to set up the binhex DelugeVPN docker, but that's far more complex - and as I use NordVPN, I couldn't even determine which port was being used in the OpenVPN files and servers to get past the initial install stage. (And I have other issues at home that are a much higher priority.)

  10. 2 hours ago, nuhll said:

    get a 1050

     

    - costs around 100€

    - can encode 4K at around 60% GPU usage

    - doesn't need a power adapter

    - passively cooled


    Except that the 1050 draws 75 watts, and the PCIe bus (on my Dell T310 server) only supplies about 60W.

    For now, I think I will be able to make this 1030 work, at around half the price.

    (Did I mention that I am Scottish? Thrifty + Stubborn 😃 )

  11. Video Card Dump - SWAP & Plex Docker

     

    So, tired of dealing with the nVidia GT610 card in any VM builds, I pulled it and installed an ASUS Phoenix 1030 OC 2GB card ($75). I needed to place it in slot 2 because of the fan shield, and moved the SAS/SATA controller to slot 3 (leaving slot 1 empty). The system and the VM seem stable enough when the VM is booted. Still can't access a VM that uses the 1030 card yet (can't get TeamViewer to come up - probably a video driver issue that I still need to work through). I might need to go into the VM through VNC first using the QXL driver, keep the video card and driver something VGA-simple at first, change the card in the VM settings, and then load the new drivers from within the VM over TeamViewer. If someone else has a better idea, I am all ears. Mostly hunt and poke at this point. And yes, I've watched @SpaceInvaderOne 's videos. Problem is, I am currently running the system headless, so I'd have to pull out a monitor, keyboard and mouse to make a proper go of it.

     

    Also rather torqued off to find out that, apparently, the 1030 card can't do much for transcoding videos using NVENC/YUV or 4K. Well, that's not the whole reason I bought it, as I really wanted the physics engines on it for some graphics and scientific computing. Just surprised that it can't do much to transcode a 4K video, apparently.

     

    Still frustrated with the VM builds, I changed tack and installed a Plex media server via a docker. Somewhat unimpressed. At least it's a small app. Took me a while to realize that when I went into the Web UI, I had to change the URL to start with https: - as it wouldn't start otherwise. And it seems that everything I want to do with it requires a PlexPass or payments to the "Creators". The functionality is pretty basic too, although it did sort out my library without much excessive thrashing. (Once I realized that I could make a path in Plex named "nasmusic" that resolved to a path on my server shares, I had my library added. Oy! Half the time I was trying to get it to work, it threw me errors saying the path was illegal because I used a capital letter or hyphen.) Might be dumping Plex and looking at other media server systems.

     

     

  12. 16 hours ago, jonathanm said:

    Sorry, I can't leave this be. Parity in unraid really isn't error correction,...


    Ok, yes, my apologies - I over-simplified. It's really a failed-disk recovery method. Thanks for doing a better explanation than I did at the time (I blame a restless night and not enough sleep).

     

    But also to be fair, I've gotten repeated recommendations to add a parity drive for every eighth drive. Maybe it's to reduce the time required by the algorithms to compute a parity value and then write it (either during parity creation or recovery) - but that was the recommendation. And I've never seen where the algorithm can have one value cover data on up to 28 drives - if it can, great! I'll have to go back and re-read that part of the documentation.

     

    And well, there still seems to be a lot of talk in the forum about pre-clearing new drives - perhaps to avoid "start up" deaths - and I got quite a few admonitions to pre-clear any and all new drives being added to my arrays. But if it's not necessary, then ok... good to know that too! Thanks.

    Yeah, the onboard SATA is only 3Gb/s. I'd be looking at a new SATA adapter first thing. Also probably an NVMe card as an SSD cache/VM drive.

    It reminds me a lot of my Dell T310 in size and configuration.

    Does it have any spare Molex or PCIe power outlets/plugs? If not, you'll be limited in video card selection like I was. Dang x4 PCIe slots are very low wattage (like 25 watts) on most servers. Few were built with the kind of power needed for nVidia-type video passthrough.

  14. Sounds like a nice buy on a nice rig, Knipster.

     

    First, I'd add a UPS battery backup, if you don't already have one. Which disk controller is in it? For me, I'd probably be thinking about adding an NVMe M.2/SSD cache drive for the VMs (500GB-1TB), and maybe some removable drive bay slots. The new drive bay systems don't even require a sled - just slap the 2.5" or 3.5" SATA/SAS drive in the slot and close the door. Are you going to make one of the drives a parity drive? I would if it were my machine. 🙃


  15. Sounds like a good match for unRAID.

     

    Ok, let's start with my understanding of unRAID disks. (Happy to have others chime in with their experiences or opinions!)

    You can read up more on this in the unRAID manuals online. There are some great tutorials by @SpaceInvaderOne on YouTube. I recommend watching a few of them first before building your system.

     

    When you install unRAID, you'll build a "drive array" that can have parity-protected disk recovery capability. One thing to note: any drive going into the "drive array" will need to be pre-cleared and reformatted for unRAID to use effectively, so a backup is a "REALLY GOOD IDEA (tm)". The drives will be formatted into Linux formats, like xfs. The "parity drive" is a separate drive (or two) that can cover up to seven other "combined" drives with drive-recovery capability, such that if there is a drive failure in the array, you can rebuild the (entire) array by simply adding a new replacement drive for the one that failed. No muss, no fuss - just a rebuild and you're back in business. This is probably one of the biggest selling points for unRAID.

     

    And as mentioned below, any drive showing errors is best replaced, and that replacement handled promptly. For my system, I have a "hot swappable" 4TB drive sitting in standby (in an anti-static wrapper, in a drive bay drawer) should that ever be the case.

     

    The parity drive needs to be specifically assigned, and it must be the largest drive in the system (or at least as large as any data drive in the array). The parity drive's only job is data protection; it cannot be used for other file storage. If you go past seven drives, the suggested practice (as it has been recommended to me) is to add another parity drive. You can even add extra insurance with two parity drives covering the same (7-10 drive) "array". Having two parity drives has also been recommended to me by some unRAID users if your drives are getting older, are of "questionable" lifespan, or you need to ensure that the data is not vulnerable to being lost. There is a lot more to learn about parity protection, but it seems a valuable feature that makes unRAID unique compared to many other systems available.
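    (If a toy example helps make the parity idea concrete: this is just the underlying XOR math as I understand it, not unRAID's actual code, and the same single parity value works no matter how many data drives you feed it.)

    ```python
    # Toy single-parity example: parity is the bytewise XOR of every data
    # drive. If one drive dies, XOR-ing parity with the survivors gives
    # the lost contents back. Not unRAID's code - just the underlying idea.
    from functools import reduce

    def xor_blocks(blocks):
        return bytes(reduce(lambda a, b: a ^ b, byte_tuple) for byte_tuple in zip(*blocks))

    data_drives = [b"\x10\x20\x30", b"\x04\x05\x06", b"\xa0\xb0\xc0"]
    parity = xor_blocks(data_drives)

    # Pretend drive 1 failed; rebuild it from parity plus the survivors.
    survivors = [data_drives[0], data_drives[2]]
    rebuilt = xor_blocks(survivors + [parity])
    assert rebuilt == data_drives[1]
    print("rebuilt drive 1:", rebuilt.hex())
    ```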

     

    After that, the "drive array" that you build essentially merges the multiple hard disk drive spaces (even of varied sizes) into one "virtual drive" - but in such a way that you span drives, yet can still assign specific areas of the drives for specific tasks. For my system, I have 3x 4TB drives and 2x 600GB drives, totaling 13.2TB (with one 4TB drive as parity). After the array is built, you can then have separate share folders in that array, which you can set size limits on and access as network drives (and assign as Windows networked drives in VMs) - just like a network-attached storage (NAS) device. For example, I have my photos in my array in a protected user folder named "Photos", and have that folder assigned on my laptop and on my VMs as drive "P:" with my (retained) login information. And as long as I am on the network, I have access to those files - but no one else on the same network would.

     

    You could also run each hard drive as an "unassigned device" (note: this requires an added plugin for unRAID) that you could then reassign inside your VMs as needed. It's a little tricky to do, but not hard either. Any "unassigned device" is also outside of the parity protection scheme, and for hard drives it is then treated like any other standard hard drive.

     

    The unRAID cache drive is used for NAS file ingestion (uploads) to speed up the upload, and also holds the VMs and the Docker apps that you decide to install. This is typically made up of one or two SSDs, and it's not protected by the parity protection that the drive array has. These drives are also not "spun down", as I understand it, in order to help with file upload times. The uploaded cache files are moved off to the array automatically by unRAID, at an interval you specify. You can also move them "manually" if you need to. The files are not protected by the drive array's parity drive until they are moved.
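    (Conceptually, the mover is doing something like the sketch below - walking a share on the cache and moving the files over to an array disk while keeping the folder layout. This is only an illustration, not unRAID's real mover script, and the /mnt/cache and /mnt/disk1 paths plus the share name are placeholders.)

    ```python
    # Simplified picture of what the mover does: walk a share on the cache
    # drive and move files to an array disk, preserving the folder layout.
    # Paths and share name are placeholders, not unRAID's actual mover.
    import os, shutil

    CACHE_SHARE = "/mnt/cache/example_share"
    ARRAY_SHARE = "/mnt/disk1/example_share"

    for root, _dirs, files in os.walk(CACHE_SHARE):
        for name in files:
            src = os.path.join(root, name)
            dst = os.path.join(ARRAY_SHARE, os.path.relpath(src, CACHE_SHARE))
            os.makedirs(os.path.dirname(dst), exist_ok=True)
            shutil.move(src, dst)
            print(f"moved {src} -> {dst}")
    ```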

    There are also "unassigned devices" that act like separate drives, which can be either hard drives or USB drives. They can still be used for storage and backups, but are meant more for specific uses where parity protection isn't needed and you want them for only one purpose (like a VM system backup drive).

     

    Now, let's get back to your potential setup.

     

    From what I sense, you could run the Win 7 WMC VM in the background, with an Emby or Plex Docker server app running from unRAID. Plex could pull from the drive in your VM and add the new WMC-recorded video to your library - serving it back to the Xbox. And you could also run a Win10 VM at the same time, doing other things.

     

    You can even load-balance those on the cache with just the 500GB SSD - or put the VMs on the 1TB M.2 SSD - and still have plenty of room left over. (And in reality, you could run everything off one SSD and not be too cramped, so I'd try it first, then decide. For me, I'd want the speed from the M.2 for my VMs, especially if it's an NVMe.)

     

    You might want to add a little more RAM to your system if you can. If you run multiple VMs at the same time, it gets a little tight on 16 gigs. Not really bad - I've had a couple of 6GB VMs running at the same time. You'll just want to watch how you build them, and leave about 2GB of RAM for unRAID to do its thing. If you have multiple VMs and Docker apps running, that would also be a good reason for 24-32GB total.

     

    And the 1050 Ti should let you do any video transcoding (to mp4 format?) you need with Plex, and still have headroom for your VMs with passthrough. Plenty of good tutorials on setting those up. If you do decide to run Plex or Emby as a Docker app, it will all go on the SSD cache drive - so very low overhead with those running. There are Docker apps for capturing networked camera feeds (like you do with iSpy) as well. Note: Dockers, Apps, VMs, and Tools are all separate items in unRAID, which allows for more flexibility - and sometimes better (tool) options to consider.

     

    You can even have the external drive added as an external device any time you want, and even assign specific USB ports to the VMs for hot-plugging, if you need to. It's also easy to have a Win10 VM running any time you need it, as a "front cover" on the machine, while running unRAID in a "headless" configuration ("headless" meaning no monitor or keyboard needed). You can then VNC into your Windows 7 or 10 VMs via any standard browser on the network, and your Xbox 360 can access the Plex or Emby services - and access your disk array as if it were a secure NAS system.

     

    So... did that help?

  16. Heyas Lance,

     

    Fellow Dell server user here (T310). Looks like a nice big-data rig. Is it an 8-bay or 12-bay server?

     

    And let me know how the PERC H700 works for you. I decided to move to a PERC H200 (reflashed to 9211 IT mode) so I could let unRAID build my array, have it covered by an unRAID parity drive, and use the SMART (drive) status data to monitor my drives (for example, the drive temps show up in the dashboard). When I first did the unRAID build, I had all the drives set up as RAID 0 on the H700, then let unRAID build the array. But I realized (with some help) that if I lost a drive, it would be a lot easier to rebuild the entire data array with unRAID with the H200 in IT mode than to have to remove the drive from the H700 RAID array (you might want to read up on how to do that) and then add the new one back into the array. Doable, but not as easy as it might first appear.
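    (As an example of the kind of SMART data the IT-mode card exposes, here's a tiny sketch that shells out to smartctl from the smartmontools package and prints each drive's temperature attribute line. The /dev/sdX names are placeholders for whatever your drives enumerate as.)

    ```python
    # Pull drive temperature attributes with smartctl (smartmontools).
    # Works when the HBA passes drives through in IT mode; the device
    # names below are placeholders.
    import subprocess

    for dev in ("/dev/sdb", "/dev/sdc"):
        out = subprocess.run(["smartctl", "-A", dev],
                             capture_output=True, text=True).stdout
        for line in out.splitlines():
            if "Temperature_Celsius" in line:
                print(f"{dev}: {line.strip()}")
    ```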

     

    And you know you could also run the SSD SATA drives off the PERC H700 or H200 too, although in hindsight, I'd probably want to use a couple of NVMe drives on a PCIe card. (Looks like you have three slots on the riser; I have four on my motherboard.) Also, if you mount the SSDs internally, do you have any Molex or SATA power cords that are unused? If not, that can be an issue with Dell servers too - they were very "thrifty" with power internally on their servers. Wait... what about the two internal SATA drive slots that the R510 is supposed to already have?! (See figure 3-2 on page 85 of the manual at https://downloads.dell.com/manuals/all-products/esuprt_ser_stor_net/esuprt_poweredge/poweredge-r510_owner's manual_en-us.pdf and you'll see there is another way to mount two internal drives in the internal bay array.)

     

    Speaking of that riser - are you going to add a graphics card for video passthrough on your VMs? (And do you think you want to game on this system?) If so, look through my thread on "My Tower" - Dell does some power limiting in their servers that can affect which video cards you might want to consider, if you haven't already looked into it. The riser PCIe slots look to be x4s, and if so, any card will be limited to about 25 watts, which will make it hard to find any kind of "gamer" graphics card. (You can see the headaches I am going through at present in this regard.)

     

    If you have an H700 in slot one on the riser (maybe not, as it looks like it's in its own slot on the R510 - lucky!), and a graphics card for passthrough (like I am doing), then you might be down to your last card slot for a SATA/NVMe card... just a thought to consider. Power inside my T310 is limited (just 400 watts), and I don't have any PCIe power cords for video cards - so that means I am on a real power budget. In the end, I pulled the optical (CD) drive and the tape drive that were installed, and used the Molex power for additional hard drives.

     

    Anyway, sounds like you've got fun ahead of you... I enjoy unRAID on my Dell Server. I couldn't beat the price or nature of the beast.

     

  17. From what I have gathered, the cache drive typically is used for disk file ingestion, docker apps, and VMs in unRAID.

     

    And for me - based on what I know of unRAID - the NVMe should show up as just another device that you will need to pre-clear and format, as is standard practice for unRAID. Depending on how it is installed, you can use it as a second cache device, but if you already have a decent-sized SSD, then you might want to instead mount it with the Unassigned Devices app and have it just be for VMs. Then when you build your VMs, just point to that unassigned drive/device instead of the cache drive, and use it as the "boot drive" when building the VM. The advantage of this approach should be that you won't be competing for bandwidth with the SSD cache drive or with other Docker apps. Plus your VMs will likely have the fastest seek times of any device on the system. I have my VMs on another (fast HDD) drive in my array, and it's still pretty fast.

     

    About the only disadvantage I can think of is that the VMs wouldn't be covered by the parity drive, and you might want to back them up to the drive array from time to time - just in case you lose the NVMe. Well, that, and your cache SSD is likely slightly slower than the NVMe M.2 drive, right? But unless you're talking about a large NVMe (>1TB) and you're moving large files (videos, digital photography, or scientific files), you're not going to notice that much difference.

  18. Welcome, Frode. Think you'll find unRAID pretty stable and well supported, even for us that are "Over 50". And the forum here is pretty helpful. Have a similar build, but am using a used Dell T310 server as my main. Am working with Plex in a Docker app, but am a newbie with that app. But it runs. Mostly doing digital photography and some video work on the side as a hobby.

  19. On 4/27/2019 at 5:26 AM, nuhll said:

    Just as a hint, 8x or 16x doesn't make a noticeable difference for a GPU.

    For bandwidth, you are 100% right.

     

    Especially since this x16 slot has only x8 routing. And I am not looking to add, or expecting to use, any multi-GPU acceleration features in this build that the additional lanes would help with anyway. (This is a notable downside of the T310 architecture compared to other modern server mobo designs, but something I can probably live with given the price I've paid.)

     

    But it also "might" make a slight difference for power, since a full-sized x16 graphics card (without additional 6- or 8-pin connectors) can typically draw up to 5.5 A at +12 V (66 W) from the slot. Dell may have limited it to 40 watts, so that makes GPU card selection a little trickier. And I didn't see any specific documentation on the x8 slots, but noted that x4 cards are typically limited to 25 watts. From what I've read in other forums, the x8 slots are potentially limited to 25W in the same way. Again, given that the T310 server's power supply is 400 watts, I don't want to even fool with Molex-to-6/8-pin PCIe power plug adapters.

  20. I'm going to confirm everything that pappaq said, and offer that the nVidia 1030 card should be a good one to use in your build as a pass-through graphics card for a VM. (I'm looking at that one too!) And I use VNC to get into multiple VMs. It works pretty well. And using unRAID makes that set up pretty easy. I have 16GB of RAM, but sometimes think that I'd be better off with 32GB for some of the photo and video work I do. Most of my VMs are around 6GB of RAM, and they work well.

     

    And you might like TeamViewer - if you want to remote into your VMs from outside your own network. Otherwise, you might want to review some of SpaceInvaderOne's YouTube videos, as he does a really good step-by-step process in building VMs, remote access and doing graphic card pass-through.

     

    You can read about my build, and see some of the things I have gone through. I do run mine headless, but I used the internal graphics card for the initial system build. And you can use any basic video card for the initial build/headless config - it doesn't need to be an onboard video card for unRAID. For me, I didn't need a lot of CPU power. In fact, since most of my work is more of a NAS-type file service, I am happier with my parity-protected disk setup. (Yes, you can mix disks of different sizes and makes in the array, but the parity disk needs to be the largest drive in the system.) If you want to add extra disks (like your 3TB movie drive) outside of the disk array, you can. You just won't get the value of the "cache" or parity protection that unRAID offers for those disks. 

     

    Oh, and I currently use just one SSD as my cache drive. But for most things, I don't use the cache drive at all. Still, I'd recommend a 250-500GB SSD if you can get one. The Docker apps and VMs by default get installed on the cache drive, so a second cache drive will improve performance, but isn't totally necessary. And it's good to plan on a couple of additional CPU cores for unRAID beyond what you need for your daily driver. I have a 4-core/8-thread Xeon CPU in mine, and I keep 2 cores (2 threads) reserved for the OS/unRAID to chew on things. One thing to note: if you do run parity, you should look at when the parity check runs - because for my system (~16TB), it takes 6-8 hours to complete the parity check, and I do it weekly.
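    (Rough math on that parity check time, for anyone sizing their own window: it's roughly the parity drive size divided by the average sustained read speed. Assuming something like 160 MB/s across the platters - my guess, not a measured number - a 4TB parity drive lands right in that 6-8 hour range.)

    ```python
    # Rough parity-check duration: parity disk size / average read speed.
    # 4 TB at an assumed ~160 MB/s average is in the 6-8 hour ballpark.
    size_bytes = 4e12          # 4 TB parity drive
    speed_bps  = 160e6         # ~160 MB/s sustained average (assumption)
    print(f"{size_bytes / speed_bps / 3600:.1f} hours")   # ~6.9
    ```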

     

    I do recommend considering another 3TB drive (2 for drive space, 1 for parity) if you plan to continue expanding your library. But the good news is that you can add the parity drive later, or rebuild the system's parity onto a larger drive down the road. You can even move files from one drive to another without taking up too many CPU/RAM resources. My current parity drive is 4TB, and at some point I might go up to a set of 6 or 8TB drives - which is really easy to do: you just plug them in, format, assign as you want, and move files if needed. If you're just adding to the array, it's basically one click to add storage. For me, if my array gets up to more than 5 drives, I'll likely add a second parity drive for additional protection.