rollieindc

Everything posted by rollieindc

  1. Another day, another issue... Still having issues with the VM using the Asus nVidia 1030 card. I did pull out a monitor, keyboard, and mouse to make a proper go of it. And I don't think it's the card, it's the VM. So I continued tinkering with it, and more will be required. I am using direct video out to a monitor over the DVI interface. So yes, the card is recognized, but it still refuses to accept the drivers (new or old) and reverts back to the Microsoft Basic Display Adapter settings. This included removing the old driver with DDU (Display Driver Uninstaller) - and reinstalli
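
For anyone hitting the same wall: before fighting the guest-side driver, a first host-side sanity check is which kernel driver owns the card. A hedged sketch, not unRAID's own tooling (the PCI address is an assumption; find yours with `lspci | grep -i vga`):

```shell
#!/bin/bash
# Hedged sanity check: show which kernel driver has claimed the GPU.
# For clean passthrough you want "Kernel driver in use: vfio-pci",
# not nouveau/nvidia. The PCI address below is an assumption.
GPU_ADDR="01:00.0"
if command -v lspci >/dev/null 2>&1; then
    # -nnk prints vendor:device IDs plus the bound kernel driver
    lspci -nnk -s "$GPU_ADDR"
    result="checked $GPU_ADDR"
else
    result="lspci not found (install pciutils)"
fi
echo "$result"
```

If the host still has nouveau/nvidia bound, the guest driver install tends to fail exactly as described, reverting to the basic adapter.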
  2. Except that the 1050 draws 75 watts, and the PCIe bus (on my Dell T310 server) only supplies about 60W. For now, I think I will be able to make this 1030 work, at around half the price. (Did I mention that I am Scottish? Thrifty + Stubborn 😃 )
  3. Video Card Dump & Swap - Plex Docker. So, tired of dealing with the nVidia GT610 card in any VM builds, I pulled it and installed an ASUS Phoenix 1030 OC 2GB card ($75). Needed to place it in slot 2 because of the fan shroud, and moved the SAS/SATA controller to slot 3 (leaving slot 1 empty). The system and the VM seem stable enough when the VM is booted. Still can't access a VM that uses the 1030 card yet (can't get TeamViewer to come up; probably a video driver issue that I still need to work through). I might need to go into the VM through VNC at first using the QXL driver, and m
  4. Could you better define your nexus of a "first hiccup"?
  5. Ok, yes, my apologies - I over-over simplified. It's really a failed disk recovery method. Thanks for doing a better explanation than I did at the time (I blame a restless night and not enough sleep.) But also to be fair, I've gotten repeated recommendations to add a parity drive for every 8th drive. Maybe it's to reduce the time required by the algorithms to compute a parity value, and then write it (either in parity creation or in recovery) - but that was the recommendation. And I've never seen where the algorithm can have one value for data on up to 28 drives - if it can, great!
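
On the "one parity value for data on up to 28 drives" question: single parity is just a bitwise XOR across all data drives, so one parity set really can cover any number of them. A toy sketch, with made-up single bytes standing in for whole-drive contents (not unRAID's implementation):

```shell
#!/bin/bash
# Toy illustration: single parity is XOR across all data "drives",
# so any ONE lost drive can be rebuilt from the survivors plus the
# parity value, regardless of how many drives are in the array.
d1=0x3A; d2=0x5C; d3=0xF0; d4=0x07   # made-up per-drive bytes
parity=$(( d1 ^ d2 ^ d3 ^ d4 ))
# "Lose" drive 3, then rebuild it from the surviving drives + parity:
rebuilt=$(( parity ^ d1 ^ d2 ^ d4 ))
printf 'rebuilt=0x%02X expected=0x%02X\n' "$rebuilt" "$d3"
```

The time cost behind the one-parity-per-8-drives recommendation is probably rebuild exposure rather than the math: XOR is cheap, but rebuilding any one drive means reading every other drive end-to-end.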
  6. Looks like a solid build to me. Which HD Controller and video card are you using?
  7. Yeah, the onboard SATA is only 3Gb/s. I'd be looking at a new SATA adapter first thing. Also probably an NVMe card as an SSD cache/VM drive. It reminds me a lot of my Dell T310 in size and configuration. Does it have any spare Molex or PCIe power plugs? If not, you'll be limited in video card selection like I was. Dang x4 PCIe slots are very low wattage (like 25 watts) on most servers. Few were built to supply the power that nVidia-type video passthrough needs.
  8. Sounds like a nice buy on a nice rig, Knipster. First, I'd add a UPS battery backup, if you don't already have one. Which disk controller is in it? For me, I'd probably be thinking about adding an NVMe M.2/SSD drive for cache and the VMs (500GB-1TB), and maybe some removable drive bay slots. The new drive bay systems don't even require a sled; just slap the 2.5" or 3.5" SATA/SAS drive in the slot and close the door. Are you going to make one of the drives a parity drive? I would if it were my machine. 🙃
  9. Sounds like a good match for unRAID. Ok, let's start with my understanding of unRAID disks. (Happy to have others chime in with their experiences or opinions!) You can read up more on this in the unRAID manuals online. There are some great tutorials by @SpaceInvaderOne on youtube; I recommend watching a few of them first before building your system. When you install unRAID, you'll have a "drive array" that can be protected by a parity disk, giving you error-correction and disk-repair capability. One thing to note: any drive going into the "drive array" will need to be pre-cleared and refo
  10. Heyas Lance, Fellow Dell Server user here (T310). Looks like a nice big data rig. Is it an 8-bay or 12-bay server? And let me know how the PERC H700 works for you. I decided to move to a PERC H200 (reflashed to 9211 IT mode) so I could let unRAID build my array, have it covered by an unRAID parity drive - and use the SMART (drive) status data to monitor my drives (for example, the drive temps show up in the dashboard). When I first did the unRAID build, I had all the drives set up as RAID 0 on the H700, then let unRAID build the array. But I realized (with some help) that if I
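
The SMART benefit of IT mode is that each disk shows up as a plain block device, so standard tools can query it directly (a hardware-RAID volume on the H700 hides the member drives). A hedged sketch; the device glob and attribute names are assumptions:

```shell
#!/bin/bash
# Hedged sketch: with the HBA in IT mode, disks appear as /dev/sdX and
# smartctl can read health status and temperature per drive -- the same
# data unRAID surfaces on its dashboard.
checked=0
for dev in /dev/sd?; do
    [ -b "$dev" ] || continue          # skip if the glob matched nothing
    checked=$((checked + 1))
    if command -v smartctl >/dev/null 2>&1; then
        echo "== $dev =="
        # -H = overall health verdict, -A = vendor attributes (temps etc.)
        smartctl -H -A "$dev" | grep -Ei 'overall-health|temperature_celsius'
    fi
done
echo "devices checked: $checked"
```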
  11. From what I have gathered, the cache drive is typically used for disk file ingestion, docker apps, and VMs in unRAID. And for me - based on what I know of unRAID - the NVMe should just show up as another device that you will need to pre-clear and format, as is standard practice for unRAID. Depending on how it is installed, you can use it as a second cache device, but if you already have a decent-sized SSD, then you might want to instead mount it with the Unassigned Devices app and have it just be for VMs. Then when you build your VMs, just point to that unassigned dri
  12. Welcome, Frode. Think you'll find unRAID pretty stable and well supported, even for those of us who are "Over 50". And the forum here is pretty helpful. I have a similar build, but am using a used Dell T310 server as my main. Am working with Plex in a Docker app, but am a newbie with that app. But it runs. Mostly doing digital photography and some video work on the side as a hobby.
  13. For bandwidth, you are 100% right. Especially since this x16 slot has only x8 routing. And I am not looking at adding or expecting to use this build for any multi-GPU acceleration features that the additional lanes would help with anyway. (This is a notable downside of the T310 architecture compared to more modern server mobo designs, but something I can probably live with given the price I paid.) But it also "might" make a slight difference for power, since full-sized x16 graphics cards (without additional 6- or 8-pin connectors) can typically draw up to 5.5 A a
  14. I'm going to confirm everything that pappaq said, and offer that the nVidia 1030 card should be a good one to use in your build as a pass-through graphics card for a VM. (I'm looking at that one too!) And I use VNC to get into multiple VMs. It works pretty well, and using unRAID makes that setup pretty easy. I have 16GB of RAM, but sometimes think I'd be better off with 32GB for some of the photo and video work I do. Most of my VMs are around 6GB of RAM, and they work well. And you might like TeamViewer - if you want to remote into your VMs from outside your own network. Othe
  15. Oh... well that's a bugger (of a video card). First, just to get it written down - the T310 implements Intel® Virtualization Technology for Directed I/O (Intel VT-d) via the Intel 3420 chipset, and the built-in video is based on the Matrox G200eW with 8MB memory, integrated in the Nuvoton® WPCM450 (BMC controller); it will do up to 1280x1024@85Hz at 32-bit color for KVM. And it has a 3D PassMark of 42... which is squat-nothing. And am also just going to bookmark the following specs for reference: PCI 2.3 compliant • Plug n' Play 1.0a compliant • MP (Mult
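
While writing specs down: a quick way to confirm VT-d is actually active on the host, and to see which IOMMU group each device landed in, is to walk sysfs, much like what unRAID shows under Tools > System Devices. A hedged sketch:

```shell
#!/bin/bash
# Hedged sketch: list IOMMU groups from sysfs. If the directory is
# missing or empty, VT-d is off in BIOS, unsupported, or the kernel
# was booted without intel_iommu=on.
if [ -d /sys/kernel/iommu_groups ]; then
    for dev in /sys/kernel/iommu_groups/*/devices/*; do
        [ -e "$dev" ] || continue      # empty glob means no groups
        group=${dev#/sys/kernel/iommu_groups/}
        group=${group%%/*}
        echo "group $group: $(basename "$dev")"
    done
    msg="IOMMU scan done"
else
    msg="no /sys/kernel/iommu_groups (VT-d off or unsupported)"
fi
echo "$msg"
```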
  16. April 24, 2019 - VM with nVidia GT610/1GB card - or why I have "no love" for this video card - post pains. So, for an update on this build - am up to 6.6.7, no real issues... and my Dell T310 server has been running rock solid for 55+ days. Mostly just having to update apps and dockers. Parity checks run regularly and are not showing any signs of errors. Disks (still 4TB IronWolf drives) are all running reasonably cool (always less than 38C/100F), and I continue to add to my NAS build as I am able. Still struggling with making a choice between a larger SSD cache drive (500GB=$55) a
  17. I guess I need to add myself to the "No Love for the GT610" community. For me - working with unRAID 6.6.7 on a Dell T310 (with Intel VT-d confirmed), with either SeaBIOS or OVMF, with CPU pass-through or emulated, and building a Windows 10 64-bit VM with the nVidia GT610 in its own IOMMU group (13) - is giving me the same headache in video passthrough as well. Note - I am using TeamViewer (v.14). And I followed the GPU ROM BIOS edits that SpaceInvaderOne had suggested, made sure that there was no added header - and I still can't get the nVidia card to work reliably, after multip
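
For reference, the dump step behind that ROM-edit method can be done from the unRAID host via sysfs. A minimal sketch; the PCI address is an assumption, the card must be idle (not driving the console), and it needs root:

```shell
#!/bin/bash
# Hedged sketch of dumping a GPU's vBIOS out of sysfs, the raw step
# behind the ROM-edit method. The PCI address below is an assumption --
# substitute the card's actual address from lspci.
DEV="/sys/bus/pci/devices/0000:01:00.0"
if [ -w "$DEV/rom" ]; then
    echo 1 > "$DEV/rom"                # unlock the ROM for reading
    cat "$DEV/rom" > /tmp/vbios.rom
    echo 0 > "$DEV/rom"                # lock it again
    result="dumped to /tmp/vbios.rom"
else
    result="no writable ROM at $DEV (adjust address, run as root)"
fi
echo "$result"
```

The "no added header" check the post mentions is done in a hex editor: the file should begin at the standard 0x55 0xAA expansion-ROM signature, with anything before it stripped.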
  18. You'll like the flexibility and capabilities of unRAID. I've been impressed with all the tools available (as a semi-pro/amateur photographer). I run multiple OSes (Win 7, Win 10, MacOS, Linux, etc.), so I like the specs, but I'd probably suggest adding a separate LSI SATA controller with cache on PCIe. I've never been happy with onboard SATA controllers, but that's just me. If the add-in controller fails, you can simply replace it; but if your onboard SATA fails, you're in deep trouble. If you're going to be running 8x 3.5" HDDs, but only going to have moderate CPU loads - you should
  19. So, just a boring update. Not much to report lately, as I've been traveling a lot for work and dealing with a "wonky" DSL connection at my house. The Dell T310 server has been running smoothly on unRAID 6.6.6, and hasn't had any real issues since I installed the H200 controller in it. My biggest quandary has been deciding whether I should increase the cache drive size (500GB or 1TB) or up the RAM to 32GB. To be honest, I don't need to do either at the moment. And my drive array appears to be running just fine. So, nice and quiet for me. Looking forward to the update on 6.7/6.
  20. Jonathanm, Just to follow up on this thread, I exchanged the H700 for an H200 flashed over to IT mode. I did this as much to obtain the SMART drive information on all the attached drives - in the hopes of increasing the reliability of the system through knowledge of that information - as anything else. I am not sure whether the loss of the H700's 512MB of cache memory will be a performance hit, but since this system is mostly going to be pulling NAS storage duty, I think the overall performance will be "good enough" for my needs. Also the ability to work with the dr
  21. Update: 03DEC2018 - Replacing the H700 SAS controller with an H200 flashed into IT mode. New stats: Dell PowerEdge T310 (flashed to latest BIOS). RAM: 16GB ECC Quad Rank. Controller: Dell H200 SAS flashed to IT mode, replacing the H700 (which had been flashed to the latest BIOS, with all drives running in RAID 0). Drives: Seagate IronWolf 4TB SATA (1x parity, 3x data) + 2x 600GB Dell + 240GB SSD (for VMs). Note: a three-drive 3.5" bay system (from StarTech.com) is installed in the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drives. I plan to
  22. Well, that's actually my point. That's not all I want. And yes, I realize I may not get what I want. My goals were to: 1) secure/encrypt the data pathway into the server; 2) secure/hide the IP address of the home server (as much as possible), and close the data pathway to avoid tracerouting into the rest of the home network. (1) would be easy enough to do, just using a VPN tunnel from an external client into the server. But connecting into this tunnel directly requires an open, insecure port into my network from the router in order to establish t
  23. Thanks Jonathanm, I've been using NordVPN from my client side for a while now to connect to various servers. And yes - I was thinking that an OpenVPN docker would be the answer to keeping my home network (and home IP address) as secure as possible, and that it would connect my unRAID server to the NordVPN servers - permitting me to then establish the most secure tunnel from my "offsite" client/laptop (at a coffee shop) into the server (sitting at home)... but perhaps I am misunderstanding something with the protocols(?). I really don't want an access point into my entire network (whi
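
For context, the docker approach being described boils down to running an ordinary OpenVPN client on the server. A minimal client-config sketch; the remote hostname is a placeholder, since NordVPN publishes per-server .ovpn files containing the real hostnames, certificates, and cipher settings:

```
# Hypothetical client fragment -- real provider .ovpn files include
# the server hostname, embedded certificates, and cipher directives.
client
dev tun
proto udp
remote <server-hostname> 1194
resolv-retry infinite
nobind
persist-key
persist-tun
remote-cert-tls server
auth-user-pass
verb 3
```

Note this only secures traffic *leaving* the server toward the VPN provider; inbound access from a laptop at a coffee shop still needs its own tunnel (and thus some listening port or relay), which is the crux of the protocol question above.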
  24. Update: Thursday, 13SEP2018 & 15OCT2018. I have been running unRAID 6.4 (now 6.5, soon 6.6.1) for a while now on the Dell PowerEdge T310, but I've been doing some hardware upgrades. So let me see if I can show where I started, and where I am going. Dell PowerEdge T310 (flashed to latest BIOS). RAM: 8GB → 16GB ECC Quad Rank. Controller: SAS DRAC 6ir → SAS H700 (flashed to latest BIOS, all drives running in RAID 0). Drives: 2x 600GB SAS Dell/Seagate → IronWolf 4TB SATA (1x parity, 2x → now 3x data) + 1x → 2x 600GB Dell/Seagate SAS + 120GB → 240GB SSD (for VMs), I a