
rollieindc

Members
  • Content Count

    55
  • Joined

  • Last visited

Community Reputation

4 Neutral

1 Follower

About rollieindc

  • Rank
    Advanced Member

Converted

  • Gender
    Male
  • Location
    Washington DC USA


  1. Thank you for the reply, Jonathanm - I appreciate you taking the time to follow up with me. To your comment: I can get to the management console with the "root" or "admin" accounts. However, I used to be able to use https://nastyfox/Main, and now I have to use https://nastyfox.local/Main to reach the server. I can also get to the management console at https://192.168.0.119/Main. But I am unable to reliably map a network drive to my unRAID shares in Win7/64, like the one named "Music". Under 6.7.1, Win7/64 would connect the "M:" drive to //nastyfox/Music with any user and password combination. Since the upgrade to 6.7.2, nothing lets me map it to "M:" except "//192.168.0.119/Music", which I find rather odd behavior.
  2. 31JULY2019 - Upgrade to 6.7.2 issue? I upgraded the OS from 6.7.1 to 6.7.2, and something new happened. I had been able to access the system with https://tower (actually https://nastyfox), but now I have to use https://tower.local/ or the direct IP address http://192.168.1.119 to get to the server GUI. This was not an issue in 6.7.1. I tried a few fixes, including purging the network (DNS) entries in the router, without any success. On top of that, all of the drive maps on my Win7/64 laptop had to be remapped and logged in again. Another issue: when I try to log in with any user name other than "root", the system does not recognize the user/password combination. "root" works fine, but my other usernames (admin and peter) are not working. (See the name-resolution sketch after this post list.)
  3. 32GB DDR3 ECC - Using most of it for running Windows VMs.
  4. On the gallery image "scenic mountain": Glacier Point at Yosemite National Park on a clear day! Nice shot.
  5. I just upgraded my tower from 6.7.0 to 6.7.1 - no issues. Thanks for plugging the denial-of-service and processor vulnerabilities! 😀
  6. Update: June 23, 2019 - The continuing saga of the nVidia/ASUS 1030 GT OC card. I did manage to get the video card to display from its display port output both at boot and in a Win10/64 VM configuration with OVMF and i440fx-2.12. (Yea, some success!) I had to make the card the primary display in the T310, essentially disabling the onboard Intel video chip in the server's BIOS; the boot screen now shows up through that display. To get this far with the VMs, I used the BIOS downloaded from TechPowerUp and edited it for the VM's XML (see the ROM-trimming sketch after this post list), and set the sound card to an ich9 model. So far it was looking good - until the machine rebooted while installing the nVidia drivers. (UGH!) At that point I got the dreaded "Error 43" code in the Windows driver interface box and was stuck in 800x600 SVGA mode, unable to correct it. I will likely remove the card, dump its BIOS from another machine, and then use that in a new VM build to see if that works. I am unsure whether I need to go back to SeaBIOS and try that option to make it workable - but that's another path I could pursue. It's also unclear whether i440fx-3.1 is an option. In some regards, I am just encouraged to know that the 1030 GT video card is indeed working in the Dell T310, and that I can have it be a display output - even if "hobbled" at present.
  7. Heyas - happy to share what I know. Just to be clear, I am running my T310 as a "headless" system, with no keyboard, mouse, or video display. If you intend to use it as a desktop-type system to run games, then you might want to consider how the VMs and the other components are installed. I wanted to be able to run 24/7 as a NAS, with some virtual machines (VMs) that I could remote-desktop into via a VPN, have some Docker apps (Plex), and otherwise house my digital photo library. Attached are some photos of my system. Excuse the pink floor, the house is in a state of "pre-remodeling". Let's start with your power question. The 4x HDDs in the array have a power tap already. In the photos of my system you can see how I removed the side cover panel, so all the drive bays are visible. I added a StarTech 3x 3.5" HDD removable kit in the top/front of the system. So I used the molex power tap from the removed RD (Dell removable hard drive) and the SATA power from the removed Dell DVD drive, since I replaced both of those with that aftermarket SATA removable drive bay. Essentially, what I had originally was a 2x 5.25" half-height (or one full-height) bay to work with. I also included a view inside, so you can see the "inner workings" of the server: power, video, and drive/cabling layout. You can see the new drive bay I installed in the other photos too. So I have the one molex split into two SATA power outlets, plus the one existing SATA power outlet, for a total of three SATA power taps to work with. Two go into the new three-HDD drive bay (yes, it only needed two SATA taps for three 3.5" drives), and I used one SATA power split tap for the SSD. Overall, I like this setup and think I am good on power - but I would have liked it better if I had been able to find a 3x or 4x bay system that didn't use removable trays and just accepted bare SATA/SAS drives by sliding them into a SATA port, locked with a cover. And yes, I currently have the SSD hanging from the cords ("ethereally mounted"); it will get hard-mounted later (or duct-taped if enough people complain about it!). I also included a shot of the redundant removable power supplies. I really like this power supply feature: I can swap out a bad power supply and the system can run on the remaining one in the interim. So "no," as you can see, I did not remove the backplane - and I wouldn't recommend it. If you install the redundant power bricks, you should be able to pull the existing power supply, replace it with the new ones, and add the redundant distribution board. You can see the distribution board just to the left of the power supplies in the overall "guts" view. The one existing molex for the drive and the one existing SATA power connector came from that distribution board and are unchanged. The molex and SATA power cables from the distribution board looked "beefy" enough, so I think I am OK for power consumption given what I am using the system for and the way the power is distributed to the drives. CAVEAT EMPTOR: I WOULD NOT RECOMMEND THIS SETUP FOR A VIDEO PRODUCTION ARRAY. IF I WAS BEATING THIS ARRAY WITH VIDEO EDITS, I WOULD GO WITH SOMETHING MUCH MORE ROBUST AS A SERVER! (Besides, I really hate debugging this nVidia card issue in a VM. If that was my goal, I think I'd rather pay for a SuperMicro server. But - I am a cheap Scotsman with a mustache.) You can also see where I have my USB unRAID memory stick plugged into the motherboard.
And trust me when I say this: booting unRAID from USB is not a speed issue. It's very compact, and it "unfolds" itself very quickly into a full-blown server. My unRAID total boot time is about 90 seconds to 2 minutes, and I leave it running 24/7. Now, just to be clear, you will want that SSD set up for the apps, Dockers, and virtual machines (VMs) in order to get something speedy/responsive. And the part I really like is that all the VMs can run independently, 24/7, as long as you don't try to use the same resource (e.g. the same graphics card) at the same time. And most VMs can run a virtual "display" and output through VNC. I've already had Ubuntu, Win7, and Win10 images running simultaneously on my T310 over VNC. (Although I am still fighting with the VMs using an nVidia 1030 GT graphics card - ARGH!) If someone just wants a single machine that is not up/on 24/7, then I suggest they consider installing a Win10 image on a T310, slapping in a good SSD and video card, and going with it. But if they want a NAS that is on for more than an hour or two while they work with photos or watch videos (but not a production system), and that can also run VMs and apps (like Plex), I am pretty much convinced this (unRAID) is the best way to go. If they wanted a production system that serves more than a few (5?) users and does audio or video production, I'd be looking at a higher-class dual Xeon or dual Threadripper machine with a good PCIe backplane/video card compatibility track record. Did I miss anything? Questions? Comments? Argumentative speculation? Philosophical diatribes? 😃
  8. Heyas, welcome to the T310 club. Feel free to ask/share info. Sounds like you have a good start on a nice system. I'd see about adding a redundant power supply, if you are able. Not a mandatory thing, but I like the reliability bump I get with mine. And yes, I have mine hooked into a UPS brick, just in case. I had the H700, but went to an H200 card so I could use the drives' SMART info to monitor their health more closely. But the H700 is a nice card too. One piece of advice for the H700: copy your RAID config settings for the H700; that way, if a drive fails under unRAID and you were running parity, you can still rebuild your drive array. And yes, I use SAS/SATA splitter cables (4 drives per channel, and 2 channels per card, for a total of 8 drives), and yes, I get 6Gb/s (or the drive's max speed) on them. eBay sells them for less than $10; look for "SAS SFF-8087 4 SATA Hard Drive Cable". If I were you, I'd consider making each of the three 3TB drives you have its own (single-drive) RAID 0, and then making one of them the parity drive. That would give you 6TB of parity-"covered" storage. Then for every drive you add, you add 3TB of parity-covered storage. I went with 4TB drives, based on price point, but if you stuck with 3TB and went up to 8 drives, you'd sit at 21TB maxed out (see the capacity sketch after this post list). My only reason to go back to a RAID 5/10 disk configuration now would be improved read speeds, and since I am using my system mostly for a few VMs (on the SSD) and as a NAS, I don't see the need. You might want to read up on how parity is implemented in unRAID versus RAID 1; there were compelling reasons that I went that way for my needs. With unRAID you'd likely want to use the SSD as a cache drive, which works well for VMs. For me, I'd keep the SSD on the PCIe card; you'll probably get speeds as good as the H700, and it will probably run faster on a separate PCIe slot. You might want to go to a 500GB SSD and move to an NVMe-type drive, but 250GB is good to start with (2-3 VMs plus a Docker app). And sure, I can grab a pic of my drive layout and will post it later. There's enough room in the T310 to get 8x 3.5" drives inside the box if you think it out carefully. If you decide to go with an external drive cage, you might need an external connector card, like an H200e. FYSA - I do not recommend a USB HDD box on the system for anything other than file transfers. The USB ports on the T310 are s-l-o-w, but for temporary file use or transfers they work well enough. With unRAID, I think I read that you can run a Windows 2016 image as a virtual machine; then you can assign the unRAID "shares" as virtual drives as you want. Best of both worlds. But for me, I am able to attach my unRAID (NAS) shares directly to PCs (Win7, Win10, Macs) on my network easily. Looks and works like a regular network-mapped drive. More later...
  9. Moving on (June 15 update). Still happy with the Dell T310 server as an unRAID platform - very few down days. Installed the Samsung 1TB SSD; definite performance bump with it over the Patriot 240GB one. Still working on the ASUS nVidia 1030 card - tonight my next step is to make it the primary video card at boot-up. (Nope, that didn't work either. I will need to remove the card and dump its BIOS/firmware on another machine.) Starting to enjoy the Plex docker. Still haven't got SickChill working well. Also tried to get the Deluge-VPN docker working, but I'm confused by the config file locations - the interface in unRAID 6.7 seems to have changed significantly. Uninstalled the DarkTable docker; the interface was just too clunky and difficult to use in a docker. Even on my bare-metal laptop it's still clunky. Noted that there is now a docker for GIMP, but honestly, I'd rather run it through a Windows VM on the server.
  10. Moving on (June update). Updated/maxed out the RAM in my server to 32GB from 16GB (16GB for $49 on eBay). Added a Plex docker. Tried BinHex's SickChill docker, but it's not working well (yet). Also bought a 1TB SSD (Samsung, $89 at MicroCenter) but have yet to install it. Also still need to debug my ASUS nVidia 1030 card issues, but the system is working well at present without that working fully. Loaded up the DarkTable docker app for photo cataloging; not sure yet whether it's something I will keep.
  11. That and disabling the motherboard graphics chip are all I have left to try, but that will have to wait a bit. (Family life calls!)
  12. Just the one (ASUS 1030). There is the onboard graphics chip, but I'm not using it - although I have not tried disabling it ... yet.
  13. So, I'm about out of ideas. (This is on my Dell T310 server; specs in my signature.) Still trying to get an ASUS 1030 card to pass through, and having no joy. I've watched S-I-1's videos a dozen times. (Thank you!) Made over a dozen VMs with various settings. The unRAID server boot mode is set to Legacy. Downloaded the ASUS 1030 BIOS from TechPowerUp and edited it, as shown. Am able to get Win 10 Pro to boot, video to show up, and fully load up (login works). Can even get to where the card is recognized as a 1030. But any driver I try only leaves me with "Windows has stopped this device because it has reported problems. (Code 43)" ----------------------- Within the VM: passing through the CPU (Intel Xeon, 1 to 6 CPUs) or using an emulated one. Using OVMF BIOS to boot (will not boot under SeaBIOS). Machine type is i440fx-2.8, 2.9, or 2.11 (will not boot under i440fx-3.0). Hyper-V is set to no. Booting from either the SSD or the HDD makes no difference (although IDE seems to work best, occasionally). Am passing through the card and its HDMI audio device as well (and sometimes a second sound card). The video ROM is included or not (including it does seem to make the boot process more stable). Also tried a similar setup with Windows 7, but it crashed on boot-up. Most of the time I can only use the VM with the MS basic (800x600) video display driver. (See the Code 43 workaround sketch after this post list.) Do ATI Radeon cards have the same issues? From some posts elsewhere, it looks like they are having similar issues with VMs too.
  14. Another day, another issue... Still having issues with the VM using the ASUS nVidia 1030 card. I did pull out a monitor, keyboard, and mouse to make a proper go of it. And I don't think it's the card; it's the VM. So I continued tinkering with it, and more will be required. I am using direct video out to a monitor over the DVI interface. So yes, the card is recognized, but it still refuses to accept the drivers (new or old) and reverts back to the Microsoft standard display adapter. This included removing the old driver with DDU (Display Driver Uninstaller) and reinstalling only the recommended one. At this point, I am going to start over and build a brand-new VM, checking everything twice. As for the Plex docker, that has gotten a lot more stable and it's beginning to grow on me. I did buy the iOS app for my phone ($4.99), and it's good at connecting to my library (music & video). I'm not a big media dog by any means, but I have some tunes that I like to listen to and a few movies that I like to watch occasionally. I am also trying to set up the binhex DelugeVPN docker, but that's far more complex - and as I run NordVPN, I couldn't even determine which port the OpenVPN files and servers use, to get past the initial install stage. (And I have other issues at home that are a much higher priority.)
  15. Except that the 1050 draws 75 watts, and the PCIe bus (on my Dell T310 server) only supplies about 60W. For now, I think I will be able to make this 1030 work, at around half the price. (Did I mention that I am Scottish? Thrifty + stubborn 😃 )
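
For the name-resolution issue in posts 1 and 2 above (nastyfox working before the upgrade, nastyfox.local or the raw IP afterwards), a minimal check is to see which of the names a client on the same LAN can actually resolve. This is only an illustrative sketch, not something from the original thread; the hostnames and IP are the ones quoted in those posts.

    # Sketch only (not from the original posts): check which server names a LAN
    # client can resolve. Hostnames/IP are the ones mentioned above; adjust as needed.
    import socket

    for name in ("nastyfox", "nastyfox.local", "192.168.0.119"):
        try:
            print(f"{name:20} -> {socket.gethostbyname(name)}")
        except socket.gaierror as err:
            print(f"{name:20} -> does not resolve ({err})")

If the short name fails while the .local name works, the client is likely falling back to mDNS rather than NetBIOS/DNS for that host, which would be consistent with only the IP-based share mapping being reliable.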
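
Posts 6 and 13 mention downloading the 1030's BIOS from TechPowerUp and editing it before including it in the VM's XML. The usual edit is trimming the extra header so the ROM file starts at the 0x55 0xAA option-ROM signature just before the "VIDEO" string. The helper below is only a sketch of that idea under that assumption - the file names are placeholders, and the edit described in the posts was presumably done in a hex editor.

    # Hypothetical helper: trim an NVIDIA vBIOS dump so it starts at the 0x55 0xAA
    # option-ROM signature that sits just before the "VIDEO" marker. File names are
    # placeholders, not the ones used in the posts above.
    def trim_vbios(src: str, dst: str) -> None:
        data = open(src, "rb").read()
        video_at = data.find(b"VIDEO")
        if video_at < 0:
            raise ValueError("no 'VIDEO' marker found - is this an NVIDIA ROM dump?")
        start = data.rfind(b"\x55\xaa", 0, video_at)
        if start < 0:
            raise ValueError("no 0x55AA signature found before the 'VIDEO' marker")
        open(dst, "wb").write(data[start:])
        print(f"removed {start} leading bytes; ROM now starts at the signature")

    trim_vbios("gt1030_dump.rom", "gt1030_trimmed.rom")  # placeholder file names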
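
The capacity numbers in post 8 (6TB protected from three 3TB drives, 21TB from eight) follow from how unRAID parity works: the parity drive must be at least as large as any data drive, and every other drive counts as usable space. A small sketch of that arithmetic, under that assumption:

    # Sketch of the capacity arithmetic behind post 8 (drive sizes in TB).
    # Assumes unRAID-style parity: the largest drive(s) hold parity, the rest are data.
    def usable_tb(drive_sizes_tb, parity_drives=1):
        drives = sorted(drive_sizes_tb, reverse=True)
        return sum(drives[parity_drives:])

    print(usable_tb([3, 3, 3]))   # three 3TB drives, one parity -> 6 TB protected
    print(usable_tb([3] * 8))     # eight 3TB drives, one parity -> 21 TB protected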
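
The "Code 43" in posts 6 and 13 is commonly the NVIDIA driver refusing to start once it detects it is running under a hypervisor. A widely used workaround (not something confirmed in these posts) is to give the VM a custom Hyper-V vendor_id and hide the KVM signature in the libvirt domain XML. The snippet below is a sketch that patches an exported copy of the VM definition; the file name and vendor string are placeholders, and in unRAID the same two elements can be added by hand in the VM's XML view.

    # Sketch (placeholder path/values): add the common Code 43 workaround bits to a
    # libvirt domain XML - a spoofed Hyper-V vendor_id plus <kvm><hidden state='on'/>.
    import xml.etree.ElementTree as ET

    DOMAIN_XML = "win10_gpu.xml"  # placeholder: an exported copy of the VM definition

    tree = ET.parse(DOMAIN_XML)
    root = tree.getroot()
    features = root.find("features")
    if features is None:
        features = ET.SubElement(root, "features")

    hyperv = features.find("hyperv")
    if hyperv is None:
        hyperv = ET.SubElement(features, "hyperv")
    if hyperv.find("vendor_id") is None:
        ET.SubElement(hyperv, "vendor_id", {"state": "on", "value": "0123456789ab"})

    kvm = features.find("kvm")
    if kvm is None:
        kvm = ET.SubElement(features, "kvm")
    if kvm.find("hidden") is None:
        ET.SubElement(kvm, "hidden", {"state": "on"})

    tree.write(DOMAIN_XML)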