rollieindc

Members
  • Content Count

    62
  • Joined

  • Last visited

Community Reputation

4 Neutral

1 Follower

About rollieindc

  • Rank
    Advanced Member

  • Gender
    Male
  • Location
    Washington DC USA

  1. No, you guys are amazing... and we appreciate all that gets done and shared on unRAID Forums. 👍 Thank you!
  2. Hello, fellow T310 user! I am considering some of the same pathways as you, and am often out looking for used server hardware. I did shoehorn multiple drives (up to 7 SATA/SAS drives plus a 1TB SSD) and a GT 1030 into the T310 that I am running as a NAS, but I am struggling to get either a Win7/64 or Win10/64 VM to address the 1030 card correctly. (You can find the thread on my T310 build in this forum.) I picked the 1030 because it is relatively low power, which matters in the T310, and I like the T310's Xeon X34xx 4-core/8-thread design for CPU utilization. But if I had it to do over again, I'd pick the "low profile" version over the ASUS GT 1030 OC version I currently have. The 1030 does work in the T310, but it's "finicky" - so I might try an Ubuntu VM to see if that helps, based on your comment. (Thanks!) With that in mind, I think you could look at a few strategies:

1) Get a GT 1030 low-profile card for about $100, install it in the T310, and "hope" it works better than my experience. Or you could look at an AMD or other graphics card. The problem is that nVidia really is the leader in GPU performance and in support by computation-heavy code for transcoding and Artificial Intelligence (AI) applications, and the T310 just doesn't have the power (no PCIe 6/8-pin power connectors) to drive higher-end cards like the RTX 2080 or GTX 1080 Ti. You could back down to a low-power card like a GT 610 - but it really depends on the application and how much GPU power you want/need. (And I need to remind myself to look at Zoneminder, thanks for that tip!) An upgraded GPU could greatly reduce your CPU load on things like feature recognition.

2) Surf Craigslist/eBay/LetGo and come up with a nice (used) dual-Xeon server in a 1U or 2U rack for very little investment ($100-200). That approach would still leave you a fair bit of funding for a decent GPU card (GTX 1050 or GTX 1070 at $100-$200?). You'd probably also want to update the SAS/SATA card ($50-$100) and/or max out the ECC server RAM, which will run a few more dollars. Given that both Dell and HP seem to be very proprietary in their motherboard designs (which drives me crazy at times), I would tend to recommend the SuperMicro or similar "open architecture" server pathway, with at least DDR3 ECC RAM, if you want a server-grade, reliable build. [For example, I just picked up a SuperMicro system (sans drives or power supply, but with 32GB of DDR2 RAM) for $20 - it included a "bitcoin" brand 8x riser, so I am guessing it was a failed bitcoin-mining rig attempt. For another $25 I can upgrade the dual CPUs to 3+ GHz, and I can probably find an ATX power supply for around $40. That pathway has significant performance potential over the T310.]

3) Consider just upgrading the Xeon X3430 @ 2.4 GHz in the T310 to something like an Intel Xeon X3480 @ 3.06 GHz, which are selling on eBay for $25. That's roughly a 25% performance bump for not a lot of money (rough arithmetic sketch at the end of this post), and it leaves room in the budget for a better SAS/SATA controller, more drives, and a good low-end graphics card like a GT 610 or GT 1030. That upgrade might drop CPU usage from 70% to around 50% and allow a little more headroom for feature recognition. Also, if you don't already have the Dell H200 SAS controller, I would highly recommend that upgrade in the T310 so you can push past the 2TB barrier that the H700 controllers have. But this approach also "maxes out" the T310 and leaves nowhere else to go. (This is about where I am, so I am out shopping for the next box for me.)

4) Look for something like a used Dell PowerEdge T610 with 2x Intel Xeon, which run around $350 on eBay, move everything else over, grab a cheap nVidia GT 610 ($40) or GTX 970 ($80), and then call it "done for now".

Hope this helps... feel free to check back with me.
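For what it's worth, here is the back-of-envelope arithmetic behind strategy 3 - a minimal sketch that assumes a CPU-bound load scales roughly with clock speed (it won't be perfectly linear in practice):

```python
# Rough estimate of the X3430 -> X3480 upgrade in strategy 3.
# Assumes CPU-bound load scales roughly with clock speed, which is
# optimistic but fine for back-of-envelope planning.

old_clock_ghz = 2.40     # Xeon X3430
new_clock_ghz = 3.06     # Xeon X3480
current_cpu_load = 0.70  # observed ~70% CPU usage today

clock_bump = new_clock_ghz / old_clock_ghz - 1
projected_load = current_cpu_load * (old_clock_ghz / new_clock_ghz)

print(f"Clock speed bump: {clock_bump:.0%}")        # ~27%
print(f"Projected CPU load: {projected_load:.0%}")  # ~55%, i.e. "around 50%"
```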
  3. First, much appreciation to all involved in developing this (fork?) and documenting it in this thread - it helps me understand the potential value. Second, I am a "noob," but I am running an unRAID NAS server box with the vanilla Plex docker (limetech/plex - no subscriptions). I was recently able to acquire an nVidia Tesla K10 GPGPU, so I have some basic questions: 1) Is there any inherent value in "Unraid Nvidia" beyond the obvious speed increases in media transcoding? 2) Can "Unraid Nvidia" take advantage of the GPUs and memory on a Tesla card (K10, K80, M40, V100, etc.)? 3) Has anyone had experience with a Tesla card and unRAID (in a VM or with nVidia unRAID) - and was it worth the time involved? 3A) Or for that matter, is anyone running homelab applications that utilize a Tesla card under unRAID and can share tips? Thank you in advance for anything offered!
  4. Squid, I could kiss you... or buy you a beer. I didn't even know one existed... but will pop it in now! Thanks!
  5. Well, time for an unRAID update. The Dell T310 has been mostly quiet and doing what it does best - parity checks. Very few issues running unRAID; the only real issue is that it screams for updates on software tools and dockers, which I do regularly. Otherwise, it's been up continuously for 45 days, serving files and running a couple of VMs without much of an issue. One 600GB Seagate SAS 15K drive alerts that it gets "hot" at 115F, but quickly drops back to 99F within a minute or two. It may be a fan/circulation issue in the 3-drive bay I have it in; I'll likely "push" that drive bay fan to a "fully on" state rather than temperature-controlled (which I think it's currently in). I also have a Win7/64 VM and a Win10/64 VM, which have been running in the background alongside Plex, and they seem mostly "just working." I did install BEET, which cleaned up my music library files nicely - once I figured out its terminal console interface. The GUI for BEET doesn't work for much; if the terminal program weren't easy and relatively useful, I'd have ditched it after the first 20 minutes - the GUI was that bad.

Also on the "new to me" hardware front, I am now at the point where I may want to make some "changes," mostly because I got the following hardware really cheap, which has me quietly thinking about a "rebuild." (Have I mentioned I love cheap used hardware?)

(#1) The first is a SuperMicro X7DCL-I-YI001 motherboard with two LGA 771 quad-core Xeons (might be E52xx's) and 32GB of DDR2 RAM - for $20. The downside is that it only has 2 (x8) PCIe slots, 1 (x4) PCIe slot (in an x8 connector), and 3x 32-bit PCI slots, and the DDR2 memory is maxed out. It came with a 4U case (no power supply) and a bunch of other fans and interface cards. My guess is that it was a failed attempt at a bitcoin miner. There was very little dust, and the system looked super clean - for $20, I felt like it was a steal. It still looks like it has a lot of life left in it, even if I throw it into another ATX case with stuff I have lying around - like an nVidia GT 610 or GT 1030 card (on a riser), a 6Gb/s SATA PCIe controller, and a 240GB SSD. I could also upgrade to a new (cheap) set of 3GHz Xeons (I'm seeing some of the LGA 771 X5460's on eBay for $25/pair), so I think this could become a decent desktop running Windows 10. I realize I am capped on memory with this motherboard, but I am also working on a cMP (classic Mac Pro 5,1 with dual Xeons) that will ultimately become my video/photography editing workhorse. But even if I strip it and resell the parts from this SuperMicro unit, I think I am ahead of the game - by a lot. (I even heard "HACKINTOSH" in the back of my mind, but quickly grabbed a beer and killed those brain cells before they took root!) My question is: would this SuperMicro really be better than my T310 as a NAS/VM server?

(#2) An nVidia Tesla K10 GPGPU card, also for $20. It was modded, but again - it was a "what the heck" buy. I'm wondering if this would work with the nVidia build of unRAID - but since I run "vanilla Plex" without a subscription, I'm thinking it will have limited value/use/overall speed increase. Mostly I wanted it to tie to a VM for running some simulation programs, but I'm unsure if it's worth it for anything else. I did read that I could put it on an 8x-to-x16 (powered) riser, maybe for the SuperMicro above... but I am just not sure of the value of it. I cannot put it into the T310, since it needs more power than is available from the T310 power supply, and I am not really interested in replacing the T310's redundant dual power supplies. To be honest, I just want to leave the T310 alone and have it keep serving up and storing files for me. (If it ain't broke...) Decisions, decisions...
  6. First - you're absolutely welcome! Bummer about the SATA/SAS cables; hope that's resolved now. Mine have given me zero issues. And yes, I am eyeing larger SAS drives, but am waiting for the prices to fall a little more. When I can get three 8TB or 10TB drives (one parity, one data, and one "hot swap" spare), then I will move over to larger drives. For now, I am fine with the multiple 4TB drives... easy to get spares if I need them. And I am not using up that much space yet (although I upgraded my photo, sound, and video equipment too!). I'll have to look into OpenMediaVault. I'm using the unRAID SMB share connections, and not sure I will stick with that - but it works. And yes, the motherboard SATA ports are S-L-O....W. Like 1.5Gb/s slow. Let me know how you're doing... for me, I am kind of just hanging out for a while, as I am traveling a lot recently. It was nice to be able to access my music via the Plex server from my cellphone and play it on my rental car's CarPlay stereo while driving between Monterey and San Jose. (My server is back in Virginia.) I still need to fix that ASUS 1030 nVidia card... DARNIT!
  7. Thank you for the reply, Jonathanm. I do appreciate you taking the time to follow up with me. To your comment - I can get to the management console with the "root" or "admin" accounts. However, I used to be able to use https://nastyfox/Main, and now I have to use https://nastyfox.local/Main to get to the server. I can also get to the management console with https://192.168.0.119/Main. But I am unable to reliably map a network drive to my unRAID shares in Win7/64, like the one named "Music". Under 6.7.1, Win7/64 connected the "M:" drive to //nastyfox/Music with any user and password combination. But since the upgrade to 6.7.2, nothing seems to let me map it to "M:" with the sole exception of "//192.168.0.119/Music". I find this rather odd behavior. (Quick name-resolution diagnostic sketch below.)
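In case it helps anyone else chasing the same thing, here is the quick diagnostic I'd run from the Windows box - a minimal Python sketch, assuming the short name was previously resolved via NetBIOS/WINS and the .local name is now coming from mDNS. The hostname and IP are just mine from above, so substitute your own:

```python
# Check which names the client can actually resolve.
# "nastyfox" and 192.168.0.119 are my server's name/IP from the post above;
# substitute your own. If the short name fails but the .local name works,
# the problem is name resolution (NetBIOS vs. mDNS), not SMB credentials.
import socket

for name in ("nastyfox", "nastyfox.local", "192.168.0.119"):
    try:
        print(f"{name:20s} -> {socket.gethostbyname(name)}")
    except socket.gaierror as err:
        print(f"{name:20s} -> FAILED ({err})")
```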
  8. 31JULY2019 - Upgrade to 6.7.2 issue? I upgraded the OS from 6.7.1 to 6.7.2, but something new happened. I had been able to access the system with https://tower (actually https://nastyfox), but now I have to use https://tower.local/ or the direct IP address http://192.168.1.119 in order to get to the server GUI. This was not an issue in 6.7.1. I tried a few fixes, including purging the network (DNS) entries in the router, without any success. Plus, all of the drive maps on my Win7/64 laptop had to be remapped and logged in again. Another issue is that when I try to log in with any user name other than "root," I seem unable to get the system to recognize the user/password combination correctly. "root" works fine, but my other usernames (admin and peter) are not working very well.
  9. 32GB DDR3 ECC - Using most of it for running Windows VMs.
  10. (Comment on the gallery image "scenic mountain") Glacier Point at Yosemite National Park on a clear day! Nice shot.
  11. I just upgraded my tower from 6.7.0 to 6.7.1 - no issues. Thanks for plugging the denial-of-service and processor security vulnerabilities! 😀
  12. Update: June 23, 2019 - the continuing saga that is the nVidia ASUS GT 1030 OC card. I did manage to get the video card to display from its DisplayPort output in both boot and a Win10/64 VM configuration with OVMF and i440fx-2.12. (Yea, some success!) I had to make the card the primary display in the T310, essentially disabling the onboard Intel video chip in the server's BIOS; the boot screen now shows up through that display. To get this far with the VMs, I used a downloaded and edited vBIOS from TechPowerUp in the VM's XML, and set the sound card to the ich9 model (sketch of those two XML edits below). So far, it was looking good - until the machine rebooted while installing the nVidia drivers. (UGH!) At that point I got the dreaded "Error 43" code in the Windows driver interface box and was stuck in 800x600 SVGA mode, unable to correct it. I will likely remove the card, dump its BIOS from another machine, and then use that in a new VM build to see if that works. I am unsure whether I need to go back to SeaBIOS and try that option to make it workable - but that's another path I could pursue. It's also unclear whether i440fx-3.1 is an option or not. In some regards, I am just encouraged to know that the GT 1030 video card is indeed working in the Dell T310 and that I can have it be a display output - even if "hobbled" at present.
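For anyone attempting the same thing, here is a minimal sketch of the two XML edits described above, expressed as a small Python patch against the VM's libvirt-style XML. The domain file path and ROM path are placeholders (on unRAID I actually make these edits through the VM's XML view in the GUI), and it assumes the GPU is already passed through as a hostdev entry:

```python
# Sketch of the two VM XML edits described above: point the passed-through
# GPU at a dumped/downloaded vBIOS ROM, and switch the sound device to ich9.
# Paths are placeholders - adjust to your own VM definition and ROM location.
import xml.etree.ElementTree as ET

DOMAIN_XML = "/tmp/win10-domain.xml"           # hypothetical exported VM XML
ROM_FILE = "/mnt/user/isos/vbios/gt1030.rom"   # hypothetical dumped vBIOS

tree = ET.parse(DOMAIN_XML)
devices = tree.getroot().find("devices")

# Attach the ROM file to every PCI hostdev (the passed-through GT 1030).
for hostdev in devices.findall("hostdev"):
    if hostdev.get("type") == "pci":
        rom = hostdev.find("rom")
        if rom is None:
            rom = ET.SubElement(hostdev, "rom")
        rom.set("file", ROM_FILE)

# Use the ich9 sound model instead of the default.
for sound in devices.findall("sound"):
    sound.set("model", "ich9")

tree.write(DOMAIN_XML)
```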
  13. Heyas - happy to share what I know. Just to be clear, I am running my T310 as a "headless" system, with no keyboard, mouse, or video display. If you intend to use it as a desktop-type system to run games, then you might want to consider how the VMs and the other components are installed. I wanted to be able to run it 24/7 as a NAS, with some virtual machines (VMs) that I could remote-desktop into via a VPN, have some Docker apps (Plex), and otherwise house my digital photo library. Attached are some photos of my system. Excuse the pink floor; the house is in a state of "pre-remodeling."

Let's start with your power question. The 4x HDDs in the array have a power tap already. For my system, you can see in one of the photos how I removed the side cover panel, exposing all the drive bays. I added a StarTech 3x 3.5" HDD removable kit in the top/front of the system. I used the molex power tap from the removed RD (Dell removable hard drive) and the SATA power from the removed Dell DVD drive, since I replaced those with that aftermarket SATA removable drive bay. Essentially, what I had originally was a 2x 5.25" half-height (or one full-height) bay to work with. I also included a view inside, so you can see the "inner workings" of the server - power, video, and drive/cabling layout - and you can see the new drive bay I installed in the other photos too. So I have the one molex split into two SATA power outlets, plus one existing SATA power outlet, for a total of three SATA power taps to work with. Two go into the new three-HDD drive bay (yes, it only needed two SATA taps for three 3.5" drives), and I used one SATA power split tap for the SSD. Overall, I like this setup and think I am good on power - but I would have liked it better if I had been able to find a 3x or 4x bay system that didn't use removable trays and just accepted bare SATA/SAS drives by sliding them into a SATA port, locked with a cover. And yes, I currently have the SSD hanging from the cords ("ethereally mounted"); it will get hard-mounted later (or duct-taped if enough people complain about it!).

I also included a shot of the redundant removable power supplies. I really like this power supply feature, since I can swap out a bad PS and the system can run on the remaining PS in the interim. So "no," as you can see, I did not remove the backplane - and I wouldn't recommend it. If you install the redundant power bricks, you should be able to pull the existing power supply, replace it with the new ones, and add the redundant distribution board. You can see the distribution board just to the left of the power supplies in the overall "guts" view. The one existing molex for the drive and the one existing SATA power connector came from that distribution board and are unchanged. The molex and SATA power cables from the distribution board looked "beefy" enough, so I think I am OK for power consumption given what I am using the system for and the way the power is distributed to the drives. CAVEAT EMPTOR: I WOULD NOT RECOMMEND THIS SETUP FOR A VIDEO PRODUCTION ARRAY. IF I WERE BEATING THIS ARRAY WITH VIDEO EDITS, I WOULD GO WITH SOMETHING MUCH MORE ROBUST AS A SERVER! (Besides, I really hate this issue of debugging the nVidia card in a VM. If that were my goal, I think I'd rather pay for a SuperMicro server. But I am a cheap Scotsman with a mustache.) You can also see where I have my USB unRAID memory stick plugged into the motherboard.

And trust me when I say this: booting unRAID from USB is not a speed issue. It's very compact, and it "unfolds" itself very quickly into a full-blown server. My unRAID total boot time is about 90 seconds to 2 minutes, and I leave it running 24/7. Now, just to be clear - you will want to have that SSD set up for the apps, dockers, and virtual machines (VMs) in order to get something that is speedy/responsive. And the part I really like is that all the VMs can run independently, 24/7, as long as you don't try to use the same resource (e.g. the same graphics card) at the same time. Most VMs can run a virtual "display" and output through VNC. I've already had Ubuntu, Win7, and Win10 images running simultaneously on my T310 over VNC. (Although I am still fighting with the VMs using the nVidia GT 1030 graphics card - ARGH!)

If someone just wanted a single machine that is not up/on 24/7, then I suggest they consider installing a Win10 image on a T310, slapping in a good SSD and video card, and going with it. But if they wanted a NAS that is on more than an hour or two while they work with photos or watch videos (but not a production system), and that can also run VMs and apps (like Plex), I am pretty much convinced this (unRAID) is the best way to go. If they wanted a production system that serves more than a few (5?) users and does audio or video production, I'd be looking at a higher-class dual Xeon or dual Threadripper machine with a good PCIe backplane/video card compatibility track record. Did I miss anything? Questions? Comments? Argumentative speculation? Philosophical diatribes? 😃
  14. Heyas, welcome to the T310 club. Feel free to ask/share info. Sounds like you have a good start on a nice system. I'd see about adding a redundant power supply, if you are able. Not a mandatory thing, but I like the reliability bump I get with mine. And yes, I have mine hooked into a UPS brick, just in case.

I had the H700 but went to an H200 card so I could use the SMART drive info to monitor my drive health more closely. But the H700 is a nice card too. One piece of advice for the H700: copy your RAID config file settings, so that if you have a drive fail with unRAID and were running parity, you can still rebuild your drive array. And yes, I use SAS-to-SATA splitter cables (4 drives per connector, and 2 connectors per card, for a total of 8 drives), and yes, I get 6Gb/s (or the drive's max speed) on them. eBay sells them for less than $10; look for "SAS SFF-8087 4 SATA Hard Drive Cable".

If I were you, I'd consider presenting all three of your 3TB drives to unRAID as individual disks (single-drive RAID 0 volumes on the controller) and then making one of them the parity drive. That would give you 6TB of parity-"covered" storage, and for every drive you add after that, you add 3TB of parity-covered storage. I went with 4TB drives based on price point, but if you stuck with 3TB drives and went up to 8 drives, you'd sit at 21TB maxed out (quick capacity arithmetic below). My only reason for going back to a RAID 5/10 disk configuration now would be improved read speeds, and since I am using my system mostly for a few VMs (on the SSD) and as a NAS, I don't see the need. You might want to read up on how parity is implemented in unRAID versus RAID 1; there were compelling reasons that I went that way for my needs.

With unRAID you'd likely want to use the SSD as a cache drive, which works well for VMs. For me, I'd keep the SSD on the PCIe card - you'll probably get speeds as good as the H700, and it will probably run faster on a separate PCIe slot. You might want to go to a 500GB SSD and move to an NVMe-type drive, but to start with, 250GB is good (2-3 VMs plus a docker app). And sure, I can grab a pic of my drive layout and will post it later. There's enough room in the T310 to get 8x 3.5" drives inside the box if you think it out carefully. If you decide to go with an external drive cage, you might need an external connector card, like an H200e. FYSA - I do not recommend a USB HDD box on the system for anything other than file transfers. The USB ports on the T310 are s-l-o-w, but for temporary file use or transfers, they work well enough.

With unRAID, I think I read that you can run a Windows Server 2016 image as a virtual machine. Then you can assign the unRAID "shares" as virtual drives as you want - best of both worlds. But for me, I am able to attach my unRAID (NAS) shares directly to PCs (Win 7, 10, Macs) on my network easily; it looks and works like a regular network-mapped drive. More later...
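And since I rattled those capacity numbers off quickly, here is the quick capacity arithmetic - a minimal sketch of how unRAID parity sizing works, assuming the parity disk is at least as large as every data disk:

```python
# unRAID capacity arithmetic from the post above: one disk is parity,
# everything else is usable data space (parity must be >= the largest data disk).
def usable_tb(disk_sizes_tb):
    parity = max(disk_sizes_tb)           # dedicate the largest disk to parity
    return sum(disk_sizes_tb) - parity    # the rest is protected data capacity

print(usable_tb([3, 3, 3]))   # three 3TB drives -> 6 TB protected
print(usable_tb([3] * 8))     # eight 3TB drives -> 21 TB maxed out
```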