My first hobby “TOWER”



So, I've been a techie for a long time, sometimes a sysop (I ran VAX-11/785 and 8800 systems under VMS), and so I was really interested in unRAID to replace my NAS for my home network (mainly for semi-pro photography) and to run some VMs and apps.


I found a good "excessed" Dell PowerEdge T310: single Xeon X3340 (4 cores) @ 2.53GHz, 8GB RAM (DDR3/1066), 2x 600GB Seagate Cheetah 10K SAS drives, and dual 400 watt power supplies, for less than $100. I set the BIOS to boot from an internal USB port using a 4GB SanDisk Cruzer Micro drive I had sitting in my pocket, and downloaded the unRAID v6.5 trial. The disk controller (PERC 6i) was set to RAID 1, but I booted the system the way it was. Adjusted the IPv4 network address, prepped the USB drive... booted...


Bang... up and running! Nice.


Things I want to do:



Add 2x 4TB Ironwolf SATA drives

- (have these already in a current NAS)

Add a 4TB or larger Ironwolf parity SATA drive

Add +8GB RAM for VMs.

Add SSD cache SATA drives (2x 32 or 64GB)

Add a GPU video card (nVidia?)



Gotta have:

Fileserver/NAS/Personal Cloud (primary use)

Secure Document archive (PDFs, etc)

Media server (music, home videos & movies, Plex?)


Like to have:

Photo website (Drupal, maybe)

Run VMs for Win 98, XP, 7, 10, and Ubuntu Studio

Remote Desktop into a VM.

Maybe minecraft server for the daughter... 


Amazing if I could do it:

Run apps in Docker to aggregate research (work) articles (IFTTT?)

Scientific code runner (e.g. Blender 3D, finite element codes, etc.)


Already pleased with the speed of the system, and its flexibility. Updating it looks super simple.


Questions: [pointers to other best forum threads appreciated]


The six SATA ports on the motherboard... will they support 4TB or larger drives?


Any way to check the life use of existing SAS drives?


Replace the PERC 6i with an H700? Wondering if I should just add some 2TB SAS 7.2K enterprise drives from eBay instead of upgrading it. They look really cheap right now. (I have 4 slots, 2 occupied)


Have a DVD-RW drive (one SATA port)... what's the best movie transcoding pathway?


The system has one Dell RD1000 drive slot, with no drive in it. I've seen many on eBay. Any real value, or should I sell it? I'm thinking it might make a good backup for an offsite location (safe deposit box)... dunno.


Oh, and I have Verizon DSL (it beats the Comcrap available here).


Thanks for reading... thoughts?







1 hour ago, rollieindc said:

VAX/VMS 11/785 & 8800

Wow. An old timer.


Completely off topic, but I remember failing my CompSci exam because of a floating-point error in the Pascal compiler on the VAX at my university (the only one left in Canada). The program the prof wanted would return a result off by 1/100,000. I saw that, rewrote the math routines, and was the only one who actually came up with the correct answer, and buddy decided to fail me because I was also the only one who didn't come up with the result he was looking for. Really turned me off school...


Yeah... I am an old timer (does that make me a curmudgeon?) ... (I'm currently 57)


I started with a 110 baud teletype and FORTRAN IV on a PDP-8 (even did punch tape, then punch cards)... then worked on a Trash-80, a Sinclair at home, then a TRS-80 Color Computer, and ordered one of the first IBM PC/XTs (pre-8087 add-on) for my office. We went nuts with it in the engineering department. So I got the call to build the next big system (DEC VMS based, for finite element codes)... and then had to run it. 24/7 was a pain in the butt. But... yeah.


I used to program matrix inversion subroutines in Pascal in college. It was an esoteric program the prof wanted, but it never worked quite right. I never understood why... but I aced the exams, so he had to pass me (with an A-).


The T310 is probably overkill for what I am doing... but... heck... unRAID just looks cool (and better than any RAID options on a NAS). 


Ok, time for an update. Lots of lessons learned. Some new hardware, and getting up to speed. For many this will probably seem elementary, but for me, this is a log of discovery.


First... crud... I bought 8GB (2x 4GB) of DIMMs off eBay that won't work with the Dell T310. It seems the T310 is very particular about the type and density of the RAM chips used. Lesson learned: get and read the server manual. I'll resell the memory or use it elsewhere. But it also means I'd have to replace the current 8GB in order to go above 12GB on the T310. Moving to 12GB might be a good option, because I have a hard time coming up with reasons to need a VM with more than 8GB of memory. (4GB for unRAID is still generous from what I can read, and Windows 10 should play well with 8GB.) And hey, I'm not Linus Media Group... (thank goodness! Sorry, Linus!)


Hardware-wise, I got two additional drives on rails (450GB SAS, 10K) that I installed in the T310 and started to play with. The PERC 6iR SAS controller needed a firmware update, and I was befuddled by the RAID configuration, as RAID 0 & 1 needed drive pairs to enable the virtual drives. So I set up two RAID 0 drives (600+450) ending up with 2x ~990GB virtual drives. Performance on my network still seemed very snappy and quick. Soooo...


Well, "duh." I didn't realize that you could also eliminate the virtual (RAID) drives on the 6iR and just address each drive individually from within unRAID. So that will be my next logical step (watch for the next update). But I was able to run drive checks on everything and got zero errors. (Yea!)


The SAS 6iR is limited to 3Gb/sec SAS and an individual HD max size of about 2.2TB (I think, I still need to confirm that). The 3Gb/sec speed alone is probably why most would move to a Dell H200 or H700 controller, as they run SAS or SATA at 6Gb/sec and also allow for drives greater than 2.2TB. It also looks like they have an option for an onboard battery to maintain the on-card cache memory in case of power failures.
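If that ~2.2TB ceiling is right, it would match the classic 32-bit LBA limit with 512-byte sectors (an assumption on my part about where the 6iR's limit comes from). The arithmetic is quick to check:

```python
# 32-bit LBA addressing: 2^32 addressable sectors of 512 bytes each.
sectors = 2 ** 32
sector_size = 512
max_bytes = sectors * sector_size

print(f"{max_bytes:,} bytes = {max_bytes / 1000**4:.2f} TB")  # 2.20 TB
```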


So (academically speaking) having 2 drives in RAID 1 could provide the equivalent of 6Gb/sec throughput (on reads) plus redundancy. That "might" be a near-term "good enough" cache arrangement for a home server system while keeping the 6iR. Plus potential replacements on eBay are cheap ($20-30) and plentiful. But the max on the 4 drive rails would be limited to 4TB of storage in RAID 1 (4x 2TB, at 50% for RAID 1). Great for reliability and speed, but cruddy in capacity.


Yeah, I think I will need more than that. 


And while newer SSDs would be faster, especially with a faster controller... that will have to be a down-the-road "learn and burn" exercise, much like the RAM memory experience. (Maybe when 4TB SSDs become super cheap in 10 years... or quantum computing for Windows arrives!)


Even so, with just the 6iR, I should still be able to replace all the current drives with 2TB drives in unRAID and reach 6TB (with one 2TB acting as parity) without needing to buy anything more. The 6iR limits me to 2TB drives at the "top end," while moving to the H200 or H700 (I think) would let me use the 2x 4TB Seagate Ironwolf drives from my current NAS in the Dell T310's hot-swap rail system. (And yes, I'd have to migrate that data! ;) )


For me, as this is a home server, I wanted to dig a little further into the T310, as it also has 6x SATA connectors on the motherboard. These are also rated at 3Gb/sec throughput (I think this is a hardware limit on the motherboard, but I need to flash-upgrade the BIOS here too, including the integrated motherboard SATA drive controller).


Currently, one goes to the DVD/RW drive and one to the RD1000 drive. So I "could" put up to 5x SATA drives on them (keeping the DVD/RW) and just sell the RD1000 drive. The cartridges for the RD1000 are not cheap, even on eBay, at $250 each (none came with the system), and I'm thinking a hot-swappable "generic" drive tray with a SATA drive will be a better use of money for offsite (safe deposit box) storage of critical home & photo library files. (A good 8TB drive is less than $170!) Plus, if it takes a couple of days to make a backup... I'm OK with that.


(Reminder: I need to get a UPS.)


So I am going to *think* about my options, and look at the H700 ($40 on eBay) as a near-term option to let me use the current Dell HDD rails and ultimately go to a 4x 4TB hot-swappable 12TB NAS "on rails" configuration at 6Gb/sec, with offsite disk storage, without having to make up weird power cables.


I might ultimately need a cache SSD... but there again... with unRAID, even if I start editing and creating home video (probably only HD 1080p) on the server... I might have enough throughput for most tasks, including VMs. (Have I mentioned I have an 8-core (2x 4-core Xeon) Mac Pro 5.1 that will likely have that dedicated task, as well as any audio work I need?) And if I did need SSDs for cache, I have PCIe slots to hang the newer M.2 SSDs off a single PCIe card. And I think... with no need for another power cable. (Win-win!)


Now, I’ve typed enough for tonight... questions and comments welcomed!  







11 hours ago, rollieindc said:

So I set up two RAID 0 drives (600+450) ending up with 2x ~990GB drives. Performance on my network still seemed very snappy and quick. Soooo...


One thing to consider. If you have four disks and create two disk spans that you then mirror like the following:

|   |
A   C
|   |
B   D
|   |

Looking at all possible two-disk failures (A+B, A+C, A+D, B+C, B+D, C+D)


If A+C fails - the combined drive is down.

If A+D fails - the combined drive is down.

If B+C fails - the combined drive is down.

If B+D fails - the combined drive is down.

A+B fails - still working.

C+D fails - still working.


Compare this with:

|   |
A   B
|   |
|   |
C   D
|   |

If A+B fails - the combined drive is down.

If C+D fails - the combined drive is down.

A+C fails - still working.

A+D fails - still working.

B+C fails - still working.

B+D fails - still working.


Striping mirrors gives fewer combinations that brings down the resulting array compared to mirroring stripes.
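The enumeration above can be brute-forced in a few lines of Python (disk labels A-D as in the diagrams; the pair groupings are the same in both layouts, only the span/mirror roles swap):

```python
from itertools import combinations

# Four disks grouped in pairs {A,B} and {C,D}, matching the diagrams above.
pairs_of_disks = [{"A", "B"}, {"C", "D"}]
failures = [set(f) for f in combinations("ABCD", 2)]

# RAID 0+1 (mirror of stripes): the array survives only if at least
# one whole stripe is untouched by the failed disks.
raid01 = [f for f in failures if any(p.isdisjoint(f) for p in pairs_of_disks)]

# RAID 1+0 (stripe of mirrors): the array survives as long as no
# mirror pair has lost both of its disks.
raid10 = [f for f in failures if all(not p <= f for p in pairs_of_disks)]

print(f"RAID 0+1 survives {len(raid01)} of {len(failures)} two-disk failures")
print(f"RAID 1+0 survives {len(raid10)} of {len(failures)} two-disk failures")
```

Running it prints 2 of 6 surviving combinations for RAID 0+1 versus 4 of 6 for RAID 1+0, matching the lists above.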

48 minutes ago, c3 said:

Striping mirrors is the lowest performance config.

Why do you claim this? Note that the link doesn't seem to have any relevance to RAID 0+1 compared to RAID 1+0; it covers the move away from RAID 5.


When writing, it's the same amount of work to write to a RAID 0+1 (mirror of stripes) or RAID 1+0 (stripe of mirrors).


After one disk fails, a RAID 0+1 no longer has redundancy, and will often no longer touch any disk in the failed stripe, resulting in far worse seek and read speeds than when a RAID 10 loses one disk. With RAID 10, only the mirror pair that has lost a disk will have lower read and seek capabilities.

The scary thing with RAID 1+0, aka RAID 10, is that a number of cheap boards claim to support RAID 1+0 while in reality implementing RAID 0+1.


"So, another day, another array."


So the T310 is up, and I have all the HDDs running without any RAID configuration entered into the PERC 6iR controller, and I'm currently building the parity disk on one of the two 600GB drives. Total time until the drives are ready in the array: about 1 hour 45 minutes. Here is what it currently looks like... and yes, I blurred out the drive serial numbers and the tower IP address. Call me "once bitten, twice shy" on computer security issues.




This now gives me about 1.4TB of usable storage space to play with, and validates most of my thoughts regarding the way the drives would work. I'll let this "putter" for a couple of days while I move on to trying my hand at building some VMs (I've already installed an nVidia GT 610 card) and got a Windows 10 Pro license to load up. I also want to try my hand at a Docker app. After that, I will reflash the motherboard BIOS and see if the SATA interface can pick up any of the SATA drives I have. I did remove the RD1000 drive and put a Toshiba 128GB SSD on that SATA cable, using its power cable, but the system didn't recognize the SSD. I'll need to investigate that later. It might become a first cache drive, if I get so motivated.


The other interesting thing is that the system fan at first ran very high (with the case side panel off), but now appears to be operating much more quietly. Again, not sure why, but I imagine it's a Dell T310 setting that I will need to investigate further (and read the manual!).


More later, but at this point... I just need to move on and get some other things done around the house.

11 hours ago, pwm said:


One thing to consider. If you have four disks and create two disk spans that you then mirror like the following...



That was a nice primer for me to understand the importance of the different RAID configurations. Thanks!


What does get me is that any one of those configurations leaves me with 1TB of drive space, compared to the 1.4TB I have now with unRAID. And to be honest, I don't think the system is going to be taxed that hard, compared with the amount of data I need to protect. On top of that, I plan (and will get) offsite backups for really important data. And I do like that two drives need to fail in order for bad things to happen, but I think the parity option in unRAID should be fairly robust and cover that instance... or do you think otherwise? I get that if I lose the two 600GB drives, I am pretty well "hosed"... but how often do two drives fail... nearly simultaneously?
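For what it's worth, the capacity gap is straight arithmetic; a rough sketch with the drive sizes in this array (ignoring filesystem overhead and decimal/binary unit rounding):

```python
# Drive sizes in this build, in GB.
drives_gb = [600, 600, 450, 450]

# unRAID: the largest drive holds parity; every other drive stores data.
parity = max(drives_gb)
unraid_usable = sum(drives_gb) - parity   # 1500 GB, shown as ~1.4TB

# RAID 10 on the same four disks: mirror the 600s and the 450s, then
# stripe; each mirror contributes one disk's worth of capacity.
raid10_usable = 600 + 450                 # 1050 GB, ~1TB

print(f"unRAID usable:  {unraid_usable} GB")
print(f"RAID 10 usable: {raid10_usable} GB")
```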


And I just realized that I have an Intel(R) Xeon(R) X3440 CPU @ 2.53GHz... with four cores and 8 threads. No wonder the X3340 wasn't found in various searches. Whew!!!


Update: Tonight I got the internal SATA connections running (they were disabled in the BIOS), and added a 120GB Toshiba SSD for cache. It looks like the internal SATA ports are limited to 2TB and run at 1.5Gb/s each (I likely need to adjust that in the BIOS as well), but that also means the H700 controller is going to be a fairly assured purchase. Something seems a bit off (probably the BIOS settings for the internal SATA), as the system now seems to be transferring files a little slower than before, even with the SSD cache included.


I hope to get a Win 10 Pro VM up and running tonight, too.

  • 3 weeks later...

"Checking in, and bellying up."


Yes, this will be a long and boring read for any experts... but I am writing this for anyone else who happens to be interested in doing something similar, and for my own "fun" of building up an "inexpensive" Xeon X3440-based Dell PowerEdge T310 server with unRAID.


So the saga of the $99 Dell PowerEdge T310 continues. I spent enough time playing with the unRAID trial version to realize that I was in for the investment: I bought the H700 SAS controller to replace the PERC 6iR that came with the system, and bought the "Plus" version (for up to 12 drives) of unRAID at $89. To be honest, I went back and forth on this, but decided that limiting myself to 2TB drives as the 4 main HDDs in the system was not what I was interested in for my NAS replacement. I wanted to get to something more like 6 to 8TB, with some ability to have error correction or drive rebuild. I also wanted to have potentially more than 6 drives available (4 data HDDs + 1 parity HDD + 1 cache SSD) just in case performance became an issue, plus some flexibility to add a drive or two of separate storage space for VMs or media cloud storage apart from the main storage drives. The "Plus" version of unRAID gave me that flexibility. I don't expect to be running a huge server farm, so the "Pro" version seemed excessive for my needs. After a few minutes at the payment website, I had my email with the updated URL, and unRAID was upgraded in place on the USB stick already installed on the motherboard. I did reboot the system, just to be sure it took, but I don't think I needed to. (Kudos to the Lime Tech designers on that pathway!)


I also carefully considered HDD size in my decision process. (Comments and other views are welcome on this. And yes, I could have gone with WD or HGST drives, but I didn't... You can also see why here:  ) The Seagate Ironwolf 4TB SATA drives were running $124, while the 6TB version was running $184-190. So my choice was two 6TB or three 4TB, giving up one drive for parity. For 2x 6TB => 6TB of storage, I would have paid $368; for 3x 4TB => 8TB, I paid $372. And while, if I added another drive, the 6TB drives probably would have been the performance winner (3x 6TB => 12TB @ $552) over the 4TB (4x 4TB => 12TB @ $496), I think I made the better deal for cost, expandability, and reliability. (And we could probably argue about the WD Red 6TB drives, but I've already opened the Ironwolf drives... so let's not.)
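The price math above can be double-checked in a few lines (prices and drive counts as quoted; this is just my own back-of-the-envelope arithmetic, nothing unRAID-specific):

```python
# Prices as quoted at the time; one drive per set is given up to parity.
options = {
    "2x 6TB": (2, 6, 184),
    "3x 4TB": (3, 4, 124),
    "3x 6TB": (3, 6, 184),
    "4x 4TB": (4, 4, 124),
}

for name, (count, size_tb, price_each) in options.items():
    usable_tb = (count - 1) * size_tb   # one drive's capacity goes to parity
    total = count * price_each
    print(f"{name}: {usable_tb}TB usable for ${total} "
          f"(${total / usable_tb:.2f} per usable TB)")
```

The 3x 4TB option lands at 8TB usable for $372, the cheapest per usable terabyte short of jumping to the 12TB configurations.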


So, next to eBay I went, picking up the Dell PERC H700 and a SAS W846K cable kit to tie the existing SAS drive slots/backplane to the H700. (For those not aware, the PERC 6iR uses a special SAS cable to the backplane, allowing for 4 drives with the T310.) One nice thing with the H700: I can add more drives (SATA or SAS) with another SAS-to-SATA cable set (SFF-8087?), as the H700 has two SAS connectors (A & B, and note you have to use "A" with the first set of drives). The other nice change is that the H700 does full 6Gb/s transfer rates. Anyway, the total spent for the eBay H700 + W846K cable was $35.


The only other downsides I saw with the 6iR-to-H700 changeover: I will need additional power splitters to get power to any additional SATA (or SAS) drives I add to the system, and I had to reinitialize the existing SAS drives to use them with the new controller. That meant any data I had on the drives was gone. Fortunately, I had not yet populated them completely with data. (I also found out that the two 450GB drives I picked up were only 3Gb/s SAS drives, so those will likely go to eBay at some point, along with the 6iR and the Dell RD1000 drive.) This reformatting probably wouldn't have been needed if I had been replacing the 6iR with another 6iR, or doing an H700-to-H700 swap, but going from the 6iR to the H700 meant reinitializing and reformatting the drives and losing the few files I had placed on them. In configuring the H700, each drive has to be its own single-drive "RAID 0" for unRAID to be able to address it separately. Not too hard to do, once I deciphered the H700 firmware menu system.


But the good thing about this configuration on the Dell T310 is that the 4 main drives (initially 3x 4TB SATA HDDs, with 1 of those as parity) will still be (hot?) swappable. I am leaving one HDD bay/slot on the front panel unfilled for now, even though I have a Dell 600GB SAS drive I could put in it. I also went with brand-new HDDs, although I saw plenty of refurbished or "like new" 4TB SAS drive lots on eBay. But here, I don't want to be replacing bad drives with this system; I simply want it to work well and store lots of files (primarily my digital photo library, which is currently just over 1TB in size). At some point, I will likely get one more 4TB Ironwolf drive to act as a "hot spare" in case one of the drives fails later on. (Reminder: I need to read up more about adding drives to increase storage space, but I recall that is what unRAID is supposedly good at.)


At present, I'm still on the fence about adding another SSD SATA cache drive (I currently have a 120GB SATA SSD on the motherboard SATA B socket/header, but I'm not using it, since it seems to run at only 3Gb/s). Since the PERC H700 came with 512MB of cache (RAM) on the card, it might not be necessary. I did decide not to get the Dell battery add-on for the H700, partially because the system will live on a UPS and will be set to shut down if the UPS power goes low.


After I do some more system burn-in with the existing drive array (2x 600GB + 2x 450GB SAS drives), I will load up the Ironwolf drives and add a couple of VMs to the system to give it a good workout. I am really interested in seeing how the system runs with a Windows 10 Pro VM and then a Windows 7 VM, plus some video and photo capture and editing software. (I might want to use one of those 600GB drives for the VMs, dunno.) Later I'll be adding an Ubuntu Linux distro as well, likely with Docker. I'm also rebuilding a separate Apple Mac Pro 5.1 system, which will be networked in and used for editing video and editing and scanning photos. The two will be connected through a GigE switch, to make large video file access far less painful.


Moving on...


So the H700 and SAS cable install went well in the T310. All the drives were relatively easy to add in the H700 settings and change over to RAID 0 in prep for unRAID, although I will want to see if there is an IT mode available with an updated H700 firmware load. The H700 was definitely more "zippy" in moving files from my laptop to the server (I tested this using the ISOs for Win 10 Pro and Ubuntu).


I've also been watching SPACEINVADERONE's video tutorials, and I have VMs for Win 10 and Ubuntu Studio 16.04 up and running. I need to redo the Win 10 (Pro x64) VM, as there is no internet connection to it. I have to say that VM is a lot trickier than the one for Ubuntu, but it still works. So, yes, watch those videos... they are quite good and well done. (Thanks!!!) I did see the video on reverse proxies, and that seems like a good idea to implement with this server. I want it to be secure (with https access only, if that's possible!)


I also picked up my third (new) 4TB Ironwolf drive, which will become my new parity drive. I've not installed any of the Ironwolf drives into the T310 yet, because I still wanted to tinker with the VMs beforehand. And I got a spare SATA power splitter cable, which I will likely use only with SSDs (cache), should I still decide to install them. I also noted there are power splitters that tap directly off the SAS drive power leads. I would need another SAS-to-SATA cable from the H700 (B port) to connect up to another 4 data drives, since I still think the 6 SATA connectors available on the motherboard are limited to 1.5Gb/s, rather than the 6Gb/s I can get on the H700.


About the only thing I may be adding in the future is more disk space, but I am not anticipating that soon, if at all. I may want to run the VMs off the SSDs, as is being suggested... since that would be a far better fit for any VNC/remote connections. But for now, I am thinking the VMs can sit on one of the SAS 600GB drives, set up as a standalone drive, with the VMs backed up to the Ironwolf disk array from time to time. And I never got to the Win 7 VM install, but I anticipate fewer issues with it, since I've done a number of those already.

  • 2 months later...

Update: Thursday, 13SEP2018 & 15OCT2018

I have been running unRAID 6.4 (now 6.5, soon 6.6.1) for a while now on the Dell PowerEdge T310, but I've been doing some hardware upgrades. So let me show where I started and where I am going.


Dell PowerEdge T310 (Flashed to latest BIOS)

RAM: 8GB → 16GB ECC quad-rank

Controller: SAS 6iR → SAS H700 (flashed to latest firmware, all drives running in RAID 0)

Drives: 2x 600GB SAS → Seagate Ironwolf 4TB SATA (1x parity, 2x → 3x data) + 1x → 2x 600GB Dell/Seagate SAS + 120GB → 240GB SSD (for VMs). I also installed a three-drive 3.5" bay system in the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drives, which should be more than enough for me. I will be moving the VMs from the SSD to the 600GB Cheetah SAS drives, since the speed on those should be enough for anything I will be doing.

Video: onboard & nVidia GT 610 card (I wanted a PhysX physics engine and some CUDA cores for digital rendering and transcoding)


Fairly pleased with the overall stability. My next steps will be configuration backups, network throughput testing, and adding a UPS for when power is lost. I will also need a way to access the system from offsite, preferably through my VPN service (NordVPN/OpenVPN).

4 hours ago, rollieindc said:

I will also need a way to access the system from offsite, preferably through my VPN service (NordVPN/OpenVPN).

Think of a VPN as a network cable running from the server to the client. With a VPN service, the server is in a remote location and the client is your computer. The purpose of this type of connection is to obfuscate the real location of the client, making it appear that all the traffic is coming from the location of the server. This is not helpful for offsite access of your server, and depending on the specific VPN service, may not even be possible.


What you are looking for is a VPN server running inside your network, either on your router (easier) or on a computer that must stay running for the VPN to work (possibly faster due to more CPU power). You can run this server as a docker on unraid if you wish, but that's just one option. You connect to this server with your client machine offsite, and it then appears to your offsite machine that you are plugged in to your home network.


This type of connection doesn't require a paid service, as both endpoints are yours.

On 10/15/2018 at 6:04 AM, jonathanm said:

Think of a VPN as a network cable running from the server to the client. [clip]


Thanks Jonathanm, I've been using NordVPN from my client side for a while now to connect to various servers.


And yes, I was thinking that an OpenVPN docker would be the answer for keeping my home network (and home IP address) as secure as possible: it would connect my unRAID server to the NordVPN servers, permitting me to then establish a secure tunnel from my "offsite" client/laptop (at a coffee shop) into the server (sitting at home)... but perhaps I am misunderstanding something about the protocols(?). I really don't want an access point into my entire network (which I would get by going through the router); I only want the unRAID server "accessible," and preferably only by means of a good SSL connection.


My other option would be going through a domain I have, making my server a subdomain, or going the route of connecting via DuckDNS. My concern with that route is that my home IP address would be traceable by pinging and tracerouting the subdomain. I thought (perhaps incorrectly) that the VPN docker would mask the IP address until a VPN connection was established.


Dunno... confused now. Thankfully, there is no rush, and I've been happily uploading lots of files to my "new" server. 😀


You do not need to get NordVPN involved at all if your requirement is to securely connect to your home server from a client that is external to your home network. I do this quite happily by having the openvpn-as server docker on unRAID and then running an OpenVPN client on the remote client. This allows me to act as if the client machine is plugged into my home LAN even though I am physically remote.


The time you want NordVPN (or an equivalent) involved is when you want to make an outbound connection (possibly from your home LAN) from the client machine and hide your IP address (or perhaps make it appear you are located in another country, to satisfy an application that does geo-location checking). I use the NordVPN client on my iOS and Windows machines for exactly these capabilities. In this case the outbound connection goes via the NordVPN servers and is routed on from there.

On 10/17/2018 at 2:05 PM, itimpi said:

You do not need to get NordVPN involved at all if your requirement is to securely connect to your home server from a client that is external to your home network.  


Well, that's actually my point. That's not all I want.  And yes, I realize, I may not get what I want.


My goals were to:

1) secure/encrypt the data pathway into the server.

2) secure/hide the IP address of the home server (as much as possible), and close off the data pathway to avoid tracerouting into the rest of the home network.


(1)  would be easy enough to do, just using a VPN tunnel from an external client into the server.


But connecting into this tunnel directly requires an open port into my network from the router in order to establish the connection. Password and SSL protected, maybe, but it still leaves the port open, and other ports could be pinged and then interrogated by anyone on the internet. I trust my ISP and their router about as far as I could throw them. So if all I wanted was a direct data connection, that could be accomplished by opening the port on the router and hoping the open connection isn't found and then hacked.


So, to accomplish (2), my thought had been (and yes, I recognize I could be wrong) that I would need to establish a "closed" secure data path/route from the home server to a website subdomain using OpenVPN through NordVPN. The subdomain is one that I own/control and is not on my home network. That way, a secure connection could be established from a client (e.g. my laptop) through one secure VPN tunnel, connecting to the other tunnel and completing the connection through the subdomain name. Since no trace from the subdomain to the server could be completed without the data connection being established first, that should essentially "hide" any (secure) connected port on my home network.


Maybe I am overthinking it... and there may be too many layers of encryption... but I am still "thinking this out" and looking for other, more knowledgeable ideas/views.


My other idea would be to just build a VM.

  • 1 month later...

Update: 03DEC2018 - Replacing the H700 SAS controller with an H200 flashed to IT mode.


New Stats:

Dell PowerEdge T310 (Flashed to latest BIOS)

RAM: 16GB ECC Quad Rank

Controller: Dell H200 SAS flashed to IT mode, replacing the SAS H700 (flashed to latest firmware, all drives running in RAID 0)

Drives: Seagate Ironwolf 4TB SATA (1x parity, 3x data ) + 2x 600GB Dell + 240GB SSD (for VMs)

Note: the three-drive 3.5" bay system is installed in the available full-height 5.25" drive slot. This gives me 7 accessible hot-swappable drives. I plan to populate 6 and leave one as a free hot-swap bay.

Video: Onboard & nVidia GT 610 card

Soundcard: Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040 Sound Card


First things first: the really good news on my nVidia GT 610 video card choice is that I should be able to make a Mac High Sierra VM now, and then Mojave later, once the nVidia drivers for Mojave are released. The drivers are already out for High Sierra, and since I have a Mac Pro 5.1 running High Sierra that I plan to use for video and photo editing, this should be of great value to me later on.


Next, I found a relatively cheap ($26 shipped) Dell H200 SAS RAID card on eBay, from China, and decided to get it. As I understand it, the Dell H200 is an LSI-based SAS 9211-8i card with RAID firmware installed. Installing the LSI "IT" 2118it.bin firmware allows for individual drive access and exposes SMART disk data. The latter is important for determining disk health and tracking things like temperature issues or bit/sector errors. Since this build is primarily to store my personal and historical photo library and backups, I need to identify early "disk death" before it happens, and swap out any drive before it fails completely.


After two weeks of shipping time, it finally arrived from China and looked to be in decent shape. (One of the SAS connector shields looked a little bent, but I was able to straighten it with my fingernails.) I then read through various descriptions of the process to change it to IT mode and decided to go for "IT". The process was fairly straightforward, although a little daunting; instructions are available online. (See ) I used a separate HP DC9700 computer to flash the H200, since some people reported trouble using the T310 for this purpose. I had to remove the rear card-edge holder, since that machine is a small-form-factor system, but the card sat nicely in the case for the time I needed to do the reflashing. I booted the HP computer into MS-DOS from a USB drive I had made with Rufus, started the diagnostics, found the SAS address, and loaded the IT firmware onto it. The process was, again, fairly straightforward until I tried typing in the H200's SAS address at the line [ C:\> s2fp19.exe -o -sasadd 500xxxxxxxxxxxxx (replace this address with the one you wrote down in the first steps) ]. It took me a while to realize that I needed to enter the address as 16 hexadecimal characters (0-9, A-F), ALL UPPERCASE. The address from the previous steps had hyphens included and was in lowercase, so I fumbled a bit with that until I got a clue from the flashing software that I needed to use "NO HYPHENS" and "ALL UPPER CASE". Duh, I felt stupid, but after that, the process rolled quickly without any further issues.
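Purely to illustrate the gotcha (the DOS-based flash tool obviously isn't Python), here's a hypothetical helper showing the exact transformation the tool expects: hyphens stripped, 16 hex digits, all uppercase:

```python
def normalize_sas_address(raw: str) -> str:
    """Strip separators and uppercase a SAS address: the flash tool
    wants 16 hex characters, no hyphens, all caps."""
    addr = raw.replace("-", "").replace(" ", "").upper()
    if len(addr) != 16 or any(c not in "0123456789ABCDEF" for c in addr):
        raise ValueError(f"not a 16-digit hex SAS address: {raw!r}")
    return addr

# A made-up address in the hyphenated, lowercase form the card reports:
print(normalize_sas_address("5000-1234-abcd-ef01"))  # 50001234ABCDEF01
```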


If you feel comfortable reflashing computers or cards, this is very similar and should not pose any issues. Just check your syntax and typing before hitting the enter key. I can see how this could be a big issue, and how some people probably "bricked" their cards by making address mistakes. But my reflashing went fine, and after a few reboots along the way, I had a reflashed H200 card in IT mode.


So, I put it into the Dell T310, replacing the H700 with its 512MB of on-card cache. That part somewhat bums me out: the H700 has a nice cache on it, and the H200 is cache-less. But I rebooted, and am in the process of reformatting the array's drives. Yes, they all had to be "reformatted". Ugh. This is the last change I am making to the controller. So far, the difference in speed isn't really showing up (yet, if at all - they are both 6Gb/s cards), but the extra SMART disk information already is. I can now see drive info from any of the drives I want - SAS or SATA - in the drive-related plugins and other diagnostics, and I can read the temperature of every drive in the array from the dashboard. With the H700 in a RAID 0 configuration, I could not read the temperatures at all. The formatting process appeared to be definitely faster, too.
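With the card in IT mode, each drive can be queried directly from the console (e.g. `smartctl -A /dev/sdX`). Here's a quick sketch of pulling the temperature attribute out of that output - the sample text below is made up in the style of smartctl attribute output, not captured from my drives, and real drives vary in format:

```python
import re

# Sample lines in the style of `smartctl -A` output (made-up values).
sample = """
190 Airflow_Temperature_Cel 0x0022   062   053   045    Old_age   Always       -       38
194 Temperature_Celsius     0x0022   038   047   000    Old_age   Always       -       38
"""

def drive_temperature(smart_output):
    """Return the raw Temperature_Celsius value from smartctl -A text, or None."""
    for line in smart_output.splitlines():
        # Attribute ID 194, last whitespace-separated field is the raw value.
        m = re.match(r"\s*194\s+Temperature_Celsius\s+.*\s(\d+)\s*$", line)
        if m:
            return int(m.group(1))
    return None

print(drive_temperature(sample))  # -> 38
```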


Lastly, while on travel I went to a PC salvage shop (Hint: it was the "USED COMPUTER" store at 7122 Menaul Blvd NE, Albuquerque, NM 87110, Hours: 8-7pm, Phone: (505) 889-0756) and picked up a used Creative Sound Blaster X-Fi Xtreme Audio PCIe x1 SB1040 sound card (for $10). Plop and drop - nothing more was necessary for it to load up in unRAID. Haven't done anything with it yet, but it could help with audio files and audio output from the VMs. If I get really bored and turn the server into a Home Theater PC, that 5.1 sound option will be nice to have.


Oh frack... parity check is running again... that will be about 6 hours of disk spinning again. But, that should be the first real parity check that serves as a baseline. Guess I should get used to it.


Next up, building a bunch of VMs and starting to use the system for storing files and backups.

  • 2 months later...

So, just a boring update. Not much to report lately, as I've just been on travel a lot for work and dealing with a "wonky" DSL connection at my house. 


The Dell T310 server has been running smoothly on unRAID 6.6.6, with no real issues since I installed the H200 controller. My biggest quandary has been deciding whether I should increase the cache drive size (500GB or 1TB) or bump the RAM to 32GB. To be honest, I don't need to do either at the moment, and my drive array appears to be running just fine. So, nice and quiet for me. Looking forward to the 6.7/6.8 update at some point; the new features seem really promising. I still need to work on some VMs - that's an ongoing project for me.

  • 2 months later...

April 24, 2019 - VM with nVidia GT610/1GB card - or why I have "no love" for this video card - post pains.


So, for an update on this build - I'm up to 6.6.7 with no real issues, and my Dell T310 server has been running rock solid for 55+ days. Mostly I've just had to update apps and Dockers. Parity checks run regularly and are not showing any signs of errors. The disks (still the 4TB IronWolf drives) are all running reasonably cool (always less than 38C/100F), and I continue to add to my NAS build as I am able. Still struggling with the choice between a larger SSD cache drive (500GB=$55) and more RAM (+16GB=$67) - but now I may need to consider a video card replacement instead. The GT610 was all of $25, so it's not really a loss to me... just bummed I can't get it to work. 😎


In that regard - this VM build for Windows 10/64-bit Pro has me at an impasse. I guess I need to add myself to the "No love for the nVidia GT610 video card" community (GeForce GT610/1GB, low-noise version). I've never been able to dial in this card since first installation, and I'm not really sure why. Again - just so others following along can check what I've done so far - I am working with unRAID 6.6.7 on a Dell T310 (with Intel VT-d confirmed), booting the Windows 10/64-bit VM using Machine: i440fx-2.7 or 2.8, with either SeaBIOS or OVMF, with either CPU pass-through or emulated (QEMU64), and building the VM with the nVidia GT610 in its own IOMMU group (13), sitting in slot 4 of the available PCIe slots. But it is giving me the same headache as others have had with video passthrough. VNC/QXL video types work fine, and I am able to use the VirtIO drivers from Red Hat without much of an issue. Often the GT610 shows up as a Microsoft Basic Display Adapter (ugh), so it at least "posts" rather than locking everything up.
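For reference, the unRAID VM form ultimately generates libvirt domain XML; the GPU passthrough stanza looks roughly like this (a sketch - the PCI address and ROM path are placeholders, not taken from my actual config):

```xml
<!-- Hypothetical hostdev stanza for GPU passthrough; replace the PCI
     address with the GT610's address from its IOMMU group. -->
<hostdev mode='subsystem' type='pci' managed='yes'>
  <driver name='vfio'/>
  <source>
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
  <rom file='/mnt/user/isos/gt610.rom'/>
</hostdev>
```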


Note - for my VMs I am using TeamViewer (v.14), as I can access them from behind my firewalls over my Verizon DSL link without an issue.


(No, there is no fiber where we live, and I refuse to get ComCrap service... no, not even a dry drop! And yes, I have AT&T/Verizon/DirecTV, so suck it ComCast/NBCUniversal Media, LLC !)


And I followed the excellent GPU ROM BIOS edit video that SpaceInvaderOne had done, made sure there was no added header - and I still can't get the nVidia card to work reliably, even after multiple attempts. I have other VMs that work fine with TeamViewer, and several with VNC/QXL, but nothing seems to work reliably with the nVidia GT610. I might take one last shot at it with Windows 7/64-bit, but I'm not holding my breath for that one either. Mostly I just want a card for video and audio transcoding and video acceleration (virtual reality/3D MMO gaming), and "maybe" some home-lab stuff. I thought the GT610 would have worked well, since I have one in another HP DC7900 desktop (quad-core Intel Core 2 Q8400), where it works rather solidly.


After three nights of tinkering, I think I am just going to find another video card to try, either off eBay or at the local PC reseller.


Oh... well that's a bugger (of a video card)


First, just to get it written down - the T310 implements Intel® Virtualization Technology for Directed I/O (Intel VT-d) via the Intel 3420 chipset, and the built-in video is a Matrox G200eW with 8MB of memory integrated into the Nuvoton® WPCM450 (BMC controller); it will do up to 1280x1024@85Hz in 32-bit color for KVM. It has a 3D PassMark score of 42... which is squat-nothing.


And I'm also just going to bookmark the following specs for reference:

PCI 2.3 compliant

Plug n' Play 1.0a compliant

MP (Multiprocessor) 1.4 compliant

ACPI support

Direct Media Interface (DMI) support

PXE and WOL support for on-board NICs

USB 2.0 (USB boot code is 1.1 compliant)

Multiple power profiles

UEFI support


So, in working with the nVidia GT610 I picked up, I am learning that the T310 slot layout is (from top to bottom):


Slot 1: PCIe 2.3 (5GT/s) x8 (x8 routing)

Slot 2: PCIe 2.3 (5GT/s) x16 (x8 routing) <- likely best for a graphics card. 

Slot 3: PCIe 2.3 (2.5GT/s) x8 (x4 routing)

Slot 4: PCIe 2.0 (2.5GT/s) x1

Slot 5: PCIe 2.0 (2.5GT/s) x1


Disabling the integrated video controller seems to make sense given the basic nature of the video out, and Slot 1 has the SAS controller, so the best spot for a graphics card is Slot 2. What is not clear to me is whether there is a real need to change the BIOS "Enable Video Controller" setting to Disabled in order to pass a second graphics card through to a VM. I don't think that's necessary, but more experimentation will tell in time.


I did find that due to cooling issues, Dell limits the power draw to 25W on slots 4 & 5. It was also noted that the x16 slot was probably not originally designed for graphics cards, and thus its power draw may be limited to 40W max. Given that the two 400-watt power supplies are redundant (so about 400 watts usable), perhaps this isn't the most surprising find. The PCIe slots were likely all intended for Ethernet or SAS cards for external expansion disk arrays. Based on this, finding a low-power (25-40W) graphics card with any performance might be tough. I found out that the GT610 has a max draw of 29 watts. Buggers! No wonder it's a popular card for systems - cheap and low power. Then, before doing more research, I managed to snag a Radeon R9 290 - but given the power draw limits, I doubt I can get the 290 to work in the server. It draws around 300 watts alone. (Oops. =(  Well, at $80, I think I snagged a good buy... eBay is listing the same cards at $120. 😃 )
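To keep the card shopping straight, the slot limit boils down to a simple filter. A sketch using the rough wattage figures quoted in this thread (not official specs):

```python
# Candidate cards and rough max power draw (watts), as discussed above.
cards = {
    "nVidia GT 610": 29,
    "GeForce GT 1030": 30,
    "Radeon RX 550": 50,
    "Radeon HD 7570": 60,
    "Radeon R9 290": 300,
}

SLOT2_LIMIT_W = 40  # the x16 slot, reportedly limited to ~40 W

# Keep only cards the slot can actually power (no aux connectors available).
viable = [name for name, watts in cards.items() if watts <= SLOT2_LIMIT_W]
print(viable)  # -> ['nVidia GT 610', 'GeForce GT 1030']
```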


I did find a comparison of low-power graphics cards that might be really useful. From that, I could maybe go with the Gigabyte Radeon RX 550 Gaming OC 2G card (at about $100), as it draws just 50 watts. Although I might instead consider the ZOTAC (or Gigabyte/EVGA) GeForce GT 1030 2GB GDDR5, since it only draws 30 watts! (And it's only $85 at Newegg.)


And I might have either an old GeForce 8800 or an HD 7570 (60 watts) in my spare parts that would work too. I was really hoping to do some transcoding, so the GT 1030 might be the right call for all the "wants" I have for a VM. I just wish I knew it would work well as a VM passthrough card with unRAID systems without a lot of hassles.


Oh, and Plex doesn't support the AMD RX 550... so that's part of an answer too.

On 4/27/2019 at 5:26 AM, nuhll said:

Just as a hint 8x or 16x doenst make a noticable difference for GPU.

For bandwidth, you are 100% right.


Especially since this x16 slot has only x8 routing. And I am not looking to use this build for any multi-GPU acceleration features that the additional lanes would help with anyway. (This is a notable downside of the T310 architecture compared to more modern server motherboard designs, but something I can probably live with given the price I paid.)


But it also "might" make a slight difference for power, since a full-sized x16 graphics card (without additional 6- or 8-pin connectors) can typically draw up to 5.5 A at +12 V (66 W) from the slot. Dell may have limited it to 40 watts, which makes GPU card selection a little trickier. I didn't see any specific documentation on the x8 slots, but noted that x4 cards are typically limited to 25 watts, and from what I've read in other forums the x8 slots are potentially limited to 25W in the same way. Again, given that the T310 server's usable power supply is 400 watts, I don't want to fool with Molex-to-6/8-pin PCIe power adapters.

