My first hobby “TOWER”


rollieindc


Video Card Dump - SWAP & Plex Docker

 

So, tired of dealing with the nVidia GT610 card in any VM builds, I pulled it and installed an ASUS Phoenix GT 1030 OC 2GB card ($75). I needed to place it in slot 2 because of the fan shroud, and moved the SAS/SATA controller to slot 3 (leaving slot 1 empty). The system and the VM seem stable enough once the VM is booted, but I still can't access a VM that uses the 1030 card (can't get TeamViewer to come up - probably a video driver issue that I still need to work through). I might need to go into the VM through VNC at first using the QXL driver, keep the video card and driver something VGA-simple, then change the card in the VM settings and load the new drivers from within the VM over TeamViewer. If someone else has a better idea, I am all ears - it's mostly hunt-and-poke at this point. And yes, I've watched @SpaceInvaderOne 's videos. The problem is that I am currently running the system headless, so I'd have to pull out a monitor, keyboard, and mouse to make a proper go of it.
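For anyone following along, the VNC-first approach can be sketched as a libvirt XML fragment (illustrative only, not pulled from my config; unRAID normally generates this when you pick VNC as the graphics card in the VM settings):

```xml
<!-- Emulated QXL display served over VNC; swap this for the GPU passthrough
     <hostdev> entry only after the guest drivers are installed -->
<graphics type='vnc' port='-1' autoport='yes' listen='0.0.0.0'/>
<video>
  <model type='qxl' ram='65536' vram='65536' heads='1'/>
</video>
```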

 

Also rather torqued off to find out that, apparently, the 1030 card can't do much for transcoding videos using NVENC, or with 4K. Well, it's not the whole reason I bought it, as I really wanted the physics engine on it for some graphics and scientific computing. Just surprised that it apparently can't do much to transcode a 4K video.

 

Still frustrated with the VM builds, I changed tack and installed a Plex media server via a docker. Somewhat unimpressed - at least it's a small app. It took me a while to realize that when I went into the WebUI, I had to change the URL to start with https: - it wouldn't start otherwise. And it seems that everything I want to do with it requires a PlexPass or payments to the "Creators", and the functionality is pretty basic too, although it did sort out my library without much excessive thrashing. (Once I realized that I could give Plex a path like "nasmusic" that resolved to a path on my server shares, I had my library added. Oy! Half the time I was trying to get it to work, it threw errors saying the path was illegal because I used a capital letter or a hyphen.) I might be dumping Plex and looking at other media server systems.
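For anyone hitting the same path confusion: the container only sees whatever host paths you map into it, and the library path you give Plex is the container-side name. A docker-compose style sketch (image name and paths are illustrative assumptions, based on a share mounted at /mnt/user/nasmusic; unRAID's docker templates express the same mapping):

```yaml
services:
  plex:
    image: linuxserver/plex          # assumed image; any Plex image maps the same way
    ports:
      - "32400:32400"
    volumes:
      # host share path : the path Plex sees when you add a library
      - /mnt/user/nasmusic:/music
      - /mnt/user/appdata/plex:/config
```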

 

 

Edited by rollieindc
2 hours ago, nuhll said:

get a 1050

 

- costs around 100€

- can encode 4K at around 60% GPU usage

- doesn't need a power adapter

- passively cooled


Except that the 1050 draws 75 watts, and the PCIe bus (on my Dell T310 server) only supplies about 60W.

For now, I think I will be able to make this 1030 work, at around half the price.

(Did I mention that I am Scottish? Thrifty + Stubborn 😃 )


Another day, another issue...

 

Still having issues with the VM using the ASUS nVidia 1030 card. I did pull out a monitor, keyboard, and mouse to make a proper go of it, and I don't think it's the card - it's the VM. So I continued tinkering with it, and more will be required. I am using direct video out to a monitor over the DVI interface, so yes, the card is recognized, but it still refuses to accept the drivers (new or old) and reverts back to the Microsoft Standard Display Adapter settings. This included removing the old driver with DDU (Display Driver Uninstaller) and reinstalling only the recommended one. At this point, I am going to start over and build a brand-new VM, checking everything twice.

 

As for the Plex docker, that has gotten a lot more stable and it's beginning to grow on me. I did buy the iOS app for my phone ($4.99), and it's good at connecting to my library (music & video). I'm not a big media dog by any means, but I have some tunes that I do like to listen to - and a few movies that I like to watch occasionally.

 

I'm also trying to set up binhex's DelugeVPN, but that's far more complex - and since I run NordVPN, I couldn't even determine which port to use from the OpenVPN files & servers to get past the initial install stage. (And I have other issues at home that are a much higher priority.)

Edited by rollieindc
  • 1 month later...

Moving on (June update)

 

Updated/maxed out the RAM in my server to 32GB from 16GB. (16GB for $49 on eBay)

 

Added a Plex docker. Tried BinHex's SickChill docker, but it's not working well (yet.)

Also bought a 1TB SSD (Samsung, $89 at MicroCenter) but have yet to install it.

Also still need to debug my ASUS nVidia 1030 card issues, but the system is working well even without that card fully working - at present.

Loaded up the darktable docker app for photo cataloging; not sure yet whether it's something I will keep.

Edited by rollieindc
revised some of the text
  • 2 weeks later...

Moving on (June 15 update)

 

Still happy with the Dell T310 server as an unRAID platform. Very few down days.

Installed the Samsung 1TB SSD; definite performance bump with it over the Patriot 240GB one.

Still working on the ASUS nVidia 1030 card - and tonight my next step is to make it the primary video card at boot-up.

(Nope, that didn't work either. I will need to remove the card and dump the BIOS/firmware on another machine.)

Starting to enjoy the Plex docker. Still haven't got SickChill working well. Also tried to get the Deluge-VPN docker working, but I'm confused by the config file locations - the interface in unRAID 6.7 seems to have changed significantly.
Uninstalled the darktable docker; the interface was just too clunky and difficult to use in a docker. Even on my bare-metal laptop, it's still clunky.

Noted that there is now a docker for GIMP, but honestly - I'd rather run it through a Windows VM on the server.

Edited by rollieindc
Bad nVidia -no boot

Hi,

I just signed up to this forum after reading your thread.

 

I have the Dell T310 too.

I got it as a gift from a family member who was using it in his company and doesn't need it anymore.

 

I am still a newbie with these server-related topics, so I am buying stuff as needed.

 

My current setup is as follows:

Dell PowerEdge T310

CPU: Intel Xeon X3450

RAM: 32GB ECC Quad Rank

PSU: 1x T122K POWER SUPPLY, 375W, NON-RDNT

Controller: SAS H700 with 1GB cache and battery.

Drives: 3x 3TB WD Green (2x in RAID 1 for the important files, and 1x HDD alone for media files, which are not important if I lose them :P)

1x 250GB SATA-based M.2 SSD inserted in a PCIe slot using an adapter and connected to a motherboard SATA port, running Windows Server 2016. (I plan to change it to a regular 2.5" SSD when I get the chance, or I will add the new SSD as cache.)

 

What I want to do is:

 

I want to buy new NAS drives and add them to the system, but it only has 4 slots, and I was searching for a way to add 4 more slots using a cage, so that I can use all the drives I have and add the new NAS drives as backup drives for my important files. Most likely I will not need more than 4TB drives for backing up my important files, which include documents and personal pictures and videos.

 

I want to use this server as a backup for my files and for editing photos using Photoshop on my main PC, and as a media server to watch my videos on my SmartTV.

 

My questions to you:

Did you use a SAS-to-SATA splitter to connect your drives to the controller to get 6Gb/s speeds?

Can you take a picture of the internals of your server showing your drive arrangement?

Do you think I should change my OS from Windows Server to unRAID or something else?

Any suggestions will be appreciated.

 

Thanks for your time.

 

 

 

On 6/21/2019 at 12:57 PM, BuHaneenHHHH said:

Did you use a SAS-to-SATA splitter to connect your drives to the controller to get 6Gb/s speeds?

Can you take a picture of the internals of your server showing your drive arrangement?

Do you think I should change my OS from Windows Server to unRAID or something else?

 

Heyas,

 

Welcome to the T310 club. Feel free to ask/share info. Sounds like you have a good start on a nice system. I'd see about adding a redundant power supply, if you are able. Not a mandatory thing, but I like the reliability bump I get with mine. And yes, I have mine hooked into a UPS brick - just in case.

 

I had the H700, but went to an H200 card so I could use the SMART info to monitor my drive health more closely. But the H700 is a nice card too. One piece of advice for the H700: copy your RAID config settings, so that if you have a drive fail while running parity under unRAID, you can still rebuild your drive array.

 

And - yes, I use SAS/SATA breakout cables (4 drives per cable, and two ports on the card, for a total of 8 drives), and yes, I get 6Gb/s (or the drive's max speed) on them. eBay sells them for less than $10. Look for “SAS SFF-8087 4 SATA Hard Drive Cable”.

 

If I were you, I'd consider putting all three of your 3TB drives into the unRAID array and making one of them the parity drive. That would give you 6TB of parity-“covered” storage. Then for every drive you add, you add 3TB of “parity covered” storage. I went with 4TB drives, based on price point, but if you stuck with the 3TB drives and went up to 8 drives, you'd sit at 21TB maxed out.
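That sizing works out in one line of arithmetic; a quick sketch (single parity, drives assumed equal size; unRAID only requires the parity drive to be at least as large as the largest data drive):

```python
# Usable space in an unRAID-style array: parity drives hold no user data.
def usable_tb(n_drives: int, size_tb: int, n_parity: int = 1) -> int:
    return (n_drives - n_parity) * size_tb

print(usable_tb(3, 3))  # three 3TB drives, one parity -> 6
print(usable_tb(8, 3))  # eight 3TB drives, one parity -> 21
```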

 

My only reason for going back to a RAID 5/10 disk configuration now would be improved read speeds. And since I am using my system mostly for a few VMs (on the SSD) and as a NAS, I don't see the need.

 

You might want to read up on how parity is implemented in unRAID, versus RAID 1. There were compelling reasons that I went that way for my needs.

 

With unRAID you’d likely want to use the SSD as a cache drive, which works well for VMs. For me, I’d keep the SSD on the PCIe card; you’ll probably get speeds as good as the H700, and it will likely run faster on a separate PCIe slot. You might want to go to a 500GB SSD and move to an NVMe-type drive, but 250GB is good to start with (2-3 VMs plus a docker app).

 

And - sure, I can grab a pic of my drive layout and will post it later. There’s enough room in the T310 to get 8x 3.5” drives inside the box if you think it out carefully. If you decide to go with an external drive cage, you might need an external connector card, like an H200e.

 

FYSA - I do not recommend a USB HDD box on the system for anything other than file transfers. The USB ports on the T310 are s-l-o-w. But for temporary file use or transfers, they work well enough.

 

With unRAID, I think I read that you can run your Windows Server 2016 image as a virtual machine. Then you can assign the unRAID “shares” as virtual drives as you want. Best of both worlds.

 

But for me, I am able to attach my unRAID (NAS) shares directly to PCs (win 7, 10, macs) on my network easily. Looks & works like a regular network mapped drive.

 

More later...

 

Edited by rollieindc
Spelling fix
6 hours ago, rollieindc said:

 

I’d see about adding a redundant power supply, I have mine hooked into an UPS brick- just in case. 

 

I had the H700, but went to a H200 card so I could use the Smartdrive info to monitor my drive health more closely.

 

Look for “SAS SFF-8087 4 SATA Hard Drive Cable”. 

 

That would give you 6TB of parity “covered” storage. Then for every drive you add, you add 3TB of “parity covered” storage. 

 

You might want to read up on how parity is implemented in the unRAID system, vice RAID 1.  There were compelling reasons that I went that way -for my needs. 

 

move to an NVME type drive- but to start with 250GB is good (2-3 VMs plus a docker app) 

 

And- Sure, I can grab a pic of my drive layout and will post it later. There’s enough room in the T310 to get 8x 3.5” drives inside the box if you think it out carefully. If you decide to go with an external drive cage you might need to get an external connector card - like a H200e.

 

FYSA- I do not recommend a usb HDD box on the system for anything other than file transfers. The USBs on the T310 are s-l-o—-w. But for temporary file use or transfers, they work well enough. 

 

With unRAID, I think I read that you can run the Windows 2016 image as a virtual machine. Then you can assign the unRAID “shares” as virtual drives- as you want. Best of both worlds.

 

But for me, I am able to attach my unRAID (NAS) shares directly to PCs (win 7, 10, macs) on my network easily. Looks & works like a regular network mapped drive.

 

More later...

 

 

 

Thanks for your reply.

 

I plan to buy the redundant power supply + a UPS brick when I rewire the cables to the new location for the server, switch, and router, which will be under my staircase.

 

After reading your posts in this thread, I decided to go and buy the H200 for the same reasons as yours, and I already bought the SAS splitter cables.

 

For my RAID setup, I will change it once I am able to get all the drives into the server, depending on what I do with them.

 

I also tried to use an NVMe M.2 drive with a PCIe adapter for the OS, but it did not work. I am thinking of trying again later, as I want to install unRAID. If it works, that will be great, because I do not want to format the SSD with Windows Server. I might also buy a 2.5" SSD and install unRAID on it if the NVMe drive does not work, as I did not find any information on running NVMe drives in our T310 server.

 

One thing I forgot to ask you:

Did you remove the backplane? If yes, how did you get power to all the drives, since the server has only 2 SATA power cables? I read that it is not good to have a splitter for more than 2 drives on one SATA power cable. (That's my main reason for asking for pictures :P)

 

Thanks again.

 

 

7 hours ago, BuHaneenHHHH said:

i also tried to use an NVME m.2 drive using a pcie adapter for the OS but it did not work, i am thinking of trying again later as i want to install unRaid.

Why are you trying to do this? Unraid does not install in the traditional sense - instead it unpacks itself from the archives on the USB stick (which also holds the licence file) into RAM and then runs from there. Since you need the USB stick to be present all the time that Unraid is running, why do you think there is any advantage to trying to get it onto an SSD instead of simply booting it from the USB stick?

3 hours ago, itimpi said:

Why are you trying to do this? Unraid does not install in the traditional sense - instead it unpacks itself from the archives on the USB stick (which also holds the licence file) into RAM and then runs from there. Since you need the USB stick to be present all the time that Unraid is running, why do you think there is any advantage to trying to get it onto an SSD instead of simply booting it from the USB stick?

 

That I did not know, since I still have not gotten to try unRAID. I always thought it was another OS, like Windows or Linux, that needed to be installed on a hard drive. I will start reading about it and try it on a separate machine for testing purposes.

 

Thanks for pointing that out to me.

 

 

16 hours ago, BuHaneenHHHH said:

Did you remove the backplane? If yes, how did you get power to all the drives, since the server has only 2 SATA power cables? I read that it is not good to have a splitter for more than 2 drives on one SATA power cable. (That's my main reason for asking for pictures :P)


Heyas - Happy to share what I know.

 

Just to be clear, I am running my T310 as a "headless" system, with no keyboard, mouse, or video display. If you intend to use it as a desktop-type system to run games, then you might want to consider how the VMs and the other components are installed. I wanted to be able to run 24/7 as a NAS, with some virtual machines (VMs) that I could remote-desktop into via a VPN, run some Docker apps (Plex), and otherwise house my digital photo library.

 

Attached are some of the photos of my system. Excuse the pink floor, the house is in a state of "pre-remodeling". 

 

Let's start with your power question. The 4x HDDs in the array have a power tap already.

 

For my system, as shown in the photos: I removed the side cover panel in one of the photos so you can see all the drive bays. I added a StarTech 3x 3.5" removable HDD kit in the top/front of the system. I used the molex power tap from the removed RD (Dell removable hard drive) and the SATA power from the removed Dell DVD drive, since I replaced both of those with that aftermarket removable SATA drive bay. Essentially, what I had originally was a 2x 5.25" half-height (or one full-height) bay to work with. I also included a view inside, so you can see the "inner workings" of the server: power, video, and drive/cabling layout.

 

You can see that new drive bay I installed in the other photos too. I have the one molex split into two SATA power outlets, plus the one existing SATA power outlet, for a total of three SATA power taps to work with. Two go into the new three-HDD drive bay (yes, it only needed two SATA taps for three 3.5" drives), and I used one tap of the SATA power splitter for the SSD.

 

Overall, I like this setup and think I am good on power, but I would have liked it better if I had been able to find a 3x or 4x bay system that didn't use removable trays and just accepted bare SATA/SAS drives by sliding them into a SATA port, locked in with a cover. And yes, I currently have the SSD hanging from the cords ("ethereally mounted"); it will get hard-mounted later (or duct-taped if enough people complain about it!)

 

I also included a shot of the redundant removable power supplies. I really like this feature: I can swap out a bad PSU, and the system runs on the remaining one in the interim. So "no," as you can see, I did not remove the backplane, and I wouldn't recommend it. If you install the redundant power bricks, you should be able to pull the existing power supply, replace it with the new ones, and add the redundant distribution board. You can see the distribution board just to the left of the power supplies in the overall "guts" view. The one existing molex for the drive and the one existing SATA power connector came from that distribution board and are unchanged. The molex and SATA power cables from the distribution board looked "beefy" enough, so I think I am OK for power consumption, given what I am using the system for and the way the power is distributed to the drives.

 

CAVEAT EMPTOR: I WOULD NOT RECOMMEND THIS SETUP FOR A VIDEO PRODUCTION ARRAY. IF I WERE BEATING THIS ARRAY WITH VIDEO EDITS, I WOULD GO WITH SOMETHING MUCH MORE ROBUST AS A SERVER! (Besides, I really hate debugging this nVidia card issue in a VM. If that were my goal, I think I'd rather pay for a SuperMicro server. But I am a cheap Scotsman with a mustache.)

 

You can also see where I have my unRAID USB memory stick plugged into the motherboard. And trust me when I say this: booting unRAID from USB is not a speed issue. It's very compact, and it "unfolds" itself very quickly into a full-blown server. My total unRAID boot time is about 90 seconds to 2 minutes, and I leave it running 24/7. Now, just to be clear, you will want that SSD set up for the apps, dockers, and virtual machines (VMs) in order to get something speedy/responsive. And the part I really like is that all the VMs can run independently, 24/7, as long as you don't try to use the same resource (e.g. the same graphics card) at the same time. And most VMs can run a virtual "display" and output through VNC. I've already had Ubuntu, Win7, and Win10 images running simultaneously on my T310 with VNC.

 

(Although I am still fighting with the VMs using an nVidia 1030GT graphics card - ARGH!)

 

If someone just wanted a single machine that is not up/on 24/7, then I suggest they consider installing a Win10 image on a T310, slapping in a good SSD and video card, and going with it. But if they wanted a NAS that is on more than an hour or two while they work with photos or watch videos (but not a production system), that can also run VMs and apps (like Plex), I am pretty much convinced unRAID is the best way to go. If they wanted a production system that serves more than a few (5?) users and does audio or video production, I'd be looking at a higher-class dual Xeon or dual Threadripper machine with a good PCIe backplane/video card compatibility track record.

 

Did I miss anything?

Questions?

Comments?

Argumentative Speculation?

Philosophical Diatribes? 😃
 

 

 

 

 

 

IMG_2066A.jpg

IMG_2067A.jpg

IMG_2068A.jpg

IMG_2070A.jpg

IMG_2073A.jpg

613B8s6fkrL._AC_UL654_QL65_.jpg


Update: June 23, 2019 - The continuing saga that is the ASUS nVidia 1030 GT OC card.

 

I did manage to get the video card to display from the DisplayPort both at boot and in a Win10/64 VM configuration with OVMF, i440fx-2.12. (Yea, some success!) I had to make the card the primary display in the T310, essentially disabling the onboard Intel video chip in the server's BIOS. The boot screen now shows up through that display.

 

To get this far with the VMs, I used a downloaded and edited BIOS from TechPowerUp in the VM's XML, and set the sound card to the ich9 model. So far, it was looking good - until the machine rebooted while installing the nVidia drivers. (UGH!) At that point I got the dreaded "Error 43" code in the Windows driver interface box, and was stuck in 800x600 SVGA mode, unable to correct it.
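For later readers chasing the same Error 43: the usual suspect is the nVidia driver detecting the hypervisor. A commonly suggested libvirt tweak (illustrative fragment; I can't confirm it fixes this particular card) is to mask the KVM signature in the VM's XML:

```xml
<features>
  <acpi/>
  <hyperv>
    <relaxed state='on'/>
    <vapic state='on'/>
    <spinlocks state='on' retries='8191'/>
    <!-- any string of 12 characters or fewer; masks the KVM vendor id -->
    <vendor_id state='on' value='none'/>
  </hyperv>
  <kvm>
    <!-- hide the hypervisor from the guest so the driver doesn't bail -->
    <hidden state='on'/>
  </kvm>
</features>
```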

 

I will likely remove the card and dump the BIOS on another machine, and then use that in a new VM build to see if that works. I am unsure if I need to go back to SeaBIOS and try that option to make it workable, but that's another path I could pursue. Also unclear if i440fx-3.1 is an option or not.
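If the dump works, the ROM gets referenced from the card's passthrough entry in the VM's XML, something like this (the PCI address and file path are placeholders, not my actual values):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- the card's address on the host, as shown by lspci -->
    <address domain='0x0000' bus='0x05' slot='0x00' function='0x0'/>
  </source>
  <!-- vBIOS image dumped on another machine -->
  <rom file='/mnt/user/vbios/gt1030.rom'/>
</hostdev>
```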

 

In some regards, I am just encouraged to know that the 1030 GT video card is indeed working in the Dell T310, and that I can have it as a display output - even if "hobbled" at present.

  • 1 month later...

31JULY2019 - Upgrade to 6.7.2 issue?

Upgraded the OS to 6.7.2 from 6.7.1, but something new happened. I had been able to access the system at https://tower (actually https://nastyfox), but now I have to use https://tower.local/ or the direct IP address http://192.168.1.119 to get to the server GUI. This was not an issue in 6.7.1. I tried a few fixes, including purging the network (DNS) entries in the router, without any success.

 

Plus, all of my drive maps on my Win7/64 laptop had to be remapped and logged in again. Another issue is that when I try to log in with any user name other than "root", I seem to be unable to get the system to recognize the user/password combination correctly. "root" works fine, but my other usernames (admin and peter) are not working very well.

Edited by rollieindc
typeface
8 hours ago, rollieindc said:

Another issue is when I try to log in with any user name than "root" - I seem to be unable to get the system to recognize the user/password combination correctly. "root" works fine, but my other usernames (admin and peter) are not working very well.

Only root is allowed to log in to the management console or the web UI. Root is NOT allowed to log in to the user shares; access will work if no authentication is required, but will fail if the share is private or secured.

 

Your statement is very unclear on what "log in" you mean.

14 hours ago, jonathanm said:

Your statement is very unclear on what "log in" you mean.

Thank you for the reply Jonathanm, I do appreciate you taking the time to follow up with me.

 

To your comment -  I can get to the management console with "root" or "admin" accounts.

However, I used to be able to use https://nastyfox/Main, and now I have to use https://nastyfox.local/Main to get to the server. I can also get to the management console at https://192.168.0.119/Main

 

But I am unable to reliably map a network drive to my unRAID shares in Win7/64, like the one named "Music". In Win7/64 under 6.7.1, it connected the "M:" drive to //nastyfox/Music with any user and password combination. But since the upgrade to 6.7.2, nothing seems to let me map it to "M:", with the sole exception of "//192.168.0.119/Music".

 

And I find this rather odd behavior.

  • 1 month later...

I wanted to reply earlier, but I was waiting for a few things to arrive and test.

 

First of all, let me thank you for posting the pictures of your server and updating this thread whenever you made changes.

 

Regarding my server:

 

I bought a Dell H200 and flashed it with the HBA IT-mode firmware, or whatever it is called 😅

I also bought SAS-to-SATA splitters, but unfortunately they did not work, so I ordered another pair from another seller, which still has not arrived.

 

I also bought 4x WD WD80EZAZ 8TB drives from eBay, which made me poorer by $576.

Now I think I have more storage than I need, unless I start downloading higher-quality videos 😝

 

 

Currently I am running OpenMediaVault 4.1 with a Windows 10 VM.

I plan to make a Linux VM, which might be lighter, and keep the Windows 10 VM for testing purposes.

 

For my drives I will use SnapRAID with UnionFS, as I have read about it and it is safer than ZFS, which I was using with my old drives.

I will set up 3x drives for data and 1x drive for parity.

In the future I can add more drives easily with this setup, and if my server dies for any reason, I can remove the drives, put them in any other system (most likely Linux, since the drives are formatted as EXT4), and recover my files.

I also have 2x Hikvision 256GB SSDs that I currently do not use, but I will wait for the SAS-to-SATA splitter to install one as cache and one for the VMs and other apps, to get better speeds than the motherboard SATA ports.

 

I am currently copying my files to the server while writing this.

 

I used SMB shares to share data across my network and to watch my videos on my Android SmartTV using VLC.

 

Thanks again.

On 9/11/2019 at 7:51 PM, BuHaneenHHHH said:

i wanted to reply earlier but i was waiting for a few things to arrive and test

 

first of all let me thank you for posting the pictures of your server and updating this thread whenever you made changes

 

regarding my server

 

i bought a dell h200 and flashed it with HBA IT mode Firmware or whatever it is called 😅

i also bought sas to sata splitters but unfortunately they did not work so i ordered another pair from another seller which still did not arrive

 

i also bought 4x WD WD80EZAZ 8tb drives from ebay for which made me poorer by $576

now i think a have more storage than i need unless i downloading higher quality videos 😝

 

 

currently i am running OpenMediaVault 4.1 with a VM Windows 10

i plan to make a Linux VM which might be lighter as a VM and keep the windows 10 VM for testing purposes.

 

for my drives i will use Snapraid with UnionFS as i read about it and it safer than using ZFS which i was using with my old drives

i will setup 3x drives for data and 1x drive for parity

and in future i can add more drives easily with this setup and in case my server dies for any reason then i can remove the drives and put then in any other system (most likely linux systems since the drives are formatted as EXT4) and recover my files

i also have 2x Hikvision 256gb SSDs that i currently do not use but i will wait for the sas to sata splitter to install one drive as cache and one for the VMs and other apps to get better speeds than the motherboard sata ports

 

i am currently copying my files to the server while i am writing this

 

i used smb shares to share data across my network and to watch my videos on my android SmartTV using VLC

 

thanks again

First - You're absolutely welcome!

 

Bummer about the SATA/SAS cables, hope that's resolved now. Mine have given me zero issues.

 

And yes, I am "eye"-ing larger SAS drives, but am waiting for the prices to fall a little more. When I can get three 8TB or 10TB drives (one parity, one data, and one "hot swap" spare), then I will move over to larger drives. For now, I am fine with the multiple 4TB drives... easy to get spares if I need them. And I am not using up that much space yet (although I upgraded my photo, sound, and video equipment too!)

 

I'll have to look into OpenMediaVault. I'm using the unRAID SMB share connections; not sure I will stick with that - but it works.

 

And yes, the motherboard SATA ports are S-L-O....W. Like 1.5Gb/s slow.

 

Let me know how you're doing... as for me, I am kind of just hanging out for a while, as I am traveling a lot recently. It was nice to be able to access my music via the Plex server from my cellphone and play it on my rental car's CarPlay stereo while driving between Monterey and San Jose. (My server is back in Virginia.)

 

I still need to fix that ASUS 1030 nVidia card... DARNIT!

 

 

Edited by rollieindc
  • 3 weeks later...

Well, time for an unRAID update.

 

The Dell T310 has been mostly quiet and doing what it does best: parity checks. Very few issues running unRAID. The only real issue is that it screams for updates on software tools and dockers, which I do regularly. Otherwise, it's been up continuously for 45 days, serving files and running a couple of VMs, without much of an issue. One 600GB Seagate SAS 15K drive I have alerts that it gets "hot" at 115F, but quickly goes back to 99F within a minute or two. It may be a fan/circulation issue in the 3x drive bay I have it in. I'll likely "push" the drive bay fan to a fully-on state, rather than temperature-controlled (which I think it's currently in). I also have a Win7/64 VM and a Win10/64 VM, which have been running in the background with Plex, and they seem mostly "just working." I did install Beets, which cleaned up my music library files nicely, once I figured out its terminal console interface. The GUI for Beets doesn't work for much. If the terminal program weren't easy and relatively useful, I'd have ditched it after the first 20 minutes - the GUI was that bad.
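Side note on those temperatures: unRAID's drive warning thresholds are set in Celsius, so the Fahrenheit figures above translate as follows (a throwaway conversion sketch):

```python
# Convert the drive alert temperatures quoted above from Fahrenheit to Celsius.
def f_to_c(deg_f: float) -> float:
    return (deg_f - 32) * 5 / 9

print(round(f_to_c(115), 1))  # alert threshold -> 46.1 C
print(round(f_to_c(99), 1))   # settled temperature -> 37.2 C
```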

 

Also on the "new to me" hardware front, I am now at the point where I may want to make some "changes," mostly since I got the following hardware really cheap, which has me quietly thinking about a "rebuild." (Have I mentioned I love cheap used hardware!)

 

(#1) The first is a SuperMicro X7DCL-I-YI001 motherboard with two LGA 771 quad-core Xeon CPUs (might be E52xx's) and 32GB of DDR2 RAM - for $20. The downside is that it only has 2 PCIe x8 slots, 1 PCIe x4 (in an x8 physical slot), and 3x 32-bit PCI slots, and the DDR2 memory is maxed out. It came with a 4U case (no power supply) and a bunch of other fans and interface cards. My guess is that it was a failed attempt at a Bitcoin miner. There was very little dust, and the system looked super clean. For $20, I felt like it was a steal. It still looks like it has a lot of life left in it, even if I throw it into another ATX case with stuff I have lying around - like an nVidia GT610 or 1030GT card (on a riser), a 6Gb/s SATA PCIe controller, and a 240GB SSD. I could also upgrade to a new (cheap) set of 3GHz Xeons (I'm seeing some of the LGA 771 X5460's on eBay for $25/pair), so I think this could become a decent desktop running Windows 10. I realize I am capped on memory with this motherboard, but I am also working on a cMP (classic Mac Pro 5,1 with dual Xeons) that will ultimately become my video/photography editing workhorse. But even if I strip and resell the parts from this SuperMicro unit, I think I am ahead of the game - by a lot. (I even heard "HACKINTOSH" in the back of my mind, but quickly grabbed a beer and killed those brain cells before they took root!)

 

My question is, would this SuperMicro really be better than my T310 as a NAS/VM server?

 

(#2) An nVidia TESLA K10 GPGPU card, also for $20. It was modded, but again, it's a "what the heck" buy. I'm wondering if this would work with the nVidia build of unRAID - but since I run "vanilla Plex" without a subscription, I'm thinking it will have limited value/use/overall speed increase. Mostly I wanted to get it to tie to a VM for running some simulation programs, but I'm unsure if it's worth it for anything else.

 

I did read that I could put it on an x8-to-x16 (powered) riser, maybe for the SuperMicro above... but I am just not sure of the value. I cannot put it into the T310, since it needs more power than is available from the T310 power supply, and I am not really interested in replacing the T310's redundant dual power supplies. To be honest, I just want to leave the T310 alone and have it keep serving up and storing files for me. (If it ain't broke...)

 

Decisions, decisions...

Edited by rollieindc
  • 1 month later...
On 9/29/2019 at 3:23 PM, rollieindc said:

Decisions, decisions...

November 11, 2019

So, back at it again tonight. I have to say, I like 6.7.2 for the Windows "findability." Also, the automatic docker and software updates make for a lot fewer "warning notices" and a much smoother-running server/machine. The system has repeatedly been up for DAYS with no real down time for maintenance.

 

Sadly, though, I have had "less than zero" luck with nVidia passthrough on VMs. And I have tried just about everything. I've re-re-re-watched SpaceInvader One's videos and tried every trick in the book. So I am now at the point of throwing in the towel on the nVidia GT1030 altogether (it'll go into another system I have) and finding some other decent video card that doesn't have the continued nVidia "Code 43" VM issues (see attached screenshot). I've tried everything I can think of: different slots, various IOMMU groupings, editing the dumped BIOS ROM binary with the HxD hex editor, turning off Hyper-V, booting with SeaBIOS, booting with OVMF - and finally placing some chicken bones on it while holding a pair of crossed screwdrivers (one Phillips, one slotted) and reciting Ohm's law.

 

Now, I'm just buggered over it. I'd rather run a cheap AMD video card at this point for my gaming and photo/video editing. Suggestions welcome. nVidia seems to be taking the "Micro$oft" approach to things lately. Makes me think that AMD might just... walk past them too.

 

Card requirements:

  1. Works in a VM
  2. Works with TeamViewer (Personal preference to remote in with)
  3. No PCIE power (must run off the slot power in a PCIE 8x slot)
  4. No need for a HDMI dummy load
  5. Works with Adobe Photoshop, Premiere, and a program called "Second Life".

 

Ok, time for bed. Night all.

Screenshot_2.png
