
Converting my bare metal Win10 to unraid Win10VM Desktop


Bean


 

 

Thanks for all your help. I'll be returning once I tackle the dedicated build; still contemplating picking up a used HP ProLiant system vs. a build from scratch.

 

Sent from my SM-T800 using Tapatalk

 

I have 4 DL380 G6 machines. There are plusses and minuses.

 

Can you elaborate on some of the pros and cons you faced?


 

 


I'll list both at the same time:

 

Can have lots of RAM. I have 72 GB currently, with a max capacity a little more than double that. When doing file transfers with unRaid, I saturate gigabit networking because the file first goes to RAM, then to the cache (if I have it selected, which I'm not sure I really need for transfers), then to the array. It's nice to be able to run 2-3 VMs and still drop a 30 GB file and push it at max speed. I recently added a couple of 10 GbE cards for backing up one server to another directly. Right now I'm limited to about 4-5 Gbps, partly because of how unRaid writes files (not striped) and partly because of the 3 Gbps SATA limit I'm running into. I don't think I really needed the 10 GbE cards for backing up the array, because when writing with parity you only get 50-80 MB/s, but it's fun to play with. I've gotten higher transfer rates, but that was when assigning fast drives to a striped btrfs cache pool (which I think is what Linus did). Not really practical, though.
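
As a rough sanity check on those numbers (my own back-of-envelope math, using only the figures quoted above; the 90% efficiency factor is an assumption): a saturated gigabit link moves a 30 GB file in a few minutes, SATA II works out to roughly 300 MB/s per drive after encoding overhead, and parity writes land well below that, so the 10 GbE cards are rarely the limiting factor.

```python
# Back-of-envelope transfer math; the 90% efficiency figure is an assumption, not a measurement.
GIB = 1024 ** 3

def transfer_seconds(size_bytes: float, link_gbps: float, efficiency: float = 0.9) -> float:
    """Seconds to move size_bytes over a link_gbps link at the given protocol efficiency."""
    usable_bytes_per_s = link_gbps * 1e9 / 8 * efficiency
    return size_bytes / usable_bytes_per_s

file_size = 30 * GIB  # the 30 GB file mentioned above
print(f"1 GbE   : {transfer_seconds(file_size, 1.0):.0f} s")   # roughly 5 minutes
print(f"4.5 Gbps: {transfer_seconds(file_size, 4.5):.0f} s")   # the observed 10 GbE ceiling, about a minute
# SATA II is 3 Gbps on the wire; 8b/10b encoding leaves ~300 MB/s per drive,
# and parity writes to the array land far lower (50-80 MB/s per the post above).
print(f"SATA II per-drive ceiling: {3e9 * 0.8 / 8 / 1e6:.0f} MB/s")
```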

 

I currently use a SAS expander to connect an external array for 3.5" drives. It seems to work fine. The enclosure holds 15 disks; if I want/need more, I can just add another card.

 

Eight 2.5" drive slots onboard, which is great for SSDs, but SATA disks only connect at 3 Gbps on the onboard backplane. If 2.5" is your thing, you can get an additional carrier to make it 16 onboard 2.5" disks. That would be one hell of an SSD cache cluster.

 

Six PCIe slots when using risers (4 × x4 and 2 × x8 in x16-length slots), but only running at PCIe 2.0 (booooooo!).
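
For scale (my own rough numbers, not from the post): PCIe 2.0 carries roughly 500 MB/s of usable bandwidth per lane, so even an x4 slot has headroom for a single 10 GbE port; Gen2 mostly matters for a GPU sitting in an x8 slot.

```python
# Approximate usable PCIe 2.0 bandwidth per slot width (assumes ~500 MB/s per lane after 8b/10b).
PCIE2_MB_PER_LANE = 500

for lanes in (4, 8):
    mb_s = lanes * PCIE2_MB_PER_LANE
    print(f"x{lanes}: ~{mb_s / 1000:.0f} GB/s (~{mb_s * 8 / 1000:.0f} Gbps)")
# x4 ~ 2 GB/s (16 Gbps) -> plenty for one 10 GbE port; x8 ~ 4 GB/s is where a modern GPU feels Gen2.
```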

 

Can be relatively cheap to get into and upgrade. I bought mine for 50 bucks each. They came with dual 2.2 GHz quad-core Xeons; I upgraded to dual 2.9 GHz six-core Xeons for 120 bucks, which took my PassMark score from 7500 to about 13k on a single machine. Cinebench more than doubled, but I don't recall the exact numbers.

 

No onboard power for an aftermarket GPU, and it's a tight fit: I had to take a Dremel to the back plastic edge of my GTX 760 to make it fit (it was hitting the processor heat sink shroud), and I also had to run an external power supply to power the card. I believe G7s have an onboard power source for a single graphics card up to 300 watts.

 

Enterprise equipment is built to be beat on. I read on the forums about people having issues because of hardware failures and other things that come up from pushing consumer equipment too hard. Consumer computers aren't meant for 80-100% utilization all the time, or even for longer periods of high utilization. Servers (or at least mine) have massive (and often loud) cooling systems and are built a bit "tougher." I'm sure some will disagree, but the longest-living hardware I've owned is my 6-year-old MacBook Pro and this set of servers, which are about the same age. Both have been beat on and continue (knock on wood) to chug along. Long story short: if you're going to haul tons and tons of dirt, better to get a dump truck than a Honda Civic.

 

BIOS updates: you have to pay HP for them. You could probably download them from less reputable sources, but I don't chance it.

 

Sloooooow boot-up.

 

They eat more power, but have better power management. My main server idles at 100 watts doing nothing.
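
To put that idle draw in perspective (my own estimate; the electricity rate is an assumed example, not something from the post): 100 W around the clock is roughly 876 kWh a year, on the order of $100/yr at ~$0.12/kWh.

```python
# Rough annual cost of a 100 W idle draw; the rate is an assumed example, adjust for your utility.
IDLE_WATTS = 100
RATE_PER_KWH = 0.12  # assumed $/kWh, not from the post

kwh_per_year = IDLE_WATTS / 1000 * 24 * 365  # ~876 kWh
print(f"{kwh_per_year:.0f} kWh/yr -> ${kwh_per_year * RATE_PER_KWH:.0f}/yr")
```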

 

Four gigabit Ethernet ports onboard, so if you're running a few VMs that are using bandwidth on the network, they don't bog down sharing a single gigabit port, and you don't lose a PCIe slot adding a card for it. (Side note: I lost access to one of my x4 slots because my GPU is so tall it blocks it.)

 

When I first started using unRaid about 10 months ago, the onboard RAID controller wasn't recognized. Then, for some reason, about 4 months ago it was; maybe an upgrade, but I don't remember. Only 2 of the servers use the onboard controller; the other 2 are using an H220.

 

iLO server management is fun to play around with.

 

Very easy to "service." If a fan goes out, you get an alert. Problem with one of the redundant power supplies? Notification. Once the system is powered down, it takes 30 seconds to replace. Everything is a bit easier to service, short of replacing the board.

 

I've spent hours and hours trying to figure out how to make things "work." Part of that is because I was learning about the hardware at the same time I was (and still am) learning about unRaid. And because of that, I both love and hate these servers.

 

I have 4 because I use one for primary storage, Plex transcoding, and hosting a few VMs that connect to physical desktop locations in my house over Cat 5e. The other 3 are for transcoding video projects, with one of those doing double duty keeping a duplicate of my primary server's array. I probably have about 800 dollars into the whole cluster (a thousand total if you include the half rack), which gives me access to 72 cores' worth of transcoding power. For another 360 dollars I'll have 96 cores, all with higher clock speeds. I couldn't build anything this powerful for a grand.
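
For the curious, the rough cost-per-core arithmetic behind that claim (my own math on the figures quoted above, ignoring the rack):

```python
# Cost per core using the figures quoted above (rounded, illustrative only).
spent_now, cores_now = 800, 72
upgrade_cost, cores_after = 360, 96

print(f"now  : ${spent_now / cores_now:.2f}/core ({cores_now} cores)")
print(f"after: ${(spent_now + upgrade_cost) / cores_after:.2f}/core ({cores_after} cores)")
```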

 

If my needs were less, then I'd probably buy a desktop server. It doesn't look as "cool" but sometimes function has to override form.

 

 

I'm sure there's more to be said, but I think that's a good start.

 


Thanks for your in-depth explanation. I was thinking about the DL380 G9. Is the BIOS thing a general HP thing? Are there other restrictions with proprietary hardware, i.e. is most stuff compatible, or do I need to get everything from HP?

 

Sent from my SM-T800 using Tapatalk

 

 



I believe the newer hardware comes with updates for a set amount of time... what HP calls "entitlement."

 

Since owning servers was new to me, I used HP cards for my HBA and SAS expander, which I probably didn't have to do. I've also used 3 different Nvidia graphics cards, an ASRock USB 3.1 card, and Mellanox 10 GbE adapters with essentially no issues. I don't think there are any hardware restrictions, at least on my machine.

 

I've been eyeing the G9s, but it'll be a few years before I wear my current servers out or they become woefully outdated. Hopefully by then they'll be a bit more reasonable in price.

 

