51TB Dual Xeon Lian Li PC-D600 Build - "Shogun"



What I find most impressive about this build is that it's a 51TB monster that's quiet enough to be in your living room!!

 

How did you manage to get those CPUs for $75? eBay, or do you know people at FB?

 

 

Yeah, she's pretty quiet. About as loud as a small desk fan. Quieter than the fridge around the corner.

 

As for the CPUs: it's believed, and some Googling confirms, that FB upgraded their many data centers and dumped all these CPUs back on the used market through recycling vendors like Natex. So many hit the consumer market that prices dropped like a rock.

 

I picked up 6. (More on that later.) ;D

 

Great prices for us, yeah!!

  • 1 month later...

That's a nice-looking setup 8)

 

Just wondering, did you measure the power consumption on this, at idle and/or under full load?

 

Good question; unfortunately, I don't know. There seem to be a lot of power consumption threads on the forum, but I have never tested my power draw. I have three machines that run 24/7 and have never measured them because, to be honest, I don't care that much how much power they draw. It would, however, be good information for other users like yourself building similar systems. It just has not been a priority.

 

The systems I run all the time are my HTPC, my unRAID server, and my DMZ rig.

  • 2 weeks later...

Update:

 

So I changed my mind and now have the system in my "Man Loft." I don't have a cave; the kids took my den. Originally I was going to go headless, but the gaming aspect and hardware testing (play) were too enticing.

 

I have been testing the EVGA 1050 Ti Super Clocked video card with my Windows 10 "gaming" VM. Actually it's more of an Emby/Kodi VM. Works great, and with no additional power cables required it's a clean install. The card pulls at most 75 watts from the PCIe slot; very efficient for my light gaming needs.

 

Good job Pascal/EVGA.

 

Shogun is now running all the normal storage array stuff, "standard" Dockers, and 3 VMs, including the Win10 gaming VM pushing my 65" Samsung in the loft. I had a Chromebox there before. This is better and gives me more gaming options when the mood strikes.

 

Cheers!!

 

 


... So I changed my mind and now have the system in my "Man Loft." I don't have a cave; the kids took my den. Originally I was going to go headless, but the gaming aspect and hardware testing (play) were too enticing.

 

I can understand the attraction of the "play" value of this setup -- both gaming and "messing around" with various hardware testing.

 

... but I have to wonder if this was just an "I changed my mind" move -- or perhaps a bit of influence from your "other half", who might not have been quite as enamored of the windowed case and the Japanese katana and Oni Shogun mask staring at her :)

 

I do recall you had commented on her willingness to have this in the living room ...

... Glad this will be stationary in its new home in the formal living room. (NOTE: I am not sure how I got my wife to agree to that.)

 

On the other hand, since the mask was a gift from her mother, perhaps she really was okay with it  :) :)


... but I have to wonder if this was just an "I changed my mind" move -- or perhaps a bit of influence from your "other half" ... :)

 

 

Good point, Gary... Hmmm, she did seem all too "fine" with me deciding to move it upstairs, LOL.

 

On a side note, my 3-year-old son was not too pleased with the mask watching his every move... :o


... On a side note, my 3-year-old son was not too pleased with the mask watching his every move... :o

 

At that age, you NEED somebody to watch every move!! :)

[I remember it well... it's nice to have switched to "grandparent mode", where that's not a full-time need :)]

 

... I suspect it's nicer to have the system tucked away in your "man cave" [or in your case "man loft"] anyway.

 


 

... I suspect it's nicer to have the system tucked away in your "man cave" [or in your case "man loft"] anyway.

 

Yes, Gary, Shogun was very clear that it needed my undivided attention and that it enjoyed my company... It also let me know that if I removed it from the "cave" it would KP (kernel panic) irrecoverably. So it will remain in my sight.

 

Cheers

  • 1 month later...

Hey man, love the build! I just registered on this site after seeing this build because I wanted to ask you some questions. Looks like I'm no longer a lurker, lol. I'm in the middle of my build and still buying parts, and I think I'm about to pull the trigger on the Lian Li D600 as well.

 

I noticed a couple of things right off the bat from your parts and pictures. Why did you choose the SuperMicro expansion cards that don't support RAID? Also, is there a reason you're not using any of your onboard SATA ports?


I noticed a couple of things right off the bat from your parts and pictures. Why did you choose the SuperMicro expansion cards that don't support RAID? Also, is there a reason you're not using any of your onboard SATA ports?

 

Hello, thanks for your comments.

 

1.) I wanted fast, reliable JBOD controllers, as I knew I would be using a lot of disks. Most users don't set up hardware RAID on the controllers and then layer unRAID on top of a hardware RAID configuration, and I can think of two really good reasons not to do it. First, it adds an additional point (or points, depending on setup) of failure: depending on the hardware RAID setup, if a drive fails you may lose data, and if the controller hardware fails, unRAID cannot see the drives to rebuild them and you may lose data. unRAID, at its most basic, was designed to ensure data integrity in the case of drive/hardware failure.

 

The other reason is unRAID's hardware-agnostic nature. I can swap out hardware and it does not matter, as long as the drives' settings are unchanged. Shogun was pulled from older hardware and dropped into the new case with a few other add-ons; I fired up the system and all my old data from 10 years ago is there. With a hardware RAID config you cannot do this: you will lose whatever is on those drives if you swap out RAID cards or switch motherboards set up with RAID configs (unless it's the exact same M/B). Some RAID cards retain their settings, but those are enterprise class and very costly, and even then they only retain the info for a short time, as they rely on battery backups on the cards.

 

2.) There are not enough onboard ports for the number of disks I wanted, and using the cards gives me "clean" speed on a "single" controller. With M/B SATA you may find multiple controller types and speed differences between SATA ports, which can slow things down and/or bottleneck parity checks. Right now I run a parity check in 12-14 hours with an average speed of 106 MB/s on a 6TB parity disk. The total data is 18TB on the 51TB array. Prior to this, my old system using M/B SATA would take 3-4 days to check similar amounts of data. (A rough sanity check on those numbers is below.)
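
Back-of-the-envelope, assuming the check reads the full 6TB parity disk end to end:

6 TB ≈ 6,000,000 MB
6,000,000 MB ÷ (13 h × 3,600 s/h) ≈ 128 MB/s average
6,000,000 MB ÷ 106 MB/s ≈ 56,600 s ≈ 15.7 h

So a 12-14 hour check implies an overall average closer to 120-140 MB/s; the 106 MB/s figure is presumably the speed later in the check, when the heads are on the slower inner tracks. Either way, it beats 3-4 days by a wide margin.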

 

I hope this helps and/or makes sense. Good luck with your build!

 

Cheers,

LVNW


You make some great points. I've never tried unRAID before, but I plan on using it with the build that I'm doing now. I see your point about having a RAID setup and then using unRAID (counterintuitive just by the name, lol).

 

Are your cache disks also on the dedicated controllers? Or are those the only drives that you have connected to the SATA ports on your mobo?


Are your cache disks also on the dedicated controllers? Or are those the only drives that you have connected to the SATA ports on your mobo?

 

Hello, yeah, most people don't RAID unRAID, lol.

 

Yes, that is correct. I am using the two high-speed SATA 3 ports on the board. All the other ports are SATA 2.

 

SATA 3.0 = 6 Gb/s

 

SATA 2.0 = 3 Gb/s

 

Using all the M/B SATA ports would have caused speed issues with transfers and parity checks, as some drives would be on SATA 2 and others on SATA 3. This way all speeds are the same (more or less) and there is no PCI bus overhead from cache to array drives (based on this board's specs). YMMV depending on hardware.
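
For reference, converting those link rates to usable throughput (SATA uses 8b/10b encoding, so 10 bits go over the wire for every byte of data):

SATA 3.0: 6 Gb/s ÷ 10 ≈ 600 MB/s
SATA 2.0: 3 Gb/s ÷ 10 ≈ 300 MB/s

Either is more than a spinning disk can sustain, but SATA 2 can cap a fast SSD, so putting the cache disks on the two SATA 3 ports is the right call.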

 

 

If you have not yet, post your build so we can all take a look...


Still ordering parts. In fact, I had to submit a return today to an eBayer because he listed the mobo as E-ATX when in fact it was EE-ATX (some SuperMicro proprietary shit instead).

 

I'm thinking about getting your board, as it's E-ATX according to this: http://www.supermicro.com/products/motherboard/Xeon/C600/X9DR3-F.cfm. Your comment about drilling holes into the case concerns me, though, as the D600 is shown to support E-ATX... which is why that's a bit strange.

  • 4 months later...

Would you be willing to post pictures that show how you were able to mount the Noctua 80mm fans to the iStar drive cages? I have the same drive cages but was trying to figure out the best way to mount an 80mm fan to them. Did you just get longer screws? Thanks for posting your build!

On 2/2/2017 at 7:50 PM, Synaptix said:

 

 

I'm thinking about getting your board, as it's E-ATX according to this: http://www.supermicro.com/products/motherboard/Xeon/C600/X9DR3-F.cfm. Your comment about drilling holes into the case concerns me, though, as the D600 is shown to support E-ATX... which is why that's a bit strange.

 

The board has more mounting holes than the case did. All of the EE-ATX holes lined up fine, but two were missing, so I punched them in.

 

BTW, sorry I missed this post; it's been a while. I had to change emails and must have missed it.

 

 

4 hours ago, tonynt said:

Did you just get longer screws? Thanks for posting your build!

 

 

Yes, I went to my local BIG BOX home improvement store and bought longer screws. The screws were also a bit larger in diameter, so I slowly screwed them in, tapping larger holes in the process. Much better fit, IMHO, and of course quiet.

 

Sorry, no way to show pics; the angle is too tight and all the cables are in the way. Again, sorry.

 

Cheers, good luck.

 

 


UPDATE - ReiserFS to XFS conversion started last week. 14 drives total, 22.9TB of array storage used. (Not too large.)

 

For the last few weeks, when SSHed in and looking at top, I have been seeing shfs maxed out at 100% CPU usage; the only way to recover is a hard shutdown. Many forum posts suggest converting ReiserFS drives to XFS as a possible fix. I think doing that, along with changing my auto-updates for plugins and apps to manual, may alleviate the issue. Thanks for the archived fixes; I hope they work for me, as I have a very dynamic media server that is always busy doing something.
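
If anyone wants to look for the same symptom, a quick check from an SSH session (a minimal sketch; assumes you can still get a shell in):

top -b -n 1 | head -20    # one batch-mode snapshot, processes sorted by CPU usage
ps aux | grep [s]hfs      # confirm it's the shfs process pegging a core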

 

So far rsync has been working great and my speed is phenomenal: 6 drives completed as of now, 8 more to go. Prior to starting, I shut down all client services that hit the server and informed the "Boss" that she needs to hold off uploading her DSLR RAW pics to her picture share for the time being. All plugins, Dockers, and VMs have been shut down, which is making me a bit stir-crazy, but I did not want to risk anything writing to a drive while the copy is taking place.

 

For those thinking about doing it, I am using the "mirror" method outlined in the wiki here:

 

https://wiki.lime-technology.com/File_System_Conversion#Mirroring_procedure_to_convert_drives

 

Yes, it is a bit convoluted and not so clear (some steps need more detailed explanation; they are generally left to be inferred). Just make a spreadsheet like I did and match/follow the device names, e.g. sdi, sdl, sdo (yours may vary), instead of disk numbers, as the disk numbers get reused and/or change when swapping drives. Follow a spreadsheet and you'll be fine. Also, make sure you have "screen" installed and know how to use it; the copy step itself is sketched after the cheat sheet below.

~Cheat sheet below~ (run from a bash prompt)

cd /boot                          # start from the flash drive
screen -list                      # list running screen sessions
screen -r (pid#)-0.(servername)   # reattach to a session by PID and name
ctrl+a d                          # keystroke: detach, leaving the session running
export TERM=linux                 # may be needed if you cannot reattach to a screen session
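
For context, the heart of the mirroring procedure is an rsync of each old ReiserFS disk onto a freshly XFS-formatted one. A minimal sketch of the copy step, with hypothetical disk numbers (the wiki page above has the authoritative flags and order of operations):

rsync -avPX /mnt/disk1/ /mnt/disk11/    # archive mode, verbose, progress + partial transfers, preserve extended attributes

Run it inside screen so a dropped SSH session doesn't kill a multi-hour copy.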

 

Other than the 100% load from shfs, the server has been a great tool. When this conversion is complete, I'll update again.

 

Cheers!!

 

 

UPDATE - ReiserFS to XFS conversion started last week. 14 drives total, 22.9TB of array storage used. ...


Can you explain the need for the ReiserFS to XFS conversion? I don't really understand it at all. Also, are your drives all done by now?
1 hour ago, jrd680 said:

 


Can you explain the need for the ReiserFS to XFS conversion? I don't really understand it at all. Also, are your drives all done by now?

Some people experience horrible write performance, some to the point of the server completely locking up when writing to ReiserFS volumes. It seems to affect very mature, well-used volumes that have seen lots of writes and deletions over the years, and it's worse with very full volumes. XFS doesn't seem to exhibit the same problems.

 

If you don't have issues, it's not urgent to make the conversion, but it would be wise to migrate whenever convenient, sooner rather than later. ReiserFS is no longer well maintained, and the risk goes up with each update that there will be a show-stopper issue forcing a conversion.

Some people experience horrible write performance ... ReiserFS is no longer well maintained, and the risk goes up with each update that there will be a show-stopper issue forcing a conversion.


Thanks for the explanation. How do I see whether a drive is XFS or ReiserFS?
