Norco 4224 Thread


Guest Jomp


I would also cover all of the extra screw holes in the fan wall area. I haven't covered mine because I still have the shipping film on my Norco and it is blocking the holes for me.

I might put a dab of window caulk in my screw holes (only the screw holes) next time I'm working around the house.

 

I find this really hard to believe, but I'm seeing a good difference in temperature just from covering these case holes with tape; I haven't done anything with the holes in the fan wall yet. In the first image I'm rebuilding the parity drive, and in the second I'm rebuilding drive 7. In both instances the rebuild is due to swapping a 7200 RPM drive for a 5400 RPM drive.

 

[Attached screenshots: temperature readings during the parity rebuild and the drive 7 rebuild]

Hey johnm, what about using HVAC pressure-sensitive tape? Applied from both sides, that should work and hold up well, no? I will be trying this soon. I got the new SAS cables I want to try on this chassis backplane, a new CPU cooler for my Xeon, and additional RAM for virtualization. I'll report on this once I get to it.


Sounds good to me.

 

Is that the stuff that looks like aluminum tape?

I am sure plain-Jane duct tape would work. It is just a pain to remove later; the glue almost never comes off without some sort of chemical.

 

I just noticed another difference in the latest revision: the backplanes are different!

The most noticeable change, without pulling one from the system, is that the drive LEDs are brighter, like the v1's... and they are flipped! The HDD power is now the bottom light and the activity is on top.

 

I can barely see the LEDs on my V2, while the V1 and the new version have a bright power LED and so-so HDD activity.

 

I am working on a new ESXi box today. I am still deliberating on whether I want to virtualize unRAID on it.

 

I am also thinking about taking photos at every step and documenting it so others can build their own based on my trial and error.

 


 

 

To finish up my 'silent' gaming PC, I sealed all of the excess holes and unwanted air vents with sound-deadening foam (AcoustiPack) and/or the self-adhesive foam strips used in air-conditioning installs (available at any hardware store in small rolls or by the foot). It's a concept I carried over to my server to help quiet it down and redirect airflow.

You might want to give those a try.

 


So,

 

Umm...

 

My newest Norco's parts arrived.

This one is going to be ESXi.

 

[Photo: the new build's parts]

 

Yeah... ummm

 

My 2 unRAID boxes and my ESXi box...

I was going to post this in the Pimp My Rig section, but I thought people reading this thread might want to see the 3 major chassis styles all in one photo.

 

[Photo: the three major chassis styles side by side]

 

 

  • 2 weeks later...

I just noticed the RPC-4224 dropped in price at Newegg.

 

It is back to $399 from $440-ish... the joke is that shipping went up from free, to $19.99, to $35.41.

Way to make us feel better about them jacking up the price.

FWIW - I got a customer loyalty email from Newegg with 20% off all Norco/Supermicro/iStarUSA chassis. So all told, even after the ridiculous shipping charges, I got the 4224 with the RL-26 rails for $373.96.

Also, this is the first Norco I've purchased; I've just been checking every day for the price to come down or for the bonus item (no need for a DVD burner) to be worthwhile.


I'm thinking about getting a Norco 4224, as my CM-590 is almost full. Since my storage needs have grown more than I expected, I don't see it as smart to invest in 2 additional 5-in-3s, as they would only gain me 3 additional slots.

 

I'm also interested in learning about and setting up VMs. I'm going to read up on the ESXi and VirtualBox threads to figure out which way to go.

I may just start off by moving unRAID to the 4224 natively, and reuse the CM-590 for playing with VMs.

 

With the understanding that I don't know which way I will go, I'll probably want to get components for the 4224 that meet either need, but I also don't want to break the bank unnecessarily.

 

What would I want to look for in terms of features for the MB, PSU, CPU, memory, etc.?

Any recommended HW for them?

 

Whether I use VMs on the 4224 eventually or just on the CM-590, I would want to set up multiple VMs for "work skills" improvement.

I'm a DBA (Oracle & Teradata), so I would want to set up multiple *nix VMs to play with Oracle RAC/DR/BAR, plus another VM or two for Teradata, and probably a couple of Windows VMs as well. I would also use the VMs to improve my *nix system admin and networking knowledge.

 

Pretty ambitious, I know, but in this economy it doesn't hurt to learn more outside of your comfort zone.

 

 



Take a look at this thread for a very good write-up on ESXi. ESXi is pretty hardware-specific, so make sure you do your research, or build exactly what the thread above describes.

 

ESXi would probably be the most efficient route to go, especially if you are running multiple VMs. Either way you go, get the most RAM you can afford... the more the better when it comes to running VMs.


For a modern ESXi build, I would go with a C202 or C204 chipset Supermicro or Tyan board.

 

Personally I think the Tyan S5510GM3NR board might be a better buy because it has 3 compatible Intel NICs. There is a rumor it won't work out of the box with most desktop power supplies, though; you might need a special cable to get around that.

 

The Supermicro X9SCM-F-O only has 1 compatible NIC (2 with an OEM.tgz hack).

The Supermicro X9SCL+-F just came out and seems to have 2 NICs that work in ESXi. You will trade the 2 SATA3 ports for SATA2 and lose one PCIe slot, though; that might be a hard trade-off.

 

I chose the X9SCM-F-O for my ESXi box since I had 2 X9SCM-F-Os already, and all of my servers are Supermicro or Intel. I wanted to stay consistent.

 

If you want to try to go with older hardware, there is the ESXi whitebox list. It is a bit outdated.

http://www.vm-help.com//esx40i/esx40_whitebox_HCL.php

 


OK... super-noob question, I guess: what is the benefit/need of having multiple NICs?

For ESXi VMs.

 

Getting a bit off topic here, but while the built-in virtual switch in ESXi is very efficient, sometimes you want to assign a NIC to a VM that needs a lot of bandwidth, like a file server or backup server.

Also, if you run a firewall or pfSense box, you would want two NICs.

If you have a SAN or iSCSI target for your ESXi drives, you would then need another NIC for your VM network traffic.
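As a sketch of how that dedicated-NIC setup is wired up on the host (this assumes ESXi 5.x or later; the names vSwitch1, vmnic1, and "FileServer Net" are hypothetical examples, not from this thread):

```shell
# Hypothetical example: vSwitch1, vmnic1, and "FileServer Net" are placeholder names.
# Create a second standard vSwitch, attach a spare physical NIC as its uplink,
# and add a port group that the bandwidth-hungry VM can be connected to.
esxcli network vswitch standard add --vswitch-name=vSwitch1
esxcli network vswitch standard uplink add --vswitch-name=vSwitch1 --uplink-name=vmnic1
esxcli network vswitch standard portgroup add --vswitch-name=vSwitch1 --portgroup-name="FileServer Net"
```

You would then point the VM's virtual NIC at that port group in its settings, so its traffic gets the dedicated uplink to itself.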



 

The downside to having multiple NICs manually split up is that all VM-to-VM traffic on the same vSwitch stays on the vSwitch. If you have two NICs, each on its own vSwitch (but still on the same network), or vmDirectPath one NIC through to one of the VMs, then any traffic between the other VMs and the one with the dedicated NIC/vSwitch will need to leave the vSwitch and go over the wired network/switch.

 

Now if you have two NICs attached to the same vSwitch, then ESXi will balance across them and inter-VM traffic will stay on the server. Traffic for one VM will go out one NIC; ESXi does not bond the NICs unless you do so on the switch side.

 

Staying on the vSwitch and not exiting the ESXi server is important because of the VMXNET3 NIC (10Gb). If all your VMs have the VMXNET3 NIC, then all VM-to-VM traffic runs at 10GbE speeds. Stay on the vSwitch at 10Gb, or dedicate a NIC/vSwitch and use your "super fast" 1Gb switch...? (10Gb/s = 1280MB/s, 1Gb/s = 128MB/s)
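The arithmetic behind those MB/s figures is just the link rate divided by 8 bits per byte, using 1024-based megabytes (a rough ceiling; real-world throughput will be lower due to protocol overhead):

```shell
# Convert a link speed in Gb/s to MB/s: multiply by 1024 (Gb -> Mb), divide by 8 (bits -> bytes).
to_MBps() { echo $(( $1 * 1024 / 8 )); }

to_MBps 10   # VMXNET3 vNIC staying on the vSwitch -> 1280
to_MBps 1    # physical gigabit NIC -> 128
```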

 

No-brainer for me. 90% of everything is on ESXi; the only things that are not are workstations and media players, and a single 1Gb/s NIC is more than enough for those.


I have read only about half this thread, so I may not have seen pertinent information about the 4224. With that said, on the 4224, and quite possibly the newer revisions of the 4016, 4020, and 4220, there is now a grate-like shield on the drive trays that must be open on populated trays and should be closed on unpopulated ones. My older 4220 does not have this; they are all open all the time.

 

When installing the drives I noticed this, so I checked, and 3 trays that I had populated were closed.



Yes, it is mentioned in this thread; you must open the shields on the cage trays that have hard drives in them.

 

Update: the HVAC tape is holding up really well. Will post pictures.

  • 4 weeks later...

Should the 4224 come with Molex splitters for the backplane? (I ordered from Raj's recommended prototype list for a 24-drive beast, and no Molex splitters were identified.)

 

Mine arrived with nothing. I need 6 Molex connectors for the drive backplane, one for the mid-wall fans, and one for the back of the case. My power supply only has 3 Molex connectors.

 

Is there any better place to get these splitters than the Norco ones from Newegg?

 

http://www.newegg.com/Product/Product.aspx?Item=N82E16816133040

 

 
