24 bay servers from Tam Solutions back in stock - $299 / data center pulls


tr0910


pinion>>  The add-on card is an IPMI card, and that is not for network connection but for headless administration, as in remote KVM (think RDP). You should be able to connect it to your network and administer the server from any PC on your network remotely.

 

 

Link to comment

I got one of the quad-core AMDs delivered earlier in the week, but I've been having a problem getting network connectivity (it won't get an IP address through DHCP, and with a static IP there's no connection to the rest of the network; I've swapped network components and unRAID installs). I posted over in this thread (http://lime-technology.com/forum/index.php?topic=27833.0). Is there anything I need to look for in the BIOS to get the NIC to function properly? Any thoughts or feedback would be appreciated; otherwise I'll probably end up looking for an Intel NIC and talking to TAM about reimbursement, since it appears the mobo I got has an issue.
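Not an answer to the BIOS question, but before swapping hardware it may be worth confirming the kernel even sees the onboard NICs and has a driver bound. A rough diagnostic sketch you could run from the unRAID console (the grep patterns, including forcedeth for the nForce NICs, are my assumptions for this board):

import subprocess

def run(cmd):
    # Print the command and its output so it can be pasted back into the thread.
    print(f"$ {cmd}")
    print(subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout)

run("lspci | grep -i ethernet")                  # are the NICs visible on the PCI bus at all?
run("ip link")                                   # does Linux expose eth0/eth1, and are they UP?
run("dmesg | grep -i -E 'eth0|eth1|forcedeth'")  # driver bind messages / link errors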

Link to comment

Thanks a lot, j.l

 

I've found a reference that these cards can do at most 105MB/sec... which is probably not enough if you're building a system with 100% 3TB-4TB drives. I currently have 20x 2TB drives which I plan to migrate to one of these boxes... perhaps I'll start by replacing one of the AOC-SAT2-MV8s with an IBM M1015 and put 4TB drives on it.

 

 

Link to comment
  I've found a reference that these cards can do at most 105MB/sec... which is probably not enough if you're building a system with 100% 3TB-4TB drives. I currently have 20x 2TB drives which I plan to migrate to one of these boxes... perhaps I'll start by replacing one of the AOC-SAT2-MV8s with an IBM M1015 and put 4TB drives on it.

 

Mine surprises me at how fast they run.  I have 3TB drives and get 100 MB/sec parity checks.  It keeps right up with my latest-generation Xeon and M1015 controllers running the latest v5 rc15, so don't worry about that.  The only issues are noise and power consumption.  Noise you can deal with by pulling the plug on the fans.  As for power consumption, I pulled 4GB of RAM out of the 8GB total, unplugged 2 of the 3 power supplies, and unplugged the 2 fans at the back of the case, cutting power use by about 100W.  Still, it uses about 200W at best.  To really cut power consumption, you need to replace the MB, CPU, and power supply.  Essentially all you keep is the case.

 

Note that the MV8 is a PCI-X card and the M1015 is PCIe; the M1015 won't fit in the same slot as the card you pull out.

 

Link to comment

@tr0910: Oh, that's awesome. In that case I'll keep the M1015s as spares or for another build. I was thinking of putting the M1015s in the two x8 PCIe slots and leaving one of the PCI-X SAT2-MV8 cards without a full (8-drive) load. I've also purchased a couple of 2419EEs.

 

I plan to switch out the PSU to lower the noise and power consumption... also, I haven't looked into using S3 suspend with WOL (wake-on-LAN) to lower the idle consumption further. Has anyone configured WOL on these boxes? Is it reliable?
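For what it's worth, if the BIOS and NIC cooperate with S3, waking the box only needs a standard WOL magic packet (6 bytes of 0xFF followed by the MAC repeated 16 times, sent to the broadcast address). A minimal sender sketch, with a made-up MAC and the usual 192.168.1.255 broadcast assumed:

import socket

def wake(mac, broadcast="192.168.1.255", port=9):
    # Magic packet: 6 bytes of 0xFF followed by the target MAC repeated 16 times.
    payload = bytes.fromhex("FF" * 6 + mac.replace(":", "").replace("-", "") * 16)
    with socket.socket(socket.AF_INET, socket.SOCK_DGRAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_BROADCAST, 1)
        s.sendto(payload, (broadcast, port))

wake("00:25:90:aa:bb:cc")  # hypothetical MAC of the server's onboard NIC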

 

If all this doesn't work, then the plan has always been to gut it out and just take the case :)

Link to comment
  • 2 weeks later...

@tr0910: Oh, that's awesome. In that case I'll keep the M1015s as spares or for another build. I was thinking of putting the M1015s in the two x8 PCIe slots and leaving one of the PCI-X SAT2-MV8 cards without a full (8-drive) load. I've also purchased a couple of 2419EEs.

 

I plan to switch out the PSU to lower the noise and power consumption... also, I haven't looked into using S3 suspend with WOL (wake-on-LAN) to lower the idle consumption further. Has anyone configured WOL on these boxes? Is it reliable?

 

If all this doesn't work, then the plan has always been to gut it out and just take the case :)

 

Running 2x 2419EEs, 8GB RAM, and unRAID 5rc13, I am averaging 109MB/sec parity checks. I have Supermicro passive heatsinks, and both CPUs hold 40-42 deg C during parity checks, with a system temp of 55 deg C.

 

Using a Kill-A-Watt, power consumption is 140W (parity check plus SAB, SB, streaming to 2 XBMC clients and a file read from 1 Win7 PC) and 100W idle (with a TX750 PSU, 3x 120mm Cougar and 2x Arctic F8 fans).

 

No luck with S3 suspend; I could not even find that option in the BIOS after I upgraded to the 3.5 BIOS firmware required for running the 2419EEs. Maybe I just missed it. It does not sleep when using S3 suspend from SF.

 

IPMI is great. I use it to turn the server on, then turn it off using powerdown when I don't need it up. If power consumption were below 75W I might decide to keep it on all the time.
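To put those idle numbers in perspective, here's the back-of-envelope yearly running cost (the $0.12/kWh rate is purely an assumption, plug in your own):

RATE = 0.12  # $/kWh, assumed electricity price

for watts in (200, 140, 100, 75):
    kwh_per_year = watts * 24 * 365 / 1000  # continuous draw for a year
    print(f"{watts:>3} W -> {kwh_per_year:5.0f} kWh/yr -> ${kwh_per_year * RATE:5.0f}/yr")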

 

 

Link to comment
  • 11 months later...

In case anyone references this thread: the high parity speeds reported by the posters above must have been run with a low number of drives (probably fewer than 6). Once you go past 8, the PCI-X interface becomes the bottleneck, and if you're looking to make use of the PCIe slots, keep in mind that only one of the three is actually x8, and PCIe 1.0 at that.
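Roughly where the ceilings sit, using theoretical bus numbers (real-world is noticeably lower, as the measurements later in this thread show):

# Theoretical shared bandwidth, divided across drives during a parity check.
PCIX_133 = 133e6 * 8 / 1e6  # 64-bit @ 133 MHz ~= 1064 MB/s for the whole PCI-X bus
PCIE1_X8 = 8 * 250          # PCIe 1.0, 250 MB/s per lane = 2000 MB/s for the x8 slot

for drives in (6, 8, 12, 16):
    print(f"{drives:>2} drives sharing PCI-X: ~{PCIX_133 / drives:3.0f} MB/s each (best case)")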

 

I'd recommend replacing it with a more current Supermicro motherboard (X9 or X10 series); personally I opted for the X10SL7-F, and so far it's a drop-in replacement. Even the front panel connectors (power, LED, NIC activity, etc.) are exactly the same (you'll need to order a different cable if you go with, say, an ASUS board). This motherboard has an onboard LSI 2308 chipset with 8 SATA connectors, which can be connected to a 24-port expander via a pair of reverse-breakout SFF-8087 cables ($15-20 each).

Link to comment

FYI: My first unRAID server had PCI-X slots (X7SBE MB). I had two AOC-SAT2-MV8s and 6 MB SATA ports on my 22-drive server. With 22 WD Green drives (EARS and EADS), I got close to 100MB/s (in the 90s) when a parity check started, but it dropped to ~50MB/s on the inner cylinders, all with 20 array drives and 1 parity drive running on a Celeron 140 CPU.

Link to comment

@BobPhoenix:

 

Thanks. I should clarify that this issue is specifically on the AMD-based server (H8DME-2 mobo), and it seems this version is the more popular one from Tam's, as they have more of those units for sale than the Intel version.

 

I bought 2 of the AMD (H8DME-2) version and they both perform awfully with > 6 drives.

Link to comment

@BobPhoenix:

 

Thanks. I should clarify that this issue is specifically on the AMD-based server (H8DME-2 mobo), and it seems this version is the more popular one from Tam's, as they have more of those units for sale than the Intel version.

 

I bought 2 of the AMD (H8DME-2) version and they both perform awfully with > 6 drives.

Based on the MB manual for the H8DME-2, it looks like to maximize parity check speeds on your H8DME-2 MB you need to use only 2 PCI-X slots at a time; the other two need to be completely empty of any PCI/PCI-X cards.  Also, you need to use alternating slots, so slots 1&3 or 2&4.  If you put your cards in slots 1&2 you will get much lower parity check speeds.  That is what I had to do with my X7SBE, which has a similar setup: two PCI-X domains, one 133MHz and one 100MHz, which the NEC uPD720400 chipset on the H8DME-2 also appears to provide based on the manual.  Now, will you get close to 100MB/s?  Maybe not, but I would expect the best results following what I've outlined.
Link to comment

@BobPhoenix:

 

Thanks. I should clarify that this issue is specifically on the AMD-based server (H8DME-2 mobo), and it seems this version is the more popular one from Tam's, as they have more of those units for sale than the Intel version.

 

I bought 2 of the AMD (H8DME-2) version and they both perform awfully with > 6 drives.

Based on the MB manual for the H8DME-2, it looks like to maximize parity check speeds on your H8DME-2 MB you need to use only 2 PCI-X slots at a time; the other two need to be completely empty of any PCI/PCI-X cards.  Also, you need to use alternating slots, so slots 1&3 or 2&4.  If you put your cards in slots 1&2 you will get much lower parity check speeds.  That is what I had to do with my X7SBE, which has a similar setup: two PCI-X domains, one 133MHz and one 100MHz, which the NEC uPD720400 chipset on the H8DME-2 also appears to provide based on the manual.  Now, will you get close to 100MB/s?  Maybe not, but I would expect the best results following what I've outlined.

 

I couldn't even reach the drives' max speed when I put more than 4 drives on a single SAT2-MV8.

 

http://lime-technology.com/forum/index.php?topic=32819.15

 

Parity check speed on a single SAT2-MV8...

4 drives: 90MB/sec

6 drives: 58MB/sec

8 drives: 45MB/sec

 

Parity check speed using all 6 of the on board sata ports:

6 drives: 100MB/sec

 

PS: If you calculate the total throughput of the SAT2-MV8 with different drive counts, it stays almost exactly at 360MB/sec.
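Quick restatement of that calculation from the numbers above:

results = {4: 90, 6: 58, 8: 45}  # drives on one SAT2-MV8 : MB/s per drive
for drives, speed in results.items():
    print(f"{drives} drives x {speed} MB/s = {drives * speed} MB/s aggregate")
# -> 360, 348, 360: the card/bus tops out around 360 MB/s total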

 

I don't have any other PCI-X card or motherboard to swap in to see which is causing the bottleneck.

Link to comment

I couldn't even reach the drives' max speed when I put more than 4 drives on a single SAT2-MV8.

 

http://lime-technology.com/forum/index.php?topic=32819.15

 

Parity check speed on a single SAT2-MV8...

4 drives: 90MB/sec

6 drives: 58MB/sec

8 drives: 45MB/sec

 

Parity check speed using all 6 of the on board sata ports:

6 drives: 100MB/sec

 

PS: If you calculate the total throughput of the SAT2-MV8 with different drive counts, it stays almost exactly at 360MB/sec.

 

I don't have any other PCI-X card or motherboard to swap in to see which is causing the bottleneck.

That really is disgusting.  AMD's PCI-X implementation must not be very good, or at least it's subpar compared to Intel's PXH chip on my X7SBE.  Of course, from the block diagram it looks like more is hooked up to the NEC chip on your board than to my PXH chip, so that could be the problem as well.
Link to comment

I have Intel and AMD 24-bay Tam boxes used lightly in a backup capacity.  One has 9 drives, one 10.  I'm getting 100MB/s parity checks on both. If Dynamix is running on one, I get way slower parity checks on that one.  Not sure why you are seeing slow performance.

Link to comment

That really is disgusting.  AMD's PCI-X implementation must not be very good, or at least it's subpar compared to Intel's PXH chip on my X7SBE.  Of course, from the block diagram it looks like more is hooked up to the NEC chip on your board than to my PXH chip, so that could be the problem as well.

 

Yeah, I kept thinking there's something wrong on my end, as I haven't seen this reported much on the two big threads (here and AVSForum) AFAIK. At least the case is super nice, and swapping in another (current) Supermicro board is very simple.

 

Btw, I see you're running Unraid guests on those machines... do you pass through the whole controller or just pass through the disks (RDM)?

 

Has anyone bought one of these servers lately (in the last 6 months), and what is today's price? I'm mainly looking at the Intel processor version.

 

Tam Solutions has one right now for $599:

 

http://www.ebay.com/itm/SuperMicro-24-Bay-Chasis-SAS8846TQ-1-86GHz-4GB-80GB-/201101593396?pt=COMP_EN_Servers&hash=item2ed296cb34

 

Or you can email Andy and ask him if he has other configurations available if you deal direct...

 

I have Intel and AMD 24-bay Tam boxes used lightly in a backup capacity.  One has 9 drives, one 10.  I'm getting 100MB/s parity checks on both. If Dynamix is running on one, I get way slower parity checks on that one.  Not sure why you are seeing slow performance.

 

Are all the drives on the AMD box connected through SAT2-MV8 controllers?

 

Because if you connect 6 of them to onboard SATA and 3 or 4 to the SAT2-MV8, then my numbers will still check out. You'll see the speed reduction on your next drive addition.

Link to comment

That really is disgusting.  AMD's PCI-X implementation must not be very good, or at least it's subpar compared to Intel's PXH chip on my X7SBE.  Of course, from the block diagram it looks like more is hooked up to the NEC chip on your board than to my PXH chip, so that could be the problem as well.

 

Yeah, I kept thinking there's something wrong on my end, as I haven't seen this reported much on the two big threads (here and AVSForum) AFAIK. At least the case is super nice, and swapping in another (current) Supermicro board is very simple.

 

Btw, I see you're running Unraid guests on those machines... do you pass through the whole controller or just pass through the disks (RDM)?

I'm making the assumption you are asking about the X7SBE server that is unRAID/Xen?  The SAT2-MV8 controllers are connected directly to unRAID, just like they used to be in my first unRAID server with the Celeron 140, since unRAID runs in dom0 as the hypervisor.  I have an M1015 passed through to my WHS2011 VM on that box.  My other boxes are currently ESXi servers, and on those I have an unRAID VM on each, with an M1015 passed through to the unRAID VM and a SAS expander to give me full 24-drive capability from a single M1015.  That leaves all the other slots for the Windows VM running on each ESXi server; I need those slots for tuners for my SageTV software running on the Windows VMs.

 

 

That answer your question?

Link to comment

Mine are all connected to the 3 MV8 cards.  I didn't even realize there were onboard SATA ports. That might be your issue.

 

I have tried the server in the original configuration (all drive bays connected to the 3x SAT2-MV8) before trying the onboard ports.

 

I'm making the assumption you are asking about the X7SBE server that is unRAID/Xen?  The SAT2-MV8 controllers are connected directly to unRAID, just like they used to be in my first unRAID server with the Celeron 140, since unRAID runs in dom0 as the hypervisor.  I have an M1015 passed through to my WHS2011 VM on that box.  My other boxes are currently ESXi servers, and on those I have an unRAID VM on each, with an M1015 passed through to the unRAID VM and a SAS expander to give me full 24-drive capability from a single M1015.  That leaves all the other slots for the Windows VM running on each ESXi server; I need those slots for tuners for my SageTV software running on the Windows VMs.

 

 

That answer your question?

 

Yes. Nice setup!

 

I'm planning to migrate both of my SC846s (Tam's servers) to the X10SL7-F w/ an E3-1230v3, if my current test works out alright.

 

One will run unRAID as my main media library; on the other I'll run FlexRAID in a VM (maybe on Windows) to make use of my older 1.5TB and 2TB drives... since you can have more than 1 parity with FlexRAID (and other kinky stuff...).

e.g., one parity is a 4TB drive, and the second parity is a pair of spanned 2x 2TB drives. If one of the 2TB drives goes down, you'll only need to rebuild 'half a parity'. (This can be applied to the data drives too, not just parity.)
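A toy restatement of that arithmetic (drive sizes are the ones from the example above):

spanned_parity = [2.0, 2.0]   # TB: two 2TB drives spanned into one 4TB parity set
lost = spanned_parity[0]      # one member of the span fails
print(f"Rebuild {lost} TB of {sum(spanned_parity)} TB "
      f"({lost / sum(spanned_parity):.0%} of that parity set)")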

 

I'm wondering about experiences with running unRAID as a guest VM with individual disks passed through to it (via RDM), since I don't want it to handle all 24 bays. I know FlexRAID works well with RDM; they actually recommend it from a performance/flexibility point of view.

Link to comment

unRAID will work fine with RDM'd drives, but they will not spin down, and you get no SMART reports, no temperatures, and no unique IDs, if that is a problem for you.  I didn't like it that way myself, so I always pass through a controller for unRAID as a guest.  Even with your TAMS case you should be able to split which drives go to which VM and still use passthrough; they would just be in multiples of 4.
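One quick way to see from inside the guest whether the disks kept their identity is to look at /dev/disk/by-id: passed-through disks normally show up with their real model and serial, while virtualized/RDM'd disks tend to appear generic or not at all. Just an illustrative check, not an unRAID feature:

import os

BY_ID = "/dev/disk/by-id"
for entry in sorted(os.listdir(BY_ID)):
    if entry.startswith(("ata-", "scsi-")):
        # Map each stable ID back to the kernel device it points at (e.g. /dev/sdb)
        print(f"{entry} -> {os.path.realpath(os.path.join(BY_ID, entry))}")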

Link to comment

unRAID will work fine with RDM'd drives, but they will not spin down, and you get no SMART reports, no temperatures, and no unique IDs, if that is a problem for you.  I didn't like it that way myself, so I always pass through a controller for unRAID as a guest.  Even with your TAMS case you should be able to split which drives go to which VM and still use passthrough; they would just be in multiples of 4.

 

Hmm, have you compared physical RDM vs. virtual RDM? According to FlexRAID's docs, physical RDM can pass SMART info and can spin down (but virtual RDMs can't):

 

http://www.flexraid.com/2014/02/07/storage-deployment-vmware-esxi-iommuvt-d-vs-physical-rdm-vs-virtual-rdm-vs-vmdk/

 

  • The most transparent option is IOMMU/VT-d (VMDirectPath), which lets you pass through an entire storage controller to a virtual machine. The storage controller can be a hardware RAID card or one that just passes through the disks (a controller without RAID). With this, there is zero abstraction of the disk devices on that passed-through controller card. You can access the card and the storage it hosts just as you would on a physical machine.
     
  • Next is physical RDM. In physical RDM, the disks are minimally abstracted in that ESXi passes all SCSI commands to the device except for one command, which is used to distinguish the device from others. Otherwise, all physical characteristics of the underlying hardware are exposed. If you think about it, you will note that the VM will have a virtual controller entirely unrelated to the physical controller hosting the disk, and that a disk can be on a different port on the virtual disk controller than it is on the physical controller. A translation needs to be made at some level for all this to work. Outside of that translation, though, everything else is forwarded verbatim to the disk.
     
  • The next level is virtual RDM. In this mode, ESXi only sends READ and WRITE commands to the mapped device. All other commands are virtualized as done for VMDK file based disks. A virtual RDM behaves the same as a VMDK file based disk except that it is backed by raw block storage.
     
  • Finally, we have the VMDK file based disk, which is a fully virtualized storage device. Being just a file, such a device is very portable. You can copy, duplicate, or move it anywhere you want without much restriction. For instance, you can move it from an iSCSI datastore, to an NFS datastore, to a local datastore with ease. This is something you cannot do with something directly backed by raw physical storage. The file system on which the VMDK file resides provides all the needed abstraction from the actual blocks that otherwise back the file.

 

 

(On why you shouldn't use Virtual RDMs)

 

What about Virtual RDM?

 

Virtual RDMs have the same advantages and restrictions as do VMDK file based disks except for:

- that a virtual RDM must be a whole disk or LUN (not as flexible as VMDK files)

- and that virtual RDMs have better IOPS than VMDK files as read/write I/O operations inside the virtual machine (VM) are passed through to the physical disk

 

Ultimately though, the restrictions on low-level disk access are an important factor against going virtual RDM. You want to be able to monitor the health of your disks. You want to be able to allow your idle disks to go into a sleep state and conserve energy. And, if using SSD storage, you want TRIM support to achieve the best continuous performance. Etc.

 

 

Moreover, I would have to use (physical) RDM to pass through the drives to more than 3 separate VMs (OmniOS, Windows w/ FlexRAID, and a couple of Linuxes) at finer granularity, since I will just use the single onboard LSI 2308 connected to the 24-port HP SAS expander.

Link to comment
