
Posts posted by brian89gp

  1. I have the following for sale.  Please make an offer for whatever you want.  Shipping from the USA.


    Quantity 6: Hynix 4GB PC3-10600 RAM - HMT151R7AFP4C


    Quantity 19: HP branded 72GB 15k SAS drives 2.5" (15 are in HP drive trays, 4 are bare)


    Quantity 7: HP 300GB 10k SAS 2.5" (bare)


    Quantity 10: brand new HP 300GB 10k SAS 2.5" (bare, I have the HP trays but no screws)


    Quantity 1: HP dual port PCIe NIC, NC360T (Intel 82571EB chipset)


    Quantity 2: HP quad port PCIe NIC, NC364T (Intel 82571EB chipset)

  2. Best way to use these?  (Assuming an M1015 or a SAS2LP card feeding it)


    Assuming controller is installed in a PCIe x8 v2 slot.


    (And I'm talking about spinners, not SSDs)


    1 in 5 out (4 to 20 drives)?

    1 in 4 out (4 to 16 drives)?  It would be possible to use two of these, so you could support 32 drives off a single card with two SAS ports.

    2 in 4 out (8 to 16 drives)?

    2 in 3 out (8 to 12 drives)?


    I expect 2 in 4 out is the most common, but I'm just curious what the best layout is without clipping parity check performance significantly.


    The PCIe x8 v2 SAS card would have a maximum of 32Gb/s, or 4GB/s.

    4-lane SAS2 is 24Gb/s, or 3GB/s.

        20 drives on 4 lanes would still yield 1.2Gb/s per drive, or about 150MB/s.


    There is 48Gb/s of SAS2 bandwidth sitting on a 32Gb/s PCIe bus, so the card is already oversubscribed bandwidth-wise.


    Unless you follow the overclocker mentality of demanding the utmost performance, I don't think much would be lost with the 1-in-5-out option, with the other 4 lanes on the PCIe card directly driving the parity and cache drives. 
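    The arithmetic above can be sketched in a few lines. This is a rough model, not a benchmark: it assumes raw link rates of 6 Gb/s per SAS2 lane and 32 Gb/s for a PCIe 2.0 x8 slot, decimal units, and ignores protocol overhead.

```python
# Back-of-envelope per-drive ceilings for the expander layouts above.
# Raw link rates only; real throughput will be lower after protocol overhead.

SAS2_LANE_GBPS = 6      # Gb/s per SAS2 lane (assumed raw rate)
PCIE2_X8_GBPS = 32      # Gb/s for a PCIe 2.0 x8 slot (assumed raw rate)

def per_drive_mbps(uplink_lanes: int, drives: int) -> float:
    """Theoretical MB/s per drive: uplink capped by the PCIe slot, split evenly."""
    uplink_gbps = min(uplink_lanes * SAS2_LANE_GBPS, PCIE2_X8_GBPS)
    return uplink_gbps * 1000 / 8 / drives   # decimal MB/s

layouts = {
    "1 in 5 out": (4, 20),
    "1 in 4 out": (4, 16),
    "2 in 4 out": (8, 16),
    "2 in 3 out": (8, 12),
}

for name, (lanes, drives) in layouts.items():
    print(f"{name}: ~{per_drive_mbps(lanes, drives):.0f} MB/s per drive")
```

    Even the worst case (1 in 5 out) leaves roughly 150 MB/s per spinner, which is about what a 7200 RPM drive can stream anyway.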

  3. Mine arrived today, well packed in the original box.  They came with both high and low profile PCI slot plates and (6) SAS cables.

    I wasn't expecting it to come like that.  I was/am expecting this to just be the card (just ordered one), so I hope mine comes the same way.  This sounds like what I got when I ordered from Amazon/Newegg for my first 3 a year or two ago, and I didn't think it was a bulk pack like the ebay listing says.  Guess I was wrong.


    They shipped my 4 boxed cards in a box that fit 5 of them and had RES2SV240 stickers on the outside; the 5-pack box is probably the "bulk" part of it.  It's original Intel packaging and has the serial number of the card on the sticker stuck on the box.


    I remember paying $5 per SAS cable a while back, so that's $30 right there.

  4. I know what you mean about working at enterprise class scales then having to go home and slum it up with a single socket and these silly storage systems that can't handle 500k IOPS with barely a yawn...  I take pictures I can drool over in weak moments to save my home power usage and wallet from certain demise.


    Without gaming, you could probably fairly easily do all of the above plus a typically sized GNS3 lab (at least in ESX).  Dual CPU might come into play if you run out of RAM, but I wouldn't expect CPU usage to be huge (aside from Plex, SABnzbd, and GNS3).  But getting a second single-socket system for that stuff would probably be cheaper.


    A Supermicro X9SRA, 64GB with 4x 16GB DIMMs (half populated for future growth), and an E5-1650 v2 would be a fairly powerful and decently affordable server.  I went my route with the dual Xeon board mainly for the seven x8 PCIe 2.0 slots, not necessarily CPU core count.


    I was doing pricing for GNS3 and VIRL labs (Microsoft would be similar in requirements to VIRL) and found that for massive labs, a quad socket Opteron 6262HE would be worth looking into.  64 cores fully populated, and lots of DIMM slots let you have massive amounts of RAM cheaply.  You don't need speed for labs, just quantity.  I've seen an embarrassingly large number of VM guests jammed onto an old single-socket 6-core Xeon 5600, in production workloads, just humming along quite happily.

  5. If you are going to game off of it, then a dual socket is not out of the question.  I do wonder why you would want to game on it though; you'd pay more for worse performance.


    As far as the MCSE prep, you'll probably run out of RAM long before CPU.  16:1 is a decent consolidation ratio for lightly used servers, at least in the VMware world.  How many VM guests are you planning on running for your lab?


    I run my server with dual 4-core processors that are quite old in comparison.  Several Plex streams, SABnzbd running full steam, and Bluecherry with a couple of cameras usually total less than 3 cores used.


    The only time I have ever stressed it CPU wise is when I started up a 50 router config in GNS3...


  6. As long as unRAID doesn't add too much overhead, I think I am going to go that route with an SSD-only array and an SSD cache drive to handle any writes.  The 6.0 Beta looks to be adding the SMB3 protocol, which helps a great deal on small files by reducing the chatter that SMB2 has. 


    The batch process is currently running off of a 7200 RPM unRAID array and it works, it's just slow.  If it tests out, I'll get a Norco 2132 or 4164 chassis and run it as a slave chassis off the main one; I already have a large handful of 2.5" drives I could move to it, plus a bunch of 10k SAS 2.5" drives I wouldn't mind using.


    If unRAID doesn't test out, I'll get a hardware RAID card to run the SSDs on and throw up a Windows server VM to host the share.  I have a couple of HP P410s lying around; though not ideal for SSDs, I already have them.


    The StarTech card does look very cool.  Maybe for a future RAID0 scratch disk for Photoshop or something.  The workstation already has 32GB (looking at doubling that), Photoshop uses most all of it, and it still writes a huge amount of data to the scratch disk.

  7. Throughput isn't really a requirement; the batch job opens a bunch of <1MB images, processes them, and spits out a couple of 100k images for each source file, all to/from the same share.  Latency is the real killer here, and the latency of rotational disks really slows things down.  I'm also looking to improve indexing and thumbnailing speed when using Adobe Bridge against a several-million-image store; again, not really throughput but latency.  I don't think unRAID supports SMB3 yet, which would help greatly with transaction chattiness and throughput/latency vs SMB2, but SMB2 over 1Gb should be sufficient.


    The workstation is 1Gb connected, so that is the throughput bottleneck anyway.  Throughput would only come into play when copying mass data in/out of the shares...which is done from the workstation in a sequential manner.


    Local SSD would of course be best, but I would be stuck using consumer SSDs on a hardware RAID card and I don't really want to do that.  Or a NAS with a traditional RAID 10 or something over CIFS, but that is costly.  Or I could even go so far as a higher-end NAS with an iSCSI or FC block output, but that is complicated.  In reality I only need the performance/latency/throughput of a single SSD, but the redundancy and space of some sort of RAID array.
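    A toy model shows why latency, not throughput, dominates this kind of workload. The per-operation latencies and the three-round-trips-per-file figure below are illustrative assumptions, not measurements:

```python
# Toy model: wall time for a metadata-heavy batch over millions of small
# files, where each file costs a few serialized I/O round trips.
# Latencies (~10 ms spinner seek, ~0.1 ms SSD) and ops_per_file are
# assumed ballpark figures for illustration only.

def batch_hours(n_files: int, latency_ms: float, ops_per_file: int = 3) -> float:
    """Hours spent purely waiting on serialized I/O round trips."""
    return n_files * ops_per_file * latency_ms / 1000 / 3600

N = 2_000_000   # order of magnitude of a "several million image store"
print(f"7200 RPM (~10 ms/op): {batch_hours(N, 10):.1f} h of pure seek time")
print(f"SSD     (~0.1 ms/op): {batch_hours(N, 0.1):.1f} h")
```

    At these assumed latencies, the spinner spends hours just seeking while the SSD spends minutes, regardless of how much sequential throughput either one has.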

  8. I need to store a large amount of images and Photoshop files on SSDs; large batch jobs are run against these images, which causes quite a bit of disk thrashing and generally runs really slowly on a spinning drive.  Using an SSD makes a drastic difference, but there is no real requirement for performance beyond what one SSD will typically do, so some form of RAID array wouldn't provide much benefit.


    Will unRAID work in an all-SSD arrangement?  An SSD cache to suck up writes and a set of 3-8 SSDs for the storage?  I really like unRAID over other options as long as it can perform well.  "Well" = equivalent performance to a single SSD on a NAS that is fast enough not to introduce a bottleneck itself.

  9. The true intention of that backplane is space efficiency and cost savings in an enterprise environment.  Servers tend to lack IO space, so using fewer PCIe cards for the same number of drives is important to leave room for other IO cards.  Or there are rather expensive RAID cards where having all drives on one card is necessary, thus the need for a SAS expander.  Or massive numbers of disks attached to each server, where aggregation and oversubscription through SAS expanders is almost a necessity.


    You are a home user, so none of that applies.  Three M1015s, or one with that backplane: either option will work just fine.  It really all comes down to what suits your needs best.  Using a SAS expander will free up more PCIe slots for other use, while three M1015s will theoretically have more throughput and probably be slightly more cost effective, but will use up most if not all of the >=x8 PCIe slots on a single-socket motherboard.

  10. So if I understand correctly, the SAS backplane takes the place of, say, an Intel RES2SV240 expander?  So I would run one cable from the M1015 to the backplane to control all 24 drives?


    More accurately, your 24-bay SAS backplane has a SAS expander built into it.


    And in your opinion, would 2-3 M1015s and my current SATA backplane be faster for 24 drives (eventually) and/or more ideal for unRAID and VMs, or would the SAS expander backplane with one M1015 be a better solution?  I am going to sell one chassis or the other; just thinking it all through before I get rid of one.


    How many PCIe lanes do you have to each M1015?  24Gb/s is quite a bit of throughput...  3 M1015s sitting on enough PCIe lanes to service each at full speed would *theoretically* have a higher performance ceiling than the 24Gb/s of the 4-channel SAS port.  Are you going to be doing something with more throughput than this, and are you using an OS capable of driving it?  If you were running several SSDs or a bunch of 15k drives, then maybe.  I was thinking the PCIe 2.0 x8 on the M1015 capped out at around 32Gb/s.
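    For comparison's sake, the raw ceilings of the two options can be computed directly. These are link rates only, under the assumption of 6 Gb/s per SAS2 lane and a 32 Gb/s PCIe 2.0 x8 slot; real-world numbers will be lower.

```python
# Raw link-rate ceilings: three M1015s on their own PCIe 2.0 x8 slots vs.
# one M1015 driving a 24-bay expander backplane over a single 4-lane uplink.
# Assumed rates: 6 Gb/s per SAS2 lane, 32 Gb/s per PCIe 2.0 x8 slot.

SAS2_LANE_GBPS = 6
PCIE2_X8_GBPS = 32

def card_ceiling_gbps(sas_lanes: int) -> int:
    """A path's ceiling is its SAS lanes or its PCIe slot, whichever is lower."""
    return min(sas_lanes * SAS2_LANE_GBPS, PCIE2_X8_GBPS)

three_cards = 3 * card_ceiling_gbps(8)   # 8 SAS lanes per M1015, each slot-capped
one_uplink = card_ceiling_gbps(4)        # expander fed by one 4-lane cable

print(f"Three M1015s: {three_cards} Gb/s aggregate")
print(f"One expander uplink: {one_uplink} Gb/s")
```

    Either way, spinners can't come close to saturating either figure; the gap only matters with SSDs or a lot of 15k drives.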



  11. It has a SAS expander built in, so yes, all you need is one connection.


    Having one drive per SAS channel would theoretically be faster, assuming you have the PCIe lanes available to the M1015 card(s), because there would be zero oversubscription.  But the SAS expander is nothing to dismiss; it is still pretty fast.  A lot of enterprise storage arrays run 48+ enterprise SAS drives on a 4-channel 6Gb SAS loop (technically a dual loop, with one loop per controller), and you would essentially be running only 24 drives at the same loop speed with a setup that doesn't come close to the demands of an enterprise array.
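    The oversubscription in that comparison can be quantified with a quick sketch. The 150 MB/s per-drive streaming rate is an assumed ballpark for 7200 RPM spinners, not a measured figure:

```python
# Oversubscription of a shared 4-lane SAS2 uplink, as in the enterprise
# loop example above. Per-drive streaming rate is an assumed ballpark
# for 7200 RPM drives.

UPLINK_GBPS = 4 * 6     # one 4-lane SAS2 cable at an assumed 6 Gb/s per lane

def oversubscription(drives: int, drive_mbps: float = 150) -> float:
    """Aggregate sequential demand divided by uplink capacity."""
    demand_gbps = drives * drive_mbps * 8 / 1000
    return demand_gbps / UPLINK_GBPS

print(f"24 drives: {oversubscription(24):.1f}x the uplink")   # the home setup
print(f"48 drives: {oversubscription(48):.1f}x the uplink")   # the enterprise loop
```

    So even if all 24 drives streamed sequentially at once, the uplink is only about 1.2x oversubscribed, versus 2.4x or worse on the enterprise loops that run fine in production.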