
SSD as a cache drive?


Ryan23


With SSDs coming down in price, has anyone experimented with using one as a cache drive? I'm looking at Newegg and a 64GB G-Skill SSD lists for ~$130. I'm looking to speed up my horrendous write speeds: currently I'm seeing fluctuations between 1,200 and 12,000 kbytes/sec from Vista, and a 30GB folder looks like it's going to take 3 hours to transfer :-\.  I've gone through all the tricks on the wiki. I don't anticipate pushing more than 60GB a day anyway...
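For a rough sanity check on those numbers (treating 30GB as 30*1024 MB), that transfer averages out to only a couple of MB/s:

    awk 'BEGIN { printf "%.1f MB/s average\n", 30*1024 / (3*3600) }'
    # ~2.8 MB/s, which sits squarely inside the 1,200-12,000 kbytes/sec range above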

 

Ryan

 


Your problem is likely LAN speed and Vista, not disk speed in unRAID.

 

Here's a suggestion... dump Vista.

 

"Dumping Vista" is not a realistic option. Considering my HTPC is setup the way I want with no real hiccups, I wouldn't want to go out on a limb by undoing all that on a hope & prayer it will get better. Surely I'm not the only Vista user with an Unraid server. 

 

I'm going over Gig-E, no hub, direct to PC:

 

Settings for eth0:
        Supported ports: [ TP MII ]
        Supported link modes:   10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Supports auto-negotiation: Yes
        Advertised link modes:  10baseT/Half 10baseT/Full
                                100baseT/Half 100baseT/Full
                                1000baseT/Half 1000baseT/Full
        Advertised auto-negotiation: Yes
        Speed: 1000Mb/s
        Duplex: Full
        Port: MII
        PHYAD: 0
        Transceiver: internal
        Auto-negotiation: on
        Supports Wake-on: pumbg
        Wake-on: g
        Current message level: 0x00000033 (51)
        Link detected: yes
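For reference, that report looks like ethtool output; to pull the same thing on another unRAID box (the statistics option depends on the NIC driver supporting it):

    ethtool eth0       # link speed/duplex report like the one above
    ethtool -S eth0    # per-NIC counters, if supported; errors or drops here would point at the link itself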

 

 

Using "hdparm -tT /dev/xxx" on my various drives, I see ~75MB/S across all 9 disks (~100 for my parity). Using the "I" option, I'm showing *UDMA6 for all SATA drives. Motherboard was just transitioned over from a P5PE-VM to an Abit AB9 Pro. I'm in the process of ditching my IDE drives for all SATA. Only two IDE's remain (750GB).

 

Last parity check came in at ~39,000 KB/sec and took ~200 minutes for 4.5 TB.

 

Ryan

 

 


You missed my point.  And your test numbers all confirm that disk I/O on unRAID is not your bottleneck.

 

SMB is proprietary crap from Microsoft.  Samba is a reverse-engineered attempt at compatibility.

 

Vista to Vista (SMB to SMB) xfers are OK.  Vista to Linux (SMB to Samba) has a number of issues, not the least of which is poor performance.

 

The latest SP for Vista helps, but xfer speed still is below that of XP to Linux.


You missed my point.  And your test numbers all confirm that disk I/O on unRAID is not your bottleneck.

 

SMB is proprietary crap from Microsoft.  Samba is a reverse-engineered attempt at compatibility.

 

Vista to Vista (SMB to SMB) xfers are OK.  Vista to Linux (SMB to Samba) has a number of issues, not the least of which is poor performance.

 

The latest SP for Vista helps, but xfer speed still is below that of XP to Linux.

 

I'm on SP1 now.

 

Actually, the test numbers were reinforcing MY point that read speed is satisfactory. They are read I/O tests, not write tests.

 

From what I gather, there are four operations performed when data is written to the array (read the old data block, read the old parity block, then write the new data and the updated parity). Bypassing that via the cache drive would help WRITE throughput. It was my understanding that Tom added the cache drive for that purpose; maybe I'm wrong...
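Roughly, the updated parity is just the old parity XOR'd with the old and new data, which is why every write turns into two reads plus two writes. A toy illustration with made-up byte values (this is my understanding of how the parity scheme works, not Tom's actual code):

    old_data=0x5A        # 1) read the old block from the data disk
    old_parity=0x77      # 2) read the matching block from the parity disk
    new_data=0x3C
    new_parity=$(( old_parity ^ old_data ^ new_data ))
    printf 'updated parity block: 0x%02X\n' "$new_parity"
    # 3) write new_data to the data disk, 4) write new_parity to the parity disk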

 

A few people have used their spare drives and noted an improvement in write performance with the cache drive:

http://lime-technology.com/forum/index.php?topic=2698.0

 

I'm wondering if anyone has any thoughts on an SSD for that purpose.

 

Ryan

 

 


Another possible issue is network throttling by Vista. Both plain Vista and SP1 throttle whenever the NIC is simultaneously used for AV playback, with a seemingly subtle definition of what that means. Fortunately, there are workarounds for the throttling; check a few recent posts by me, for example.

 

As for using an SSD as the cache drive, it strikes me as spectacular overkill, at best. Most IDE drives have peak transfer rates that are better than or comparable to Gb LAN, and the cache drive is probably most often used when no other drives are active, which means that you won't run into motherboard/bus/... bottlenecks, either. SSDs mainly beat spinning drives on latency, but those numbers are dwarfed by LAN latency, so...
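Back-of-envelope on that point (theoretical ceiling only; protocol overhead pushes real numbers lower):

    awk 'BEGIN { printf "GbE raw payload ceiling: %.0f MB/s\n", 1000/8 }'
    # ~125 MB/s before Ethernet/TCP/SMB overhead; ~100-115 MB/s is a realistic best case,
    # which is already in the same ballpark as a single spinning drive's sequential rate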

 

On a side note, I don't think it is fair to say that SMB is crap; IIRC, CIFS is actually a proposed standard for the protocol. Prior to Vista, Microsoft's implementation of SMB also seemed to be hassle-free, if not exactly Samba-friendly. The problems with Vista's SMB implementation are due to quite advanced things they are trying to piggy-back on top of it, combined with design changes in the Windows kernel that make things more modular. The former you can argue with (but it is innovation), and the latter is actually a good thing in general. If things are as they appear to me to be, Vista machines should, for example, no longer hang while waiting for an SMB request to time out.


 

A few people have used their spare drives and noted an improvement in write performance with the cache drive:

http://lime-technology.com/forum/index.php?topic=2698.0

 

I'm wondering if anyone has any thoughts on an SSD for that purpose.

 

 

While SSDs are very fast, their write speeds are usually half their read speeds, so you will want to choose a quality product with adequate write speed. You also need to consider that there is a limited number of writes for each cell. This is usually a point people fret about; studies report that an SSD will last 2-3 years under heavy write conditions and up to 50 years under average write conditions.
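That's roughly how those lifespan figures get estimated. Every number below is an assumption plugged in purely for illustration (cell endurance and write amplification vary wildly from drive to drive), not a spec for any particular product:

    capacity_gb=64; pe_cycles=10000; write_amp=10    # assumed size, erase cycles per cell, write amplification
    for daily_gb in 50 5; do                         # heavy-write vs. light desktop use
        echo "at ${daily_gb}GB/day: roughly $(( capacity_gb * pe_cycles / write_amp / daily_gb / 365 )) years"
    done
    # works out to a few years under heavy writes and decades under light use,
    # the same ballpark as the figures quoted above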

 

Considering the nature of the cache drive and that it is always going to be a write destination, I would consider a regular higher speed SATA drive.

 

The pro of using an SSD as the cache drive is not having a spindle running when writes are made to the server.

If you can afford to replace it on a shorter cycle than a mechanical SATA drive, it might be worth the effort.

A few people have used their spare drives and noted an improvement in write performance with the cache drive:

 

Yes, when write performance was the bottleneck.  The numbers you posted are below the typical write performance of your unRAID system, and more typical of Vista network I/O bottlenecks.

 

You need to test your network I/O and get some hard numbers between unRAID and your Vista box... both ways.  Vista does some real crap with network I/O, including throttling, that can devastate throughput.
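A straightforward way to get those numbers, assuming you can get iperf onto both boxes, is to measure raw TCP throughput with the disks out of the picture (the IP address below is just an example):

    # on the unRAID box:
    iperf -s
    # on the Vista box, using a Windows iperf build and pointing at the server's address:
    iperf -c 192.168.1.100
    # then swap the two roles to measure the other direction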


"Dumping Vista" is not a realistic option. Considering my HTPC is setup the way I want with no real hiccups, I wouldn't want to go out on a limb by undoing all that on a hope & prayer it will get better. Surely I'm not the only Vista user with an Unraid server. 

 

Yes, I had a Windows network before unRAID and it does what I need; if poor performance warranted it, I would dump unRAID long before I dumped Vista.  My file server needs to meet my requirements for the existing network, not the other way around, even if I do have about 20 years of experience with UNIX operating systems.

 

Last parity check came in at ~39,000 KB/sec and took ~200 minutes for 4.5 TB.

 

I have 3 WD 1.5 TB drives (one of which is parity) + 1 WD 1.0 TB drive (so about 4TB total) on an Asus P5K Deluxe (ICH9) mobo, and my parity check came in at ~78,000 KB/sec.  It was a steady 110K early and dropped to around 58K late.  I also spent a whopping $68 on 8GB of RAM, if that matters.

 

Anyway, are you not running with ANY cache drive at all now?  After seeing how long direct writes take, I threw in an old 400GB Western Digital (SATA/150) drive as my cache.  Writing to that, I get "expected" speeds over the network (e.g. 4GB in around a minute or so).  Checking my syslog last night, moving 73 GB of files from the cache drive to the array took unRAID 3 hours, 23 minutes.  If my array speed is twice yours (based on our parity check info) and you're seeing 3 hours to move 30GB as in your OP, then our performance is fairly consistent, assuming no network limitations or overhead.
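For anyone curious, the mover's progress can be checked in the syslog (the grep pattern is a guess at how the mover script tags its lines), and the throughput is easy to work out:

    grep -i mover /var/log/syslog    # assumes the mover logs lines tagged with "mover"
    awk 'BEGIN { printf "%.1f MB/s\n", 73*1024 / (203*60) }'    # 73GB in 3h23m comes out to ~6 MB/s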

 

I'm constantly considering, dismissing, then reconsidering SSD.  For now, I just consider it too expensive for what you get (my personal killer app would be something to hold my Flight Simulator X scenery, which consists of 100GB+ of a LOT of small files). 

 

Without knowing what you have now, for the price of the Intel X-25 80GB SSD ($500) you could probably just buy four 1.5TB Seagate drives ($130 x 4 = $520) and use your best current drive as a cache, which might actually boost your array performance.

 


Thanks for the replies! I wasn't aware that SSDs had a limited number of writes. If I'd be looking at a lifespan of 2 years under heavy writes, that's not a viable option... damn.

 

I have 3 WD 1.5 TB drives (one of which is parity) + 1 WD 1.0 TB drive (so about 4TB total) on an Asus P5K Deluxe (ICH9) mobo, and my parity check came in at ~78,000 KB/sec.  It was a steady 110K early and dropped to around 58K late.  I also spent a whopping $68 on 8GB of RAM, if that matters.

 

Anyway, are you not running with ANY cache drive at all now?  After seeing how long direct writes take, I threw in an old 400GB Western Digital (SATA/150) drive as my cache.  Writing to that, I get "expected" speeds over the network (e.g. 4GB in around a minute or so).  Checking my syslog last night, moving 73 GB of files from the cache drive to the array took unRAID 3 hours, 23 minutes.  If my array speed is twice yours (based on our parity check info) and you're seeing 3 hours to move 30GB as in your OP, then our performance is fairly consistent, assuming no network limitations or overhead.

 

I'm not running any cache drives at the moment. I do have an old 250GB SATA I could utilize as a cache. Here's my current (complete) setup:

 

Coolermaster monster case

Twin 500W PS's

ABIT AB9 Pro mobo

Pentium 4, 3.0 GHZ

2 GB DDR800 memory

One Seagate 1TB Parity drive

Six Seagate 500GB SATA2 (all SATA1 jumpers removed)

Two Seagate 750GB IDE drives

and a vintage 4MB S3 PCI video card (circa '96) :)

 

Four 500GB SATA2 on the ICH8 controller

One 500GB SATA2, One 1TB SATA2 (parity), Two 750GB IDE on the JMB363 controller

One 500GB SATA2 on the SIL3132 controller.

 

AHCI is enabled in the BIOS. I'm using the built-in NIC.  I've done the various registry hacks to prevent Vista from throttling. I do have a 1.5TB Seagate coming to replace my parity drive; my current 1TB parity will replace one of my 750GB IDE drives. I'm thinking (hoping!) that my mix of SATA and IDE drives may be slowing things down a tad.

 

Just got done transferring 5GB from Vista c:\  to //Media_server/disk2.

Time: 24 minutes  :-\ .

 

Maybe I'll swap over to the 250GB SATA as a cache drive and do some more data xfers. Looking over my controller layout again, maybe I'll try swapping those drives around while I'm at it. I've got two open SATA ports on the ICH8 controller.

 

Thanks for the input guys!

 

Ryan

 


Just a follow-up. I swapped my parity over to the ICH8 controller along with a 500GB drive, so now all 6 SATA ports on the ICH8 controller are being utilized. I still have one 500GB drive and two 750GB IDEs on the JMB controller. The SIL3132 controller only has my "new" 250GB SATA1 cache drive on it.

 

Parity check speed is identical, ~39,000 KB/sec, so no benefit from swapping the controller ports around.

 

Now on to the plus side :). I managed to transfer an 8GB movie over to the cache drive in 5 minutes!!

 

This was vs. 5GB in 24 minutes when writing directly to a disk in the array: roughly a 7-fold speed increase. I'll cross my fingers and hope everything goes well tonight during the scheduled move off the cache drive. I guess the slow write performance wasn't Vista's fault ;D
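The numbers bear that out (rough math, treating a GB as 1024 MB):

    awk 'BEGIN { cache = 8*1024/(5*60); direct = 5*1024/(24*60);
                 printf "cache: %.1f MB/s  direct: %.1f MB/s  ratio: %.1fx\n", cache, direct, cache/direct }'
    # roughly 27 MB/s to the cache drive vs. about 3.6 MB/s direct to the array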

 

Ryan

 


Thanks for the replies! I wasn't aware that SSDs had a limited number of writes. If I'd be looking at a lifespan of 2 years under heavy writes, that's not a viable option... damn.

 

We're talking about writes of 20-50 gigabytes or more per day, every single day, until failure.

That's how they arrive at the possibility of a 50-year lifespan: most people don't write anywhere near that much data every day.


We're talking about writes of 20-50 gigabytes or more per day, every single day, until failure.

That's how they arrive at the possibility of a 50-year lifespan: most people don't write anywhere near that much data every day.

 

Within 2 years' time, I can see myself meeting that quota on average (Blu-ray and HD recordings tend to be quite big)... I take it a lot of folks on this forum can't be categorized as "most people", huh?


We're talking about writes of 20-50 gigabytes or more per day, every single day, until failure.

That's how they arrive at the possibility of a 50-year lifespan: most people don't write anywhere near that much data every day.

 

Within 2 years' time, I can see myself meeting that quota on average (Blu-ray and HD recordings tend to be quite big)... I take it a lot of folks on this forum can't be categorized as "most people", huh?

 

SSD is not for you. I would just use a regular drive.

In the "most people" case it is more based on desktop use.


