
ST31500341AS drives still recommended and used by Limetech?


starcat


After so many people having so many issues with ST31500341AS drives, I have to ask myself: why is Limetech still using them? Are all the problems really sorted out by the newest CC1H firmware? And why do people concerned about saving electricity and spinning down drives still use drives that run this hot, demanding more cooling and reducing efficiency? They are a little faster than the WD15EADS drives because they rotate slightly faster, which may matter for single-disk reads rather than in a striped environment, but this has very little effect, if any, on overall performance within an unRAID system given all the other bottlenecks... Any thoughts on this?

 

I am just trying to decide whether I should stop using the ST31500341AS entirely and move to WD15EADS drives. If I take the plunge, I would like to swap out all the Seagate drives; if not, it means they are good drives and I will continue to buy them, as I don't like mixing drives.

Link to comment
They are a little faster than the WD15EADS drives because they rotate slightly faster, which may matter for single-disk reads rather than in a striped environment, but this has very little effect, if any, on overall performance within an unRAID system given all the other bottlenecks... Any thoughts on this?

 

I am just trying to decide whether I should stop using the ST31500341AS entirely and move to WD15EADS drives. If I take the plunge, I would like to swap out all the Seagate drives; if not, it means they are good drives and I will continue to buy them, as I don't like mixing drives.

 

There are plenty of mixed results and debates on both sides of this.

In my experience, the faster rotational speed helps a great deal with my torrents and with rapid access to small files.

 

I started my array with 1TB 5400 RPM drives. Operations were sluggish but acceptable until I started torrenting (is that a word?).

I upgraded to a 1.5TB 7200 RPM cache drive and I noticed a difference right away.

 

For most people it will not matter. For me it did.

I have not had any issues with my 1.5TB Seagate drives.

 

From other benchmarks, it does seem that the 2TB 5400 RPM drives rival the performance of the 1.5TB 7200 RPM drives.

 

I do not see any reason to refrain from mixing drives in an unRAID array. Data is not striped across drives, so it does not matter all that much.

Half my array is WD, half is Seagate.

What makes unRAID a money saver is the ability to buy drives incrementally, according to need and the sweet spot in pricing.

Link to comment
Why do people concerned about saving electricity and spinning down drives still use drives that run this hot, demanding more cooling and reducing efficiency?

 

Because most people will use their unRAID array for only a few hours a week but want it available 24/7. During use, the array needs to be fast enough to service the system's requirements; when not in use, it should draw as little electricity as possible. Hence people investigating WOL, S3, Cool'n'Quiet, etc...

 

Spinning down a drive saves running costs twofold: energy saved directly, and energy saved indirectly by not having to manage the heat generated while operating.

 

IMO green drives are a marketing man's wet dream: a way to shift slow, previously unsellable drives to the public (5400 RPM drives used to be an OEM-only market). The cost difference of running a green drive vs a regular drive is negligible. WD's latest 2TB HDD uses 10W at full chat, 8W idle and just over 1W in standby; for a WD green drive it's 6W, 4W and 1W. Given the drive spends most of its life in standby, the real saving is negligible.

 

Assume we watch 5 movies a week @ 90 mins each and have a 10-disk array. We'll also do a parity check, which will take 4 hours.

 

7.5 hrs a week active @ 10W

1.5 hrs spun up but idle @ 8W

 

vs

 

7.5 hrs a week active @ 6W

1.5 hrs spun up but idle @ 4W
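
Working those numbers through (using the 10p per kWh rate quoted later in the thread, across all 10 drives): the active hours cost an extra 10 x 4W x 7.5 hrs = 300 Wh a week, and the idle hours an extra 10 x 4W x 1.5 hrs = 60 Wh a week, so about 360 Wh a week, or roughly 1.56 kWh a month.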

 

Saves me 16 pence a month in English money, or about 25 American cents.

 

Without spindown, however, the same comparison costs me £3 a month (about $4.50), hence people buy 7200 RPM performance drives and use spindown: the best of both worlds.

 

Once consumer streaming devices start supporting features like WoL, we can really start to be green and use S3/WoL properly.

 

HTH

 

 

 

Link to comment

One point you didn't mention is noise: the 7200 RPM drives are significantly louder than the 5400/5900 RPM drives.

 

I don't know what size drives you figured in your equation (small 750GB?), but a parity check on 2TB drives takes around 7.2 hours even when averaging 75,114 KB/s.
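
(As a sanity check on that figure: 2 TB at about 75 MB/s is roughly 26,700 seconds, i.e. 7.4 hours, so the numbers hang together.)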

Link to comment

I am also on 7200.11-series 7200 RPM 1.5TB Seagate ST31500341AS drives, but as I am building a new media server I am wondering whether I should keep them or sell them and upgrade to either 1.5TB or 2TB WD Greens (the 2TB not being at the price sweet spot yet, though). I know all the differences, and as I rather like Seagate (and have no problems running them on the CC1H firmware) I am keen not to swap them out. Then I looked at the SMART report on my TS-509 and saw errors, albeit correctable ones (see attached pics), and started thinking again about exchanging all the Seagates. However, if I build an unRAID server it would not replace my 509; it would run separately, which is why I very much like the idea of spinning down the unused drives of a media-only library.

 

Any word on when a second parity drive will be available in unRAID, and when support for more than 20+1 drives is to be expected? Thanks much again!

Link to comment


I don't know what size drives you figured in your equation (small 750GB?), but a parity check on 2TB drives takes around 7.2 hours even when averaging 75,114 KB/s.

 

The parity-check calculation assumed 1TB HDDs @ 4 hrs, 10 drives @ 10W vs 6W, and 10p per kWh. Even redoing the math for 7.2 hours won't make a huge difference: maybe another 5-7 cents per month.

Link to comment

@starcat there is nothing on that SMART report to be scared of. On Seagate HDDs, look for ATA errors and reallocated sector counts.

 

Or pending sectors. A pending sector is an uncorrected sector waiting to be remapped.

You can also trigger a SMART short test or SMART long test to double-check the drive's health.
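
If you want to run those from the console, and assuming the smartmontools package is available (the device name sda below is just a placeholder for your drive), the usual invocations are:

smartctl -t short /dev/sda    # start the short self-test (takes a couple of minutes)
smartctl -t long /dev/sda     # start the extended self-test (takes hours)
smartctl -a /dev/sda          # print the full SMART report, including self-test results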

Link to comment

Thanks, guys! This was a SMART short test!

 

Btw, does anyone know whether it is possible, while rebuilding/recalculating parity in unRAID, to raise the process priority quite a bit so it finishes earlier, as you can with mdraid on generic Linux?

The limiting factor on parity calculation is throughput on the bus from the disk controllers to the CPU.

 

Also... the parity calc/check is performed in the unRAID version of the "md" device. I don't think it runs in a process you can "nice".

Can you point me to a web-page reference on how the parity calc can be sped up on other Linux implementations by raising the priority?

Link to comment

Thanks, guys! This was a SMART short test!

 

Btw, does anyone know whether it is possible, while rebuilding/recalculating parity in unRAID, to raise the process priority quite a bit so it finishes earlier, as you can with mdraid on generic Linux?

The limiting factor on parity calculation is throughput on the bus from the disk controllers to the CPU.

 

Also... the parity calc/check is performed in the unRAID version of the "md" device. I don't think it runs in a process you can "nice".

Can you point me to a web-page reference on how the parity calc can be sped up on other Linux implementations by raising the priority?

 

unRAID is a different architecture than the standard Linux md driver.

With the standard Linux md driver, there are speed_limit_min and speed_limit_max parameters under /proc/sys/dev/raid/.
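
For reference, on a stock Linux box those can be read and raised like so (values are in KB/s per device; this is standard md behaviour, nothing unRAID-specific):

cat /proc/sys/dev/raid/speed_limit_min            # floor the resync is allowed to slow to
cat /proc/sys/dev/raid/speed_limit_max            # ceiling the resync may reach
echo 50000 > /proc/sys/dev/raid/speed_limit_min   # e.g. insist on at least ~50 MB/s per device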

With unRAID there is a separate process, unraidd, which may or may not respond to priority changes.

In any case, it is drive speed, bus layout, and the order in which drives sit on the controllers that determine how fast a parity calculation can run.

 

Link to comment

OK, so a 64-bit/133MHz PCI-X based Supermicro X7SBE with two AOC-SAT2-MV8 controllers, in addition to the four ports on the motherboard (for a total of 20+2 cache/parity drives), will help, as the SAT2-MV8s are PCI-X cards sitting on two separate buses, and the onboard ports hang off the ICH9. Sounds good.
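
For rough numbers: a 64-bit/133MHz PCI-X slot tops out at around 1 GB/s, so eight drives on one AOC-SAT2-MV8 can each see on the order of 130 MB/s, comfortably more than these platters can sustain, which is why that layout should check parity quickly.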

 

Any news as to when the 2nd parity drive will be available?

 

Link to comment

Any news as to when the 2nd parity drive will be available?

Since unRAID 5.0 will focus on a new user-interface API, and it is not yet out even in a first beta, and we are only now nearing the end of the 4.5 beta series, I'd guess it will be at least unRAID 6.x before a second parity drive is possible. That would require a major rewrite of the unRAID device driver.

 

There are so many other far more important features needed than a second parity drive (at least in my opinion): better NFS and AFP support, additional file-system support, error alerts/e-mail notifications, UPS support, just to mention a few... Some of those amount to providing a web interface, usable by a non-Linux user, to features that already exist in Linux (a non-trivial task). Some can only be done by Lime Technology, as they will require support in the forthcoming 5.0 API.

 

Many features will be able to be added by the user community once the 5.0 API is in place and working. Some will need to be developed concurrently with support in stock unRAID and cannot be added by the user community alone (multiple arrays and RAID-6-like Reed-Solomon parity fit that category).

 

In other words, I think it will be a pretty long wait for that second parity drive.

 

Joe L.

Link to comment

I didn't know those features were missing; some of them are quite important, at least AFP or APC (UPS) support. What is it about NFS that is coming, v4? I really hope the current beta implementation of NFS is stable and performs better than SMB. AFP will be great, as I am one of those Mac users; however, Mac OS X has built-in NFS, so that one is no big deal.

 

Supporting more than one array is cool too; it would at least give me the possibility to extend to a second 24-bay case using SAS port expanders.

 

Regarding the GUI, IMHO it would be more important to implement the features first and concentrate on the GUI afterwards, not the other way round. There are too many products out there with GUIs; but as you say, this is not a trivial task, and a lot of things remain command-line anyway.

Link to comment

I didn't know those features were missing; some of them are quite important, at least AFP or APC (UPS) support.

AFP is not yet in place... Support for an APC UPS exists, but as an add-on package you can easily install using unMENU, another add-on package, one initially developed to explore improvements in the user interface. I wrote it in "awk", a language available in the stock distribution. Many of us have our servers configured with an APC UPS... it works really well and shuts the server down cleanly in a power outage. The same goes for e-mail alerts: they exist as add-on packages developed by the user community, but they should be in the core product to make things easy for users who are not comfortable with the Linux command line. The new API will allow that, as the user community will develop web-page interfaces that look like part of the overall unRAID web interface. We are waiting for API hooks that will let us start processes after the array comes on-line and stop them before the array is stopped.
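
(For the curious, and assuming the add-on wraps the standard apcupsd daemon, a sketch rather than the package's exact contents, the heart of such a setup is a few lines in /etc/apcupsd/apcupsd.conf:

UPSCABLE usb        # how the UPS is attached
UPSTYPE usb         # protocol apcupsd should speak to it
DEVICE              # left empty so apcupsd auto-detects a USB UPS
BATTERYLEVEL 10     # begin a clean shutdown below 10% charge
MINUTES 5           # ...or when under 5 minutes of runtime remain)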
What is it about NFS that is coming, v4?
The big issue with NFS support is making it "easy" to administer via a web interface. The current method is very crude and is vastly complicated by the shortcuts taken when user shares were first set up (the permissions for the various users do not map onto the permissions NFS needs).

I really hope the current beta implementation of NFS is stable and performs better than SMB.
I think it is stable... or at least as stable as it is under Linux... you just can't easily set the permissions you might need.
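
(To make that concrete: on a plain Linux box, NFS permissions are set per export in /etc/exports, one line per share, with exportfs -ra applying changes; the share path below is purely hypothetical:

/mnt/user/Movies 192.168.1.0/24(ro,sync,no_subtree_check))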
AFP will be great, as I am one of those Mac users; however, Mac OS X has built-in NFS, so that one is no big deal.

 

Supporting more than one array is cool too; it would at least give me the possibility to extend to a second 24-bay case using SAS port expanders.

 

Regarding the GUI, IMHO it would be more important to implement the features first and concentrate on the GUI afterwards, not the other way round. There are too many products out there with GUIs; but as you say, this is not a trivial task, and a lot of things remain command-line anyway.

It is exactly for that reason that I wrote unMENU with its ability to be extended with plug-in pages. It has allowed the user community to give feedback on the various web-enabled features we've been developing. Once I had shown how to get at the information needed to show array status, bjp999 was able to create the MyMain interface, which in my opinion blows away the main screen I had created.

 

I expect the same evolution will occur with the 5.0 API. We will finally have a structure where creating and installing an "installable package" will be far easier for everyone involved.

 

In the interim, you really don't want to go over 20 drives in an array... a rebuild will take forever. My hope is that multiple arrays with fewer drives in each will be implemented before a RAID-6-like second parity drive for the entire set of drives.

 

Joe L.

Link to comment

Thanks, Joe! I am eager to see multiple-array support. I was asking for a second parity drive because I wasn't very comfortable running 20 drives behind just a single parity drive; multiple arrays with fewer drives, each with its own parity drive, is better. This is like the old RAID rule of thumb: up to 8 drives in RAID 5 and up to 16 in RAID 6. Great!

 

How many drives would be supported with the multiple-array feature: also 20, or more?

 

What about supporting at least one simple, non-RAID LSI card like the 3443E, which can be had for $60-100 and in turn supports the Chenbro LX28- and LX36-based SAS port expanders?

Link to comment

Thanks, Joe! I am eager to see multiple-array support. I was asking for a second parity drive because I wasn't very comfortable running 20 drives behind just a single parity drive; multiple arrays with fewer drives, each with its own parity drive, is better. This is like the old RAID rule of thumb: up to 8 drives in RAID 5 and up to 16 in RAID 6. Great!

It is not as bad as you might think... the issue in the old days was the same as today: the time to rebuild a failed disk. The big difference is in the speed of the disks and the throughput to and from them. When simulating a failed disk's data, you need to read the 19 other disks. The total I/O will saturate a PCI bus and keep a PCI-e bus pretty busy. Fortunately, most of us will never grow to 20 disks, as today's disk sizes let a smaller number of physical disks provide the same storage as 20 smaller ones.
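
(Rough numbers: 19 drives reading at even 75 MB/s apiece is about 1.4 GB/s aggregate, while classic 32-bit/33MHz PCI tops out around 133 MB/s total and a single PCIe 1.0 lane at about 250 MB/s, so the bus, not the parity arithmetic, sets the rebuild time.)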

 

The original unRAID version 1 was limited to 12 drives (11 data + 1 parity), as that was about all you could put on a PCI bus and still get reasonable rebuild speed on that older version of Linux. The limit was increased incrementally, due to user demand, as faster hardware made it possible. Tom just gave in to the demand... and now those who grow the array to that size simply have to suffer the time it takes to calculate parity. I know that on my older IDE-based array it takes over 14 hours, and its largest disk is 1TB. If it used 2TB disks, it would take over a day to check parity or rebuild a disk.

How many drives would be supported with the multiple-array feature: also 20, or more?

I've got no idea... I'm just a customer of unRAID, like you. I would guess he would still have to support an array of 20. The limits are both logical and physical: it takes a much bigger power supply and case to handle many more drives. Personally, I'd prefer multiple servers... (and in fact, that is exactly what I am doing, as I have all the pieces for a new SATA-based server and just need to find the time to assemble it).

What about supporting at least one simple, non-RAID LSI card like the 3443E, which can be had for $60-100 and in turn supports the Chenbro LX28- and LX36-based SAS port expanders?

You would need to lobby Tom @ Lime-Tech to support SAS port expanders and include the drivers... As I said, I'm just an end user like you; it's just that I'm running currently supported hardware, and the biggest SATA expansion card I have is a 4-port one.

 

Joe L.

 

Link to comment

Archived

This topic is now archived and is closed to further replies.
