*** SEAGATE 5TB EXTERNALS - MYSTERY SOLVED ***



Can you share the details of the file corruption you are seeing?

 

How was it detected? What tools were used to confirm the drive is the source of the corruption?

 

By accident, I would say. I was copying files to the drive and had problems like the computer freezing up or power cycling whenever the utility switched the electric lines. I have a UPS, but I was using one of those super-duper efficient power supplies with all these whiz-bang features on it, and it would reset the computer. So I splurged on a power supply that cost a quarter of the whiz-bang one, and it doesn't reset at every hiccup. Guess the more whiz-bang they make things, the less tolerance they have. But I figured I would rather have some data protection than power protection. The cheap Chinese power supply seems to have no problems with the spikes.

 

One of the main problems was the high temperature of the drive. I had a fan blowing onto the case, but it was still getting over 55C, and I shut it down when it hit 60C. It had copied maybe 200GB by that time. When I went to play one of the video files it showed corruption, but the copy on the internal drive worked fine; checking the file contents, it was all 0s. I checked a few other files and they were fine, but then I found a couple of others with problems. Reading up on shingled recording, I see they erase many tracks and rewrite the data, so I think files you never touched can end up damaged too, and in the end you have no idea which files got corrupted. Which is my main beef! I can understand the file you are writing getting toasted, but not files you never touched that you expect to be in good condition.

I did not do extensive testing, like whether anything happens if the computer locks up while the drive still seems to work. By that time I was just sick of the drive, which showed wild speed variations. For a 7200rpm drive to copy files at 20MB/s was just too much. Testing showed the drive at 140MB/s read/write, but try copying 100GB and it takes hours. With no NCQ, the drive slows down drastically on lots of small files. I think Seagate got their panties in a bunch because people were using their drives for their own benefit. How else can you explain behavior where they spend extra effort to make sure their drives are junk now, with warranty terms that fluctuate wildly, performance that's designed for grandpas, and explanations that are only good enough to con marketing people?

 

I have known Seagate for close to 30 years now, and the only drive I was impressed with was a certain model they created once they bought out Quantum. They used CDC and Quantum technology to make a decent, high-performance drive that was rather good, and I still have that drive working fine after over 15 years, with no errors or bad sectors. All their drives before that were junk and performed like a Pinto. After they bought out Maxtor they went for the performance but without the quality. I don't have any Maxtor drives working after 10 years; they all died 5 or so years in, but their performance was high compared to others.

This latest gimmick from Seagate has made me rethink buying their drives no matter what. It is a HUGE hassle when they go back. Recovering a TB of data takes weeks. Then it is a fight about the RMA, and they may or may not honor it. The RMA'ed drive might be worse. The last one they did not return, but the one before that developed like 3 bad sectors with 10 hours on it, and I saw Backblaze say those drives were the second-worst drives they have used, with almost 20% failures, which I would attribute to high operating temperatures of over 60C.

 

Since I got the SSD, the hard drives are only for storage of media files. But it is getting to the point where the hidden costs outweigh the benefits, i.e. you have to spend weeks recovering data before you realize the penalty. I just can't handle losing 2 drives every year like this. It makes an entire month go by being extremely depressed and stressed. I saw the WD 4TB and Seagate 5TB for the same price and am glad I saw this thread. The 4TB uses older technology, so they are better drives but slower. Not that the slower drives are reliable, since I also lost data on them, but at least they lasted a few months. You can't spend money and be excited and then 2 days later get depressed and want to throw the thing out. Which is how I felt after I lost a drive and they refused to send it back as being out of warranty, and then bought a new one and got all these problems. If they want to be like this, then so be it. They won't see my money any longer.


Link to comment

Ah, you have bad power so it is the drive's fault. As you mentioned, running the drive at 60C is also a likely source of problems, NOT the SMR technology, if you did in fact even have an SMR drive. If you did have a 7200rpm DX in that case, it is NOT SMR.

 

I am not sure many would run a drive to 60C.

 

The 5TB externals I have gotten are NOT SMR drives. You can tell by the drive model numbers. My externals are ST5000DM000-1FK178. You would see ST5000ASxxx if the drive were SMR. They all have NCQ as well. As DM models they do NOT run at 7200rpm. The SMRs also do NOT run at 7200rpm. If you have a 7200rpm drive, it is NOT SMR.
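
If you want to check yours, here's a quick sketch on Linux, with sdX as a placeholder for your drive. Caveats: some USB bridges need smartctl's -d sat option to pass the query through, and the sysfs name can be truncated to 16 characters.

smartctl -i /dev/sdX | grep -i model
cat /sys/block/sdX/device/model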

 

Oh, the 5TB SMRs are NOT based on 3TB drives, but on 4TB drives. The SMR technology increases platter capacity by 25%, which is exactly what turns a 4TB design into a 5TB drive (4TB x 1.25 = 5TB). Even the 33% gain of the v2 would only turn a 3TB into a 4TB, NOT a 5TB.

 

When comparing a 4TB PMR to a 5TB SMR, you'll find the older PMR is rated faster. Older is NOT slower here.

Link to comment


So if I have power supply problems, I need to apply special technologies just so I can use Seagate drives? And then keep the drive in a special freezer so I can use it for longer than 10 minutes?

 

Where do you get off saying this stuff? You sound like Seagate. If they want people to use their stuff, they need to make it usable. Like having ventilation instead of sticking the drive inside a plastic box, and, among other things, having some kind of power-failure mechanism, because other drives don't have the problem. Like no one has kids or cats or dogs, so the drive will always stay stable on the desk. But none of that matters anyway, since Seagate has a far higher failure rate than anyone else no matter who uses the drive or where it is used. The low prices are just a con, and now you can't even find decent info on their drives. They try to hide almost everything about them. I could find no info on my 3TB drives; they are not even listed on their web site. And don't fret, I have found 4 versions of Seagate 2TB green drives, and it seems they have at least 3 versions of 3TB drives that I know of, all with the same model number but entirely different features. Since they don't publish any info, basically you get what you deserve. I saw someone ask what the difference was between 2 external models and they were told that's proprietary information. Say what?


Link to comment

Yes, I agree. I wish the newer external cases had better airflow. It would be crazy to ask for a fan, but at least more holes would help. They seem to be getting worse on airflow, not better. I'm not sure about those tiny WD externals. I guess they are laptop drives, but even they need vents.

 

For most capacities, there are several versions. It used to be just 7200rpm and low-power, but now there are even more. Often you can find a desktop version, a video version, a NAS version, maybe an Archive version, at least one "performance" version, maybe a laptop size, and maybe a couple of Enterprise versions on top of those. Even finding the rotation speed of each is difficult or impossible.

 

Even if you know the model number of the drive you want, that's not on the packaging  :-\

 

But once you have a drive (I know we want these details ahead of spending money), finding the model number is small work, and there are various sources online with details.

Link to comment
  • 1 month later...

Hello, joined just to comment on this.

 

These disks are SMR. I got one a little more than a month ago. Unfortunately, I can't return it because I didn't keep the packaging. I've already got 4728 bad sectors on it.

 

You have to understand the technology. SMR disks keep random writes sequentially in a persistent cache area on the outer tracks. When the disk is idle, it starts doing garbage collection and moves those writes to the actual shingled tracks, during which every track in the affected band must be rewritten.

 

So, performance is usually 180MB/s for reads and writes. But fill up the persistent cache, and speeds crawl to 20-30MB/s. This seems to happen after several hundred GB of writes, which you hit often if you do things like balance or defrag or mass-convert data.
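
If you want to reproduce the cliff yourself, here's a crude sketch using GNU dd; the mount point and size are just placeholders. oflag=direct bypasses the page cache so you see the drive's real rate, and status=progress prints it as it goes. If the behaviour above is right, the rate sits near 180MB/s at first and collapses to 20-30MB/s once the cache fills.

dd if=/dev/zero of=/mnt/smr/testfile bs=1M count=300000 oflag=direct status=progress
rm /mnt/smr/testfile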

 

BTW, if you've stopped using your drive and it's still spinning and the head is moving, DON'T turn it off. It's doing its garbage collection, which can take up to 10 hours to complete.

 

Unfortunately, my use case involved continuous read-write operations for days on end. This killed the drive. I'm trying to see if I can't remove the bad sectors by not submitting the disk to the normal workload, just using it as an occasional backup.

 

To be fair, these disks are fine as external backups (which is how they are sold), but completely unfit for RAID use.

Link to comment


Previous posters have found the problems to be A) controller issues, B) power, C) heat. I have experienced the controller problem with other USB3 drives, and unfortunately the heat problem with many more drives :(

 

Just how do you expect to "remove the bad sectors"?

 

Lots of testing of real SMR drives seems to contradict your findings as to the usefulness of SMR drives.

Link to comment

They may in fact be SMR drives (Seagate has released both 5TB and 8TB SMR units ... with a 10TB "coming soon") => but some very extensive testing of the 8TB units has shown they work quite well in UnRAID, so the technology by itself isn't the issue.

 

But there have been enough problems reported with these 5TB externals that I'd nevertheless avoid them.

 

With regards to the 8TB units, you may want to read this:  http://lime-technology.com/forum/index.php?topic=39526.0

 

... and (until Daniel adds his testing overview to that post, which he plans to do), you can read about the extensive testing that was done on these here (skim the thread and look for posts by danioj):  http://lime-technology.com/forum/index.php?topic=36749.0

 

Link to comment
  • 1 month later...

Hi, I just signed up to ask a question regarding this drive.

 

I am a noob to this type of drive; however, I am considering purchasing 2 of them to use in my HTPC (specifically for retro gaming) instead of using them as external drives (only because I want a neat and tidy setup).

 

Without getting too technical, what is the overall consensus about doing this in terms of pros/cons?

 

Thanks in advance.

Link to comment
  • 2 weeks later...


Gary -

 

You wrote a post somewhere about giving an SMR drive some time plugged in, but not doing any I/O, after a lot of writes, so the writes can be moved from the "buffer area" (forgot the technical term) to the SMR portion of the drive.

 

I started thinking that maybe this 5T was SMR with less mature firmware than the newer 8T version, and decided to play with it again.

 

I shucked my 5T drive from its USB enclosure again. I hypothesized that the very slow I/O I was seeing when trying to preclear it again, after having recently precleared it via the USB connection, was due to the dumping of the buffer to the SMR area as the second preclear ran. I left it plugged in for well over a week to "do its thing" and started a preclear earlier this evening. It is now doing the preread at a respectable ~130 MB/sec. Am going to let it complete the preclear, and perhaps I will be able to use it after all. My strategy will be to fill it up with archive data, freeing up space on my non-SMR drives in the process; hopefully, after the writes are all caught up, this drive will be basically a read-only archive disk with few if any writes in the future.
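
(If anyone wants to run the same kind of spot check on read speed, here is a rough, non-destructive way to time sequential reads on Linux; sdX is a placeholder and both commands only read:)

hdparm -t /dev/sdX
dd if=/dev/sdX of=/dev/null bs=1M count=4096 iflag=direct status=progress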

 

Wish me luck!

Link to comment

Let us know the results (I'm sure you will).    The "buffer area" is called a "persistent cache".    From what I've read about these drives, the 5TB unit has a smaller persistent cache than the 8TB drives, so that may also have impacted what you were doing earlier.

 

Assuming the drive "passes" your pre-clear testing, it should work very well as an effectively "read only" array drive.

Link to comment


It's 32% through zeroing, clipping along at 167 MB/sec.

 

Will reserve final comments until drive is loaded with data, but certainly encouraging.

Link to comment

So far so good. Almost halfway through the post-read, running at 121 MB/sec.

 

Seeing something a little odd in the log (sdx is the drive I am preclearing)

Jul 13 22:48:32 Tower udevd[2796]: timeout: killing 'ata_id --export /dev/sdx' [2797] (Minor Issues)
Jul 13 22:48:33 Tower udevd[2796]: timeout: killing 'ata_id --export /dev/sdx' [2797] (Minor Issues)
Jul 13 22:48:34 Tower udevd[2796]: timeout: killing 'ata_id --export /dev/sdx' [2797] (Minor Issues)
Jul 13 22:48:35 Tower udevd[2796]: 'ata_id --export /dev/sdx' [2797] terminated by signal 9 (Killed) (Errors)
Jul 13 22:48:35 Tower udevd[2796]: timeout 'scsi_id --export --whitelisted -d /dev/sdx' (Drive related)
Jul 13 22:48:35 Tower udevd[2796]: timeout '/sbin/blkid -o udev -p /dev/sdx' (Drive related)
Jul 13 22:48:36 Tower udevd[2796]: timeout: killing '/sbin/blkid -o udev -p /dev/sdx' [2951] (Minor Issues)
Jul 13 22:48:36 Tower udevd[2796]: '/sbin/blkid -o udev -p /dev/sdx' [2951] terminated by signal 9 (Killed) (Errors)
Jul 13 22:49:38 Tower kernel: sdx: sdx1 (Drive related)

 

Any idea what that means?

 

Here are the SMART attributes.

 

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   114   100   006    Pre-fail  Always       -       65078328
  3 Spin_Up_Time            0x0003   092   091   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       37
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   073   060   030    Pre-fail  Always       -       22308861
  9 Power_On_Hours          0x0032   099   099   000    Old_age   Always       -       1001
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       29
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   067   065   045    Old_age   Always       -       33 (Min/Max 30/34)
194 Temperature_Celsius     0x0022   033   040   000    Old_age   Always       -       33 (0 18 0 0 0)
195 Hardware_ECC_Recovered  0x001a   114   100   000    Old_age   Always       -       65078328
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       587
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       267718196461863
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       19542320986
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       34685028589

 

The CRC errors (587) are unchanged during the preclear. Might have been something happening in the USB enclosure.

 

The normalized value on the Seek Error Rate is a little low. Is this in the normal range for other SMR drives?

 

Link to comment

Don't know about the UDMA CRC errors, but I wouldn't worry about them -- the normalized value for the attribute is still 200, which is "perfect", so it's clearly not an issue. Seagate reports all of the raw errors (WD does not), so you see all of the corrected sectors reported -- the Raw Read Errors -- along with the fact that they were all just fine after ECC -- the Hardware_ECC_Recovered.
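
As an aside, on the Seek Error Rate question above: the usual community reading of Seagate's raw values (reverse-engineered, not anything Seagate documents) is that Seek_Error_Rate packs the real error count into the high bits of the 48-bit raw value and the total seek count into the low 32 bits. By that reading, the 22308861 above is all seeks and zero errors. A quick bash check:

raw=22308861
echo "seek errors: $(( raw >> 32 )), total seeks: $(( raw & 0xFFFFFFFF ))"
# prints: seek errors: 0, total seeks: 22308861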

Link to comment
  • 2 weeks later...

Original post updated with the following:

 

Update 7/25/15: Since originally writing this review, I have learned that this drive is an SMR drive, meaning it has overlapping tracks that require special write handling. This can result in slow performance under certain circumstances. To alleviate performance issues, the drive features a persistent cache: a non-SMR portion where writes land first; later, when the cache is full or the drive is idle, the drive copies from the cache to the SMR area.

 

I believe I was trying to do I/O while the drive was dumping its persistent cache, which created the very strange performance characteristics observed and detailed in this thread. Maybe there is special logic in the USB bridge firmware to prevent this type of thing; when the drive was attached to a normal SATA controller, I interpreted the odd slowdowns as a purposely restricted firmware making the drive perform poorly on a SATA port. That theory was based on research I found on the internet from other users' experiences and seemed to fit the facts.

 

I now believe this is not the case, and that the experiences users were having should actually be attributed to the SMR technology in the drive. (Seagate never disclosed these to be SMR drives.) After reading about the 8T SMR drives, I decided to test mine again (previously the drives had been relegated to portable USB 3 duty and used very infrequently). Once I removed the drive from the USB enclosure and inserted it in the server, I let it sit for over a week, so absolutely any persistent cache activity would complete. I then precleared the drive, let it sit a week, copied a bunch of data to it, and performed a parity check. The drive is performing quite admirably. My plan is to fill these drives with sequentially written data (minimal if any fragmentation) and use the PMR drive space freed by copying data to this drive for everyday reads and writes. (Although I let the drive wait about a week, a day would likely have been more than enough.)
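
For the sequential fill itself, one way to keep the writes orderly under Linux is rsync with --inplace, which rewrites files directly instead of creating a temp copy and renaming. The paths here are placeholders; this is just a sketch of the idea, not a claim that it's required:

rsync -a --inplace --progress /mnt/disk1/archive/ /mnt/smr5t/archive/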

 

Sorry for misleading anyone with my writeup, which was intended to help others avoid problems with the drive.

Link to comment
  • 3 weeks later...

One more update ...

 

Update: 8/9/15: So happy with the Seagate 5T drive I had, I bought another one that I found locally on a good sale. The shell had a blue bottom, and it was much easier to extract the drive than from the prior generation. The drive model was the same as what I had. It made it through the preread and zeroing phases with no problem and was 80% through the post-read. But the next time I checked on it, the drive had been dropped from the server. It was /dev/sdc, and that device no longer existed. There was a long series of errors in the syslog with no clue what happened.

I rebooted and checked, and the drive was not precleared. So I set it up to post-read just the very last few % and watched it. The post-read finished, but then it entered a series of I/Os at the very beginning of the drive to install the partition and preclear signature; the drive hung for a while and then dropped. I rebooted the server with the drive in another slot. The SMART report was fine. Same thing. I noticed that the firmware revision was different - I think it ended with a "6" but forgot to write it down. Anyway, I returned it as a defective drive. Can't say why this happened. Maybe a bad drive that just happened to have a problem at or near sector 0, or maybe Seagate playing games. Either way, I've switched to Toshiba.

Link to comment

Interesting experience.  The 8TB SMRs seem to be doing very well re: reliability, but not so much the 5TB versions.  In any event, for 5TB drives I'd use either Toshiba or WD Reds anyway, to stay with PMR technology.

 

I've got one of the 8TB Seagates (not in UnRAID) and it's working very well as an extra storage drive on my main PC.  I keep large files there: images of my other PCs, ISOs of all the various program CDs/DVDs I've collected over the years, etc.  Much of this is also on my UnRAID server, but it's convenient to have it locally available.  (And you can never have too many backups  :) )

 

But for additional drives, I'm beginning to lean towards 5TB Reds for my next UnRAID box.  I'm happy with the 8TB SMR Seagate, but not with Seagate in general, as I've seen more issues reported in the last few months than I'd like.  And there have been a few folks with issues with WD's 6TB Reds ... which are likely due to the "push" to 1.2TB platters (the 5TB Red is essentially the same drive with 1TB platters, which have proven VERY reliable in all the other Reds).

 

Link to comment
And there have been a few folks with issues with WD's 6TB Reds ... which are likely due to the "push" to 1.2TB platters (the 5TB Red is essentially the same drive with 1TB platters, which have proven VERY reliable in all the other Reds).

No problems for me so far.  I only have three 6TB WD Reds in service so far, but one more will be added in the next month and one as a spare.  All went through 3 preclear cycles with flying colors, so I'm not expecting troubles. So knock on wood, now that I have jinxed them.
Link to comment

Good to know.  I'm actually still on the fence ... I plan to build a new server in Nov/Dec, after a couple of trips we're doing before then.  My original thought was to use 8TB SMRs, but I'm waffling a bit on that -- and if I stay with PMRs I may still go with the 6TB Reds ... I DO trust WD not to release a drive that isn't reliable (but I also know that 1TB platters are "pushing" the density less than 1.2TB platters  :) ).  On the other hand, densities have been improving for years, so all is likely just fine with the 6TB units.

 

Who knows, by Nov WD may release an 8TB PMR Red  :)

Link to comment
  • 2 weeks later...

Who knows, by Nov WD may release an 8TB PMR Red  :)

 

I know from the smiley that it was said in jest, but it wouldn't be a Red drive if it were based on SMR technology (I'm guessing that's what you meant and the "P" is a typo). It would be something else, and they'd have to invent some new colour for it - Orange, say - because SMR drives are not suitable for the same applications that define the Red range.

 

The first generation SMR drives, produced by Seagate, are designed to emulate conventional SATA hard drives while being very different internally. In order to make them as near as possible a drop-in replacement, the complexities of the SMR technology are handled by the drive's own firmware. The need to flush the persistent cache is hidden from the operating system, so the drive appears to become very unresponsive for several seconds at a time. This is usually enough to confuse a RAID controller and cause it to eject the drive from an array - hence Seagate's own recommendation that they be used singly. unRAID, of course, isn't RAID, so they may well work reasonably well as data drives, though I don't think they would be best suited as parity drives.
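
If you do hang one off a Linux controller anyway, one commonly suggested band-aid (it masks the stall, it doesn't fix it) is raising the kernel's per-device command timeout so a long cache flush doesn't get the drive ejected; sdX is a placeholder:

cat /sys/block/sdX/device/timeout          # default is usually 30 (seconds)
echo 180 > /sys/block/sdX/device/timeout   # as root; not persistent across reboots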

 

Second generation SMR drives, such as the 10 TB helium-filled HGST product, are host-managed (as opposed to drive-managed), whereby the responsibility for managing the persistent cache is delegated to the operating system or disk controller. The problem is that mainstream operating systems and RAID controllers don't have that capability yet. But when they do, they will make the best use of the host-managed SMR drive's persistent cache. Current file systems are not optimised for SMR drives - they were designed at a time when seeks were expensive and overwrites cheap. With SMR the situation is reversed, favouring log-structured file systems.
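
For what it's worth, newer Linux kernels expose exactly this zone model, so you can see which camp a drive falls into. A sketch using util-linux's blkzone, with sdX as a placeholder; a drive-managed disk like these Seagates reports "none" because the shingling is hidden from the host:

cat /sys/block/sdX/queue/zoned   # none, host-aware or host-managed
blkzone report /dev/sdX          # per-zone start, length and write pointer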

 

Reading this thread, it would have been useful if people had mentioned the model of the drive they were discussing, because some people were talking about a conventional drive while some were discussing an SMR drive. There are several 5 TB Seagate drives available at the moment, each designed for a particular application. The only one that uses SMR technology has "AS" in its model number, such as ST5000AS001. They work very well in external USB cases, and it isn't Seagate's fault if people break open the cases and use the drives in ways they were never intended to be used. The ST5000DM000, in contrast, is a conventional non-shingled drive.

 

Link to comment

Yes, I indeed meant what I said (i.e. PMR)  :)

 

It wouldn't even require higher density platters => HGST already has a helium-filled 8TB PMR drive using seven 1.2TB platters.    Achieving that density without helium may be a challenge ... but it may also be possible to improve the platter density to 1.35TB and get 8TB with only 6 platters.  Hard to say just what technological advances might happen in the near term ... but it's pretty clear there will be some !!

 

Link to comment

Achieving higher platter density with PMR is unlikely. Seagate has already delivered 826 Gbit per sq in on PMR, which exceeds the helium drives at under 650. This is also well into the range where SNR becomes a problem (see the EOL 5-platter Makara). Trying to push closer to the 1 Tbit per sq in limit of PMR is a steep engineering effort. Alternate technologies (SMR, TDMR, HAMR, etc.) provide easier ways to get more data on each platter.

Link to comment

I registered here just to reply and say this thread is full of nonsense. Telling everyone to avoid a Seagate disk because of a few people's issues is just ridiculous.

 

For reference, I have 9 (nine) of these 5TB disks, bought in the USA as Seagate Backup Plus units. I had 4 of them running externally for a while (4-5 months), connected to a Mac mini 2012 running SoftRAID 4 as two sets of RAID 1 over USB 3.0. No issues. (I've since upgraded to a Thunderbolt enclosure.)

 

I have 2 running now for about a month, connected to an Asus Sabertooth Z87 motherboard. No issues - 180MB/s read/write or thereabouts.

 

I have 2 running a simple RAID 1 over Thunderbolt connected to my Mac mini 2012.

 

I also now have my original 4 of them in an OCZ Thunderbay 4 enclosure, running RAID 5 over Thunderbolt connected to my Mac mini 2012.

 

And then I have a spare 5TB just in case I have a failure.

 

Again, NO issues with any of these drives. The only thing I had to do was run the Mac HDAPM tool to prevent the head-parking issue that OS X is infamous for, but I had no performance issues prior to running the tool, and only did so because I didn't like the head-parking sound when running media in a quiet movie room. I did the same thing with my older 2TB Western Digital drives too, so this is not unique to the Seagates.

 

So again, to the OP: your experience is certainly not everyone's, and telling EVERYONE to avoid Seagate 5TB externals is just madness.

 

PS: I have several friends with ~15 5TB Seagate drives between them, and none of them has any issues, either.

 

Quoting myself to update this thread. Still running a bunch of these 5TB Seagates (12 now) connected to my Mac mini over Thunderbolt, running SoftRAID 5. No issues running RAID 5. Performance definitely decreases when the files are tiny and there are zillions of them (to be expected), but I'm still getting excellent large-file throughput despite several of the RAID 5s being quite full (277GB free on one of the 4x 5TB RAID 5s - 15TB available originally). These are ST5000DM000 drives.

 

I'm right now running a verification that all data can be read and that there are no bad sectors on 8 of them...
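
(For anyone wanting a Linux equivalent of that verification pass, two options; both are read-only as shown, with sdX a placeholder:)

smartctl -t long /dev/sdX   # drive's own extended self-test; read results later with smartctl -a
badblocks -sv /dev/sdX      # host-side read of every sector; default mode is non-destructive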

 

Here's a picture of the in-progress verification (sorry for the crap resolution, but I was limited to 192KB).

[Screenshot: verification in progress]

Link to comment
