garycase Posted January 24, 2016 (Author)

Quote: "Still to this day, almost a year on, I'm so happy with these drives. I still don't understand those who aren't choosing these as parity drives, or who use H/W RAID1 (2 x 4TB) to act as parity! They are just perfect for unRAID as a data or parity disk, IMHO!"

Agree => while I still haven't built a system using these drives, it's purely because I simply don't need the extra space (yet). My next system will absolutely use the 8TB SMR units (unless there's a larger one available at roughly the same cost/TB). Your experience has shown that Seagate's mitigations of the shingled limitations eliminate any concerns for the unRAID use case (at least for the vast majority of users).
methanoid Posted January 24, 2016

Not in unRAID, but I managed to kill two of these drives with FlexRAID. Both times they died while serving as the parity drive, and both times during the initial parity creation (i.e., when writing roughly 6-7TB to the drive continuously). Not sure I'd ever want to risk data on one of these again. Sure, as a data drive, IF I were writing to it in smallish chunks; but copy 6-7TB of data to it at maximum speed and it consistently died. YMMV... I'm back on tried-and-tested traditional drives now.
garycase Posted January 24, 2016 (Author)

As I noted above, they work great "for the vast majority of users" ==> there are definitely use cases where the workload will hit the wall of shingled-drive performance, which slows everything down to a C..R..A..W..L (a VERY slow crawl indeed) and is completely unacceptable. Hitting that "once in a blue moon" isn't catastrophic (as long as you recognize what happened and simply wait it out), but clearly any user whose typical usage is going to do that should NOT be using shingled drives.
methanoid Posted January 24, 2016

I can live with a crawl, but death and loss of data is more serious than a slowdown. Still, as I said, YMMV...
BRiT Posted January 24, 2016

Did you exercise the drives with some sort of burn-in and test program before using them in FlexRAID? I know FlexRAID doesn't need them precleared, but if you didn't preclear the drives or do something similar, they could simply have been infant-mortality drives.
methanoid Posted January 24, 2016

No, no preclear of any kind. Yes, they could have been infant-mortality drives (from two unrelated batches)... I guess I must just be unlucky. But Seagate aren't fooling me a third time.
jwegman Posted January 24, 2016

Perhaps you mentioned what type of failure those two drives had in some other thread, but just stating that they "died" here is fairly ambiguous. From my own anecdotal experience (using these drives in unRAID as both parity and data), I've so far precleared seven of them (3 cycles each) without obvious issue: 1 parity, 5 data, 1 spare.
methanoid Posted January 29, 2016

Going from memory, I think they blew through the reallocated-sector count or something like that; many thousands of errors, anyway. BUT I might be tempted back by the testing done here. So, is it three preclear runs, then build the array with parity enabled and start copying data over like that? OR the usual: build the array without parity, copy the data over, and then enable the parity build?
JorgeB Posted January 29, 2016

Quote (methanoid): "So, is it three preclear runs, then build the array with parity enabled... OR build the array without parity, copy the data over, and then enable the parity build?"

Either way will work; use whichever is more convenient for you.
hugenbdd Posted January 29, 2016

So I decided to build an unRAID box and have three of these doing a preclear right now. The first round took 60 hours to clear; only 120+ more hours to go... I'm using this script:

wget http://bit.ly/1G44UhZ -O /boot/config/plugins/preclear.disk/preclear_disk.sh

Hopefully it's the new fast one...

========================================================================1.15
== Disk /dev/sdb has successfully finished a preclear cycle
== Finished Cycle 1 of 3 cycles
== Using read block size = 1,000,448 Bytes
== Last Cycle's Pre Read Time : 20:09:55 (110 MB/s)
== Last Cycle's Zeroing time : 18:12:11 (122 MB/s)
== Last Cycle's Post Read Time : 21:54:35 (101 MB/s)
== Last Cycle's Total Time : 60:28:09
== Total Elapsed Time 60:28:09
== Disk Start Temperature: 27C
== Current Disk Temperature: 27C
== Starting next cycle
========================================================================1.15
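As a quick sanity check, those per-pass speeds line up with the report's elapsed times for an 8 TB drive. A back-of-the-envelope calculation (capacity and MB/s figures taken straight from the report above):

```python
# Rough sanity check of the preclear report: time for one full pass
# over an 8 TB drive at the reported average speed.
TB = 1e12  # drive vendors use decimal terabytes

def pass_hours(capacity_bytes, mb_per_s):
    return capacity_bytes / (mb_per_s * 1e6) / 3600

pre_read  = pass_hours(8 * TB, 110)   # report: 20:09:55
zeroing   = pass_hours(8 * TB, 122)   # report: 18:12:11
post_read = pass_hours(8 * TB, 101)   # report: 21:54:35

print(round(pre_read + zeroing + post_read, 1))  # ~60.4 h, matching the 60:28:09 total
```

So the ~60-hour cycle is exactly what 8 TB at 100-120 MB/s per pass predicts; nothing about the drive is underperforming.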
methanoid Posted January 31, 2016

I'd like to know for sure that someone has built a "busy" array by copying the data over and then doing parity after. It was the act of creating parity that twice killed these drives for me under Windows and FlexRAID, so I am ultra-cautious. I've just started experiencing reallocated sectors on my six-week-old 5TB Toshiba drives: one to start with, and then I wake up this morning and four are reporting reallocations. I don't want to jump onto the 8TB Seagates and get the same problems!
JorgeB Posted January 31, 2016

I did that on mine, and I also upgraded one 3TB drive to an 8TB, which is the same "workout" as building parity.
methanoid Posted January 31, 2016

IDK how the parity drive is calculated, i.e., is the whole drive written to even if the data drives are only 20% full? I'm really freaked out, having had problems with the Seagates before and now finding Toshibas in pre-fail after 60 days. Sure, they're under warranty, but I need drives to put my data on, so I have to buy new drives and am then left with warranty replacements to sell. Either way it's a big hit financially. Do I buy more Toshibas or Seagates? Decisions, decisions!
JorgeB Posted January 31, 2016

The initial parity sync will always write 100% of the disk, even if all the data disks are empty. After that, parity is updated incrementally: write 20GB to a data disk, and the same amount will be written to parity.
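The reason the sync touches every sector is that single parity is a bitwise XOR across the same position on every data disk. A minimal sketch of the principle (not unRAID's actual implementation):

```python
# Single-parity (XOR) redundancy sketch: parity covers every sector position,
# so the initial sync writes the whole parity disk even if the data disks
# are mostly empty (empty space is just zero bytes to the array).
from functools import reduce

data_disks = [bytes([1, 2, 3, 4]), bytes([5, 6, 7, 8]), bytes([0, 0, 0, 0])]

def xor_parity(disks):
    # XOR the byte at each position across all disks
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

parity = xor_parity(data_disks)

# Any single failed disk can be rebuilt from the survivors plus parity:
lost = 1
survivors = [d for i, d in enumerate(data_disks) if i != lost]
rebuilt = xor_parity(survivors + [parity])
print(rebuilt == data_disks[lost])  # True
```

This is also why a parity update only writes as much to the parity disk as you wrote to the data disk: only the changed positions need their XOR recomputed.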
SSD Posted January 31, 2016

unRAID creates redundancy on a drive-by-drive basis, and to unRAID a drive is always full of 1s and 0s; it has no idea what they mean.

Different drives from different manufacturers do have different reliability characteristics. I've had excellent luck with HGST drives (previously Hitachi). Toshiba got some of that technology, and the 5TB drives they made have also been reliable for me. The newer models, the X300s, I have no experience with; I don't think they use the Hitachi technology. Of all the drives I have owned, Seagates have been the most unpredictable: some models have been awful, others good. I have not jumped into the 8TB world, but experience here appears good.

The idea that using a drive for its intended purpose is somehow going to make it fail sooner is flawed. If they are going to fail under normal usage, you want that to happen early so you can return them and get new ones. Although there are things you can do to make drives fail early, like running them at very high temps, copying large amounts of data to them is not dangerous at all. Do it and let the chips fall where they may, then return the bad ones for a factory-new replacement (if possible) or RMA them (if past the return date). Take the lessons learned into your next purchase decision.

We tend to think of finding SMART issues as a bad thing, but it is far better to get these errors during preclear and weed out the bad drives before your valuable data is at risk. Good luck!
JorgeB Posted January 31, 2016

What he said. You were either unlucky with your previous disks or have some underlying issue, like bad cooling or an inadequate power supply.
John_M Posted January 31, 2016

Quote (SSD): "Although there are things you can do to make drives fail early, like running them at very high temps, copying large amounts of data to them is not dangerous at all."

It's interesting, though, that enterprise-class hard disks have a maximum recommended annual data throughput assigned to them. I haven't done the calculation to find out whether it would be possible to exceed this in practice by constantly reading or writing (like an infinite number of preclear cycles), or whether the figure is purely theoretical; but in the absence of throughput figures for consumer-grade disks, the assumption has to be that theirs are somewhat lower.
SSD Posted January 31, 2016

You did not exceed the drive's rated usage with a few preclear cycles and filling it. Enterprise drives are typically for server applications with very high usage; our unRAID media servers tend to be very light-usage applications. Short response: don't sweat it.
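For a sense of scale: three full preclear cycles on an 8 TB drive move about 72 TB, while saturating the drive year-round would move petabytes. A quick calculation (the 180 and 550 TB/yr figures are assumed typical desktop/enterprise workload ratings used purely for illustration, not specs for any particular model):

```python
# Back-of-the-envelope on annual workload ratings. The 180 (desktop) and
# 550 (enterprise) TB/yr figures are illustrative assumptions, not vendor
# specs for the drives in this thread.
DESKTOP_TB_YR, ENTERPRISE_TB_YR = 180, 550

def preclear_tb(capacity_tb, cycles=3, passes_per_cycle=3):
    # each cycle = pre-read + zero + post-read, i.e. three full passes
    return capacity_tb * cycles * passes_per_cycle

burn_in = preclear_tb(8)                 # 72 TB for three full cycles
print(burn_in, burn_in < DESKTOP_TB_YR)  # under even a desktop-class rating

# Continuous transfer at 160 MB/s for a whole year, by contrast:
continuous = 160e6 * 365 * 24 * 3600 / 1e12
print(round(continuous))                 # ~5000 TB/yr: only sustained server loads come close
```

So an "infinite number of preclear cycles" really could blow past any published rating, but a burn-in plus normal media-server use stays far below it.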
Helmonder Posted January 31, 2016

Quote (SSD): "We tend to think of finding SMART issues as a bad thing, but it is far better to get these errors during preclear and weed out the bad drives before your valuable data is at risk."

Which is exactly the reason we preclear. An error might only occur after several writes, and an unRAID system typically stores data in a write-once-read-many kind of way, mostly large media folders. This means a flaky sector can very easily go unnoticed. Preclearing takes that out of the mix; largely, not completely. I have precleared a lot of drives (20+) and only one has shown quickly rising SMART errors; I returned it 5 days later and got a new drive.
hugenbdd Posted February 9, 2016

I don't think I have seen a speed test reported on these drives in this thread, so I just wanted to post my results: averaging about 160 MB/s. Not bad. Tested using the drivespeed.sh script: https://lime-technology.com/forum/index.php?topic=31073.0
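For anyone without the drivespeed.sh script handy, a rough sequential-read measurement can be sketched in a few lines of Python. This is an illustration, not the script used above; point it at a large file, or at a raw device such as /dev/sdX when run as root (the path in the example is hypothetical):

```python
import time

def read_speed_mb_s(path, total_mb=256, block=1024 * 1024):
    """Time sequential reads from `path` and return the average MB/s."""
    read = 0
    start = time.monotonic()
    with open(path, "rb", buffering=0) as f:
        while read < total_mb * 1024 * 1024:
            chunk = f.read(block)
            if not chunk:          # stop at end of file/device
                break
            read += len(chunk)
    elapsed = max(time.monotonic() - start, 1e-9)
    return read / elapsed / 1e6

# Hypothetical usage (requires root for a raw device):
# print(round(read_speed_mb_s("/dev/sdb"), 1))
```

Note that for regular files the OS page cache will inflate the result on repeat runs; proper tools bypass the cache (e.g. via O_DIRECT), which this sketch does not.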
volume Posted February 24, 2016

Some advice needed. I just bought 2 new Seagate 8TBs. I have another 4 Seagate 8TBs in my HP MicroServer running XPEnology, but I would like to build one big pool with all 6 drives in a new unRAID machine. So: 6 drives in total, 1 drive for parity; 2 drives are new (unformatted), and the other 4 disks are full of data. What is the best practice for building my new unRAID server?

1) How many preclear cycles are advised?
2) Can we preclear all the drives at once, or one by one?
3) When do we get better write speeds: on a pool with a parity drive, or without? Is it preferable to move my files first and then set one drive as parity, or the opposite?

Thanks
Helmonder Posted February 24, 2016

1) As many as you feel comfortable with; I do 3 cycles.
2) You can preclear several at the same time.
3) Parity increases protection but heavily impacts write performance, so no parity gives much better write speeds, but a drive error then means data loss.
4) Up to you. Moving the files first and then adding parity will make the move faster, but should a drive fail during the move you will lose that data. Adding parity first means the move will go slower, but losing a drive will not cause data loss.

What I would do: 3 preclear cycles, two drives at a time. Since you are copying from another system, you still have your data on that old system, so losing one of the new drives will not cause data loss; I would copy first, then add parity on the new system.
HellDiverUK Posted February 25, 2016

Helmonder, how are those inert-gas CPUs holding up? No issues with the gas escaping?
Helmonder Posted February 25, 2016

Huh?
JonathanM Posted February 25, 2016

;D He's poking fun at your sig. You seem to have an exceedingly rare CPU type.