About Heretic

  1. The HGST NAS drives were only €150 here compared to €135 for the ST4000DM000, so it was a small premium, though it's always good to spread out the risk anyway. That's the beauty of unRAID compared to normal RAID: expand when you want, and the drive does not need to be the same spec as the other drives.
  2. The temps seem to be only a couple of degrees higher so far. I'll have to see how that works out on a very hot summer day, but it seems fine so far.
  3. Yes, there must have been some anomaly: the pre-read during preclear took about 17 hours even though the speeds reported during the preclearing looked normal (150-80 MB/s).
     == Using :Read block size = 65536 Bytes
     == Last Cycle's Pre Read Time : 17:19:24 (64 MB/s)
     == Last Cycle's Zeroing time : 8:53:51 (124 MB/s)
     == Last Cycle's Post Read Time : 28:02:02 (39 MB/s)
     == Last Cycle's Total Time : 36:57:06
     I think it will remain a mystery to me. The HGST NAS drives are 7200 rpm, which might make up a little for only having 800 GB platters compared to models that have 1 TB platters.
  4. The Backblaze report was indeed why I chose the Hitachi NAS drives this time. Although the particular (now HGST) drive was not part of the test itself, the positive result for Hitachis in general was motivating enough. For the same reason I chose not to go for the faster 1 TB/platter Seagate drives. Reading through Newegg reviews, it is shocking how high the initial failure rate of most current drives is. When reading reviews, however, I always take into account that someone with a negative experience is far more likely to report than someone who has no problems. I also wonder how much transport…
  5. Thanks for the insights. I thought it was just an extra safety measure to do an extra parity check after the new parity disk had been rebuilt and before adding a new drive. I had just added a new 4 TB data drive, so all subsequent checks will have to be full ones. I'll just do another full check now before putting any data on it. All new drives have been precleared 3 times, so I assume I should be safe. I'm doing a no-correct check now to see if everything is indeed fine.
  6. Hi. I just replaced my 2 TB parity drive with a 4 TB drive. Just to be safe I did another parity check. Since all the disks in the array are 2 TB, I found it strange that it would keep checking beyond 2 TB. I understand it won't often be the case that the parity drive is the largest drive, but it would still make sense for the check to finish once it had covered all the data. I cancelled the parity check because I did not see what benefit letting it finish would have.
  7. Hi. I would like to upgrade my server to the latest 5.x version. I still have a lot of files in a hidden folder on the cache drive. After I upgrade to 5.x, can I still access the hidden folder and move the files I want to keep to the protected shares? Or will they be deleted, or will I get permission problems or something? I only want to keep a small part of the files, but it will take some time to sort them out, so I would prefer to keep them on the cache drive if that is hassle free. Thanks.
  8. The disks have finally finished preclearing (the disks are the new HGST 4 TB 7200 rpm NAS drives). Here is an example from a report:
     == invoked as: ./ -r 65536 -w 65536 -b2000 -c 3 /dev/sdf
     == HGSTHDN724040ALE640 PK1334PBH0H4DS
     == Disk /dev/sdf has been successfully precleared
     == with a starting sector of 1
     == Ran 3 cycles
     ==
     == Using :Read block size = 65536 Bytes
     == Last Cycle's Pre Read Time : 17:19:24 (64 MB/s)
     == Last Cycle's Zeroing time : 8:53:51 (124 MB/s)
     == Last Cycle's Post Read Time : 28:02:02 (39 MB/s)
     == Last Cycle's Total Time : 36:57:06
     ==
     == Tot…
  9. "Controller throughput and bus throughput are certainly important factors. Perhaps the biggest factor is the rotational speed of the drive. If you check the User Benchmarks, Preclear Times wiki section, you can see that rough speeds for 7200 rpm drives are 10 hours (±2 hours) per terabyte, and that rough speeds for 5900 rpm drives are 13 hours (±1 hour) per terabyte." That's why 17 hours seems a bit long for just the pre-read; these are 7200 rpm drives. I saw in old preclear logs that pre-reading and writing zeroes took roughly the same time on a WD Green (both approx. 6:30 h). For what i…
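The wiki's rule of thumb quoted above can be sketched as a quick calculation (a hypothetical helper for illustration only, using the rough per-terabyte figures from the quote, not part of the preclear script):

```python
# Rough whole-cycle preclear estimate from the rule-of-thumb figures:
# ~10 h/TB for 7200 rpm drives, ~13 h/TB for 5900 rpm drives.
HOURS_PER_TB = {7200: 10, 5900: 13}

def estimate_cycle_hours(size_tb: float, rpm: int) -> float:
    """Very rough estimate for one full cycle (pre-read + zeroing + post-read)."""
    return HOURS_PER_TB[rpm] * size_tb

print(estimate_cycle_hours(4, 7200))  # ~40 hours for a 4 TB 7200 rpm drive
```

By that estimate a 4 TB 7200 rpm drive should take roughly 40 hours per cycle, give or take, which is why a 17-hour pre-read alone stands out.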
  10. Two of them are connected to an AOC-SASLP-MV8; the other is connected to the motherboard. The drive connected to the motherboard was a little faster and finished the pre-read when the other two were at 98%. It took about 17 hours to finish pre-reading. The figure of 40 hours for 4 TB would imply the following numbers: 10 h pre-read, 10 h writing zeroes, and 20 h post-read.
  11. It's step zero, pre-reading. The thing is, I'm either miscalculating or the speeds don't match the time. Even if the speed were a constant 80 MB/s, it would do 288,000 MB in an hour and should be finished within 14 hours. I'm just curious why, with the speeds having been higher, it still has not finished the pre-reading. At these speeds a single cycle will be 17 h + 17 h + 34 h = 68 hours.
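The arithmetic in that post can be checked with a minimal sketch (assuming decimal terabytes, 1 TB = 1,000,000 MB, as drive vendors count capacity):

```python
def hours_at_speed(size_tb: float, mb_per_s: float) -> float:
    """Time to read a whole drive end to end at a constant speed, in hours."""
    size_mb = size_tb * 1_000_000       # decimal TB -> MB
    return size_mb / (mb_per_s * 3600)  # MB/s -> MB per hour

# At a constant 80 MB/s (288,000 MB/hour), a 4 TB pre-read should take
# just under 14 hours, matching the figure in the post.
print(round(hours_at_speed(4, 80), 1))
```

So a 17-hour pre-read at reported speeds of 80-150 MB/s does look inconsistent with the drive's stated capacity.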
  12. I started preclearing 3 HGST 4 TB NAS drives. It seems to be taking a long time; how long would be normal? It started pre-reading around 140-150 MB/s and is currently at 85 to 90 MB/s. Progress is currently 94-96% and elapsed time is 16:15 to 16:05. I used: -r 65536 -w 65536 -b 2000 -c 3 /dev/sdX. If the average speed were 90 MB/s, it should have finished already, right?
  13. Thanks a lot, that's very helpful. It seems to be only a couple of degrees; not bad at all. Because of their good reputation I had already ordered 3 of the 4 TB NAS drives from Hitachi: one for parity, one for data, and a spare. Now the next step will be finding out how to get the smoothest upgrade from 4.7 to 5.x.
  14. Yes, I'm looking at the 4 TB NAS drives too ($file/DS_NAS_ds.pdf). Since my system still contains a number of 2 TB WD and Samsung drives, I doubt the higher speed of the 1 TB platters of the Seagate NAS drives matters at all. I do wonder how much hotter the HGST drives will run compared to all the green drives I have. This article shows the Hitachis are far more reliable than the Seagates, although it also states: this gives a…