Pro-289

Everything posted by Pro-289

  1. I don't trust anything Seagate these days. I used to, back in 2006, but now all the failed drives I see in the shop are Seagate, Toshiba, or Hitachi, in that order of failure frequency. Of course I still see some Western Digital drives, but way less often than the others. So I pick the lesser of 4 evils. I've had great success with the hundreds of WD drives I've purchased. I know I would not be a happy person if I'd tried to save $10 a drive and bought Seagates. And I'm talking about smaller drives here, like 500GB to 1.5TB. The Seagate 1TB and 1.5TB drives are the WORST! They se
  2. That diskspeed.sh script uses hdparm. So I tried 'hdparm -t /dev/sdb' and was able to do a disk read test, but it was too short. What I'd need is a script that keeps repeating the 'hdparm -t' command, something like the sketch below. Then I could get really crazy and try the -T switch to test each drive using its onboard cache only. With one drive's cache I got a 4535.43 MB/sec result.
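     A minimal sketch of what I mean, assuming the drive list and run count (both invented here) get adjusted to match your system:

         #!/bin/bash
         # Repeat hdparm's buffered-read test so the numbers average out.
         # DRIVES and RUNS are assumptions -- edit for your hardware.
         DRIVES="/dev/sdb /dev/sdc /dev/sdd"
         RUNS=5
         for d in $DRIVES; do
             echo "=== $d ==="
             for i in $(seq 1 $RUNS); do
                 hdparm -t "$d" | grep "Timing"   # swap in -T to test cache only
             done
         done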
  3. Precisely what I want to do. But I'm looking for a way other than running preclear_disk.sh read-only in a screen session per drive (see the sketch after this post). I kind of did this before when I was originally clearing 4 drives at once; I was able to watch each drive's bandwidth to make sure each one got its max speed. I'm just a bit uneasy running preclear_disk.sh on drives that have data.
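     For reference, the screen approach I'm talking about would look roughly like this (device names and the script's path are assumptions; -V is the skip-to-post-read switch mentioned below):

         # One detached screen session per drive, each doing a read-only pass.
         for d in sdb sdc sdd sde; do
             screen -dmS "preclear_$d" ./preclear_disk.sh -V /dev/$d
         done
         screen -ls                # list the running sessions
         screen -r preclear_sdb    # attach to one to watch its MB/s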
  4. Well, again, I don't want to know the total maximum throughput. I have 5 drives on onboard SATA, 3 drives on one PCIe SATA card, and one drive on another PCIe SATA card (experimenting with this at the moment). If my onboard SATA bus is the bottleneck, I would never know it. Same if the PCIe card with 3 drives happens to be in a bad and/or PCIe 1.x slot; I'd never know. I suppose I'll just have to run a read-only preclear on all my drives to find out what I'm looking for. I'd be able to check each drive one by one to make sure they're running around 130MB/s. If I find one or more low
  5. Yeah, I've found some bad spots on the drive that just aren't being remapped through SMART. I ran SpinRite to check for bad sectors, and sure enough, around sector 720400000, the same area flagged in the syslog, SpinRite came to a halt. I know my drive is not fully compatible with SpinRite (it only saw the drive as 2.2TB instead of 3TB), but it was able to reach a bad spot 16.77% in. I even tried a level 4 test to read and write each sector, but it just froze when it got to the bad spot. I also tried the jumpers on the hard drive to put it in 3Gbps mode and spread spectrum mode. When I had it
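     A read-only way to poke at that area without SpinRite (just a sketch; /dev/sdb and the exact window are assumptions, the suspect sector is the one from my syslog):

         # Read a window of sectors leading into the suspect area and discard
         # them; dd stops with an I/O error if it hits an unreadable sector.
         dd if=/dev/sdb of=/dev/null bs=512 skip=720390000 count=20000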
  6. I'm not looking for the total time it takes. I want to know the total hard drive bus bandwidth in use while all the hard drives are being read. Ya know, see if there's a bottleneck in my configuration. Make sure my PCIe card is working to spec. Make sure my onboard SATA ports are functioning properly.
  7. I'm thinking I could use the preclear_disk.sh script with the -V switch to skip the pre-read and clear and go straight to the post-read verify. I could then run screen and do this for all the drives? I'd be able to view the initial MB/s read speed and see if it decreases as more drives start up. I know it's dangerous running preclear on disks with data, but as long as I use -V to only post-read I'd be okay, right? It wouldn't change any bits on the drives, only read them? Is there a more elegant and proper way to stress the hard drive bandwidth to make sure the onboard and PCIe
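     Maybe the more elegant way is plain read-only dd, one per drive, all at once (a sketch, with device names assumed; iostat comes from the sysstat package):

         # Read-only bandwidth stress: one dd per drive in its own screen.
         for d in sda sdb sdc sdd sde; do
             screen -dmS "read_$d" dd if=/dev/$d of=/dev/null bs=1M count=8192
         done
         # From another shell, watch per-drive MB/s and spot any that sag:
         iostat -mx 2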
  8. If you put the drive in Windows and it shows up as unformatted or corrupt, you'll have to run a recovery program like Active@ Partition Recovery. You could also try their file recovery tool to scan the whole drive and try to piece your files back together. If it finds partition pieces it can mount them virtually, and you may have access to your data. You may be in luck since you stopped it after a minute. I've had success with that software on numerous occasions with hard drives, USB drives, and memory sticks. I've even recovered data from a drive that was quick-formatted.
  9. Well, the pre-clear logs show nothing; the values are just zero because the drive was brand new. The Raw_Read_Error_Rate value was 100 at the start, then went back to 200 at the finish. The worst was 253, then went to 200 at the finish. Some of the same sector areas come up each time I do a parity check, but I end up with 0 parity errors. So it's hard to tell whether these warnings are critical or not. UnRAID calls them errors, but the data is still okay. This is driving me nuts. I just wanted to build this thing and be done with it.
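     For what it's worth, here's how I've been pulling those numbers (a sketch; /dev/sdb is just an example device):

         # VALUE/WORST in this table are normalized scores (higher is
         # healthier; 253 often just means 'no data collected yet'),
         # while RAW_VALUE is the vendor's actual counter.
         smartctl -A /dev/sdb
         # Full report, including the drive's own error log:
         smartctl -a /dev/sdb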
  10. I get some of those "frozen" errors in my log too, but only while copying from one drive to another using Midnight Commander logged in as root, and while trying to access the config menu via http. I figure the system is busy accessing 2 data drives + parity, then when I access the menu it just freezes for about 15 seconds, then continues, writes the "frozen" error to the log, and resets one of the drives' "link". But for me it's happening on ata7 even though the drive on ata7 isn't involved in the copying. Maybe it's a sort of "system busy" error message. Oh well, probably has no relatio
  11. There's also a calculation bug in 5.0.4 that miscalculates the total space used, I believe, when you have unformatted disks present. After I pre-cleared 4 drives and went to the Main page they showed up as unformatted, but the space used by the 4 working drives didn't add up to the total used; it showed roughly double. Oh well, it went away after formatting the drives.
  12. System: ASRock H55M/USB3
      CPU: Intel® Core i3 530 @ 2.93 GHz (cache: 128 kB / 512 kB / 4096 kB)
      Memory: 4096 MB module (max. 8 GB)
      Network: eth0: 100Mb/s - Half Duplex (What's up with this? It's a Gigabit NIC in a Gigabit router; see the check below.)
      Drives: 5 3TB HDs on onboard SATA 3.0Gbps (parity is one of them); 4 2TB HDs on a PCI-E IOCrest (SI-PEX40064) SATA expansion card
      PSU: I believe it's 500W.
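     About that 100Mb/s half-duplex link, the first things I'd check (a sketch; eth0 assumed, and a marginal cable is just as likely the culprit):

         # See what the NIC negotiated and what it claims to support:
         ethtool eth0
         # Gigabit requires autonegotiation (and all 8 wires in the cable),
         # so re-trigger it rather than forcing a speed:
         ethtool -s eth0 autoneg on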
  13. I've built a new server with 5 3TB drives and added 4 2TB drives later. But my parity drive is showing occasional errors on the Main screen. I've precleared all the drives, but I don't remember any before or after stats. I filled the 4 drives with data, then enabled the parity drive. I ran a parity check and it found 0 errors, but the syslog is filled with tons of "disk0 read error". SMART doesn't show any reallocated sectors, but does show possible "Raw_Read_Error_Rate" and "Multi_Zone_Error_Rate" issues. In the log there's a lot of "ata1.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action
  14. I'm sure other people have done speed tests and already know that a parity drive slows down writes to the array. But I was experimenting with a speed test with and without a parity drive. I have a gigabit network and transferred a 4.36GB file. With the parity drive it took 113 seconds; without the parity drive it took 48 seconds. Pretty big difference; it cuts the time down by more than half. My peak bandwidth was almost 900 Mbps. In the picture below you can see a visual of the actual test using DUMeter. Just thought it was interesting and figured I'd share it with everyone.
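     For the curious, the arithmetic: 4.36 GB ≈ 4465 MB, so 4465 MB / 113 s ≈ 40 MB/s with parity and 4465 MB / 48 s ≈ 93 MB/s without. 93 MB/s is roughly 744 Mbps on the wire, which lines up with the ~900 Mbps peaks in the DUMeter graph.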