Everything posted by Superorb

  1. I'm not using a server board. What board are you using?
  2. When I first began having this problem I ran Memtest for 72+ hours without any errors. I was really hoping it would be as easy as bad memory. I recently replaced all the cables with shorter ones and unbundled them. I ordered over twice the cables I needed, so I went through a few of them, and the sync errors still persist. I even removed all drives from their hot-swap cages to no avail.
One thing of note is that when I try to run Prime95 it immediately crashes, citing that it ran out of memory (a quick sketch follows this post). No matter what I set the memory usage to, it always immediately crashes due to memory running out, even though almost all 2GB of memory is free at all times. This is a pretty vanilla server: there are no add-ons running and no cache drive, just data disks and a parity drive.
Is the PSU thread updated? I originally ordered the CX PSU because it was in the recommended list; is that not the case anymore? What do you guys recommend? I will never run more than 10 drives, especially with 4TB+ drives readily available. I'm currently running 7 green drives. I have gift cards to Amazon so I'll have to buy it there.
Also, before I added the Sil3132 a few days ago I was getting around 5 sync errors per check; now I'm getting over 50.
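A rough sketch of how those Prime95 runs could be sanity-checked from the unRAID console, assuming the Linux mprime build has been unpacked somewhere on the flash drive (the path below is a placeholder, not the exact location used here):

    # Confirm how much memory is actually free before starting the torture test.
    free -m

    # Run mprime's torture test directly; -t skips the PrimeNet menus. Picking
    # the small/in-place FFT option at the prompts keeps memory use minimal,
    # which matters on a 2GB box.
    cd /boot/mprime    # placeholder path for wherever mprime was unpacked
    ./mprime -t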
  3. Yup, I'm hoping others chime in with their experience.
  4. Not if the sync errors never match between checks. Nor is it ever the same number of errors. Something other than the data on the drives is causing the errors.
  5. Thanks. Do you know if they all support sleep? I know in the past some cards wouldn't come back from sleep or would hang the system. I'm running a non-correcting check and am at 8 sync errors and it's only at 7% so far. I just want the errors to go away, I'm sick of seeing them. I can't even remember the last time I ran a correcting parity check due to the random errors. It would be great if I could borrow one of these but I don't see that happening.
  6. Is there any reason to go with one 8-port controller card over another, or are they all basically the same? I'm on 5.0 and would like to start sleeping the server and waking it via magic packet (see the sketch just below). Parity is 4TB but all other drives are <=2TB. My board has 2x PCI-e x8 slots. I also have Amazon gift cards, so buying from AZ is preferred. I have been unsuccessful at preventing random parity sync errors that always occur at different addresses and in different amounts each parity check. The cards I'm considering:
     • m1015
     • m1115
     • AOC-SASLP-MV8
     • AOC-SAS2LP-MV8
     • 9211-8i
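Since the post above mentions waking the server with a magic packet, here is a minimal sketch of that piece, assuming the server's NIC is eth0 and supports wake-on-LAN; the MAC address is a placeholder:

    # On the server, check and enable wake-on-magic-packet before sleeping it.
    ethtool eth0 | grep -i wake-on     # look for "Supports Wake-on: ... g"
    ethtool -s eth0 wol g

    # From another machine on the LAN (needs the wakeonlan or etherwake tool),
    # send the magic packet to the server's MAC address (placeholder shown).
    wakeonlan 00:11:22:33:44:55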
  7. "Have you tried replacing the parity drive? If you're getting errors past the highest data addresses, then it's got to be either an error on the parity drive, memory, or a controller issue. Sounds like you've already pretty thoroughly tested the RAM and controller ... so I'd try a different parity drive."
I did replace the parity drive. I had the same random errors with the old 2TB parity and also with the new 4TB parity drive. I really hate gremlins like this. I was throwing around the idea of getting a new SATA card and moving all the drives to the new card, since I'm using mobo headers for all 6 drives.
  8. Yes, I'm sure. I always copy the addresses of the errors and no two have ever matched. Plus sometimes I'll have 2 errors, then I'll have 30+ errors, then the next time will be 1 or no errors, etc. If it's not memory, what could it be? I do get errors past 2TB, and my largest data drive is 2TB with parity being 4TB.
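On the "errors past 2TB" point: assuming unRAID reports sync errors as 512-byte sector offsets, a reported sector number can be converted to a byte offset to confirm it lies beyond the largest 2TB data drive and therefore can only involve the parity drive, RAM, or controller. The sector value below is only a placeholder:

    # Convert a reported sync-error sector (placeholder) to a byte offset and
    # compare it against 2TB.
    SECTOR=4500000000
    echo $(( SECTOR * 512 ))                   # byte offset of the error
    echo $(( 2 * 1000 * 1000 * 1000 * 1000 ))  # 2TB in bytes, for comparison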
  9. If only I could figure out my random errors during parity checks, I'd start buying more drives again. I did figure out the hardware issue with the SATA links being reset, but I can't figure out the sync errors. RAM is fine, tested for 72 hours without error.
  10. Amazon has this drive for $150 with free shipping. Normally $175, and the lowest price was $169.99 according to the Camel. These drives are excellent for servers and NAS enclosures. http://www.amazon.com/gp/product/B00D1GYO4S
     • Built and tested to provide industry-leading performance for 24x7 NAS applications
     • Includes NASWorks technology to support customized error recovery, advanced power management and vibration tolerance features
     • Designed for home servers or desktop NAS solutions, small-business file sharing, backup server applications
     • Available in 2TB, 3TB and 4TB capacities
     • Always on, 24x7 - 1M hours MTBF
  11. So over the past week I've been copying 100 gigs or so to/from each drive on the server, along with a few non-correcting parity checks to stress the system, and I have not had a single ATA reset/error since moving the two problem drives out of the hot-swap cage and hooking them directly to the motherboard. I have a 3rd cage at the bottom of the case that I'll move to the top and test for errors. I still have some random sync errors that keep popping up during non-correcting checks that I'll have to address. Already checked RAM for days with no errors.
  12. I have those same 2 errors as well every time I boot the machine.
Feb 28 18:11:34 unRAID kernel: atiixp 0000:00:14.1: simplex device: DMA disabled (Errors)
Feb 28 18:11:34 unRAID kernel: ide1: DMA disabled (Errors)
I removed the drives from the hot-swap cages and copied tons of data to/from the disks, and I haven't had any of the reset SATA connections since. I was copying directly to each drive (\\UNRAID\disk1, \\UNRAID\disk2, etc.). When they were in the cage they would always begin to show SATA reset errors. I did get the errors on ATA1 (Seagate NAS 4TB) and ATA3 (old Hitachi 2TB, from when Hitachi drives were still made by Hitachi). I bought new cables and used old cables, but the errors persisted. I changed the cables at least 6 times and the problem still persisted, so that's when I removed them from the cage, which ended the ATA errors. So, try some new cables, and if that doesn't work, remove the drives from the hot-swap cage and connect them directly.
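For anyone comparing notes on those resets, the syslog can be searched for the same link-reset and bus-exception messages quoted in this thread. This is only a sketch; the path assumes a stock 5.0 install:

    # Count, then list, SATA bus exceptions and link resets since boot.
    grep -icE "hard resetting link|exception Emask" /var/log/syslog
    grep -iE  "hard resetting link|exception Emask" /var/log/syslog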
  13. What's the best way to do that? Leave the side of the case open and put all the disks on top of each other with a fan blowing on them?
  14. Last night I moved the drive on ata3 out of the cage and have been copying/erasing data on the drive several times. So far no ata3 errors, but I am still getting ata1 errors so I'll move that one next.
  15. So I'm on cable #3 for ata3, and now I'm getting the error on ata3 and on ata1 as well. ata1 is the parity drive. They're all in the same hot-swap cage too. What are the odds of 3 cables from different brands being bad, plus another cable on another bus also being bad at the same time?
  16. So as I was writing to just this disk it had the same error and reset the SATA link. However, the power_cycle_count did not change upon this latest error. I'll replace the cable again...
  17. Replaced the SATA cable on ATA3 and I'm moving some data over to test for errors. I've got 1 molex and 2 SATA power cables going into this backplane, so I don't think it's a power cable issue. I have another identical drive showing similar power_off_retract counts, and two Samsung drives that are showing low counts. Maybe it's just the way the Hitachi drives park their heads?
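To compare the head-parking counters across drives the same way, the relevant attributes can be pulled with the same smartctl invocation shown later in this thread; /dev/sdd is just an example device:

    # Show only the parking-related SMART attributes for one drive.
    smartctl -a -d ata /dev/sdd | grep -iE "Power-Off_Retract_Count|Load_Cycle_Count|Power_Cycle_Count"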
  18. Hello. What would cause this error? Faulty cable? Poorly seated backplane/drive? I've reseated the drive several times already but this error keeps popping up when copying files to the server. I'm running 5.0 right now.
Syslog
Feb 21 11:15:03 unRAID kernel: ata3.00: exception Emask 0x50 SAct 0x0 SErr 0x400800 action 0x6 frozen (Errors)
Feb 21 11:15:03 unRAID kernel: ata3.00: irq_stat 0x08000000, interface fatal error (Errors)
Feb 21 11:15:03 unRAID kernel: ata3: SError: { HostInt Handshk } (Errors)
Feb 21 11:15:03 unRAID kernel: ata3.00: failed command: WRITE DMA EXT (Minor Issues)
Feb 21 11:15:03 unRAID kernel: ata3.00: cmd 35/00:00:78:ce:f1/00:04:04:00:00/e0 tag 0 dma 524288 out (Drive related)
Feb 21 11:15:03 unRAID kernel: res 50/00:00:77:e5:f1/00:00:04:00:00/e4 Emask 0x50 (ATA bus error) (Errors)
Feb 21 11:15:03 unRAID kernel: ata3.00: status: { DRDY } (Drive related)
Feb 21 11:15:03 unRAID kernel: ata3: hard resetting link (Minor Issues)
Feb 21 11:15:03 unRAID kernel: ata3: SATA link up 3.0 Gbps (SStatus 123 SControl 300) (Drive related)
Feb 21 11:15:03 unRAID kernel: ata3.00: configured for UDMA/133 (Drive related)
Feb 21 11:15:03 unRAID kernel: ata3: EH complete (Drive related)
SMART Report
smartctl -a -d ata /dev/sdd
smartctl 5.40 2010-10-16 r3189 [i486-slackware-linux-gnu] (local build)
Copyright (C) 2002-10 by Bruce Allen, http://smartmontools.sourceforge.net

=== START OF INFORMATION SECTION ===
Device Model:     Hitachi HDS5C3020ALA632
Serial Number:    ML0220F30DKEJD
Firmware Version: ML6OA580
User Capacity:    2,000,398,934,016 bytes
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   8
ATA Standard is:  ATA-8-ACS revision 4
Local Time is:    Fri Feb 21 11:28:07 2014 MST
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status: (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled.
Self-test execution status: ( 0) The previous self-test routine completed without error or no self-test has ever been run.
Total time to complete Offline data collection: (22966) seconds.
Offline data collection capabilities: (0x5b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. No Conveyance Self-test supported. Selective Self-test supported.
SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
Short self-test routine recommended polling time: ( 1) minutes.
Extended self-test routine recommended polling time: ( 255) minutes.
SCT capabilities: (0x003d) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.

SMART Attributes Data Structure revision number: 16
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG   VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000b 100   100   016    Pre-fail Always  -           0
  2 Throughput_Performance  0x0005 133   133   054    Pre-fail Offline -           106
  3 Spin_Up_Time            0x0007 134   134   024    Pre-fail Always  -           409 (Average 409)
  4 Start_Stop_Count        0x0012 099   099   000    Old_age  Always  -           4097
  5 Reallocated_Sector_Ct   0x0033 100   100   005    Pre-fail Always  -           0
  7 Seek_Error_Rate         0x000b 100   100   067    Pre-fail Always  -           0
  8 Seek_Time_Performance   0x0005 144   144   020    Pre-fail Offline -           30
  9 Power_On_Hours          0x0012 099   099   000    Old_age  Always  -           13794
 10 Spin_Retry_Count        0x0013 100   100   060    Pre-fail Always  -           0
 12 Power_Cycle_Count       0x0032 100   100   000    Old_age  Always  -           837
192 Power-Off_Retract_Count 0x0032 097   097   000    Old_age  Always  -           4099
193 Load_Cycle_Count        0x0012 097   097   000    Old_age  Always  -           4099
194 Temperature_Celsius     0x0002 200   200   000    Old_age  Always  -           30 (Min/Max 17/40)
196 Reallocated_Event_Count 0x0032 100   100   000    Old_age  Always  -           0
197 Current_Pending_Sector  0x0022 100   100   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0008 100   100   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x000a 200   200   000    Old_age  Always  -           29

SMART Error Log Version: 1
ATA Error Count: 29 (device log contains only the most recent five errors)
  CR = Command Register [HEX]
  FR = Features Register [HEX]
  SC = Sector Count Register [HEX]
  SN = Sector Number Register [HEX]
  CL = Cylinder Low Register [HEX]
  CH = Cylinder High Register [HEX]
  DH = Device/Head Register [HEX]
  DC = Device Command Register [HEX]
  ER = Error register [HEX]
  ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes, SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 29 occurred at disk power-on lifetime: 13794 hours (574 days + 18 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  84 51 b0 c8 ce f1 04  Error: ICRC, ABRT 176 sectors at LBA = 0x04f1cec8 = 82955976
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --   ---------------  --------------------
  35 00 00 78 ce f1 e0 08      00:03:35.604  WRITE DMA EXT
  ca 00 c8 b0 e4 f1 e4 08      00:03:35.602  WRITE DMA
  35 00 00 b0 e0 f1 e0 08      00:03:35.600  WRITE DMA EXT
  35 00 00 b0 dc f1 e0 08      00:03:35.596  WRITE DMA EXT
  ca 00 f0 90 3d f1 e4 08      00:03:35.586  WRITE DMA

Error 28 occurred at disk power-on lifetime: 13794 hours (574 days + 18 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  84 51 40 70 56 ee 04  Error: ICRC, ABRT 64 sectors at LBA = 0x04ee5670 = 82728560
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --   ---------------  --------------------
  35 00 00 b0 54 ee e0 08      00:40:07.523  WRITE DMA EXT
  25 00 00 b0 68 ee e0 08      00:40:07.521  READ DMA EXT
  25 00 00 b0 64 ee e0 08      00:40:07.519  READ DMA EXT
  25 00 00 b0 60 ee e0 08      00:40:07.517  READ DMA EXT
  25 00 00 b0 5c ee e0 08      00:40:07.515  READ DMA EXT

Error 27 occurred at disk power-on lifetime: 13754 hours (573 days + 2 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  84 51 d0 00 ed da 00  Error: ICRC, ABRT 208 sectors at LBA = 0x00daed00 = 14347520
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --   ---------------  --------------------
  35 00 80 50 eb da e0 08      10:18:12.180  WRITE DMA EXT
  35 00 00 50 e9 da e0 08      10:18:12.179  WRITE DMA EXT
  35 00 00 50 e5 da e0 08      10:18:12.176  WRITE DMA EXT
  35 00 00 50 e1 da e0 08      10:18:12.174  WRITE DMA EXT
  35 00 00 50 dd da e0 08      10:18:12.172  WRITE DMA EXT

Error 26 occurred at disk power-on lifetime: 13754 hours (573 days + 2 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  84 51 e8 98 b5 95 00  Error: ICRC, ABRT 232 sectors at LBA = 0x0095b598 = 9811352
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --   ---------------  --------------------
  35 00 e8 98 b2 95 e0 08      09:58:38.187  WRITE DMA EXT
  35 00 00 98 ae 95 e0 08      09:58:38.185  WRITE DMA EXT
  35 00 00 98 aa 95 e0 08      09:58:38.182  WRITE DMA EXT
  35 00 00 98 a6 95 e0 08      09:58:38.180  WRITE DMA EXT
  35 00 18 78 a6 95 e0 08      09:58:38.180  WRITE DMA EXT

Error 25 occurred at disk power-on lifetime: 12847 hours (535 days + 7 hours)
  When the command that caused the error occurred, the device was active or idle.
  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  84 51 88 10 e4 5e 09  Error: ICRC, ABRT 136 sectors at LBA = 0x095ee410 = 157213712
  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --   ---------------  --------------------
  35 00 e8 b0 e1 5e e0 08      02:47:37.024  WRITE DMA EXT
  35 00 18 98 e0 5e e0 08      02:47:37.024  WRITE DMA EXT
  35 00 60 38 df 5e e0 08      02:47:37.023  WRITE DMA EXT
  35 00 a0 98 dc 5e e0 08      02:47:37.022  WRITE DMA EXT
  35 00 00 98 d8 5e e0 08      02:47:37.020  WRITE DMA EXT

SMART Self-test log structure revision number 1
Num  Test_Description  Status                   Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline     Completed without error  00%        92               -
# 2  Short offline     Completed without error  00%        18               -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
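One observation on the report above: every logged error is ICRC/ABRT and UDMA_CRC_Error_Count has a raw value of 29, which points at the link (cable, backplane, or port) rather than the platters. A simple check is to note that counter, run a copy that normally triggers the resets, and read it again; /dev/sdd matches the device in the report, adjust as needed:

    # If this raw value keeps climbing after copies, the link is still
    # corrupting transfers somewhere between the controller and the drive.
    smartctl -a -d ata /dev/sdd | grep UDMA_CRC_Error_Count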
  19. I just logged into my unRAID box through SSH and downloaded p95. But when I run the blend test it crashes immediately. No clue why.
  20. Please let me know how it goes with Prime95 via USB boot. I haven't been able to get p95 to work on my server at all, it always crashes immediately.
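One hedged suggestion for the immediate Prime95 crashes mentioned above: if the kernel's out-of-memory killer is ending the process, it normally leaves a message in the kernel log, which would at least confirm whether a failed memory allocation is really the cause:

    # Check for OOM-killer activity right after a Prime95 crash.
    dmesg | grep -iE "out of memory|oom"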
  21. Lots of people have had no problems; I just didn't want to take the chance.
  22. Every time ZyXel has some switch on sale I check out the reviews, and for every model people complain about them dying in 6 months, a year, or 2 years. RMA was quick and painless, but I'd rather never have to warranty anything in the first place. I've read of much better luck with Trendnet switches. In this case you get what you pay for, it seems.
  23. Likely none if you remove it from the enclosure. Though others have mentioned entering the SN into Seagate's website, and the drive does have some kind of OEM warranty. If I had to guess I'd say 2 years, since the NAS drive comes with a 3yr warranty.
  24. A Newegg review says the 5,900rpm ST4000DM000 is inside. Not the NAS drive.