Hiker

Members
  • Posts: 22

Hiker's Achievements

Noob (1/14) · Reputation: 0

  1. OK. It never happened before I started preclearing multiple drives in separate screen sessions. I guess it's just something I can live with.
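     For context, a rough sketch of kicking off two preclears in separate detached screen sessions (the script path, session names and device names are just examples):
        screen -dmS preclear1 /boot/preclear_disk.sh /dev/sdb   # first preclear in its own detached session
        screen -dmS preclear2 /boot/preclear_disk.sh /dev/sdc   # second preclear, separate session
        screen -ls                                              # list the running sessions
        screen -r preclear1                                     # reattach to watch progress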
  2. Once I finish watching something, shouldn't unRAID then clear the cache? It seems this is the root of the problem: I'd been watching a movie yesterday, and when I started the 2 preclears, that was enough to eventually cause the failures.
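     For background (assuming a stock Linux setup like unRAID's): memory used to hold recently read file data sits in the page cache, shows up as "cached", and is only reclaimed when something else actually needs the RAM, so it isn't freed the moment playback stops. A quick way to check whether the climb is cache rather than a leak:
        free -m                             # the "cached" column is reclaimable page cache
        sync                                # flush dirty pages first
        echo 3 > /proc/sys/vm/drop_caches   # as root: drop page cache, dentries and inodes
        free -m                             # the cached figure should fall right back down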
  3. Quick update: I've started watching a 10GB HD rip from the server and memory usage is climbing like crazy. After less than 5 mins of watching, it's climbed another 1GB (and still going).
  4. Addons: unMENU with monthly parity check, openssh/ssl, python, screen, SABnzbd / SickBeard / CouchPotato (all three manually installed). I did suspect CouchPotato might be the cause, actually. The server was preclearing 2x 2TB drives in separate screen sessions and, looking at the preclear finish logs, the last of them finished 4 secs after the last of these errors (there have been no such failures since the preclears finished). The server has 6GB at present. I just checked memory usage and it was around 5.8GB. I then stopped all python servers and checked again; it dropped to about 5.5GB. Stopping the array dropped it to around 370MB. Restarted the array, no real jump, but I'll monitor it. unRAID did report around 5GB of cached memory. Will leave it a day or so running the python servers and see if memory usage climbs again. I know CouchPotato in particular is known for memory leaks. Thanks for the reply.
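     A minimal sketch for watching it over the next day or so (the interval and commands are just one way to do it):
        while true; do
          date
          free -m | grep -E 'Mem|buffers/cache'     # real usage is the -/+ buffers/cache line
          ps -eo rss,comm --sort=-rss | head -n 5   # biggest resident processes (python servers etc.)
          sleep 300
        done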
  5. I'm running unRAID 5b12a and, looking at my syslog today, saw a ton of errors that I've never seen before. An example in my syslog (attached) is from Oct 03, 09:24:32. Any ideas on what's suddenly causing this? syslog-2011-10-03.txt
  6. Sounds good, Rob. I'd not had any issues at all; I just saw what was in syslog and thought it worth investigating. I have a couple of 1TB drives free in the box at the moment but didn't really want to use any of them as a cache drive, as I'm never moving ridiculously huge amounts of data to the server; certainly not more than 300GB at a time (though I do have one Samsung HD103SJ that would be nice for the performance: all my other drives are HD103UJ, and the HD103SJ is something in the order of 30% faster, from my own tests). I kinda wanted to keep the cache drive off the "regular" SATA ports and use those for the array. Purely personal preference, not any particular reason.
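     For what it's worth, one common way to compare raw read speed between two drives is hdparm on an otherwise idle box (the device names are placeholders):
        hdparm -tT /dev/sdb   # buffered and cached read timings, e.g. the HD103UJ
        hdparm -tT /dev/sdc   # same test on the HD103SJ for comparison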
  7. It would have been on a Gigabyte mobo at some point. I recently removed the HPA using a Windows machine. I've moved SATA ports as well as replacing the cable, but I'll move it back to the same port and test whether it's the cable next time I have a lot of data to move around. Thanks.
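     A quick sketch for double-checking the HPA really is gone once the drive is back in the unRAID box (/dev/sdX is a placeholder):
        hdparm -N /dev/sdX   # if the two "max sectors" figures match, no HPA is set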
  8. Hmm. Not moved the SATA port yet, but this error actually happens during the process. Looking at my syslog, I get the error in the middle of the file copy, seemingly between files (i.e. it will be moving data for, say, 1hr, then between finishing one file and starting the next I get that error; it then continues and eventually finishes).
  9. I have one "regular" SATA port free so I'll move the drive there and see if I get the same issue I guess.
  10. Here's smartstatus output (hope this isn't TOO long to post). Not really sure what it indicates.
smartctl version 5.38 [i486-slackware-linux-gnu] Copyright © 2002-8 Bruce Allen
Home page is http://smartmontools.sourceforge.net/

=== START OF INFORMATION SECTION ===
Model Family:     Seagate Barracuda 7200.8 family
Device Model:     ST3300831AS
Serial Number:    3NF0CJ94
Firmware Version: 3.02
User Capacity:    300,069,052,416 bytes
Device is:        In smartctl database [for details use: -P show]
ATA Version is:   7
ATA Standard is:  Exact ATA specification draft version not indicated
Local Time is:    Sun Nov 29 01:35:41 2009 GMT-10
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82) Offline data collection activity was completed without error.
                                        Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0) The previous self-test routine completed without error
                                        or no self-test has ever been run.
Total time to complete Offline
data collection:                 ( 430) seconds.
Offline data collection
capabilities:                    (0x5b) SMART execute Offline immediate.
                                        Auto Offline data collection on/off support.
                                        Suspend Offline collection upon new command.
                                        Offline surface scan supported.
                                        Self-test supported.
                                        No Conveyance Self-test supported.
                                        Selective Self-test supported.
SMART capabilities:            (0x0003) Saves SMART data before entering power-saving mode.
                                        Supports SMART auto save timer.
Error logging capability:        (0x01) Error logging supported.
                                        General Purpose Logging supported.
Short self-test routine
recommended polling time:        (   1) minutes.
Extended self-test routine
recommended polling time:        ( 101) minutes.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG    VALUE WORST THRESH TYPE     UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f  060   049   006    Pre-fail Always  -           32461468
  3 Spin_Up_Time            0x0003  098   097   000    Pre-fail Always  -           0
  4 Start_Stop_Count        0x0032  100   100   020    Old_age  Always  -           444
  5 Reallocated_Sector_Ct   0x0033  100   100   036    Pre-fail Always  -           0
  7 Seek_Error_Rate         0x000f  088   060   030    Pre-fail Always  -           718999378
  9 Power_On_Hours          0x0032  083   083   000    Old_age  Always  -           15757
 10 Spin_Retry_Count        0x0013  100   100   097    Pre-fail Always  -           0
 12 Power_Cycle_Count       0x0032  100   100   020    Old_age  Always  -           448
194 Temperature_Celsius     0x0022  037   053   000    Old_age  Always  -           37 (0 13 0 0)
195 Hardware_ECC_Recovered  0x001a  060   047   000    Old_age  Always  -           32461468
197 Current_Pending_Sector  0x0012  100   100   000    Old_age  Always  -           0
198 Offline_Uncorrectable   0x0010  100   100   000    Old_age  Offline -           0
199 UDMA_CRC_Error_Count    0x003e  200   200   000    Old_age  Always  -           0
200 Multi_Zone_Error_Rate   0x0000  100   253   000    Old_age  Offline -           0
202 TA_Increase_Count       0x0032  100   253   000    Old_age  Always  -           0

SMART Error Log Version: 1
ATA Error Count: 6 (device log contains only the most recent five errors)
    CR = Command Register [HEX]
    FR = Features Register [HEX]
    SC = Sector Count Register [HEX]
    SN = Sector Number Register [HEX]
    CL = Cylinder Low Register [HEX]
    CH = Cylinder High Register [HEX]
    DH = Device/Head Register [HEX]
    DC = Device Command Register [HEX]
    ER = Error register [HEX]
    ST = Status register [HEX]
Powered_Up_Time is measured from power on, and printed as
DDd+hh:mm:SS.sss where DD=days, hh=hours, mm=minutes,
SS=sec, and sss=millisec. It "wraps" after 49.710 days.

Error 6 occurred at disk power-on lifetime: 12653 hours (527 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 01 ee b8 ee e2

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  37 00 01 ee b8 ee e2 00      00:04:15.122  SET MAX ADDRESS EXT
  27 00 00 ee b8 ee e0 00      00:04:14.958  READ NATIVE MAX ADDRESS EXT
  37 00 00 ee b8 ee e2 00      00:04:14.958  SET MAX ADDRESS EXT
  27 00 01 ee b8 ee e0 00      00:04:14.958  READ NATIVE MAX ADDRESS EXT
  37 00 01 ee b8 ee e2 00      00:04:11.243  SET MAX ADDRESS EXT

Error 5 occurred at disk power-on lifetime: 12653 hours (527 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 01 ee b8 ee e2

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  37 00 01 ee b8 ee e2 00      00:04:15.122  SET MAX ADDRESS EXT
  27 00 00 ee b8 ee e0 00      00:04:14.958  READ NATIVE MAX ADDRESS EXT
  37 00 00 ee b8 ee e2 00      00:04:14.958  SET MAX ADDRESS EXT
  27 00 01 2f c1 ee e0 00      00:04:14.958  READ NATIVE MAX ADDRESS EXT
  29 00 01 2f c1 ee e0 00      00:04:11.243  READ MULTIPLE EXT

Error 4 occurred at disk power-on lifetime: 12653 hours (527 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 01 ee b8 ee e2

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  37 00 01 ee b8 ee e2 00      00:03:33.017  SET MAX ADDRESS EXT
  27 00 00 ee b8 ee e0 00      00:03:33.017  READ NATIVE MAX ADDRESS EXT
  37 00 00 ee b8 ee e2 00      00:03:28.608  SET MAX ADDRESS EXT
  27 00 01 ee b8 ee e0 00      00:03:28.574  READ NATIVE MAX ADDRESS EXT
  37 00 01 ee b8 ee e2 00      00:03:28.574  SET MAX ADDRESS EXT

Error 3 occurred at disk power-on lifetime: 12653 hours (527 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 01 ee b8 ee e2

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  37 00 01 ee b8 ee e2 00      00:03:33.017  SET MAX ADDRESS EXT
  27 00 00 ee b8 ee e0 00      00:03:33.017  READ NATIVE MAX ADDRESS EXT
  37 00 00 ee b8 ee e2 00      00:03:28.608  SET MAX ADDRESS EXT
  27 00 01 2f c1 ee e0 00      00:03:28.574  READ NATIVE MAX ADDRESS EXT
  29 00 01 2f c1 ee e0 00      00:03:28.574  READ MULTIPLE EXT

Error 2 occurred at disk power-on lifetime: 12653 hours (527 days + 5 hours)
  When the command that caused the error occurred, the device was active or idle.

  After command completion occurred, registers were:
  ER ST SC SN CL CH DH
  -- -- -- -- -- -- --
  10 51 01 ee b8 ee e2

  Commands leading to the command that caused the error were:
  CR FR SC SN CL CH DH DC   Powered_Up_Time  Command/Feature_Name
  -- -- -- -- -- -- -- --  ----------------  --------------------
  37 00 01 ee b8 ee e2 00      00:01:33.104  SET MAX ADDRESS EXT
  27 00 00 ee b8 ee e0 00      00:01:33.104  READ NATIVE MAX ADDRESS EXT
  37 00 00 ee b8 ee e2 00      00:01:33.077  SET MAX ADDRESS EXT
  27 00 01 ee b8 ee e0 00      00:01:39.423  READ NATIVE MAX ADDRESS EXT
  37 00 01 ee b8 ee e2 00      00:01:39.423  SET MAX ADDRESS EXT

SMART Self-test log structure revision number 1
Num  Test_Description    Status                    Remaining  LifeTime(hours)  LBA_of_first_error
# 1  Short offline       Completed without error       00%          15757      -
# 2  Short offline       Completed without error       00%          15757      -
# 3  Short offline       Completed without error       00%          15755      -
# 4  Short offline       Completed without error       00%          15755      -

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
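     For reference, a report like the one above can be pulled with smartctl (the device name is a placeholder; some controllers may also need -d ata):
        smartctl -a /dev/sdX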
  11. This is when mover is started to offload data from the cache drive to the array (colours match syslog):
Nov 28 17:50:23 BigNAS kernel: ata7.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Nov 28 17:50:23 BigNAS kernel: ata7.00: cmd b0/d0:01:00:4f:c2/00:00:00:00:00/00 tag 0 pio 512 in
Nov 28 17:50:23 BigNAS kernel: res 40/00:00:00:4f:c2/00:00:00:4f:c2/00 Emask 0x4 (timeout)
Nov 28 17:50:23 BigNAS kernel: ata7.00: status: { DRDY }
Nov 28 17:50:23 BigNAS kernel: ata7: hard resetting link
Nov 28 17:50:23 BigNAS kernel: ata7: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Nov 28 17:50:23 BigNAS kernel: ata7.00: link online but device misclassifed
Nov 28 17:50:23 BigNAS kernel: ata7.00: configured for UDMA/133
Nov 28 17:50:23 BigNAS kernel: ata7: EH complete
Time to dump this cache drive? It gets past the error, which seems to randomly happen, and finishes moving data.
Unraid 4.5b11, Asus P5Q Pro, Seagate drive on the orange P5Q Pro SATA port
  12. Had some time, so I connected my Win 7 64-bit machine directly to my unRAID. After setting cfq scheduling, I tested and saw no difference. Reading a folder can be REALLY slow, to the point that my DOpus window (awesome app!) fades and becomes "Not Responding" for a while, then returns to normal (til I try to browse again). Is there something I'm not doing right? Perhaps I'm mounting the shares incorrectly on the Win7 box? Things seem to be fine in syslog.
  13. Thanks for an excellent response. Kind of made me smile for some reason.
  14. I'd removed the HPA this morning from one of my 1TB drives and forgot I'd started a parity check, basically. Using unRAID 4.5b11. Just looking through the NFS performance thread now; I'll see if setting cfq scheduling makes any difference. Won't really get to test now though, as I've just had my gbit switch die and am stuck with 100Mbit til probably tomorrow.
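     For reference, a minimal sketch of switching a disk to the cfq scheduler (the device name is a placeholder, and the setting doesn't survive a reboot):
        cat /sys/block/sdb/queue/scheduler        # the active scheduler is shown in [brackets]
        echo cfq > /sys/block/sdb/queue/scheduler # switch this disk to cfq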