spinbot

Everything posted by spinbot

  1. These steps worked:
     - Take a screenshot of the current array.
     - Go to Tools and click New Config.
     - Reassign all disks, using the old disk9; don't assign parity for now, leave it empty.
     - Start the array and check whether all the data is there, especially disks 9 and 10.
     - If all looks fine, stop the array, assign parity, and start the array to begin the parity sync.

     Both the original disk9 and disk10 had all their data on them, so I am now just waiting for the parity to build. Only one small hiccup: I lost power during the first attempt, since I had the server on my "bench" for cleaning and pulling drives and didn't have it on its UPS. At the moment it's done syncing all my data drives (the largest was 2TB) and is just zeroing out the remaining 2TB of the parity drive. In 3 hours it should be golden! I will take the advice from earlier and upgrade to whichever version of 6 is official or stable; I'm pretty sure I can find a thread on that process. Then I will have to replace my parity drive (thanks to that HPA crap), as my one spare 4TB data drive is slightly larger. Then I can integrate the smaller 4TB into the array to replace my smallest drive, and repeat the process for two other drives I have that are bigger than those in the server now. Weeks of fun! Thanks for the help; this issue is officially resolved now!
  2. I usually worry about sustained temps over 50C, but being conservative on this is better than liberal! The drive with the unrealistic temperature, I'm guessing it's reporting in Fahrenheit, based on the math you converted with. Darn Americans, if only the world could get them to change to metric and end confusion like this! lol I'm just running the long SMART test on disk10. I'll have the results of it by morning. If all looks "safe", I will attempt the steps you suggested earlier. Regardless of my outcome, I would like to thank you now for the assistance, guidance, hand-holding, and reassurances. They all prove invaluable when I'm working in an area outside my comfort zone, especially being a person with OCD tendencies. We hate to "try" something unless we know the outcome ahead of time. Risk taking is not one of my best qualities, thanks to 3/4 of my life being plagued with health issues and trying to avoid any further problems. Life's a bitch, then you date one, then you find Ms. Wright! (Yeah, my girlfriend's last name is "Wright", so how can I go wrong with that!)
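Aside: the Fahrenheit guess above is only speculation, but the conversion itself is the standard formula. A quick sketch (treating disk9's 123 as Fahrenheit is an assumption, not something the SMART report says):

```python
def f_to_c(f):
    """Convert degrees Fahrenheit to degrees Celsius."""
    return (f - 32) * 5.0 / 9.0

# If the "123" were really Fahrenheit, it would be about 50.6C,
# a believable (if warm) drive temperature.
print(round(f_to_c(123), 1))  # 50.6
```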
  3. So the original drive with issues seems to look good after 3 SMART tests (one was aborted by my mistake). I see nothing that worries me, beyond the fact that it fell out of the array initially for reasons I am unsure of.

     EDIT: I'm starting to think temperature may be the culprit in all my issues. Despite having three 4-in-3 drive cages in the server, each with its own intake fan in front, the drives in the middle were hitting higher temperatures than those in the top and bottom. I checked the fan and it's running. I cleaned out all the filters, as they were dusty. I've been running a window fan right in front of the server during all the current issues, and the drive temps have been perfect.

     With respect to the two drives in question, disk10's short SMART report is showing the drive's worst as 54C:

     194 Temperature_Celsius     0x0022   030   054   000    Old_age   Always       -       30 (0 7 0 0 0)

     Disk9's long SMART report must be faulty, as it says the temperature worst is 102C yet the current temp is 123C:

     194 Temperature_Celsius     0x0022   123   102   000    Old_age   Always       -       29

     Disk9 long SMART report:

     smartctl 6.2 2013-07-26 r3841 [i686-linux-3.9.11p-unRAID] (local build)
     Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF INFORMATION SECTION ===
     Model Family:     Western Digital Caviar Green (AF)
     Device Model:     WDC WD15EARS-00J2GB0
     Serial Number:    WD-WCAYY0273566
     LU WWN Device Id: 5 0014ee 2afdf0659
     Firmware Version: 80.00A80
     User Capacity:    1,500,301,910,016 bytes [1.50 TB]
     Sector Size:      512 bytes logical/physical
     Device is:        In smartctl database [for details use: -P show]
     ATA Version is:   ATA8-ACS (minor revision not indicated)
     SATA Version is:  SATA 2.6, 3.0 Gb/s
     Local Time is:    Sun Mar 27 21:08:11 2016 EDT
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled

     === START OF READ SMART DATA SECTION ===
     SMART overall-health self-assessment test result: PASSED

     General SMART Values:
     Offline data collection status:  (0x84) Offline data collection activity was suspended by an interrupting command from host. Auto Offline Data Collection: Enabled.
     Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
     Total time to complete Offline data collection: (34560) seconds.
     Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
     SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
     Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
     Short self-test routine recommended polling time:      (   2) minutes.
     Extended self-test routine recommended polling time:   ( 394) minutes.
     Conveyance self-test routine recommended polling time: (   5) minutes.
     SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported.
     SMART Attributes Data Structure revision number: 16
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
       1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
       3 Spin_Up_Time            0x0027   220   154   021    Pre-fail  Always       -       5958
       4 Start_Stop_Count        0x0032   096   096   000    Old_age   Always       -       4207
       5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
       7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
       9 Power_On_Hours          0x0032   045   045   000    Old_age   Always       -       40340
      10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
      11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
      12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       122
     192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       44
     193 Load_Cycle_Count        0x0032   193   193   000    Old_age   Always       -       23923
     194 Temperature_Celsius     0x0022   123   102   000    Old_age   Always       -       29
     196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
     197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
     198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
     199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
     200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

     SMART Error Log Version: 1
     No Errors Logged

     SMART Self-test log structure revision number 1
     Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
     # 1  Extended offline    Completed without error       00%         40334         -
     # 2  Extended offline    Aborted by host               90%         40329         -
     # 3  Extended offline    Completed without error       00%         40324         -
     # 4  Short offline       Completed without error       00%         40236         -

     SMART Selective self-test log data structure revision number 1
      SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
         1        0        0  Not_testing
         2        0        0  Not_testing
         3        0        0  Not_testing
         4        0        0  Not_testing
         5        0        0  Not_testing
     Selective self-test flags (0x0):
       After scanning selected spans, do NOT read-scan remainder of disk.
     If Selective self-test is pending on power-up, resume after 0 minute delay.
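A note on reading those 194 Temperature_Celsius rows: in smartctl's attribute table the VALUE and WORST columns are vendor-normalized scores, and the trailing RAW_VALUE is the actual temperature in °C, which would explain the seemingly impossible 123/102 pair. A small parsing sketch (the field layout is taken from the rows above; the helper name is mine):

```python
def parse_attr(line):
    """Split one smartctl attribute row into named fields (sketch)."""
    parts = line.split()
    return {
        "id": int(parts[0]),
        "name": parts[1],
        "value": int(parts[3]),   # vendor-normalized current score
        "worst": int(parts[4]),   # vendor-normalized worst score
        "raw": parts[9],          # raw value (here, the real temp in C)
    }

row = parse_attr("194 Temperature_Celsius 0x0022 123 102 000 Old_age Always - 29")
# On this reading, the drive is actually at 29C; 123 and 102 are scores.
print(row["raw"], row["value"], row["worst"])  # 29 123 102
```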
  4. I do agree I need to upgrade to version 6. Historically I was only upgrading the software each time I needed support for larger hard drives, but I will make an effort to stay more current, for the various reasons you mentioned. I bailed out of UnMenu: I waited the 6 hours thinking it would display the results of the long SMART test, but instead, when I reloaded the page, it seemed to just start it all over again. This time I used the command line UnMenu showed, smartctl -t long -d ata /dev/sdi 2>&1, and it was running:

     Testing has begun.
     Please wait 394 minutes for test to complete.
     Test will complete after Sun Mar 27 16:57:24 2016

     I just have to figure out how to display the results. I was looking through the wiki and I believe one of these gives me the results: smartctl -a -A /dev/sdi or smartctl -a -d ata /dev/sdi. I'll give them a try after the estimated time of completion.
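That printed completion time is just the start time plus the 394-minute polling estimate from the report. A sketch of the arithmetic, assuming a start around 10:23:24 that morning (back-computed by me, not logged anywhere):

```python
from datetime import datetime, timedelta

# Assumed start time, back-computed from the printed completion time.
start = datetime(2016, 3, 27, 10, 23, 24)
eta = start + timedelta(minutes=394)  # extended self-test polling estimate
print(eta.strftime("%a %b %d %H:%M:%S %Y"))  # Sun Mar 27 16:57:24 2016
```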
  5. The idea was that if the original disk9 (WDC_WD15EARS-00_WD-WCAYY0273566, 1.5TB) I put in was truly fine, and let's just say it was a loose cable or one tiny thing that triggered its stoppage, then is there a way to return it to the array in the slot of the 4TB I tried to rebuild in its place? If it can go back in, then maybe the disk10 (ST31500341AS_9VS2L1L4) that somehow came up with errors during the previous drive's repair, and then somehow lost 2/3 of its information, could be restored?

     Here's another odd observation: when I view the drives not in the protected array (using UnMenu's Disk Management section), the two drives mentioned above are listed as 1TB in size, yet both are actually 1.5TB, as you can see from their model/serial numbers:

     Model/Serial                      Temp  Size  Device    Mounted File System
     WDC_WD15EARS-00_WD-WCAYY0273566   26°C  1T    /dev/sdi  partition (1,465,138,552 blocks): /dev/sdi1 reiserfs
     ST31500341AS_9VS2L1L4             28°C  1T    /dev/sdn  partition (1,465,138,552 blocks): /dev/sdn1 reiserfs

     I am running the long SMART test on the original "disk9" right now, using the option in UnMenu, which appears to run the command line: smartctl -t long -d ata /dev/sdi 2>&1. I don't believe I've run this test before, so I am not sure how long it will take before results are presented. I will post them once they appear, or in the morning should it take a while (I'm GMT-5:00, Eastern Time Zone; it's currently 00:34, Mar 27).
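One possible explanation for the 1T-vs-1.5TB listing (my guess, not confirmed): the partition sizes shown are counts of 1 KiB blocks, and 1,465,138,552 of them is about 1.50 TB in decimal units but only about 1.36 TiB in binary units, which a display that truncates binary units would show as "1T":

```python
blocks = 1_465_138_552               # 1 KiB blocks, from the UnMenu listing
size_bytes = blocks * 1024
print(size_bytes)                    # 1500301877248, i.e. ~1.50 TB decimal
print(round(size_bytes / 2**40, 2))  # 1.36 TiB binary, truncating to "1T"
```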
  6. The only "extra" items I have are single SATA cables (not breakout cables) and hard drives. What information in the first SMART report I posted brought you to the conclusion that it was a problem? I've got to try and figure this out with the limited options I have, in the hope of not having to go to the basement, dig out hundreds of DVDs, and re-rip them. If disk9's original 1.5TB HDD, the original problem, is actually still functioning, is it too late to put it back into the array now that the server thinks a 4TB drive replaced it? I can also check, or just replace, the SATA cable on disk10. If it's on a breakout cable, then I would have to order one online. If this makes any difference, I did make a backup of my flash drive before these drive swaps started (i.e. taken on Mar 21).
  7. This is the drive that started the problems some time ago. As I had spare drives, I initially tried replacing it with a 4TB drive, but that one ended up being slightly bigger than my parity drive, so I ended up replacing the parity instead. I then used the old parity drive to replace the drive I am posting here:

     SMART status Info for /dev/sdi

     smartctl 6.2 2013-07-26 r3841 [i686-linux-3.9.11p-unRAID] (local build)
     Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF INFORMATION SECTION ===
     Model Family:     Western Digital Caviar Green (AF)
     Device Model:     WDC WD15EARS-00J2GB0
     Serial Number:    WD-WCAYY0273566
     LU WWN Device Id: 5 0014ee 2afdf0659
     Firmware Version: 80.00A80
     User Capacity:    1,500,301,910,016 bytes [1.50 TB]
     Sector Size:      512 bytes logical/physical
     Device is:        In smartctl database [for details use: -P show]
     ATA Version is:   ATA8-ACS (minor revision not indicated)
     SATA Version is:  SATA 2.6, 3.0 Gb/s
     Local Time is:    Thu Mar 24 11:02:23 2016 EDT
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled

     === START OF READ SMART DATA SECTION ===
     SMART overall-health self-assessment test result: PASSED

     General SMART Values:
     Offline data collection status:  (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
     Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
     Total time to complete Offline data collection: (34560) seconds.
     Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
     SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
     Error logging capability: (0x01) Error logging supported.
     General Purpose Logging supported.
     Short self-test routine recommended polling time:      (   2) minutes.
     Extended self-test routine recommended polling time:   ( 394) minutes.
     Conveyance self-test routine recommended polling time: (   5) minutes.
     SCT capabilities: (0x3031) SCT Status supported. SCT Feature Control supported. SCT Data Table supported.

     SMART Attributes Data Structure revision number: 16
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
       1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
       3 Spin_Up_Time            0x0027   220   154   021    Pre-fail  Always       -       5966
       4 Start_Stop_Count        0x0032   096   096   000    Old_age   Always       -       4206
       5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
       7 Seek_Error_Rate         0x002e   100   253   000    Old_age   Always       -       0
       9 Power_On_Hours          0x0032   045   045   000    Old_age   Always       -       40258
      10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
      11 Calibration_Retry_Count 0x0032   100   100   000    Old_age   Always       -       0
      12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       121
     192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       44
     193 Load_Cycle_Count        0x0032   193   193   000    Old_age   Always       -       23910
     194 Temperature_Celsius     0x0022   126   102   000    Old_age   Always       -       26
     196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
     197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       1
     198 Offline_Uncorrectable   0x0030   200   200   000    Old_age   Offline      -       0
     199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
     200 Multi_Zone_Error_Rate   0x0008   200   200   000    Old_age   Offline      -       0

     SMART Error Log Version: 1
     No Errors Logged

     SMART Self-test log structure revision number 1
     Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
     # 1  Short offline       Completed without error       00%         40236         -

     SMART Selective self-test log data structure revision number 1
      SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
         1        0        0  Not_testing
         2        0        0  Not_testing
         3        0        0  Not_testing
         4        0        0  Not_testing
         5        0        0  Not_testing
     Selective self-test flags (0x0):
       After scanning selected spans, do NOT read-scan remainder of disk.
     If Selective self-test is pending on power-up, resume after 0 minute delay.

     Here is the 4TB drive I tried to rebuild the above onto:

     Statistics for /dev/sdf WDC_WD40EFRX-68WT0N0_WD-WCC4E1333752
     smartctl -a -d ata /dev/sdf

     smartctl 6.2 2013-07-26 r3841 [i686-linux-3.9.11p-unRAID] (local build)
     Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF INFORMATION SECTION ===
     Device Model:     WDC WD40EFRX-68WT0N0
     Serial Number:    WD-WCC4E1333752
     LU WWN Device Id: 5 0014ee 209efb4a8
     Firmware Version: 80.00A80
     User Capacity:    4,000,785,948,160 bytes [4.00 TB]
     Sector Sizes:     512 bytes logical, 4096 bytes physical
     Rotation Rate:    5400 rpm
     Device is:        Not in smartctl database [for details use: -P showall]
     ATA Version is:   ACS-2 (minor revision not indicated)
     SATA Version is:  SATA 3.0, 6.0 Gb/s (current: 3.0 Gb/s)
     Local Time is:    Thu Mar 24 11:06:05 2016 EDT
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled

     === START OF READ SMART DATA SECTION ===
     SMART overall-health self-assessment test result: PASSED

     General SMART Values:
     Offline data collection status:  (0x00) Offline data collection activity was never started. Auto Offline Data Collection: Disabled.
     Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
     Total time to complete Offline data collection: (52080) seconds.
     Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
     SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
     Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
     Short self-test routine recommended polling time:      (   2) minutes.
     Extended self-test routine recommended polling time:   ( 521) minutes.
     Conveyance self-test routine recommended polling time: (   5) minutes.
     SCT capabilities: (0x703d) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.

     SMART Attributes Data Structure revision number: 16
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
       1 Raw_Read_Error_Rate     0x002f   200   200   051    Pre-fail  Always       -       0
       3 Spin_Up_Time            0x0027   180   178   021    Pre-fail  Always       -       7983
       4 Start_Stop_Count        0x0032   100   100   000    Old_age   Always       -       186
       5 Reallocated_Sector_Ct   0x0033   200   200   140    Pre-fail  Always       -       0
       7 Seek_Error_Rate         0x002e   200   200   000    Old_age   Always       -       0
       9 Power_On_Hours          0x0032   097   097   000    Old_age   Always       -       2666
      10 Spin_Retry_Count        0x0032   100   100   000    Old_age   Always       -       0
      11 Calibration_Retry_Count 0x0032   100   253   000    Old_age   Always       -       0
      12 Power_Cycle_Count       0x0032   100   100   000    Old_age   Always       -       15
     192 Power-Off_Retract_Count 0x0032   200   200   000    Old_age   Always       -       3
     193 Load_Cycle_Count        0x0032   200   200   000    Old_age   Always       -       381
     194 Temperature_Celsius     0x0022   127   104   000    Old_age   Always       -       25
     196 Reallocated_Event_Count 0x0032   200   200   000    Old_age   Always       -       0
     197 Current_Pending_Sector  0x0032   200   200   000    Old_age   Always       -       0
     198 Offline_Uncorrectable   0x0030   100   253   000    Old_age   Offline      -       0
     199 UDMA_CRC_Error_Count    0x0032   200   200   000    Old_age   Always       -       0
     200 Multi_Zone_Error_Rate   0x0008   100   253   000    Old_age   Offline      -       0

     SMART Error Log Version: 1
     No Errors Logged

     SMART Self-test log structure revision number 1
     No self-tests have been logged.
     [To run self-tests, use: smartctl -t]

     SMART Selective self-test log data structure revision number 1
      SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
         1        0        0  Not_testing
         2        0        0  Not_testing
         3        0        0  Not_testing
         4        0        0  Not_testing
         5        0        0  Not_testing
     Selective self-test flags (0x0):
       After scanning selected spans, do NOT read-scan remainder of disk.
     If Selective self-test is pending on power-up, resume after 0 minute delay.

     Here is the "disk10" that had all the errors during the rebuild of disk9 (above):

     SMART status Info for /dev/sdo

     smartctl 6.2 2013-07-26 r3841 [i686-linux-3.9.11p-unRAID] (local build)
     Copyright © 2002-13, Bruce Allen, Christian Franke, www.smartmontools.org

     === START OF INFORMATION SECTION ===
     Model Family:     Seagate Barracuda 7200.11
     Device Model:     ST31500341AS
     Serial Number:    9VS2L1L4
     LU WWN Device Id: 5 000c50 015af8095
     Firmware Version: CC1H
     User Capacity:    1,500,301,910,016 bytes [1.50 TB]
     Sector Size:      512 bytes logical/physical
     Rotation Rate:    7200 rpm
     Device is:        In smartctl database [for details use: -P show]
     ATA Version is:   ATA8-ACS T13/1699-D revision 4
     SATA Version is:  SATA 2.6, 3.0 Gb/s
     Local Time is:    Thu Mar 24 11:09:23 2016 EDT
     SMART support is: Available - device has SMART capability.
     SMART support is: Enabled

     === START OF READ SMART DATA SECTION ===
     SMART overall-health self-assessment test result: PASSED

     General SMART Values:
     Offline data collection status:  (0x82) Offline data collection activity was completed without error. Auto Offline Data Collection: Enabled.
     Self-test execution status:      (   0) The previous self-test routine completed without error or no self-test has ever been run.
     Total time to complete Offline data collection: (  609) seconds.
     Offline data collection capabilities: (0x7b) SMART execute Offline immediate. Auto Offline data collection on/off support. Suspend Offline collection upon new command. Offline surface scan supported. Self-test supported. Conveyance Self-test supported. Selective Self-test supported.
     SMART capabilities: (0x0003) Saves SMART data before entering power-saving mode. Supports SMART auto save timer.
     Error logging capability: (0x01) Error logging supported. General Purpose Logging supported.
     Short self-test routine recommended polling time:      (   1) minutes.
     Extended self-test routine recommended polling time:   ( 299) minutes.
     Conveyance self-test routine recommended polling time: (   2) minutes.
     SCT capabilities: (0x103f) SCT Status supported. SCT Error Recovery Control supported. SCT Feature Control supported. SCT Data Table supported.

     SMART Attributes Data Structure revision number: 10
     Vendor Specific SMART Attributes with Thresholds:
     ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
       1 Raw_Read_Error_Rate     0x000f   118   099   006    Pre-fail  Always       -       185620484
       3 Spin_Up_Time            0x0003   100   092   000    Pre-fail  Always       -       0
       4 Start_Stop_Count        0x0032   096   096   020    Old_age   Always       -       4420
       5 Reallocated_Sector_Ct   0x0033   099   099   036    Pre-fail  Always       -       53
       7 Seek_Error_Rate         0x000f   074   060   030    Pre-fail  Always       -       25605074
       9 Power_On_Hours          0x0032   043   043   000    Old_age   Always       -       50207
      10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       8
      12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       146
     184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
     187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
     188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       1
     189 High_Fly_Writes         0x003a   001   001   000    Old_age   Always       -       270
     190 Airflow_Temperature_Cel 0x0022   071   046   045    Old_age   Always       -       29 (Min/Max 28/38)
     194 Temperature_Celsius     0x0022   029   054   000    Old_age   Always       -       29 (0 7 0 0 0)
     195 Hardware_ECC_Recovered  0x001a   040   030   000    Old_age   Always       -       185620484
     197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
     198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
     199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       3
     240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       271673861352864
     241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       2254842753
     242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       3816245401

     SMART Error Log Version: 1
     No Errors Logged

     SMART Self-test log structure revision number 1
     Num  Test_Description    Status                  Remaining  LifeTime(hours)  LBA_of_first_error
     # 1  Short offline       Completed without error       00%         38352         -

     SMART Selective self-test log data structure revision number 1
      SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
         1        0        0  Not_testing
         2        0        0  Not_testing
         3        0        0  Not_testing
         4        0        0  Not_testing
         5        0        0  Not_testing
     Selective self-test flags (0x0):
       After scanning selected spans, do NOT read-scan remainder of disk.
     If Selective self-test is pending on power-up, resume after 0 minute delay.
  8. Things have become more "challenging" now, and I believe the potential for data loss has risen. The rebuild of the former 1.5TB drive onto the 4TB drive completed, however with three negative observations:

     1. Despite the array showing up and running okay, when I check disk9's contents it doesn't have 1.5TB of data. It actually has under 400GB.
     2. My parity status is listed as: "Parity is Valid. Last parity check 16885 days ago. Parity updated 172293584 times to address sync errors." (That means my last parity check was done before I was born.)
     3. During the rebuild of disk9, for some reason my device status shows disk10 as having the same number of writes to it as disk9 had, however with a huge error rate: writes: 18842983, errors: 154394030.

     Why the array was trying to write to disk10 during the rebuild of disk9 makes no sense to me. As I type this, disk10 was just kicked out of the array, which is surely from the error counts; however, prior to it dropping I did check the files and, like disk9, it was well below the 1.5TB it should have had. I'm starting to get that drowning feeling as things go from concerning to very concerning (as I am fully aware that 2 drives going down is unrecoverable). I defer to those of higher skill and seek your advice as to what might be happening and the best course of action moving forward. Thanks in advance for any advice/assistance you can provide. A boat anchor is not an option, as too many hundreds of hours went into building this and transferring DVDs to digital form! I went to download and attach my syslog, but I don't have anything in it, which is another suspicious matter. I checked through both UnMenu as well as the main unRAID GUI interface.
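A side note on that bizarre "16885 days ago": assuming the status was read on March 25, 2016 (my guess from the surrounding post dates), 16885 days earlier lands exactly on January 1, 1970, the Unix epoch, so the parity-check timestamp was probably just zero/uninitialized rather than genuinely prehistoric:

```python
from datetime import date, timedelta

read_date = date(2016, 3, 25)             # assumed date the status was read
print(read_date - timedelta(days=16885))  # 1970-01-01, the Unix epoch
```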
  9. Without doing anything but powering down the server and hooking up the 4TB drive that I previously cancelled the rebuild on, the drive automatically went back to rebuilding itself. I guess I will let this ride out as long as the estimated time remains reasonable (800 minutes) and the "error" count doesn't increase. Assuming it completes this time, I would think my next step would be to run a parity check without correcting errors. If it passes, then maybe run a full parity check again and do some testing of the drive, just as an additional reliability check.
  10. Thanks for the info. I think I will start with hooking up the 4TB drive I had issues with again, but before trying to rebuild the data onto it, I am going to do some testing on it. I'll run a preclear cycle and check some of the drive info before and after the test to see whether the error rates have risen. This will take a few days to perform, given the size of the drive! I'll be back, hopefully with good news.
  11. At this point, would I be best off preclearing the drive that I had to abort the rebuild on, assuming it was okay and the rebuild issue was related to some other factor? My only other available drive is larger than the parity drive, so that would present a new challenge: having to replace the parity drive first before being able to rebuild the other drive. If I preclear the 4TB that the array now thinks is in the slot and then try to rebuild onto it, I am starting to question whether the parity drive still has the necessary info to rebuild the 1.5TB of lost data, or whether it got changed during my first rebuild attempt (hence its expectation that I should be putting in a 4TB for the rebuild).

      EDIT: What if the drive that started this process is actually still okay? It's still connected to the array, and I am able to run SMART status and hdparm on it. Although I wouldn't be able to set this drive back to the slot the server thinks it should be in (disk9), would I be able to add it like a new disk (disk12) and rebuild the parity? I realize that if the drive really is bad, then I will have messed up my parity and never be able to restore it. The issue with this process would be that I doubt the software will let me leave disk9 as "no device" and then rebuild the parity.
  12. Basics: unRAID Server Pro version 5.0.5

      Background: One of my drives appeared to have some issues, as it was automatically removed from the array. I had 3 "spare" drives on my shelf, so I first tried to replace it with my former 4TB parity drive, which I believed to have no issues; I had only replaced it because a new 4TB drive I put in was slightly larger (HPA, maybe). I didn't preclear the former 4TB parity drive, which may turn out to have been a problem. I did start the "rebuild" process after changing the drive assignment from the faulty 1.5TB to the 4TB, and let the process run for many hours. When I checked back in on it, 4 of my other drives were now showing errors, despite the fact they shouldn't even be needed for this process. On top of that, the rebuild went from having "hours" remaining to "weeks". Clearly something was wrong, so I stopped the process, shut the machine down, and went through every connector to ensure they were all snug. Given that the 4 drives showing errors were all off the same breakout cable, I figured it must have gotten loose on the SATA card. I turned the machine back on, and the only drive with issues was the one that I was trying to rebuild. As I now have my doubts about the reliability of that former 4TB parity drive, I opted to hook up a new, precleared/passed 2TB drive I had instead for the rebuild.

      Here is where the problem stands: when I change the settings to select the 2TB drive, it thinks the drive it is replacing is the 4TB drive. The actual drive it is replacing is a 1.5TB drive; the 4TB is the one I cancelled the rebuild on.

      Question: How do I get my parity to rebuild the 1.5TB of data on the 2TB drive when the server thinks I should have a 4TB in that slot? I don't believe the data is lost, as the parity drive shouldn't have changed what's stored on it. Any direction would be appreciated, as this is not the typical error I have dealt with before when replacing drives. Thanks in advance!
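For anyone following along, the reason a single parity drive can stand in for one missing disk is plain XOR parity (this is the general single-parity idea, not a claim about unRAID's exact internals): parity is the byte-wise XOR of all data disks, so any one missing disk is the XOR of parity with the survivors. A toy sketch:

```python
from functools import reduce

# Toy 4-byte "disks"; parity is the byte-wise XOR of all data disks.
disks = [bytes([1, 2, 3, 4]), bytes([9, 8, 7, 6]), bytes([5, 5, 5, 5])]
parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

# Lose disk 1; rebuild it from parity plus the remaining disks.
survivors = [disks[0], disks[2], parity]
rebuilt = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*survivors))
print(rebuilt == disks[1])  # True
```

This also illustrates why errors on a second disk during a rebuild are so dangerous: if a surviving disk's contents no longer match what parity was computed from, the XOR no longer reproduces the missing disk.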
  13. My current parity drive is a WD Red, so I was thinking that would be the best drive for the job (being the drive designed for a NAS, and the drive that gets the most activity). Yes, I have a darn MSI motherboard. It has been a few years, but I remember going through some process to redo a bunch of my 1.5TB drives that were slightly undersized. At that time, the issue never happened to my parity (just by chance). Short of replacing my motherboard, is there a permanent solution to prevent this from happening again? I am new to the 4TB drive world, and I just figured using a drive made for a NAS as my parity would be better than the Hitachi drive. The WD Red has been in a few months and proven reliable. That being said, the Hitachi did pass preclear, so it's trustworthy thus far (i.e. not D.O.A.). I don't mind taking a little risk, as my data drives are just rips of my DVDs and CDs, so nothing that is life or death if it's lost, just time. That being said, it's a lot of time, so I'd prefer not to lose it. Would you guys personally run the WD Red or the Hitachi as your parity? I do hate how long it takes to run parity checks on these bigger drives.

      Do these steps seem practical, taking a bit of risk, but risk I feel is reasonable (i.e. knowing the data drives are good after step 2):

      1. Put the 1TB data drive back in the server
      2. Run a parity check to ensure everything is hooked up okay and no cable got knocked around with all my fiddling
      3. (Taking a bit of risk) Take the parity drive virtually out of the array (i.e. just in the unRAID setup)
      4. Perform whatever is needed to get the parity drive to the proper size without HPA
      5. Add the parity drive back to the array and let it rebuild / run a parity check
      6. Replace the 1TB with the precleared Hitachi 4TB
  14. I'm about 99% certain my current issue is related to HPA. My situation is that I replaced my parity drive a month or two ago with a WD Red 4TB drive. I did not think about possible HPA issues at the time, and as it's my first 4TB drive, I had nothing to compare its byte size against. Today, I replaced my smallest data drive (1TB) with a precleared Hitachi 4TB. My server is full (1 parity, 11 data), so swapping drives for bigger ones is my only option before making the plunge on a Norco server. Where I sit now is that after turning the server back on, I am getting a warning that the drive I am trying to install is larger than my parity drive. Would you suggest I re-install the 1TB drive I just pulled out and then restart the array? Everything should be functioning normally again. Then, deal with the parity drive being undersized. I don't recall offhand, but with some searching I should be able to find how to remove the HPA and make the drive slightly bigger. The new 4TB drive I am trying to swap in for the 1TB, I am going to have to do something with it also to make sure the HPA issue doesn't happen to it (although it's not as critical there: if it ends up slightly smaller than the parity, no harm). I am currently running unRAID Server Pro ver 5.0.5.

      Parity Drive - 3907017476 Bytes
      New Drive    - 3907018532 Bytes

      I'd appreciate the opinion of those more skilled/experienced with this than I. Thanks
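Doing the arithmetic on those two figures (which, given a 4TB drive, I suspect are 1 KiB units rather than literal bytes): the shortfall is 1056 units, roughly 1 MiB, which is in the right ballpark for a small BIOS-created HPA reservation:

```python
parity = 3_907_017_476     # reported size of the parity drive
new_drive = 3_907_018_532  # reported size of the new drive
diff = new_drive - parity
print(diff)                           # 1056 units
print(round(diff * 1024 / 2**20, 2))  # ~1.03 MiB, if the units are KiB
```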
  15. The basics: I am running UnRaid Server Pro ver. 5.0.5 in an 11 data drive, 1 parity drive server. It runs great (knock on wood). I had been running a PopCorn Hour C-300 as my front end for about 7 months; however, something happened a little while ago and I have not been able to access the Jukebox or Jukebox Manager for my shares. I posted my question at the main networkmediatank forums, but haven't heard back anything useful in multiple weeks. As it stands now, I removed all my shares and tried to add just one to start. If I can get it working, then the others shouldn't be a problem. The odd part is that I can browse my share through the C-300: I can access all the folders within it and even play the content. Everything works perfectly from the text based system. But if I try to access the share using either the Jukebox or the Jukebox Manager, I get "Database Read Error. Could not proceed with your request". I tried installing the NMJToolbox, but I cannot get it to work properly. I did have it installed months ago and working, but due to some odd issues it created, I decided to stop using it (the odd issue was a missing image icon for "Vikings", and if I ever clicked it, it would reboot the PCH). At this point, despite all the fiddling around I needed to do to get my Jukebox accurate, I would be prepared to start from scratch (I have 3000 - 4000 items, so it takes time to make sure each image is accurate). My firmware is up to date. I've removed NMT Applications and then did a "Fresh Setup". I know enough to be dangerous, but not enough to solve my issues. It would be nice if the PCH had a "Repair" feature. Sorry about the random formatting of my thoughts, but hopefully someone has an idea what I am talking about and could give me some direction. Thanks
  16. I've just noticed these entries in the syslog. Not typical errors I see and ignore, so I wanted to see if someone might know the cause and if there is a simple fix or not. For reference, the two devices related to the IPs in the BOLD lines below are both mine (my Blackberry and my laptop). I ran a command I saw someone else suggest elsewhere (results further below) and what stands out is this part: RX packets:36049503 errors:0 dropped:49278

Jul 26 09:26:59 Tower avahi-daemon[12278]: Invalid response packet from host 192.168.1.32.
Jul 26 09:30:17 Tower avahi-daemon[12278]: Invalid response packet from host 192.168.1.32.
Jul 26 09:33:37 Tower avahi-daemon[12278]: Invalid response packet from host 192.168.1.32.
Jul 26 09:36:55 Tower avahi-daemon[12278]: Invalid response packet from host 192.168.1.32.
Jul 26 13:05:36 Tower kernel: mdcmd (168): spindown 0 (Routine)
Jul 26 13:05:37 Tower kernel: mdcmd (169): spindown 1 (Routine)
Jul 26 13:05:38 Tower kernel: mdcmd (170): spindown 6 (Routine)
Jul 26 13:05:38 Tower kernel: mdcmd (171): spindown 7 (Routine)
Jul 26 13:05:38 Tower kernel: mdcmd (172): spindown 8 (Routine)
Jul 26 13:05:38 Tower kernel: mdcmd (173): spindown 9 (Routine)
Jul 26 13:05:39 Tower kernel: mdcmd (174): spindown 10 (Routine)
Jul 26 13:05:39 Tower kernel: mdcmd (175): spindown 11 (Routine)
Jul 26 13:49:44 Tower kernel: mvsas 0000:02:00.0: Phy2 : No sig fis (Drive related)
Jul 26 13:49:50 Tower kernel: mvsas 0000:02:00.0: Phy2 : No sig fis (Drive related)
Jul 26 13:49:53 Tower kernel: sas: sas_form_port: phy2 belongs to port2 already(1)! (Drive related)
Jul 26 15:53:51 Tower kernel: mdcmd (176): spindown 3 (Routine)
Jul 26 16:54:01 Tower kernel: mdcmd (177): spindown 1 (Routine)
Jul 26 16:54:01 Tower kernel: mdcmd (178): spindown 6 (Routine)
Jul 26 16:54:01 Tower kernel: mdcmd (179): spindown 7 (Routine)
Jul 26 16:54:01 Tower kernel: mdcmd (180): spindown 8 (Routine)
Jul 26 16:54:02 Tower kernel: mdcmd (181): spindown 9 (Routine)
Jul 26 16:54:03 Tower kernel: mdcmd (182): spindown 10 (Routine)
Jul 26 16:54:03 Tower kernel: mdcmd (183): spindown 11 (Routine)
Jul 26 17:14:58 Tower kernel: mdcmd (184): spindown 3 (Routine)
Jul 26 17:34:51 Tower avahi-daemon[12278]: Invalid response packet from host 192.168.1.33.
Jul 26 19:18:39 Tower avahi-daemon[12278]: Invalid response packet from host 192.168.1.33.
Jul 26 19:55:01 Tower kernel: mdcmd (185): spindown 1 (Routine)
Jul 26 19:55:01 Tower kernel: mdcmd (186): spindown 6 (Routine)
Jul 26 19:55:01 Tower kernel: mdcmd (187): spindown 7 (Routine)
Jul 26 19:55:01 Tower kernel: mdcmd (188): spindown 8 (Routine)
Jul 26 19:55:02 Tower kernel: mdcmd (189): spindown 9 (Routine)
Jul 26 19:55:03 Tower kernel: mdcmd (190): spindown 10 (Routine)
Jul 26 19:55:03 Tower kernel: mdcmd (191): spindown 11 (Routine)
Jul 26 21:24:58 Tower avahi-daemon[12278]: Invalid response packet from host 192.168.1.32

root@Tower:/boot# ifconfig
eth0      Link encap:Ethernet  HWaddr 00:24:1d:82:a3:25
          inet addr:192.168.1.67  Bcast:192.168.1.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:36049503 errors:0 dropped:49278 overruns:0 frame:0
          TX packets:140154624 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:3870129206 (3.6 GiB)  TX bytes:2431865073 (2.2 GiB)
          Interrupt:40 Base address:0xc000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          UP LOOPBACK RUNNING  MTU:65536  Metric:1
          RX packets:13092 errors:0 dropped:0 overruns:0 frame:0
          TX packets:13092 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:2048717 (1.9 MiB)  TX bytes:2048717 (1.9 MiB)

Server - 12 drives: 11 Data, 1 Parity
unRAID Server Pro version: 5.0.5
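For perspective on that dropped counter, here's a quick calculation using the RX figures from the ifconfig output - only about 0.14% of received packets were dropped, a rate that (in my reading, worth confirming) usually points to stray multicast/broadcast traffic rather than a faulty link:

```shell
#!/bin/sh
# Dropped-RX percentage from the ifconfig counters in the post:
# 49278 dropped out of 36049503 received.
awk 'BEGIN { printf "%.2f%% of RX packets dropped\n", 49278 / 36049503 * 100 }'
```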
  17. Last night, I believe it was only running around 10 MBps; however, thanks to good old Windows and its "Critical Update", it rebooted my main PC at some random time in the night. I started moving some smaller batches of files this morning and the speeds look more normal, at about 23 MBps. bjp999 - thanks for the tip. I will check that out!
  18. Everything has gone well. I now have 1.25TB of free space to work with; however, on my main computer, I have even more ripped content to transfer over. I just started the first big data transfer, through a gigabit router that the PC and server both connect to. Both machines have gigabit ethernet ports. I am moving 684GB of mainly 4GB to 7GB ISO rips. The estimated time to complete is nearly 24 hours. This is my first major transfer from outside the server since moving to the WD 4TB Red parity drive. I'm thinking the transfer speed is slower than before, but I may be wrong. I am not used to moving this much data. Does that time frame seem reasonable, bearing in mind I do not use a cache drive? Thanks
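Working backwards from the figures in the post (684GB in roughly 24 hours), the implied rate is about 8 MB/s - on the low side even for parity-protected writes without a cache drive, which more commonly land in the 20-40 MB/s range (my rough estimate, not a benchmark):

```shell
#!/bin/sh
# Implied transfer rate: 684 GB over 24 hours, in MB/s (taking 1 GB = 1024 MB).
awk 'BEGIN { printf "%.1f MB/s\n", 684 * 1024 / (24 * 3600) }'
```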
  19. I like Gary's suggestion, for the simple reason that I have no idea how to do a "file binary compare". Gary's makes sense: a non-correcting parity check, after the rebuild, would ensure that every sector has the proper value. I'm guessing it may take a little longer to run another full parity check, especially having moved up to the 4TB world, but it's just time, nothing new to figure out. I also agree with the part about not re-tasking the smaller 750GB drive I am removing until after I know everything is all good with the rebuild onto the new drive. I have no immediate need for it anyways, so it can collect dust for a little while
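For the record, a file binary compare is a one-liner on the server itself: `cmp` reads both files byte-for-byte and reports the first difference. The sketch below demonstrates it on two scratch copies; with real data you'd point it at the same file on two disks (e.g. paths under /mnt/disk2 and /mnt/disk5 - hypothetical here):

```shell
#!/bin/sh
# Demonstrate a byte-for-byte compare with two scratch files.
# cmp -s is silent and exits 0 only when the files are identical.
printf 'same bytes' > /tmp/copy_a
printf 'same bytes' > /tmp/copy_b
if cmp -s /tmp/copy_a /tmp/copy_b; then
    echo "identical"
else
    echo "files differ"
fi
```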
  20. "Last checked on Mon Jun 9 01:35:19 2014 EDT, finding 0 errors." Looks like I am good to go with the new parity drive! I'll put it through another test tonight, as my next step is to replace my smallest drive (750GB) with the former parity drive (2TB). Just deciding if I should preclear the old parity drive or not. I've got 11 hours before I get home to decide that! Thanks for the info/opinions to all those who replied!
  21. So, I ran the parity check (well, it's 600GB into it). I just realized you guys said "non-correcting" parity check - I missed the "non-correcting" check box. Assuming I check the output and/or syslog and it makes no reference to any corrections being made, I take that to mean "all is well". Is that a reasonable conclusion, given that I missed the check box but the output is clean? I could stop the parity check and start it over again, but I'd rather not if it's not needed. Moving to 4TB hard drives has its drawbacks - mainly that it takes forever to preclear and run a parity check due to the large number of sectors. With respect to the time it takes to complete, is that mainly a factor of the HDD speed, or does my $50 CPU play a role in this? During my original build 5 years ago, or so, CPU wasn't important, as it never really did much (at least for my needs, which were simply to run UnRaid with no real add-ons), so I just got a Celeron CPU for around $50.
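On the HDD-vs-CPU question: a parity check is generally disk-bound - the XOR work is trivial for even a Celeron, and the run time is dominated by how fast the slowest drive can stream. A rough back-of-envelope estimate, assuming an average sustained read around 100 MB/s (an assumed figure typical of 4TB 5400-5900 rpm drives; real speed tapers off on the inner tracks, so actual checks run longer):

```shell
#!/bin/sh
# Rough parity check duration for a 4TB (~4,000,000 MB) drive at an assumed
# 100 MB/s average sustained read.
awk 'BEGIN { printf "%.1f hours\n", 4000000 / 100 / 3600 }'
```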
  22. I've begun to make a few changes recently - in particular, upgrading to UnRaid 5.0.5, and now upgrading my parity from a 2TB to a WD 4TB Red drive. I did pre-clear the drive before adding it to the array, which took something like 60 hours. I added it in the former parity drive's slot (as my server has no expansion bays left - I actually precleared the drive sitting beside the case on a box). UnRaid did its automatic process and copied everything necessary so that my array is protected again. The question: is there a need to run a parity check after just writing all the information to the new parity drive, or is there somewhat of an automatic check done while the data is written for the first time? I've read one person's suggestion, from a few years ago, which was "yes" to running it, but the reason was just as a greater burn-in test (it sounded like he may have had a drive in the past that wasn't D.O.A., but was "dead within the week"). After this, I can re-task the old parity drive to replace my smallest drive (750GB). Not a giant gain, but a gain nonetheless. Trying to postpone the upgrade to a Norco as long as I can!
  23. Here's what I get:

root@Tower:~# ls -l /boot
total 35736
-rwxrwxrwx 1 root root        0 2009-12-04 09:01 1*
-rwxrwxrwx 1 root root        0 2009-12-04 09:01 189*
-rwxrwxrwx 1 root root        0 2009-12-04 09:01 195*
-rwxrwxrwx 1 root root        0 2009-12-04 09:01 240*
-rwxrwxrwx 1 root root        0 2009-12-04 09:01 241*
-rwxrwxrwx 1 root root        0 2009-12-04 09:01 242*
-rwxrwxrwx 1 root root        0 2009-12-04 09:01 7*
drwxrwxrwx 2 root root     4096 2010-07-18 12:38 Bios\ Firmware\ Update\ to\ FC/
drwxrwxrwx 2 root root     4096 2013-02-09 14:15 backup_files/
-rwxrwxrwx 1 root root  2725120 2014-01-11 06:40 bzimage*
-rwxrwxrwx 1 root root 33482596 2014-01-11 06:48 bzroot*
drwxrwxrwx 3 root root     4096 2014-04-12 22:49 config/
drwxrwxrwx 3 root root     4096 2009-10-11 08:31 custom/
-r-xr-xr-x 1 root root    13639 2009-10-11 08:30 ldlinux.sys*
-rwxrwxrwx 1 root root     5162 2009-05-05 21:00 license.txt*
drwxrwxrwx 2 root root     4096 2014-04-12 22:33 logs/
-rwxrwxrwx 1 root root   150024 2014-01-11 10:48 memtest*
-rwxrwxrwx 1 root root    33404 2009-05-05 21:00 menu.c32*
drwxrwxrwx 2 root root    12288 2011-08-11 19:39 packages/
-rwxrwxrwx 1 root root    85224 2014-04-05 11:47 preclear_disk.sh*
drwxrwxrwx 2 root root     4096 2012-01-01 00:57 preclear_reports/
-rwxrwxrwx 1 root root     8703 2010-12-08 18:36 preclear_results.txt*
-rwxrwxrwx 1 root root     8680 2009-05-05 21:00 readme.txt*
-rwxrwxrwx 1 root root      183 2009-05-05 21:00 syslinux.cfg*
drwxrwxrwx 4 root root    16384 2013-09-08 17:52 unmenu/

I did run the two commands:
cd /boot/unmenu
unmenu_install -u -d /boot/unmenu
as well as started up unmenu manually, as per the other command you suggested, and everything you suggested worked. I can access it now by starting it manually. Good short term solution; I just need to figure out what's up with my "go" file next. Thanks
  24. I've recently upgraded to UnRaid 5.0.5 from version 4.7. I didn't wipe my flash drive and I am starting to think that is part of my problem. Maybe starting from scratch would have been better. I suspect changes in version 5.0.5, and my lack of fully understanding them, are the root of my problem. I do not believe my "go" file is being executed, as it contains the same info I've always had in it:

#!/bin/bash
# Start the Management Utility
/usr/local/sbin/emhttp &
cd /boot/packages && find . -name '*.auto_install' -type f -print | sort | xargs -n1 sh -c
# This is used to send email notifications to me
echo "[email protected]" >/root/.forward
# This will auto-load unmenu
/boot/unmenu/uu

It's often months between the times I access the server for issues of this nature, so it's always an exercise in remembering what's required. I can access the flash drive through my Windows machine's File Explorer and I see all the necessary files/directories. I've tried to access the flash drive using Putty's telnet, which I can successfully do, but I can't really go anywhere. If I do an "ls" at the tower prompt, I get back:

root@Tower:~# ls
mdcmd* powerdown@ samba@

I check to see if unmenu is running, using the following command, and I get the following response:

root@Tower:~# ps -ef | grep awk
root 3096 2810 0 06:29 pts/0 00:00:00 grep awk

I thought maybe it had to do with restrictions on the "root" account, but root is the almighty, so it shouldn't be. Nonetheless, I set up another user as a test, but I couldn't even telnet in to the account to test any further. Suggestions on the best course to take to get UnMenu working again (and in turn, the other packages I had installed, like powerdown, as well as access to items like preclear)? Thanks!
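One thing worth ruling out (a guess on my part, but a common way for a go file to silently stop executing after being edited from Windows) is CRLF line endings. A quick check, parameterised so it can be pointed at any copy of the file; the default path /boot/config/go is where unRAID keeps it:

```shell
#!/bin/sh
# Flag Windows (CRLF) line endings in a go file; they can stop the script
# from executing at boot. Pass a path, or default to unRAID's location.
go="${1:-/boot/config/go}"
if [ ! -f "$go" ]; then
    echo "no file at $go"
elif grep -q "$(printf '\r')" "$go"; then
    echo "CRLF line endings found in $go"
else
    echo "line endings look fine"
fi
```

If CRLFs turn up, stripping them with something like `tr -d '\r' < go > go.fixed` (then replacing the original) should do it; always edit the go file with a Unix-aware editor afterwards.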
  25. I've recently updated my UnRaid to 5.0.5 and it has been working OK. I have some fine tuning to do, but the basics are all good. Last night I was unable to access the array through either my browser, my network shares or telnet. The issue was discovered when I went to transfer a 4GB ISO file over from my local machine to the array. I wasn't watching the transfer to see if anything moved before the server became inaccessible. I had no option but to hold the power button in and shut it down. I knew this would force a parity check once I booted it back up, but that wasn't a major issue as I was going to bed anyways. This morning, after the parity check finished, I see this in the summary: "Last checked on Sun Apr 27 04:45:37 2014 EDT, finding 556 errors." There are no corresponding errors listed on the device status screen, yet the syslog does show the corrections. I can't tell from the syslog whether the errors are specific to one drive, or are errors of little to no concern that are somewhat common after a sudden reboot. One side question: I found it somewhat odd that of my 12 drives (11 data, 1 parity), only 4 drives ever spun down during the parity check. My parity drive is 2TB and I only have one 2TB data drive. Would you not have figured that if some of my 1.5TB drives spun down, they all would have spun down, as the parity check process would have finished with all of them at the same time? Thanks for the help. Syslog attached. syslog-Apr.27-2014.txt