skoj

Members

  • Posts: 33
  • Joined
  • Last visited
  • Gender: Undisclosed


skoj's Achievements

Noob (1/14)

Reputation: 0

  1. Thanks JorgeB. Definitely self-inflicted. I assumed it was a bug with rc3 since the pool mounts in rc2. For background, I created the pool with 2 drives and did some drive-failure testing: remove a drive, add it back in (I've sketched the zpool commands for that cycle after this post list). There was never a third drive involved. I must have done something that caused the UNRAID and ZFS pool configurations to go out of sync. I detached the failed drive from the pool but got the same error with rc3. I'll delete the pool and recreate it.
  2. Pool created on rc1, working fine in rc2, and after upgrading to rc3 I get this upon array start: Reverted back to rc2 and the pool mounted without trouble. bender-diagnostics-20230414-1937.zip
  3. SMART attributes are randomly filtered out for Seagate Nytro XF1230 SSDs. I cannot repro this with any other make/model. It occurs on 6.11 and 6.12-rc2. Every time I load the attributes I get a different subset; reload and I get a different subset again. Collecting the SMART attributes directly always returns the full set:

root@BENDER:~# smartctl -n standby -A /dev/sdk
smartctl 7.3 2022-02-28 r5338 [x86_64-linux-6.1.20-Unraid] (local build)
Copyright (C) 2002-22, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF READ SMART DATA SECTION ===
SMART Attributes Data Structure revision number: 0
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME           FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate      0x000f   100   100   006    Pre-fail  Always       -       0
  5 Reallocated_Sector_Ct    0x0032   100   100   036    Old_age   Always       -       0
  9 Power_On_Hours           0x0032   048   048   000    Old_age   Always       -       46888
 12 Power_Cycle_Count        0x0032   100   100   020    Old_age   Always       -       75
174 Unexpect_Power_Loss_Ct   0x0030   100   100   000    Old_age   Offline      -       68
175 Program_Fail_Count_Chip  0x0032   100   100   000    Old_age   Always       -       0
176 Erase_Fail_Count_Chip    0x0032   100   100   000    Old_age   Always       -       0
177 Wear_Leveling_Count      0x0032   086   086   000    Old_age   Always       -       391775296
178 Used_Rsvd_Blk_Cnt_Chip   0x0032   100   100   000    Old_age   Always       -       76
179 Used_Rsvd_Blk_Cnt_Tot    0x0032   100   100   000    Old_age   Always       -       538
180 End_to_End_Err_Detect    0x003b   100   100   006    Pre-fail  Always       -       0
181 Program_Fail_Cnt_Total   0x0032   100   100   000    Old_age   Always       -       0
182 Erase_Fail_Count_Total   0x0032   100   100   000    Old_age   Always       -       0
183 SATA_Downshift_Count     0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error         0x0032   100   100   000    Old_age   Always       -       0
187 Reported_Uncorrect       0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout          0x0032   100   100   000    Old_age   Always       -       0
189 SSD_Health_Flags         0x0000   100   100   000    Old_age   Offline      -       0
190 SATA_Error_Ct            0x0000   100   100   000    Old_age   Offline      -       0
194 Temperature_Celsius      0x0002   070   049   000    Old_age   Always       -       30 (Min/Max 10/51)
195 Hardware_ECC_Recovered   0x0032   100   100   000    Old_age   Always       -       0
199 UDMA_CRC_Error_Count     0x003e   100   100   000    Old_age   Always       -       0
201 Read_Error_Rate          0x000e   100   100   000    Old_age   Always       -       0
204 Soft_ECC_Correction      0x000e   100   100   000    Old_age   Always       -       0
231 SSD_Life_Left_Perc       0x0033   086   086   001    Pre-fail  Always       -       86
234 Lifetime_Nand_Gb         0x0032   100   100   000    Old_age   Always       -       1530016
241 Total_Writes_GiB         0x0032   100   100   000    Old_age   Always       -       1457984
242 Total_Reads_GiB          0x0032   100   100   000    Old_age   Always       -       12827
245 Read_Error_Rate          0x0033   086   086   001    Pre-fail  Always       -       86
250 Read_Error_Retry_Rate    0x0032   100   100   000    Old_age   Always       -       2334

I suspect the bug is in this script, which does some fairly complex filtering on the smartctl output: /usr/local/emhttp/plugins/dynamix/include/SmartInfo.php. Not a huge thing, but I'd appreciate it if one of the devs could take a look (I've sketched a quick repro loop after this post list). bender-diagnostics-20230408-1503.zip
  4. <EDIT> Please ignore, not an unraid problem. It turns out a host was scanning a share 24/7. </EDIT> I've had the same issue since upgrading from 6.7.0 to 6.8.1. All dockers are turned off and no clients are connected. Idle CPU was 5-10% before the upgrade. Has anyone run into this & fixed it? Minor impact, but I hate to lose the capacity. The other odd thing is that there's network traffic even though all of the disks are spun down & the I/O counters are very nearly zero.
  5. Yeah, that was it. I was testing with nothing plugged in except the board. It worked once I plugged in a couple of fans.
  6. So I bought this motherboard: ASUS P9A-I/C2750/SAS/4L https://www.asus.com/ca-en/Commercial_Servers_Workstations/P9AIC2750SAS4L/ On paper it's a great board for UnRAID with low power, IPMI, 18 SATA ports, etc., but I can't get the thing to even start the POST process. I turn it on, the power supply fan spins for a couple of seconds, and then it shuts off. That's it. No video, no beeps, nothing. The CPU fan doesn't even start. The board is getting some power because the power LED is on, and so is the LED indicating IPMI activity. This reviewer says the board is sensitive to power and wouldn't start until he switched from a 650 W power supply to a 150 W one. Apparently the board doesn't present enough load and the power supply shuts down automatically. I'm connecting it to a Corsair CX430, which I suspect is running into the same issue. I should mention that I'm testing with memory from the motherboard's approved list and only have 1 drive connected. Has anyone gotten this board to work? If so, with what power supply?
  7. Added a second parity disk to my server running 6.2.4. Shortly after the parity sync began I noticed that the # reads on one drive (disk 7) is exactly half that of all the other drives. That doesn't seem right. Shouldn't all drives have roughly the same number of reads? No errors in the console or logs. Nothing out of the ordinary aside from this.

Device     Identification  Temp.  Reads      Writes     Errors  FS        Size
Parity     8 TB (sdn)      36 C   1,215,867  191        0
Parity 2   8 TB (sdh)      35 C   262        1,260,866  0
Disk 1     2 TB (sdm)      35 C   1,216,048  24         0       reiserfs
Disk 2     4 TB (sdl)      34 C   1,216,051  24         0       reiserfs  4 TB
Disk 3     6 TB (sdk)      35 C   1,216,056  25         0       reiserfs  6 TB
Disk 4     8 TB (sdp)      35 C   1,216,045  24         0       reiserfs  8 TB
Disk 5     3 TB (sdo)      35 C   1,216,049  24         0       reiserfs  3 TB
Disk 6     4 TB (sdq)      32 C   1,216,040  24         0       reiserfs  4 TB
Disk 7     4 TB (sdj)      35 C   648,243    25         0       reiserfs  4 TB
Disk 8     2 TB (sdb)      33 C   1,222,568  24         0       reiserfs  2 TB
Disk 9     6 TB (sdd)      36 C   1,216,247  25         0       reiserfs  6 TB
Disk 10    6 TB (sde)      34 C   1,216,252  24         0       reiserfs  6 TB
Disk 11    8 TB (sda)      35 C   1,216,241  25         0       reiserfs  8 TB
  8. Upgraded from 5.0.4 to 6.2.1 without any trouble. Just wanted to thank limetech & everyone on this board for their hard work on this upgrade and the great documentation. Between the upgrades to unRAID itself, the new webGUI, and the community plugins/dockers, it's like getting a new & improved NAS for free. My only quibble is that parity checks went from 17 to 25 hours, but I expect that adjusting the tunables will take care of that. I'll just live with it until the script for that is ready for 6.2. Thanks again!
  9. @jonathanm Thanks for the referral to donordrives. They repaired the logic boards on the blown drives at a reasonable cost so I didn't wind up losing any data. @limetech Thanks for the e-mail assist as well.
  10. Thanks for the referral to donordrives.com, jonathanm. I'll give them a call in the morning. I was in sticker shock from a data recovery firm's estimate when I saw your reply. Replacing all drives is good advice, as I can't fully trust even the good drives after this. I'm not sure I'll be able to follow it, though, as the replacement cost for all my storage would be rather high.
  11. Hi all -- Running unRAID 5. I've lost multiple drives in my array and I'm looking for advice on recovery. I was careless while swapping out a drive cage and plugged in a Molex connector backwards. Now 4 of 12 drives aren't recognized by the BIOS and I suspect the logic boards on those drives are fried. The array didn't mount since it has 4 missing disks, and I turned the server off to prevent any further damage. I'm looking into data recovery services, but I have some questions about how unRAID works that would affect my next decisions. 1) Are the individual drives mountable on another server without reconstructing any RAID encoding? I vaguely remember hearing that each drive is an independent ReiserFS filesystem that can be mounted on any Linux system (I've put a mount sketch after this post list). 2) If the service is able to recover all data on the 4 bad drives, they could clone each disk image to a fresh drive. Is it possible to connect those clones to my server and instruct the array to start with the clones? I suspect this will be problematic since I see the drive serial numbers in the UnRAID config. 3) If #2 isn't feasible, is this plan an option? a) Start the array with the 8 good disks (understanding that files from the bad disks won't be available), then b) copy the files recovered by the service into the array. 4) Can you recommend anything else I should try? Are there any options I've neglected to consider? 5) Can you recommend any recovery service, perhaps one you've used in the past? Sorry for the long post. I appreciate any help you can offer.
  12. Successfully upgraded, no issues after 24 hours. Performance seems roughly the same as 5.0rc8a. Motherboard - Supermicro CS2EE (using onboard Realtek R8168 NIC)
  13. Just flashed my M1015 to SAS2008 IT-mode firmware using the instructions and ZIP file in this thread. It was pretty straightforward, except that I had to try several motherboards until I found one that didn't give me the "Failed to initialize" error. Props to madburg for putting this together and to everyone who posted their experiences. It would have been a brutal process without this post! Not sure if anyone cares, but I have a CS2EE motherboard and I flashed to firmware version P15 downloaded from LSI's support site. I'll test for a few days, but things look good at first glance: no syslog errors, I can read all drive temps, and I can spin drives up/down from the web console (I've added a firmware-verification sketch after this post list).
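
Appended sketch for post 1 -- what the remove/re-add cycle from my drive-failure testing might look like as zpool commands. This is only a rough sketch, assuming a two-way mirror pool named "cache" built from sdb and sdc; the pool and device names are hypothetical, not taken from my diagnostics:

    # check pool health before and after each step
    zpool status cache
    # simulate a failure by taking one leg of the mirror offline
    zpool offline cache sdc
    # bring the same drive back and let it resilver
    zpool online cache sdc
    # or permanently remove that leg from the mirror
    zpool detach cache sdc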
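
Appended sketch for post 3 -- a quick loop that shows the raw smartctl output staying stable while only the GUI's attribute page varies between reloads. Assuming /dev/sdk is the Nytro in question (adjust the device node as needed):

    # count SMART attribute rows over five consecutive reads;
    # the count stays constant if smartctl itself is consistent
    for i in 1 2 3 4 5; do
      smartctl -n standby -A /dev/sdk | grep -cE '^ *[0-9]+ '
    done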
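
Appended sketch for post 11, question 1 -- if each unRAID data disk really is a self-contained ReiserFS filesystem (as I remember hearing), a surviving drive should be readable on any Linux machine. A minimal sketch, assuming the data partition shows up as /dev/sdx1 (the device name is hypothetical); mounting read-only avoids changing the disk and invalidating parity:

    mkdir -p /mnt/recovery
    # mount the unRAID data partition read-only (ReiserFS on unRAID 5)
    mount -t reiserfs -o ro /dev/sdx1 /mnt/recovery
    ls /mnt/recovery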
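
Appended sketch for post 13 -- one way to confirm the card is actually running the P15 IT firmware after the flash is to query it with LSI's sas2flash utility. This assumes the Linux build of sas2flash is available; it's a sketch, not the exact steps from the flashing thread:

    # list every LSI SAS2 controller with its firmware and BIOS versions
    sas2flash -listall
    # detailed adapter information for controller 0
    sas2flash -c 0 -list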