Falcon

Everything posted by Falcon

  1. Yes, then it is 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24
  2. Is it possible to get the layout like this: 1 13, 2 14, 3 15, 4 16, 5 17, 6 18, 7 19, 8 20, 9 21, 10 22, 11 23, 12 24? I have my disks in two towers, with the SATA and power cables running between them.
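To picture the mapping being asked about, here is a hypothetical sketch (my own illustration, not an unRAID setting): it pairs slot n in one tower with slot n+12 in the other, so each display row reads "1 13", "2 14", and so on.

```python
# Hypothetical helper: pair slot n (tower A, slots 1-12) with slot n+12
# (tower B, slots 13-24) so the two towers display side by side.
def tower_rows(per_tower=12):
    return [(i, i + per_tower) for i in range(1, per_tower + 1)]

for left, right in tower_rows():
    print(left, right)   # prints "1 13" through "12 24"
```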
  3. Thanks, I will try that. Hmm, logical. Why didn't I think of that? Too stressed?
  4. I have changed disk 1 and started a rebuild and expand of the disk. During the first minute the log showed a lot of read errors from the parity disk. The parity disk went offline, and the rebuild and expand stopped. I stopped the server, fixed the loose cable, and restarted the server. Now I have a config I cannot fix or get into shape:
     1. I cannot restart the rebuild and expand, because the parity disk is disabled.
     2. I cannot format disk 1, because the disk is in a rebuild state.
     3. I cannot change back to the old disk 1, because that disk is smaller (500 GB).
     4. I cannot unassign the disk and start a recalculation of the parity disk.
     Disk 1 is a dedicated disk for CrashPlan backups, so it holds backups of other machines. It is OK to lose the data on disk 1, but it would be nice to rebuild the disk. I think it is too late to save the data, though, since the other disks have had writes without the parity disk being online. Some of the boot log:
Dec 20 19:14:28 Fileserver emhttp: Start array...
Dec 20 19:14:28 Fileserver kernel: mdcmd (73): start STOPPED
Dec 20 19:14:28 Fileserver kernel: unraid: allocating 129220K for 1280 stripes (24 disks)
Dec 20 19:14:28 Fileserver kernel: md1: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md2: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md3: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md4: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md5: running, size: 1953514552 blocks
Dec 20 19:14:28 Fileserver kernel: md6: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md7: running, size: 1953514552 blocks
Dec 20 19:14:28 Fileserver kernel: md8: running, size: 312571192 blocks
Dec 20 19:14:28 Fileserver kernel: md9: running, size: 488386552 blocks
Dec 20 19:14:28 Fileserver kernel: md10: running, size: 1953514552 blocks
Dec 20 19:14:28 Fileserver kernel: md11: running, size: 488386552 blocks
Dec 20 19:14:28 Fileserver kernel: md12: running, size: 2930266532 blocks
Dec 20 19:14:28 Fileserver kernel: md13: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md14: running, size: 2930266532 blocks
Dec 20 19:14:28 Fileserver kernel: md15: running, size: 2930266532 blocks
Dec 20 19:14:28 Fileserver kernel: md16: running, size: 1465138552 blocks
Dec 20 19:14:28 Fileserver kernel: md17: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md18: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver kernel: md19: running, size: 2930266532 blocks
Dec 20 19:14:28 Fileserver kernel: md20: running, size: 3907018532 blocks
Dec 20 19:14:28 Fileserver kernel: md21: running, size: 488386552 blocks
Dec 20 19:14:28 Fileserver kernel: md22: running, size: 1953514552 blocks
Dec 20 19:14:28 Fileserver kernel: md23: running, size: 976762552 blocks
Dec 20 19:14:28 Fileserver emhttp: shcmd (30): udevadm settle
Dec 20 19:14:28 Fileserver emhttp: Mounting disks...
Dec 20 19:14:28 Fileserver emhttp: shcmd (31): /sbin/btrfs device scan |& logger
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190608, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190636, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 0, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 1, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190637, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190637, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190637, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190637, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190637, async page read
Dec 20 19:14:28 Fileserver kernel: Buffer I/O error on dev md1, logical block 244190637, async page read
Dec 20 19:14:29 Fileserver logger: Scanning for Btrfs filesystems
Dec 20 19:14:29 Fileserver emhttp: shcmd (32): mkdir -p /mnt/disk1
Dec 20 19:14:29 Fileserver emhttp: shcmd (33): set -o pipefail ; mount -t xfs -o noatime,nodiratime /dev/md1 /mnt/disk1 |& logger
Dec 20 19:14:29 Fileserver kernel: XFS (md1): SB validate failed with error -5.
Dec 20 19:14:29 Fileserver logger: mount: /dev/md1: can't read superblock
Dec 20 19:14:29 Fileserver emhttp: shcmd: shcmd (33): exit status: 32
Dec 20 19:14:29 Fileserver emhttp: mount error: No file system (32)
Dec 20 19:14:29 Fileserver emhttp: shcmd (34): rmdir /mnt/disk1
Dec 20 19:14:29 Fileserver emhttp: shcmd (35): mkdir -p /mnt/disk2
Dec 20 19:14:29 Fileserver emhttp: shcmd (36): set -o pipefail ; mount -t xfs -o noatime,nodiratime /dev/md2 /mnt/disk2 |& logger
Dec 20 19:14:29 Fileserver kernel: md: handle_flush: unit=1 and parity both disabled!
Dec 20 19:14:29 Fileserver kernel: XFS (md2): Mounting V5 Filesystem
Dec 20 19:14:29 Fileserver kernel: XFS (md2): Ending clean mount
Dec 20 19:14:29 Fileserver emhttp: shcmd (37): xfs_growfs /mnt/disk2 |& logger
Dec 20 19:14:29 Fileserver logger: meta-data=/dev/md2 isize=512 agcount=4, agsize=61047660 blks
Dec 20 19:14:29 Fileserver logger: = sectsz=512 attr=2, projid32bit=1
Dec 20 19:14:29 Fileserver logger: = crc=1 finobt=1
Dec 20 19:14:29 Fileserver logger: data = bsize=4096 blocks=244190638, imaxpct=25
Dec 20 19:14:29 Fileserver logger: = sunit=0 swidth=0 blks
Dec 20 19:14:29 Fileserver logger: naming =version 2 bsize=4096 ascii-ci=0 ftype=1
Dec 20 19:14:29 Fileserver logger: log =internal bsize=4096 blocks=119233, version=2
Dec 20 19:14:29 Fileserver logger: = sectsz=512 sunit=0 blks, lazy-count=1
Dec 20 19:14:29 Fileserver logger: realtime =none extsz=4096 blocks=0, rtextents=0
Dec 20 19:14:29 Fileserver emhttp: shcmd (38): mkdir -p /mnt/disk3
Dec 20 19:14:29 Fileserver emhttp: shcmd (39): set -o pipefail ; mount -t xfs -o noatime,nodiratime /dev/md3 /mnt/disk3 |& logger
Dec 20 19:14:29 Fileserver kernel: XFS (md3): Mounting V5 Filesystem
Dec 20 19:14:30 Fileserver emhttp: shcmd (40): xfs_growfs /mnt/disk3 |& logger
Dec 20 19:14:30 Fileserver kernel: XFS (md3): Ending clean mount
  5. Nice. I have not tried this yet, but will it use the cache drive as staging, or will it go directly to the destination drive(s)?
  6. Check out the new version: 5.5 Update 1, released 11 March 2014. https://www.vmware.com/support/vsphere5/doc/vsphere-esxi-55u1-release-notes.html
  7. A new version of ESXi, 5.5 Update 1, was released on 11 March 2014: https://www.vmware.com/support/vsphere5/doc/vsphere-esxi-55u1-release-notes.html Maybe this will solve a lot of problems.
  8. Thank you again, now the parity is rebuilding. Nice feature to have in a situation like this!
  9. Only disk9 is totally down; disk12 is working but is full of read errors. I think I have lost 3 files, but I managed to get almost all the files off the drive with read errors. Maybe some of the errors are at the filesystem level. I use CrashPlan for all my important data on the unRAID server. I will check that, thank you.
  10. I have problems with 2 drives at almost the same time. First I got several read errors from disk12, a 2 TB disk:
Disk 12:
Device Identification Temp Size Free Read Write Errors
disk12 WDC_WD20EARS-00MVWB0_WD-WCAZA3191168 (sdm) 1953514552 26°C 2 TB 1.82 TB 238120 3109 11000
As you can see, this disk has a lot of hardware errors. It was not so bad, because unRAID fixed all the errors thanks to the parity check.
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME FLAG VALUE WORST THRESH TYPE UPDATED WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x002f 095 095 051 Pre-fail Always - 112283
  3 Spin_Up_Time            0x0027 191 164 021 Pre-fail Always - 5441
  4 Start_Stop_Count        0x0032 100 100 000 Old_age  Always - 927
  5 Reallocated_Sector_Ct   0x0033 160 160 140 Pre-fail Always - 773
  7 Seek_Error_Rate         0x002e 200 200 000 Old_age  Always - 0
  9 Power_On_Hours          0x0032 096 096 000 Old_age  Always - 3173
 10 Spin_Retry_Count        0x0032 100 100 000 Old_age  Always - 0
 11 Calibration_Retry_Count 0x0032 100 253 000 Old_age  Always - 0
 12 Power_Cycle_Count       0x0032 100 100 000 Old_age  Always - 76
192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age  Always - 44
193 Load_Cycle_Count        0x0032 196 196 000 Old_age  Always - 14811
194 Temperature_Celsius     0x0022 124 114 000 Old_age  Always - 26
196 Reallocated_Event_Count 0x0032 001 001 000 Old_age  Always - 513
197 Current_Pending_Sector  0x0032 200 198 000 Old_age  Always - 315
198 Offline_Uncorrectable   0x0030 200 199 000 Old_age  Offline - 48
199 UDMA_CRC_Error_Count    0x0032 200 200 000 Old_age  Always - 0
200 Multi_Zone_Error_Rate   0x0008 123 102 000 Old_age  Offline - 20695
Oct 24 20:35:07 Fileserver kernel: md: disk12 read error
Oct 24 20:35:07 Fileserver kernel: handle_stripe read error: 321338184/12, count: 1
While I was making a copy to a local drive, I got another error, this time with disk9. One 300 GB drive has been disabled since unRAID had problems writing to it:
DISK_DSBL_NP /dev/md9 /mnt/disk9 300.08G 132.30G 45% 167.78G
Now I have copied everything from disk9, and all I could recover from disk12, to other disks in unRAID plus other external drives. My question is: what should I do now to disable both disk9 and disk12? I have taken disk9 out of the array and am running preclear on it right now. What I think I should do is:
1. Insert disk9 into the array again to rebuild it. It will maybe not be correct, since I have read errors from disk12...
2. When disk9 is rebuilt, take disk12 out of the array.
3. Recalculate the parity disk.
What will unRAID do when it reaches the read errors on disk12 while rebuilding disk9? Is it possible to remove both drives at once and just rebuild the parity disk? Running unRAID 5 Beta 9. Thanks to unRAID I have NOT lost any data yet! Looking forward to getting some answers...
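As a rough illustration of how a SMART dump like the one above can be read, here is a small sketch (my own helper, not an unRAID or smartctl tool): it flags a drive when the commonly watched critical raw counts are non-zero. The attribute names come from the dump; the zero threshold is my assumption for illustration, not a vendor limit.

```python
# Illustrative sketch: flag a drive as suspect when key SMART raw values
# are non-zero. Attribute names match the smartctl dump above; treating
# any non-zero count as suspect is an assumption, not an official rule.
CRITICAL = ("Reallocated_Sector_Ct", "Current_Pending_Sector", "Offline_Uncorrectable")

def is_suspect(smart_raw):
    """smart_raw: dict mapping SMART attribute name -> raw value."""
    return any(smart_raw.get(attr, 0) > 0 for attr in CRITICAL)

# disk12's raw counts from the post: clearly suspect
disk12 = {"Reallocated_Sector_Ct": 773,
          "Current_Pending_Sector": 315,
          "Offline_Uncorrectable": 48,
          "UDMA_CRC_Error_Count": 0}
print(is_suspect(disk12))  # True
```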
  11. Another option for me is: GUARD S COMPACT SERIES http://www.informups.com/guard_s_compact.html
  12. "2. I disabled the interrupt inside the card's firmware" Why?
  13. CrashPlan has support for filters and works well on unRAID.
  14. "Just a few simple patches to the script files will fix this." Can you give me a hint of where? I have found out that I need to remove the -d ATA option from one file, but where?
  15. Now you can also add me to the list of people who have flashed the M1015 card to SAS2008 (P10)! I had no problem with the flashing, but my MEGA.SBR file was empty when I checked the USB drive after the flashing was done. I did not see any errors when I used the utility. I will check this the next time I flash an M1015 card. Is there any reason I should have had the MEGA.SBR, other than for this forum? Do you want the ADAPTERS.TXT instead, or is it irrelevant? Now my old parity disk is running preclear on the new controller at 113 MB/s on a WD 2 TB Green hard drive. As other people have said, I cannot see the SMART info in mymenu in unmenu for the disk on the new controller, but unRAID version 5.0 beta 9 has no problem reading the SMART info.
  16. I got the same problem and had to use the power button to shut down the server. I have no syslog files.
  17. "1. No, the IBM M1015 is an LSI MegaRAID card with a SAS2008 chipset in it. 2. The OP explains how to convert (wipe the BIOS/firmware of) an IBM M1015 and apply an LSI SAS2008 IT firmware to it for various purposes, one being to use it with unRAID. To be clear, there is no such thing as a 9220-8i (I know it's printed on the back of the IBM M1015, but if you ask LSI they'll tell you there's no such product as the 9220-8i)." Thanks for clearing that up.
  18. Hmm, I am a little confused:
1. Is there more than one version of the M1015? I see people talking about the M1015 with LSI 9211 and 9220.
2. Is the IBM ServeRAID M1015 LSI 9220-8i PCI-E SAS RAID 46M0831 the same as the LSI 9240-8i RAID controller? And can I flash it to a plain LSI SAS2008 so I can use it with unRAID?
Example: http://www.ebay.co.uk/itm/IBM-ServeRAID-M1015-LSI-9220-8i-SAS-SATA-Controller-/200636413220
  19. Where did you disable NCQ? Default disk settings? Force NCQ disable?
  20. "I can't help with your issue, but it should read 4k-aligned (or sector 64). 3TB disks use their own alignment and an entirely different partitioning style: they use a GPT partition. GPT puts a 'protective' partition table in the MBR to make legacy utilities think the disk is entirely allocated, so many older utilities will just report the partition starting on sector 1. The GPT partition is 4k-aligned, so do not worry... and the -A option had absolutely no effect on the preclear, because the drive was over 2.2TB in size, but it did no harm either. The '-A' option was ignored because of the size of the drive. It is normal for fdisk to see the protective MBR partition and show it as starting on sector 1; it just has no way to know about the actual GPT partition. Joe L." Thanks a lot! Now I can relax.
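The alignment rule Joe L. describes can be checked with a tiny sketch (my own illustration, not part of the preclear script): a partition start is 4K-aligned when its byte offset, start sector times 512, is a multiple of 4096, i.e. when the sector number is divisible by 8.

```python
# Sketch: check whether a partition's start sector is 4K-aligned,
# assuming 512-byte logical sectors.
def is_4k_aligned(start_sector, sector_size=512):
    return (start_sector * sector_size) % 4096 == 0

print(is_4k_aligned(64))  # True: sector 64 (the "-A" style start) is aligned
print(is_4k_aligned(1))   # False: sector 1, as legacy tools report the protective MBR
```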
  21. The parity Settings page displays this information. Maybe it is OK?