twilightzone

Members · 31 posts

  1. Thanks everyone. Since I still had the original three 1TB drives with the original data, I decided to preclear the new 4TB drive (like I should have in the first place) and then copy over all of the data again. It took a number of days but seems to be OK now, and when I rebuilt my parity drive afterwards I got no errors. SMART is passing and doesn't show any pending sectors now.
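As a quick way to confirm what this post describes, the pending/reallocated counts can be pulled out of a saved smartctl report (the filename is just an example; generate one with something like `smartctl -a /dev/sdo > smart_disk4.txt` for your own drive):

```shell
# Print the RAW_VALUE of the sector-health attributes from a saved
# SMART report. Healthy after the preclear means all three are 0.
report="smart_disk4.txt"   # example path, adjust to your report

awk '/Current_Pending_Sector|Reallocated_Sector_Ct|Offline_Uncorrectable/ {
    # the raw value is the last field on each attribute line
    printf "%s = %s\n", $2, $NF
}' "$report"
```

This only parses the text report, so it is safe to run anywhere; the attribute names shown are the standard ATA SMART ones that smartctl emits.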
  2. Oh boy, I didn't see the 1 sector pending. Any command to tell the drive to commit? I never got any errors when I wrote the data to this drive; it was when I was reconstructing my parity after I dropped two drives from the array. What about the 32 sector read errors during the parity reconstruct (every eighth sector from 1040768160 to 1040768408)? Does that mean my parity, even though it says it's valid, might not contain those 32 sectors? What's the best way to know if those 32 sectors contained data or were unused? Should I preclear the troubled disk4 and start over, moving the data from the three removed drives, if it passes?
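The suspect sectors named in this post can be probed directly with dd. This is a read-only sketch using the device name and sector range from the post (adjust both for your hardware); a drive typically only "commits" (remaps) a pending sector when that sector is written, so confirming it is unreadable first is the safe step:

```shell
# Read each of the 32 suspect sectors reported during the parity
# rebuild. dd exits non-zero on an unreadable (pending/bad) sector.
dev=/dev/sdo   # drive from the post; change to match your system

for sector in $(seq 1040768160 8 1040768408); do
    if ! dd if="$dev" of=/dev/null bs=512 skip="$sector" count=1 2>/dev/null; then
        echo "sector $sector: READ ERROR"
    fi
done
echo "scan done"
```

If a sector reads fine, it is not the pending one; if it errors, writing that sector (only after you are sure nothing needed lives there) is what usually triggers the reallocation.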
  3. Hello, I recently upgraded my unRAID server to 5.0.5, all OK. Mapped all my drives and rebuilt parity with no errors. After getting to 5.0.5 I upgraded my parity drive to 4TB and rebuilt parity on it - no problem. Then I upgraded one drive from 1TB to 4TB (Disk4) and did a parity data rebuild onto Disk4 - no problem. Moved data from Disk5 to Disk4 using a Windows PC with parity on - no problem. Moved data from Disk15 to Disk4 the same way - no problem. I then wanted to remove disk5 and disk15 from the array, since they aren't needed and I would have spare slots again. I did a new config and remapped my drives (except for disk5 and disk15, to be dropped), then started a parity rebuild - once finished I got 32 sync errors, all on Disk4. I ran reiserfsck on Disk4 with no corruptions found. Trying to figure out my next step to verify everything is OK and there's no data loss - or if there is, which file(s)? Also wondering, since parity and Disk4 are both new, whether either drive might have issues; I didn't run any preclear checks on either drive before installing. 1. Best method to know the data on disk4 is OK? Copy it? (Or will the array recreate the bad sectors from parity?) 2. Best method to test parity and/or disk4 (it has 2.5TB of data) to make sure they are good? 3. The array says parity is valid, but is it actually good? Thanks for any guidance to make sure my array will be stable going forward. The array has 13 data drives now, parity active, and two spare bays. I still have the original disk4, disk5, and disk15 offline. Syslog and SMART files for parity and disk4 attached. syslog20140818.zip smart_parity_20140818.txt smart_disk4_20140818.txt
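Since the original disk5 and disk15 are still available offline, question 1 above can be answered with a checksum comparison rather than guesswork. A minimal sketch, assuming the old drive is mounted read-only at an example path and disk4 is at its usual mount point (both paths are assumptions, adjust to your array):

```shell
# Verify the copies on disk4 against the originals on the old drive.
src=/mnt/olddisk5   # original data, mounted read-only (example path)
dst=/mnt/disk4      # the new 4TB drive

# Build a checksum list from the source, then verify it against disk4.
(cd "$src" && find . -type f -print0 | xargs -0 md5sum) > /tmp/src.md5
(cd "$dst" && md5sum -c --quiet /tmp/src.md5) && echo "all files match"
```

With `--quiet`, md5sum prints only files that FAIL, so silence plus "all files match" means the copy on disk4 is byte-identical to the originals regardless of what parity thinks.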
  4. Interesting, there is an HPA entry but it's for a different (2TB) drive, which reports the same size between 4.3 and 4.7. The drive causing issues (#4 - SDO) was reporting less under 4.3, and 4 bytes less than that under 4.7 (I don't think it reports an HPA issue in the log for drive SDO). I like the idea of making a 5.0.5 key and seeing if Disk4 is recognized OK as a precheck.
  5. Before, on 4.3.3, it was all green with no parity errors; the last check was done around a week before I started the upgrade. I'll wait to hear from Tom before jumping to 5.0.5.
  6. Yes, going back to 4.3.3 everything is fine and the array starts. I wouldn't think it would be off sector-wise, since it's just a change of four bytes! I emailed Tom but haven't heard back whether it's OK to jump to 5.0.x directly from 4.3.3.
  7. The readme for the latest stable version says to get to V4.7 first from V4.3.3? Is it OK to jump directly to V5? When I built this array it was back in the V3 days, in 2008! This is only its second upgrade. I'm an "if it's not broke, don't fix it" type of person: do what you need, then leave it be.
  8. Help - working towards upgrading to V5 from V4.3.3, I was on the first step of upgrading to V4.7. I copied the two bz files and memtest and rebooted the server (15 drives + parity, with no drive changes) and V4.7 installed. But the array didn't start, because it reports my disk4 (SDO, 1TB) as red with "replacement disk is too small". No disk was altered or changed - just the upgrade from V4.3.3 to V4.7. I do notice that the total overall size is 4 blocks less under V4.7: V4.3.3 sees it as 976,761,496 and V4.7 sees it as 976,761,492. I don't see any HPA issues in the syslog for this drive, just a "Wrong" line added. Before the upgrade the array was all happy with no errors. If I revert back to V4.3.3 the array starts. Best solution/steps to get this working? Thanks for any help. SOLUTION: Installed directly to V5.0.5, bypassing the recommended install of V4.7 first. V4.7 showed 4 blocks less on one drive so the array would not start, but V5.0.5 showed the original 4.3.3-reported size. Steps: saved my old 4.3.3 disk mappings and remapped them under V5.0.5, except the parity drive. Checked that no drives had any MBR issues before starting the array. Started the array (without a parity drive!). Ran the "New Permissions" utility. Stopped the array. Added my parity drive (a new 4TB replacing the old 2TB drive). Let parity get created on the new drive. All happy: 15 green drives + parity. Syslog.txt
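For reference, the manual upgrade this post describes amounts to replacing the kernel images on the flash drive. A minimal sketch, assuming the flash is mounted at /boot and the new release was unzipped to /tmp/unraid-5.0.5 (both paths, and the backup directory name, are assumptions; the disk-assignment file config/super.dat is worth backing up so the old mappings can be restored):

```shell
# Back up the current boot files and disk assignments, then copy in
# the new release's kernel images. Reboot afterwards and reassign
# drives from the saved mapping.
cd /boot
mkdir -p backup-4.3.3
cp bzimage bzroot config/super.dat backup-4.3.3/

cp /tmp/unraid-5.0.5/bzimage /tmp/unraid-5.0.5/bzroot .
```

Keeping the backup on the flash itself means a failed upgrade can be reverted by copying the old bzimage/bzroot back, exactly as the post did when falling back to 4.3.3.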
  9. bjp999: Thanks for the info on the time for parity checks on your 10TB system. Anyone: I still would like to know if any 8 port PCI-E cards are supported yet? Thanks
  10. I'm wanting to know if any 8 port SATA PCI-E controller cards are supported? Want to run a card in my PCI-E X16 graphics slot on my AB9 Pro. I want the fastest parity checks possible.
  11. So is it OK to use the on-board 10 ports vs. the 8 in your previous post? Along with the 4 on the Adaptec + 1 per Rosewill to get my 16 ports? What 8 port cards are supported by unRAID? (In case I just want to use 8 on-board + 8 in the PCIe x16 slot.) Thanks
  12. Drop 2 from the onboard? Are there issues with either the controller for sata8 and 9, or sata7 + esata? What 8 port cards are supported? I didn't know unRAID supported any. I really want fast parity build throughput, so it might be worth getting 8 ports using the x16 slot. I need 16 ports - I already have the drives.
  13. Okay, as a baseline I have 6 Seagate 1TB drives, initially all 6 on the ICH8R motherboard controller.... Parity check at the 1% point was about 81 MB/sec. I moved the last two drives, one on each new card... wow... 87 MB/sec... reflects the reduced load on the ICH8R, and each of the two moved drives had its own PCIe 1x lane... put both drives on one new controller... oops, down to 66 MB/sec... ouch... two drives on one lane ain't so good! Installed the Adaptec in the empty PCIe 16x video card slot (running PCI video).... put both drives on it... 92 MB/sec... I like it... so I'm gonna leave it just like this.... I can put 6 drives on the ICH8R, 4 on the Adaptec, and one on each Rosewill for 12 drives, which goes with my 4 3-in-2 Kingwins; the other on-board connections will be for the cache drive and my Slackware boot drive! ciao.... Great info, Jim. Q: For a 16 drive system on an AB9 Pro, is this the best setup: use all 9 internal SATA, the 1 external eSATA (looped back into the case), 4 on an Adaptec in the 16x slot, then 1 each on Rosewills in the 1x slots? Or 2 per Rosewill, or some other config with 2 or 3 on a PM? Does anyone make an 8 port card that works with unRAID instead of the 4 port Adaptec? Thanks
  14. X4N: Yes, I changed "halt on error" to "none". From josetann and Equilibrium it looks like the board doesn't support going headless. If you have unRAID going headless on an AB9, please fill us in!
  15. Setting up an AB9 Pro. It boots up fine and I can remote login when I have a graphics card installed. But when I remove it, it looks like the USB key blinks only 3 times after the FF boot code on the motherboard, instead of the usual 50 seconds or so. Is there any setting required in syslinux.cfg to run headless? Thanks
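For what it's worth, the usual syslinux.cfg change for headless boxes is to redirect the console to a serial port. This is a generic syslinux sketch, not a confirmed fix for the AB9's BIOS behaviour (as the later reply notes, the board itself may refuse to boot without a video card); the baud rate and label name are assumptions:

```
serial 0 9600
default unRAID
prompt 0
timeout 50

label unRAID
  kernel bzimage
  append initrd=bzroot console=ttyS0,9600 console=tty0
```

The `serial` directive makes the boot menu usable over COM1, and the `console=` kernel parameters send boot messages there as well; the BIOS "halt on error: none" setting still has to be changed separately.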