
I keep getting the following error, and one of my drives showed up as disabled.

 

Jun 25 20:00:35 Tower kernel: ata16: exception Emask 0x10 SAct 0x0 SErr 0x90202 action 0xe frozen

Jun 25 20:00:35 Tower kernel: ata16: irq_stat 0x00400000, PHY RDY changed

Jun 25 20:00:35 Tower kernel: ata16: SError: { RecovComm Persist PHYRdyChg 10B8B }

Jun 25 20:00:35 Tower kernel: ata16: hard resetting link

 

I tried to correct it by unassigning the drive, rebooting, and reassigning it. While it was rebuilding, something happened and I ended up having to disconnect all the drives hooked up to my Supermicro AOC-SASLP-MV8 8-Port SAS/SATA card and boot with only the drives connected to the motherboard. I rebuilt parity so I could use those drives. I think there is something wrong with the PCI card, or one of the drives is broken. How should I go about finding out which one is causing the problem? Thanks for the help.

 

syslog.zip


You should post a SMART report of the disk. I suspect either the disk is failing or you have a cabling problem to the disk. If the SMART report is clean, try shutting down the server and re-securing (or replacing) the cables to this disk.


I did check the SMART reports, and sometimes the drives marked as unformatted would not report anything at all. I checked the cables and they all seem to be well secured. The cable I use for my PCI card is a 1-to-4 breakout cable, so it would be hard to change them individually, and they are not cheap :(. What is the best way to add my drives back to unRAID while preserving the data on them, since I now have new parity that does not include the seven drives that are offline? :( Thanks.


> You should post a SMART report of the disk. I suspect either the disk is failing or you have a cabling problem to the disk. If the SMART report is clean, try shutting down the server and re-securing (or replacing) the cables to this disk.

 

You don't think there is something wrong with my PCI card?


Sounds like the drive is losing connectivity with the computer. It could be anything: a bad controller, bad SATA cable, bad power splitter, bad backplane, bad PSU, bad drive, etc. It may take some experimenting to figure out.


> Sounds like the drive is losing connectivity with the computer. It could be anything: a bad controller, bad SATA cable, bad power splitter, bad backplane, bad PSU, bad drive, etc. It may take some experimenting to figure out.

 

lol, great... Well, at least I know my motherboard, CPU, PSU, and 6 drives are good, since I have not seen any problems since I rebuilt parity with those drives. I have a 700W PSU; do you think that's enough to power 14 drives? Some are 2TB green drives and some are 1.5TB Seagate 7200 RPM drives. I think I will get a new PCI card and see if that takes care of the problem.

 

Thanks for your help.


One thing that makes me think the PCI card is bad is that sometimes, when I boot up, the port check on the PCI card does not complete and hangs. I can see the server output when I hook a monitor to it.


Is there any way to re-add the old drives that still have data on them to my array? I can see the data is still intact on them.


Would you recommend http://www.newegg.com/Product/Product.aspx?Item=N82E16817139020 or http://www.newegg.com/Product/Product.aspx?Item=N82E16817139006? 750W might be overkill for my max-15-drive server, but it might be good if I look to expand further down the road :). Currently, I do have mixed drives, 7200 RPM and green, in my server. I use http://www.newegg.com/Product/Product.aspx?Item=N82E16817994028 for the 3-in-5 drive cage configuration. Drive temps go to the mid-40s on the 7200 RPM drives and the 30s on the green drives when they spin up. It used to go up to 50 on the 7200s, but I have since added some more fans to the box to get the temperature down.


> What model PSU? Some 700W PSUs are not suitable for powering 14 drives.

 

> Really? That would be cool if that's the problem. I was able to mount each of the drives connected to the PCI card individually and run SMART tests on them; all seem fine. My PSU is the following: http://www.newegg.com/Product/Product.aspx?Item=N82E16817341018, OCZ ModXStream Pro 700W Modular High Performance.

According to this drawing of that power supply's cabling, all the SATA and Molex connectors are on the 12V2 rail:

http://www.ocztechnology.com/res_old/manuals/psu/OCZMXSP500-700.pdf

According to this wattage chart, that is a 25-amp rail:

http://www.ocztechnology.com/images/awards/mxsp_wattage_charts.jpg

That same rail is shared by the PCIe power connectors (though odds are you are not using those connectors).

 

Even with all green drives, at 2 amps each at spin-up, that's 28 amps, over the limit of the single 12V2 rail, and that is not counting the power draw of the fans.

 

Since it seems you have a number of non-green drives, you are well over the limit of that supply (figure closer to 3 amps per non-green drive when spinning up). If, for example, you have 7 non-green and 7 green drives, that is (7*3)+(7*2) = 35 amps, plus a few more amps for the fans. You need 38 to 40 amps of capacity for the disks alone. This is why most multi-rail power supplies are not suitable for large arrays of disks.
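A quick way to sanity-check a PSU rail against your own drive mix is a back-of-the-envelope calculation like the one above. A minimal Python sketch, using the roughly 2 A (green) and 3 A (7200 RPM) spin-up figures quoted in this post; the per-drive numbers and the fan allowance are assumptions, not measured values:

```python
# Rough worst-case 12V current estimate for a drive array at spin-up,
# assuming all drives spin up simultaneously (no staggered spin-up).

GREEN_SPINUP_AMPS = 2.0  # assumed spin-up draw per 5400 RPM "green" drive
FAST_SPINUP_AMPS = 3.0   # assumed spin-up draw per 7200 RPM drive

def spinup_amps(green_drives: int, fast_drives: int, fan_amps: float = 2.0) -> float:
    """Estimated peak 12V current if every drive spins up at once."""
    return (green_drives * GREEN_SPINUP_AMPS
            + fast_drives * FAST_SPINUP_AMPS
            + fan_amps)

# 7 green + 7 fast drives: (7*2) + (7*3) = 35 A for disks, plus 2 A for fans.
# That is well over a single 25 A 12V2 rail.
print(spinup_amps(7, 7))  # 37.0
```

Compare the result against the amp rating of the specific 12V rail feeding the SATA/Molex connectors, not the PSU's total wattage.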



 

Thanks for the info. Now I know why I have been having intermittent share drives dropping and the server going offline. I think I might have found my problem. It never occurred to me that the PSU's amperage spec is more important than the watts. :P

 

So I went ahead and ordered the http://www.newegg.com/Product/Product.aspx?Item=N82E16817139006. Can I just add my old drives back, do an initconfig, rebuild parity, and be good to go with whatever data I have on those drives?

 

Thanks for your help.

