Vortec Posted December 27, 2018

Unraid 6.6.6. Working great for 9 months with 4x 3TB Toshiba 7200s and one 250GB SSD.

Just had a drive fail. Bought 2 new WD Red 4TBs, replaced the bad drive with one of them, and am trying to do a parity swap. I noticed the parity swap was progressing very slowly (1% every 10 hours). I waited about 24-30 hours before shutting the system down (yes, I know, don't do this, I'm stupid, but I pulled files that I needed).

I assumed the new drive was bad, downloaded preclear, and tried running that. Got very similar results: 1% every 6-10 hours. Swapped the new WD Red for the other one; again, same results. I also changed the physical drive locations to see if that made any difference; it actually took longer for the errors to start.

I checked the logs. I am getting "ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0" constantly, to the point that it completely fills my "flash log docker" within 5 to 10 minutes.

Can anyone tell me if my SATA controller is dying, or what exactly is dead or dying? Or if there is any way I can resolve this? It could very well be user error, as I am the kind to break stuff in the most unpredictable way possible. If there is anything else I can tell you or do to help you guys, let me know. You guys are great.

datto-diagnostics-20181226-1953.zip
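As a side note for anyone reading later: a quick way to gauge how fast these libata exceptions are piling up is to count the matching lines in the syslog. This is a hedged sketch — the log path and sample lines below are fabricated for illustration; on a live Unraid box you would point it at the syslog inside the diagnostics zip or at `/var/log/syslog`.

```shell
# Fabricated sample of the kind of repeating libata noise described above.
# Real logs would come from the diagnostics zip or /var/log/syslog.
cat > /tmp/sample-syslog <<'EOF'
Dec 26 19:40:01 Tower kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Dec 26 19:40:01 Tower kernel: ata3.00: failed command: READ DMA EXT
Dec 26 19:40:02 Tower kernel: ata3.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x0
Dec 26 19:40:03 Tower kernel: ata3: hard resetting link
EOF

# Count only the exception lines; the ataN number identifies which
# SATA port (and therefore which cable/backplane slot) is misbehaving.
grep -c 'ata[0-9]*\.00: exception' /tmp/sample-syslog
# → 2
```

If the count keeps climbing on the same ata port no matter which physical drive is attached, the port, cable, or backplane slot is the suspect rather than the disk.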
JorgeB Posted December 27, 2018

Replace both cables and/or use a different SATA port (swap with another disk if needed).
Vortec Posted December 27, 2018 (Author)

14 hours ago, johnnie.black said:
> Replace both cables and/or use a different SATA port (swap with another disk if needed)

I tried both. I also figured it out: the way the caddy was holding the drive, it was not seating properly into the backplane. I used tape to hold the drive in the caddy a bit farther forward than the screws did. It has been solid for 24 hours now. Thanks again for your help.
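For anyone landing here later: after reseating a drive that threw ATA exceptions, it's worth confirming the disk itself stayed healthy. The attributes to watch are Reallocated_Sector_Ct and Current_Pending_Sector, which should stay at zero. A hedged sketch — the SMART output below is fabricated; on a real system you would pipe in `smartctl -A /dev/sdX` (device name is an assumption):

```shell
# Fabricated smartctl -A output for illustration; a live check would be:
#   smartctl -A /dev/sdb | awk '...'
cat > /tmp/smart-sample <<'EOF'
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
EOF

# Print attribute name and raw value (last column) for the two
# attributes most indicative of real media failure.
awk '/Reallocated_Sector_Ct|Current_Pending_Sector/ {print $2, $NF}' /tmp/smart-sample
# → Reallocated_Sector_Ct 0
# → Current_Pending_Sector 0
```

Nonzero raw values here would point at a genuinely failing disk rather than the cabling/backplane issue found in this thread.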
Archived
This topic is now archived and is closed to further replies.