Posts posted by optiman
-
ok that's great news! I've followed those instructions and it is rebuilding both disk 2 and parity right now. Once finished, I will post the diag file again and report back.
Thank you!!!
-
diag file attached. I downloaded it right after I booted up the server.
-
We lost power and my server auto shut down when the UPS got low, so I just booted it back up. I lost the system log. Next time I will download the diag file right after each step. I'm guessing you don't want the current one, since I just booted up.
What is the next step?
Thank you so much for helping!
-
Ok, ran the repair and this was the output:
Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ALERT: The filesystem has valuable metadata changes in a log which is being destroyed because the -L option was used.
        - scan filesystem freespace and inode maps...
sb_icount 34816, counted 480192
sb_ifree 243, counted 251
sb_fdblocks 245643713, counted 791827383
        - found root inode chunk
Phase 3 - for each AG...
        - scan and clear agi unlinked lists...
        - process known inodes and perform inode discovery...
        - agno = 0
        - agno = 1
        - agno = 2
        - agno = 3
        - agno = 4
        - agno = 5
        - agno = 6
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
        - process newly discovered inodes...
Phase 4 - check for duplicate blocks...
        - setting up duplicate extent list...
        - check for inodes claiming duplicate blocks...
        - agno = 0
        - agno = 2
        - agno = 1
        - agno = 6
        - agno = 5
        - agno = 4
        - agno = 3
        - agno = 7
        - agno = 8
        - agno = 9
        - agno = 10
Phase 5 - rebuild AG headers and trees...
        - reset superblock...
Phase 6 - check inode connectivity...
        - resetting contents of realtime bitmap and summary inodes
        - traversing filesystem ...
        - traversal finished ...
        - moving disconnected inodes to lost+found ...
Phase 7 - verify and correct link counts...
Maximum metadata LSN (2:880019) is ahead of log (1:2).
Format log to cycle 5.
done
I did have to use the -L
I then restarted and the drive was still disabled
What should I try next?
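For anyone following the same steps: on Unraid this kind of XFS repair is run against the disk's md device with the array started in maintenance mode. A hedged sketch, where /dev/md2 (the md device for disk 2) is an assumption for this server:

```shell
# Dry run first: -n checks the filesystem but makes no changes.
# /dev/md2 is an assumed device name for disk 2.
xfs_repair -n /dev/md2

# If repair refuses to run because of a dirty log and the disk cannot
# be mounted, -L zeroes the log before repairing. As the ALERT in the
# output above warns, this can discard recent metadata changes.
xfs_repair -L /dev/md2
```

Note that repairing the filesystem does not re-enable a disabled disk; Unraid still has to rebuild the drive from parity afterwards.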
-
Damn, the file overwrote the first one, not my day. I will try to repair disk 2 as you suggested.
-
I downloaded the diag file first thing in the morning, before I rebooted, so I have the previous logs, see attached.
-
Thanks. No, all disks' SMART logs are good, no errors.
I will check the controller and all cables and power. How can I get unraid to recognize disk 2 and parity again to try again?
-
Anyone using the new WD 14TB Red Pro drives with an LSI controller with success?
I've never had 2 drives fail at the same time, very weird. Both are IronWolf 12TB and only 3 years old. SMART data and logs look good, so I'm not sure why Unraid disabled them. Diag file attached if anyone can help sort out what happened.
Thanks!
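For anyone wanting to double-check SMART from the console rather than the GUI, a hedged sketch with smartctl; the device name /dev/sdb is an assumption (list your drives first):

```shell
# List the drives the system sees.
smartctl --scan

# Full SMART identity, attributes, and error log for one drive
# (device name assumed -- substitute yours from the scan).
smartctl -a /dev/sdb

# Some drives behind SAS HBAs need the SAT device type spelled out.
smartctl -a -d sat /dev/sdb
```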
-
binhex/arch-sickchill:latest
that worked for me
-
Great! Thank you for letting us know.
-
Read the posts above yours and you'll see that you'll need to roll back to an earlier version.
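Rolling back generally means pinning the container to an older image tag instead of :latest. A sketch; the tag shown is a made-up example, so check the image's tag list on Docker Hub for a real one:

```shell
# Pull a specific older tag instead of :latest.
# The tag below is illustrative only -- look up the actual tag
# you want from the binhex/arch-sickchill tag list.
docker pull binhex/arch-sickchill:some-older-tag

# On Unraid you'd normally do this from the GUI instead: edit the
# container and change the Repository field from
#   binhex/arch-sickchill:latest
# to the pinned tag, then apply to recreate the container.
```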
-
Thanks guys - rolled back and working well now.
-
how do I roll back? I've never had to do that with a docker.
-
same for me too
-
16 minutes ago, visionmaster said:
I tried updating to 6.9.1 two weeks ago from 6.8.3, which has been stable for years. I hit the Seagate drive issue: after the drives spin down and then spin back up, I start getting read errors on 2 models of my Seagate drives (ST8000VN004 and ST8000VN0022). Once I reverted, no issues and stable again. I just tried to upgrade to 6.9.2 and immediately got the same spin-up read errors on the same 2 drives, so I went back to stable 6.8.3. Hopefully it's fixed in the next update. Those are the only Seagate drive models affected on my system; my 4TB Seagates work fine. My drive controllers are Supermicro (MV88SX6081 8-port SATA II PCI-X Controller and MV64460/64461/64462 System Controller, Revision B).
First, you'll want to read and decide if you want to make the suggested drive fw changes. https://forums.unraid.net/topic/103938-69x-lsi-controllers-ironwolf-disks-summary-fix/
Can you confirm which controller you have these two Seagate drives connected to?
-
This is great! It will be nice to be able to test things in a VM before making big changes. Thanks for the guide!
-
Ok, now we have several Seagate models affected, something is just not right here. Does anyone know what actually changed to cause this? Can the Unraid team fix it in a future release, or does this mean everyone running Seagate drives is at risk? Even if the fix works today, how do you know it will be ok in the next release? It seems there is a deeper issue here that must get addressed.
With all of this, I'm staying on 6.8.3 for now and will continue to enjoy my trouble-free server. I don't have any spare drives to test with, and data loss is not an option for me.
A fix that does not involve messing with drive fw or settings would be much appreciated.
-
I'm trying to decide if I should update the fw on the 8TB drives or leave them on SN04. I haven't had any issues, and they say you should not upgrade unless you are having issues.
Advice please - upgrade to SN05 on those 8TB drives first, or leave them, make the changes using TDD's instructions, and then upgrade Unraid?
-
Yes, thank you! I didn't see that thread. I've posted there now, thank you!
-
Thank you all for this thread, very helpful. I'm still on 6.8.3 and I have several Seagate ST8000NM0055 (standard 512E) drives, firmware SN04, which are listed as Enterprise Capacity. I just checked and Seagate has a firmware update for this model, SN05. I also have several Seagate ST12000NE0008 IronWolf Pro drives with firmware EN01; no firmware updates available. My controller is an LSI 9305-24i x8, bios P14 and firmware P16_IT. I've had zero issues, uptime 329 days.
I was thinking of using the Seagate-provided USB Linux bootable flash builder, booting to that, and running the commands outside of Unraid. Given I only have Seagate drives, I will need to do them all. Has anyone tried this with success?
-
1 hour ago, jungle said:
I've also got 4 x 2tb Seagate drives and no issues. With 6.9.1. Just updated to 6.9.2 a few mins ago.
awesome, thank you!
-
14 minutes ago, DuzAwe said:
I have almost all Seagate drive and have had no issues.
That is awesome to hear! I think it may have to do with which controller is in use. Mine is a LSI 9305-24i x8. I see you are also running LSI card.
Weird how some are having the issue and some are not.
-
Has anyone using Seagate drives updated to 6.9.2 yet? I'm still on 6.8.3 due to people posting that their Seagate drives were having errors and dropping. As posted by @TDD, the issue is a result of a recent merge into the combined mpt3sas driver and kernel. It was all fine under 4.19. TDD also said that he reported this bug, so I would like to know if 6.9.2 fixed the issue before I try to upgrade. I have not made any changes to my Seagate drives, and my server is rock solid with no issues and uptime at 329 days.
I would rather not have to disable the EPC or the low current spin up settings on all of my Seagate drives if I don't need to.
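For context, the EPC and low-current-spin-up changes discussed in the linked thread are made with Seagate's SeaChest utilities. A hedged sketch; the /dev/sg2 handle is an assumption, so scan for your own first:

```shell
# List attached drives and their handles.
SeaChest_Info --scan

# Disable the EPC (Extended Power Conditions) feature on one drive.
# The /dev/sg2 handle is assumed -- use the one from the scan above.
SeaChest_PowerControl -d /dev/sg2 --EPCfeature disable

# Disable low current spinup on the same drive.
SeaChest_Configure -d /dev/sg2 --lowCurrentSpinup disable
```

Both changes are per-drive, so they would need to be repeated for each affected Seagate disk.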
-
On 3/10/2021 at 1:17 PM, TDD said:
I believe it to be an issue in a recent merge into the combined mpt3sas driver and kernel. It was all fine under 4.19. Disable it and await any non-firmware fixes later. You can then re-enable the aggressive power saving if you wish.
I have had zero issues since this fix across all my LSI-based controllers.
Kev.
Hello Kev, just checking in to see if you've still had no issues with your Seagate drives? I have the Seagate tool on a flash drive ready to go, but I'm holding off for a while to see if the super Unraid team will fix this in the next update or not. I would rather they address this issue so I don't have to make changes to the drives. The other concern is that all of my data drives are Seagate, so if things go wrong it will be really bad for me. My server has been up and running on 6.8.3 for 322 days straight without any issues. Yes, I'm knocking on wood right now LOL.
Did you officially report this as a bug? From what you stated, it sounds like a mpt3sas driver / kernel issue that could be fixed in a future Unraid update. With so many people using Seagate drives, I'm surprised there are not more reports of issues after updating.
lost 2 seagate drives at the same time, looking at WD RED Pro
Rebuild complete and it looks like everything is ok. Should I check or test anything else? I never did find any root cause as to why both of those drives were disabled.
Diag file attached. The weird thing is it took forever to collect the diag info, so I hit Done and tried again, and it worked normally, took 10 seconds.
I'll now try to sort out what is going on with my docker not starting.
Thanks again!
tower-diagnostics-20210910-1722.zip