Array won't start anymore



First of all, I'm from Germany, so I hope you can follow my English.

 

My problem:

I can't start my array anymore.

I rebooted the server, and when I hit the Start button in the web interface it ends up in a loading loop.

When the array isn't started, Unraid and the web interface work just fine.

The first time this happened I was still able to reboot the server via PuTTY. After that I couldn't shut down or reboot the server from the PuTTY console and had to force a shutdown with the power button. The console only works when the server isn't stuck in the loop.

 

I have now forced the shutdown twice, so Unraid wants to start a parity check when I start the array.

 

What I did before the error occurred:

I disabled disk 2 in my media share and enabled read & write for my user.

I moved ~300 GB of media files (mkv, mp3, etc.) via Windows Explorer from disk 2 (2 TB HDD) to disk 7 (4 TB HDD).

Then the copy process aborted with an "unexpected network error".

I don't know what caused it; maybe the Windows backup started during the copy process. It also stopped with an "unexpected network error".

My media share contains disks (2), 4, 6 & 7, and my backup share disks 3 & 5, so there should be no conflict between the disks.

 

What is functional:

I can log in via PuTTY as root.

I can restart the server, and the web interface starts.

I can ping my Unraid server at 192.168.xxx.y.

 

My specs:

Unraid Server Plus v6.3.3

Intel Xeon E3-1246 v3

32 GB RAM

2x 480 GB SanDisk Ultra II SSDs as cache

7 different-sized HDDs, all Seagate except one WD

1 Seagate 4 TB parity drive

2x LAN, set up as backup

 

 

 


I first ran xfs_repair with -n as the option; after that I started the repair again with the option field blank, as described in the help.

This gave me the following error report:

Phase 1 - find and verify superblock...
Phase 2 - using internal log
        - zero log...
ERROR: The filesystem has valuable metadata changes in a log which needs to
be replayed.  Mount the filesystem to replay the log, and unmount it before
re-running xfs_repair.  If you are unable to mount the filesystem, then use
the -L option to destroy the log and attempt a repair.
Note that destroying the log may cause corruption -- please attempt a mount
of the filesystem before doing this.

So since I can't mount my disk, I should run xfs_repair with the -L option, am I right?
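For what it's worth, the error text quoted above asks for a mount attempt first, because mounting replays the log without destroying it; -L is only the fallback. A minimal sketch of that check, where "/dev/md2" and "/mnt/test" are placeholders for the affected disk's md device and a scratch mount point (array started in maintenance mode):

```shell
# Sketch only: try to replay the XFS log by mounting once before using -L.
# /dev/md2 and /mnt/test are placeholders for the affected disk and mount point.
mkdir -p /mnt/test
if mount -t xfs /dev/md2 /mnt/test; then
    umount /mnt/test    # log replayed; re-run xfs_repair without -L
else
    echo "mount failed; xfs_repair -L is the remaining option"
fi
```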


The array works again!

It worked exactly as you said.

In short:

Start the array in maintenance mode.

Go to the corrupted disk's menu and start xfs_repair with the option field blank (for repair).

If the error occurs, follow the help and run xfs_repair with the option -L.

After that, start xfs_repair with the option field blank again.

That worked for me.
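For readers who prefer the console, the same sequence can be sketched as direct xfs_repair calls (the webGUI's repair field runs this tool). "/dev/md2" is a placeholder for whichever disk is corrupted, and the array must be started in maintenance mode; this is a sketch, not verified output:

```shell
# Sketch of the repair sequence above (array in maintenance mode).
# /dev/md2 is a placeholder for the corrupted disk's md device.
xfs_repair -n /dev/md2   # 1. dry run: report problems, change nothing
xfs_repair /dev/md2      # 2. actual repair (the "blank option" run)
# 3. only if step 2 stops with the "valuable metadata changes in a log"
#    error and the disk cannot be mounted to replay the log:
xfs_repair -L /dev/md2   # zeroes the log; may cause some corruption
xfs_repair /dev/md2      # 4. repair again after the log is destroyed
```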

 

Thank you very much everyone!

  • 4 years later...
On 4/8/2017 at 3:04 PM, JorgeB said:

Disk2 is the problem:
 


Apr  8 21:41:25 Cardinal kernel: XFS (md2): _xfs_buf_find: Block out of range: block 0x874704438, EOFS 0xe8e08870 

 

Start in maintenance mode and run xfs_repair on disk2 (md2)
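In case it helps later readers: the device name in parentheses in that syslog line is what identifies the disk ("md2" is disk2). A hypothetical way to pull it out of a saved syslog line, using the line quoted above as sample input (in practice you would grep /var/log/syslog or the syslog inside your diagnostics zip):

```shell
# Extract the md device named in an XFS error line; "(md2)" means disk2.
# The sample line is the one quoted from the diagnostics above.
line='Apr  8 21:41:25 Cardinal kernel: XFS (md2): _xfs_buf_find: Block out of range: block 0x874704438, EOFS 0xe8e08870'
echo "$line" | grep -o 'XFS (md[0-9]*)'
# → XFS (md2)
```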

Hi, where do you find which drive is the faulty one? My array is not starting; there are no errors in the parity, but I need to run the diagnostics again, and I don't know which drive I should repair.

17 minutes ago, rojarrolla said:

Hi, where do you find which drive is the faulty one? My array is not starting; there are no errors in the parity, but I need to run the diagnostics again, and I don't know which drive I should repair.

Not sure what you mean by ‘the array is not starting’?

 

I would suggest you provide a screenshot of the Main tab and a copy of your system’s diagnostics zip file (obtained via Tools->Diagnostics) attached to your next post so we can give you some informed feedback.

1 minute ago, itimpi said:

Not sure what you mean by ‘the array is not starting’?

 

I would suggest you provide a screenshot of the Main tab and a copy of your system’s diagnostics zip file (obtained via Tools->Diagnostics) attached to your next post so we can give you some informed feedback.

Thanks. I mean I can only start it in maintenance mode.

 

Here is a copy of the diagnostics:

Thanks

tower-diagnostics-20210726-1231.zip


If I try to start in normal mode, it hangs and I cannot access the graphical interface. I can get in via the console and run "powerdown -r" from there.

I will try to generate the diagnostics file when "starting" the array in normal mode.

 

Auto-start is disabled, just as a precaution since I've been running some tests. But I can enable it.

Just now, rojarrolla said:

If I try to start in normal mode, it hangs and I cannot access the graphical interface. I can get in via the console and run "powerdown -r" from there.

I will try to generate the diagnostics file when "starting" the array in normal mode.

Auto-start is disabled, just as a precaution since I've been running some tests. But I can enable it.

If you can get to the console then you can also generate the diagnostics using the ‘diagnostics’ command; they get put into the ‘logs’ folder on the flash drive.
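As a sketch, the console route just described looks like this (it assumes a root console on the Unraid box; the folder location is as stated in the post):

```shell
# From the Unraid console (local monitor or SSH):
diagnostics        # collects logs and writes a timestamped zip
ls /boot/logs/     # the zip lands in the "logs" folder on the flash drive
```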

 

Don’t enable auto-start; it is probably a good idea to have it off while investigating any issue. I just wanted to make sure it was intentional.

21 minutes ago, itimpi said:

If you can get to the console then you can also generate the diagnostics using the ‘diagnostics’ command; they get put into the ‘logs’ folder on the flash drive.

Don’t enable auto-start; it is probably a good idea to have it off while investigating any issue. I just wanted to make sure it was intentional.

Here is the one created when trying to start the array.

tower-diagnostics-20210726-1112.zip


Something strange is going on: the syslog in those diagnostics is the same as the one in the earlier set. Since you can get to the console, maybe you should fetch the current syslog directly from /var/log/syslog to see if it is different.

 


Hi, I did some digging: I used the console to run "tail" on the syslog file while starting the array, and I found that the array was actually starting correctly (good thing!), which is why there were no errors on the drives.

 

However, Unraid was bringing up a "shim-eth0" network interface with the same IP address as my eth0 card, which is why the GUI "stopped working" (it never did; it was only unreachable). I took the shim interface down, and now I can access the GUI again; the array is OK and all the shares are there.

 

So Unraid never stopped working and the array was fine; the problem was that there were two network interfaces (eth0 and shim-eth0) with the same IP address.
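This duplicate-address symptom can be spotted from the console. A minimal sketch, run here on sample text in the style of `ip -o -4 addr show` output (the interface names come from this post; the address is invented):

```shell
# Sample lines shaped like `ip -o -4 addr show` output, illustrating the
# symptom: two interfaces (eth0 and shim-eth0) carrying the same IPv4 address.
sample='2: eth0    inet 192.168.1.5/24 brd 192.168.1.255 scope global eth0
5: shim-eth0    inet 192.168.1.5/24 scope global shim-eth0'
# Field 4 is the address; anything printed by uniq -d is on more than one interface:
echo "$sample" | awk '{print $4}' | sort | uniq -d
# → 192.168.1.5/24
```

On the live system you would pipe the real `ip -o -4 addr show` output instead. The post doesn't say how the shim interface was taken down; `ip link set shim-eth0 down` would be one way, but that exact command is my guess, not the poster's.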

I hope this helps. 

 

I will keep looking to see if I can figure out why the shim-eth0 interface is created.

If you know why it happens, I'd be very thankful.

 

Thanks.

 

