System won't unmount completely and locked out of GUI



I am testing my server to figure out how it handles a drive failure, and I get the messages below after stopping the array with a "bad drive." In this state I would lose data if another drive went bad. Once it started unmounting I couldn't click anywhere else in the GUI and can't get back in; the most I can do is telnet in. How do I fix this?

 

Stop SMB...Spinning up all drives...Sync filesystems...
Unmounting disks...Retry unmounting disk share(s)... (repeated 15 times)

 

The other odd thing: I tried pulling all the drives except the cache, parity, and one data disk, and without running a parity check I was somehow still able to see and use the full 4TB share, even though the single remaining data drive is only 1TB. Shouldn't I lose all data once a second drive has failed without a parity check after the first?


I am testing my server to figure out how it handles a drive failure, and I get the messages below after stopping the array with a "bad drive." In this state I would lose data if another drive went bad. Once it started unmounting I couldn't click anywhere else in the GUI and can't get back in; the most I can do is telnet in. How do I fix this?

When the system can't unmount a disk, it means something is keeping the disk busy. Was a movie streaming when you tried this? What plugins, if any, do you have running?

 

The other odd thing: I tried pulling all the drives except the cache, parity, and one data disk, and without running a parity check I was somehow still able to see and use the full 4TB share, even though the single remaining data drive is only 1TB. Shouldn't I lose all data once a second drive has failed without a parity check after the first?

How many data disks do you have, and what size are they? Are you sure you left the cache drive in?

 

I may have had a VM running, as the Docker apps I use (I have 7-8 of them) have never caused this before.

 

As far as disks go, I bought a Basic license last night, so I had the following configuration at the time of testing:

Data: 3× 1TB, 1× 500GB
Cache: 1× 500GB SSD
Parity: 1× 4TB Red


Something was either reading from or writing to a drive, which prevented the unmount. As far as I know, stopping the array should kill any VM running under the built-in VM manager. When an unmount hangs, you can usually figure out what's responsible by entering this at the terminal:

lsof /mnt/disk*

Also, if you had a terminal open whose current directory was inside a share, that alone is enough to block unmounting.
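To see what that terminal check is actually reporting, here is a rough, self-contained sketch: it parks a background shell inside a "share" (a temp directory standing in for a real /mnt/disk* mount) and then walks /proc to find every process whose working directory is under it, which is essentially what `lsof` surfaces for an unmountable disk. The paths and variable names are illustrative, not unRAID's own:

```shell
# Stand-in for an unRAID disk mount; the real check targets /mnt/disk*.
MNT=$(mktemp -d)

# Simulate the offender: a background shell parked inside the "share".
( cd "$MNT" && sleep 30 ) &
HOLDER=$!
sleep 1  # give the subshell time to change directory

# Walk /proc and collect any process whose cwd is under $MNT --
# these are the processes lsof would report as keeping the mount busy.
BUSY=""
for p in /proc/[0-9]*; do
    cwd=$(readlink "$p/cwd" 2>/dev/null) || continue
    case "$cwd" in
        "$MNT"|"$MNT"/*) BUSY="$BUSY ${p#/proc/}" ;;
    esac
done
echo "processes holding $MNT:$BUSY"

kill "$HOLDER" 2>/dev/null
rmdir "$MNT" 2>/dev/null || true
```

On a real server, `lsof /mnt/disk*` (or `fuser -vm /mnt/disk1`) gives the same answer directly; once the offending process is closed or killed, the array can finish stopping.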

 

I've never had a multiple-drive failure (and it's not something I'm going to replicate on my system), so I can't really comment on what unRAID actually does. If I had to take a wild guess, I'd say the system would stay up and running, and the UI would show multiple red balls. If you happened to have the dynamix cache-dirs plugin running, it's possible you'd still be able to see all of the folders stored on the drives; you would not, however, be able to actually access files stored on one of the pulled drives.
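The 4TB share you could still browse fits with how user shares are composed: /mnt/user/Share is (roughly) a union of the Share folder from every data disk, so whichever disks remain still contribute their entries. A minimal sketch of that merge, using throwaway directories in place of the real /mnt/disk1 and /mnt/disk2:

```shell
# Throwaway stand-ins for per-disk mounts (real paths would be /mnt/diskN).
ROOT=$(mktemp -d)
mkdir -p "$ROOT/disk1/Media" "$ROOT/disk2/Media"
touch "$ROOT/disk1/Media/a.mkv" "$ROOT/disk2/Media/b.mkv"

# The user share is roughly the union of the same folder on each disk.
union=$(for d in "$ROOT"/disk*/Media; do ls "$d"; done | sort -u)
echo "full array: $union"

# "Pull" disk2: its files vanish, but the share and disk1's files remain.
rm -rf "$ROOT/disk2"
after=$(for d in "$ROOT"/disk*/Media; do ls "$d"; done | sort -u)
echo "after pulling disk2: $after"
rm -rf "$ROOT"
```

This is why the share can still appear and list content with a disk missing: the names come from the disks that are present, even though files stored on the pulled disk are gone.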

 

If you have email notifications enabled, the system should have informed you about the multiple failures.

 

A syslog would also be helpful, if you haven't reset the system since this problem happened.


Thanks for the info. For now I just hard-reset, since no real data is on it and I can afford to reinstall if I must. The other odd thing is that the server isn't picking up an IP on either interface, even though I've also set one statically. If this happens again I'll see if I can fix it without hard resetting.
