krone6 Posted May 16, 2015

I'm testing my server to figure out how it behaves when a drive fails, and I get the messages below after stopping the array with a "bad" drive. In this state I would lose data if another drive went bad. Once it started unmounting I couldn't click anywhere else in the GUI and can't get back in; the most I can do is telnet in. How do I fix this?

Stop SMB...
Spinning up all drives...
Sync filesystems...
Unmounting disks...
Retry unmounting disk share(s)...
Unmounting disks...
Retry unmounting disk share(s)...
[the last two lines repeat indefinitely]

The other odd thing: I pulled everything except the cache, parity, and one data disk, and without a parity check I was somehow able to see and use the full 4TB share, even though that single data drive is only 1TB. Shouldn't I lose all data once a second drive has failed without parity checking the first?
Squid Posted May 17, 2015

Quote (krone6): "Once I unmounted I couldn't click anywhere else in the GUI and can't get back in. The most I can do is telnet in. How do I fix this?"

When the system can't unmount a disk, it means something is keeping the disk busy. Was a movie streaming when you tried this? What plugins, if any, do you have running?

Quote (krone6): "... was able to somehow see and use the full 4TB share even though the single drive's 1TB. Shouldn't I lose all data once a second drive has failed without parity checking the first?"

How many data disks do you have, and what size are they? Are you sure you left the cache drive in?
krone6 (Author) Posted May 17, 2015

Quote (Squid): "When the system can't unmount a disk it means that something is keeping the disks busy. Was there a movie streaming when you were trying this? What (if any) plugins do you have running? ... How many data disks do you have? (and what size are they?) Are you sure that you left in the cache drive?"

I may have had a VM running; the docker apps I use (I have 7-8 of them) have never caused this before. As for disks, I bought a Basic license last night, so I had the following configuration at the time of testing:

Data: 3 x 1TB, 1 x 500GB
Cache: 1 x 500GB SSD
Parity: 1 x 4TB Red
Squid Posted May 17, 2015

Quote (krone6): "I may have had a VM running ... Data: 3 x 1TB, 1 x 500GB; Cache: 1 x 500GB SSD; Parity: 1 x 4TB Red"

Something was reading from or writing to the drive, and that prevented the unmounting. When that happens, you can usually figure out what's doing it by entering this at the terminal:

lsof /mnt/disk*

So far as I know, running VMs through the built-in VM manager should kill the VM when you try to stop the array. Also, if you had a terminal open whose current directory was inside a share, that alone is enough to block unmounting.

I've never had a multiple drive failure (and it's not something I'm going to replicate on my system), so I can't really comment on what unRaid actually does. If I had to take a wild guess, I'd say the system would stay up and running, and the UI would show multiple red balls. If you happened to have the dynamix cache-dirs plugin running, it's possible you'd still see all of the folders stored on the drives; you would not, however, be able to actually access anything stored on one of the pulled drives. If you have email notifications enabled, the system would have informed you about the multiple failures.

A syslog would also be helpful (if you haven't reset the system since this problem happened).
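The "lsof /mnt/disk*" suggestion can be demonstrated without real array disks. The sketch below is an assumption-laden stand-in: a temp directory plays the role of /mnt/disk1, and /proc is walked directly to find processes whose working directory keeps the path "busy", which is the same information lsof reports (on a real server you would just run lsof /mnt/disk*):

```shell
#!/bin/sh
# Demo: find processes whose current working directory sits inside a path.
# Such a process is what keeps a disk "busy" and blocks unmounting.
# The temp dir below stands in for /mnt/disk1 (hypothetical for this demo).
d=$(mktemp -d)

( cd "$d" && sleep 3 ) &    # background shell "idling" inside the directory
holder=$!
sleep 1

# Equivalent of lsof's cwd check, done via /proc (Linux-specific):
found=""
for p in /proc/[0-9]*; do
  cwd=$(readlink "$p/cwd" 2>/dev/null) || continue
  case "$cwd" in
    "$d"*) found="$found ${p#/proc/}" ;;
  esac
done
echo "PIDs holding $d:$found"

wait "$holder"
rmdir "$d"
```

On a real unRAID box, lsof additionally shows open files (not just working directories), so it also catches a VM disk image or a docker container writing to the array.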
krone6 (Author) Posted May 17, 2015

Quote (Squid): "Something was either reading or writing to the drive that was preventing the unmounting. ... A syslog would also be helpful (if you haven't reset the system since this problem happened)"

Thanks for the info. For now I just hard-reset, since no real data is at stake and I can afford to reinstall if I must. The odd thing is the server isn't picking up an IP on either interface, even though I've set one statically. If this happens again I'll see if I can fix it without hard resetting.
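When a box comes back with no IP on either interface, the read-only iproute2 commands below show what the kernel actually assigned. This is a generic diagnostic sketch, not something from this thread; the interface name and addresses in the comments are placeholders:

```shell
#!/bin/sh
# Bail out gracefully if iproute2 isn't installed on this system:
command -v ip >/dev/null 2>&1 || { echo "iproute2 not installed"; exit 0; }

# Read-only checks (safe to run over telnet or the console):
ip -br link show    # per-interface state (UP/DOWN) and MAC
ip -br addr show    # addresses actually assigned right now

# If a static config didn't stick, it can be reapplied by hand;
# the interface name and addresses here are made-up examples:
#   ip addr add 192.168.1.50/24 dev eth0
#   ip route add default via 192.168.1.1
```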
nate1749 Posted August 10, 2017

Thanks for the help on this one. For me the problem wasn't a VM or a container; I had a terminal session I'd forgotten about idling in /mnt/disk6, and as soon as I moved out of /mnt the shares unmounted successfully.
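The "step out of the directory, then retry" fix can be shown in miniature. On a real mount, a shell sitting inside /mnt/disk6 makes umount fail with "target is busy"; the sketch below uses a plain temp directory (an assumption, since no mount is available) to illustrate the same pattern:

```shell
#!/bin/sh
d=$(mktemp -d)   # stand-in for /mnt/disk6
cd "$d"          # this shell is now "idling" inside the directory;
                 # on a real mount, umount would fail with EBUSY at this point
cd /             # step out of the tree, releasing the directory
rmdir "$d"       # cleanup now succeeds
echo "removed: $d"
```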