SOLVED - Unraid stuck on Mounting Disks



So after probably 5 years of no issues at all, I decided to do some work on my array.  I had a few smaller drives that I was going to remove, so I used Krusader to move a bunch of files around.  During this process I broke Docker, which I managed to get fixed, and then reinstalled my containers.  Everything seemed to be going fine until my CPU pinned at 100% and I eventually had to hard power down the server.  Now, after bringing it back up, it's stuck at "Array Starting - Mounting disks".  I didn't find anything obvious in the logs, but to be honest I'm not the strongest command line user.  Any help would be GREATLY appreciated.  I've attached my syslog, and the version of Unraid I'm using is 6.6.7.  I had SABnzbd, Sonarr, Krusader, and Deluge containers installed in Docker.

syslog.txt

 

Below is my hardware:
 

Model: Custom

M/B: ASUSTeK Computer INC. - P5KPL-CM

CPU: Intel® Core™2 Duo CPU E8500 @ 3.16GHz 

HVM: Enabled

IOMMU: Disabled

Cache: 64 kB, 6144 kB

Memory: 4 GB (max. installable capacity 4 GB)

Network: eth0: 1000 Mb/s, full duplex, mtu 1500

Kernel: Linux 4.18.20-unRAID x86_64

OpenSSL: 1.1.1a

scruffy-diagnostics-20190422-2121.zip

Link to comment

Other than your disks still being ReiserFS, I don't see anything. And I can't tell how full they are since they aren't mounted.

 

Do you have a backup of your flash drive?

 

You might try editing config/disk.cfg on the flash drive to set startArray="no", then reboot and see if you can start the array in Maintenance Mode.
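
From the console or over SSH, something along these lines would do it (just a rough sketch; it assumes the flash drive is mounted at /boot as usual and that disk.cfg currently has startArray="yes"):

   nano /boot/config/disk.cfg    # change startArray="yes" to startArray="no", save and exit
   reboot                        # the array should now stay stopped so you can try Maintenance Mode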

Link to comment

So I'm pretty sure it's an issue with something to do with Docker.  I moved the docker.img file and the appdata directory off of one disk to a different one (as I'm removing some 1TB and 2TB disks and eventually putting in 8TB drives to replace them).  I rebooted in Safe Mode and it still didn't start, and the CPU was pinned at 100% the entire time.  Using htop I killed a few Docker processes, and the array then came online.  Here's the diagnostic after it came back online.  Thanks a lot for your help so far!  I know I won't lose any data, but it's still pretty nerve-wracking :)
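
For anyone finding this later, what I did in htop was roughly equivalent to the following from the console (the PIDs here are made up; in htop I just sorted by CPU% and pressed F9 on the offending processes):

   htop               # spot the runaway Docker processes and note their PIDs
   kill 12345 23456   # hypothetical PIDs; sends SIGTERM to those processes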

scruffy-diagnostics-20190422-2329.zip

Link to comment

Well the disks don't look very full, but you really should consider converting them to XFS.

 

And your dockers would perform better if you had a cache disk for your appdata, domains, and system shares. If you're not running VMs, domains doesn't matter, but you could disable VMs and then there would be no need for libvirt.img.

 

I don't know of any particular docker that might be to blame, but you may not have enough RAM to run many dockers. And in general, your hardware is a bit weak to expect much.

Link to comment
1 hour ago, trurl said:

Well the disks don't look very full, but you really should consider converting them to XFS.

 

And your dockers would perform better if you had a cache disk for your appdata, domains, and system shares. If you're not running VMs, domains doesn't matter, but you could disable VMs and then there would be no need for libvirt.img.

 

I don't know of any particular docker that might be to blame, but you may not have enough RAM to run many dockers. And in general, your hardware is a bit weak to expect much.

Oh yeah, I totally agree.  The thing that started this whole process is that I'm moving to a new, faster system, and I've also purchased 4x8TB drives.  The problem is that the new system only supports 6 drives total, so I was going to remove 2x2TB and 1x1TB to bring the count down, then move everything over, keep the 2x5TB drives, and replace the others with the 8TB ones :).  I think at this point I can close this ticket, since the system is back up and running (albeit poorly), and hopefully I can move it to the new system and start fresh there.  Thanks again for your help and suggestions.  I'll use a cache disk in the new system for my Docker stuff.  If I run into any new issues that I can't overcome myself, I'll post a new ticket.

Link to comment

If you are making a new build, probably the best and simplest way to get the data from your old build onto the new is to just get the new build going with those new disks (formatted as XFS), leaving one port free. Then plug in the old disks one at a time and mount them using the Unassigned Devices plugin to copy their data.
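
Copying off each old disk once Unassigned Devices has mounted it is then just an rsync job, something like the following (only a sketch; the mount point under /mnt/disks/ and the destination depend on your disk label and your shares):

   rsync -avh --progress /mnt/disks/old_1TB_disk/ /mnt/user/    # old_1TB_disk is a placeholder for the UD mount name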

 

If instead, you rebuild these disks onto larger disks before making the move, then those rebuilt disks will still be ReiserFS and you will need to move the data off of them so you can reformat as XFS.

Link to comment
23 minutes ago, trurl said:

If you are making a new build, probably the best and simplest way to get the data from your old build onto the new is to just get the new build going with those new disks (formatted as XFS), leaving one port free. Then plug in the old disks one at a time and mount them using the Unassigned Devices plugin to copy their data.

 

If instead, you rebuild these disks onto larger disks before making the move, then those rebuilt disks will still be ReiserFS and you will need to move the data off of them so you can reformat as XFS.

I originally thought about doing that, but do I need to purchase a second Unraid key to make this work?  Oh, never mind... I GET IT, thanks. Now why didn't I think of this a week ago :)

Link to comment
