Unmountable: Wrong or No File System



The array has been started in normal mode and I have found the lost+found folder.

 

The disk is still unmountable. I think I may have messed up, though: I double-checked the serial numbers, and the drive that is in there now is the new empty disk. It hasn't been formatted, so I'm guessing it won't work and I will need to put the other 8TB back in.


Whoops, clearly I can't read. 🙂

 

Ok, so the disk has been mounted again and it appears that I can access all the files.

 

None of the Docker apps have loaded; I'm presuming a restart will fix it.

 

The restart had no effect; it's still showing the same message: "No Docker Container Installed".
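For reference, a couple of quick terminal checks can confirm whether the Docker service itself came up (the docker.img path below is the Unraid default, so adjust it if yours lives elsewhere):

docker info | head -n 5                            # an error here means the Docker daemon never started
docker ps -a                                       # every container Docker still knows about, running or not
ls -lh /mnt/user/system/docker/docker.img          # assumed default image location

If docker ps -a comes back empty, the daemon is running but it no longer sees any containers inside the image.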

 

Added new diagnostics as I assume they will be required

plex-diagnostics-20220817-1352.zip


Several things to notice in those screenshots. I will break this up into more than one post.

 

lost+found has 18.6 GB, some on disk3 and some on disk20, since those are the disks that had filesystem repair.

 

Have you taken a look at the contents of your lost+found share?

 

lost+found is where filesystem repair puts things it can't figure out: usually files with unknown names, from unknown folders. Often it's not worth the trouble to figure out what it all is, but the Linux 'file' command might be able to tell you what kind of data is in a file so you can try to open it.
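For example, something along these lines from the Unraid terminal (using the two disks mentioned above) will show how much was recovered and what kind of files they are:

du -sh /mnt/disk3/lost+found /mnt/disk20/lost+found       # how much the repair put on each disk
find /mnt/user/lost+found -type f | head -n 20            # a sample of the recovered names
find /mnt/user/lost+found -type f -exec file {} + | less  # let 'file' guess what each one contains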

 

Do you have backups of anything important and irreplaceable?

 

 


appdata (and system) has files on disk20. I wouldn't be surprised if some of lost+found was from appdata.

 

You want all of appdata, domains, and system on the fast pool (cache) so docker/VM performance won't be impacted by the slower parity array, and so array disks can spin down, since these files are always open.

 

domains is already all on cache and set to go there; appdata and system have files on disk20 but are set to go to cache. Nothing can move open files, so you will have to disable Docker and VM Manager in Settings and run mover. Also, mover won't move duplicates, so there may be some manual cleanup.
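If you want to see exactly what is still sitting on the array before running mover, something along these lines should do it (paths assume the standard /mnt/cache and /mnt/disk20 mounts):

ls -R /mnt/disk20/appdata /mnt/disk20/system 2>/dev/null | less      # appdata/system files still on the array
comm -12 <(cd /mnt/cache/appdata && find . -type f | sort) \
         <(cd /mnt/disk20/appdata && find . -type f | sort)           # paths that exist in both places; mover will skip these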

 

Why do you have a 50G docker.img? Have you had problems filling it? 20G is often more than enough, and making it larger won't fix filling it, it will only make it take longer to fill. The usual reason for filling docker.img is an application writing to a path that isn't mapped.
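If you want to see which container is doing the writing, the standard Docker CLI can show it (plain docker commands, nothing Unraid-specific):

docker ps -s          # the SIZE column is each container's writable layer, i.e. data written inside docker.img
docker system df -v   # fuller breakdown of images, containers and volumes inside the image

A container whose writable layer keeps growing is usually the one writing to a path that isn't mapped out to /mnt/user or /mnt/cache.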

26 minutes ago, David13858 said:

What should the recommended 'fullness' be?

Filesystem repair (such as you have just done) needs some free space to work in or it will fail. Also, very full disks perform worse.

 

On a related note, cache has no Minimum Free set. I didn't look at all of your user shares, but the ones I did look at also have no Minimum Free set.

 

You should set Minimum Free for cache to larger than the largest file you expect to write to cache, and set Minimum Free for each user share to larger than the largest file you expect to write to the share.

 

In the general case, Unraid has no way to know how large a file will become when it chooses a disk for it. If a disk has less than Minimum, Unraid will choose another disk.

 

For cached user shares, if cache has less than minimum, it will choose an array disk instead (overflow), but only for cache:prefer and cache:yes shares.

 

When choosing an array disk, if a disk has less than Minimum, another disk will be chosen, depending on other factors as explained below.

 

If a file is being replaced, the replacement will always go to the disk the file was on, regardless. Split Level takes precedence over Minimum, so if split says a file belongs with other files on a disk, that is the disk that will be chosen regardless. And of course, Include/Exclude can restrict which disks can be chosen.

 

In any case, if a disk has more than Minimum, and other factors allow, it can be chosen. If a disk is chosen and the file won't fit, the write fails.
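As a quick way to see where each disk stands before setting Minimum Free, the mount points can be checked straight from the terminal:

df -h /mnt/disk* /mnt/cache     # current free space on every array disk and on the pool

For example, if the largest single file you ever write to a share is around 50GB, setting that share's Minimum Free somewhere above 50GB means Unraid will move on to another disk before a write can land on one that is about to run out.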

 


I had a look in the folders, but it all looks like gibberish from afar.

 

Most of the data isn't important and can be replaced from other sources. The one thing that would be a hellish experience to rebuild if I lost the data is the "Backup server" share, but it appears 99% of the stuff I care about is still in there.

 

Loosely looking at the lost+found, the majority of it looks like it's from appdata, but I know anything pre-2020 is from the "Backup server" share, so I can safely move that.
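Assuming the repair kept the original modification times, something along these lines would list and then move everything last touched before 2020 into the Backup server share (the 'recovered' folder name is just an example, create it first):

mkdir -p "/mnt/user/Backup server/recovered"
find /mnt/user/lost+found -type f ! -newermt 2020-01-01 -ls        # review the list first
find /mnt/user/lost+found -type f ! -newermt 2020-01-01 -exec mv -t "/mnt/user/Backup server/recovered" {} +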

 

How do I find out where all the appdata files go? Or do I just move the rest of the files from lost+found into appdata and it sorts itself out?

 

The 50G docker.img is a problem. I was having some issues with one of the containers and knew they would resolve themselves quickly, so the quickest option was to just make the img bigger, and now I'm a bit scared to make it smaller in case things get messed up.

 

After I get it all fixed, CA Backup sounds like a good idea. I will also add minimum free space requirements to avoid this issue in future.

4 minutes ago, David13858 said:

The 50G docker.img is a problem. I was having some issues with one of the containers and knew they would resolve themselves quickly, so the quickest option was to just make the img bigger, and now I'm a bit scared to make it smaller in case things get messed up.

 

51 minutes ago, trurl said:

20G is often more than enough, and making it larger won't fix filling it, it will only make it take longer to fill. The usual reason for filling docker.img is an application writing to a path that isn't mapped.

Do you understand that last sentence?

36 minutes ago, trurl said:

Can you actually tell what from lost+found is appdata? Nothing is going to sort that out for you.

No, I misunderstood the situation. I thought you meant that in my case I would need to recover those files in order for it to work, so I was making large assumptions.

 

36 minutes ago, trurl said:

Do you understand that last sentence?

Yes, I didn't explain myself very well. I know what was causing the log to fill up: I had a Docker container writing to the cache drive, which at the time was only 250G.

Data was being taken off the cache drive and moved to the array while it was also being written to, so it was logging a "maximum capacity reached" error every couple of seconds.

I'm not proud of it, but most of that img is probably a log saying the drive is full.
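If you ever want to confirm where the space inside docker.img went, Unraid mounts the image at /var/lib/docker, so it can be inspected directly (these commands only read, they change nothing):

du -h -d1 /var/lib/docker 2>/dev/null | sort -h                         # which part of the image holds the space
du -h /var/lib/docker/containers/*/*-json.log 2>/dev/null | sort -h     # per-container log files

One huge *-json.log would confirm it really was a log doing the filling.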

 

However, GREAT NEWS! We are back up and running now. I manually moved the folders you mentioned to the cache and then restarted Docker.

I really appreciate all the help from you and the other mods. I've learnt a lot from this whole debacle and I know there is a lot of stuff I need to sort out moving forward.  

 
