David13858 Posted August 16, 2022 (Author)
Check Filesystem Disk 3.rtf (attached)
Attached are the results from the filesystem check. So would the next step be to run xfs_repair -v /dev/md3, or are the logs saying something different?
JorgeB Posted August 17, 2022
Yes, run it without -n, and if it asks for it, use -L.
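As a sketch, the repair sequence being described would look something like this from the Unraid console with the array started in Maintenance mode (the device name follows this thread's disk3; adjust for your own disk number):

```shell
# Dry-run sketch of the xfs_repair sequence; DRY_RUN=1 only prints the command.
DEV=/dev/md3      # md device for disk3 on this system; use /dev/mdX for disk X
DRY_RUN=1         # set to 0 on the actual server, with the array in Maintenance mode

if [ "$DRY_RUN" = "1" ]; then
    echo "would run: xfs_repair -v $DEV"
else
    xfs_repair -v "$DEV"
    # If xfs_repair refuses and asks you to mount the filesystem to replay the
    # log, and mounting is not possible, zero the log with -L (last resort;
    # the most recent metadata changes may be lost, which is where lost+found
    # entries come from):
    # xfs_repair -vL "$DEV"
fi
```

The dry-run guard is just for illustration; on the server you would run the commands directly.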
David13858 Posted August 17, 2022 (Author)
7 minutes ago, JorgeB said: "Yes, run without -n, and if it asks for it use -L"
xfs repair log.txt (attached)
There seems to be no change. Do I need to restart the array, reboot, or something like that?
JorgeB Posted August 17, 2022
You need to start the array in normal mode; the disk should mount now. Look for a lost+found folder.
David13858 Posted August 17, 2022 (Author)
The array has been started in normal mode and I have found the lost+found folder, but the disk is still unmountable. I think I may have messed up, though: I double-checked the serial numbers, and the drive that is in there now is the new empty disk. It hasn't been formatted, so I'm guessing it won't work and I will need to put in the other 8TB.
JorgeB Posted August 17, 2022
Post new diagnostics after starting the array in normal mode.
David13858 Posted August 17, 2022 (Author)
plex-diagnostics-20220817-0927.zip (attached)
JorgeB Posted August 17, 2022
43 minutes ago, David13858 said: "The disk is still unmountable."
Disk3 is now mounting. You do have another unmountable disk, disk20; check the filesystem on that one also.
David13858 Posted August 17, 2022 (Author)
xfs repair log.txt (attached)
JorgeB Posted August 17, 2022
1 hour ago, JorgeB said: "run without -n, and if it asks for it use -L"
David13858 Posted August 17, 2022 (Author)
It won't let me run it. xfs repair log.txt (attached)
trurl Posted August 17, 2022
5 hours ago, JorgeB said: "if it asks for it use -L"
David13858 Posted August 17, 2022 (Author)
Whoops, clearly I can't read. 🙂 OK, so the disk has been mounted again and it appears that I can access all the files. None of the docker apps have loaded; I'm presuming a restart will fix it.
Edit: the restart had no effect, still the same message: "No Docker Container Installed". Added new diagnostics, as I assume they will be required.
plex-diagnostics-20220817-1352.zip (attached)
trurl Posted August 17, 2022
Most of your disks are a lot fuller than I would recommend. On the User Shares page, click the Compute All button at the bottom and wait for the complete results, which will show how much of each disk is used by each user share. If you don't get complete results after a few minutes, refresh the page. Then post a screenshot.
David13858 Posted August 17, 2022 (Author)
I couldn't agree more. Once I am back up and running I will be replacing one of the smaller drives and doing some major housekeeping. Out of curiosity, what would the recommended 'fullness' be?
trurl Posted August 17, 2022
Several things to notice in those screenshots; I will break this up into more than one post.
lost+found has 18.6 GB, some on disk3 and some on disk20, since those are the disks that had filesystem repair. Have you taken a look at the contents of your lost+found share? lost+found is where filesystem repair puts things it can't figure out: usually files with unknown names, from unknown folders. Often it's not worth the trouble to figure out what it all is, but the Linux 'file' command might be able to tell you what kind of data is in a file so you can try to open it.
Do you have backups of anything important and irreplaceable?
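A minimal sketch of that 'file' suggestion (the lost+found path in the comment is an assumption; recovered names are usually bare inode numbers):

```shell
# 'file' identifies a file by its content rather than its (lost) name, e.g.:
#   file -b /mnt/disk3/lost+found/131072    # real path and name will vary
# Demo on a scratch file so the shape of the output is visible:
printf 'some recovered text\n' > /tmp/recovered_131072
file -b /tmp/recovered_131072    # -b omits the filename from the output
```

For a recovered video or archive, 'file' would report the container type instead, which tells you what extension to give the file before trying to open it.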
trurl Posted August 17, 2022
appdata (and system) has files on disk20. I wouldn't be surprised if some of lost+found was from appdata.
You want all of appdata, domains, and system on the fast pool (cache) so docker/VM performance won't be impacted by the slower parity array, and so array disks can spin down, since these files are always open. domains is already all on cache and set to go there; appdata and system have files on disk20 but are set to go to cache. Nothing can move open files, so you will have to disable Docker and VM Manager in Settings and run mover. Also, mover won't move duplicates, so there may be some manual cleanup.
Why do you have a 50G docker.img? Have you had problems filling it? 20G is often more than enough, and making it larger won't fix filling it; it will only make it take longer to fill. The usual reason for filling docker.img is an application writing to a path that isn't mapped.
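A sketch of that sequence from the console, assuming the default /mnt paths and the disk/pool names from this thread (disable Docker and VM Manager in Settings first, since mover skips open files):

```shell
# Assumed layout: appdata/system files stranded on disk20, pool named "cache".
STRANDED=/mnt/disk20          # array disk still holding appdata/system files
POOL=/mnt/cache               # where cache:prefer shares should end up

echo "check before: ls $STRANDED/appdata $STRANDED/system"
echo "then run:     mover"    # Unraid's mover; run on the server itself
echo "check after:  ls $POOL/appdata $POOL/system"
```

Anything still left under $STRANDED afterwards is likely a duplicate of a file already on the pool and has to be reconciled by hand, as noted above.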
trurl Posted August 17, 2022
1 minute ago, trurl said: "wouldn't be surprised if some of lost+found was from appdata"
The CA Backup plugin will let you do scheduled backups of appdata.
trurl Posted August 17, 2022
26 minutes ago, David13858 said: "what should the recommended 'fullness' be?"
Filesystem repair (such as you have just done) needs some free space to work in or it will fail. Also, very full disks perform worse.
On a related note, cache has no Minimum Free set. I didn't look at all of your user shares, but the ones I did look at also have no Minimum Free set. You should set Minimum Free for cache to larger than the largest file you expect to write to cache, and set Minimum Free for each user share to larger than the largest file you expect to write to the share.
In the general case, Unraid has no way to know how large a file will become when it chooses a disk for it. If a disk has less than Minimum Free, Unraid will choose another disk. For cached user shares, if cache has less than minimum, it will choose an array disk instead (overflow), but only for cache:prefer and cache:yes shares. When choosing an array disk, if a disk has less than Minimum Free, another disk will be chosen, depending on other factors, as explained below.
If a file is being replaced, the replacement will always go to the disk the file was on, regardless. Split Level takes precedence over Minimum Free, so if split says a file belongs with other files on a disk, that is the disk that will be chosen regardless. And of course, Include/Exclude can restrict which disks can be chosen. In any case, if a disk has more than Minimum Free, and other factors allow, it can be chosen. If a disk is chosen and the file won't fit, the write fails.
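A toy illustration of the Minimum Free comparison (not Unraid's actual allocator; the 50 GiB threshold and the /tmp stand-in path are assumptions for the demo):

```shell
MIN_FREE_KB=$((50 * 1024 * 1024))    # 50 GiB, sized above the largest expected file
disk=/tmp                            # stand-in for /mnt/diskN in this demo
avail_kb=$(df -kP "$disk" | awk 'NR==2 {print $4}')

if [ "$avail_kb" -lt "$MIN_FREE_KB" ]; then
    echo "$disk is below Minimum Free: the allocator would pick another disk"
else
    echo "$disk is eligible for new files"
fi
```

The point of the comparison is that it happens before the file size is known, which is why Minimum Free has to be set larger than the biggest file you expect to write.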
David13858 Posted August 17, 2022 (Author)
I had a look in the folders, but it all looks like gibberish from afar. Most of the data isn't important and can be replaced from other sources. The one share that would be a hellish experience to rebuild if I lost all the data is "Backup server", but it appears 99% of the stuff I care about is in there. Loosely looking at the lost+found, the majority of it looks like it's from appdata, but I know anything pre-2020 is from "Backup server", so I can safely move that. How do I find out where all the appdata files go? Or do I just move the rest of the files from lost+found to appdata and it sorts itself out?
The 50G docker.img is a problem. I was having some issues with one of the containers, and I knew they would resolve themselves quickly, so the quickest option was to just make the img bigger; now I'm a bit scared to make it smaller in case things get messed up.
After I get it all fixed, CA Backup sounds like a good idea. I will also add minimum space requirements to avoid this issue in future.
trurl Posted August 17, 2022
2 minutes ago, David13858 said: "How do I find out where all the appdata files go? Or do I just move the rest of the files from lost+found to appdata and it sorts itself out?"
Can you actually tell what in lost+found is appdata? Nothing is going to sort that out for you.
trurl Posted August 17, 2022
4 minutes ago, David13858 said: "The 50G docker.img is a problem, I was having some issues with one of the containers and I knew they would resolve themselves quickly so the quickest option was to just make the img bigger and now I'm a bit scared to make it smaller in case things get messed up."
51 minutes ago, trurl said: "20G is often more than enough, and making it larger won't fix filling it, it will only make it take longer to fill. The usual reason for filling docker.img is an application writing to a path that isn't mapped."
Do you understand that last sentence?
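The unmapped-path point can be sketched like this (the container name, image, and paths are all hypothetical):

```shell
# Hypothetical container where only /config and /downloads are mapped to host
# paths, e.g.:
#   docker run -d --name=demo \
#     -v /mnt/user/appdata/demo:/config \
#     -v /mnt/user/downloads:/downloads \
#     some/image
MAPPED_PATHS="/config /downloads"
write_path=/data                      # the app writes here, but it isn't mapped

case " $MAPPED_PATHS " in
    *" $write_path "*) echo "mapped: written through to the host" ;;
    *)                 echo "unmapped: stays inside docker.img and grows it" ;;
esac
```

Writes to mapped container paths land on the host and cost docker.img nothing; writes to any other path accumulate inside the image, which is why enlarging it only delays the problem.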
David13858 Posted August 17, 2022 (Author)
36 minutes ago, trurl said: "Can you actually tell what from lost+found is appdata? Nothing is going to sort that out for you."
No, I misunderstood the situation. I thought you meant that in my case I would need to recover those files in order for it to work, so I was making large assumptions.
36 minutes ago, trurl said: "Do you understand that last sentence?"
Yes, I didn't explain myself very well. I know what was causing the log to fill up: I had a docker container writing to the cache drive, which at the time was only 250G. Data was being taken off the cache drive and moved to the array while also being written to it, so it was pinging an error saying "Maximum capacity reached" every couple of seconds. I'm not proud, but basically most of that img is probably a log saying the drive is full.
However, GREAT NEWS! We are back up and running now. I manually moved the folders you said to the cache and then restarted docker. I really appreciate all the help from you and the other mods. I've learnt a lot from this whole debacle and I know there is a lot of stuff I need to sort out moving forward.
trurl Posted August 17, 2022
5 hours ago, trurl said: "You want all of appdata, domains, and system on fast pool (cache) so docker/VM performance won't be impacted by slower parity array, and so array disks can spin down since these files are always open."
So did you get this done?