benflux (December 26, 2019):
Every once in a while, I find that one or more of my containers vanish. I re-add them from the templates and all is good, but I'd like to know how to prevent this.
Squid (December 26, 2019):
You would have to post your diagnostics showing this behavior.
trurl (December 26, 2019):
Or maybe just post your diagnostics now. It might show something about how you have things configured that would give a clue to how it is breaking.
benflux (December 26, 2019):
Here they are... I've reinstalled the two that got deleted recently (radarr and Jackett).
nas2-diagnostics-20191226-2331.zip
trurl (December 26, 2019):
Why have you allocated 50G to the docker image? Have you had problems with it filling? Making it larger won't fix anything; it will just take longer to fill. I always recommend 20G, and it is unlikely you would need even that much unless one or more of your docker applications is misconfigured.

The typical way a docker image fills up, or its usage keeps growing, is having some path in an application that doesn't correspond to a mapping. Common mistakes are not using the same upper/lower case as mapped, or not using an absolute path (an illustration follows at the end of this post).

Possibly unrelated, but other things I see in your configuration that are not ideal:

- Most of your disks are very full, and some are still ReiserFS.
- Your appdata, domains, and system shares are on the array instead of cache. Those shares, and their dockers and VMs, will perform better on cache since they won't be impacted by parity, and having those shares on the array will keep array disks spinning. So the preferred location for those shares is all on cache, set to stay on cache.
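To illustrate the mapping point above (the container name and paths here are hypothetical, not taken from these diagnostics): suppose a container is started with a single volume mapping:

docker run -d --name radarr -v /mnt/user/downloads:/downloads linuxserver/radarr

Inside the app, a download path of /downloads lands on the array as intended. But /Downloads (different case) or downloads/complete (a relative path) matches no mapping, so Docker writes those files into the container's own layer, which lives inside docker.img, and the image fills.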
trurl (December 26, 2019):
Looking again I see you don't have any cache, so I guess you can ignore that last paragraph. Or not; you might take it into consideration.
BRiT (December 26, 2019):
One of the days you had an entire drive disappear from md slot 4. What happened there? On most of the days in your syslog it shows the jackett docker being auto-updated and restarted. Does that docker container actually get daily updates?
trurl (December 26, 2019):
Also see some FCP warnings, one set to ignore. That one shouldn't be ignored, and you need to fix them all.
trurl (December 26, 2019):
1 hour ago, BRiT said:
"One of the days you had an entire drive disappear from md slot 4. What happened there?"

And later on it looks like he formatted a rebuilding disk8. On second thought, I think this just indicates partitioning the replacement disk:

Dec 24 11:29:25 NAS2 kernel: md: import disk8: (sdk) ST8000VN004-2M2101_WKD034DV size: 7814026532
Dec 24 11:29:25 NAS2 kernel: md: import_slot: 8 replaced
...
Dec 24 11:29:48 NAS2 emhttpd: req (22): startState=RECON_DISK&file=&csrf_token=****************&cmdStart=Start
...
Dec 24 11:29:50 NAS2 kernel: mdcmd (45): start RECON_DISK
...
Dec 24 11:29:50 NAS2 emhttpd: writing GPT on disk (sdk), with partition 1 byte offset 32K, erased: 0
Dec 24 11:29:50 NAS2 emhttpd: shcmd (26404): sgdisk -Z /dev/sdk
Dec 24 11:29:51 NAS2 root: Creating new GPT entries in memory.
Dec 24 11:29:51 NAS2 root: GPT data structures destroyed! You may now partition the disk using fdisk or
Dec 24 11:29:51 NAS2 root: other utilities.
Dec 24 11:29:51 NAS2 emhttpd: shcmd (26405): sgdisk -o -a 8 -n 1:32K:0 /dev/sdk
Dec 24 11:29:52 NAS2 root: Creating new GPT entries in memory.
Dec 24 11:29:52 NAS2 root: The operation has completed successfully.
Dec 24 11:29:52 NAS2 kernel: sdk: sdk1
...
Dec 25 12:59:41 NAS2 kernel: md: sync done. time=91774sec
Dec 25 12:59:41 NAS2 kernel: md: recovery thread: exit status: 0

And since dockers, etc. are on the array, this is probably the cause of the problem. In any case, all of this should have been mentioned in the OP.
benflux (December 26, 2019):
4 hours ago, BRiT said:
"One of the days you had an entire drive disappear from md slot 4. What happened there? ... Does that docker container actually get daily updates?"

I replaced a drive with a bigger one (it wasn't a trigger for the dockers disappearing). Jackett does seem to be updated frequently.
benflux (December 26, 2019):
3 hours ago, trurl said:
"Also see some FCP warnings, one set to ignore. That one shouldn't be ignored, and you need to fix them all."

Where do I un-ignore these?
benflux (December 26, 2019):
4 hours ago, trurl said:
"Why have you allocated 50G to the docker image? Have you had problems with it filling? ..."

I went to 50G because it filled up once and I had plenty of space to give it. I guess I can reduce it. Also, how do I convert from ReiserFS? I guess I should fix the original issue before doing that anyway.
trurl (December 26, 2019):
1 hour ago, benflux said:
"Where do I un-ignore these?"

Settings - Fix Common Problems.
trurl (December 26, 2019):
1 hour ago, benflux said:
"how do I convert from ReiserFS?"

There is a post about this pinned near the top of the General Support subforum, but here are the basic facts, and there are a lot of ways to get there. In order to change a disk to a different filesystem, you have to format it. So, if the disk has any data on it you want to keep, you have to move or copy its data elsewhere before the format. I will save the details of exactly how to tell Unraid to format a disk until after you get a disk ready to format. A sketch of the copy-off step follows below.
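A minimal sketch of that copy-off step, assuming disk3 is the ReiserFS disk being emptied and disk5 has enough free space for its data (both disk numbers are hypothetical):

rsync -avX /mnt/disk3/ /mnt/disk5/

Copying disk-to-disk like this, rather than through a user share, avoids any path ambiguity; once the copy is verified, the now-empty disk is ready for the format step in the web UI.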
benflux (December 28, 2019):
I've noticed that my dockers all stop at approximately 5 am each day for maybe an hour, and I can't find why that's the case; I have nothing scheduled daily that I can see. Perhaps that is related to why some disappear too?
BRiT (December 28, 2019):
You have Auto-Update enabled as well as AppData Backup enabled.
Squid (December 28, 2019):
Or you have them backing up at 5 am daily via the plugin.
BRiT Posted December 28, 2019 Share Posted December 28, 2019 Dec 17 05:00:01 NAS2 CA Backup/Restore: ####################################### Dec 17 05:00:01 NAS2 CA Backup/Restore: Community Applications appData Backup Dec 17 05:00:01 NAS2 CA Backup/Restore: Applications will be unavailable during Dec 17 05:00:01 NAS2 CA Backup/Restore: this process. They will automatically Dec 17 05:00:01 NAS2 CA Backup/Restore: be restarted upon completion. Dec 17 05:00:01 NAS2 CA Backup/Restore: ####################################### Dec 17 05:00:01 NAS2 CA Backup/Restore: Stopping duckdns Dec 17 05:00:05 NAS2 CA Backup/Restore: docker stop -t 60 duckdns Dec 17 05:00:05 NAS2 CA Backup/Restore: Stopping lidarr Dec 17 05:00:10 NAS2 CA Backup/Restore: docker stop -t 60 lidarr Dec 17 05:00:10 NAS2 CA Backup/Restore: Stopping medusa Dec 17 05:00:23 NAS2 CA Backup/Restore: docker stop -t 60 medusa Dec 17 05:00:23 NAS2 CA Backup/Restore: Stopping PlexMediaServer Dec 17 05:00:45 NAS2 CA Backup/Restore: docker stop -t 60 PlexMediaServer Dec 17 05:00:45 NAS2 CA Backup/Restore: Stopping radarr Dec 17 05:00:50 NAS2 CA Backup/Restore: docker stop -t 60 radarr Dec 17 05:00:50 NAS2 CA Backup/Restore: Backing up USB Flash drive config folder to Dec 17 05:01:12 NAS2 CA Backup/Restore: Using command: /usr/bin/rsync -avXHq --delete --log-file="/var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log" /boot/ "/mnt/user/B../" > /dev/null 2>&1 Dec 17 05:01:17 NAS2 CA Backup/Restore: Changing permissions on backup Dec 17 05:01:17 NAS2 CA Backup/Restore: Backing Up appData from /mnt/user/appdata/ to /mnt/user/Backups/unraid [email protected] Dec 17 05:01:17 NAS2 CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar -cvaf '/mnt/user/Backups/unraid [email protected]' --exclude "DarkStat" --exclude "headphones" --exclude "home-assistant" --exclude "jackett" --exclude "medusa" --exclude "officialplex" --exclude "radarr" --exclude 'docker.img' * >> /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log 2>&1 & echo $! > /tmp/ca.backup2/tempFiles/backupInProgress Dec 17 05:43:12 NAS2 CA Backup/Restore: Backup Complete Dec 17 05:43:12 NAS2 CA Backup/Restore: Verifying backup Dec 17 05:43:12 NAS2 CA Backup/Restore: Using command: cd '/mnt/user/appdata/' && /usr/bin/tar --diff -C '/mnt/user/appdata/' -af '/mnt/user/Backups/unraid [email protected]' > /var/lib/docker/unraid/ca.backup2.datastore/appdata_backup.log & echo $! > /tmp/ca.backup2/tempFiles/verifyInProgress Quote Link to comment
benflux (December 28, 2019):
Yes, that makes sense. That part seems to be working, then; just the disappearing left to solve.
BRiT (December 28, 2019):
Your backup is taking over an hour, more like 82 minutes: starting at 5:00 am and ending around 6:21 am.

Dec 17 06:21:34 NAS2 CA Backup/Restore: #######################
Dec 17 06:21:34 NAS2 CA Backup/Restore: appData Backup complete
Dec 17 06:21:34 NAS2 CA Backup/Restore: #######################
benflux (December 28, 2019):
I've set that to weekly now, thank you.
BRiT (December 28, 2019):
By comparison, my backup of AppData takes 75 SECONDS.

Dec 16 03:00:02 REAVER CA Backup/Restore: #######################################
Dec 16 03:00:02 REAVER CA Backup/Restore: Community Applications appData Backup
Dec 16 03:00:02 REAVER CA Backup/Restore: Applications will be unavailable during
Dec 16 03:00:02 REAVER CA Backup/Restore: this process. They will automatically
Dec 16 03:00:02 REAVER CA Backup/Restore: be restarted upon completion.
Dec 16 03:00:02 REAVER CA Backup/Restore: #######################################
Dec 16 03:01:14 REAVER CA Backup/Restore: #######################
Dec 16 03:01:14 REAVER CA Backup/Restore: appData Backup complete
Dec 16 03:01:14 REAVER CA Backup/Restore: #######################
Squid (December 28, 2019):
15 minutes ago, BRiT said:
"By comparison, my backup of AppData takes 75 SECONDS."

Now there's an example of someone who is not utilizing containers to their full extent (or has a boring server).
BRiT (December 28, 2019):
Cleanliness is next to godliness. The only large quantity of data to back up from inside AppData is an Emby media library, and even with that, all the real metadata and images are stored at the same level as the media, so it's decentralized compared to things like monster Plex DBs (for instance: /mnt/user/Movies/movie_name/* or /mnt/user/TV/tv_name/*). Any and all of my development databases are backed up using native SQL tools (a sketch follows below).
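For example, for a SQLite database (the application name and paths here are hypothetical; the .backup command is part of the standard sqlite3 shell):

sqlite3 /mnt/user/appdata/myapp/data.db ".backup '/mnt/user/Backups/myapp-data.db'"

Because .backup uses SQLite's online backup API, it produces a consistent copy even while the application has the database open, so the heavy data never has to ride along in the nightly appdata archive.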
benflux (December 28, 2019):
A 15 GB backup, it seems. Medusa: 1.5 GB. Plex: many more of those gigs.