Firelfy Posted December 22, 2017
So it looks like, while attempting to install a version of the Koken docker (a PHP photography file-management system with a database), I have inadvertently modified, destroyed, or created files which are now conflicting with the unRaid webui. I cannot start Radarr or Sonarr for some reason (they are missing from docker ps, along with other dockers, though their images are still found), and performing tasks in the webui is not possible, including obtaining logs. I have also noticed that this isn't an issue while the array is offline, so the offending file(s) are located somewhere within the array and not on the boot USB. The best I could do in terms of logs is attached as UnRaid_Log.txt, and there's also an image of what the main page in unRaid looks like. Along with the page looking like that, the page title is now referred to as /Main. How would I fix this uninitialized csrf_token issue? If it means a new copy of unRaid on the USB, will I lose my dockers/files etc.? Thanks for the help!
trurl Posted December 22, 2017
Not clear what you have done, but your post is not a defect report and there is no evidence of a defect, so I am moving this to General Support.
Squid Posted December 22, 2017
What would really be great is if you could post the output of df -h. Thus far there has only been a single reported case of an uninitialized csrf token, and anecdotally it was related to the rootfs being 100% full. Ultimately, your recovery is probably going to wind up requiring a restart (post the output first, though). Unless you've really played around with strange path mappings for your containers (i.e., mapping / to /), a container won't be able to actually trash unRaid. It's more likely that you've got a container storing some huge data files in memory, which is causing the problems.
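If df -h does confirm rootfs at or near 100%, a rough sketch for narrowing down which directory is responsible (this assumes GNU du as shipped on unRaid; the -x flag keeps the scan on rootfs so the array disks and docker image are skipped, and the depth and line count are arbitrary choices):

```shell
# List the largest top-level directories on the root filesystem only.
# -x : stay on one filesystem (skip /mnt, /boot, /var/lib/docker, etc.)
# -d1: report one directory level deep
du -hx -d1 / 2>/dev/null | sort -rh | head -n 15
```

Whatever directory dominates that listing is where the runaway data is landing.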
Firelfy (Author) Posted December 24, 2017
Here is the output of df -h, and it looks like a full rootfs is the culprit:

Filesystem      Size  Used Avail Use% Mounted on
rootfs           32G   31G  427M  99% /
tmpfs            32G  528K   32G   1% /run
devtmpfs         32G  8.0K   32G   1% /dev
cgroup_root      32G     0   32G   0% /sys/fs/cgroup
tmpfs           128M  2.9M  126M   3% /var/log
/dev/sda1        15G  416M   15G   3% /boot
/dev/md1        7.3T  7.3T   23G 100% /mnt/disk1
/dev/sdd1       2.8T   70G  2.7T   3% /mnt/cache
shfs            7.3T  7.3T   23G 100% /mnt/user0
shfs             11T  7.4T  2.7T  74% /mnt/user
/dev/loop0       20G   11G  8.6G  57% /var/lib/docker
shm              64M  8.0K   64M   1% /var/lib/docker/containers/9dbad0aa6c20fd0833645d215d8ce27703b69bf832fe9121d41229548d61a053/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/b14c60be0840883e6155a12ec6e923031b8bbc28cd9e2a273341c14d59d10432/shm
shm              64M     0   64M   0% /var/lib/docker/containers/5444c7e05c38f77b70232cc8dff0086ed82181276b0e8d2dec2b5249c7ab170a/shm
shm              64M     0   64M   0% /var/lib/docker/containers/cb31bf35203a91123eda5f22282f796562033d25766c124e4c66548e2a6f4125/shm
shm              64M     0   64M   0% /var/lib/docker/containers/e64c5e0185ae4453369c419410f632402895d25e71751c2247256c9d18f10657/shm
shm              64M     0   64M   0% /var/lib/docker/containers/369e640324494d30ccd3ca5e60fc091349ece66ace3bfc40af1ec11d3e685a1b/shm
shm              64M     0   64M   0% /var/lib/docker/containers/85728a7e088d062d7e270d41863e7f1a2d60970b0aa2fa2b5b6ebeb3a53bd955/shm
shm              64M     0   64M   0% /var/lib/docker/containers/70a081bdb59a771a2dfd4e9d3c9dab1de9b3b4706469485ede1095a0030fd92e/shm
shm              64M     0   64M   0% /var/lib/docker/containers/1513089c13cda75e68d7794f1775ce4bfe207207deb34f07ba0f78f92b998c43/shm
shm              64M     0   64M   0% /var/lib/docker/containers/5cd45170bafec81656c1cf0656989b752c194512cf1f000f70851cb2534a4834/shm
shm              64M     0   64M   0% /var/lib/docker/containers/9ab5a25be0fe3716df2228083819a757c9dd244f2e7d9f24c9ef07065df7dc67/shm
shm              64M  9.3M   55M  15% /var/lib/docker/containers/5ae34418546c5964ac2a86a188a99752106de93d5121c404309902eaf3471a2d/shm
/dev/loop1      1.0G   17M  905M   2% /etc/libvirt
/dev/sdc1       2.8T  879G  1.9T  32% /mnt/disks/WDC_WD30EZRZ-00GXCB0_WD-WCC7K1XF5ZZC
shm              64M  4.0K   64M   1% /var/lib/docker/containers/969e9d786fb38c2f1a49b60d44007b5c8fdaf10544b687486ae05ef422f6bf20/shm
shm              64M  4.0K   64M   1% /var/lib/docker/containers/098d2217a58c27273e4a3fbdc163b7727638b6364bfbe76b2436785342f8ae53/shm

What's strange is that I was monitoring unRaid after a reboot and restart of the array, and rootfs fills completely, what felt like instantaneously, at this line (something in sa1/sadc):

Dec 24 22:47:01 Server crond[2112]: exit status 2 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &>/dev/null

Not sure why running this script would cause something like this, but it might be useful. Thanks for the help!
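One way to chase that exit status 2 would be to run the same invocation cron fires, minus the &>/dev/null redirect, and see what it actually prints. This is only a sketch using the script path from the log line above; the existence check is defensive in case the plugin is missing or the path differs:

```shell
# Run the dynamix.system.stats sa1 wrapper the same way cron does,
# but with its output visible, to see why it exits with status 2.
SA1=/usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1
if [ -x "$SA1" ]; then
    "$SA1" 1 1                      # same arguments the cron entry passes
    echo "sa1 exit status: $?"
else
    echo "sa1 not found or not executable at $SA1"
fi
```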
Squid Posted December 24, 2017
Quoting Firelfy: "Here is the output of df -h, and it looks like a full rootfs is the culprit"
Yup. If you have that container set to autostart, disable it and then reboot. After that, edit the container and post a screenshot of all the path mappings (or, ideally, edit the container, make a change, revert the change, hit Apply, and post the resulting docker run command that appears).
Firelfy (Author) Posted December 24, 2017
If you're referring to Koken, it has been removed (completely, as far as I know, via the shell) along with its appdata.
Squid Posted December 24, 2017
Does the problem still exist, then?
Firelfy (Author) Posted December 24, 2017
Yep. I'll mention that I used this Koken docker script to generate the docker, adjusting variables such as the port, changing path mappings, and so on.
Firelfy (Author) Posted December 25, 2017 (edited)
Quoting Squid: "Reboot"
Rebooted; still having the same issue. I just remembered I had telegraf mapped to / for access to system info, which is how that docker is designed to be set up. To determine whether this was the issue, I stopped influxdb, grafana, and telegraf upon array initialization, and I'm still having issues. The array is fine up until this script is run:

/usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1

at which point I get this error:

exit status 2 from user root /usr/local/emhttp/plugins/dynamix.system.stats/scripts/sa1 1 1 &>/dev/null

My rootfs before this command runs is nowhere near full (usually around 2% usage), so there's something going on with sa1/sadc.
Edited December 25, 2017 by Firelfy: added info regarding telegraf/influxdb etc.
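Since rootfs sits near empty until that sa1 run and then fills almost instantly, sampling usage once a second across an array start could catch the exact moment it happens. A minimal sketch, assuming plain GNU coreutils on the unRaid console; the sample count is an arbitrary choice and can be raised to cover the whole startup window:

```shell
# Print a timestamped rootfs usage line once per second.
SAMPLES=5                  # raise this to span the whole array start
for i in $(seq 1 "$SAMPLES"); do
    printf '%s ' "$(date '+%H:%M:%S')"
    df -h / | tail -n 1
    sleep 1
done
```

Running this in a second shell while starting the array would show whether usage jumps in one step (a single huge write) or climbs steadily (a runaway loop).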