smeeee.again

Members
  • Posts: 19
  • Joined
  • Last visited

  1. Hi, I don't know whether you have been able to fix the problem yet, but for me it was enough to map the /generated path to the host as well, just like /metadata, /cache and /transcode. Also, thanks for the usable template!
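     (For reference, a minimal sketch of such a mapping in a docker run command; only the container-side paths come from this post, the host-side paths and the image name are assumptions:)

        docker run -d --name=example-container \
          -v /mnt/user/appdata/example/generated:/generated \
          -v /mnt/user/appdata/example/metadata:/metadata \
          -v /mnt/user/appdata/example/cache:/cache \
          -v /mnt/user/appdata/example/transcode:/transcode \
          example/image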
  2. I've probably found the culprit: a script from the User Scripts plugin that I had forgotten to disable. Because I wasn't at home, I set the Fix Common Problems plugin to check every hour. When I noticed the message at home, I checked the /tmp folder via the console and tracked the script down. It was supposed to synchronize the images from one folder to another every day, but it could no longer do that because the target had been switched off, so, as you already suspected, it simply filled /tmp to the limit. I'm now running another check to see whether that was the only problem; hopefully it was. Thank you so much for your help!
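     (As a minimal sketch of the kind of guard that would have avoided this; the paths and the rsync call are assumptions, not the original script:)

        #!/bin/bash
        # Hypothetical nightly image-sync userscript: bail out if the target
        # is unreachable instead of repeatedly failing and filling /tmp.
        SRC="/mnt/user/Pictures/"            # assumed source path
        DST="/mnt/remotes/backup/Pictures/"  # assumed target path
        if [ ! -d "$DST" ]; then
            echo "$(date): target $DST not available, skipping sync" >&2
            exit 1
        fi
        rsync -a --delete "$SRC" "$DST"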
  3. Duplicati currently has no entries. I suspected it after some research in the forum, but I had already removed the links to paths that no longer exist. I will do that. Hopefully it won't take too long; I already miss my server. Thanks in the meantime for the extensive help.
  4. The check is still running, but the problem is already there.
     root@KartoffelHQ:~# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs          5.8G  5.8G     0 100% /
     tmpfs            32M  236K   32M   1% /run
     devtmpfs        5.8G     0  5.8G   0% /dev
     tmpfs           5.9G     0  5.9G   0% /dev/shm
     cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
     tmpfs           128M  396K  128M   1% /var/log
     /dev/sda1        30G  404M   29G   2% /boot
     /dev/loop0      8.2M  8.2M     0 100% /lib/modules
     /dev/loop1      4.9M  4.9M     0 100% /lib/firmware
     /dev/md1        3.7T  3.1T  563G  85% /mnt/disk1
     /dev/md2        3.7T  3.1T  604G  84% /mnt/disk2
     /dev/md3        4.6T  4.0T  620G  87% /mnt/disk3
     /dev/md4        4.6T  4.1T  552G  89% /mnt/disk4
     /dev/md5        4.6T  2.3T  2.3T  50% /mnt/disk5
     /dev/md6        3.7T  529G  3.2T  15% /mnt/disk6
     /dev/md7        5.5T   11G  5.5T   1% /mnt/disk7
     /dev/sdf1       239G  199G   40G  84% /mnt/cache
     shfs             31T   17T   14T  57% /mnt/user0
     shfs             31T   18T   14T  57% /mnt/user
     /dev/loop2       30G   19G   11G  64% /var/lib/docker
     /dev/loop3      1.0G   17M  905M   2% /etc/libvirt
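     (For anyone hitting the same rootfs-full state: one way to see from the console what is eating the root filesystem is du limited to that filesystem; which directory to drill into afterwards depends on what it reports:)

        # -x stays on the root filesystem, so array disks and shares are skipped
        du -xh --max-depth=1 / 2>/dev/null | sort -h
        # then repeat on the largest directory it reports, e.g.
        du -xh --max-depth=1 /tmp 2>/dev/null | sort -h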
  5. The parity check has been started and is expected to take 15h.
  6. The Docker containers are configured correctly. Most of them have been running without errors for some time, and the newest ones are deactivated to rule them out as a source of errors. The problem only seems to occur during a parity check or, as it turns out, during a rebuild. To see whether this is really the case, I will run the server for a few days without a parity check, unless there are other ideas.
  7. root@KartoffelHQ:~# df -h
     Filesystem      Size  Used Avail Use% Mounted on
     rootfs          5.8G  5.8G     0 100% /
     tmpfs            32M  236K   32M   1% /run
     devtmpfs        5.8G     0  5.8G   0% /dev
     tmpfs           5.9G     0  5.9G   0% /dev/shm
     cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
     tmpfs           128M  884K  128M   1% /var/log
     /dev/sda1        30G  402M   29G   2% /boot
     /dev/loop0      8.2M  8.2M     0 100% /lib/modules
     /dev/loop1      4.9M  4.9M     0 100% /lib/firmware
     /dev/md1        3.7T  3.1T  563G  85% /mnt/disk1
     /dev/md2        3.7T  3.1T  604G  84% /mnt/disk2
     /dev/md3        4.6T  4.0T  620G  87% /mnt/disk3
     /dev/md4        4.6T  4.1T  552G  89% /mnt/disk4
     /dev/md5        4.6T  2.3T  2.3T  50% /mnt/disk5
     /dev/md6        3.7T  711G  3.0T  20% /mnt/disk6
     /dev/md7        5.5T   11G  5.5T   1% /mnt/disk7
     /dev/sdf1       239G  187G   52G  79% /mnt/cache
     shfs             31T   18T   13T  57% /mnt/user0
     shfs             31T   18T   14T  57% /mnt/user
     /dev/loop2       30G   19G   11G  64% /var/lib/docker
     /dev/loop3      1.0G   17M  905M   2% /etc/libvirt
     shm              64M     0   64M   0% /var/lib/docker/containers/89466d7ab026469d123e4ba4a13c0ee3163565b62ee4192cc0a300e297511fd2/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/9e9c6a6a0422683abc65ab76b338e2bf5cb40e157081cdfd717b6c5d7a8cc62c/mounts/shm
     shm              64M  8.0K   64M   1% /var/lib/docker/containers/3c081b4b83ec61bb8d5bb09b6f841c7148f633c8c4e2cea0011ed04424dffae7/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/05f143591df703c25bbfb1cc4161157e32415ee6e8e658b8b17000613c308ba6/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/aa2c6ac117296e52339feed1afaf51b01598c216489bfb13e7d452432c69cb32/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/7f409b380de76f8a41ca237cc8d3ebd13df2955cfd17b341485a437dd6ec5aa5/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/e83688f60259f94fb25cd06a4f97841f71fcb6f0d81bce36ccc7f94bce107969/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/01b2dd903585a67a0e33ca24857538b3651e107fde03e6727961c13e9760e7b4/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/267de35ba0fc79b147a96475186cd10e3136036fb16b991cb4253988e549a723/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/e6ad761dbd49b663c38a3c1e822e8da16f16e399c9c133375029bbb65e671844/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/8bbf7613bd879d9c29b0d53f2245e29e2ed858aaa9c72d993f3286b85c177aae/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/b74a0e0c2b90fef37bf683085cb1e117c8173a3daf49db809e99a8572673fab9/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/03ed9eea62d0dc11dc1a74ffc10ecfab7ae285b8d14cec8000e49d664797ffe4/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/e67eab68be52d12d55f9ba2c95483589a90e7c447f4736ec0c077ac726e8533c/mounts/shm
     shm              64M     0   64M   0% /var/lib/docker/containers/3413a746470808c88aa678f661bec03c67d5bef4018dd476beb3030cd5b32994/mounts/shm
  8. I can't get a diagnostics file via the webUI. Via the console I get the error message shown in the screenshot. I have now tried to capture the syslog, but to get at it I have to shut down the server and read the USB stick on my computer. At the bottom you can see how the messages "Array undefined" and "Array Started" alternate at varying intervals.
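     (In case it helps anyone reading along: the syslog can also be preserved across a shutdown by copying it onto the flash drive from the console before powering off; the target filename here is arbitrary:)

        # /var/log is a RAM-backed tmpfs, so copy the log to the flash drive first
        cp /var/log/syslog /boot/syslog-$(date +%Y%m%d).txt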
  9. The rebuild just finished without errors. Now it looks like this in the webUI. The Docker services and the VM are running.
  10. Thanks for the help! I'll do a parity check after the rebuild and then report back.
  11. I can't find an autostart entry in the file. Do you mean startArray? I changed it from "yes" to "no" and started the server. The hard disks are all where they belong. Will a rebuild still be triggered that way?
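      (For reference, the change described above; I assume the file in question is /boot/config/disk.cfg on the flash drive:)

        # /boot/config/disk.cfg (assumed location of the array settings)
        startArray="no"    # was "yes"; keeps the array from starting automatically on boot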
  12. Yeah, this is correct... it seems I fucked this up pretty hard. Is it possible to rebuild the parity drive from scratch without data loss?