TessyPowder

Members
  • Posts: 7

Everything posted by TessyPowder

  1. As far as I know, that is exactly the expected behaviour. I am sure it is possible to separate the Docker container from your home network, but I don't know how (a rough sketch of one approach is at the end of this list).
  2. That was very helpful. I was getting worried because I had this problem with millions of sync errors. I don't know if I just missed it in the docs, but if this known bug isn't in the documentation, I think it would be very important to add it; it took me multiple search terms and 20 forum/reddit posts to get here. For others with this problem: you can confirm your errors are related to this known bug by looking in the syslog for the first sector that got repaired (only the first 100 errors are printed to the log, so it should not be hard to find) and calculating the following:

     [first_repaired_sector_number] * 512 / 1,000,000,000 = [position_of_sector_in_gigabytes]

     (512 is the sector size in bytes, and 1,000,000,000 converts bytes to gigabytes.) If the result is close to the size of the previously replaced parity drive, you are affected by this bug (a small worked example is at the end of this list).
  3. I forgot to mention that I am currently running my Unraid server in safe mode.
  4. And would that solve my problem? Is loop2 the docker.img?

     EDIT: I found the answer in another thread: "loop2 is your docker.img. Easiest, pain-free solution is to stop the Docker service (Settings), delete the image (advanced view), re-enable the service, then check off all your apps via the Apps > Previous Apps section and hit install. A couple of minutes later you're back in business. Probably caused by unclean shutdowns." I will try to recreate the docker.img (a command-line sketch is at the end of this list).
  5. I installed new RAM yesterday, but it was defective and caused a kernel panic stack trace. I rebooted multiple times to figure out whether the module or the socket was bad and got multiple stack trace errors each time. After changing a BIOS setting I could boot into Unraid and started the array, but after an hour a VM had a stack trace error, so I shut the system down and removed the new RAM. After starting again I could not get the array started: the system log showed multiple btrfs errors. I then started the array in maintenance mode and ran a parity check; 640 errors were found and corrected. I still can't start the array outside maintenance mode (it gets stuck on "starting services"), and the btrfs errors didn't change (logs are attached). I can access my filesystem and all files seem OK, but I can't use the array normally.

     Server specs:
     AMD Ryzen 7 2700 (8 cores, 16 threads)
     16 GB DDR4-3000 RAM (2x8 GB) (I tried to add 16 GB of DDR4-3200)
     3 x 2 TB HDDs
     No cache

     I have 40 Docker containers and one VM. Some of the data is EXTREMELY important to me (a lot of source code and a Nextcloud server).

     How can I repair the filesystem (a read-only btrfs check sketch is at the end of this list)? What went wrong? Do I need to recreate my filesystem? I would need cloud storage for a backup, because my server is the biggest storage device I have. Did I send enough information? Thanks in advance. I have seen a few other threads about similar errors, but I didn't understand how they fixed them (how can I rebuild the docker image?).

     greenytron-diagnostics-20200506-0948.zip
     greenytron-syslog-20200506-0802.zip
  6. I like the beautiful web interface. I would like to see SSD support in 2020.
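Sketch for post 1: with plain Docker (outside the Unraid GUI), one common way to cut a container off from the home network is an internal bridge network. This is an untested sketch; "isolated_net" and the nginx container are made-up placeholders, and Unraid's own Settings > Docker page may offer a cleaner way.

    # Create a bridge network with no outbound route; containers on it can
    # reach each other but not the LAN or the internet.
    docker network create --internal isolated_net

    # Run a container attached only to that network ("my_app"/nginx are placeholders).
    docker run -d --name my_app --network isolated_net nginx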
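Worked example for the calculation in post 2 (the sector number is invented; sectors here are 512 bytes):

    FIRST_REPAIRED_SECTOR=3907029168   # hypothetical first repaired sector from the syslog
    echo "scale=1; $FIRST_REPAIRED_SECTOR * 512 / 1000000000" | bc
    # prints 2000.3 -> about 2000 GB, which would match a replaced 2 TB parity drive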
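Sketch for post 4: the quoted fix is all GUI steps (Settings > Docker). The rough shell equivalent below assumes the stock Unraid 6 service script and the default image path, both of which may differ on your system; verify the actual path under Settings > Docker before deleting anything.

    /etc/rc.d/rc.docker stop                  # stop the Docker service (assumed Unraid init script)
    rm /mnt/user/system/docker/docker.img     # assumed default location; check yours first!
    /etc/rc.d/rc.docker start                 # a fresh, empty docker.img should be created
    # then reinstall your containers via Apps > Previous Apps in the web UI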
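Sketch for post 5: before attempting any repair, a read-only btrfs health check seems like the safest first step. The device and mount paths below are placeholders for whatever your array actually uses; nothing here writes to the disks.

    # Error counters for a mounted btrfs filesystem (repeat for each btrfs disk).
    btrfs device stats /mnt/disk1

    # Read-only consistency check; only run this while the filesystem is NOT
    # mounted (maintenance mode). /dev/md1 stands in for the array device.
    btrfs check --readonly /dev/md1

    # Scrub validates data checksums on a mounted filesystem and logs corruption.
    btrfs scrub start -B /mnt/disk1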