HonkyTonk

Everything posted by HonkyTonk

  1. Hello everybody, last week I upgraded from Unraid 6.9.2 to version 6.12.2. After that I got some error messages like
     - general protection fault, probably for non-canonical address...
     - BTRFS critical (device loop2): corrupt leaf. block...
     These were caused by a corrupted Docker image. I learned that, in this case, you can basically get rid of this image and simply build a new one, since the templates (for building the containers) and the appdata are stored outside of the image. So after rebuilding all standard Community Applications containers, I recreated the only container deployed via Docker Compose (Teslamate), and all the data was missing. After some research I learned that its volumes are stored inside docker.img. Luckily I hadn't deleted the corrupt old image; I managed to mount it again, started only the Teslamate container, and was able to do a backup. Now, a few takeaways:
     - first of all: do backups
     - second: everybody who is using Docker Compose, make sure to persist the data outside of the Docker image (I almost lost 2 years of driving data and felt really bad)
     - and lastly: are there any best practices for persisting data in Unraid? I would simply edit the yml manually and map the volumes to the corresponding appdata folder (in this case /appdata/teslamate), roughly as in the sketch below.
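     A minimal sketch of what I mean, assuming the stock Teslamate compose layout (the image tag, credentials, and service name here are placeholders; check the Teslamate docs for the full file). The key change is replacing the named volume with a bind mount under /mnt/user/appdata, so the data lives outside docker.img:

     ```yaml
     services:
       database:
         image: postgres:15            # placeholder tag; use the version Teslamate expects
         environment:
           - POSTGRES_USER=teslamate
           - POSTGRES_PASSWORD=secret  # placeholder
           - POSTGRES_DB=teslamate
         volumes:
           # bind mount to the appdata share instead of a named Docker volume,
           # so the database survives a rebuild of docker.img
           - /mnt/user/appdata/teslamate/db:/var/lib/postgresql/data
     ```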
  2. Thank you for the fast response, saved me probably 2 more hours of debugging and frustration. Maybe you can add a list of supported/tested VPN providers to the readme of the Docker container for other users.
  3. Hi @ich777: I got the exact same issue. Could you please share the solution, if you found one?
  4. Sorry, my bad. Thank you for the link. I'm gonna do it the correct way now.
  5. Thank you for this hint. I updated the BIOS just 2 weeks ago, but I'll keep an eye on this. Regarding my other issue of rebuilding, in case somebody runs into the same situation, my approach is the following. If you need to rebuild your array with the same disabled data drive (and only if you are absolutely sure that the drive is still fine):
     - stop the array
     - remove the disabled drive (i.e. set it to "no device") from its current slot (e.g. Disk 4)
     - add the same data drive as another drive in an extra slot (e.g. Disk 5)
     - start the array (clearing and rebuild should happen automatically)
  6. Hi everyone, I have an array of 5 disks (1x parity, 4x data). During the last parity check one of the data drives got almost 2000 errors. The rest of the data disks had 75 errors each, too. So the "problematic" data drive has been disabled. I downloaded the diagnostics (see attached) and restarted the server (don't know why, actually). After that I started the array in maintenance mode, started a read check of the array, and ran short and long SMART tests on the problematic drive. Everything seems fine, except an additional UDMA CRC error count of 1. I guess this is due to a faulty SATA cable, which I have already replaced. Now I'm not sure what to do next and how. Or better question: are there any more options than rebuilding the array? And if not, what are the steps to rebuild the array with the "problematic" drive? Sorry for asking this basic question. It's my first serious incident with Unraid and I just want to make sure I'm on the right track. Please find the diagnostics attached. Thanks for your help. Much appreciated. nas-diagnostics-20210614-1406.zip
  7. Had the same issue. The creator of the Docker container modified the Docker variables. Edit your tt-rss container, take a look at the variables, and fill them in correctly. According to the error message you are using Postgres as the database, e.g. in the variable TTRSS_DB_TYPE you have to type "pgsql" instead of "mysql". In my case I had to fill out everything again (DB type, host, user, password, name etc.), roughly like the example below. No clue how and why this happened, because I set up the container months ago and hadn't updated it since then.
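     For reference, this is roughly what the filled-out variables look like, written here as a compose-style environment block (host, port, names, and password are placeholders for your own setup):

     ```yaml
     environment:
       - TTRSS_DB_TYPE=pgsql         # "pgsql" for Postgres, "mysql" for MySQL/MariaDB
       - TTRSS_DB_HOST=192.168.1.10  # placeholder: IP/hostname of your database container
       - TTRSS_DB_PORT=5432
       - TTRSS_DB_NAME=ttrss
       - TTRSS_DB_USER=ttrss
       - TTRSS_DB_PASS=secret        # placeholder
     ```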
  8. Thanks, it's working again. I am wondering how this happened, since I didn't change any configuration and browsed the GUI via HTTPS before.
  9. Hi, since yesterday I am unable to access the webGUI of my Unraid server, although I was able to access several Docker containers which are running on the system. I SSH'd into the machine (which was instant) and created the attached diagnostics.zip. Any ideas how to fix that? Please help! nas-diagnostics-20200514-0912.zip
  10. - I love the simple Docker and VM integration aaand...
      - would like to see in 2020 some way of grouping Docker containers in the GUI