GrehgyHils

Members · 17 posts
  1. @ich777 One more follow-up question. I have been playing with the Pavlov VR server this afternoon and it has been an absolute blast, downloading custom maps and so on. I just noticed alerts from my Unraid server saying:

     > Warning: Docker high image disk utilization (at...

     After running `$ docker system df -v`, I noticed that this container has been writing a lot of data into the `docker.img`:

     ```
     CONTAINER ID   IMAGE                      COMMAND                  LOCAL VOLUMES   SIZE     CREATED        STATUS                        NAMES
     0fae144b0f31   ich777/steamcmd:pavlovvr   "/opt/scripts/start.…"   0               5.33GB   10 hours ago   Exited (143) 10 minutes ago   PavlovVR
     ```

     Are there any specific volumes we should be aware of and create when playing with this image? I.e., so that the large amount of data is written onto the cache drive or the array itself, rather than into the `docker.img`?
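     For context, the kind of mapping I mean -- a sketch only, since I haven't confirmed this container's layout; the in-container path `/serverdata` and the host path are assumptions to check against the template:

     ```
     # Hypothetical volume mapping: keep Steam downloads and custom maps
     # on a cache share instead of inside docker.img.
     docker run -d --name PavlovVR \
       -v /mnt/cache/appdata/pavlovvr:/serverdata \
       ich777/steamcmd:pavlovvr
     ```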
  2. Okay, that makes sense; I was able to get RCON running as described. Thanks ich777! It didn't dawn on me that there's probably only one dedicated Linux server for the game and I could safely assume you're using that. I appreciate your explanation and help!
  3. @ich777 That helps a ton; I should be able to follow this and recreate it easily, so a big thank you! Let me ask you this: how would one figure this out on their own, without resorting to asking in this thread? Is there some documentation that I may have missed? I ask because I've been resorting to exploring the container itself to see which ports are expected, what is running, and so on -- an exploration based quite a bit on luck, ha (sketched below). Thanks again, Greg
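     The kind of spelunking I mean -- standard Docker commands rather than anything specific to this image:

     ```
     docker port PavlovVR              # which ports are published to the host
     docker inspect PavlovVR           # full config: volumes, env, entrypoint
     docker exec -it PavlovVR ps aux   # processes inside (assumes ps exists in the image)
     ```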
  4. Is there any support for RCON for Pavlov VR? Perhaps I'm just unsure where to look for documentation for a specific game.
  5. No official word from the Lime Tech folks on whether this is going to be officially fixed?
  6. Is git lfs still offered by this pack? I have the pack installed but seemingly do not have access to the tool.
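     A quick sanity check, for anyone hitting the same thing -- standard commands, nothing pack-specific:

     ```
     # Verify whether git-lfs is actually on PATH and responding:
     which git-lfs && git lfs version
     ```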
  7. This is amazing work, mgutt. This issue has plagued me for a long time and has already destroyed two of my nice SSDs. I'm hoping to see this officially fixed in an Unraid update... Anyone have any idea if they'll officially reply?
  8. That's unfortunate to hear. Can you share your results of going back to BTRFS when you have them in a few days? Also, what's the thought process behind going to XFS? Additionally, how many cache drives did you have when you were using BTRFS?
  9. Hey everyone, I wanted to report that I believe I'm seeing this bug on a 6.9.2 Unraid box. I had a cache pool of two 480 GB SSDs in RAID 1 that stopped working, which I believe was due to excessive writes. I replaced the hardware just this morning and put only the `appdata`, `domains`, and `system` shares on the cache using the setting `prefer`. Being concerned about the number of writes, I checked the stats, and with the server online for ~26 minutes the cache had already experienced 110,519 writes (~55,000 per disk). Installing `iotop` with Nerdpack allowed me to run `iotop -ao`, which showed that `[loop2]` is responsible for the majority of the writes (a way to confirm what `loop2` is backed by is sketched below):

     ```
     Linux 5.10.28-Unraid.
     root@tower:~# tmux new -s cache
     Total DISK READ :  0.00 B/s | Total DISK WRITE :  0.00 B/s
     Actual DISK READ:  0.00 B/s | Actual DISK WRITE:  0.00 B/s
       TID  PRIO  USER   DISK READ   DISK WRITE   SWAPIN    IO>     COMMAND
     13149  be/0  root   564.00 K    135.61 M     0.00 %    0.44 %  [loop2]
     ```

     I've read that some people have remade their cache drives unencrypted and experienced fewer writes; that's not something I'd like to do... I searched online for advice on how to fix this and found a thread which pointed me to this bug report. Any advice on how to resolve this? Thanks, Greg
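     (To confirm what `loop2` actually is: on stock Unraid the `docker.img` is loop-mounted, so its backing file shows up when listing loop devices -- that mount layout is my assumption here, so check your own output:)

     ```
     # List all loop devices with their backing files;
     # docker.img should appear next to one of them.
     losetup -a
     ```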
  10. Okay, so it looks like the scrub process finished successfully:

      ```
      UUID:            some-uuid
      Scrub started:   Mon Jan 25 08:41:42 2021
      Status:          finished
      Duration:        1:08:11
      Total to scrub:  318.25GiB
      Rate:            79.66MiB/s
      Error summary:   verify=18593 csum=1805142
        Corrected:     1823735
        Uncorrectable: 0
        Unverified:    0
      ```

      Everything is back to working as expected! So a big thank you to JorgeB and Trurl. I'm going to document what happened and what I did so that the next person can hopefully have less panic than I experienced.

      What happened:

      - One of my two cache drives, which are in an array together, disconnected at some point.
      - I reconnected the cache drive. This caused problems when reading or writing to sections that had been updated on the first disk.
      - Unrelated, but my `docker.img` disk usage was climbing and I ignored it, until it hit 100% and all my Docker containers stopped, as did the Docker daemon.

      What I did to resolve the problem:

      - Stopped the Docker service.
      - Ran a cache scrub by selecting the first disk in the array (selecting "repair corruptions...").
      - Verified the corruptions were fixed by running `$ btrfs dev stats /mnt/cache`.
      - Backed up my `docker.img` file just in case (this might not be needed).
      - Deleted the original `docker.img` file.
      - Moved the `docker.img` location to `/mnt/cache/docker.img`, as opposed to the original location of `/mnt/user/system/docker/docker.img`.
      - Lowered the `docker.img` file size from 60 GB to 40 GB, as an experiment I was performing to try to fix the issue.
      - Turned Docker back on, which created a new image file.
      - Went to the Apps tab and used "previous apps", which allowed me to batch-install all my old Docker containers with their original templates already selected.

      What I have not figured out or resolved yet:

      - Which container was the original culprit in filling my `docker.img`. Lots of forum posts, and replies above, suggest I have a misconfigured container that is writing incorrectly to the `docker.img`. If anyone has any tips on how to debug this, it'd be appreciated (my starting point is sketched below)!

      Thanks again everyone
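      Where I plan to start on that last point -- generic Docker commands, not Unraid-specific advice:

      ```
      # Per-container writable-layer usage; a container that writes inside
      # the image (rather than to a mapped volume) shows an outsized SIZE.
      docker ps -as

      # Fuller breakdown: images, containers, and local volumes by size.
      docker system df -v
      ```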
  11. Ah! That was absolutely my problem here. Okay! I began a scrub with "repair corrupted blocks". Since I have two 500 GB SSDs, I imagine my slow CPU might take a while. I'll let this command run, then rerun the above command to ensure no more errors occur. From there I'll learn what "recreate the docker image" means in this context and give that a go. Thanks for your help so far!
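      (While it runs, progress can also be watched from the command line -- a standard btrfs command, assuming the pool is mounted at `/mnt/cache`:)

      ```
      # Shows elapsed time, rate, and errors found so far for a running scrub.
      btrfs scrub status /mnt/cache
      ```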
  12. I apologize, but I'm still not seeing this. If I navigate to the Cache drive (sdc1)'s page, I see sections like: Cache 2 Settings, SMART Settings, Self-Test, Attributes, Capabilities, Identity. I see the SMART tests I could run, but I do not see, nor did Ctrl+F find, anything named scrub. Am I misunderstanding something?
  13. Apologies, I just reread what I wrote and realized it wasn't clear. I'm trying to express that I don't actually follow which command one runs to perform the scrub. I ran `btrfs dev stats -z /mnt/cache` and the output now shows no errors. If the `btrfs dev` command was not the correct way to perform a scrub, can you help me understand that? I've googled this with respect to Unraid and have not been able to piece it together. Thank you for your patience.
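      (For future readers: `btrfs dev stats` only reads -- or with `-z`, zeroes -- the error counters; the scrub itself is a separate command. A sketch, assuming the pool is mounted at `/mnt/cache`:)

      ```
      # Start a scrub; on a read-write mount it repairs correctable errors
      # from the healthy mirror as it verifies checksums.
      btrfs scrub start /mnt/cache

      # Check on it afterwards.
      btrfs scrub status /mnt/cache
      ```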
  14. Ah, okay, that makes sense. I remember one of my two cache disks disconnecting; I did not realize that would cause an issue. I ran the `$ btrfs dev stats /mnt/cache` command and got this output:

      ```
      [/dev/sdb1].write_io_errs    0
      [/dev/sdb1].read_io_errs     0
      [/dev/sdb1].flush_io_errs    0
      [/dev/sdb1].corruption_errs  0
      [/dev/sdb1].generation_errs  0
      [/dev/sdc1].write_io_errs    1507246927
      [/dev/sdc1].read_io_errs     137577961
      [/dev/sdc1].flush_io_errs    19733411
      [/dev/sdc1].corruption_errs  0
      [/dev/sdc1].generation_errs  0
      ```

      Which is what you showed me from the diagnostics output above. So I'm a bit confused by your link above, and I'm trying to be extra careful not to cause any data loss, as I have 50+ containers configured. I've re-seated the cable to the cache drive that disconnected and believe that is resolved. I've also reset the btrfs dev stats. I'm now at the point where I want to recreate the `docker.img`, as I want to be able to bring the Docker containers back online. Any advice?
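      (The steps that eventually worked are written up in post 10 above; in short, with paths assuming the stock Unraid location:)

      ```
      # Stop the Docker service in Settings -> Docker first, then:
      cp /mnt/user/system/docker/docker.img /mnt/user/system/docker/docker.img.bak   # just in case
      rm /mnt/user/system/docker/docker.img
      # Re-enable Docker in the GUI; it creates a fresh image.
      # Then Apps -> Previous Apps re-installs containers from saved templates.
      ```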
  15. Hey all, I noticed my Docker disk space was at 100% and all my containers were stopped. I've read quite a few threads that point to a container potentially being set up incorrectly, so that the data it downloads goes to the wrong folder, but I have been unable to figure out what is responsible. I upped the Docker "Docker vDisk size:" from 40 GB to 50 GB, and the Docker service still reports "Docker Service failed to start." Any advice is appreciated! Attached are my diagnostics, as I've seen many people ask for this data. tower-diagnostics-20210124-1449.zip
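      (Once the service is back up, two quick ways to watch the vDisk utilization -- assuming the stock Unraid setup where `docker.img` is loop-mounted at `/var/lib/docker`:)

      ```
      # How full the loop-mounted docker.img really is:
      df -h /var/lib/docker

      # Docker's own accounting of images, containers, and volumes:
      docker system df
      ```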