shaihulud

Members
  • Posts: 14
  • Joined
  • Last visited

  1. Oh wow thank you, this may be just what I needed. Set it to 30GB, let's see how it goes.
  2. Oh this sounds like it may be exactly what I need, thank you. I will look into it further.
  3. Thanks for your help, here they are. Things seem to be working after a reboot. Dockers are back; I've disabled Graylog and DelugeVPN for now until I have a better idea of what was going on. I did have a large torrent downloading and my cache drive got 80% full, so maybe it was just a space issue, although I would have expected things to fail a little more gracefully if my cache disk filled up (although maybe that was an incorrect assumption). shai-hulud-diagnostics-20240106-1046.zip
  4. I recently ran into a similar issue on 6.12.6. I discovered that my dockers were not running and saw the message "Docker service failed to start" in my Docker tab, along with a bunch of errors in the logs like:

     ```
     kernel: I/O error, dev loop2, sector 2764512 op 0x1:(WRITE) flags 0x100000 phys_seg 1 prio class 2
     kernel: BTRFS error (device loop2: state EA): bdev /dev/loop2 errs: wr 67, rd 0, flush 0, corrupt 0, gen 0
     kernel: loop: Write error at byte offset 5152833536, length 4096.
     ```

     It sounds like there is some issue writing to the docker image? So I tried to shut down the array; it wouldn't go down and got stuck on unmounting disks. I tried the suggestions in this thread, but unfortunately `umount /var/lib/docker` did not seem to work. Messages in the logs make it look like the docker image isn't the issue: the system also cannot unmount my cache drive and a drive in the array. I went into the Open Files plugin and saw that several appdata folders were in use by `shfs`. I killed that process via the terminal, still no luck. I wound up hitting the Shutdown button, and that worked cleanly, though I'm still not sure why. Seems like maybe a disk I/O issue?

     My plan is to go into my docker configs and change `/mnt/user` to `/mnt/cache` where possible, to hopefully lighten the load on the FUSE filesystem. My only significant recent changes to the system were adding an additional NVMe ZFS pool (however, no data has been added to it yet, so I'm doubtful it's the culprit) and setting up a Graylog stack with docker-compose. I'm a little worried that writes from Graylog mucked things up, but I only have syslog and Plex feeding into it, so the writes shouldn't be *too* excessive. Anyone have any thoughts?
  6. Hey @Konijntjes, ever find a solution to this issue? I've got the same Gigabyte motherboard, and I'm unable to get the PWM controller to show up in Dynamix Fan Auto Control
  7. For anyone finding this later, I'm pretty sure I fixed this issue by removing the docker folders plugin
  8. @rs5050 Were you ever able to figure out what was causing your container size to be so large? I seem to be running into a similar issue, my container is using up 3.44GB, which seems large.
  9. Hi there, I recently received the message "Out Of Memory errors detected on your server" from the Fix Common Problems plugin. The error message directed me to make a post here with my diagnostics file. I have not yet experienced any ill effects from this message, but if anyone could help me resolve it in the correct way, it would be greatly appreciated. Happy to provide any additional info to help debug. Thank you! server-diagnostics-20221211-1200.zip
  10. Hey @sit_rp, I was able to get time machine backing up to an unassigned drive by using your settings. I'm encountering an issue though where I'm unable to unmount the unassigned drive. Have you encountered this at all?
  11. Would you be able to point me in the right direction for setting up keyfile authentication? I tried googling but wasn't able to find much on running ssh from inside an unraid docker container. I found that running this command instead of `findmnt` seems to work sometimes: `test -d mnt/backup_1_repo/data > /dev/null || (echo "Backup disk 1 not found, failing over" && exit 75)`. This works correctly when I run borgmatic manually inside the container; however, for some reason it's not working correctly when running from my crontab -- the soft failure always occurs, even when the disk is mounted. So I'm going back to trying the ssh/findmnt suggestion to see if that will work any better.
  12. Hi! I'm using this container in unraid to set up backups to an intermittently connected external drive. Following the borgmatic docs here, I'm trying to probe for the drive using findmnt; however, when I run it in the container I get `findmnt: not found`. From googling, it seems like findmnt is included in most Linux distros, and it works in the unraid shell, so I'm not sure why it doesn't work in the container. Do you know why it may not be available, and if there is any fix or alternative approach to checking if a drive is mounted? Thanks!
  13. Heyyo, did you ever find a solution here? I have a similar use case, I have a single hot-swap bay that I want to use to rotate two different backup disks periodically, keeping one actively backing up, and another off-site in cold storage. I looked at the linuxserver/rsnapshot docker, but like you said, it looks like that needs to be configured to a single mount point. So I would either somehow need to duplicate the container and have an instance for each disk, or reconfigure the container each time I swap disks? Ideally I'd be able to just swap the disks and have rsnapshot automatically pick up where it left off making incremental backups to that disk, but I'm not certain how to achieve this.
  14. Hey, did you ever find a solution to this? I think I'm getting the same issue, right down to CA taking forever to load.
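The mount-probe problem from posts 11 and 12 can be solved without `findmnt`, which is often missing from minimal container images: `/proc/mounts` lists every active mount and is always available. A minimal sketch, assuming the repo path `/mnt/backup_1_repo` mentioned in the posts (note that the `test -d mnt/...` command quoted in post 11 uses a relative path, which only resolves when the working directory is `/`; cron jobs typically run from elsewhere, which would explain the check always soft-failing there):

```shell
#!/bin/sh
# Portable mountpoint check: findmnt may be absent inside a container image,
# but /proc/mounts is provided by the kernel. The path must be absolute.
is_mounted() {
    grep -qs " $1 " /proc/mounts
}

# Example borgmatic before_backup hook using the helper (path is illustrative).
# Exit code 75 (EX_TEMPFAIL) is what borgmatic treats as a soft failure, so the
# repo is skipped rather than erroring when the disk is unplugged:
# is_mounted /mnt/backup_1_repo || { echo "Backup disk 1 not found, failing over"; exit 75; }
```

Because the check runs on an absolute path from `/proc/mounts`, it behaves the same interactively and under cron.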
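The stuck-unmount situation in post 4 usually comes down to some process still holding files open under the mount; `fuser -vm /mnt/cache` or `lsof +f -- /mnt/cache` show this directly. Where neither tool is installed, the same idea can be sketched by scanning `/proc` (the helper name is hypothetical, and it only sees processes the caller is allowed to inspect):

```shell
#!/bin/sh
# pids_using DIR: print PIDs that have a file descriptor open somewhere under
# DIR, by reading the symlinks in /proc/<pid>/fd. A rough stand-in for
# `fuser -m` / `lsof` when neither is available.
pids_using() {
    for fd in /proc/[0-9]*/fd/*; do
        # Skip fds we cannot read (other users' processes, races with exits).
        target=$(readlink "$fd" 2>/dev/null) || continue
        case "$target" in
            "$1"/*) echo "${fd#/proc/}" | cut -d/ -f1 ;;
        esac
    done | sort -un
}
```

Killing whatever this reports (here, likely `shfs`, as the post found via the Open Files plugin) is what lets the unmount proceed, though as the post notes that alone did not free the mount in this case.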