vb543

Members
  • Posts: 25

  1. It's unfortunately looking like the process of starting the Docker container blows away any existing data, as I'm having trouble restoring these containers even on a different Docker host. Not sure if anyone has any ideas at this point, or if I should just consider the data lost. -- edit -- Well, my current guess is that somewhere along the backup-and-restore process the folder structure of everything in appdata was preserved, but just about every file is gone (see the appdata verification sketch after this list). Not sure whether unbalance or the native mover is at fault, but it looks like I'm rebuilding everything from scratch.
  2. Hello! My cache pool recently required reformatting. I did this by using the unbalance plugin to move my appdata and system folders from the pool onto an array disk. Then I formatted the cache and used the native mover to move the appdata folder back onto the cache. That all seemed to go just fine, but now whenever I start a Docker container, it acts like it's running for the first time and ignores any existing configuration stored in the appdata folder. I can see the modified dates and times update within the appdata folder when I launch the container, but I can't figure out why it ignores everything and behaves as if it had just been installed.

     Before starting the container:

     root@Enterprise:/mnt/user/appdata/unifi-controller# ls -la
     total 0
     drwxr-xr-x 1 nobody users   22 Dec  3  2019 ./
     drwxrwxrwx 1 nobody users 1098 Mar 19 15:21 ../
     drwxr-xr-x 1 nobody users   42 Mar  5 15:48 data/
     drwxr-xr-x 1 nobody users    0 May 22  2021 logs/
     drwxr-xr-x 1 nobody users    8 Mar  3 15:33 run/

     After starting the container:

     root@Enterprise:/mnt/user/appdata/unifi-controller# ls -la
     total 0
     drwxr-xr-x 1 nobody users   22 Dec  3  2019 ./
     drwxrwxrwx 1 nobody users 1098 Mar 19 15:21 ../
     drwxr-xr-x 1 nobody users  200 Mar 19 20:35 data/
     drwxr-xr-x 1 nobody users   52 Mar 19 20:34 logs/
     drwxr-xr-x 1 nobody users   86 Mar 19 20:35 run/

     What's even more confusing is that a couple of containers are working just fine, like Grafana and Home Assistant, and their paths are configured the same as the others from what I can see. I've looked over the Docker troubleshooting page, in particular the "Why did my files get moved off the Cache drive?" section. I've made sure that my containers are using /mnt/cache/appdata, and that still doesn't seem to work properly. (Some were previously set to /mnt/user/appdata/ and I've tried changing those without any luck.) Any suggestions? Thanks! enterprise-diagnostics-20220322-1041.zip
  3. I seem to be having an issue with most of my Docker containers now. I used the mover and set my appdata share to prefer the cache. I can see all the appdata back on the cache pool; however, when I start most of my containers, they appear to have lost all their settings/configs and come up as if they were just installed. Any ideas? -- edit -- Take this UniFi Controller container for example. The appdata is there, and when I start the container I can see it's accessing that same appdata, but then it takes me through the 'new setup' wizard as if it had never been configured before.

     Before starting the container:

     root@Enterprise:/mnt/user/appdata/unifi-controller# ls -la
     total 0
     drwxr-xr-x 1 nobody users   22 Dec  3  2019 ./
     drwxrwxrwx 1 nobody users 1098 Mar 19 15:21 ../
     drwxr-xr-x 1 nobody users   42 Mar  5 15:48 data/
     drwxr-xr-x 1 nobody users    0 May 22  2021 logs/
     drwxr-xr-x 1 nobody users    8 Mar  3 15:33 run/

     After starting the container:

     root@Enterprise:/mnt/user/appdata/unifi-controller# ls -la
     total 0
     drwxr-xr-x 1 nobody users   22 Dec  3  2019 ./
     drwxrwxrwx 1 nobody users 1098 Mar 19 15:21 ../
     drwxr-xr-x 1 nobody users  200 Mar 19 20:35 data/
     drwxr-xr-x 1 nobody users   52 Mar 19 20:34 logs/
     drwxr-xr-x 1 nobody users   86 Mar 19 20:35 run/
     root@Enterprise:/mnt/user/appdata/unifi-controller#
  4. Ahh, "Pool" as in the cache pool. Sorry for the misunderstanding! It seems the mover is running extremely slowly and/or not clearing the cache pool at all. Is there a recommended method for manually copying its contents over to the array? -- edit -- I was able to use the unbalance plugin to copy everything to the array. Then I used 'blkdiscard /dev/sd#' to wipe the cache drives and formatted them after starting the array (a rough sketch of this manual approach follows this list). So far so good. Thanks everyone!
  5. That's unfortunate. Are there any options other than a backup and reformat? I have a cloud backup, but with 45TB of data that's going to take a while... -- edit -- This seems to be the error in the syslog right before the entire array goes read-only:

     Mar 18 23:06:45 Enterprise kernel: BTRFS critical (device sdd1): corrupt leaf: root=10 block=602423296 slot=522, unexpected item end, have 2290802566 expect 16275
     Mar 18 23:06:45 Enterprise kernel: BTRFS error (device sdd1): block=602423296 read time tree block corruption detected
     Mar 18 23:06:45 Enterprise kernel: BTRFS: error (device sdd1) in add_to_free_space_tree:1039: errno=-5 IO failure
     Mar 18 23:06:45 Enterprise kernel: BTRFS info (device sdd1): forced readonly
     Mar 18 23:06:45 Enterprise kernel: BTRFS: error (device sdd1) in __btrfs_free_extent:3221: errno=-5 IO failure
     Mar 18 23:06:45 Enterprise kernel: BTRFS: error (device sdd1) in btrfs_run_delayed_refs:2144: errno=-5 IO failure

     -- edit 2 -- Well, I seem to have been mistaken in stating that the entire array goes read-only. The share I was testing from was set to utilize the cache, so it would throw a read-only error. It appears that it is just my cache pool that is going read-only (see the BTRFS check sketch after this list).
  6. Hello! I've been having some issues where my array and cache drives go read only after a few hours. Parity checks complete without any issues. Everything works fine for a few hours after a reboot -- but eventually the array will go read only. Attached are my diagnostics. Any suggestions? Thanks! enterprise-diagnostics-20220317-2102.zip
  7. Cool, can confirm rolling back from 6.9.1 to 6.8.3 resolved this issue. Hopefully they can get it patched in a future update.
  8. Doing some troubleshooting, I created a new VM with just a CentOS 7 live-boot image attached, and it too would not boot past the underscore. I suppose this rules out my original VM being the issue; it looks more like an Unraid / KVM issue.
  9. Interesting, I thought my problem was related more to the unclean shutdown, but I did do the update to 6.9.1 during all of that as well.
  10. Hello! I have a VM that's giving me some trouble. The issue started after an unclean shutdown. When starting the VM, I can get to GRUB just fine, but after selecting the CentOS entry in GRUB I just get a solid underscore on a black screen. I can also see that one of the Unraid host's CPU cores is stuck at 100% usage when this occurs. Any suggestions for getting this VM working again? Thanks!
  11. Having the same problem unfortunately. Ever find a fix?
  12. Just had another crash with a few VMs running but no parity check this time. I'm at a loss at this point.
  13. Was able to replicate the same results. Fine for nearly a day, start a parity check and it freezes up within the hour. What's the next best step?
  14. So it ran just fine with the array started for 24 hours. At the 24-hour mark I started a parity check, and it failed six hours in. Last time it failed about four hours into the parity check. Any idea why? Still worth looking into the PSU, or is it more likely something else?
  15. I was indeed running parity checks during previous crashes. However, they seemed to run for hours without any issues, so I didn't think too much of it. I ended up with this PSU after trying to find something with two EPS connectors that Amazon could get to me quickly. If this really could be the cause, I'll look around for a different power supply.
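
Appdata verification sketch (re: posts 1 and 2): one possible way to check, from the Unraid shell, whether the restored appdata actually contains files or only an empty directory skeleton. The cache path matches the listings above; the array-disk path is only a placeholder for wherever the unbalance copy ended up.

     # Count regular files and total size under the restored appdata on the cache pool.
     find /mnt/cache/appdata -type f | wc -l
     du -sh /mnt/cache/appdata

     # Compare against the copy left on an array disk.
     # /mnt/disk1/appdata is a placeholder -- substitute whichever disk unbalance copied to.
     find /mnt/disk1/appdata -type f | wc -l
     du -sh /mnt/disk1/appdata

If the cache side shows directories but almost no regular files, that matches the "folder structures saved, files gone" symptom described in post 1.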
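
Manual cache-to-array sketch (re: post 4): a rough outline of how the copy-and-wipe could be done by hand, assuming rsync is used instead of the unbalance plugin. The destination folder and device name are placeholders, and blkdiscard permanently erases whatever device it is pointed at, so the name needs to be double-checked and the array stopped before that step.

     # Copy the cache pool's contents to an array disk (stop Docker/VM services first so
     # files are not in use; /mnt/disk1/cache-backup/ is a placeholder destination).
     rsync -avh /mnt/cache/ /mnt/disk1/cache-backup/

     # With the array stopped, discard each old cache device.
     # /dev/sdX is a placeholder -- verify the device name before running this.
     blkdiscard /dev/sdX

     # Start the array again, format the now-blank cache pool from the web UI,
     # then move appdata back with the mover or rsync.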
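
BTRFS check sketch (re: posts 5 and 6): some checks that could be run against the cache pool when it starts going read-only. /mnt/cache is assumed to be the pool's mount point, as in the listings above.

     # Per-device error counters (read, write, flush, corruption, generation).
     btrfs device stats /mnt/cache

     # Foreground scrub with per-device statistics; reads all data and metadata
     # and reports checksum/corruption errors when it finishes.
     btrfs scrub start -B -d /mnt/cache

     # Recent BTRFS kernel messages, matching the syslog lines quoted in post 5.
     dmesg | grep -i btrfs | tail -n 50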