veri745

Everything posted by veri745

  1. You are correct: I tracked it down, and it turns out this is related to the Dynamix Cache Dirs plugin. I'm still not sure why the conversion to ZFS caused the noticeable increase in CPU usage, but I removed my ZFS shares from the cached directories and the problem went away.
  2. ZFS Master appears to be chewing tons of CPU every 30s, in line with the "refresh interval" specified in the plugin settings. I noticed my server was constantly cycling, with 2 of the cores maxed out every few seconds, so I started looking for the cause. Sure enough, the busy processes are hitting my zfs pools, and the timing matches the refresh interval. Why so much CPU usage?
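     For anyone who wants to see the same thing on their own box, a minimal sketch of how to catch what runs on each refresh tick (it just samples the process list; nothing here is plugin-specific):

     ```bash
     # Sample the process list once a second and print any zfs/zpool commands,
     # so CPU spikes can be lined up against the plugin's refresh interval.
     while true; do
         date '+%H:%M:%S'
         ps -eo pcpu,etime,args | grep -E '[z]pool|[z]fs' || echo "  (no zfs/zpool processes)"
         sleep 1
     done
     ```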
  3. Whatever was causing the slow parity check did not persist the second night of the parity check (I have it set to pause during the day and resume at night), so I didn't get a chance to grab the diagnostics in the midst of the issue. I'm going to chalk it up to one of my docker containers doing some sort of media scan, since I had to blow away the docker system directory and re-create all the containers (although my appdata was all intact after the cache drive migration). I dunno.
  4. I reformatted my system cache drive to zfs and added a disk to the array for zfs snapshot replication. Most of my array drives (including the new zfs drive) are 4TB, and typical read speeds for a parity check start around 140-150MB/s and head down to ~80MB/s toward the end of the 4TB space on the array. After backing up my cache drive to the array, restoring it back to the cache, and adding the new drive, I decided to kick off a parity check (there were some issues with mover hanging on the docker folders of the cache drive). Read speeds around the 1TB mark of the parity check are already down to ~33-40MB/s. What would cause such a slow parity check after adding a new drive? Does having a mix of xfs and zfs drives affect parity check performance?
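     Rough math on what those speeds mean for a full pass over 4TB drives (my own back-of-the-envelope estimate, ignoring the normal slowdown toward the end of the disks):

     ```bash
     # Approximate hours for a 4TB (decimal) parity pass at a fixed average speed.
     # Integer division, so the results round down slightly.
     for speed in 140 80 35; do
         echo "${speed} MB/s -> $(( 4 * 1000 * 1000 / speed / 3600 )) hours"
     done
     # 140 MB/s -> 7 hours, 80 MB/s -> 13 hours, 35 MB/s -> 31 hours
     ```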
  5. Yes, except I wanted a different subnet because the old one created issues with connecting to my wireguard VPN from remote networks using the same subnet. Maybe there's another way to fix that, but I figured a new router/firewall install was a good opportunity to do so.
  6. OK, another reboot seems to have resolved the issue. I'm still confused as to where that incorrect IP was coming from.
  7. Also note, it's not all docker containers. One new container I added gets the proper WebUI link, as do a couple of the old ones, notably the containers that connect via VPN, which needed config changes before they would reconnect after the network change. But neither restarting/shutting down individual containers nor disabling/re-enabling docker seems to fix the others.
  8. BTW, the docker containers are all up and working, and talking to each other.
  9. I recently got a new router and changed my network config. My old network had a subnet of 192.168.1.0/24, and my new network has a subnet of 192.168.2.0/24. My unraid server used to have a fixed IP of 192.168.1.200, and now it is located at 192.168.2.100. I updated network.cfg with the new network info (IPADDR/GATEWAY/DNS etc.), updated the port mappings for my docker containers, rebooted the server, and disabled/re-enabled docker. But still, the WebUI link for each of my docker containers points to 192.168.2.200: the new subnet, but the old fixed-IP octet. What do I need to do here? *edit* Unraid version is 6.12.3
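     For context, the static-IP portion of network.cfg ends up looking roughly like the sketch below. IPADDR/GATEWAY/DNS are the fields I edited; the other key names and the gateway/DNS addresses shown here are from memory, so verify them against your own file:

     ```bash
     # /boot/config/network.cfg (static-IP sketch)
     # NOTE: key names other than IPADDR/GATEWAY/DNS_SERVER1, and the gateway/DNS
     # addresses below, are assumptions -- check your own file before copying.
     USE_DHCP="no"
     IPADDR="192.168.2.100"      # new fixed IP of the server
     NETMASK="255.255.255.0"     # the new /24 subnet
     GATEWAY="192.168.2.1"       # assumed router address on the new subnet
     DNS_SERVER1="192.168.2.1"   # assumed; point at whatever DNS you actually use
     ```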
  10. The 2-hour difference between this post and the previous one makes me think you may not be running memtest86 for long enough. It takes quite a lot longer than 2 hours to test 32GB of RAM.
  11. The version check for unraid 6.2 is broken in 6.10+. Here is an alternative check:

     ```bash
     function version { echo "$@" | awk -F '[.]|-rc' '{ printf("%d%03d%03d%03d\n", $1,$2,$3,$4); }'; }

     # check unRAID version
     v1=`cat /etc/unraid-version | awk -F= '{ print $2 }' | tr -d '"'`
     if [[ $(version $v1) -ge $(version "6.2") ]]
     then
       v=" status=progress"
     else
       v=""
     fi
     ```
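     A quick sanity check of what the helper turns version strings into (values from running the function by hand; your shell should print the same numbers):

     ```bash
     version 6.2          # 6002000000
     version 6.12.3       # 6012003000
     version 6.12.0-rc2   # 6012000002  (the -rc suffix becomes the last field)
     ```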
  12. You can boot into memtest86 and run that for a few hours to test memory stability. You also mentioned your UPS. Try running without it, and/or with a new unit or battery.
  13. I followed the Shrink array instructions with the user script for clearing an array disk. As soon as I started the script, the write activity to the clearing disk was ~600 KB/s, and web UI interactivity went to shit. Plex stopped responding, and I had to log in via SSH to shut down plex and several other docker containers before I could get back in via the web interface. In 'top', the "Wait for I/O" was sitting around 70-90%. I tried turning off Turbo-write (I had it enabled via the CA Auto Turbo Write plugin), and writes to the clearing disk (and parity) are up at 12-13 MB/s. That's still pretty slow, but at least my system isn't getting crushed. Any thoughts on why Turbo-write performance was so bad? I thought it was supposed to help in situations like this.
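     If anyone else hits this, a simple way to watch per-disk throughput and I/O wait while the clear runs (assuming iostat from the sysstat package is available on your system):

     ```bash
     # Extended per-device stats every 5 seconds: watch %iowait in the CPU line
     # and the MB/s and %util columns for the parity and clearing disks.
     iostat -xm 5
     ```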
  14. So what is the proper way to exclude a folder? I've seen the question asked several times but no definitive recommendations or answer. Do I need a trailing `/`? Does the script accept wildcards in the filename? Should I point it to /mnt/cache/<Share> or /mnt/user/<Share>? It might be good to put something in the actual help text of the "File list path" field in the UI, or add something to the pinned messages, so people don't have to dig through 40 pages of posts. So is it:
     A) /mnt/user/<Share>/folder
     B) /mnt/user/<Share>/folder/
     C) /mnt/user/<Share>/folder/*
     D) /mnt/cache/<Share>... and one of A, B, or C
     E) Some combination
  15. Auto-update is enabled, but I just did a fresh install today. The netdata UI says "v1.39.0-23-nightly"
  16. The only files created are `.container-hostname` and `.opt-out-from-anonymous-statistics`. I did get a bunch of folders and a `netdata.conf` when I installed the `stable` version, but not with `latest`.
  17. I reinstalled and used those default mappings. There does not appear to be any write activity to any of those mapped directories, and the config directory remained completely empty. I copied over the data from my "override" folder, but that doesn't seem to make any difference. I would just go ahead and use the official netdata docker, but the template for that image is completely empty, too.
  18. What's in your system share? Mine is literally only my docker.img and libvirt.img, and the sizes of those are controlled via the VM and docker configs, so I don't know what would even be eating space there. *edit* It might be helpful to understand how much of that space is taken up by each of your docker containers. You can go to the "Docker" tab and click on "container size" to get a listing.
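     If you'd rather check from a shell than the web UI, these stock docker commands report roughly the same information (nothing Unraid-specific here):

     ```bash
     # Per-container size of the writable layer (plus the virtual image size)
     docker ps --all --size

     # Overall breakdown of space used by images, containers, and volumes
     docker system df -v
     ```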
  19. So I pulled open my server's case tonight to swap out to all-new SATA cables and to move my pci-e SATA card to a different port. I discovered something interesting that may point to a potential failure-mode that I've been experiencing: My parity drive and disks 2 and 4, the disks that had errors on them, are in a 4-bay hot-swappable drive cage, and they're also all connected to my 4-port SATA card. One of the molex power connectors that powers the hotswap cage backplane board had come loose, so it was being powered by only two molex connectors instead of three. I'm thinking that under certain load conditions, there wasn't enough juice getting to the drives in that drive cage
  20. Try enabling syslog server to catch what happens prior to the unexpected reboot
  21. No problem. An alternative possible mapping is just /mnt/user/Media <-> /data. Then the container sees whatever you have in your Media share. It just depends on whether you want to expose everything to the container.
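     On the command line that single mapping would look something like the sketch below; the Unraid template UI generates the equivalent flag for you, and the image name here is only a placeholder:

     ```bash
     # One host path mapped to one container path; everything under the
     # Media share shows up inside the container at /data.
     docker run -d --name emby \
       -v /mnt/user/Media:/data \
       emby/embyserver   # placeholder image -- use whatever image/template you already run
     ```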
  22. Yeah, so you just need to populate the "container path" for "Home Movies", "Music", and "Photos"
  23. And what do the mappings in your Emby docker look like?
  24. It sounds like you have your shares and docker mappings correct, but it might help if you shared exactly what your docker mapping looks like and how you have your files organized in your user shares to avoid confusion