Everything posted by Kaldek

  1. Confirmed at my end that not using Active/Backup NIC Failover has kept my system stable for 25 days.
  2. Fair enough. I'm not sure I'm game to turn active/backup back on and wait for another crash. I had so many crashes that I think I'm still in the "just want stability for a while" phase. I guess if that means this bug report has to be closed, I'll just have to live with it.
  3. Yeah, I did plan on that, but given that I wasn't even getting kernel panic messages on the console, and that it's core network driver related, the chances of that syslog message even getting out were... low.
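     For what it's worth, if I do chase it again, netconsole is probably the only way those last kernel messages would ever get off the box, since it sends printk output straight over UDP and doesn't depend on syslog or the disks still working. A rough sketch, with placeholder addresses for my server, the receiving log host, and its MAC:

        # Send kernel messages over UDP to another machine on the LAN
        # (source IP/interface and target IP/port/MAC below are placeholders)
        modprobe netconsole netconsole=6665@192.168.1.5/eth0,6666@192.168.1.10/aa:bb:cc:dd:ee:ff

        # Anything listening on UDP 6666 on the target (a syslog daemon, or netcat) will capture the output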
  4. Unfortunately I do not have any diagnostics files from when this issue occurs, as the system insta-reboots and leaves no logs at all. I recently enabled link bonding (active/backup) on unRAID 6.12.6 between a dual port Intel 10Gb/s XFP module (ixgbe driver) on eth0 and the onboard gigabit Intel NIC (igb driver) on eth2. My server started rebooting every few days, with no pause for kernel dumps or anything. The issue did not go away until I removed the active/backup link bond and shut down the eth2 NIC again. One additional piece of useful information: whilst in the same Layer-2 broadcast domain, eth0 and eth2 are connected to different switches. Diagnostics file attached, but note that it does not have the active/backup config in it. unraid-diagnostics-20240110-1155.zip
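     If it helps with reproducing this, the bond and driver details can be pulled from the CLI while the bond is up. A quick sketch, assuming unRAID names the bond interface bond0:

        # Show the bonding mode and which slave is currently active
        cat /proc/net/bonding/bond0

        # Confirm which driver each member NIC is bound to (ixgbe vs igb here)
        ethtool -i eth0
        ethtool -i eth2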
  5. Support had to do a manual key replacement to fix this issue. I literally just listened to the unRAID podcast where the support guys were saying license key management was a major headache. Guess there's still a lot of work to do at the back end.
  6. I have access to all that, and it looks like the image below. Unfortunately there appears to be no documentation on "Signing Out" a key and what that all means. It drives me up the freaking wall when I find the documentation hasn't been updated. Documentation must never be delayed and must be part of the release process! It creates needless workload for the tech support folks that would go away if the damned documentation were just kept up to date.
  7. I bit the bullet and did it. I confirmed that keeping Pool Slot assignments also keeps ZFS pools.
  8. I just installed a new DOM-based USB key for my server, following all of the instructions located at https://docs.unraid.net/unraid-os/manual/changing-the-flash-device. There is NO "Replace USB key" option, and it appears this is because I upgraded my license on the 14th of July. I have had this USB key for over four years. This is ridiculous; why is a license upgrade classed as a "key replacement"?
  9. Folks, it's a bit of a worry that I'm getting zero response to this question, and it makes me scared to use ZFS on unRAID at all. Can *somebody* reply?
  10. I will admit that my current flash drive - a SanDisk Cruzer Fit 32GB - has lasted me a very long time without issue when connected to a USB2 port.
  11. Ah, I seem to have missed the part where you wrote you're running unRAID on a QNAS box.
  12. Likely. Also, I'm a bit surprised your motherboard has *no* USB2 headers. They're common even on new stuff, even if it's only a single two-port header.
  13. I burned through a few flash drives before switching to USB2 ports only. My current unit has lasted 3 years now. However, I am switching to a USB DOM (Disk On Module) shortly for some extra reliability. They are more expensive but use quality SLC flash. The only downside is that it's mounted to a motherboard header and harder to get to. But it's unlikely I'll ever need to touch it.
  14. I'm in the middle of some major array disk maneuvering which will require a "New Config" in a few days to remove some drives from the array. However, I have both a BTRFS cache pool (mirrored 1TB SSDs) and a ZFS RAIDZ pool of 4x 480GB enterprise grade SSDs. All of my Appdata and Domains live on the ZFS pool. If I lose that, I'm hosed. So, does the "New Config" option support keeping both traditional "pools" and ZFS pools? It just says "Pool Slots" but doesn't clarify whether that will retain ZFS pools.
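     Either way I'll be sanity-checking the pool from the CLI before and after the New Config. A sketch using the standard ZFS tools (the pool name below is just a placeholder for whatever the pool is called in the GUI):

        # Before New Config: note the pool name and confirm it is healthy
        zpool status

        # After New Config: if the pool doesn't reappear on its own, see whether
        # it is simply exported and can be re-imported by name
        zpool import
        zpool import zfs-cache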
  15. Hi folks, long time user here. Upgraded to 6.12 and then 6.12.2 and decided to create a 4-drive SSD ZFS RAIDZ pool using some enterprise grade SSDs I was given, and use that pool for all my VMs and Docker containers. Everything went great, except when I moved the libvirt.img file from the old cache pool to the new ZFS pool. Here's what I did:
      • Set the system share to use the new ZFS pool
      • Shut down the Docker engine via Settings-->Docker
      • "mv /mnt/cache/system/docker /mnt/zfs-cache/system"
      • Restarted Docker - no issues
      • Shut down the VM engine via Settings-->VM Manager
      • "mv /mnt/cache/system/libvirt /mnt/zfs-cache/system"
      • Validated that the file exists within /mnt/user/system/libvirt but physically exists only on the ZFS pool
      • Attempted to restart the VM engine
     This gave me "libvirt service failed to start", and the system logs gave me a bunch of errors from btrfs saying that the "file already existed", along with information about /dev/loop4 and duplicate entities. The issue went away after a reboot, but why did it happen in the first place? I did not have this issue when I moved the docker.img file. Diagnostics file also attached. unraid-diagnostics-20230710-1317.zip
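     In case anyone else hits this, what I'd check next time before restarting the VM service is whether the old loop device is still attached to the image. A sketch; loop4 is just the device number that showed up in my logs:

        # List loop devices and the backing files they point at
        losetup -a

        # If libvirt.img is still attached via the old cache path, detach it
        # before starting the VM service again
        losetup -d /dev/loop4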
  16. Concurred; mine had also just filled up, with the same space consumed by the log.json file used by Plex.
  17. Well, DNS kept going after having IPv6 disabled but Docker had a fit and lost its mind. Server reboot required. I'll just reboot it every couple of days until RC7 is out.
  18. It's been 7 days for me with no issues, and the only change I made was to disable IPv6. User JorgeB above mentioned there is an RC7 floating about that fixes a Docker bug which could cause the issue. He must be a member of the team, as he's a moderator.
  19. I can say with some confidence this is not a DNS server issue for me:
      • Nothing changed except the unRAID version
      • unRAID was always set to use the router, followed by Google, for DNS
      • /etc/resolv.conf shows as blank when the problem occurs
  20. I can confirm the same behaviour. I also tried to stop the array so I could change DNS settings and see if it forced DNS to come back, but it constantly said that Docker was still running. I'm not sure if I said this in my original post either, but it was also impossible to stop the array because Docker would never stop, and as a result it could not unmount the cache pool.
  21. This issue appears to be occurring every few days. I can't ping any hostnames from the CLI, and /etc/resolv.conf is blank. I do not use DHCP for the server address, and my DNS servers are statically assigned. In addition, when it happens I am unable to reboot the server, as the shares never unmount; it constantly tells me /mnt/cache is busy. There are definitely no clients holding shares open when this happens. I have attached diagnostics from when the server is working, and will attach again when it next fails. I have made one change today after the last failure, and that was to disable IPv6. My dual stack ISP connection isn't always the best when it comes to IPv6 working all the time, so I've disabled it to see if that helps, since this seems to mainly be a network issue. unraid-diagnostics-20230521-1953.zip
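     Next time it happens I'll also grab the following from the console before rebooting, to confirm what resolv.conf looks like and what is actually holding /mnt/cache open. Nothing unRAID-specific here, just standard tools:

        # Confirm the resolver config really is empty
        cat /etc/resolv.conf

        # List the processes with open files under the cache mount
        fuser -vm /mnt/cache
        lsof +D /mnt/cache 2>/dev/null | head -n 40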
  22. For what it's worth, here's the code from my discussion with ChatGPT. This is - as yet - untested. But knock yourselves out if you want to see what was generated. Note that this script is intended to be run "After start of array".

        #!/bin/bash

        CONTAINER_NAME="frigate"

        # Wait for 2 minutes for container to start
        sleep 120

        # Get the current container configuration
        CONFIG_JSON=$(docker inspect --type container --format '{{json .}}' ${CONTAINER_NAME})

        # Extract the current command and entrypoint
        CMD=$(echo ${CONFIG_JSON} | jq -r '.Config.Cmd | join(" ")')
        ENTRYPOINT=$(echo ${CONFIG_JSON} | jq -r '.Config.Entrypoint | join(" ")')

        # Extract all options from the HostConfig property
        HOST_CONFIG_OPTIONS=$(echo ${CONFIG_JSON} | jq -r '.HostConfig | to_entries | map(select(.key != "Devices")) | map("--" + .key + "=\"" + (.value | tostring) + "\"") | join(" ")')

        # Replace the USB device option with the new bus ID
        HOST_CONFIG_OPTIONS=$(echo ${HOST_CONFIG_OPTIONS} | sed 's@--device="/dev/bus/usb/004/002@--device="/dev/bus/usb/004/003@g')

        # Extract the image name
        IMAGE=$(echo ${CONFIG_JSON} | jq -r '.Config.Image')

        # Build the new container run command
        NEW_CMD="docker run --name ${CONTAINER_NAME} ${HOST_CONFIG_OPTIONS} ${ENTRYPOINT} ${CMD}"

        # Stop the existing container
        docker stop ${CONTAINER_NAME}

        # Run the container with the new configuration
        eval ${NEW_CMD}
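     One thing I'd want to fix before actually testing it: NEW_CMD extracts ${IMAGE} but never uses it, and docker run needs the image name. Something closer to this is probably needed (still untested, and the entrypoint almost certainly has to go via --entrypoint rather than as a trailing argument):

        # Rebuild the run command with the image included (sketch only)
        NEW_CMD="docker run -d --name ${CONTAINER_NAME} ${HOST_CONFIG_OPTIONS} --entrypoint \"${ENTRYPOINT}\" ${IMAGE} ${CMD}"

        # The old container also has to be removed, not just stopped, or the name will still be taken
        docker stop ${CONTAINER_NAME}
        docker rm ${CONTAINER_NAME}
        eval ${NEW_CMD}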