
L0rdRaiden


  1. Apparently the reference is not needed; it works without it, but it's an "unsafe" practice.
  2. Network config: under Docker Settings, eth4 and its VLANs don't appear. I have rebooted and applied the settings several times, but I always get the same result... Can anyone help me troubleshoot this bug? Diagnostics file attached: unraid-diagnostics-20250708-2208.zip
  3. Tailscale in LXC containers · Tailscale Docs. Is this the right way to configure it? Is "lxc.cap.drop =" needed?

     # Allow Tailscale to work
     lxc.cap.drop =
     lxc.cgroup2.devices.allow = c 10:200 rwm
     lxc.mount.entry = /dev/net/tun dev/net/tun none bind,create=file
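     A quick way to verify the mapping took effect (a minimal sketch; it assumes Tailscale is already installed inside the container):

     # Inside the container, after restarting it with the config above:
     ls -l /dev/net/tun    # should show a character device with major/minor 10, 200
     tailscale up          # should now be able to create its tunnel interface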
  4. No, it's still consuming 4 or 5 GB; in previous versions it was 21 GB. A lot of people have been reporting this here and on Reddit, but there is no official acknowledgement from unRAID.
  5. And could you please consider implementing the concept of a global env in combination with the "local" one? You could add a setting here for the global env path, and then modify the code to load both env files when the global one exists. This is the script I use to launch everything after array start; the problem is that it isn't "compatible" with Compose Manager, since I can't use a global env:

     #!/bin/bash
     # Exit on error, on unset variables, and on pipe failures
     set -euo pipefail

     sleep 10

     echo "⏹️ Checking running containers..."
     running_containers=$(docker ps -q)
     if [ -n "$running_containers" ]; then
         echo "⏹️ Stopping all running containers..."
         # Intentionally unquoted: one container ID per word
         docker stop $running_containers
         echo "✅ Containers stopped successfully."
     else
         echo "✅ No running containers."
     fi
     echo ""

     # Base path for docker-compose files
     base_path="/mnt/services/docker/git/homeserver/docker-compose"

     # Path to the global .env file
     global_env_file="$base_path/.env"

     # Check if the global .env exists
     if [ ! -f "$global_env_file" ]; then
         echo "⚠️ Global .env file not found at $global_env_file — proceeding anyway."
     fi

     # Parse a .env file into an associative array (passed by name)
     parse_env_file() {
         local file=$1
         declare -n env_map=$2  # Use -g if you need it globally, but a local nameref is good here
         # Read only lines that are not comments and contain an equals sign
         while IFS='=' read -r key value || [[ -n "$key" ]]; do
             # Trim whitespace around the key; skip empty keys
             key=$(echo "$key" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
             if [[ -z "$key" ]]; then
                 continue
             fi
             # Trim whitespace around the value
             value=$(echo "$value" | sed 's/^[[:space:]]*//;s/[[:space:]]*$//')
             # Remove surrounding quotes if present (handles simple cases)
             value="${value#\"}"; value="${value%\"}"
             value="${value#\'}"; value="${value%\'}"
             env_map["$key"]="$value"
         done < <(grep -Ev '^[[:space:]]*#|^[[:space:]]*$' "$file" | grep '=')
     }

     # List of services with their start delay in seconds (format: "service:delay")
     services_with_delays=(
         "adguardhome5:0"
         "adguardhome6:0"
         "administration:0"
         "komodo:0"
         "homeautomation:0"
         "media:0"
         "webproxyint:60"
         "safeline:30"
         "webproxydmz:0"
         "monitoring:0"
         "backup:0"
         #"portainer:10"
         #"dockge:10"
         #"immich:10"
     )

     # List of containers to stop at the end
     containers_to_stop=(
         "Netdata"
     )

     # Start services with delays
     for entry in "${services_with_delays[@]}"; do
         IFS=":" read -r service delay <<< "$entry"
         compose_file="$base_path/$service/docker-compose.yml"
         local_env_file="$base_path/$service/.env"

         echo ""
         echo "🚀 Starting service: $service"

         # Check that the docker-compose file exists
         if [ ! -f "$compose_file" ]; then
             echo "❌ Compose file not found for $service at $compose_file. Skipping."
             continue
         fi

         # Warn about variables defined in both env files
         if [ -f "$global_env_file" ] && [ -f "$local_env_file" ]; then
             declare -A global_vars=()  # reset on every iteration
             declare -A local_vars=()
             parse_env_file "$global_env_file" global_vars
             parse_env_file "$local_env_file" local_vars
             for key in "${!global_vars[@]}"; do
                 if [[ -n "${local_vars[$key]:-}" ]]; then
                     echo "⚠️ Variable override detected for '$key'"
                     echo "   🌐 Global value: ${global_vars[$key]}"
                     echo "   📁 Local value: ${local_vars[$key]}"
                 fi
             done
         fi

         # Build docker compose command arguments
         compose_cmd_args=("-f" "$compose_file")

         # Add the global .env file if it exists
         if [ -f "$global_env_file" ]; then
             compose_cmd_args+=("--env-file" "$global_env_file")
         fi

         # Add the local .env file if it exists
         if [ -f "$local_env_file" ]; then
             compose_cmd_args+=("--env-file" "$local_env_file")
         fi

         # Run docker compose with the accumulated arguments
         compose_cmd_args+=("up" "-d")
         echo "🐳 Executing: docker compose ${compose_cmd_args[*]}"
         if docker compose "${compose_cmd_args[@]}"; then
             echo "✅ Service $service started successfully."
         else
             echo "❌ Failed to start service $service. Check output above."
             # Consider 'exit 1' or other error handling if a service fails to start
         fi

         echo "⏳ Waiting $delay seconds..."
         sleep "$delay"
     done

     echo ""
     echo "✅ All services have been started in order."
     echo ""

     # Stop specific containers at the end of the process
     echo "🛑 Stopping specific containers at the end of the process..."
     for container in "${containers_to_stop[@]}"; do
         # Test the output, not the exit code: 'docker ps -q' exits 0 even with no match
         if [ -n "$(docker ps -q -f name="^${container}$")" ]; then
             docker stop "$container"
             echo "✅ Container stopped: $container"
         else
             echo "ℹ️ Container not running or not found: $container"
         fi
     done

     echo ""
     echo "🏁 Process completed."
  6. I'm getting this error: ".env doesn't exist: /mnt/services/docker/git/homeserver/docker-compose/". Any idea why? Does it replace the env file that is in the same path as the yml, or is it actually a global env? I want both files to be available to the compose project. Basically, if I remember well, I can achieve this with:

     docker compose \
       -f /mnt/services/docker/docker-compose/Monitoring/docker-compose.yml \
       --env-file /mnt/services/docker/docker-compose/.env \
       --env-file /mnt/services/docker/docker-compose/Monitoring/.env \
       up -d

     (the first --env-file is the global one, the second the service-specific one)
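     When the same variable appears in both files, Compose gives precedence to the --env-file passed last on the command line, so the service-specific value wins. A minimal sketch to confirm the merged result (the file names and TZ values are just placeholders):

     # global.env contains TZ=UTC, local.env contains TZ=Europe/Madrid
     docker compose -f docker-compose.yml \
       --env-file ./global.env \
       --env-file ./local.env \
       config    # prints the resolved project; TZ ends up as Europe/Madrid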
  7. Docker Compose official support. Current templates could be just compose files; migration would be easy, and converting templates to compose files could be automated. It makes sense to use Compose since it's a universal standard, unlike the unRAID templates.
  8. Docker released Docker Scout. I think it would be interesting to at least have the scout-cli included in Unraid by default. A step further would be an additional page in Unraid showing a report of the vulnerabilities found; the command line is pretty simple and the output could easily be formatted for a web interface. https://docs.docker.com/scout/ https://github.com/docker/scout-cli https://docs.docker.com/scout/dashboard (this one is Docker Hub only, doesn't apply, just interesting)
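     For reference, these are the two basic scout-cli commands such a report page would wrap (nginx:latest is just a sample image; any local or pulled image works):

     # One-line risk overview of an image
     docker scout quickview nginx:latest

     # Detailed CVE report for the same image
     docker scout cves nginx:latest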
  9. @primeval_god Could you please add the option to use a global env? It would be easier to manage things like common variables, IPs, etc. in a single file.
  10. Any solution to this? https://forums.unraid.net/topic/47266-plugin-ca-fix-common-problems/page/97/#findComment-1552392
  11. I haven't changed the direct setting yet, but these are my stats after 7 days of uptime. The services pool is made of NVMe. The direct I/O traffic still looks too low to justify the ARC sitting at 6 GB after a week, when it usually filled up to 21 GB. Maybe L2ARC has something to do with it?

      --- ARC statistics (RAM cache) ---
        Hits (reads served from ARC): 453976458
        Misses (reads not in ARC): 1801262
        Global ARC hit ratio: 99%
        -----------------------------------------
        Current ARC size: 6.41 GiB
        Configured minimum limit (zfs_arc_min): 953 MiB
        Configured maximum limit (zfs_arc_max): 22.35 GiB

      --- L2ARC statistics (SSD/NVMe cache) ---
        Hits (reads served from L2ARC): 90766
        Misses (reads not in L2ARC): 60691
        L2ARC hit ratio: 59%
        -----------------------------------------
        L2ARC device space currently used: 71.65 GiB
        L2ARC device space available for caching: B

      --- ARC vs. direct read/write statistics per pool ---
        Pool: data
          ARC Read Bytes: 143.31 GiB
          ARC Write Bytes: 52.35 GiB
          Direct Read Bytes: 0 B
          Direct Write Bytes: 0 B
        Pool: services
          ARC Read Bytes: 71.20 GiB
          ARC Write Bytes: 215.34 GiB
          Direct Read Bytes: 2.04 GiB
          Direct Write Bytes: 21 MiB

      The script:

      #!/bin/bash
      # Script to gather key ZFS ARC, L2ARC, direct I/O and compression statistics.
      # Works with multiple pools and datasets.

      # --- Configuration ---
      # ZFS pools to read statistics for from /proc/spl/kstat/zfs/<pool name>/iostats
      POOL_NAMES=("data" "services")
      # Datasets whose compression ratio will be checked
      DATASETS_TO_CHECK_COMPRESSION=(
          "data/personal"
          "services/docker"
          "services/vm"
      )

      # --- Utility functions ---
      format_bytes_smart() {
          local bytes=$1
          local gib_threshold=$((1024 * 1024 * 1024))
          local mib_threshold=$((1024 * 1024))
          local kib_threshold=$((1024))
          if (( bytes < kib_threshold )); then
              echo "${bytes} B"
          elif (( bytes < mib_threshold )); then
              echo "$((bytes / 1024)) KiB"
          elif (( bytes < gib_threshold )); then
              echo "$((bytes / 1024 / 1024)) MiB"
          else
              echo "$bytes" | awk '{printf "%.2f GiB", $1 / 1024 / 1024 / 1024}'
          fi
      }

      # --- Requirements check ---
      if [ ! -f /proc/spl/kstat/zfs/arcstats ]; then
          echo "Error: /proc/spl/kstat/zfs/arcstats not found."
          echo "Make sure the ZFS module is loaded."
          exit 1
      fi

      # --- Read ARC and L2ARC statistics ---
      arcstats_output=$(cat /proc/spl/kstat/zfs/arcstats)
      hits=$(echo "$arcstats_output" | awk '/^hits / {print $NF}')
      misses=$(echo "$arcstats_output" | awk '/^misses / {print $NF}')
      arc_current_size=$(echo "$arcstats_output" | awk '/^size / {print $NF}')
      l2_hits=$(echo "$arcstats_output" | awk '/^l2_hits / {print $NF}')
      l2_misses=$(echo "$arcstats_output" | awk '/^l2_misses / {print $NF}')
      l2_size=$(echo "$arcstats_output" | awk '/^l2_size / {print $NF}')
      l2_free=$(echo "$arcstats_output" | awk '/^l2_free / {print $NF}')

      if [ -z "$l2_size" ]; then
          l2arc_present=false
          l2_hits=0
          l2_misses=0
          l2_size=0
          l2_free=0
      else
          l2arc_present=true
      fi

      arc_min_limit_bytes=$(cat /sys/module/zfs/parameters/zfs_arc_min 2>/dev/null || echo "0")
      arc_max_limit_bytes=$(cat /sys/module/zfs/parameters/zfs_arc_max 2>/dev/null || echo "0")

      # --- Compute hit ratios ---
      total_accesses=$((hits + misses))
      arc_hit_ratio=$(( total_accesses > 0 ? hits * 100 / total_accesses : 0 ))
      l2_total_accesses=$((l2_hits + l2_misses))
      l2arc_hit_ratio=$(( l2_total_accesses > 0 ? l2_hits * 100 / l2_total_accesses : 0 ))

      # --- Print results ---
      echo "--- ZFS ARC/L2ARC and compression statistics ---"
      echo ""
      echo "--- ARC statistics (RAM cache) ---"
      echo "  Hits (reads served from ARC): $hits"
      echo "  Misses (reads not in ARC): $misses"
      echo "  Global ARC hit ratio: ${arc_hit_ratio}%"
      echo "  -----------------------------------------"
      echo "  Current ARC size: $(format_bytes_smart $arc_current_size)"
      echo "  Configured minimum limit (zfs_arc_min): $(format_bytes_smart $arc_min_limit_bytes)"
      echo "  Configured maximum limit (zfs_arc_max): $(format_bytes_smart $arc_max_limit_bytes)"
      echo ""
      echo "--- L2ARC statistics (SSD/NVMe cache) ---"
      if [ "$l2arc_present" = true ]; then
          echo "  Hits (reads served from L2ARC): $l2_hits"
          echo "  Misses (reads not in L2ARC): $l2_misses"
          echo "  L2ARC hit ratio: ${l2arc_hit_ratio}%"
          echo "  -----------------------------------------"
          echo "  L2ARC device space currently used: $(format_bytes_smart $l2_size)"
          echo "  L2ARC device space available for caching: $(format_bytes_smart $l2_free)"
      else
          echo "  No L2ARC configured."
      fi
      echo ""
      echo "--- ARC vs. direct read/write statistics per pool ---"
      for pool in "${POOL_NAMES[@]}"; do
          iostats_file="/proc/spl/kstat/zfs/$pool/iostats"
          if [ ! -f "$iostats_file" ]; then
              echo "  Pool '$pool': $iostats_file not found. Is the pool name valid?"
              continue
          fi
          arc_read_bytes=$(awk '$1 == "arc_read_bytes" {print $3}' "$iostats_file")
          arc_write_bytes=$(awk '$1 == "arc_write_bytes" {print $3}' "$iostats_file")
          direct_read_bytes=$(awk '$1 == "direct_read_bytes" {print $3}' "$iostats_file")
          direct_write_bytes=$(awk '$1 == "direct_write_bytes" {print $3}' "$iostats_file")
          echo "  Pool: $pool"
          echo "    ARC Read Bytes: $(format_bytes_smart $arc_read_bytes)"
          echo "    ARC Write Bytes: $(format_bytes_smart $arc_write_bytes)"
          echo "    Direct Read Bytes: $(format_bytes_smart $direct_read_bytes)"
          echo "    Direct Write Bytes: $(format_bytes_smart $direct_write_bytes)"
          echo ""
      done
      echo "--- Compression statistics ---"
      echo "Note: showing compression for the datasets listed in the script."
      echo ""
      for dataset in "${DATASETS_TO_CHECK_COMPRESSION[@]}"; do
          echo "  Dataset: $dataset"
          zfs get -H -o value compressratio "$dataset" 2>/dev/null | {
              read -r compressratio
              if [ -z "$compressratio" ]; then
                  echo "    Compression ratio: not found or error retrieving it."
              else
                  echo "    Compression ratio: $compressratio"
              fi
          }
      done
      echo ""
      echo "-------------------------------------------------"
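      To use it, save it somewhere persistent, adjust POOL_NAMES and DATASETS_TO_CHECK_COMPRESSION to your own pools and datasets, and run it (the path is just an example):

      chmod +x /boot/scripts/zfs_arc_stats.sh
      /boot/scripts/zfs_arc_stats.sh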
  12. How much used storage do you have in your NVMe pools?
  13. If, with the new version of Unraid that comes with ZFS 2.3.x, you have upgraded your pools and are seeing lower ARC consumption, especially if you have NVMe or other fast drives (SSDs), this might be the setting causing it. Just so you know, so you don't waste more time troubleshooting. https://discourse.practicalzfs.com/t/openzfs-2-3-0-release-direct-i-o-question-also-i-dont-understand-how-distros-package-zfs-apparently/2159 https://openzfs.github.io/openzfs-docs/man/v2.3/7/zfsprops.7.html#direct The question could be: even if it's less performant, could it be interesting to disable it just to avoid wearing the NVMe with reads? Is it better in every NVMe zpool setup, or does it have its use cases? Is it worth enabling with 2 NVMe drives in a ZFS mirror?
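      Per the zfsprops man page linked above, this is controlled by the per-dataset direct property (standard, always, disabled). A quick sketch for checking and reverting it, using my pool name services (adjust to yours):

      # Show the current Direct I/O mode (default is 'standard')
      zfs get direct services

      # Send reads/writes back through the ARC by disabling Direct I/O
      zfs set direct=disabled services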
  14. But you still have to install an agent on the host if you want to monitor unRAID. If I remember well, the Docker instructions are for the Wazuh platform (web UI, etc.).