nearcatch

  1. ZFS Master was the odd one out so that's the one I chose to convert to match. Everything else on the Unraid Main page is already SI and it would've been more effort to identify the units in all the different parts of the page. I use Windows primarily as well, so IEC is what I see everywhere else, but I'm just used to seeing SI in Unraid.
  2. Sorry to hear about the fires! Thanks for following up. I had the proper attributes set, they just would not run for whatever reason. I finally just moved them into userscripts to run when the array was started or stopped using the userscripts plugin, and now the functionality works again, even if the emhttp events don't.
  3. I had the below scripts in 6.12 to spin up/down disks from a command line. They no longer work in 7.0.0. Does anyone know what changes need to be made to get them working, or if there's an alternative in 7.0?

     spinup() {
       . /usr/local/emhttp/state/var.ini
       curl --unix-socket /var/run/emhttpd.socket \
         --data-urlencode cmdSpinupAll=apply \
         --data-urlencode startState=$mdState \
         --data-urlencode csrf_token=$csrf_token \
         http://127.0.0.1/update
     }

     spindown() {
       . /usr/local/emhttp/state/var.ini
       curl --unix-socket /var/run/emhttpd.socket \
         --data-urlencode cmdSpindownAll=apply \
         --data-urlencode startState=$mdState \
         --data-urlencode csrf_token=$csrf_token \
         http://127.0.0.1/update
     }
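     Before digging further, it may be worth confirming that the pieces these functions rely on still exist on 7.0; this is just a quick sanity check using the same paths the functions reference:

     # does emhttpd still expose its unix socket at this path?
     ls -l /var/run/emhttpd.socket
     # does var.ini still define the variables the functions source?
     grep -E '^(mdState|csrf_token)=' /usr/local/emhttp/state/var.ini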
  4. Hello all. I started using ZFS in Unraid and ZFS Master, and the different size units in the plugin compared to Unraid annoyed me, although I understand the reasons why. I wrote a userscript to watch the ZFS Master table on the Main dashboard page and convert all size units from IEC to SI. This fixes the 1st item in the list Iker gave for why the sizes differ in the plugin. The 2nd reason is not fixable via this script because ZFS and Unraid have different definitions of pool size.

     Tested with ZFS Master 2024.12.09.104 on Unraid 7.0.0, in these browsers:
       • Firefox 135.0b9 with Violentmonkey 2.29.0
       • Chrome 132.0.6834.160 with Tampermonkey Beta 5.4.6224
     It doesn't use anything fancy, so I hope it will work with most modern browsers and script managers.

     Source: https://github.com/jathek/firefox-tweaks/blob/main/userscripts/unraid-zfs-master-si.user.js
     Install: https://github.com/jathek/firefox-tweaks/raw/main/userscripts/unraid-zfs-master-si.user.js

     IMPORTANT INSTALL NOTE: You will have to add a custom match rule in your script manager if you use something other than "http://tower/Main*" for your Unraid Dashboard.
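     The conversion itself is just a rescale between powers of two and powers of ten. As a rough shell illustration of the same mapping the userscript applies in the browser (assuming GNU coreutils' numfmt is available; 1.5 TiB is an arbitrary example value):

     # exact arithmetic: 1.5 TiB = 1.5 * 1024^4 bytes ≈ 1.65 TB
     awk 'BEGIN { printf "%.2f TB\n", 1.5 * 1024^4 / 1e12 }'
     # numfmt can do the same suffix-to-suffix translation
     numfmt --from=iec-i --to=si 1.5Ti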
  5. Has anyone else had only certain events stop triggering? The "started" event, which calls key_delete, works fine to remove the key, but the "starting" and "stopped" events never trigger to fetch the keys. I even tried a simple script that just did 'echo "foo" > ~/bar.txt', and it never ran. I know the key_fetch script works because I can run it manually from the command line.
  6. THANK YOU! Yes, I did! (In fact my first post ever on this forum was thanking someone for the script). But it's been so long since the key_fetch script worked, I completely forgot that the key_delete might still be running. Moved the key_delete.sh script out of emhttp, erased the pools, and started the drives, and they mounted correctly.
  7. Updated to 7.0.0 and tried to switch my btrfs pools to be zfs-encrypted, but ran into issues. I did the following:
       • erased the pool
       • switched the filesystem to zfs-encrypted
       • started the array with my luks key
       • checked the box to format unreadable disks, to format the new zfs-encrypted pools, because of the "Unmountable disk" notification
     The format *seems* successful, but it's stuck on an orange padlock. I also tried rebooting several times and switching the pool to non-encrypted btrfs and then back to zfs-encrypted, but nothing worked.

     Here's a bit of the syslog from when I clicked "Format" after first switching the drives to zfs-encrypted:

     Jan 28 16:16:50 DeeperVisor emhttpd: creating volume: cache (zfs - encrypted)
     Jan 28 16:16:50 DeeperVisor emhttpd: shcmd (793): /sbin/wipefs -af --lock /dev/sdb1
     Jan 28 16:16:50 DeeperVisor root: wipefs: error: /dev/sdb1: probing initialization failed: No such file or directory
     Jan 28 16:16:50 DeeperVisor emhttpd: shcmd (793): exit status: 1
     Jan 28 16:16:50 DeeperVisor emhttpd: writing MBR on disk (sdb) with partition 1 offset 2048, erased: 0
     Jan 28 16:16:50 DeeperVisor kernel: sdb: sdb1
     Jan 28 16:16:51 DeeperVisor emhttpd: re-reading (sdb) partition table
     Jan 28 16:16:51 DeeperVisor emhttpd: shcmd (794): udevadm settle
     Jan 28 16:16:51 DeeperVisor kernel: sdb: sdb1
     Jan 28 16:16:53 DeeperVisor emhttpd: shcmd (795): /sbin/blkdiscard /dev/sdb1
     Jan 28 16:16:53 DeeperVisor emhttpd: shcmd (796): /sbin/wipefs -af --lock /dev/sdc1
     Jan 28 16:16:53 DeeperVisor root: wipefs: error: /dev/sdc1: probing initialization failed: No such file or directory
     Jan 28 16:16:53 DeeperVisor emhttpd: shcmd (796): exit status: 1
     Jan 28 16:16:53 DeeperVisor emhttpd: writing MBR on disk (sdc) with partition 1 offset 2048, erased: 0
     Jan 28 16:16:53 DeeperVisor kernel: sdc: sdc1
     Jan 28 16:16:54 DeeperVisor emhttpd: re-reading (sdc) partition table
     Jan 28 16:16:54 DeeperVisor emhttpd: shcmd (797): udevadm settle
     Jan 28 16:16:54 DeeperVisor kernel: sdc: sdc1
     Jan 28 16:16:57 DeeperVisor emhttpd: shcmd (798): /sbin/blkdiscard /dev/sdc1
     Jan 28 16:16:57 DeeperVisor emhttpd: shcmd (799): /usr/sbin/zpool create -f -o ashift=12 -O dnodesize=auto -O acltype=posixacl -O xattr=sa -O utf8only=on -m /mnt/cache cache mirror /dev/sdb1 /dev/sdc1
     Jan 28 16:16:57 DeeperVisor emhttpd: update_pool_cfg: 30 cache 0
     Jan 28 16:16:57 DeeperVisor emhttpd: shcmd (800): /usr/sbin/zpool export -f cache
     Jan 28 16:16:57 DeeperVisor emhttpd: mounting /mnt/cache
     Jan 28 16:16:57 DeeperVisor emhttpd: shcmd (801): mkdir -m 0666 -p /mnt/cache
     Jan 28 16:16:57 DeeperVisor emhttpd: shcmd (802): /usr/sbin/zpool import -f -m -N -o autoexpand=on -d /dev/sdb1 -d /dev/sdc1 6554892298978727352 cache
     Jan 28 16:16:57 DeeperVisor emhttpd: cache: zfs verify devices
     Jan 28 16:16:57 DeeperVisor emhttpd: /usr/sbin/zpool status -P cache 2>&1

     The `root: wipefs: error: /dev/sdb1: probing initialization failed: No such file or directory` line appears once for each drive in the pool. This line also appeared when I tried to do normal btrfs without encryption.

     deepervisor-diagnostics-20250128-1721.zip
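     For anyone hitting the same padlock symptom, a couple of standard OpenZFS commands can show whether the freshly formatted pool actually has native encryption enabled and whether its key is loaded (the pool name "cache" is taken from the log above):

     # pool health and member devices
     zpool status -P cache
     # encryption settings and key state of the top-level dataset
     zfs get encryption,keystatus,keylocation cache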
  8. Local icons definitely work with absolute paths; that's how all my icons are set for unRAID in my compose file.
  9. Restrict unbalanced LAN Access

     I haven't liked that unbalanced is available to anyone on my LAN once it's started, but I have figured out a solution for myself. Sharing the steps here for anyone else who is curious and uses a reverse proxy for other things, like I was.

     How-to

     1. Set up your reverse proxy to have an authenticated subdomain for unbalanced. I use Traefik and Authelia. You will have to do something specific to your setup, but this is what I added to my Traefik config file:

        http:
          routers:
            unbalanced-rtr:
              rule: "Host(`unbalanced.unraid.lan`)"
              entryPoints:
                - websecure
              middlewares:
                - chain-authelia-lan
                - error-pages@docker
              service: unbalanced-svc
          services:
            unbalanced-svc:
              loadBalancer:
                servers:
                  - url: "http://192.168.1.10:7090" # substitute your unraid server's IP address and unbalanced port

     2. Run these iptables rules in a console session so that any request to unbalanced's port gets rejected unless it is from Unraid's IP or the IP range of your reverse proxy's network. Substitute the correct IP addresses and ports for your network. You can also add these to your go file to have them activated every time Unraid is rebooted. EDIT 2024-06-07: I had these in my go file and they didn't apply after a reboot. I've since moved them into a userscript to run on first array start (see the sketch below).

        iptables -A INPUT -p tcp --dport 7090 -s 10.10.1.0/24 -j ACCEPT # substitute the subnet your reverse proxy uses; you can also limit this to the exact IP of your reverse proxy docker container if you want
        iptables -A INPUT -p tcp --dport 7090 -s 192.168.1.10 -j ACCEPT
        iptables -A INPUT -p tcp --dport 7090 -j REJECT --reject-with tcp-reset

     Result

     After these two steps, unbalanced cannot be accessed by the ip:port of my Unraid server. It can only be accessed using https://unbalanced.unraid.lan, and because I add Authelia using a Traefik middleware, it requires authentication instead of being freely accessible.

     No reverse proxy?

     If you don't use a reverse proxy, you can still do step 2 and edit the 1st iptables rule to reject any request to unbalanced's port except from one specific computer on your LAN, which should still help limit access.

     EDIT: Also for those who don't know, you can remove the iptables rules by running them again with "-D" instead of "-A". Restarting your Unraid server will also reset your iptables if you haven't modified your go file with these rules.
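     For reference, a minimal sketch of what that userscript can look like, assuming the example port, subnet, and IP from the steps above (the -C check just avoids adding duplicate rules if the script ever runs more than once):

     #!/bin/bash
     # re-apply the unbalanced firewall rules on array start
     add_rule() {
       iptables -C "$@" 2>/dev/null || iptables -A "$@"
     }
     add_rule INPUT -p tcp --dport 7090 -s 10.10.1.0/24 -j ACCEPT   # reverse proxy subnet (example)
     add_rule INPUT -p tcp --dport 7090 -s 192.168.1.10 -j ACCEPT   # Unraid server IP (example)
     add_rule INPUT -p tcp --dport 7090 -j REJECT --reject-with tcp-reset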
  10. I was able to fix this on my unRAID server. Iirc the fix was to not use external networks, and instead let docker-compose create the networks my stack needed. My networks block:

     ########################### NETWORKS
     networks:
       default:
         driver: bridge
       reverse_proxy:
         external: false
         name: reverse_proxy
         ipam:
           config:
             - subnet: ${REVERSE_PROXY_SUBNET}
               gateway: ${REVERSE_PROXY_GATEWAY}
       socket_proxy:
         external: false
         name: socket_proxy
         ipam:
           config:
             - subnet: ${SOCKET_PROXY_SUBNET}
               gateway: ${SOCKET_PROXY_GATEWAY}
       lan_ipvlan:
         external: false
         name: lan_ipvlan
         driver: ipvlan
         driver_opts:
           parent: br0
         ipam:
           config:
             - subnet: ${LAN_IPVLAN_SUBNET}
               gateway: ${LAN_IPVLAN_GATEWAY}
               ip_range: ${LAN_IPVLAN_IP_RANGE}
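     If it helps, a quick way to confirm that compose created and labelled those networks after bringing the stack up (network names taken from the block above):

     # the networks should exist and carry com.docker.compose.* labels
     docker network ls --filter name=reverse_proxy --filter name=socket_proxy --filter name=lan_ipvlan
     docker network inspect reverse_proxy --format '{{json .Labels}}'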
  11. For anyone interested, the below is the minimum required to run this container using docker-compose, based off the example docker command from the repo. You can modify it easily to use traefik or another reverse proxy instead of accessing it directly by port. Change $CONTDIR to wherever you want to store the logs.

     preclear:
       container_name: preclear
       image: ghcr.io/binhex/arch-preclear:latest
       restart: unless-stopped
       privileged: true
       ports:
         - 5900:5900
         - 6080:6080
       environment:
         - WEBPAGE_TITLE=Preclear
         - VNC_PASSWORD=mypassword
         - ENABLE_STARTUP_SCRIPTS=yes
         - UMASK=000
         - PUID=0
         - PGID=0
       volumes:
         - $CONTDIR/preclear/config:/config
         - /boot/config/disk.cfg:/unraid/config/disk.cfg:ro
         - /boot/config/super.dat:/unraid/config/super.dat:ro
         - /var/local/emhttp/disks.ini:/unraid/emhttp/disks.ini:ro
         - /usr/local/sbin/mdcmd:/unraid/mdcmd:ro
         - /dev/disk/by-id:/unraid/disk/by-id:ro
         - /boot/config/plugins/dynamix/dynamix.cfg:/unraid/config/plugins/dynamix/dynamix.cfg:ro
         - /etc/ssmtp/ssmtp.conf:/unraid/ssmtp/ssmtp.conf:ro
         - /etc/localtime:/etc/localtime:ro
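     Assuming this block sits under the usual services: key of a compose file, bringing up just this service looks roughly like the following (the CONTDIR value is only an example):

     # point CONTDIR at wherever you keep appdata, then start only the preclear service
     export CONTDIR=/mnt/user/appdata
     docker compose up -d preclear
     # per the port mappings above, the browser-based UI is on host port 6080 (5900 is raw VNC)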
  12. Ah sorry, I misunderstood. I thought the other person was asking for the cli analogue, not the plugin analogue.
  13. I adapted my docker compose update script into a script that installs docker scout on unraid. Save the script somewhere, source it in your profile.sh with `source /YOURPATHTOSCRIPT/dsupdate.source`, and then run it with `dsupdate` or `dsupdate check`. This works for me on a linux x86 system. If your system is different, you may need to edit the curl download line to pull the proper filename from the release page (the commented-out line shows the generic pattern).

     #!/bin/bash

     alias notify='/usr/local/emhttp/webGui/scripts/notify'

     dsupdate() {
       SCOUT_LOCAL=$(docker scout version 2>/dev/null | grep version | cut -d " " -f2)
       SCOUT_LOCAL=${SCOUT_LOCAL:-"none"}
       echo Current: ${SCOUT_LOCAL}
       SCOUT_REPO=$(curl -s https://api.github.com/repos/docker/scout-cli/releases/latest | grep 'tag_name' | cut -d '"' -f4)
       if [ ${SCOUT_LOCAL} != ${SCOUT_REPO} ]; then
         dsdownload() {
           echo Repo: ${SCOUT_REPO}
           # curl -L "https://github.com/docker/scout-cli/releases/download/${SCOUT_REPO}/docker-scout_${SCOUT_REPO/v/}_$(uname -s)_$(uname -m).tar.gz" --create-dirs -o /tmp/docker-scout/docker-scout.tar.gz
           curl -L "https://github.com/docker/scout-cli/releases/download/${SCOUT_REPO}/docker-scout_${SCOUT_REPO/v/}_linux_amd64.tar.gz" --create-dirs -o /tmp/docker-scout/docker-scout.tar.gz
           tar -xf "${_}" -C /tmp/docker-scout/ --no-same-owner
           mkdir -p /usr/local/lib/docker/scout
           mv -T /tmp/docker-scout/docker-scout /usr/local/lib/docker/scout/docker-scout && chmod +x "${_}"
           rm -r /tmp/docker-scout
           cat "$HOME/.docker/config.json" | jq '.cliPluginsExtraDirs[]' 2>/dev/null | grep -qs /usr/local/lib/docker/scout 2>/dev/null
           if [ $? -eq 1 ]; then
             echo "Scout entry not found in .docker/config.json. Creating a backup and adding the scout entry."
             cp -vnT "$HOME/.docker/config.json" "$HOME/.docker/config.json.bak"
             cat "$HOME/.docker/config.json" | jq '.cliPluginsExtraDirs[.cliPluginsExtraDirs| length] |= . + "/usr/local/lib/docker/scout"' >"$HOME/.docker/config.json.tmp"
             mv -vT "$HOME/.docker/config.json.tmp" "$HOME/.docker/config.json"
           fi
           echo "Installed: $(docker scout version | grep version | cut -d " " -f2)"
           notify -e "docker-scout updater" -s "Update Complete" -d "New version: $(docker scout version | grep version | cut -d " " -f2)<br>Previous version: ${SCOUT_LOCAL}" -i "normal"
         }
         if [ -n "${1}" ]; then
           if [ "${1}" = "check" ]; then
             echo "Update available: ${SCOUT_REPO}"
             notify -e "docker-scout updater" -s "Update Available" -d "Repo version: ${SCOUT_REPO}<br>Local version: ${SCOUT_LOCAL}" -i "normal"
           else
             dsdownload
           fi
         else
           dsdownload
         fi
       else
         echo Repo: ${SCOUT_REPO}
         echo "Versions match, no update needed"
       fi
       unset SCOUT_LOCAL
       unset SCOUT_REPO
     }
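     A hedged usage sketch, with the path below standing in for the /YOURPATHTOSCRIPT placeholder above:

     # in your profile.sh (or /root/.bash_profile):
     source /boot/config/scripts/dsupdate.source   # example location only

     dsupdate check   # report (and send an Unraid notification) if a newer release exists
     dsupdate         # download and install the latest docker scout release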
  14. Wouldn't the equivalent be `docker compose pull SERVICENAME`? I always get extraction progress when pulling via docker compose.
  15. nearcatch changed their profile photo
  16. @jbrodriguezIf you take PRs, I sent one on github that losslessly compresses the png images.