KyleK29
  1. I had to tinker with mine to get it to work as well. My setup:
     - Docker directory is set to: /mnt/zfspool/.docker_system/docker_directory/
     - ZFS Master exclusion is set to: .docker_system
  2. Alright, got it working again with a forked image. If anyone else wants to try it (I do recommend spinning up a separate copy, just for testing purposes), all you have to do is swap the repository of the GUS package / Docker container to this (right-click the Docker container's icon --> Edit --> find "Repository"):

     kylek29/grafana-unraid-stack-2023:latest

     That is a new forked image which adds additional layers on top of the grafana-unraid-stack image by testdasi. Changes made:
     - Added a new install script that installs the new apt-key certs from Grafana/InfluxDB, reinstalls the latest InfluxDB/Telegraf, and performs cleanup.
     - Tweaked the healthcheck command so that it fires every 30 seconds and waits 30 seconds for the machine to start (I noticed it often failed with the original image's 0s parameters).
     - Fixed a bug where the healthcheck status would fail and not detect the Grafana Server pid. It now reports the proper statuses of starting, healthy, and unhealthy (see the quick check after this post).
     - Fixed an issue where Tini (the init handler) was failing.

     To look at the code and Dockerfile, see here: https://github.com/kylek29/misc_code/tree/main/unRAID/Grafana-Unraid-Stack -- for general usage, see the original post. This is just a fix layer.
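     A quick way to confirm the new healthcheck behaviour is in effect after swapping the repository, run from the unRAID terminal -- the container name here is just an example, use whatever yours is called in the Docker tab:

        # Pull the forked image (the unRAID UI does this for you when you apply the edit).
        docker pull kylek29/grafana-unraid-stack-2023:latest
        # Current healthcheck status: should move from "starting" to "healthy".
        docker inspect --format '{{.State.Health.Status}}' Grafana-Unraid-Stack
        # Full healthcheck history if you want to watch the 30s interval checks.
        docker inspect --format '{{json .State.Health}}' Grafana-Unraid-Stack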
  3. For those installing this through the unRAID Apps tab, you may encounter blank dashboards and the Docker logs saying something about the InfluxDB executable not being installed. As another user mentioned, the problem is in the InfluxDB setup.

     How I fixed it:
     - Launch a Docker terminal (in unRAID, click the docker's icon -> Console).
     - Verify the issue: run /static-ubuntu/grafana-unraid-stack/healthcheck.sh -- you should see "Executable /usr/bin/influxd does not exist!" If you do, proceed.
     - Go to the /data directory: cd /data
     - Download the corrected install script: curl -sOL https://raw.githubusercontent.com/kylek29/misc_code/main/unRAID/Grafana-Unraid-Stack/fix_influxdb_2023.sh
     - Give it execution permissions: chmod +x fix_influxdb_2023.sh
     - Execute the script: ./fix_influxdb_2023.sh
     - Verify the error is gone: /static-ubuntu/grafana-unraid-stack/healthcheck.sh
     - Now go to the admin dashboard and verify it's receiving data. If you go to your datasources section -> InfluxDB -> the "Test" button at the bottom, it should now say the datasource is working.

     The same commands are collected into one pasteable block after this post.

     *EDIT* That didn't last long. I decided to do a completely fresh install to test the above instructions one more time (complete with a purged image, etc.), and this time it didn't work -- data doesn't come through. Using the old image tag mentioned earlier does seem to work: testdasi/grafana-unraid-stack:s230122. If I figure out the missing step from when I got it completely working with the :latest image, I'll update this post.
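     The same steps as above in one block for the container console (nothing new, just the commands from the list):

        cd /data
        curl -sOL https://raw.githubusercontent.com/kylek29/misc_code/main/unRAID/Grafana-Unraid-Stack/fix_influxdb_2023.sh
        chmod +x fix_influxdb_2023.sh
        ./fix_influxdb_2023.sh
        # Re-run the healthcheck; the "Executable /usr/bin/influxd does not exist!" error should be gone.
        /static-ubuntu/grafana-unraid-stack/healthcheck.sh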
  4. That'd be the ZFS ARC cache. You can adjust the usage, but as another member said, it's probably 74% of the available ZFS (RAM) cache. ZFS has two caching tiers:
     - ARC: uses up to X% of available RAM (I think unRAID has this set to 1/8th of the total system RAM; you can adjust it).
     - L2ARC: can be enabled to act as a secondary (larger, but slower) cache, usually placed on an NVMe/SSD drive -- as items fall out of the ARC cache, they can end up in L2ARC if it's enabled.
     - These caches are considered volatile. Not a big issue with read workloads, but worth noting for write-heavy workloads: ZFS groups transactions into a memory buffer and then writes them to disk, so you want some form of battery backup. By default I think the write buffer flushes every ~5 seconds. Again, not a concern if you're doing low writes and heavy reads (like a media server).

     So as you read data, it gets loaded into the ARC cache and stays there until it's evicted; if you have L2ARC enabled, it would then move to that. This is why ZFS is really good at read access / heavy read workloads -- after the first I/O hit, the data stays in hot memory.

     JorgeB gave a great answer. Once you're past the pool/vdev config, you have datasets, which appear as just normal folders but are collections within the larger pool. You can also nest datasets within datasets if you want to get granular with your snapshots. For example, on my system I have 4x 4TB drives configured into one ZFS pool with two mirror vdevs of two drives each (~7TB of usable space). A rough sketch of that layout in plain commands follows this post.

     A few side notes:
     - LZ4 compression is really good, and there have been some tests showing it can actually be faster (I/O-wise) when enabled.
     - If you're worried about redundancy, mirror vdevs are the way to go; they can be a tad more friendly. See: https://jrs-s.net/2015/02/06/zfs-you-should-use-mirror-vdevs-not-raidz/
     - To the end user, this just looks like a folder (zfspool) of folders. You can get an idea of the flexibility in the screenshots below, using the Sanoid plugin to schedule snapshots of sub-datasets for the individual dockers/VMs that are running. The interface is from the ZFS Master plugin.

     I imagine a lot of these plugin features will eventually make their way into native unRAID at some point, since ZFS already provides this stuff -- it's just the interface.
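     A minimal sketch of that layout in plain zpool/zfs commands -- device names and dataset names are placeholders, not my exact setup:

        # One pool made of two mirror vdevs (4x 4TB drives).
        zpool create zfspool mirror /dev/sdb /dev/sdc mirror /dev/sdd /dev/sde
        # LZ4 compression on the pool; child datasets inherit it.
        zfs set compression=lz4 zfspool
        # Nested datasets so Sanoid (or similar) can snapshot each docker/VM on its own schedule.
        zfs create zfspool/appdata
        zfs create zfspool/appdata/grafana
        # Check how much RAM the ARC is allowed to use (bytes; 0 means the module default).
        cat /sys/module/zfs/parameters/zfs_arc_max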
  5. I have been testing RC5 since it released and haven't noticed any issues so far (I did have a few hiccups with RC4, but I believe those were addressed, or at least I haven't hit them again). It's been rock-solid when combined with ZFS Master (for the dataset interface) and Sanoid (for snapshot scheduling).
  6. Separate message to keep the quote chain clean. Running 6.12.0-rc4.1, reconfigured for ZFS. I configured everything on Saturday, and today I noticed all of my shares had vanished; the datasets are still there.

     For my setup, I have a dummy USB drive attached to the array, since I couldn't start it without one (I just assume that's a work-in-progress workflow), and I use the ZFS Master plugin to manage the ZFS pool side of things. I imagine all of this is heavily work-in-progress and subject to change as things get built out.

     It appears the flash drive I attached to the array died (for the record, it's very old, and I had moved all data off of it so it wouldn't see writes), which kept the array up but made the shares disappear. I noticed this when I rebooted and couldn't restart the array or mount that particular flash drive. So that's something I hope changes as things evolve -- the ability to start the server ("array") without a drive attached to the array side of things, since I'm only using ZFS pools.

     Attaching diagnostics anyway, in case it helps: jager01-diagnostics-20230501-1008.zip
  7. On mine, I noticed that at first it showed "compression=on" in the tooltip/info flyout you get when you hover over the dataset (in ZFS Master), but after a little while it populates with the actual compression type. Screenshot:
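     If you want to confirm it outside the plugin, you can query the property directly (the dataset name here is just an example):

        # Shows the compression setting and the achieved ratio as ZFS itself reports them.
        zfs get compression,compressratio zfspool/appdata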
  8. I'm running 6.12.0-rc4.1 and I don't see any native UI options for snapshot control (but I could be blind), so I'd say if it works, keep using it.
  9. Just came here to say this. I hope they consider adding it in unRAID 7 (or whatever the next major version is). I've always used groups to configure separation of concerns for users, even in a home environment. The current vanilla way of doing it is way too lax for my liking, and having to abstract to a VM just to handle file sharing on the native OS seems like an unnecessary hurdle.