Everything posted by Renegade605

  1. Typically, it works fine to delete datasets if you unmount them first (mountpoint=none). You can still back up off-site if you're using snapshots, and if your off-site server is also running zfs, you can back up with the snapshots included. Unless you're so low on space that you can't, snapshots are too good a tool not to use for all important data. Using datasets for almost everything is also good zfs practice, especially if you want to play with tunables later. When the docker storage method is directory and the volume is zfs, this is expected behaviour and nothing is wrong. Just hide them from the plugin to avoid clutter.
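      For example, the unmount-then-delete flow is just two commands (the pool/dataset names below are placeholders):

        # Detach the dataset from the directory tree, then remove it.
        zfs set mountpoint=none cache/old-share
        zfs destroy cache/old-share   # refuses to run if snapshots exist, unless -r is added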
  2. I misunderstood somewhat. Regardless, mine still show undefined for mountpoint, on top of the other property issues.
  3. @Iker FYI, I'm on the latest version but nothing changed about the zvol properties in the GUI.
  4. Reading the help file that Iker linked will be a good move. But, in a nutshell, snaps use ~0 space to create, and only grow when you edit or delete data that is snapshotted. If you use them on a dataset whose contents change constantly (like an NVR), they will very quickly balloon out of control. If you want to keep more NVR footage, configure that in Frigate and let it handle retention its own way. Reserve snapshots for data that doesn't change a lot and/or that would be extremely important to roll back with one click. For example, I snapshot most of my appdata so that if I or an update borks everything, I can downgrade the container, roll its appdata folder back to how it was before, and have everything back the way it was in a few minutes. My appdata has snaps every hour for 3 days and every 4 hours for 7 days after that, and uses 13.5G for current data and 40G for snaps.
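      To see how much of a dataset's footprint is live data versus snapshots, something like this works (the dataset name is a placeholder):

        # usedbydataset = current data; usedbysnapshots = space pinned by snapshots.
        zfs list -o name,used,usedbydataset,usedbysnapshots cache/appdata
        # List the individual snapshots and their sizes.
        zfs list -t snapshot -d 1 -o name,used,creation cache/appdata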
  5. This is pretty far outside of the scope of what the mover (and the cache itself) is intended for. If there is specific data that you want on the cache at all times, I suggest making a cache-only share for that data and keeping it separate from the other files that should be moved.
  6. @Iker Just an additional note, not sure if you're aware, but reporting of zvol properties is a bit wonky:
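      For anyone wanting to compare against the GUI, the raw values can be read straight from the CLI (the zvol path below is just a placeholder):

        # One zvol's properties; mountpoint is expected to read "-" for volumes.
        zfs get volsize,volblocksize,compression,mountpoint cache/domains/vdisk1
        # Or all zvols at once.
        zfs list -t volume -o name,volsize,volblocksize,compression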
  7. Preferences probably depend a lot on how people are using their snapshots. If you only snapshot manually, you might want to see them all. I have automatic snapshots and, as you guessed, many of my datasets are carrying over 100 snapshots at any given time; I definitely don't want (or need) to see all of those all the time, or even in the context menu. The context menu isn't a bad idea, maybe for just the last one or two? That should cover being able to see one you've taken manually without cluttering. In fact, for settings I preferred a pop-up over the new context menu. It's fine for changing one thing, but if you want to change a bunch of settings at once it's a pain, so now I just do that from the terminal. I've also noticed that a lot of settings aren't there, or not all the options are available. I assume you tried to pick the most commonly used to avoid clutter, which makes total sense. But, for example, Quota is there but not Reservation, and I pretty much never use one without the other. Recordsize only offers a few options, but any power of 2 from 512 to 1M is valid.
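      For what it's worth, pairing them is quick from the terminal anyway (dataset names are placeholders):

        zfs set quota=50G cache/appdata         # cap how much the dataset may consume
        zfs set reservation=50G cache/appdata   # guarantee it that much space up front
        zfs set recordsize=16K cache/databases  # any power of two from 512 to 1M is accepted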
  8. O.o ... Datasets have settings... Just because I want the data moved to the array doesn't mean I want my settings erased every time that happens? I'd want to snapshot the empty dataset to prevent the dataset from being destroyed. I thought that reasoning was clear. It's empty in the snapshot; it won't be empty forever.
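      For anyone following along, the workaround is one snapshot per empty dataset (names below are placeholders). A plain, non-recursive zfs destroy refuses to remove a dataset that has snapshots, so the mover's cleanup step fails harmlessly instead of wiping the dataset and its settings:

        # Pin the empty dataset so a bare `zfs destroy` will refuse to remove it.
        zfs snapshot cache/my-share@keep-empty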
  9. Cool, I'll let them mark this as solved in 6.13 then.
  10. I agree, and I shall. But it's still Unraid's job to correct for this in the future if it's going to be important for ZFS to work properly. Ideally, Unraid would correct for it even when it's the user's fault, but definitely when it isn't.
  11. It was created with the GUI. Cache disks were manually added, though. They definitely were in the correct order back when I did that, because that was required for the cache disk part. I notice now that the order they are listed in zpool status has since changed: two of the disks in the raidz1 vdev and two of the cache disks have swapped positions.
  12. Today I needed to replace a disk in a zfs pool. I could not have the new and old disks in the system at the same time, and this server does not properly hot swap disks, so I:
        1. Stopped the array
        2. Unassigned the old disk
        3. Shut down the server
        4. Physically swapped the disks
        5. Restarted the server
        6. Assigned the new disk to the pool
        7. Started the array
      The zpool did not automatically begin resilvering; I manually ran a `zpool replace` command. edi-diagnostics-20240306-2114.zip
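      For the record, the manual replacement was along these lines (pool and device paths here are placeholders, not the actual ones from the diagnostics):

        # Rebuild onto the new device in place of the missing one.
        zpool replace tank /dev/disk/by-id/ata-OLD_DISK /dev/disk/by-id/ata-NEW_DISK
        # Watch resilver progress.
        zpool status -v tank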
  13. Also, this has been around as long as I can remember, but it doesn't affect anything so I never mentioned it: Quotes in the Tray ID cause some weirdness. My trays are labeled '3.5" Bays' and '2.5" bays', but in the dropdown menus on tray allocations they appear as 'Bays 3.5"' and 'Bays 2.5"'
  14. Maybe something to do with the last update, but the configuration disappeared from my plugin. Based on this, looks like yesterday at 12:50ish. Strange timing, as I run plugin updates nightly at 23:00, so not sure why that would have happened at that time. But maybe something to be aware of. Restoring to the 20240303-124703 plugin brought everything back.
  15. Today I accidentally removed a drive from its bay while my server was running (turns out that bay's power light doesn't work, oops). Wouldn't have been a big deal, except when I reinserted it, it was detected as a separate device (/dev/sdq instead of /dev/sdj). I went to stop the array to fix this, but the stop array button in the GUI is broken now. This is logged when I click "proceed" to stop the array:
      Mar 4 16:33:51 Global-Dynamics nginx: 2024/03/04 16:33:51 [error] 24668#24668: *4649493 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.xxx.xxx, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "192.168.xxx.xxx:5001", referrer: "http://192.168.xxx.xxx:5001/Main"
      I had to reboot the server instead. After the reboot I got a notification about an unclean shutdown, which normally does not happen, and the drive was still marked as offline (contents emulated) even though it was reconnected (its temperature was available, which it wasn't before). The stop array button did work post-reboot, and I got the drive recognized again with the procedure from the docs, but of course that required rebuilding the drive when I would have preferred to just run a parity check instead.
      Pre-reboot: global-dynamics-diagnostics-20240304-1634.zip
      Post-reboot + fixing disk assignment: global-dynamics-diagnostics-20240304-1701.zip
  16. Both have issues. zpool commands use the raw capacity of the pool (including parity drives, etc.) and ignore the properties of the underlying filesystem; df is too aware of the filesystem and ignores everything in the datasets. After a little playing around... how about zfs get instead? 318 / (318 + 580) = 35.41%, which is more than close enough to 342GB / 965GB = 35.44% (looks like just imprecision with GB vs GiB). Get precise with the -p flag, plus some bash magic (there was definitely an easier way to do this if it doesn't have to be a one-line command lol):
      expr `zfs get -Hp -o value used cache` \* 100 / `zfs get -Hp -o value used,available cache | awk '{s+=$1} END {print s}'`
      Tested again while letting the pool fill up:
      I just realized this was for a "Move All", so that's got to be why. Grrr Unraid.
      Awesome! Thanks a lot. For now, the snapshot of datasets while empty is working, other than cluttering up the logs with errors.
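      If it doesn't have to be expr, the same ratio can be computed entirely in awk (a sketch, assuming the pool is still named cache):

        # used / (used + available), from zfs's own accounting, as a whole-number percentage.
        zfs get -Hp -o property,value used,available cache \
          | awk '$1 == "used" {u = $2} $1 == "available" {a = $2} END {printf "%d\n", u * 100 / (u + a)}'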
  17. I'm really not sure either way. It seemed like nothing bad happened until I mounted unassigned disks that were formerly array disks. Since all the data was still on the actual array disks, I thought maybe the mounting masked their contents temporarily. I've seen similar behaviour when mounting a ZFS dataset to the same location as an existing directory (although in that case, unmounting the dataset always reveals the contents, while it didn't in this case). If you don't think it could be related, no worries. If you do, I'm willing to help troubleshoot. Either way, just thought I'd mention it.
  18. Hey @dlandon, I made a bug report for this but it occurs to me it might be an unassigned devices plugin issue. I was a bit panicked as I wrote the bug report and troubleshot in real time, but the fact that it was the mounting step that seems to have caused the problem makes me think so, now that I have a clear head. All the details here:
  19. You're correct! That's not super intuitive, perhaps it should show a message about that when this is the case. But, I had also expected it to work like Scatter, and just grab as much of those folders as possible to move even if it couldn't do all of them. I'm trying to balance out the free space on my disks, and I assumed Gather would take whatever it could from disks 2 - 8 to move to disk 1 until they all have 1.5 TB free space (I set that as the limit in the settings), instead of picking and choosing which parts to move.
  20. Fellow ZFS enthusiasts, I made this to notify me if (when, lol) I accidentally create a folder somewhere that should have been a dataset instead. Please use and enjoy if you like. https://gist.github.com/Renegade605/8eba0af1e7fa1b16cb74af0e79f3be98 Also note: I discovered recently that the mover (with CA Mover Tuning installed; not sure if it's part of the default mover) destroys empty datasets as part of its operation when it's finished. If that isn't desirable to you, for the obvious reasons, my current workaround is to snapshot datasets while they're empty so the destroy operation will fail. There are a few other quirks with that plugin on ZFS pools; see here for more info:
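      Roughly, the idea is to compare the folders under the pool's mountpoint against what zfs actually knows about. This is only a sketch of the concept with assumed names, not the linked gist:

        #!/bin/bash
        # Warn about top-level folders under a ZFS pool's mountpoint that are
        # plain directories rather than datasets. Pool name and mountpoint
        # layout are assumptions for illustration.
        POOL="cache"
        MNT="/mnt/$POOL"

        for dir in "$MNT"/*/; do
            name="$(basename "$dir")"
            # If zfs doesn't know about POOL/name, it's just a folder on the pool's root dataset.
            if ! zfs list -H "$POOL/$name" >/dev/null 2>&1; then
                echo "WARNING: $MNT/$name is a plain folder, not a dataset"
            fi
        done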
  21. Sorry, I should have mentioned I did that right away. No errors when connected via local ip:port. When connected via reverse proxy there's an error loading site.manifest, but that's not related as the problem is present either way.
  22. I get that, but "disable custom configuration and let the OS run wild on your data" is a pretty nuclear option that I wouldn't take as a first step.
  23. That wouldn't actually solve the problem though, and I found the cause without doing that. Maybe a good step for novice users, but not for experienced ones.
  24. Today I had my cache drive fill but the mover not move any files. I've determined the cause to be the method of reading used disk space on ZFS: the Move All threshold is set to 80%, and the cache pool's used space is 97%, but there are datasets on the cache pool with reservations and quotas, so the actual used space is 74% even though there is no space remaining for some shares. Can the mover tuning plugin evaluate used space on a per-dataset basis instead of a whole-pool basis? Or is there some other solution to this problem?
      EDIT TO ADD: I also notice now that the mover does not seem to be respecting the "Ignore hidden files and directories" option.
      Feb 27 09:41:55 Global-Dynamics move: file: /mnt/cache/movies/.radarr/Arrivl.2016.MULTi.2160p.UHD.BluRay.x265-OohLaLa/t82hgxa6svubn8lh42nkbjs4vjd5bfn50b/Arrival.2016.MULTi.2160p.UHD.BluRay.x265-OohLaLa/Proof/arrival.2016.multi.2160p.uhd.bluray.x265-oohlala.proof.jpg
      Also important: this error brought my attention to the fact that the mover destroys empty datasets when it's finished. Please remove this step, or at least make it an option to be decided by the user! Datasets have settings that should be maintained, and they should never be destroyed without explicit instruction from the sysadmin. For now I've taken a snapshot of the empty datasets to force this step to fail.
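      For reference, this is roughly the per-dataset view I mean, straight from the CLI (the columns are just a suggestion; cache is the pool from above):

        # Per-dataset space accounting; reservations and quotas show up per dataset
        # instead of inflating one whole-pool number.
        zfs list -r -o name,used,available,quota,reservation cache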
  25. I have isolated the cause and will address it in the mover tuning plugin thread.