Renegade605

Members
  • Posts: 92
  • Joined
  • Last visited

Renegade605's Achievements

Apprentice (3/14)

24 Reputation
2 Community Answers

  1. Typically it works fine to delete datasets if you unmount them first (mountpoint=none). You can still back up off-site if you're using snapshots, and if your off-site server is also running zfs, you can back up with the snapshots included (see the example commands after this list). Unless you're so low on space that you can't, snapshots are too good a tool not to use for all important data. Using datasets for almost everything is also good zfs practice, especially if you want to play with tunables later. When the docker storage method is directory and the volume is zfs, this is expected behaviour and nothing is wrong; just hide them from the plugin to avoid clutter.
  2. I misunderstood somewhat. Regardless, mine still show undefined for mountpoint, on top of the other property issues.
  3. @Iker FYI, I'm on the latest version but nothing changed about the zvol properties in the GUI.
  4. Reading the help file that Iker linked you will be a good move. But in a nutshell, snaps use ~0 space to create and only grow when you edit or delete data that is snapshotted. If you use them on a dataset with constantly changing contents (like an NVR), the space usage will very quickly balloon out of control. If you want to keep more NVR footage, you should tell Frigate and let it handle that its own way. Reserve snapshots for data that doesn't change a lot and/or that would be extremely important to roll back with one click. For example, I snapshot most of my appdata so that if I or an update borks everything, I can downgrade the container, roll its appdata folder back to how it was before, and get everything back the way it was in a few minutes (see the example commands after this list). My appdata has snaps every hour for 3 days, then every 4 hours for 7 days after that; it uses 13.5G for current data and 40G for snaps.
  5. This is pretty far outside of the scope of what the mover (and the cache itself) is intended for. If there is specific data that you want on the cache at all times, I suggest making a cache-only share for that data and keeping it separate from the other files that should be moved.
  6. @Iker Just an additional note, not sure if you're aware, but reporting of zvol properties is a bit wonky:
  7. Preferences probably depend a lot on how people are using their snapshots. With manual snapshots only, you might want to see them all. I have automatic snapshots and, as you guessed, many of my datasets are carrying over 100 snapshots at any given time. I definitely don't want (or need) to see all of those all the time, or even in the context menu. The context menu isn't a bad idea, maybe for just the last one? Last two? That should cover being able to see one you've taken manually without cluttering. In fact, for settings I preferred a pop-up over the new context menu: it's fine for changing one thing, but if you want to change a bunch of settings at once it's a pain, so now I just do that from the terminal (see the example commands after this list). I've also noticed that a lot of settings aren't there, or not all of their options are available. I assume you tried to pick the most commonly used ones to avoid clutter, which makes total sense. But, for example, Quota is there but not Reservation, and I pretty much never use one without the other. Recordsize only offers a few options, but any power of 2 from 512 to 1M is valid.
  8. O.o ... Datasets have settings... Just because I want the data moved to the array doesn't mean I want my settings erased every time that happens? I'd want to snapshot the empty dataset to prevent the dataset from being destroyed. I thought that reasoning was clear. It's empty in the snapshot; it won't be empty forever.
  9. Cool, I'll let them mark this as solved in 6.13 then.
  10. I agree, and I shall. But it's still Unraid's job to correct for this in the future, if it's going to be important to ZFS working properly. Ideally, Unraid would correct for it even when it's the user's fault, but definitely when it isn't.
  11. It was created with the GUI. The cache disks were manually added, though. They were definitely in the correct order back when I did that, because that was required for the cache disk part. I notice now that the order they are listed in zpool status has since changed: two of the disks in the raidz1 vdev and two of the cache disks have swapped positions.
  12. Today I needed to replace a disk in a zfs pool. I could not have the new and old disks in the system at the same time, and this server does not properly hot swap disks, so I:
        • Stopped the array
        • Unassigned the old disk
        • Shut down the server
        • Physically swapped the disks
        • Restarted the server
        • Assigned the new disk to the pool
        • Started the array
      The zpool did not automatically begin resilvering; I manually ran a `zpool replace` command (see the example commands after this list). edi-diagnostics-20240306-2114.zip
  13. Also, this has been around as long as I can remember, but it doesn't affect anything so I never mentioned it: Quotes in the Tray ID cause some weirdness. My trays are labeled '3.5" Bays' and '2.5" bays', but in the dropdown menus on tray allocations they appear as 'Bays 3.5"' and 'Bays 2.5"'
  14. Maybe something to do with the last update, but the configuration disappeared from my plugin. Based on this, looks like yesterday at 12:50ish. Strange timing, as I run plugin updates nightly at 23:00, so not sure why that would have happened at that time. But maybe something to be aware of. Restoring to the 20240303-124703 plugin brought everything back.
  15. Today I accidentally removed a drive from its bay while my server was running (turns out that bay's power light doesn't work, oops). It wouldn't have been a big deal, except that when I reinserted it, it was detected as a separate device (/dev/sdq instead of /dev/sdj). I went to stop the array to fix this, but the stop array button in the GUI is broken now. This is logged when I click "proceed" to stop the array:
        Mar 4 16:33:51 Global-Dynamics nginx: 2024/03/04 16:33:51 [error] 24668#24668: *4649493 connect() to unix:/var/run/emhttpd.socket failed (11: Resource temporarily unavailable) while connecting to upstream, client: 192.168.xxx.xxx, server: , request: "POST /update.htm HTTP/1.1", upstream: "http://unix:/var/run/emhttpd.socket:/update.htm", host: "192.168.xxx.xxx:5001", referrer: "http://192.168.xxx.xxx:5001/Main"
      I had to reboot the server instead. After the reboot I got a notification about an unclean shutdown, which normally does not happen, and the drive was still marked as offline (contents emulated) even though it was reconnected (its temperature was available; it wasn't before). The stop array button did work post-reboot, and I got the drive recognized again with the procedure from the docs, but of course that required rebuilding the drive when I would have preferred to just run a parity check instead.
      Pre-reboot: global-dynamics-diagnostics-20240304-1634.zip
      Post-reboot + fixing disk assignment: global-dynamics-diagnostics-20240304-1701.zip
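
A rough sketch of the dataset cleanup and snapshot-based off-site backup described in post 1, run from the terminal. The pool, dataset, snapshot, and host names (tank/olddata, tank/important, offsite, backup-host, backuppool) are hypothetical placeholders, not anything from the original posts:

    # Unmount the dataset first, then delete it (and any children)
    zfs set mountpoint=none tank/olddata
    zfs destroy -r tank/olddata

    # Off-site backup with snapshots included, assuming the remote server also runs zfs
    zfs snapshot -r tank/important@offsite
    zfs send -R tank/important@offsite | ssh backup-host zfs receive -u backuppool/important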
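
A minimal sketch of the snapshot space check and appdata rollback workflow from post 4. The dataset, snapshot, and container names (cache/appdata, mycontainer, autosnap_2024-03-06_1200) are hypothetical:

    # See how much space is used by current data vs. snapshots
    zfs list -o name,used,usedbydataset,usedbysnapshots -r cache/appdata

    # Roll a container's appdata back after a bad update
    docker stop mycontainer
    zfs rollback cache/appdata/mycontainer@autosnap_2024-03-06_1200
    docker start mycontainer

By default, zfs rollback only goes back to the most recent snapshot; rolling back further requires -r, which destroys the newer snapshots in between.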
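
The terminal route mentioned in post 7 for changing several dataset properties at once. The dataset name (tank/media) and the values are hypothetical:

    # Quota and reservation are typically set together
    zfs set quota=500G tank/media
    zfs set reservation=500G tank/media

    # Recordsize accepts any power of 2 from 512 to 1M, not just the GUI's presets
    zfs set recordsize=1M tank/media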
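
The manual resilver kick-off from post 12, roughly. The pool name (tank) and device paths are hypothetical; the real ones would be in the attached diagnostics:

    # Confirm which device the pool considers missing or unavailable
    zpool status tank

    # Replace the old (now absent) device with the newly installed disk
    zpool replace tank /dev/sdX /dev/sdY

    # Watch the resilver progress
    zpool status -v tank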