[PLUGIN] ZFS Master


Iker

Recommended Posts

9 hours ago, unr41dus3r said:

Maybe a noob ZFS question, but I have to run "zfs mount -a" after every reboot to mount my ZFS datasets again.

Is this by design, or a configuration mistake on my part?

 

That is not even close to normal; you should report it in General Support, as pools and datasets are supposed to be mounted automatically on every reboot unless you defined otherwise at creation time.
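In the meantime, you can check what ZFS itself thinks should happen at boot. A rough diagnostic sketch (replace `poolname` with your actual pool name):

```
# Show the mount-related properties for every dataset in the pool
zfs get -r -o name,property,value canmount,mountpoint,mounted poolname

# List the pools ZFS currently knows about
zpool list
```

If `canmount` is `off`, or `mounted` stays `no` after a reboot, that points at where the problem is.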

Link to comment
8 hours ago, Iker said:

 

That is not even close to normal; you should report it in General Support, as pools and datasets are supposed to be mounted automatically on every reboot unless you defined otherwise at creation time.

 

The datasets on my array cache drive are mounting correctly, but the datasets I created on my Unassigned Devices disk are not mounting.

I created the ZFS disk with Unassigned Devices and configured the datasets with ZFS Master.

Link to comment
21 minutes ago, unr41dus3r said:

The datasets on my array cache drive are mounting correctly, but the datasets I created on my Unassigned Devices disk are not mounting.

I created the ZFS disk with Unassigned Devices and configured the datasets with ZFS Master.

Unraid does a zfs mount -a after array start, but UD disks are mounted after that, so you'd need to ask in the UD plugin support thread or just have a script doing it.
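If you go the script route, a minimal sketch for the User Scripts plugin, scheduled "At Startup of Array", could look like this (the delay is an assumption, just to give UD time to bring the disk online first):

```
#!/bin/bash
# Give Unassigned Devices a moment to bring the ZFS disk online,
# then mount any datasets missed by the earlier automatic mount.
sleep 60
zfs mount -a
```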

  • Thanks 1
Link to comment
On 10/3/2023 at 3:17 PM, Niklas said:


Look at pages 8-9 in this thread. You will probably need to remove the datasets manually.

I have deleted everything now, reformatted my pool, and set everything up fresh and new. Now it looks right, but I still can't find the right setting to hide those datasets.
Any ideas? I tried "/cache/docker/.*" and "/docker/.*".

[screenshot attachment: grafik.png]

Link to comment
1 hour ago, Joly0 said:

I have deleted everything now, reformatted my pool, and set everything up fresh and new. Now it looks right, but I still can't find the right setting to hide those datasets.
Any ideas? I tried "/cache/docker/.*" and "/docker/.*".

 

"/docker/.*" should do the trick; if not, please send me a PM with the output of "zfs list".
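As a quick way to double-check what that pattern would cover, you can list the child datasets yourself; "cache" here is just a placeholder pool name:

```
# Show every dataset nested under the docker dataset
zfs list -r -o name cache/docker
```

Anything whose path contains /docker/ followed by a child name is what the /docker/.* pattern should be hiding.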

 

Best,

Link to comment
40 minutes ago, Iker said:

 

"/docker/.*" should do the trick; if not, please send me a PM with the output of "zfs list".

 

Best,

OK, I don't know why it worked now and not before, but I tested it again and just waited a bit longer, and now it works. Thanks!

  • Like 1
Link to comment

A new update is live with the following changelog:

 

2023.10.07

  • Add - Cache last data in Local Storage when using "no refresh"
  • Fix - Dataset admin Dialog - Error on select all datasets
  • Fix - Multiple typos
  • Fix - Special condition crashing the backend
  • Fix - Status refresh on Snapshots admin dialog
  • Change - Date format across multiple dialogs
  • Change - Local Storage for datasets and pools view options

Thanks @Niklas; while looking for a way to preserve the views, I ended up finding an excellent way to implement a cache for the last refresh :). Also, the view options are now as durable as they can be, even across reboots.

 

How does the cache work?

Every time the plugin refreshes the data, it saves a copy to the web browser's local storage. If you have configured the "No refresh" option, the plugin loads that information (including the timestamp) from the cache as soon as you enter the main page; this operation is almost instantaneous. This only happens if the "No refresh" option is enabled; otherwise, the plugin loads the information directly from the pools. The cache also works with Lazy and Classic load.

 

Best,

Edited by Iker
  • Thanks 3
Link to comment
7 hours ago, Iker said:

No, you don't need to create the destination in advance. Which Unraid and ZFS Master versions are you using?

OK, I worked this one out, and it's definitely my lack of knowledge... again 😒

Basically, I was trying to clone to another ZFS pool on a separate drive; once I cloned to "pool_name/", the pool the snapshot is actually stored on, it cloned just fine.
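That matches how ZFS works: a clone always has to live in the same pool as the snapshot it is based on, so copying to a pool on another drive needs send/receive instead. A rough sketch with placeholder names:

```
# Clone: the target must be inside the pool that holds the snapshot
zfs clone pool_name/appdata@mysnap pool_name/appdata_clone

# Cross-pool copy: use send/receive instead of clone
zfs send pool_name/appdata@mysnap | zfs receive otherpool/appdata_copy
```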

Link to comment

OK, I think I'm missing something here. I have had this setup for a while, as per the Spaceinvaderone video, with all of my appdata folders as datasets, using the script to create snapshots and replicate them to a ZFS disk in the array. Today is the first time I've had to try to roll back/restore, and I'm at a loss.

So, my calibre-web install appears to have reverted to a new install, and I can't even log in. No problem, I thought, I'll roll back to a snapshot from last week, when I knew it was working. Doing this results in an empty appdata folder. Weird, I thought, but no problem, I have these replicated. And so my first issue: there is no documentation anywhere on how to get my replicated snapshots from Disk1 back to the cache. Do I even need to do this? Can I not just restore the appdata folder from Disk1? Any ideas why rolling back the snapshots on the cache results in empty folders?

So confused.

Link to comment

@Iker I can't tell you how nice it has been to use the plugin since you changed the refresh methodology!  Huge improvement for my use case!

 

I do have a future feature request: the ability to refresh by pool, i.e. a refresh button on the pool bar that has the "hide dataset" and "create dataset" buttons.

And/or in the config the ability to select/deselect pools from the refresh.


How I use this:

All my ZFS pools are SSDs, so I don't care about spin-up/down on those. But I do have some ZFS-formatted disks as snapshot backup targets in the Unraid array. I rarely browse those and don't need ZFS Master to refresh them very often.

Being able to exclude just those pools (or having a button to refresh only the pool I'm working on) would make those ZFS array disks spin up even less.

 

But even without that, it has been a huge improvement! My disks are only spun up for about an hour a day now, whereas before they were spinning almost all day!

 

 

Link to comment

@NeoDude I don't think this is a ZFS Master issue, but I think I have an idea of what's happening.

What is your folder and dataset structure? You may need to execute the command with -r (recursive).
It sounds like you are trying to roll back the parent dataset while it has child datasets nested within it. When you browse .zfs/snapshot on the parent, you won't see data for the children; but if you go to .zfs/snapshot in the children, you will see the children's data.
The only way for you to mess this up is if you didn't take your snapshot with -r (recursive). If you have the snapshot, then the data is in there.
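To check whether a snapshot actually covered the children, something like this helps (placeholder pool/dataset names):

```
# Every child dataset should show its own @snapshot entry
zfs list -t snapshot -r cache/appdata

# Or look inside a child's hidden snapshot directory directly
ls /mnt/cache/appdata/calibre-web/.zfs/snapshot/
```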

Another reason I can see for this is if the dataset isn't being mounted.


To restore, you need to do the restore on the specific child dataset. If you have subfolders within that are actually configured as datasets, then you need to restore each folder/dataset (or make sure the restore is done recursively).
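For example, rolling back a single child dataset would look roughly like this (placeholder names):

```
# Roll the child back to a specific snapshot
zfs rollback cache/appdata/calibre-web@last_week

# If newer snapshots of that dataset exist, -r is needed to discard them
zfs rollback -r cache/appdata/calibre-web@last_week
```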

 

To pretty much clone the dataset recursively, with all its snapshots and properties, you use:

`zfs send -wR metapool/appdata@migration | zfs receive -Fdu metapool/appdata_new`


But read my notes below so you understand the flags, as the result is left unmounted.
 

Also, name the target differently for your testing so you don't overwrite anything. Then, when you confirm the data is there, you can rename it and update your mount points if needed.

Here are some of my notes on how I have done it in the past.  Hope it helps you!  If you need more help you can IM me so we aren't clogging up the thread here.

 

# Backups & Snapshots
## Snapshots

### Create New Snapshot
`zfs snapshot workpool/nextcloud@new_empty`
Recursive:
`zfs snapshot -r workpool/nextcloud@new_empty`

## Transfer Dataset from one location to another
### Create Snapshot
1. `zfs snapshot -r metapool/appdata@migration`

### Send to new dataset (Recursive with dataset properties & snapshots)
2. `zfs send -wR metapool/appdata@migration | zfs receive -Fdu metapool/appdata_new`

`-w` sends raw data; needed with encrypted datasets. It also keeps the recordsize and other options.
`-R` is recursive and includes all snapshots/clones.
`-F` forces the overwrite of the target dataset; use with care!
`-d` uses the provided dataset name as the prefix for the names of all received datasets. Essentially, this means the data will be received into the named dataset, but not as a clone.
`-u` ensures that the received datasets are not mounted, even if their mountpoint properties would typically cause them to be mounted automatically. Requires manual mounting!

3. Confirm data is in location and present

4. Rename old dataset to appdata_old (see the `zfs rename` sketch after these steps)

5. Confirm mount points changed for appdata_old

6. Rename appdata_new to appdata

### Mount Dataset (if left unmounted with -u flag)
7. `zfs mount metapool/appdata`

Done
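For reference, steps 4-6 above could look roughly like this, using the same placeholder dataset names:

```
# Step 4: move the old dataset out of the way
zfs rename metapool/appdata metapool/appdata_old

# Step 5: confirm where both datasets now mount
zfs get mountpoint metapool/appdata_old metapool/appdata_new

# Step 6: put the new copy in its place
zfs rename metapool/appdata_new metapool/appdata
```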

## Syncoid
### Replicate Dataset from one pool to another (Not fully tested)
`syncoid metapool/gitea/database workpool/gitea/database`  

 

 

Edited by samsausages
Link to comment
On 10/19/2023 at 4:06 PM, samsausages said:

I do have a future feature request: the ability to refresh by pool, i.e. a refresh button on the pool bar that has the "hide dataset" and "create dataset" buttons.

And/or in the config the ability to select/deselect pools from the refresh.

 

Right now, the refresh options are a global setting, but the plugin functionality is implemented at the pool level, so it should be... not easy (the cache could be a mess), but at least possible.

  • Like 2
Link to comment
