[PLUGIN] ZFS Master



What is ZFS Master?

 

The ZFS Master plugin provides information about and control over the ZFS pools in your Unraid server. Available ZFS pools are listed under the "Main/ZFSMaster" tab. This plugin requires the ZFS for Unraid 6 plugin to be installed.

 

Features (All related to ZFS)

 

  • Main tab GUI integration
  • Report pool health
  • Scrub and export pools
  • Create new Datasets, with options for name, mount point, access time, case sensitivity, compression, quota, permissions, extended attributes, record size, primary cache, read-only, encryption, and size
  • Destroy Datasets, with an option to force and to recursively destroy all children and dependents
  • Filter the Datasets listed in the GUI using Lua patterns
  • Administer Dataset snapshots (take, hold, release, roll back, and destroy)
  • Destructive Mode: restrict which buttons appear in the GUI by turning destructive mode on or off
  • GUI indicator of the last snapshot date, shown as an icon on the right
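For reference, the dataset options above map onto standard ZFS properties. A rough sketch of the equivalent CLI call (the pool/dataset name and every property value here are made-up examples, not plugin defaults):

```shell
# Assemble, and print rather than execute, the kind of `zfs create` call
# the GUI's dataset options correspond to. "tank/media" and all property
# values below are hypothetical examples.
set -- zfs create \
  -o compression=lz4 \
  -o atime=off \
  -o xattr=sa \
  -o recordsize=128K \
  -o primarycache=all \
  -o quota=50G \
  tank/media
create_cmd="$*"
printf '%s\n' "$create_cmd"
```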

 


 

Configuration

 

Under Unraid's Settings menu -> ZFS Master, you can specify: the refresh interval for the GUI; whether destructive mode is on or off; a Lua pattern for dataset exclusions (convenient if you keep a Docker folder in a Dataset); the number of days used for the snapshot icon alert; and the naming options for snapshots taken from the GUI.
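The snapshot-naming settings boil down to a prefix plus a date pattern, along these lines (the prefix and strftime pattern are illustrative assumptions, not the plugin's defaults; GNU date is assumed):

```shell
# Build a snapshot name from a prefix and a date pattern, roughly as the
# naming settings describe. Prefix and pattern are hypothetical; the fixed
# epoch argument just keeps the result reproducible.
snap_name() {
  prefix=$1 pattern=$2 epoch=$3
  printf '%s-%s' "$prefix" "$(date -u -d "@$epoch" "+$pattern")"
}

name=$(snap_name "zfsmaster" "%Y%m%d-%H%M%S" 0)
printf '%s\n' "$name"   # zfsmaster-19700101-000000
```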

 


 

 

ChangeLog

 

2022.08.21

  • Change - UI into "folder" structure
  • Add - Support for ZFS Encryption
  • Add - Unlock and Lock actions for encrypted datasets
  • Fix - Error on unRaid 6.9.2 associated with session management

 

2022.08.02

 

  • Warning - Please Update your exclusion pattern!
  • Add - Browse Button for Datasets
  • Add - Support for listing volumes!!
  • Add - Lua script backend for loading dataset information (50% faster loading times)
  • Change - Exclusion pattern for datasets (Please check http://lua-users.org/wiki/PatternsTutorial)
  • Change - UI columns re-organized to the unraid way (sort of)

 

2022.04.13

  • Add - Dataset Snapshot Creation Option
  • Add - Settings for Snapshot Creation (pattern and prefix)
  • Change - "Destroy" and "Snapshots" buttons merged to "Actions"

 

2022.04.10

  • Add - Dataset Snapshot management (rollback, hold, release, destroy)
  • Fix - Installation script bug

 

2022.04.08

  • Add - Set permissions for new Datasets

 

2021.11.09a

  • Add - List of current Datasets at Dataset Creation
  • Add - Option to export a Pool (under construction)
  • Fix - Compatibility with unRAID RC versions

 

2021.10.08e

  • Add - SweetAlert2 for notifications
  • Add - Refresh and Settings Buttons
  • Add - Mountpoint information for Pools
  • Add - Configurable Settings for Refresh Time, Destructive Mode, Dataset Exclusions, and Max Days for the Snapshot Icon Alert
  • Fix - Compatibility with Other Themes (Dark, Grey, etc.)
  • Fix - Improper dataset parsing
  • Fix - Regex warnings
  • Fix - UI freeze on some systems when destroying a Dataset
  • Remove - Unassigned Devices Plugin dependency

 

2021.10.04

  • Initial Release.

 

Official GitHub Repo

 

https://github.com/IkerSaint/ZFS-Master-Unraid

 

 


Works well so far. Very neat to have this capability. Things I think would be cool to see:

 

1. Bulk delete

2. The ability to mount a snapshot (instead of just rolling back, in case you just need to pull a file or something)

 

Obviously the second one can be done easily with the CLI, but given how nice your stuff is looking so far...


@ich777 GitHub link added! For the support link, I'll wait for the next update (a couple of days).

 

@muddro Thanks for the feedback. 1. That would be great; I'll give some thought to the UI design and add it in a future version. 2. I'm not so sure it's convenient; having dangling snapshots mounted here and there could be troublesome. The current way of using the .zfs directory to access snapshots seems more convenient to me.
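For anyone following along, pulling a single file back through the hidden .zfs directory needs no rollback at all; the mount point, snapshot, and file names below are made-up examples:

```shell
# Recover one file from a snapshot via the hidden .zfs directory.
# All paths here are hypothetical examples; the copy is printed
# rather than executed.
DATASET_MOUNT=/mnt/tank/media
SNAP=auto-20220410
FILE=example.mkv
src="$DATASET_MOUNT/.zfs/snapshot/$SNAP/$FILE"
printf 'cp %s %s\n' "$src" "$DATASET_MOUNT/$FILE"
```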

4 hours ago, Iker said:

@muddro Thanks for the feedback; [...] 2. I'm not so sure if it's convenient; having dangling snapshots mounted here and there could be troublesome; the current way of using .zfs directory to access the snapshots seems more convenient for me.

Thanks! As for number 2, I think I put the wrong thing down; I meant the ability to clone a snapshot. From the old TrueNAS documentation, which is where I picked up the practice:


 

Quote

 

Rollback is a potentially dangerous operation and causes any configured replication tasks to fail as the replication system uses the existing snapshot when doing an incremental backup. To restore the data within a snapshot, the recommended steps are:

Clone the desired snapshot.

Share the clone with the share type or service running on the TrueNAS® system.

After users have recovered the needed data, delete the clone in the Active Pools tab.

This approach does not destroy any on-disk data and has no impact on replication.
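On the CLI, the quoted recommendation comes down to a short sequence, sketched here as printed commands with hypothetical dataset and snapshot names:

```shell
# Dry run of the clone-then-recover workflow described above: clone the
# snapshot, copy out what you need, destroy the clone. All names made up.
snap="tank/media@auto-20220410"
clone="tank/media-recovery"
for step in \
  "zfs clone $snap $clone" \
  "cp -a /mnt/$clone/needed-file /mnt/tank/media/" \
  "zfs destroy $clone"
do
  printf '%s\n' "$step"
done
```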

 

 


Hi, thanks for starting this wonderful plugin. I have quite a lot of datasets, so I wonder if it might be possible to show the pools in a list and only show/hide the datasets by clicking the pool, i.e. an expand/collapse feature. That way we could have a summary status across the whole system and more easily find problems without scrolling and potentially missing them in the process. Plus it would be a lot cleaner; currently mine takes up about 3-4 screens of scrolling.

 

Thanks.

 

<Edit> I think I should have opened my eyes - 'Show Datasets'!


Thanks again for this welcome update.

 

I have a suggestion. Every time we perform an action in the "SNAPS" UI, the window automatically closes. Would it be possible to keep the window for the selected dataset open?

 

Alternatively, allow selecting several snapshots with checkboxes for bulk operations (only really applicable to deletion, I think), for example when testing snapshots.

 

For context: I tested Sanoid earlier, and due to a bad configuration it took 30 snapshots in a row for each of my 20 datasets. I had to use the CLI to clean up, as the window closed by itself after each deletion.
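For anyone stuck doing the same cleanup, a loop like this is roughly what it takes on the CLI. The destroy is printed instead of run, the prefix is made up, and the hard-coded sample list stands in for the output of `zfs list -t snapshot -o name -H`:

```shell
# Print (rather than run) a bulk `zfs destroy` for every snapshot whose
# name matches a prefix. List and prefix are hypothetical; in real use the
# list would come from: zfs list -t snapshot -o name -H tank/media
snapshots="tank/media@sanoid-001
tank/media@sanoid-002
tank/media@keep-this"

destroyed=0
for s in $snapshots; do
  case "$s" in
    tank/media@sanoid-*)
      printf 'zfs destroy %s\n' "$s"
      destroyed=$((destroyed + 1))
      ;;
  esac
done
```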

 

And thanks for your work!


New update: you can now take snapshots. In the config section, it's possible to define the pattern and prefix for snapshot names, and all the dataset buttons (Snapshot and Destroy) have been merged into "Actions".

 

@gyto6 Bulk deletion of snapshots is in the near-term plans, as a button in "Admin Dataset Snapshots".


@Iker This is a very useful plugin! Thanks so much for building this!

 

One feature request I would have is the ability to browse to the dataset using the built-in Unraid web-based file browser, similar to other drives on the "Main" page. It would just link to the dataset paths you already have listed, for example: http://unraid-server/Main/Browse?dir=/path/to/zfs/dataset 

 

Thanks again!

58 minutes ago, RedTechie said:

One feature request I would have is the ability browse to the dataset using the built in Unraid web based file browser....

 

In the early days of the plugin that was a feature, but it's not possible for all pools. For example, my main pool is "hddmain", mounted at "/hddmain"; the file browser doesn't work there, and I have no idea why. It works for root directories like boot or mnt, but not for others like tmp or home. So it's probably not a problem on my end, but a file browser restriction on which directories it can navigate.

1 hour ago, Iker said:

 

In the early days of the plugin that was a feature; but... it's not possible for all the pools; [...] probably a filebrowser restriction on which directories it could navigate.

 

Ah, interesting! That's why it worked for me when I tested it manually in the URL, as I mounted my ZFS pool under /mnt/.

 

It may be a security-related issue. Yet they allow you to access /boot... I'm not sure.

One potential alternative: put it behind a condition, so that if the pool is mounted in an 'Unraid-acceptable' location (like /mnt/*), the icon would appear?


I'll take a look, but any new major features are most likely going to take a while. Currently I'm rewriting some of the backend; the load/refresh speed isn't really that good, so I'm exploring ZFS APIs/programs that should help make the plugin free of parsing errors and improve overall speed.

On 4/16/2022 at 7:51 AM, nathan47 said:

Appreciate the plugin and the work that goes into it. One thing I've noticed is that it doesn't see one of my pools. It's missing my pool named "main".

I have the exact same issue with my server...

root@TaylorPlex:~# zpool list -v
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool                                  936G  2.61G   933G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                 936G  2.61G   933G        -         -     0%  0.27%      -    ONLINE
    sdk3                                      -      -      -        -         -      -      -      -    ONLINE
    sdl3                                      -      -      -        -         -      -      -      -    ONLINE
mainpool                                   674T   358T   316T        -       48G     1%    53%  1.00x    ONLINE  -
  raidz3-0                                 187T   173T  13.9T        -       16G     0%  92.5%      -    ONLINE
    95135db1-c66b-4dc1-af86-9294e996cfd0      -      -      -        -         -      -      -      -    ONLINE
    035d8726-5610-4b32-a260-b391e0aeb809      -      -      -        -         -      -      -      -    ONLINE
    d3a94833-9293-495a-b905-24d285d722ab      -      -      -        -         -      -      -      -    ONLINE
    672d7b6f-965c-415a-95a0-ad5cd974fe7d      -      -      -        -         -      -      -      -    ONLINE
    127da1c4-b2aa-4942-baba-bfae490d66fa      -      -      -        -         -      -      -      -    ONLINE
    150feebe-72b9-4ce5-8039-64018c024f27      -      -      -        -         -      -      -      -    ONLINE
    d3a32ea7-c354-40fe-9ced-828d1d56f4b8      -      -      -        -         -      -      -      -    ONLINE
    23086c7a-8632-4fae-8894-8d53df099b13      -      -      -        -         -      -      -      -    ONLINE
    b7fa0359-1c83-4c3e-a11d-7bf0c5c45c76      -      -      -        -         -      -      -      -    ONLINE
    778df321-8c7e-44dd-a9dd-fe7dbb512822      -      -      -        -         -      -      -      -    ONLINE
    520c23be-cea1-4438-a720-a39a6b516f18      -      -      -        -         -      -      -      -    ONLINE
    1a0f9a46-9c42-400a-8cb6-f394ed7a6ec1      -      -      -        -         -      -      -      -    ONLINE
    4224fdfb-8d0a-46bb-bf3a-516cac17f430      -      -      -        -         -      -      -      -    ONLINE
    a9a2e832-f13f-4f19-acd8-b37f0262a09e      -      -      -        -         -      -      -      -    ONLINE
    896bdbbb-7dc1-476f-bc77-8626de2aec66      -      -      -        -         -      -      -      -    ONLINE
  raidz3-1                                 136T   135T  1.36T        -       16G     5%  99.0%      -    ONLINE
    9afc98cc-e84b-451e-bdb1-f6759f62635e      -      -      -        -         -      -      -      -    ONLINE
    b2953602-932f-4900-a25d-278298762b7f      -      -      -        -         -      -      -      -    ONLINE
    ad53b00b-9d7b-4135-8b92-b1c1a99d6854      -      -      -        -         -      -      -      -    ONLINE
    c234663c-12b6-45dc-bb7b-42135ed53cb9      -      -      -        -         -      -      -      -    ONLINE
    39bd5a10-8e37-4bb1-ad79-dae4692143ba      -      -      -        -         -      -      -      -    ONLINE
    8ee21b30-63f9-4026-8d66-1e69b0ff4972      -      -      -        -         -      -      -      -    ONLINE
    a86e5df2-3a00-4e4c-aec4-cb23627d6215      -      -      -        -         -      -      -      -    ONLINE
    78f631c6-4bee-45fb-8578-36396683c759      -      -      -        -         -      -      -      -    ONLINE
    6def9ab5-9b7c-430a-86f2-ae03ed090493      -      -      -        -         -      -      -      -    ONLINE
    ebb7fbd2-6f7c-400e-ab75-8d664fb15762      -      -      -        -         -      -      -      -    ONLINE
    d83df2df-a0fe-4f0a-895e-c8a02ab44781      -      -      -        -         -      -      -      -    ONLINE
    82c424e4-6d1c-43a7-9ef1-bd9f4b7fe1d5      -      -      -        -         -      -      -      -    ONLINE
    0c988f98-8ae4-40d5-9a9e-a0ced36d5391      -      -      -        -         -      -      -      -    ONLINE
    bab6441a-9605-4683-9599-64efcdec8477      -      -      -        -         -      -      -      -    ONLINE
    827b4605-209e-43d0-bbb2-e44d2f2414d0      -      -      -        -         -      -      -      -    ONLINE
  raidz3-2                                 187T  49.7T   137T        -       16G     0%  26.6%      -    ONLINE
    28b0bf94-4328-4d4c-a3ae-46e010e21f66      -      -      -        -         -      -      -      -    ONLINE
    30a46b87-2ac4-4c40-bf53-e84cf222f5c3      -      -      -        -         -      -      -      -    ONLINE
    e89b48ad-4d6e-4cae-9932-7fda1220d491      -      -      -        -         -      -      -      -    ONLINE
    a16701ab-a151-4b1b-9afc-ca5303d1b53a      -      -      -        -         -      -      -      -    ONLINE
    1a6ee52c-94a8-463a-9edb-b3db277863f0      -      -      -        -         -      -      -      -    ONLINE
    826de601-2706-491b-af3e-e2916fe223c2      -      -      -        -         -      -      -      -    ONLINE
    84571fba-f1ba-4af0-9dd0-1ed54e8ffd1e      -      -      -        -         -      -      -      -    ONLINE
    534c0af8-304d-4296-9133-921bd90a4dee      -      -      -        -         -      -      -      -    ONLINE
    4486918f-d8b1-4479-9439-259b06b3d3c6      -      -      -        -         -      -      -      -    ONLINE
    0435633d-612a-471b-aa56-345caac43ba7      -      -      -        -         -      -      -      -    ONLINE
    6eecb4ff-c9e9-423a-810a-0b65a6a1dccd      -      -      -        -         -      -      -      -    ONLINE
    1b488a53-247d-4786-ba8f-40c29c6ea5e3      -      -      -        -         -      -      -      -    ONLINE
    6f901e50-0b53-4ea8-b83b-712766853919      -      -      -        -         -      -      -      -    ONLINE
    c6d66210-4621-4d59-a751-a5dc9a327d84      -      -      -        -         -      -      -      -    ONLINE
    777da20a-67e6-4452-b892-7a5d548e41cd      -      -      -        -         -      -      -      -    ONLINE
  raidz3-3                                 164T   186G   164T        -         -     0%  0.11%      -    ONLINE
    8056a492-c16a-4bdc-93c1-bb13dfb88af5      -      -      -        -         -      -      -      -    ONLINE
    2ba06a18-4c8b-44d5-b858-f77e03581051      -      -      -        -         -      -      -      -    ONLINE
    f882076c-76af-4bc7-b531-30a9369a73fc      -      -      -        -         -      -      -      -    ONLINE
    47a89ace-81b4-4d71-9851-d4e7bdd0bd88      -      -      -        -         -      -      -      -    ONLINE
    8d9e4c4f-c7a2-4bee-b1ff-4e412f9841ba      -      -      -        -         -      -      -      -    ONLINE
    e30045b7-4a60-40f7-9899-45e4ce166cca      -      -      -        -         -      -      -      -    ONLINE
    0a09baed-9f62-407f-9be8-4909ab3c1060      -      -      -        -         -      -      -      -    ONLINE
    0f571f77-cf5b-493a-bf81-1b4b7e416b61      -      -      -        -         -      -      -      -    ONLINE
    fa835582-aef7-461b-99b4-803ffb30d13b      -      -      -        -         -      -      -      -    ONLINE
    2ce51f18-a0f1-4071-99ac-f3b3896ec42d      -      -      -        -         -      -      -      -    ONLINE
    14792535-5e65-4eab-8993-9cdcaaa4799d      -      -      -        -         -      -      -      -    ONLINE
    33e560ce-e50e-492a-8de4-b823767dd3f6      -      -      -        -         -      -      -      -    ONLINE
    b96570e2-b608-474a-aec5-053506cf0479      -      -      -        -         -      -      -      -    ONLINE
    3a64abe6-379a-4be4-9f4b-b043243fad54      -      -      -        -         -      -      -      -    ONLINE
    c7ee0ddf-17b0-4853-9bb5-9d32e03619f3      -      -      -        -         -      -      -      -    ONLINE
cache                                         -      -      -        -         -      -      -      -  -
  0e4c710e-b0e0-4b23-bf5c-40db6bc681d4    1.75T   205G  1.55T        -         -     0%  11.5%      -    ONLINE
spare                                         -      -      -        -         -      -      -      -  -
  79a8d09a-a4dc-467a-866d-e364f4a30c79        -      -      -        -         -      -      -      -     AVAIL
  c6effcbe-3c88-445f-9d02-91b5ec741ec5        -      -      -        -         -      -      -      -     AVAIL
  7121a1a9-21f4-43b1-be4a-a4842ab63d90        -      -      -        -         -      -      -      -     AVAIL
  b8336d4d-0b11-490c-9bd0-77a9b80ce584        -      -      -        -         -      -      -      -     AVAIL
  07ceebe0-b976-43ed-bcc2-2e9737f31666        -      -      -        -         -      -      -      -     AVAIL
  1e8efebc-da56-471b-a46d-1a27d559a7d3        -      -      -        -         -      -      -      -     AVAIL
ssdpool                                   11.6T  2.79T  8.84T        -         -     2%    23%  1.00x    ONLINE  -
  raidz1-0                                5.81T  1.39T  4.42T        -         -     2%  23.9%      -    ONLINE
    scsi-35000cca05068a8d0                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca050697eac                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca05069d390                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506a1718                    -      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                5.81T  1.40T  4.41T        -         -     2%  24.1%      -    ONLINE
    scsi-35000cca0506a4cd8                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506a87e4                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0531605e4                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0531606f0                    -      -      -        -         -      -      -      -    ONLINE
cache                                         -      -      -        -         -      -      -      -  -
  d54e0d6a-41f2-4d79-a344-7f3f93793e04    1.75T   884M  1.75T        -         -     0%  0.04%      -    ONLINE
spare                                         -      -      -        -         -      -      -      -  -
  scsi-35000cca0532a531c                      -      -      -        -         -      -      -      -     AVAIL
  scsi-35000cca053410650                      -      -      -        -         -      -      -      -     AVAIL
root@TaylorPlex:~# 

I recently tried to re-import my "main" pool as "mainpool" to see if it helped. It didn't.

54 minutes ago, nathan47 said:
root@TaylorPlex:~# zpool list -v
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool                                  936G  2.61G   933G        -         -     0%     0%  1.00x    ONLINE  -
[...]

I recently tried to re-import my "main" pool as "mainpool" to see if it helped. It didn't.

WOW!!! That's a lot of disks :) 

1 hour ago, nathan47 said:
root@TaylorPlex:~# zpool list -v
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool                                  936G  2.61G   933G        -         -     0%     0%  1.00x    ONLINE  -

 

 

That's quite a system! I fixed the regex for identifying the pools; the "EXPANDSZ" column was the cause. In about 10 minutes the update should be live. Thanks for your help.


Could this plugin get a feature to create SMB shares for datasets via the GUI, similar to the "unassigned devices" plugin?

Currently I'm modifying "/boot/config/smb-extra.conf" and triggering an SMB config reload via "/usr/bin/smbcontrol $(cat /var/run/smbd.pid 2>/dev/null) reload-config 2>&1", so I don't have to shut down the array for SMB share changes.


Hi @bergi9, that is a great idea; I will probably implement it in a couple of versions. Right now I'm focused on refactoring part of the backend. Do you think that just having templates would be good? I mean an option on the dataset for "Create SMB Share" that presents templates the way unRaid does: "Private, Read Only, Public".
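Such a template could be as little as an smb-extra.conf stanza generated per dataset. A rough sketch, with made-up share parameters and template names, not anything resembling the plugin's actual implementation:

```shell
# Emit an smb-extra.conf stanza for a dataset from a simple template name.
# Share name, path, and the per-template settings are hypothetical.
smb_stanza() {
  share=$1 path=$2 template=$3
  case "$template" in
    public)  guest=yes; writable=yes ;;
    private) guest=no;  writable=yes ;;
    *)       guest=no;  writable=no  ;;  # read-only fallback
  esac
  printf '[%s]\n  path = %s\n  guest ok = %s\n  writeable = %s\n' \
    "$share" "$path" "$guest" "$writable"
}

stanza=$(smb_stanza media /mnt/tank/media public)
printf '%s\n' "$stanza"
```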


Hey, how is the "Set permissions" part of the Create Dataset supposed to be used?

I keep getting this error message when I try to fill in one of my Unraid share users (and I keep having SMB issues on Windows, no write access possible so far, which is why I'm investigating ZFS user permissions).

Cheers!

 


10 minutes ago, chrismuc said:

Hey, how is the "Set permissions" part of the Create Dataset supposed to be used?

 

That's a weird error. Set Permissions is very straightforward: just specify the Linux permissions you want the plugin to set on the folder (777, 755, etc.). Most of the time I use 775; it saves me a lot of trouble with SMB writes.


If you like, you can send me the parameters you're using for the dataset creation in a PM, and I'll take a look to see if anything is wrong.
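For clarity, the field expects an octal mode rather than a user name; applying one to a directory works like this (using a scratch directory as a stand-in for a dataset mount point, GNU stat assumed):

```shell
# The "Set permissions" field takes an octal mode (e.g. 775), not a user.
# Demonstrate on a throwaway directory standing in for a dataset mountpoint.
dir=$(mktemp -d)
chmod 775 "$dir"
mode=$(stat -c '%a' "$dir")   # GNU stat; prints the octal mode
printf '%s\n' "$mode"         # 775
rmdir "$dir"
```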


@Iker Glad to hear you plan to implement it. It's not a time-critical feature for me, but nice to have.

 

I took the commands to reload the SMB config files from https://github.com/dlandon/unassigned.devices/blob/master/source/Unassigned.devices/include/lib.php#L1627

Maybe you could look at how the unassigned.devices plugin handles shares. As of 6.10-rc8, the unassigned devices share feature does not work for me.

Reading the unassigned devices code on GitHub, it appears to support a range of share options like the Unraid share page; maybe it could help you further.

 

On 5/11/2022 at 7:11 PM, Iker said:

Do you think that just having templates would be good?, I mean an option on the dataset for "Create SMB Share" and then present the templates as unRaid do "Private, Read Only, Public".

Yes, that works for me. But if a share already exists on a dataset, then also add a "Remove SMB Share" option.

