[PLUGIN] ZFS Master

What is ZFS Master?

The ZFS Master plugin provides information and control over the ZFS pools in your Unraid server. Available ZFS pools are listed under the "Main/ZFSMaster" tab. This plugin requires Unraid 6.12 or higher; for Unraid 6.11 or lower, the ZFS for Unraid 6 plugin needs to be installed instead.

 

Features (All related to ZFS)

  • Main tab GUI integration
  • Report Health for the Pool
  • Scrub Pools and Export Pool
  • Lazy load for snapshot information
  • Cache last data when using "no refresh" option
  • Create new Datasets, with options for name, mount point, access time, case sensitivity, compression, quota, permissions, extended attributes, record size, primary cache, read-only, encryption, and size.
  • Edit Dataset properties like compression, access time, quota, etc.
  • List child directories of a given dataset.
  • Convert directories to datasets.
  • Destroy Datasets, with an option to force and to recursively destroy all children and dependents.
  • Filter the Datasets listed in the GUI using Lua patterns.
  • Clone snapshots and promote datasets.
  • Dataset Snapshot administration (take, hold, release, rollback, batch delete, and destroy).
  • Destructive Mode: restrict which buttons appear in the GUI by turning destructive mode on or off.
  • GUI indicator of the last snapshot date, through an icon on the right.

 

[Screenshot: the ZFS Master main tab]

 

Note About Disks not Spinning Down

ZFS Master wakes up the disks only when you open the Main tab, because some snapshot information needs to be read from the disks. If you don't want your disks to spin up, change the "Refresh Interval" option in the settings to "No refresh"; the information will then not be loaded at all, even on a page refresh.

 

Configuration

Unraid Settings menu -> ZFS Master. Here you can specify the refresh interval for the GUI; whether destructive mode is on or off; Lua patterns for dataset exclusions (convenient if you keep a docker folder on a dataset); the number of days used for the snapshot icon alert; and the naming options for snapshots taken from the GUI.

 

[Screenshot: the ZFS Master settings page]

 

Exclusion Patterns (Or: Why am I seeing a lot of docker-daemon-related datasets?)

When you configure docker to use a directory on a ZFS pool, the docker daemon detects the underlying filesystem and uses some of its capabilities; in practice, it creates multiple datasets and snapshots. This slows down the user interface considerably, which is why you should exclude that directory using the "Datasets Exclusion Patterns" option. To learn how to create your exclusion pattern, please check http://lua-users.org/wiki/PatternsTutorial.
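
For example, if your docker directory lives on a dataset named cache/docker (a hypothetical name; substitute your own pool and path), a pattern like the following excludes everything beneath it:

cache/docker/.+

In Lua patterns, "." matches any character and "+" means one or more of the previous item; note that characters like "-" are magic in Lua patterns and need to be escaped with "%" if they appear in your dataset names.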

Note that if you are using Unraid 7, you can change the docker filesystem driver to overlay2 and get rid of this issue entirely.
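
For reference, on a stock Docker installation the equivalent change is made in /etc/docker/daemon.json; Unraid exposes the driver choice in its Docker settings page instead, so this snippet only illustrates the generic Docker mechanism:

{
  "storage-driver": "overlay2"
}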

What is Lazy Load?

Lazy load is a feature (you can enable it on the settings page) that loads the data in two stages:

  1. Load datasets: This stage loads all the datasets and the associated information for the pools (size, attributes, etc.), except the snapshot data. This change alone improves initial loading times by up to 90% (less than a second in most cases). However, be aware that all snapshot-related information and options will be unavailable until the second stage finishes.
  2. Load Snapshots: In this stage, the snapshot information and options are loaded and updated dynamically in the GUI; the time this takes depends on how many datasets and snapshots you have in your pool. This change increases the total load time by up to 15%; however, the interface feels more responsive.
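
Conceptually, the two stages map onto the two kinds of ZFS queries involved; as a rough CLI analogy (illustrative only, the plugin's actual backend differs):

zfs list -o name,used,avail,mountpoint        # stage 1: datasets only, fast
zfs list -t snapshot -o name,used,creation    # stage 2: snapshots, slow on pools with many snapshots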

 

In summary, Lazy Load provides a very good improvement in initial load times at the cost of a slightly higher total load time; the following is a comparison of what you can expect:

 

  • Classic Load: Load time - 1.4s
  • Lazy Load: Load Datasets time - 196ms, Load Snapshots - 1.65s (this includes the initial 196ms).

 

What is Directory Listing?

Directory Listing is a feature (you can enable it per dataset or in the plugin configuration) that lists the top-level folders for a given dataset. This functionality should give you better visibility over your pools, allowing you to spot possible duplicates and directories left over from a migration.

 

The folders are listed as child elements, after the datasets and with a different icon (a folder); the plugin doesn't gather any information about a directory besides its name. Given that a dataset snapshot covers its subfolders, the snapshot count is associated with all the subfolders, even if a folder is brand new and not present in any snapshot; this is by design. The listing itself amounts to a plain top-level directory scan, as sketched below.
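
A minimal sketch of the equivalent scan, assuming a dataset mounted at /mnt/tank/data (a hypothetical path):

find /mnt/tank/data -mindepth 1 -maxdepth 1 -type d    # immediate subdirectories only; no other metadata is read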

 

This feature needs to be enabled per dataset, using the Actions menu or the plugin configuration; note that it may increase loading times by 5 to 10%, depending on the number of folders under the dataset.

 

How Does Convert to Dataset Work?

The process is divided into three steps:

  1. Rename Directory: The source directory is renamed to <folder_name>_tmp_<datetime>.
  2. Create Dataset: A dataset with the directory's original name is created in the same pool (and path); the dataset options are the defaults.
  3. Copy the data: Data is copied using the command "rsync -ra --stats --info=progress2 <source_directory> <dataset_mountpoint>"; the GUI displays a dialog with a progress bar and some relevant information about the process.
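
For reference, a rough manual equivalent of the three steps, assuming a pool named "tank" and a directory "appdata" (both hypothetical; the plugin additionally handles error recovery and progress reporting):

mv /mnt/tank/appdata /mnt/tank/appdata_tmp_20240209120000                                     # step 1: rename the source directory
zfs create tank/appdata                                                                       # step 2: dataset with the original name, default options
rsync -ra --stats --info=progress2 /mnt/tank/appdata_tmp_20240209120000/ /mnt/tank/appdata/   # step 3: copy the data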

 

If anything fails on steps 1 or 2, the plugin returns an error, and the folder is renamed back to its original name. If something fails in step 3, an error is returned, but the dataset and directory remain intact.

 

Why is the reported free/used space different from what Unraid shows?

 

The plugin shows the very same information that you get from the regular "zpool" and "zfs" commands; for example, here is a pool reported via zpool list vs Unraid:

[Screenshot: the same pool as reported by zpool list vs the Unraid GUI]

 

But why is that? There are two factors:

  1. Unraid shows the units using SI (terabytes, gigabytes, etc.) instead of IEC (tebibytes, gibibytes, etc.); ZFS natively uses IEC units, so there is a discrepancy. The larger the pool, the larger the difference.
  2. Unraid shows the usable space reported by the filesystem in the GUI, but that's not the same information reported by ZFS at the pool level. For example, for a RAIDZ1 config with three 2 TB disks, you get 2 TB * (3 - p) with p = 1 disk for redundancy; in practical terms, that means 4 TB of usable space. At the pool level, however, that's not what gets reported: ZFS reports 6 TB of total space (about 5.5 TiB in IEC units), because that reflects the pool topology, not the filesystem view. At the dataset level (zfs list), ZFS reports the actual filesystem numbers, i.e. how much space you can use, excluding parity, slop space, snapshots, etc. A worked example follows below.
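
As a worked example for the three-disk RAIDZ1 above (three 2 TB disks, ignoring slop space and metadata overhead):

zpool list (pool level):       3 x 2 TB = 6 TB raw          ~ 5.46 TiB
zfs list (filesystem level):   (3 - 1) x 2 TB = 4 TB usable ~ 3.64 TiB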

 

 

ChangeLog

 

2024.12.08

  • Add - Config for pulling ZnapZend plans
  • Fix - Refresh and settings icon
  • Fix - Corner case if no information is loaded

 

2024.11.17

  • Add - ZnapZend plans information

 

2024.11.10

  • Fix - Style for Unraid 7
  • Add - Properties extraction for ZFS Volumes
  • Fix - for folder listing corner cases

 

2024.05.05

  • Fix - Malicious content in SweetAlert2 package. Thanks @Ubsefor
  • Add - Initial support for ZFS Vols (Just detection, more is coming)
  • Fix - Exclusion patterns keywords

 

2024.02.15

  • Fix - Directory Listing not Working

 

2024.02.10

  • Add - "-X" option for Connvert Dataset rsync command
  • Fix - Directory Listing detecting datasets as folders

 

2024.02.9

  • Add - Convert directory to dataset functionality
  • Add - Written property for snapshots
  • Add - Directory listing for root datasets
  • Fix - Tabbed view support
  • Fix - Configuration file associated errors
  • Fix - Units nomenclature
  • Fix - Pool information parsing errors
  • Remove - Unraid Notifications 

 

2023.12.8

  • Add - Directory Listing functionality
  • Fix - Optimize multiple operations

 

2023.12.4

  • Fix - Used and Free % bars/texts are now consistent with unraid theme and config
  • Fix - Set time format for the last refresh to short date and time
  • Fix - Detect Pools with used % under 0%
  • Fix - ZPool regex not catching some pools with dots or underscores in the name

 

2023.10.07

  • Add - Cache last data in Local Storage when using "no refresh"
  • Fix - Dataset admin Dialog - Error on select all datasets
  • Fix - Multiple typos
  • Fix - Special condition crashing the backend
  • Fix - Status refresh on Snapshots admin dialog
  • Change - Date format across multiple dialogs
  • Change - Local Storage for datasets and pools view options

 

2023.09.27

  • Change - "No refresh" option now doesn't load information on page refresh
  • Fix - Dynamic Config reload

 

2023.09.25.72

  • Fix - Config load
  • Fix - Exclusion patterns for datasets with spaces
  • Fix - Destroy dataset functionality

 

2023.09.25

  • Add - Lazy load functionality
  • Add - Nchan for updates
  • Add - Refresh options (Including on demand)
  • Add - Last refresh timestamp
  • Change - Quota Unit setting on Create Dataset Dialog
  • Change - Notifications and messages improvement
  • Change - Edit datasets UI as a dropdown menu
  • Fix - Default permissions for datasets (u:nobody, g:users)
  • Fix - Dataset passphrase input not masked
  • Fix - ZPool regex not catching some pools
  • Fix - Dataset passphrase size difference
  • Fix - Multiple typos
  • Fix - PHP 8 Compatibility

 

2022.07.04

  • Fix - Dataset names with spaces not being properly handled

 

2023.04.03

  • Add - Rename datasets UI
  • Add - Edit datasets UI
  • Add - unRaid 6.12 compatibility
  • Add - Lazy load for snapshots admin UI
  • Fix - Improve PHP 8 Compatibility

 

2023.02.28

  • Fix - PHP 8 Upgrades
  • Fix - Export pool command
  • Fix - Error on parsing dataset origin property

 

2022.12.04

  • Fix - Error on counting children

 

2022.11.12

  • Fix - Error on dialogs and input controls
  • Add - Clone capabilities for snapshots
  • Add - Promote capabilities for datasets

 

2022.11.05

  • Fix - Error on pools with snapshots but without datasets
  • Fix - Dialogs not sizing properly
  • Add - Snapshot Batch Deletion

 

2022.08.21

  • Change - UI into "folder" structure
  • Add - Support for ZFS Encryption
  • Add - Unlock and Lock actions for encrypted datasets
  • Fix - Error on unRaid 6.9.2 associated with session management

 

2022.08.02

 

  • Warning - Please Update your exclusion pattern!
  • Add - Browse Button for Datasets
  • Add - Support for listing volumes!!
  • Add - Lua script backend for loading dataset information (50% faster loading times)
  • Change - Exclusion pattern for datasets (Please check http://lua-users.org/wiki/PatternsTutorial)
  • Change - UI columns re-organized to the unraid way (sort of)

 

2022.04.13

  • Add - Dataset Snapshot Creation Option
  • Add - Settings for Snapshot Creation (pattern and prefix)
  • Change - "Destroy" and "Snapshots" buttons merged to "Actions"

 

2022.04.10

  • Add - Dataset Snapshot management (rollback, hold, release, destroy)
  • Fix - Installation script bug

 

2022.04.08

  • Add - Set permissions for new Datasets

 

2021.11.09a

  • Add - List of current Datasets at Dataset Creation
  • Add - Option to export a Pool (under construction)
  • Fix - Compatibility with unRAID RC versions

 

2021.10.08e

  • Add - SweetAlert2 for notifications
  • Add - Refresh and Settings Buttons
  • Add - Mountpoint information for Pools
  • Add - Configurable Settings for Refresh Time, Destructive Mode, Dataset Exclusions, Alert Max Days Snapshot Icon
  • Fix - Compatibility with Other Themes (Dark, Grey, etc.)
  • Fix - Improper dataset parsing
  • Fix - Regex warnings
  • Fix - UI freeze error on some systems when destroying a Dataset
  • Remove - Unassigned Devices Plugin dependency

 

2021.10.04

  • Initial Release.

 

Official GitHub Repo

 

https://github.com/IkerSaint/ZFS-Master-Unraid

 

 


Can you please add the plugin URL somewhere, or a link to the source on GitHub or wherever it is hosted?

Also your plugin support link from the CA App points to this thread.

Works well so far. Very neat to have this capability. Things I think would be cool to see:

 

1. Bulk delete

2. The ability to mount a snapshot (instead of just rolling back, in case you just need to pull a file or something)

 

Obviously the second one can be done easily with the CLI, but given how nice your stuff is looking so far...

  • Author

@ich777 GitHub link added! For the support link, I'll wait for the next update (a couple of days).

 

@muddro Thanks for the feedback. 1. That would be great; I'll give some thought to the UI design and add it in a future version. 2. I'm not so sure it's convenient; having dangling snapshots mounted here and there could be troublesome, and the current way of using the .zfs directory to access snapshots seems more convenient to me.

4 hours ago, Iker said: […]

Thanks! As for number 2, I think I put the wrong thing down; I meant the ability to clone a snapshot. From old TrueNAS documentation, which is where I picked up the practice:


 

Quote

 

Rollback is a potentially dangerous operation and causes any configured replication tasks to fail as the replication system uses the existing snapshot when doing an incremental backup. To restore the data within a snapshot, the recommended steps are:

Clone the desired snapshot.

Share the clone with the share type or service running on the TrueNAS® system.

After users have recovered the needed data, delete the clone in the Active Pools tab.

This approach does not destroy any on-disk data and has no impact on replication.

 

 

Hi, thanks for starting this wonderful plugin. I have quite a lot of datasets, so I wonder if it might be possible to show the pools in a list and then show/hide the datasets by clicking the pool, i.e. an expand/collapse feature. That way we could have a summary status across the whole system and more easily find problems without scrolling and potentially missing them in the process. Plus it would be a lot cleaner; currently mine takes up about 3-4 screens of scrolling.

 

Thanks.

 

<Edit> I think I should have opened my eyes - 'Show Datasets'!


Thanks again for this welcome update.

 

I suggest an idea: every time we perform an action in the "SNAPS" UI, the window automatically closes. Would it be possible to keep the window for the selected dataset open?

 

Alternatively, the ability to select several snapshots with checkboxes for bulk operations (only really applicable to deletion, I think).

 

For context: I tested Sanoid earlier, and due to a bad configuration it took 30 snapshots in a row for each of my 20 datasets. I had to use the CLI to clean up, as the window closed by itself after each deletion.

 

And thanks for your work!


  • Author

New update: now you can take snapshots; in the config section, it's possible to define the pattern and prefix for snapshot names; all the dataset buttons (Snapshot and Destroy) have been merged into "Actions".

 

@gyto6 Bulk deletion of snapshots is in the near-future plans, as a button in the "Admin Dataset Snapshots" dialog.

@Iker This is a very useful plugin! Thanks so much for building this!

 

One feature request: the ability to browse a dataset using the built-in Unraid web-based file browser, similar to other drives on the "Main" page. It would just link to the dataset paths that you already have listed, for example: http://unraid-server/Main/Browse?dir=/path/to/zfs/dataset

 

Thanks again!


  • Author
58 minutes ago, RedTechie said:

One feature request I would have is the ability browse to the dataset using the built in Unraid web based file browser....

 

In the early days of the plugin that was a feature, but it doesn't work for all pools. For example, my main pool is "hddmain", mounted at "/hddmain"; the file browser doesn't work there, and I have no idea why :S. It works for root directories like boot or mnt, but not for others like tmp or home; so it's not a problem on my end, but probably a file browser restriction on which directories it can navigate.

1 hour ago, Iker said: […]

 

Ah interesting! That's why it worked for me when I tested it manually in the URL, as my ZFS pool is mounted in /mnt/.

 

It may be a security-related issue. Yet they allow you to access /boot... I'm not sure.

One potential alternative could be to put it behind a condition: if the pool is mounted in an 'Unraid acceptable' location (like /mnt/*), then the icon would appear?

  • Author

I'll take a look, but any new major features will most likely take a while. Currently I'm working on rewriting some of the backend; the load/refresh speed isn't really that good, so I'm exploring ZFS APIs/programs that should free the plugin of parsing errors and improve overall speed.

Hi,

 

Thank you for your awesome work. This plugin is great.

 

One notice: could the plugin please respect the coloring and the alignment of the Main page?

[Screenshot: ZFS Master table coloring/alignment compared to the Main page]

At least roughly like the Unassigned Devices plugin.

 

Thanks,

Mark

Appreciate the plugin and the work that goes into it. One thing I've noticed is that it doesn't see one of my pools. It's missing my pool named "main".

On 4/16/2022 at 7:51 AM, nathan47 said: […]

I have the exact same issue with my server...


  • Author

Could you please post the output (In Text) of this command:

 

zpool list -v

 

root@TaylorPlex:~# zpool list -v
NAME                                       SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
boot-pool                                  936G  2.61G   933G        -         -     0%     0%  1.00x    ONLINE  -
  mirror-0                                 936G  2.61G   933G        -         -     0%  0.27%      -    ONLINE
    sdk3                                      -      -      -        -         -      -      -      -    ONLINE
    sdl3                                      -      -      -        -         -      -      -      -    ONLINE
mainpool                                   674T   358T   316T        -       48G     1%    53%  1.00x    ONLINE  -
  raidz3-0                                 187T   173T  13.9T        -       16G     0%  92.5%      -    ONLINE
    95135db1-c66b-4dc1-af86-9294e996cfd0      -      -      -        -         -      -      -      -    ONLINE
    035d8726-5610-4b32-a260-b391e0aeb809      -      -      -        -         -      -      -      -    ONLINE
    d3a94833-9293-495a-b905-24d285d722ab      -      -      -        -         -      -      -      -    ONLINE
    672d7b6f-965c-415a-95a0-ad5cd974fe7d      -      -      -        -         -      -      -      -    ONLINE
    127da1c4-b2aa-4942-baba-bfae490d66fa      -      -      -        -         -      -      -      -    ONLINE
    150feebe-72b9-4ce5-8039-64018c024f27      -      -      -        -         -      -      -      -    ONLINE
    d3a32ea7-c354-40fe-9ced-828d1d56f4b8      -      -      -        -         -      -      -      -    ONLINE
    23086c7a-8632-4fae-8894-8d53df099b13      -      -      -        -         -      -      -      -    ONLINE
    b7fa0359-1c83-4c3e-a11d-7bf0c5c45c76      -      -      -        -         -      -      -      -    ONLINE
    778df321-8c7e-44dd-a9dd-fe7dbb512822      -      -      -        -         -      -      -      -    ONLINE
    520c23be-cea1-4438-a720-a39a6b516f18      -      -      -        -         -      -      -      -    ONLINE
    1a0f9a46-9c42-400a-8cb6-f394ed7a6ec1      -      -      -        -         -      -      -      -    ONLINE
    4224fdfb-8d0a-46bb-bf3a-516cac17f430      -      -      -        -         -      -      -      -    ONLINE
    a9a2e832-f13f-4f19-acd8-b37f0262a09e      -      -      -        -         -      -      -      -    ONLINE
    896bdbbb-7dc1-476f-bc77-8626de2aec66      -      -      -        -         -      -      -      -    ONLINE
  raidz3-1                                 136T   135T  1.36T        -       16G     5%  99.0%      -    ONLINE
    9afc98cc-e84b-451e-bdb1-f6759f62635e      -      -      -        -         -      -      -      -    ONLINE
    b2953602-932f-4900-a25d-278298762b7f      -      -      -        -         -      -      -      -    ONLINE
    ad53b00b-9d7b-4135-8b92-b1c1a99d6854      -      -      -        -         -      -      -      -    ONLINE
    c234663c-12b6-45dc-bb7b-42135ed53cb9      -      -      -        -         -      -      -      -    ONLINE
    39bd5a10-8e37-4bb1-ad79-dae4692143ba      -      -      -        -         -      -      -      -    ONLINE
    8ee21b30-63f9-4026-8d66-1e69b0ff4972      -      -      -        -         -      -      -      -    ONLINE
    a86e5df2-3a00-4e4c-aec4-cb23627d6215      -      -      -        -         -      -      -      -    ONLINE
    78f631c6-4bee-45fb-8578-36396683c759      -      -      -        -         -      -      -      -    ONLINE
    6def9ab5-9b7c-430a-86f2-ae03ed090493      -      -      -        -         -      -      -      -    ONLINE
    ebb7fbd2-6f7c-400e-ab75-8d664fb15762      -      -      -        -         -      -      -      -    ONLINE
    d83df2df-a0fe-4f0a-895e-c8a02ab44781      -      -      -        -         -      -      -      -    ONLINE
    82c424e4-6d1c-43a7-9ef1-bd9f4b7fe1d5      -      -      -        -         -      -      -      -    ONLINE
    0c988f98-8ae4-40d5-9a9e-a0ced36d5391      -      -      -        -         -      -      -      -    ONLINE
    bab6441a-9605-4683-9599-64efcdec8477      -      -      -        -         -      -      -      -    ONLINE
    827b4605-209e-43d0-bbb2-e44d2f2414d0      -      -      -        -         -      -      -      -    ONLINE
  raidz3-2                                 187T  49.7T   137T        -       16G     0%  26.6%      -    ONLINE
    28b0bf94-4328-4d4c-a3ae-46e010e21f66      -      -      -        -         -      -      -      -    ONLINE
    30a46b87-2ac4-4c40-bf53-e84cf222f5c3      -      -      -        -         -      -      -      -    ONLINE
    e89b48ad-4d6e-4cae-9932-7fda1220d491      -      -      -        -         -      -      -      -    ONLINE
    a16701ab-a151-4b1b-9afc-ca5303d1b53a      -      -      -        -         -      -      -      -    ONLINE
    1a6ee52c-94a8-463a-9edb-b3db277863f0      -      -      -        -         -      -      -      -    ONLINE
    826de601-2706-491b-af3e-e2916fe223c2      -      -      -        -         -      -      -      -    ONLINE
    84571fba-f1ba-4af0-9dd0-1ed54e8ffd1e      -      -      -        -         -      -      -      -    ONLINE
    534c0af8-304d-4296-9133-921bd90a4dee      -      -      -        -         -      -      -      -    ONLINE
    4486918f-d8b1-4479-9439-259b06b3d3c6      -      -      -        -         -      -      -      -    ONLINE
    0435633d-612a-471b-aa56-345caac43ba7      -      -      -        -         -      -      -      -    ONLINE
    6eecb4ff-c9e9-423a-810a-0b65a6a1dccd      -      -      -        -         -      -      -      -    ONLINE
    1b488a53-247d-4786-ba8f-40c29c6ea5e3      -      -      -        -         -      -      -      -    ONLINE
    6f901e50-0b53-4ea8-b83b-712766853919      -      -      -        -         -      -      -      -    ONLINE
    c6d66210-4621-4d59-a751-a5dc9a327d84      -      -      -        -         -      -      -      -    ONLINE
    777da20a-67e6-4452-b892-7a5d548e41cd      -      -      -        -         -      -      -      -    ONLINE
  raidz3-3                                 164T   186G   164T        -         -     0%  0.11%      -    ONLINE
    8056a492-c16a-4bdc-93c1-bb13dfb88af5      -      -      -        -         -      -      -      -    ONLINE
    2ba06a18-4c8b-44d5-b858-f77e03581051      -      -      -        -         -      -      -      -    ONLINE
    f882076c-76af-4bc7-b531-30a9369a73fc      -      -      -        -         -      -      -      -    ONLINE
    47a89ace-81b4-4d71-9851-d4e7bdd0bd88      -      -      -        -         -      -      -      -    ONLINE
    8d9e4c4f-c7a2-4bee-b1ff-4e412f9841ba      -      -      -        -         -      -      -      -    ONLINE
    e30045b7-4a60-40f7-9899-45e4ce166cca      -      -      -        -         -      -      -      -    ONLINE
    0a09baed-9f62-407f-9be8-4909ab3c1060      -      -      -        -         -      -      -      -    ONLINE
    0f571f77-cf5b-493a-bf81-1b4b7e416b61      -      -      -        -         -      -      -      -    ONLINE
    fa835582-aef7-461b-99b4-803ffb30d13b      -      -      -        -         -      -      -      -    ONLINE
    2ce51f18-a0f1-4071-99ac-f3b3896ec42d      -      -      -        -         -      -      -      -    ONLINE
    14792535-5e65-4eab-8993-9cdcaaa4799d      -      -      -        -         -      -      -      -    ONLINE
    33e560ce-e50e-492a-8de4-b823767dd3f6      -      -      -        -         -      -      -      -    ONLINE
    b96570e2-b608-474a-aec5-053506cf0479      -      -      -        -         -      -      -      -    ONLINE
    3a64abe6-379a-4be4-9f4b-b043243fad54      -      -      -        -         -      -      -      -    ONLINE
    c7ee0ddf-17b0-4853-9bb5-9d32e03619f3      -      -      -        -         -      -      -      -    ONLINE
cache                                         -      -      -        -         -      -      -      -  -
  0e4c710e-b0e0-4b23-bf5c-40db6bc681d4    1.75T   205G  1.55T        -         -     0%  11.5%      -    ONLINE
spare                                         -      -      -        -         -      -      -      -  -
  79a8d09a-a4dc-467a-866d-e364f4a30c79        -      -      -        -         -      -      -      -     AVAIL
  c6effcbe-3c88-445f-9d02-91b5ec741ec5        -      -      -        -         -      -      -      -     AVAIL
  7121a1a9-21f4-43b1-be4a-a4842ab63d90        -      -      -        -         -      -      -      -     AVAIL
  b8336d4d-0b11-490c-9bd0-77a9b80ce584        -      -      -        -         -      -      -      -     AVAIL
  07ceebe0-b976-43ed-bcc2-2e9737f31666        -      -      -        -         -      -      -      -     AVAIL
  1e8efebc-da56-471b-a46d-1a27d559a7d3        -      -      -        -         -      -      -      -     AVAIL
ssdpool                                   11.6T  2.79T  8.84T        -         -     2%    23%  1.00x    ONLINE  -
  raidz1-0                                5.81T  1.39T  4.42T        -         -     2%  23.9%      -    ONLINE
    scsi-35000cca05068a8d0                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca050697eac                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca05069d390                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506a1718                    -      -      -        -         -      -      -      -    ONLINE
  raidz1-1                                5.81T  1.40T  4.41T        -         -     2%  24.1%      -    ONLINE
    scsi-35000cca0506a4cd8                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0506a87e4                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0531605e4                    -      -      -        -         -      -      -      -    ONLINE
    scsi-35000cca0531606f0                    -      -      -        -         -      -      -      -    ONLINE
cache                                         -      -      -        -         -      -      -      -  -
  d54e0d6a-41f2-4d79-a344-7f3f93793e04    1.75T   884M  1.75T        -         -     0%  0.04%      -    ONLINE
spare                                         -      -      -        -         -      -      -      -  -
  scsi-35000cca0532a531c                      -      -      -        -         -      -      -      -     AVAIL
  scsi-35000cca053410650                      -      -      -        -         -      -      -      -     AVAIL
root@TaylorPlex:~# 

I recently tried to re-import my "main" pool as "mainpool" to see if it helped. It didn't.

54 minutes ago, nathan47 said: […]

WOW!!! That's a lot of disks :) 

  • Author
1 hour ago, nathan47 said: […]

 

 

That's quite a system! I fixed the regex for identifying the pools; the "EXPANDSZ" column was the cause. In about 10 minutes the update should be live. Thanks for your help.


thanks, works great now.


Could this plugin get a feature to create SMB shares for datasets via the GUI, similar to the "unassigned devices" plugin?

Currently I'm modifying "/boot/config/smb-extra.conf" and triggering an SMB config reload via "/usr/bin/smbcontrol $(cat /var/run/smbd.pid 2>/dev/null) reload-config 2>&1", so I don't have to stop the array for SMB share changes.
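
For anyone wanting to replicate this, a minimal share stanza in smb-extra.conf might look like the following (dataset path and user are hypothetical):

[mydataset]
    path = /mnt/tank/mydataset
    browseable = yes
    read only = no
    valid users = someuser

The smbcontrol reload command above then applies the change without restarting the array.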

  • Author

Hi @bergi9, that is a great idea; I will probably implement it in a couple of versions; right now I'm focused on refactoring part of the backend. Do you think just having templates would be good? I mean an option on the dataset for "Create SMB Share" that presents the templates the way Unraid does: "Private, Read Only, Public".

Hey, how is the "Set permissions" part of the Create Dataset supposed to be used?

I keep getting this error message when I try to fill in one of my Unraid share users (and I keep having SMB issues on Windows, no write access possible so far; that's why I'm trying to investigate ZFS user permissions).

Cheers!

 

[Screenshot: error message]

  • Author
10 minutes ago, chrismuc said: […]

 

That's a weird error. Set Permissions is very straightforward: just specify the Linux permissions that you want the plugin to set for the folder (777, 775, 755, etc.). Most of the time I use 775; it saves me a lot of trouble with SMB writes.
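
For reference, a rough manual equivalent, assuming a dataset mounted at /mnt/tank/mydata (a hypothetical path; per the changelog, the plugin's default ownership is nobody:users):

chown nobody:users /mnt/tank/mydata    # default owner and group applied by the plugin
chmod 775 /mnt/tank/mydata             # the permission value entered in the dialog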


If you like, you could send me the parameters you are using for the dataset creation in a PM, and I could take a look if there is anything wrong.

@Iker Glad to hear that you plan to implement it. It's not a time-critical feature for me, but nice to have.

 

I took the commands to reload the SMB config files from https://github.com/dlandon/unassigned.devices/blob/master/source/Unassigned.devices/include/lib.php#L1627

Maybe you could look at how the unassigned.devices plugin handles shares. As of 6.10-rc8, the unassigned devices share feature does not work for me.

Reading the unassigned devices code on GitHub, it appears to support a range of share options like the Unraid share page. Maybe it could help you further.

 

On 5/11/2022 at 7:11 PM, Iker said:

Do you think that just having templates would be good?, I mean an option on the dataset for "Create SMB Share" and then present the templates as unRaid do "Private, Read Only, Public".

Yes, that works for me. But also, if a share has already been created on a dataset, add the option "Remove SMB Share".
