  • 6.12.0-RC2 and ZFS Z1 (RAIDZ1) Cache - can't delete empty folders


    DataCollector
    • Minor

    Hello.

     

    I don't know if I found a 6.12.0-RC2 or ZFS bug, or if it is a problem in my configuration.

     

    When I used 6.11.5 stable, I had three 2 TB SSDs combined with BTRFS as a cache for several SMB shares.
    There I copied files from an external Windows PC to the share, and the files were buffered in this cache ("cachesam").
    Then I could move the files and directories from the cache to the disks in the array, either automatically (mover) or manually (mc, Krusader, or the File Manager plugin).

     

    Now I changed to 6.12.0-RC2, deleted this cache, and recreated it as ZFS Z1 (RAIDZ1).

    Here I also copy files from the same external Win PC to the SMB share, and they get buffered on the cache ("cachesam").
    Then I try to manually move the directories/folders, including the files, to the array.
    The files get moved, but mc and even the File Manager plugin can't move or even delete the now empty DATA or AEV folders on the cache ("cachesam").

     

    If technical info is necessary: see my footer; this is my 2nd system (Shipon).

     

    Here are some pics:
    Screenshot 1: mc cannot move the DATA directory to the array. The files in it were moved.
    Screenshot 2: The File Manager plugin tries to delete the empty DATA folder, but after I click Proceed and reload the cachesam directory, DATA still exists.
    Screenshot 3: Emptied directory (folder) /mnt/cachesam/DATA shown in the File Manager plugin and mc simultaneously.

    DELETE-ERROR-11--2023-03-30 13_08_49-102 Tessa Main (TESSA-MAIN1064) – VNC Viewer.png

    DELETE-ERROR-22--2023-03-30 13_08_49-102 Tessa Main (TESSA-MAIN1064) – VNC Viewer.png

    DELETE-ERROR-00--2023-03-30 13_08_49-102 Tessa Main (TESSA-MAIN1064) – VNC Viewer.png




    User Feedback

    Recommended Comments

    If the share was created using the GUI, it will be a ZFS dataset; make sure it's empty and you can delete it with:

    zfs destroy cachesam/DATA

     

    Note that if you create the folder/share manually, with mkdir for example, it will be a regular folder that can be deleted normally.
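A minimal shell sketch of the distinction described above (the helper names are mine, not part of Unraid; the pool/share names are the ones from this thread, and the layout assumes Unraid's /mnt/&lt;pool&gt; mounts):

```shell
# Hedged sketch: decide whether a path under /mnt/<pool> is a ZFS dataset
# or a plain folder, then remove the empty dir accordingly.
# Helper names are hypothetical; "cachesam/DATA" comes from this thread.

dataset_from_path() {
  # Strip the /mnt/ mount prefix: /mnt/cachesam/DATA -> cachesam/DATA
  echo "${1#/mnt/}"
}

remove_empty_share_dir() {
  local path="$1" ds
  ds="$(dataset_from_path "$path")"
  if zfs list "$ds" >/dev/null 2>&1; then
    zfs destroy "$ds"   # a dataset: rmdir/mc cannot remove it
  else
    rmdir "$path"       # a regular folder (e.g. created with mkdir)
  fi
}

# Usage: remove_empty_share_dir /mnt/cachesam/DATA
```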

    Link to comment

    When the share is empty, you can use the GUI to delete it.

    See Shares -> sharename -> delete

     

    Ps keep in mind a share is a folder, but a special case at the same time.

    Shares are best managed via the GUI

     

    Link to comment
    12 hours ago, JorgeB said:

    If the share was created using the GUI, it will be a ZFS dataset; make sure it's empty and you can delete it with:

    zfs destroy cachesam/DATA

     

    Note that if you create the folder/share manually, with mkdir for example, it will be a regular folder that can be deleted normally.

    Like I mentioned: the folders are automatically created (by Unraid) as I copy data from an external Windows PC to the share (with this cache/ZFS pool).

    Link to comment
    13 hours ago, bonienl said:

    When the share is empty, you can use the GUI to delete it.

    See Shares -> sharename -> delete

    Stop: I do not want to delete the complete share, including the folder on the array!

    Unraid uses the ZFS pool as a cache and creates additional folders/directories inside the share on the cache SSDs.

    And I want to manually delete the folders on the cache SSDs.

     

    I guess my description was too shallow, so I add 3 additional screenshots:

    Screenshot 1: View of my Shares

    Screenshot 2: View of the pool "cachesam" (pool of 3 Samsung NVMe SSDs, ZFS)

    Screenshot 3: Folders/directories created by Unraid, because the shares AEV and DATA use the pool "cachesam" as cache.

     

    13 hours ago, bonienl said:

    Ps keep in mind a share is a folder, but a special case at the same time.

    Thanks, but I do not want to kill the complete share (every folder, even on the array). I want to delete the now empty folders created by Unraid because this pool is used as cache. (And yes, I do try to kill the folder on the pool directly, not via the user share, which would include the same-named folders on the array. And I prefer to use mc rather than the console.)

    With Unraid 6.11.5 stable and a BTRFS cache it worked that way.

    But since BTRFS pools (consisting of several disks/SSDs) are not very stable, I switched to 6.12.0-rc2 with ZFS because I thought it would work the same at the file level, except be more stable than BTRFS RAID.

     

    13 hours ago, bonienl said:

    Shares are best managed via the GUI

    Yes, but I do not want to kill the user shares called AEV or DATA; I want to delete only the now empty folder(s) on the cache disk/pool/SSDs.

    Shares2023-03-31 02_39_18-102 Tessa Main (TESSA-MAIN1064) – VNC Viewer.png

    Shares2-2023-03-31 02_40_16-102 Tessa Main (TESSA-MAIN1064) – VNC Viewer.png

    Shares-3-2023-03-31 02_40_47-102 Tessa Main (TESSA-MAIN1064) – VNC Viewer.png

    Edited by DataCollector
    clarify
    Link to comment
    6 hours ago, DataCollector said:

    Like I mentioned: the folders are automatically created (by Unraid) as I copy data from an external Windows PC to the share (with this cache/ZFS pool).

    Same as if you create them using the GUI: Unraid will create a ZFS dataset. This is intended behavior, and as mentioned those can only be deleted with zfs destroy; alternatively, you can create the folders manually with mkdir before doing the transfer, and then you can delete them normally.
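The mkdir workaround can be sketched as follows; BASE is a stand-in directory so the commands are safe to try anywhere, while on an actual Unraid box it would be the pool mount, e.g. /mnt/cachesam:

```shell
# Sketch of the mkdir workaround: a folder you create yourself stays a
# plain directory instead of becoming a ZFS dataset, so it can be deleted
# normally later. BASE is a stand-in; on Unraid it would be /mnt/cachesam.
BASE="$(mktemp -d)"

mkdir -p "$BASE/DATA"   # pre-create the folder BEFORE the SMB transfer
# ... copy files in over SMB, then let the mover drain them to the array ...
rmdir "$BASE/DATA"      # succeeds once empty - no zfs destroy needed
```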

    Link to comment

    Also note that if you use the mover to move the data from the pool to the array, the mover will delete those empty datasets at the end.

    Link to comment
    3 hours ago, JorgeB said:

    Also note that if you use the mover to move the data from the pool to the array, the mover will delete those empty datasets at the end.

    I tried that: no, an empty (AEV) folder still exists.

     

    But thanks anyway. It seems ZFS may be more stable than BTRFS, but sadly it does not behave the same for normal file operations.

    😔

    • Upvote 1
    Link to comment
    Just now, DataCollector said:

    I tried that: no, an empty (AEV) folder still exists.

    I cannot replicate that, and there are no other reports; please enable mover logging, run the mover, and post the diagnostics.

    Link to comment
    2 minutes ago, JorgeB said:

    I cannot replicate that, and there are no other reports; please enable mover logging, run the mover, and post the diagnostics.

    Since AEV is now in use (since this morning [German timezone]), I cannot test it again right now.

    I will test it again in several hours.

     

    Link to comment

    I made an update to the Dynamix File Manager plugin, which now allows ZFS shares to be deleted or moved.

     

    There are a few cautions to take into account:

    1. Shares created in the GUI on a ZFS pool are automatically set up as a ZFS dataset. When moving these shares to a non-ZFS pool (or the array), the dataset is lost; content is never lost.
    2. Shares with a specific (only) pool designation need to be updated to reflect the new destination.

     

    Example:

    The share "third" is created and exists only on the ZFS pool

     

    image.png

     

    Now we want to move this share to the "cache_extra" pool, which is a btrfs pool.

    It is required to update the share settings

     

    image.png

     

    Use the file manager to move the share

     

    image.png

     

    After the move the share "third" is completely gone from the ZFS pool

     

    image.png

     

    AGAIN: USE WITH CAUTION!

     

    Link to comment

    So, I could test it again yesterday (using the mover to remove the empty folder from the ZFS pool/cache).

     

    In this test the mover did move the empty AEV folder (after manually starting the mover, it was done in a flash [no surprise, because there was only the empty folder to move], and then the AEV folder was gone from the ZFS pool/cache).
    I am puzzled, because before I started this discussion, I did the same thing.

     

    So now I know that I can at least use this method to remove the empty folders from the ZFS pool/cache.

     

    I thank bonienl very much for updating the File Manager plugin to make it usable for this purpose!

     

    I still think it would be useful and user-friendly if the supplied tool (mc) could also work with it.

    Edited by DataCollector
    • Like 1
    Link to comment
    On 3/31/2023 at 6:18 AM, DataCollector said:

    I tried that: no, an empty (AEV) folder still exists.

     

    But thanks anyway. It seems ZFS may be more stable than BTRFS, but sadly it does not behave the same for normal file operations.

    😔

     

    I am seeing the exact same issue since I switched my cache from BTRFS to ZFS. The top-level "share" folders will not delete from the cache even though they are moved with the mover and are completely empty. When I try to delete them, even from the command line, it gives some message like the resource is busy. Whether through the mover or manually, I can move and delete folders and files underneath the "share" top-level directories, so that is good.

    It is just the top-level directories that are the problem. I have been using Unraid heavily for several years now and I have never seen this issue until I moved to ZFS. It doesn't cause any real issues; it's just annoying because I am OCD about keeping my server cleaned up.

    Link to comment
    5 hours ago, howitzer79 said:

    When I try to delete them, even from the command line, it gives some message like the resource is busy.

    What command are you using? Top-level shares on ZFS, if created by Unraid, will be datasets, so you would need to use:

     

    zfs destroy pool/dataset_name
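If zfs destroy itself then reports the dataset is busy, it is usually still mounted or has open files. A hedged checklist, with the live commands left as comments since they need a real pool ("cachesam/DATA" is the dataset name from this thread; substitute your own):

```shell
# Assumed dataset name from this thread; substitute your own.
ds="cachesam/DATA"

# 1. Is the dataset still mounted?
#      zfs list -o name,mounted "$ds"
# 2. Is anything holding files open inside its mountpoint?
#      lsof +D "/mnt/$ds"
# 3. Unmount first, then destroy:
#      zfs unmount "$ds" && zfs destroy "$ds"
```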

     

    Link to comment

    Can confirm, I'm having the same problem.

     

    I have a 1-drive XFS pool that works, but once I add a ZFS cache drive I get permission errors and problems. Also, the share folder cannot be deleted unless I use zfs destroy.

     

    I'm on 7.0.0 beta 2.

    Link to comment
    5 hours ago, JustOverride said:

    Also the share folder is unable to be deleted unless I use zfs destroy.

    This is normal:

     

    On 7/24/2024 at 7:35 AM, JorgeB said:

    Top-level shares on ZFS, if created by Unraid, will be datasets, so you would need to use:

     

    zfs destroy pool/dataset_name

     

    Link to comment

    If this is the case, you can't use a ZFS pool as a cache pool for another pool? If so, I guess it would make sense why, when it is set up this way, things just don't work well. If this is the expected behavior, then I think Unraid shouldn't allow setting a ZFS pool as cache, imo, or at least show a message about the expected use case.

    Edited by JustOverride
    Link to comment
    15 hours ago, JustOverride said:

    If this is the case, you can't use a ZFS pool as a cache pool for another pool?

    Not sure I follow, why not?

     

     

    Link to comment
    On 8/25/2024 at 4:50 AM, JorgeB said:

    Not sure I follow, why not?

     

     

    Because I see that right now this is causing problems, as I mentioned above, and you said that was normal/expected. To explain in further detail...

     

    A 1-drive HDD XFS pool with a 1-drive NVMe ZFS pool (as cache) is causing folders to be created in the cache pool that cannot be deleted unless I use zfs destroy. These folders already exist in the HDD XFS pool. Also, I'm seeing permission errors that don't happen otherwise.

     

    Is this the expected behavior or is this a bug?

    Link to comment

    That is normal, as mentioned, but the mover will still destroy the ZFS datasets; if you are not using the mover but some kind of custom script, you would need to adjust it.
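For anyone using a custom move script instead of the mover, the adjustment could look roughly like this (a sketch only, assuming pools mount at /mnt/&lt;pool&gt;; the pool name is the one from this thread, and the loop is guarded so it is a no-op on systems without ZFS):

```shell
# POOL is an example name from this thread; adjust to your own pool.
POOL="cachesam"

# After a custom script moves files off the pool, empty leftover datasets
# must be destroyed explicitly (the mover does this step itself).
if command -v zfs >/dev/null 2>&1; then
  zfs list -H -o name -r "$POOL" | grep "^$POOL/" | while read -r ds; do
    # Only destroy datasets whose mountpoints are now empty.
    [ -z "$(ls -A "/mnt/$ds" 2>/dev/null)" ] && zfs destroy "$ds"
  done
fi
```

Note this simple sketch does not order nested datasets depth-first; a parent containing a child dataset will not look empty and is skipped, so rerunning the loop drains nested layouts one level per pass.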

    Link to comment
    6 hours ago, JustOverride said:

    The mover wasn't destroying it.

    It should, as long as it's empty after the move; enable mover logging, run the mover, and post the diagnostics.

    Link to comment
    On 8/29/2024 at 3:19 AM, JorgeB said:

    It should, as long as it's empty after the move; enable mover logging, run the mover, and post the diagnostics.

    I've already taken it off and am going to change my layout instead, running ZFS from a big NVMe so I won't need to cache from another drive, but if I decide to run it this way again and run into problems, I'll post it. This is running in my production environment (I know I shouldn't run betas in production, but you know, whatever); that being said, I'm not going to experiment further.

    BTW, why isn't the destroy command just added to the GUI delete when needed?

    Edited by JustOverride
    Link to comment





