Report Comments posted by unr41dus3r

  1. 4 hours ago, JorgeB said:

    On second thought, I think a zfs clone would be more efficient for what you need, assuming it still works:

    zfs snapshot cache-mirror/appdata@borgmatic
    zfs clone cache-mirror/appdata@borgmatic cache-mirror/borgmatic

     

    This won't take any extra space unless the source dataset is changed, same as a snapshot.

     

    Then work with /mnt/cache-mirror/borgmatic and once done destroy the clone and the snapshot:

    zfs destroy cache-mirror/borgmatic
    zfs destroy cache-mirror/appdata@borgmatic

     

     

    This works great; this is exactly the solution I was looking for! A sketch of how I am wiring it into my backup run follows below.

    It is still a bit strange that the dataset gets stuck if you use the path in a container.
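
    For reference, here is a minimal sketch of how I am using this during the backup run; the backup invocation and config path are placeholders for my setup, not something from Unraid or JorgeB:

    # 1. Freeze the current state of appdata (takes no extra space)
    zfs snapshot cache-mirror/appdata@borgmatic
    # 2. Expose the frozen state as a clone (also no extra space unless data changes)
    zfs clone cache-mirror/appdata@borgmatic cache-mirror/borgmatic
    # 3. Back up from the clone's mountpoint (placeholder backup command)
    borgmatic --config /mnt/user/appdata/borgmatic/config.yaml
    # 4. Clean up: the clone has to be destroyed before the snapshot it depends on
    zfs destroy cache-mirror/borgmatic
    zfs destroy cache-mirror/appdata@borgmatic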

  2. 1 minute ago, JorgeB said:

    On second thought, I think a zfs clone would be more efficient for what you need, assuming it still works:

    zfs snapshot cache-mirror/appdata@borgmatic
    zfs clone cache-mirror/appdata@borgmatic cache-mirror/borgmatic

     

    This won't take any extra space unless the source dataset is changed, same as a snapshot.

     

    Then work with /mnt/cache-mirror/borgmatic and once done destroy the clone and the snapshot:

    zfs destroy cache-mirror/borgmatic
    zfs destroy cache-mirror/appdata@borgmatic

     

     

    Thanks, I will try it later.

  3. Another interesting find.

     

    If I create a snapshot at cache-mirror@borgmatic and that snapshot gets stuck,
    I receive the error I described in the RC7 thread when stopping the array or rebooting:

     

    Jun 13 15:25:22 tower emhttpd: shcmd (9223): /usr/sbin/zpool export cache-mirror
    Jun 13 15:25:22 tower root: cannot unmount '/mnt/cache-mirror': pool or dataset is busy
    Jun 13 15:25:22 tower emhttpd: shcmd (9223): exit status: 1

     

     

    If I create the snapshot at cache-mirror/appdata@borgmatic instead and it gets stuck,
    it is still possible to stop the array normally.

  4. 25 minutes ago, JorgeB said:

    You could create a child dataset just for borgmatic, then snapshot and send/receive only that dataset.

     

    Could you explain what you mean by a child dataset? I don't know exactly how to do this.
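
    If I understand the general idea, a child dataset would just be a dataset nested under appdata, so that only this part can be snapshotted and sent on its own. A rough sketch of how I imagine it (all names here are made up by me, so please correct me if this is not what you meant):

    # Create a dataset nested under appdata; it gets its own mountpoint automatically
    zfs create cache-mirror/appdata/borgmatic-data
    # Move the relevant data into the new child dataset (it cannot be converted in place)
    rsync -a /mnt/cache-mirror/appdata/old-borgmatic-dir/ /mnt/cache-mirror/appdata/borgmatic-data/
    # Then snapshot and send/receive only this child instead of the whole appdata dataset
    zfs snapshot cache-mirror/appdata/borgmatic-data@backup
    zfs send cache-mirror/appdata/borgmatic-data@backup | zfs receive cache-mirror/borgmatic-backup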

     

    About the symlink loop error message in a container: is this expected, or is there something wrong?

  5. 9 minutes ago, JorgeB said:

    Yes, but unless I missed something you were also snapshotting the complete appdata folder before:

    zfs snapshot cache-mirror/appdata@borgmatic

    and then backing up the complete snapshot.

    Or did I misunderstand?

     

     

    Yes, I did take a complete snapshot of the appdata folder, but that only means the state is frozen and any new writes are stored on top of it.

     

    With the send | receive command, it copies the complete frozen state into a new dataset.

    So I have the cache-mirror/appdata dataset including its snapshot, and now also a new dataset, cache-mirror/borgmatic.

     

    I don't know exactly how ZFS handles compression and dedup in this situation, but it now uses twice the space.

    It also performed what was more or less a full copy, judging by the saturation of my SSDs.

     

    Edit:

    With this method I don't get the symlink loop error, or the "dataset is busy" error when trying to destroy.

     

    But the problem is that it is still too much overhead, as far as I understand.

     

    Example:

    • cache-mirror/appdata = 150GB
    • cache-mirror/appdata@borgmatic = the snapshot only uses a few MB, maybe 1GB

     

    • Now I start a backup of the snapshot (the frozen state at that specific time)

     

    When I instead issue the send | receive command, I need much more space, time, and I/O:

    • cache-mirror/appdata = 150GB
    • cache-mirror/appdata@borgmatic = 1GB (for example)
    • cache-mirror/borgmatic = 150GB

     

    So I need double the space, and it takes time to copy the data.

     

    I hope it is clear what I mean.
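
    To make the space issue concrete, this is how I checked the usage before and after the send | receive (the figures in the comments are just the rough numbers from my example above):

    # USED/REFER show why send | receive is expensive here: the snapshot only
    # references changed blocks, but the received copy is a full, independent dataset
    zfs list -o name,used,refer cache-mirror/appdata cache-mirror/borgmatic
    zfs list -t snapshot -o name,used,refer cache-mirror/appdata@borgmatic
    # Roughly: appdata ~150G used, the snapshot only ~1G used, but borgmatic another ~150G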

  6. Hmm, I am afraid this is not what I want to achieve.

     

    zfs send cache-mirror/appdata@borgmatic | zfs receive cache-mirror/borgmatic

     

    The process is running now, but I am afraid it is copying all of the data from the snapshot into a new cache-mirror/borgmatic dataset?

    I don't want to duplicate the complete folder; my appdata folder is 150GB. ^^

     

    This is counterproductive: the backup itself would only copy the delta, but now I would duplicate the complete directory for every backup.

    I will still test it; even if it works this way, it takes quite some time to copy the data.

  7. OK, I tested the following.

     

    Working:

    • Create Snapshot
    • Start Container
    • Stop Container
    • Destroy Snapshot

     

    Not Working:

    • Create Snapshot
    • Start Container
      • ls /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic
        • ls: /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic/: Symbolic link loop
    • Stop Container
    • Destroy Snapshot
      • cannot destroy snapshot cache-mirror/appdata@borgmatic: dataset is busy

     

     

    Trying to fix the problem:

    • Disable Docker
      • Still unable to destroy the snapshot

     

    • Stop Array
    • Start Array (Docker still disabled)
      • Destroy now works

     

    Looks like a process is stuck somewhere?
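
    To narrow that down, these are the checks I want to run next (just generic ZFS/Linux tools, nothing Unraid-specific):

    # Any explicit user holds on the snapshot would also cause "dataset is busy"
    zfs holds cache-mirror/appdata@borgmatic
    # The .zfs/snapshot/<name> directory is auto-mounted on access; if a container's
    # mount namespace still references it, the destroy fails. Look for such mounts:
    grep borgmatic /proc/*/mountinfo
    # And check for open file handles under the snapshot path
    lsof +D /mnt/cache-mirror/appdata/.zfs/snapshot/borgmatic 2>/dev/null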

     

    As a next step I will try your suggestion.

     

  8. I want to report something; maybe it is already known.

    I use a ZFS cache pool.

     

    All newly created shares get a dataset.

    The problem is that existing shares/folders that already live on the pool (for example appdata) will never be turned into a dataset, because the plain "folder" stays on the pool.

    Only a newly created share gets a dataset, so I have now migrated all of my shares/folders to datasets (roughly as sketched below).

    I don't know whether this can be automated or whether it should at least be documented somewhere.
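
    For anyone else hitting this, the migration I did was essentially the following, done while the share was idle (paths are just an example):

    # The existing share is only a plain folder on the pool, so move it aside first
    mv /mnt/cache-mirror/appdata /mnt/cache-mirror/appdata_old
    # Create a real dataset with the share's name; it mounts at the same path
    zfs create cache-mirror/appdata
    # Copy the data back preserving permissions/ownership, then remove the old folder
    rsync -a /mnt/cache-mirror/appdata_old/ /mnt/cache-mirror/appdata/
    rm -r /mnt/cache-mirror/appdata_old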

  9. 6 hours ago, JorgeB said:

    Thanks for the report, it does suggest the issue is still present; at least it is good to see why it was not deleted (dataset busy), earlier releases didn't show that.

     

    Sorry, I am hijacking this thread now, but I think the comments about RC7 are over anyway. ;)

     

    I think I found the problem.

    I tried to shut down the server and had a problem unmounting cache-mirror.

     

    I found out that a snapshot is stuck/busy and I can't destroy it:

     

    cannot destroy snapshot cache-mirror@backup1: dataset is busy

     

    I create this snapshot with "zfs snapshot cache-mirror@backup1" for my backup container and destroy it with "zfs destroy cache-mirror@backup1", but then I receive the error above.

    At the moment I don't know why this happens. I use this snapshot to back up my appdata folder.

     

    NAME                   USED  AVAIL  REFER  MOUNTPOINT
    cache-mirror@backup1  8.26G      -   167G  -

     

    I will debug it next; maybe you have an idea.

     

    Edit:

    It is possible this is a snapshot from an older RC; I started with RC5 and it could be from that version.

    The Docker container is of course disabled.

  10. @JorgeB

    Sadly, this night I received the old error again.

     

    rserver shfs: /usr/sbin/zfs create 'cache-mirror/share1' |& logger
    rserver root: cannot create 'cache-mirror/share1': dataset already exists
    rserver shfs: command failed: 1

     

    I am on RC8 now, but the dataset was probably created with RC7

     

    I can see that it tried to destroy the dataset but couldn't because it was busy. Before this, a scheduled forced mover run was in progress.

     

    Log of the failed destroy (maybe a second try after some delay could be implemented? A rough sketch of what I mean follows the log):

     

    tower shfs: /usr/sbin/zfs destroy -r 'cache-mirror/share1' |& logger
    tower root: cannot destroy 'cache-mirror/share1': dataset is busy
    tower shfs: error: retval 1 attempting 'zfs destroy'
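
    To illustrate the "second try" idea, something along these lines is what I mean; this is only a sketch of a retry wrapper, not how Unraid's shfs actually works:

    # Retry the destroy a few times with a delay, in case the dataset is only briefly busy
    for i in 1 2 3; do
        zfs destroy -r cache-mirror/share1 && break
        echo "destroy attempt $i failed, retrying in 60s" | logger
        sleep 60
    done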

     

    Some hours after this error I got the message that the new dataset cache-mirror/share1 could not be created.

     

    I did the following now:

    • In /mnt/cache-mirror/ the share1 folder is missing.
    • With "zfs list" I can still see the share1 dataset.
    • "zfs mount -a" mounted the dataset correctly at /mnt/cache-mirror/share1.
    • After a mover run, the share1 folder and dataset were correctly removed.

     

    As I wrote above, I am on RC8 now, BUT the dataset was created with RC7 as far as I remember.

    I will report back in the RC8 thread or create a new bug report if the error occurs again.

  11. 9 hours ago, JorgeB said:

    That looks more like a plugin or custom setting reacting to the 'fail' part of pipefail; update to rc8, where that no longer shows up in the log.

     

    No more issues with the zfs datasets so far?

     

    You are completely right! It was a script I had. Thanks!

     

    It looks like the dataset errors are also gone! So your idea of recreating the datasets with RC7 seems to have fixed the problem.

    I will update to RC8 next.

  12. 2 hours ago, JorgeB said:

    Those are not errors, a dataset is destroyed after the mover runs (if it's empty) and it's recreated when new data is written to that share.

     

    I thought so; still, it appears as an error in the notification area. :)

     

    [screenshot of the notification]

     

    Edit:

    In the syslog itself it is a normal (white) message, but I receive an email saying it is an alert.

  13. @JorgeB

    For now the massive number of create errors is gone. (Before, the errors were spammed every minute when something was wrong.)

    So it looks like it helped.

     

    But over the last hours I now get a few isolated entries in the syslog, though they are not shown as red errors in the log:

     

    shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger

    some hours later:

    shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'cache-mirror/share1' |& logger

     

    No more information is found in the syslog.

    A dataset named share1 did exist at that point.

     

    I have now checked my datasets with zfs list, after I saw the destroy line above, and I don't have a share1 dataset anymore.

    Maybe it was destroyed in a second try, but I don't see anything about it in the log.

  14. 3 hours ago, JorgeB said:

    They are not exactly the same: those users have issues removing the datasets, not creating them, but it's likely related, because the end result is the same.

     

    Instead of creating a new folder (which won't be a dataset) try typing:

    zfs mount -a

    If that works you should move everything from that dataset to the array, then see if you can delete it manually with (make sure it's empty):

    zfs destroy cache-mirror/share1

    Then copy new data to that share; a new dataset should now be created. See if the issue happens again; there's a theory that creating the datasets fresh with rc7 may fix the problem.

     

    Status now: I ran "zfs mount -a" and after that I saw that all the folders/datasets (per zfs list) were present on the cache drive.

    I started the mover, and now the folders and datasets are removed (checked with zfs list again).

     

    It was possible to copy something onto the drive and the dataset was created correctly, so the error is gone for the moment.

    I will report back if it should happen again.

  15. 3 hours ago, JorgeB said:

    The mover will delete the dataset after it finishes, and a new one should be automatically created for any new writes. Please create a bug report: reboot the server to clear the logs, run the mover, write something to a share set to cache primary, then grab and post the diags.

     

     

    I will do this later, but I found these two bug reports and it looks like they are the same as mine.

     

     

    When the above error

     

    shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger
    root: cannot create 'cache-mirror/share1': dataset already exists
    shfs: command failed: 1

     

    occurs, I can't write anything to "/mnt/user/share1/testfolder".

    It does not matter whether I try to create files from my PC over the SMB share or use the web GUI file browser to create a folder in "/mnt/user/share1/".

     

    share1 has cache-mirror set as its cache pool.

     

    After creating the folder "/mnt/cache-mirror/share1" manually, I can copy files and folders into "/mnt/user/share1" normally.

  16. The problem is that this error occurs whenever the folder does not exist, and it happens with all shares.

     

    After I created the folder, the error did not appear again for the moment.

    I think that after a mover run the error will appear again, because then the share folder is missing until I create it manually.

     

    Edit:

    I received this destroy error in the syslog about an hour before the create error occurred:

     

    shfs: set -o pipefail ; /usr/sbin/zfs destroy -r 'cache-/m..r/.../share1' |& logger
    root: cannot destroy 'cache-/m..r/.../share1': dataset is busy
    shfs: error: retval 1 attempting 'zfs destroy'

     

     

  17.   

    Hey everyone,

     

    I switched from a BTRFS cache pool to a ZFS cache pool.

    Nothing changed in the config; I only created the new pool, copied the files over, deleted the old pool, and renamed the new one.

     

    I am now receiving this error in the log from time to time:

    shfs: set -o pipefail ; /usr/sbin/zfs create 'cache-mirror/share1' |& logger
    root: cannot create 'cache-mirror/share1': dataset already exists
    shfs: command failed: 1

     

    When I run "mkdir /mnt/cache-mirror/share1", the folder is created AND I can see that there are files in it.

    The problem is then fixed for some time.

     

    I also had this problem with rc6.

    Has anybody else experienced this error?
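
    In the meantime, this is how I check the state whenever the error shows up (standard ZFS commands, nothing special):

    # Is share1 a real dataset on the pool, and is it currently mounted?
    zfs list -o name,mountpoint cache-mirror/share1
    zfs get mounted cache-mirror/share1
    # Compare with what is actually visible under the pool's mountpoint
    ls /mnt/cache-mirror/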
