Report Comments posted by Nogami

  1. 8 hours ago, JorgeB said:

    I cannot reproduce this. I tested with 6.12 stable, but it should be the same: while downloading with Firefox to a ZFS pool it creates two files, one 0-byte file with the final name and a temp file during the download.

    Thanks for checking. I just upgraded to stable; I'll test it later this evening and see if I can replicate it.

  2. Strange permissions issue related to the cache drive being set to ZFS, fixed when I reformat the cache to BTRFS or XFS.

     

    Having a strange permissions issue that seems to have cropped up after setting my NVMe cache drive to ZFS (for the compression).

     

    When my browser (Firefox - no Google here) downloads a file, it saves it as a temporary file, then renames it to the final filename when the download is complete. My cache drive then retains a 0-byte file (the temporary file), which I have no permission to delete over SMB.

     

    [screenshot of the affected files on the share]

     

    • I can delete or rename the part1(1).zip file normally.
    • The 0-byte file (no extended attributes) cannot be deleted or modified through an SMB connection with the same permissions; a quick way to compare the two files is sketched below.
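    A quick way to compare the two files side by side is from the Unraid console; the share path and the name of the leftover file below are just examples, not the exact names from my screenshot:

    ls -l "/mnt/cache/downloads/part1(1).zip" /mnt/cache/downloads/leftover-temp-file
    stat /mnt/cache/downloads/leftover-temp-file
    getfacl /mnt/cache/downloads/leftover-temp-file    # only if the ACL tools are installed

    If the owner, group, mode, or ACL on the leftover 0-byte file differs from the part1(1).zip that deletes fine, that would explain why SMB refuses the delete.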

     

    Files created in one pass on the cache (copied from Windows, for example) don't seem to have this issue. It seems related to the way files are streamed to the cache drive as a temporary download file and then renamed when complete; the original temp file should then be deleted, but that doesn't happen, as something seems to lose the permissions to delete it. A rough way to reproduce this without Firefox is sketched below.
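    A rough way to reproduce the same pattern without Firefox, from any client with the cache share mounted (the mount point and filenames below are only examples):

    cd /mnt/smb/cache_share                              # example SMB mount of the cache share
    touch test.zip                                       # 0-byte placeholder with the final name
    dd if=/dev/zero of=test.zip.part bs=1M count=10      # stand-in for the streamed download data
    mv test.zip.part test.zip                            # rename over the placeholder when "done"
    rm test.zip                                          # check whether the delete is refused

    If the problem is in the create-then-rename path rather than in Firefox itself, an undeletable 0-byte leftover may show up here as well.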


    Even after the files are moved from the ZFS cache to the XFS array, the problem persists.

     

    • Reformatted the cache to BTRFS and the problem is gone, so it seems specific to the way files are created and modified on ZFS. The fact that it's Firefox doing it is irrelevant, as any software package is capable of writing files this way.

     

    Edit: running mover made no difference, but once I reformatted the cache drive to BTRFS (with compression), I could delete the 0-byte files. That makes me think there was some sort of link that mover didn't take care of; stopping the array and reformatting the cache seemed to break the link to the old attributes and allow deleting. Very strange. I'm going to stay on BTRFS for the time being.

     

    Edit 2: I also tried the cache as XFS and it's fine, with no side effects.

     

    Any ideas?

  3. Creating a new share also creates a new ZFS dataset with the share name, despite the share being set to only use my cache drive.

     

    i.e. create the share "scanner" and set it to use the cache pool only; a ZFS dataset "scanner" gets created. Delete the ZFS dataset "scanner" and the "scanner" share vanishes as well. Very strange.
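    For anyone poking at the same thing, the dataset can be inspected and (carefully) removed from the console with something along these lines, assuming the cache pool is simply named "cache":

    zfs list -r cache                    # show every dataset on the cache pool
    zfs get mountpoint cache/scanner     # where the "scanner" dataset is mounted
    zfs destroy cache/scanner            # removes the dataset and everything in it, so copy anything important off first

    Destroying the dataset also removes the share's folder on the pool, which would line up with the share vanishing at the same time.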

     

    The share is set to use only the cache pool; however, when I add some data to it, the data actually goes into the ZFS dataset that was created.

     

    It only seems to be the "scanner" share that causes the issue. If I create a new share with a different name, a ZFS dataset is not created and it seems fine. Maybe a reboot is in order...

     

    Edit: I deleted everything from the share, deleted the share, and re-created it, and now no mystery ZFS dataset is created. Maybe there was some sort of stale symlink or something hanging around? Strange. Continuing to test.

  4. 4 hours ago, dlandon said:

    Go to a command line and do this command:

    /bin/df /mnt/remotes/mountpoint --output=size,used,avail

    and show the result.

     

    Here's what I got back; it looks like it's reporting properly through the CLI in 6.11.5 (I added -h for readability).
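    That is, the quoted command with -h added, which just switches the size, used, and avail columns to human-readable units:

    /bin/df -h /mnt/remotes/mountpoint --output=size,used,avail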

     

    [screenshot: df -h output from the 6.11.5 CLI]

     

     

    But in 6.11.5 SMB Shares:

    [screenshot: the same share as reported under SMB Shares in 6.11.5]

  5. When mounting a ZFS pool from 6.12 RC5 remotely on 6.11.5 through SMB, the available free space and the overall storage size are shown as the main array size, rather than the ZFS pool's remaining space.

     

    Apologies if this is still related to not having both of my systems on the RC; the main server gets to move when we go stable.

     

    ZFS Pools on 6.12 RC5: doc_backup has 1.47TB used and 1.64TB remaining in the pool.

    [screenshot: ZFS pool sizes on the 6.12 RC5 server]
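    (The same numbers can be read straight off the pool on the 6.12 box with something like the following, assuming doc_backup is the pool name as shown in the GUI:

    zfs list -o name,used,avail doc_backup

    which should line up with the used/remaining values above.)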

     

    The one stock array on the 6.12 RC5 server with 940GB in use.

    [screenshot: array size on the 6.12 RC5 server]

     

    What 6.11.5 sees when connecting remotely through SMB: doc_backup shows 940GB used and 7TB free, not the real ZFS pool values.

    [screenshot: the size reported over SMB on 6.11.5]
