Report Comments posted by Nogami
Strange permissions issue related to cache drive set to ZFS format; fixed when I reformat the cache to BTRFS or XFS.
Having a strange permissions issue that seems to have cropped up after setting my NVMe cache drive to ZFS (for the compression).
When my browser (Firefox - no Google here) downloads a file, it saves it as a temporary file, then renames it to the final filename when the download is complete. My cache drive then retains a 0-byte file (the temporary file), which I have no permission to delete over SMB.
- I can delete or rename the part1(1).zip file normally.
- The 0-byte file (no extended attributes) cannot be deleted or modified through an SMB connection with the same permissions.
Files created in one pass on the cache (copied from Windows, for example) don't seem to have this issue. It appears related to the way files are streamed to the cache drive as a temporary download file, then renamed when complete: the original temporary file should be deleted afterwards, but it isn't, as something seems to lose the permissions needed to delete it.
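To be clear about the pattern I mean, here's a minimal shell sketch of it. All paths and filenames are illustrative stand-ins for the SMB-mounted cache share, not what Firefox actually uses internally:

```shell
# Sketch of the download pattern described above: the browser streams data
# into a temporary file, then renames it to the final name on completion.
dir=$(mktemp -d)                # stand-in for the cache share directory
tmp="$dir/part1(1).zip.part"    # temporary download file
final="$dir/part1(1).zip"       # final filename after the rename
printf 'downloaded data' > "$tmp"
mv "$tmp" "$final"              # rename on completion; the temp name should be gone
ls -l "$dir"
```

On ZFS the leftover 0-byte temp file stuck around with permissions I couldn't touch over SMB; on BTRFS/XFS the same sequence leaves nothing behind.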
Even after the files are moved from the ZFS cache to the XFS array, the problem persists. Reformatting the cache to BTRFS made the problem go away, so it seems specific to the way files are created and modified on ZFS. The fact that it's Firefox doing it is irrelevant; any software package can write files this way.
Edit: running mover made no difference, but once I reformatted the cache drive to BTRFS (with compression), I could delete the 0-byte files. That makes me think there was some sort of link that mover didn't take care of, but stopping the array and reformatting the cache seemed to break the link to the old attributes and allow deleting. Very strange. Gonna stay on BTRFS for the time being.
Edit 2: Also tried the cache as XFS and it's fine, with no side effects.
Any ideas?
-
Creating a new share, and for some reason it's also creating a new ZFS dataset with the share name, despite the share being set to use only my cache drive.
i.e.: create share "scanner" and set it to use the cache pool only. This creates a ZFS dataset "scanner". Delete the ZFS dataset "scanner" and the "scanner" share vanishes as well. Very strange.
The share is set to use only the cache pool; however, when I add some data to it, the data actually goes into the ZFS dataset that was created.
It only seems to be the "scanner" share that causes the issue. If I create a new share with a different name, a ZFS dataset is not created and it seems fine. Maybe a reboot is in order...
Edit: deleted everything from the share, deleted the share, and re-created it, and now no mystery ZFS dataset is created. Maybe there was some sort of mystery symlink or something hanging around? Strange. Continuing to test.
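For anyone hitting the same thing, this is the kind of console check I'd do to spot a stray dataset. This is an admin CLI fragment, not something to paste blindly, and the pool name `cache` is an assumption; adjust for your setup:

```shell
# List all datasets on the cache pool: does a "cache/scanner" dataset show up?
zfs list -r cache
# Compare with what the share directory actually contains on disk.
ls -la /mnt/cache
# If a stray, empty dataset exists, it could be removed with zfs destroy.
# Dry-run first to see what would be affected:
# zfs destroy -nv cache/scanner
# zfs destroy cache/scanner
```

Note that in my case destroying the dataset also took the share with it, so check what's inside before removing anything.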
-
4 minutes ago, limetech said:
This is only for zpools created with 6.12.0-beta5 (not -rc5) which was the first "beta" which had zfs pool support.
Perfect, thank you! Brain wasn't keeping up with my eyes. Great job BTW, this is awesome!
-
Just wondering why it's recommended to erase and re-create zpools? My RC5 ZFS pools seemed to come in OK. Am I overlooking something basic (bit of a ZFS newb)?
-
When mounting a ZFS pool from 6.12 RC5 remotely on 6.11.5 through SMB, the available free space and the overall storage size are shown as the main array's size, rather than the ZFS pool's remaining space.
Apologies if this is still related to not having both of my systems on the RC; the main server gets moved when we go stable.
- ZFS pools on 6.12 RC5: doc_backup has 1.47 TB used and 1.64 TB remaining in the pool.
- The one stock array on the 6.12 RC5 server has 940 GB in use.
- What 6.11.5 sees when connecting remotely through SMB: doc_backup shows 940 GB used and 7 TB free, not the real ZFS pool values.
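The mismatch is easy to see by comparing what the SMB mount reports against the pool itself. This is a CLI fragment for illustration; the mount point and share paths are assumptions from my setup:

```shell
# On the 6.11.5 client: what the SMB mount reports (hypothetical mount point).
# This is where the main-array numbers show up instead of the pool's.
df -h /mnt/remotes/server_doc_backup
# On the 6.12 RC5 server: the real pool numbers straight from ZFS.
zfs list -o name,used,avail doc_backup
```

If the `df` numbers match the server's main array rather than the `zfs list` output, you're seeing the same thing I am.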
-
Unraid OS version 6.12.0-rc8 available
in Prereleases
Thanks for checking. I just upgraded to stable; I'll test it later this evening and see if I can replicate it again.