JorgeB Posted February 18, 2023

54 minutes ago, Kilrah said:
There's a reason the default is 3, very diminishing returns beyond that.

Yep, this is just an example; it will vary with the type of data and hardware (this is with a 15-year-old Xeon, newer CPUs should be closer in terms of time), but the same data was copied to all disks.

Time to copy:
disk1: compression=off - 19.6s
disk2: zstd=3 - 30.7s
disk3: zstd=10 - 32.6s
disk4: zstd=15 - 71.9s
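For reference, btrfs compression of this kind is normally applied as a mount option, with the zstd level after a colon. A rough sketch of the equivalent manual mounts for the disks above (the device paths and mount points are placeholders, not Unraid's actual internals):

```shell
# disk1: no compression (assumption: /dev/sdX1 etc. are btrfs-formatted devices)
mount /dev/sdX1 /mnt/disk1

# disk2: zstd level 3 (the btrfs default level when none is given)
mount -o compress=zstd:3 /dev/sdX2 /mnt/disk2

# disk3/disk4: higher levels trade CPU time for ratio, as the timings show
mount -o compress=zstd:10 /dev/sdX3 /mnt/disk3
mount -o compress=zstd:15 /dev/sdX4 /mnt/disk4
```

Note that plain `compress=` lets btrfs skip files it judges incompressible; `compress-force=` disables that heuristic.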
primeval_god Posted February 18, 2023

A good tool for viewing compression results is compsize. Unfortunately I don't know of an easy way to install it on unRAID; I personally had to build it locally with the slackbuild on unRAID 6.9.2.
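Once built, usage is a single command against a btrfs path (sketch; `/mnt/disk2` is a placeholder for a compressed btrfs mount, and the tool must be run as root):

```shell
# Show on-disk vs. uncompressed totals, broken down per algorithm (zstd, lzo, zlib)
compsize /mnt/disk2

# -x keeps the scan on one filesystem when subvolumes/mounts are nested
compsize -x /mnt/disk2/ebooks
```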
Hellomynameisleo Posted February 18, 2023 Author

10 hours ago, JorgeB said:
Yep, this is just an example; it will vary with the type of data and hardware (this is with a 15-year-old Xeon, newer CPUs should be closer in terms of time), but the same data was copied to all disks. Time to copy: disk1: compression=off - 19.6s disk2: zstd=3 - 30.7s disk3: zstd=10 - 32.6s disk4: zstd=15 - 71.9s

I used PeaZip to see how zstd compares to the 7-Zip compression method, and 7-Zip compresses considerably better than zstd, both at their highest compression levels. I think it's better to set a cron job to have 7-Zip archive the files in a directory instead of having btrfs do its compression.
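A rough way to reproduce that comparison from the command line (assumes the `zstd` and `p7zip` packages are installed; `sample.dat` is a placeholder file, and 7-Zip's `-mx=9` uses LZMA2, which typically beats zstd on ratio at the cost of much slower, non-transparent compression):

```shell
# zstd at its highest standard level, keeping the original file
zstd -19 -k sample.dat -o sample.zst

# 7-Zip at maximum compression
7z a -mx=9 sample.7z sample.dat

# Compare the resulting sizes
ls -l sample.dat sample.zst sample.7z
```

The trade-off is that the btrfs approach is transparent (files stay directly readable), while a cron-driven archive job is not.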
ixit Posted July 19, 2023 (edited)

I see this setting now on Unraid 6.12.3. It seems to me that it would be extremely helpful to implement compression per user share, which aligns more with how humans sort content, and then have it applied per path as the user shares are laid out on the underlying disks. Is this on the radar?

While one could do this per disk and shuffle the media accordingly, that adds implementation complexity for the end user, such that adding an extra 500GB drive might be the solution one chooses, rather than freeing space on an existing 8TB drive by enabling compression on a share holding highly compressible books and documents. Approaching the feature from this user-share-centric angle may also reduce barriers to adoption. i.e.:

Disks 1-6 (each, individually, would show...)
FS BTRFS: Compression is not set per disk. The compression option is currently set to: Compression options available to User Shares. To set compression per disk, change your selection under the Compression options available to... setting under Settings-->Disk Settings.

User share ebooks compression=yes
User share familydocs compression=yes
User share comics compression=no
User share HEVC compression=no

If possible, it may be wise to allow setting compression exclusively at the user-share level or at the disk level, where setting one disables the other, to avoid conflicting settings. IMHO setting it at the share level adds complexity, but I would hope it reduces unnecessary compute. I'm very new to the concept, just trying to save space on a lot of compressible data (with a fair amount of incompressible data in the wings), and I know for me, at least, it would seem to help.

Edited July 19, 2023 by ixit clarifications
itimpi Posted July 19, 2023

I do not see how this could be implemented, as compression happens at the file-system level, and User Shares are an abstraction layer above that which is not aware of the underlying file system. Any compression setting would therefore need to be applied at the disk level.
Kilrah Posted July 20, 2023

17 hours ago, itimpi said:
I do not see how this could be implemented, as compression happens at the file-system level, and User Shares are an abstraction layer above that which is not aware of the underlying file system. Any compression setting would therefore need to be applied at the disk level.

Compression can be set at the folder level on btrfs, and likewise at the dataset level on ZFS. Since those are two supported filesystems that can do it, it would be worth having the option in the share settings.
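For context, a sketch of what setting compression below the disk level looks like on each filesystem (paths and pool/dataset names are placeholders; both commands need root, and on btrfs the property only affects newly written files):

```shell
# btrfs: per-directory property; new files under this path inherit it
btrfs property set /mnt/disk1/ebooks compression zstd

# ZFS: per-dataset property, inherited by child datasets
zfs set compression=zstd tank/ebooks
```

Note the btrfs property form takes no level; choosing a zstd level requires the `compress=zstd:N` mount option instead.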
itimpi Posted July 20, 2023

1 hour ago, Kilrah said:
Compression can be set at the folder level on btrfs, and likewise at the dataset level on ZFS. Since those are two supported filesystems that can do it, it would be worth having the option in the share settings.

This would then be a ZFS-specific setting, not a generic setting that can be applied to the main array, so it would only make sense for exclusive shares that are on ZFS pools. If it were to be implemented, I would think it easiest to do at the disk/pool level, not at the share level.
Kilrah Posted July 20, 2023

17 minutes ago, itimpi said:
This would then be a ZFS-specific setting

No, since btrfs can do it too. Having a share-level compression on/off setting, with a note that it enables compression for that share's instances that are on filesystems where it's supported, would make sense.
itimpi Posted July 20, 2023

3 minutes ago, Kilrah said:
No, since btrfs can do it too. Having a share-level compression on/off setting, with a note that it enables compression for that share's instances that are on filesystems where it's supported, would make sense.

But what if the share is on a mixture of file systems, only some of which support compression?
Kilrah Posted July 20, 2023

37 minutes ago, Kilrah said:
it enables it for that share's instances that are on filesystems where it's supported