Rysz Posted April 19 (Author)

Quoting DiscoDuck (1 hour ago): Has anyone tried to utilize the preload.so? https://github.com/trapexit/mergerfs/blob/2.39.0/README.md#preloadso To my understanding it should be possible to obtain native disk speeds in qBittorrent/rTorrent if it's passed to the Docker container(s).

I haven't tried it, but it is compiled into the plugin package and present on users' systems at /usr/lib/mergerfs/preload.so. So in theory it should be usable as explained in the README; if anyone tries it, please let me know whether it works. 🙂
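For anyone wanting to try it, a minimal sketch of what that might look like for a Docker container, following the general approach in the mergerfs README (the container name, image, and data paths here are illustrative assumptions, not a tested configuration):

```shell
# Sketch only: bind-mount the host's preload.so into the container read-only
# and set LD_PRELOAD so the dynamic linker loads it for every process.
# Image name and data path are placeholders; adjust for your setup.
docker run -d \
  --name qbittorrent \
  -v /usr/lib/mergerfs/preload.so:/usr/lib/mergerfs/preload.so:ro \
  -e LD_PRELOAD=/usr/lib/mergerfs/preload.so \
  -v /mnt/pool:/data \
  your-qbittorrent-image
```

The idea is that the preloaded library intercepts file I/O inside the container so reads/writes can bypass some of the FUSE overhead on the mergerfs mount.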
AgentXXL Posted April 19

Quoting Rysz (11 hours ago): Regarding globbing, you need to escape the globbing characters as follows: mergerfs -o cache.files=partial,dropcacheonclose=true,category.create=mfs /mnt/disks/OfflineBU\* /mnt/addons/BUPool — see https://github.com/trapexit/mergerfs#globbing for more details.

Thanks! I read that last night but haven't given it a try yet. Simple enough solution. While it would be nice to prevent those drives from being seen by UD, at least it should be a workable system.
DiscoDuck Posted April 19

Quoting Rysz (6 hours ago): I haven't tried it, but it is compiled into the plugin package and present on users' systems at /usr/lib/mergerfs/preload.so. So in theory it should be usable as explained in the README; if anyone tries it, please let me know whether it works. 🙂

From what I can tell, it seems to be working. I set it up as per the Docker usage instructions and mapped the path and variables. I also added this to my mergerfs mount options:

cache.files=per-process,cache.files.process-names="rtorrent|qbittorrent-nox"

CPU usage in htop is down from ~15% to 1-2%, and my speeds seem much more stable in qBittorrent.
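Combined with the options used earlier in this thread, a full mount invocation with these settings might look roughly like this (branch and mount-point paths are examples only, and the pipe in the process list needs quoting from a shell):

```shell
# Illustrative mergerfs mount: page caching enabled only for the named
# torrent client processes, everything else uses direct I/O.
# Branch paths (/mnt/disks/...) and the target (/mnt/pool) are placeholders.
mergerfs \
  -o cache.files=per-process \
  -o 'cache.files.process-names=rtorrent|qbittorrent-nox' \
  -o dropcacheonclose=true,category.create=mfs \
  /mnt/disks/disk1:/mnt/disks/disk2 /mnt/pool
```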
tazire Posted April 30

I know nothing about mergerfs, to be honest, but with the likely introduction of multiple arrays, is this a good option for merging two array pools and a cache pool into a single share (for want of a better word), assuming Unraid doesn't introduce a solution for this themselves? And will your new SnapRAID plugin complement a setup like that?
Rysz Posted April 30 (Author)

Quoting tazire (8 minutes ago): I know nothing about mergerfs, to be honest, but with the likely introduction of multiple arrays, is this a good option for merging two array pools and a cache pool into a single share…? And will your new SnapRAID plugin complement a setup like that?

I have no information about multi-array support or how it will be implemented, sorry about that.

mergerFS basically pools multiple disk mount-points together, so regardless of where the data physically resides, it can be accessed through one single folder (the mergerFS mount-point). Some users use this functionality to pool data on the array together with data residing in the cloud (e.g. for Plex); others use it to pool multiple disks mounted through Unassigned Devices for easier access through a single custom share/location on the server.

I'm not sure how well it would perform pooling already-pooled data (if I understood you correctly, since array pools and cache pools are already pooled, you'd be pooling pooled data), or whether and how that would interfere with Unraid.

Regarding SnapRAID, that doesn't really have anything to do with mergerFS. SnapRAID provides parity for multiple individual disks (mounted at their individual disk mount-points), but it doesn't pool the data by default (as Unraid does with the user shares). So mergerFS could be used to pool a SnapRAID array (consisting of a bunch of individual disk mount-points) into one folder (the mergerFS mount-point), e.g. for later sharing through a custom SMB/NFS share.

Hope that makes sense a bit. 🙂
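As a concrete sketch of that last point, pooling a set of individually mounted SnapRAID data disks into one mount-point could look like this (the disk paths and pool location are examples, not a recommended layout):

```shell
# Example only: present three individually mounted data disks as one folder.
# SnapRAID separately computes parity over these same disks; mergerfs just
# provides the unified view for sharing via SMB/NFS.
mkdir -p /mnt/pool
mergerfs -o cache.files=partial,dropcacheonclose=true,category.create=mfs \
  /mnt/disks/data1:/mnt/disks/data2:/mnt/disks/data3 /mnt/pool
```

New files land on whichever branch the create policy picks (mfs = most free space), while SnapRAID continues to see and protect each underlying disk individually.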
tazire Posted April 30

Quoting Rysz (2 minutes ago): I have no information about multi-array support or how it will be implemented, sorry about that. mergerFS basically pools multiple disk mount-points together…

100%, yeah, you answered everything. Mostly it's my lack of understanding of what either was. Thanks for that.
thatja Posted May 15

Hello, my system seems to be crashing, and I think mergerfs is the cause. This has only been happening recently. Is there a way to downgrade to the previous version so I can test whether it is indeed the new version of mergerfs? If so, how can I achieve this?
Rysz Posted May 15 (Author, edited)

Quoting thatja (2 hours ago): Hello, my system seems to be crashing, and I think mergerfs is the cause… Is there a way to downgrade to the previous version so I can test whether it is indeed the new version of mergerfs?

Edit: I just saw your other support topic; let's continue this there for the sake of completeness.

Edited May 15 by Rysz