Rysz Posted April 19 Author Share Posted April 19 1 hour ago, DiscoDuck said: Has anyone tried to utilize the preload.so? https://github.com/trapexit/mergerfs/blob/2.39.0/README.md#preloadso To my understanding it should be possible to obtain native disk speeds in qbit/rtorrent if it's passed to the docker container(s) I haven't tried it, but it is compiled into the plugin package and present on users' systems at: /usr/lib/mergerfs/preload.so So in theory it should be usable as explained in the README. If anyone tries it, please let me know if it works... 🙂
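Not tested here, but going by the README's Docker instructions, the idea would be to bind-mount the library into the container and point LD_PRELOAD at it. A hedged sketch only: the image name and data paths below are placeholders, and the preload.so path is the one from the plugin package mentioned above.

```shell
# Sketch only: expose the host's preload.so inside the container and
# have the dynamic linker load it for processes in the container.
docker run -d \
  --name qbittorrent \
  -v /usr/lib/mergerfs/preload.so:/usr/lib/mergerfs/preload.so:ro \
  -e LD_PRELOAD=/usr/lib/mergerfs/preload.so \
  -v /mnt/addons/pool:/data \
  your-qbittorrent-image
```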
AgentXXL Posted April 19 Share Posted April 19 11 hours ago, Rysz said: Regarding Globbing, you need to escape the globbing characters as follows: mergerfs -o cache.files=partial,dropcacheonclose=true,category.create=mfs /mnt/disks/OfflineBU\* /mnt/addons/BUPool See also here for more details: https://github.com/trapexit/mergerfs#globbing Thanks! I read that last night but haven't given it a try yet. Simple enough solution. While it would be nice to prevent those drives from being seen by UD, at least it should be a workable system.
DiscoDuck Posted April 19 Share Posted April 19 6 hours ago, Rysz said: I haven't tried it but it is compiled into the plugin package and present on user's systems at: /usr/lib/mergerfs/preload.so So in theory it should be usable as explained in the README, if anyone tries it please let me know if it works... 🙂 From what I can tell it seems to be working. Set it up as per the Docker usage instructions and mapped the path and variables. Also added this to my mergerfs mount options: cache.files=per-process,cache.files.process-names="rtorrent|qbittorrent-nox" CPU usage in htop is down from ~15% to 1-2%, and my speeds seem much more stable in qbittorrent.
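Put together with the options used earlier in the thread, a full mount line with this per-process caching might look like the following. The branch and mountpoint paths are made up for illustration; adjust them to your own layout.

```shell
# Hypothetical pool: only the named torrent client processes get page
# caching; all other processes access the pool with caching off.
mergerfs \
  -o cache.files=per-process \
  -o cache.files.process-names="rtorrent|qbittorrent-nox" \
  -o dropcacheonclose=true -o category.create=mfs \
  /mnt/disks/data1:/mnt/disks/data2 /mnt/addons/pool
```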
tazire Posted April 30 Share Posted April 30 I know nothing about mergerfs tbh, but with the likely introduction of multiple arrays, is this a good option for merging 2 array pools and a cache pool into a single share, for want of a better word, assuming Unraid doesn't introduce a solution for this themselves? And also, will your new SnapRAID plugin complement a setup like that?
Rysz Posted April 30 Author Share Posted April 30 8 minutes ago, tazire said: I know nothing about mergerfs tbh but with the likely introduction of multiple arrays is this a good option for merging 2 array pools and then a cache pool as a single share, for want of a better word, assuming unraid doesnt introduce a solution for this themselves. And also will your new snapraid plugin complement a setup like that? I have no information about multi array support or how it will be implemented, sorry about that.

mergerFS basically pools multiple disk mount-points together, so regardless of where the data resides physically, it can be accessed through one single folder (the mergerFS mount-point). Some users use this functionality to pool data on the array together with data residing in the cloud (e.g. for Plex); others use it to pool multiple disks mounted through Unassigned Devices for easier access through a single custom share/location on the server. I'm not sure how well it would perform pooling already pooled data (if I understood you correctly there; since array pools and cache pools are already pooled, you'd be pooling pooled data) and if or how that would interfere with Unraid.

Regarding SnapRAID, that hasn't really got anything to do with mergerFS. Basically SnapRAID provides parity for multiple individual disks (mounted at their individual disk mount-points), but doesn't pool the data by default (as Unraid does with the user shares). So mergerFS could be used to pool a SnapRAID array (consisting of a bunch of individual disk mount-points) into one folder (the mergerFS mount-point), e.g. for later sharing through a custom SMB/NFS share. Hope that makes sense a bit. 🙂
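As a sketch of that last point (all disk and mountpoint names here are hypothetical), pooling the data disks of a SnapRAID array could look like this:

```shell
# mergerfs only presents the SnapRAID *data* disks as one folder;
# parity disks stay out of the pool and are handled by SnapRAID alone.
mergerfs \
  -o cache.files=partial,dropcacheonclose=true,category.create=mfs \
  /mnt/disks/snapdata1:/mnt/disks/snapdata2:/mnt/disks/snapdata3 \
  /mnt/addons/snappool
```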
tazire Posted April 30 Share Posted April 30 2 minutes ago, Rysz said: I have no information about multi array support or how it will be implemented, sorry about that. mergerFS basically can pool multiple disk mount-points together, so regardless where the data is residing physically it can then be accessed through one single folder (the mergerFS mount-point). [...] Hope that makes sense a bit. 🙂 100%, yeah, you answered everything. Mostly it's my lack of understanding of what either was. Thanks for that.
thatja Posted May 15 Share Posted May 15 Hello, my system seems to be crashing and I think mergerfs is the cause. This has only been happening recently. Is there a way to downgrade to the previous version so I can test whether it is indeed the new version of mergerfs? If so, how can I achieve this?
Rysz Posted May 15 Author Share Posted May 15 (edited) 2 hours ago, thatja said: Hello, my system seems to be crashing and I think mergerfs is the cause, this has only been happening recently, is there a way to downgrade to the previous version so I can test if it is indeed the new version of mergerfs? If so, how can I achieve this? Edit: I just saw your other support topic, let's continue this there for sake of completeness: Edited May 15 by Rysz
djmallon Posted July 15 Share Posted July 15 Hey everyone, having a weird issue. I've been using mergerfs for a while to show two folders as one, to get around some media player issues. The script would run after the array and services were up. Lately it's stopped mounting and I'm unsure why. The same command works fine when I run it with the Userscripts plugin, but I'd prefer the automated mount/unmount tied to the array instead.

Mount command:
/usr/bin/mergerfs -f -o cache.files=partial -o dropcacheonclose=true -o category.create=mfs /mnt/user/data/media/music/16bit:/mnt/user/data/media/music/24bit /mnt/user/data/media/music/merged

Unmount command:
/usr/bin/mergerfs-fusermount -uz /mnt/user/data/media/music/merged

Appreciate the help!
Rysz Posted July 15 Author Share Posted July 15 (edited) 14 minutes ago, djmallon said: Hey everyone, having a weird issue. Have been using mergerfs for a while to show two folders as one to get around some media player issues. Script would run after array and services were up. Lately it's stopped mounting and I'm unsure why. The same command works fine when I run it with the Userscripts plugin, but I'd prefer the automated mount/unmount tied to the array instead. Mount command: /usr/bin/mergerfs -f -o cache.files=partial -o dropcacheonclose=true -o category.create=mfs /mnt/user/data/media/music/16bit:/mnt/user/data/media/music/24bit /mnt/user/data/media/music/merged Unmount command: /usr/bin/mergerfs-fusermount -uz /mnt/user/data/media/music/merged Appreciate the help! Please post your diagnostics. In which scripts did you put your commands? Did you change anything on the system recently (OS update, etc.)? Edited July 15 by Rysz
djmallon Posted July 15 Share Posted July 15 22 minutes ago, Rysz said: Please post your diagnostics. In which scripts do you put your commands? Did you change anything on the system recently (OS update, etc...) ? Sure thing, diagnostics attached. Mount command is under /etc/mergerfsp/array_start_complete.sh Unmount command is under /etc/mergerfsp/array_stop.sh No major changes to the system in the last month or so, just regular container and plugin updates. t330-diagnostics-20240715-1445.zip
Rysz Posted July 15 Author Share Posted July 15 (edited) 27 minutes ago, djmallon said: Sure thing, diagnostics attached. Mount command is under /etc/mergerfsp/array_start_complete.sh Unmount command is under /etc/mergerfsp/array_stop.sh No major changes to the system in the last month or so, just regular container and plugin updates. t330-diagnostics-20240715-1445.zip

Jul 14 12:52:23 T330 mergerfs-plugin: mergerFS error -- array_start_complete.sh has timed out after 20s and was stopped with SIGTERM.
Jul 15 13:37:40 T330 mergerfs-plugin: mergerFS error -- array_start_complete.sh has timed out after 20s and was stopped with SIGTERM.

You need to remove the -f flag from your command, otherwise mergerFS won't go into background mode:

-f run mergerfs in foreground

The startup mechanism kills mergerFS because it expects it to go into background mode and it doesn't, so it thinks it's stuck and terminates it. Edited July 15 by Rysz
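For reference, this is the mount command from above with only the -f flag removed, so mergerFS daemonizes as the plugin's startup mechanism expects:

```shell
# Same options as before, minus -f (foreground), so mergerfs
# backgrounds itself and the plugin's 20s startup check passes.
/usr/bin/mergerfs -o cache.files=partial -o dropcacheonclose=true -o category.create=mfs \
  /mnt/user/data/media/music/16bit:/mnt/user/data/media/music/24bit \
  /mnt/user/data/media/music/merged
```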
djmallon Posted July 15 Share Posted July 15 10 minutes ago, Rysz said: Jul 14 12:52:23 T330 mergerfs-plugin: mergerFS error -- array_start_complete.sh has timed out after 20s and was stopped with SIGTERM. Jul 15 13:37:40 T330 mergerfs-plugin: mergerFS error -- array_start_complete.sh has timed out after 20s and was stopped with SIGTERM. You need to remove the -f flag from your command, otherwise mergerFS won't go into background mode: -f run mergerfs in foreground The startup mechanism kills mergerFS, because it expects it to go into background mode and it doesn't, so it thinks it's stuck and terminates it. Success! Thanks Rysz!
Rafael Moraes Posted August 5 Share Posted August 5 I had the idea of using this mergerfs plugin to join two Unraid servers into a single network mount point. The idea is this: I have one Unraid server that is already filled with disks, nothing else fits. This server will be my main server where I will have all my applications and virtual machines, including the mergerfs mount to assign to my Docker containers and shares, downloads, Jellyfin, etc. Then I will bring up a second Unraid server with parity disks, a cache and a fully functional array, as a good server should be, but this one will only hold files. The idea is to create a share on the first, another share on the second, and join these two shares using the Unassigned Devices network mount and mergerfs. In theory it should work perfectly, but I can't find a mount folder that remains persistent across a reboot. For example: I create the folder /mnt/mergerfs and run the command in the terminal and everything works. However, when I restart my server the /mnt/mergerfs folder no longer exists and the add-on's .sh script fails. I would welcome comments on whether my idea is useful or where I am going wrong in setting up mergerfs; maybe this will be useful for someone else. Thanks
Rysz Posted August 5 Author Share Posted August 5 2 minutes ago, Rafael Moraes said: I had the idea of using this mergerfs plugin to join two unraid servers into a single network mount point. [...] I would like comments if my idea is useful or where I am going wrong in assembling mergerfs, maybe this will be useful for someone else. thanks Unraid is stored in RAM, so any changes to the filesystem, like creating a mountpoint folder, are lost on reboot. You can solve this by creating the mountpoint folder right before the actual mergerfs command in the same plugin's .sh scripts. Here's an example:

mkdir -p /mnt/addons/mergerfs
sleep 1
<your mergerFS mount command here>

But I would strongly advise putting the mountpoint folder in e.g. /mnt/addons/mergerfs, as that is better protected against overflowing writes in case of network outages. If the network connection drops and your mountpoint folder is in /mnt/mergerfs instead of /mnt/addons/mergerfs, the Docker containers could continue to write to that unprotected location, eventually filling up the rootfs ( / ) and bringing down the server. /mnt/addons/ is protected against such overflows by being located on a separate RAM-disk (and not the rootfs / ), so your Docker containers would just stop writing once that RAM-disk limit is reached. Please let me know if that worked out for you!
Rafael Moraes Posted August 5 Share Posted August 5 2 minutes ago, Rysz said: Unraid is stored in RAM, so any changes to the filesystem, like creating a mountpoint folder, are lost on reboot. You can solve this by creating the mountpoint folder right before the actual mergerfs command in the same plugin's .sh scripts. But I would strongly advise putting the mountpoint folder in e.g. /mnt/addons/mergerfs, as that is better protected against overflowing writes in case of network outages. If the network connection drops and your mountpoint folder is in /mnt/mergerfs instead of /mnt/addons/mergerfs, the Docker containers could continue writing to that unprotected location, eventually filling up the rootfs ( / ) and bringing down the server. /mnt/addons/ is protected against such overflows by being located on a separate RAM-disk (and not the rootfs / ), so your Docker containers would simply stop writing once that RAM-disk limit is reached. Please let me know if that worked out for you! Thanks for the tip about mounting it in "/mnt/addons/", I had no idea Unraid worked that way. I've had similar problems with Ubuntu where a network mergerfs mount failed and filled my home disk hahaha. Out of curiosity: how do you know that mounting in /addons doesn't fill the other disks?
Rysz Posted August 5 Author Share Posted August 5 8 minutes ago, Rafael Moraes said: Thanks for the tip about mounting it in "/mnt/addons/" , I had no idea that unraid was done that way. I've had similar problems with Ubuntu where the network mergerfs mount failed and filled my home disk hahaha curiosity: how do you know that mounting /addons doesn't fill the other disks? The Unassigned Devices plugin creates 1MB tmpfs RAM-disks for the /mnt subfolders: disks, remotes, addons and rootshare. This 1MB is enough to create the mountpoint folders, but if a mountpoint drops offline (e.g. due to a network outage) it also ensures a Docker container cannot write more than 1MB to that location, which is then no longer on the network but on the RAM-disk (because the mountpoint dropped offline). /mnt/addons has become the standard location on Unraid for custom mounts, with protection against such unwanted writes in case the mountpoint drops offline for whatever reason. You can see this if you run the command "df -h", as an example:

tmpfs 1.0M 0 1.0M 0% /mnt/disks
tmpfs 1.0M 0 1.0M 0% /mnt/remotes
tmpfs 1.0M 0 1.0M 0% /mnt/addons
tmpfs 1.0M 0 1.0M 0% /mnt/rootshare

So it is best practice to put any custom mountpoints into /mnt/addons, e.g. /mnt/addons/mergerfs. Ideally you will use the array_start_complete.sh of mergerFS, so that the Unassigned Devices plugin has finished setting everything else up before mergerFS starts the mounting process. Here is an example array_start_complete.sh script with fictitious folder names:

mkdir -p /mnt/addons/mergerfs
sleep 1
mergerfs /mnt/remotes/A:/mnt/remotes/B:/mnt/disk1 /mnt/addons/mergerfs
Rafael Moraes Posted August 5 Share Posted August 5 10 minutes ago, Rysz said: The Unassigned Devices plugin creates 1MB tmpfs RAM-disks for the /mnt subfolders: disks, remotes, addons and rootshare. This 1MB is enough to create the mountpoint folders, but if a mountpoint drops offline (for example, due to a network outage), it also ensures that a Docker container cannot write more than 1MB to that location, which is then no longer on the network but on the RAM-disk (because the mountpoint dropped offline). /mnt/addons has become the standard location on Unraid for custom mounts, protected against such unwanted writes should the mountpoint drop offline for any reason. You can see this if you run the command "df -h". So it is best practice to put any custom mountpoints into /mnt/addons, e.g. /mnt/addons/mergerfs. Ideally you will use mergerFS's array_start_complete.sh, so that the Unassigned Devices plugin finishes setting everything else up before mergerFS starts the mounting process. Great! Keep maintaining this excellent plugin, it's very useful; it's like setting up a cluster with many NAS machines. Thanks
Rafael Moraes Posted August 6 Share Posted August 6 17 hours ago, Rysz said: The Unassigned Devices plugin creates 1MB tmpfs RAM-disks for the /mnt subfolders: disks, remotes, addons and rootshare. [...] So it is best practice to put any custom mountpoints into /mnt/addons, e.g. /mnt/addons/mergerfs. Ideally you will use the array_start_complete.sh of mergerFS, so that the Unassigned Devices plugin is finished setting everything else up before mergerFS starts the mounting process. Here is an example array_start_complete.sh script with fictitious folder names: mkdir -p /mnt/addons/mergerfs sleep 1 mergerfs /mnt/remotes/A:/mnt/remotes/B:/mnt/disk1 /mnt/addons/mergerfs Your code worked perfectly for mounting at boot. However, I ran into another problem for my project. I would like to share the "/mnt/addons/mergerfs" folder on the network so that it is visible to both Unraid servers and the home network. However, Unraid can only share folders created within the Unraid "shares"; it simply does not pick up the "/mnt/addons" folder. How can I do this sharing in an easy and secure way?

When I use the Dynamix File Manager plugin I can navigate to the "/mnt/addons" folder and see the data, but I cannot copy or move it to another folder because, again, the file manager only offers the Unraid shares and not these folders. Any ideas for my project?
Rysz Posted August 6 Author Share Posted August 6 32 minutes ago, Rafael Moraes said: Your code worked perfectly for boot assembly. However, I ran into another problem for my project. I would like to share the "/mnt/addons/mergerfs" folder on the network so that it is visible to both the unraids servers and the home network. [...] How can I do this sharing in an easy and secure way? Go to Settings => SMB There you can put an additional share configuration into "Samba extra configuration", e.g.:

[mergerfs]
path = /mnt/addons/mergerfs
comment =
browseable = yes
# Private
writeable = no
read list =
write list = yourUserNameHere
valid users = yourUserNameHere
case sensitive = auto
preserve case = yes
short preserve case = yes
vfs objects = catia fruit streams_xattr
fruit:encoding = native

Make sure to change yourUserNameHere to the username of the user you want to give access to. After that you should be able to access your custom share using Samba (SMB) and see your mergerFS files.
Rafael Moraes Posted August 6 Share Posted August 6 5 hours ago, Rysz said: Go to Settings => SMB There you can put an additional share configuration into "Samba extra configuration" [...] Make sure to change yourUserNameHere to the username of the user you want to give access to. After that you should be able to access your custom share using Samba (SMB) and see your mergerFS files. It works perfectly! Unraid should have a simpler interface for adding a Samba share without having to type config by hand, but that's ok, Linux is Linux.