testdasi Posted March 3, 2020

35 minutes ago, livingonline8 said:
I uploaded a file called "[take my home". As you can see, I added "[" by mistake at the beginning of the file name, so I went to the rclone_mount folder to change it, but I can't. It keeps giving me an error message that says "an unexpected error is keeping you from renaming the file".

Are you getting that error from Windows (cuz I'm pretty sure that's a Windows error)? Try deleting the file from the console.
Stevenson Chittumuri Posted March 3, 2020

Hi, I got this error after restarting the Unraid server, and Plex does this, then crashes. I can verify that the mount is mounted; I was able to access the rclone mount by connecting to the server through my local network.
tsmebro Posted March 3, 2020

32 minutes ago, Stevenson Chittumuri said:
Hi, I got this error after restarting the Unraid server, and Plex does this, then crashes.

I was getting the same error. I deleted any mergerfs docker images and pulled the latest one like @teh0wner suggested above, and the error went away.
Stevenson Chittumuri Posted March 3, 2020

16 minutes ago, tsmebro said:
I was getting the same error. I deleted any mergerfs docker images and pulled the latest one like @teh0wner suggested above, and the error went away.

Oh man, that sucks. So every time I restart Unraid I need to remove Plex and Deluge (since they use the mergerfs folder), then install and configure them again?

Edit: Did you mean remove this?

Edit2: Ayy, it worked! Thanks man!
DZMM Posted March 3, 2020 (Author)

49 minutes ago, Stevenson Chittumuri said:
Oh man, that sucks. So every time I restart Unraid I need to remove Plex and Deluge (since they use the mergerfs folder), then install and configure them again?

No, disable autostart and let the scripts launch the dockers when the mounts are ready.
DZMM Posted March 3, 2020 (Author)

1 hour ago, tsmebro said:
I was getting the same error. I deleted any mergerfs docker images and pulled the latest one like @teh0wner suggested above, and the error went away.

Out of interest, what are your Plex delete settings? I can't remember the exact wording, but did you disable the 'delete automatically if missing' (or words to that effect) option?
Spatial Disorder Posted March 3, 2020

@DZMM and @teh0wner Thanks for the help. I'm back up and running and have moved to the latest working scripts, with no issues over the last 24 hours. I'm fairly certain the new scripts were failing because the latest mergerfs wasn't getting pulled. Just for good measure I:

- Deleted all shares (local, rclone mount, mergerfs mount)
- Ran docker rmi [imageID] to get rid of the old mergerfs image
- After some reading up on rclone vs rclone-beta, reverted back to rclone (I don't think this was the issue, but for this purpose I'd rather be on a stable release, and I see nothing that needs the new stuff in the beta release). I'm sure it's fine either way.
- Pulled the latest github scripts and modified the variables for my setup
- Did a clean reboot for good measure

Thanks for all the work putting this together 🤘
tsmebro Posted March 3, 2020

40 minutes ago, DZMM said:
Out of interest, what are your Plex delete settings? I can't remember the exact wording, but did you disable the 'delete automatically if missing' (or words to that effect) option?

I've got Plex set to not delete trash automatically.
teh0wner Posted March 3, 2020

1 hour ago, Spatial Disorder said:
@DZMM and @teh0wner Thanks for the help. I'm back up and running and have moved to the latest working scripts [...]

Glad it worked for you! I've been playing around with rclone and mergerfs a lot the past few days, so if you have any other questions, fire away.

The only thing I can't figure out, which maybe @DZMM might be able to help with, is getting hard links to work with Sonarr/Radarr. It seems completely random when they work and when they don't. I have the following setup on my Sonarr/Radarr and Deluge:

/data -> /mnt/user/data/

so all docker containers are pointing to the same location with the same exact 'root mount' of /data. Deluge downloads to /data/gdrive_encrypt/vfs/downloads and media lives in /data/gdrive_encrypt_vfs/media, so everything is pretty much 'local'. Sometimes a hard link is created (I can see the inode count is 2 using ls -li), and sometimes it's 1, which means the hard link failed and Sonarr/Radarr had to copy. Any ideas why this might be happening?
tsmebro Posted March 3, 2020

1 hour ago, teh0wner said:
Sometimes a hard link is created (I can see the inode count is 2 using ls -li), and sometimes it's 1, which means the hard link failed and Sonarr/Radarr had to copy. Any ideas why this might be happening?

Got me curious, so I just checked mine. All my files that Sonarr/Radarr have moved from rtorrent show an inode count of 2, with files from SAB showing 1 (guessing that's because SAB just moves them rather than copying?). The only difference I can see with my setup is that all my container paths are /user -> /mnt/user. Not really sure if that's enough to make a difference though? I also have my local share set to use the cache drive.
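For anyone who wants to reproduce this check themselves, here's a quick sketch using throwaway files in a temp directory (the filenames are made up; the ls -li / stat check is the same one you'd run against your real media folders). A hard-linked file reports a link count of 2, a plain copy reports 1:

```shell
#!/bin/sh
# Demo of hard-link counts: a hardlinked file shows 2 links,
# a plain copy shows 1 (a copy is what Sonarr/Radarr fall back to).
dir=$(mktemp -d)
echo "episode data" > "$dir/download.mkv"

ln "$dir/download.mkv" "$dir/media.mkv"   # hard link: shares the same inode
cp "$dir/download.mkv" "$dir/copy.mkv"    # plain copy: gets a new inode

# First column is the link count: 2 for media.mkv, 1 for copy.mkv
stat -c '%h %n' "$dir/media.mkv" "$dir/copy.mkv"

rm -rf "$dir"
```

If the count is 1 on a file you expected to be hardlinked, the import was a copy.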
teh0wner Posted March 3, 2020

Managed to grab some logs from Sonarr. The error is quite obvious, but I'm not quite sure what the solution is.

20-3-3 22:02:41.3|Debug|EpisodeFileMovingService|Hardlinking episode file: /data/local_storage/downloads/deluge/complete/ABC.mkv to /data/merged_storage/google_drive_encrypted_vfs/media/ABC.mkv
20-3-3 22:02:41.3|Debug|DiskTransferService|HardLinkOrCopy [/data/local_storage/downloads/deluge/complete/ABC.mkv] > [/data/merged_storage/google_drive_encrypted_vfs/media/ABC.mkv]
20-3-3 22:02:41.3|Debug|DiskProvider|Hardlink '/data/local_storage/downloads/deluge/complete/ABC.mkv' to '/data/merged_storage/google_drive_encrypted_vfs/media/ABC.mkv' failed.

[v2.0.0.5338] Mono.Unix.UnixIOException: Invalid cross-device link [EXDEV].
  at Mono.Unix.UnixMarshal.ThrowExceptionForLastError () [0x00005] in <4a040cc44eb54354b3d289eb2bbc1e23>:0
  at Mono.Unix.UnixMarshal.ThrowExceptionForLastErrorIf (System.Int32 retval) [0x00004] in <4a040cc44eb54354b3d289eb2bbc1e23>:0
  at Mono.Unix.UnixFileSystemInfo.CreateLink (System.String path) [0x0000c] in <4a040cc44eb54354b3d289eb2bbc1e23>:0
  at NzbDrone.Mono.Disk.DiskProvider.TryCreateHardLink (System.String source, System.String destination) [0x00013] in M:\BuildAgent\work\5d7581516c0ee5b3\src\NzbDrone.Mono\Disk\DiskProvider.cs:182
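That EXDEV error is the kernel refusing a hard link across what it sees as two different filesystems, which is what a mount boundary (including a FUSE mount like mergerfs or rclone) looks like. Sonarr's HardLinkOrCopy is the standard fallback pattern, which can be sketched in plain shell. Temp files stand in for the real download/media paths here:

```shell
#!/bin/sh
# Hardlink-or-copy fallback, the same logic as Sonarr's HardLinkOrCopy:
# try a hard link first; if it fails (e.g. EXDEV across mounts), copy instead.
dir=$(mktemp -d)
echo "episode" > "$dir/download.mkv"
src="$dir/download.mkv"
dst="$dir/imported.mkv"

if ln "$src" "$dst" 2>/dev/null; then
    echo "hardlinked"   # same filesystem: instant, no extra disk space
else
    cp "$src" "$dst" && echo "copied"   # cross-device (EXDEV): slow full copy
fi

rm -rf "$dir"
```

Because both temp files live on the same filesystem, this prints "hardlinked"; when the two sides straddle a mount boundary, the ln fails with EXDEV and the copy branch runs.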
DZMM Posted March 3, 2020 (Author)

@teh0wner Not sure, it looks right. I have to ask: do you have 'use hardlinks instead of copy' selected? Maybe you have to go up a level. I use /user -> /mnt/user/ for all my mappings and it seems to work all the time.
teh0wner Posted March 3, 2020

25 minutes ago, DZMM said:
@teh0wner Not sure, it looks right. I have to ask: do you have 'use hardlinks instead of copy' selected? Maybe you have to go up a level. I use /user -> /mnt/user/ for all my mappings and it seems to work all the time.

I think I've figured it out, although I'm not quite sure why the behaviour is the way it is. My rclone_mount set-up looks as follows (relevant parts only):

RcloneRemoteName="google_drive_encrypted_vfs"
LocalFilesShare="/mnt/user/data/staged_storage"
RcloneMountShare="/mnt/user/data/remote_storage"
MergerfsMountShare="/mnt/user/data/merged_storage"
LocalFilesShare2="/mnt/user/data/local_storage"

The following configuration fails to create a hard link:

- Deluge downloads to LocalFilesShare2/incomplete
- Deluge moves the file once complete to LocalFilesShare2/complete
- Sonarr attempts to create a hard link to MergerfsMountShare/media

The following configuration creates the hard link successfully:

- Deluge downloads to LocalFilesShare2/complete
- Deluge moves the file once complete to MergerfsMountShare/complete (and it's painfully slow)
- Sonarr creates a hard link to MergerfsMountShare/media successfully

I don't see why mergerfs wouldn't work in the first scenario though, as LocalFilesShare2 is still 'merged' into a mount on MergerfsMountShare.
DZMM Posted March 3, 2020 (Author)

16 minutes ago, teh0wner said:
- Deluge downloads to LocalFilesShare2/complete
- Deluge moves the file once complete to MergerfsMountShare/complete (and it's painfully slow)
- Sonarr creates a hard link to MergerfsMountShare/media successfully

Mergerfs is behaving as expected. LocalFilesShare2/complete and MergerfsMountShare are seen as two different 'drives', so you get a slow CoW, whereas the last two steps move files within MergerfsMountShare, so it hardlinks. ALL dockers have to use paths within the mergerfs mount to avoid issues like this, i.e. Deluge needs to download to MergerfsMountShare/Complete, not a local path. Don't use the local paths for anything; my advice is, for day-to-day usage, forget they're there.
teh0wner Posted March 4, 2020

8 hours ago, DZMM said:
ALL dockers have to use paths within the mergerfs mount to avoid issues like this, i.e. Deluge needs to download to MergerfsMountShare/Complete, not a local path.

Actually, after the upload script has finished its own thing, it looks like the hard links are gone again and the inode count is 1.
DZMM Posted March 4, 2020 (Author)

3 hours ago, teh0wner said:
Actually, after the upload script has finished its own thing, it looks like the hard links are gone again and the inode count is 1.

Which makes sense, because there's now a real copy of the file on gdrive; the local hardlink 'version' has 'gone', so there's no need for a local hardlink anymore!
takeover Posted March 4, 2020

I got Discord webhook notifications working. Not sure if you would like to implement this into your scripts.
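For anyone curious how this works, the Discord side is just an HTTP POST of a small JSON payload to the channel's webhook URL. A minimal sketch, with the webhook URL as a placeholder (grab the real one from the channel's Integrations settings; the username and message text are just examples):

```shell
#!/bin/sh
# Build and send a Discord webhook notification.
# WEBHOOK_URL is a placeholder; replace with your channel's real webhook URL.
WEBHOOK_URL="https://discord.com/api/webhooks/YOUR_ID/YOUR_TOKEN"

# Assemble the JSON payload Discord expects.
payload=$(printf '{"username": "unraid", "content": "%s"}' "rclone upload finished")
echo "$payload"

# Uncomment to actually send it:
# curl -s -H "Content-Type: application/json" -X POST -d "$payload" "$WEBHOOK_URL"
```

You could call something like this at the end of the upload script to get a ping when a run completes.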
Urya Posted March 4, 2020

Hi,

First: thanks for this comprehensive guide and for getting this all to work. It's been a great help to me so far.

I recently moved house and had to restart my server. Since then, I haven't been able to get rclone to work properly. The mounting process goes fine (I had to remove some leftover files before the mountpoint would stick, but it's been successful ever since) and both my local and mount_rclone folders seem fine. The mergerfs folder, however, seems to be causing weird issues. I can't open it in any way to check what's inside: Unraid's web explorer loads indefinitely, I can't use the samba share for the mergerfs folder, and even the terminal gets stuck when entering an 'ls'. My dockers can't peek inside either and think my folders are gone. What am I doing wrong? I've uploaded my recent log here: https://pastebin.com/NYU5ejj0
DZMM Posted March 4, 2020 (Author)

@Urya see a few posts up about the mergerfs build issues and how to fix them.
Urya Posted March 4, 2020

1 hour ago, DZMM said:
@Urya see a few posts up about the mergerfs build issues and how to fix them.

That fixed it, thanks! Sorry, I didn't realize it covered a similar issue; it sounded a bit different.
trapexit Posted March 4, 2020

I'm not fully following all the issues here, so I'm just going to make a few statements:

1) The container image I provided is just a builder. It grabs the source and builds a static binary. I didn't realize people were using it as a dependency for other things; I made it to make it easier to build a static binary manually. The issue was that it was pulling from the master branch and I checked in something that broke building on Alpine (which I use for static building in the image). That image will now build the latest tagged version, which right now is 2.29.0.

2) Not sure I follow the situations where people said things blocked. There are no known bugs in mergerfs, though there are some kernel bugs in recent kernel versions. Not sure what version Unraid uses or if it is impacted. I'd need more info / a reproducible example to comment further.

3) Hardlinks: if the software is trying to link across devices, it'll fail. If you have a path-preserving policy and a link would result in crossing devices, it'll fail. Only if link/create are not path-preserving will it always succeed (since it clones the path on the same branch). Even if the underlying device is being linked to/from mergerfs, it will fail. *Any* cross-mount link or rename will fail.
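To make the third point concrete: whether a link can succeed inside the pool depends on the create policy; mergerfs policies whose names start with "ep" (existing path) are path-preserving. A sketch of a mount entry below; the branch paths are hypothetical and the option set is illustrative, not a copy of the script's actual options, so check the mount script for what it really uses:

```
# fstab-style mergerfs entry with a non-path-preserving create policy
# (ff = first found): create/link can clone the parent path onto the
# chosen branch, so in-pool links generally succeed.
/mnt/user/local/gdrive:/mnt/user/mount_rclone/gdrive  /mnt/user/mount_mergerfs/gdrive  fuse.mergerfs  rw,use_ino,allow_other,category.create=ff  0 0

# Path-preserving alternative (epmfs = existing path, most free space):
# a link whose source and target paths resolve to different branches fails.
#   ...,category.create=epmfs,...
```

Either way, a link or rename that crosses the mergerfs mount boundary itself (e.g. from a branch path directly to the pooled path) will always fail, per trapexit's last sentence.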
DZMM Posted March 4, 2020 (Author)

Thanks @trapexit. I think the message is getting out now, and those unlucky enough to have installed the build with issues have probably updated by now.
trapexit Posted March 4, 2020

Well, this is where I'm a bit confused. The "broken build" was literally a broken build: it didn't spit out a binary. So it couldn't have led to any runtime mergerfs issues, but there seem to be claims that it did.
Tuftuf Posted March 6, 2020

I'm already using rclone encrypted with a tdrive on another OS/app (PlexGuide), but I'm just not quite following how to mount my library on Unraid. Watching the video, there are some differences. It could just be that my head is hurting. What goes as the root folder? I can make the service keys, and I have another system where I can look at the rclone.conf that's mounting this tdrive. I just need to get the final pieces together to get it mounted on Unraid.

root@Firefly:~# rclone config
No remotes found - make a new one
n) New remote
s) Set configuration password
q) Quit config
n/s/q> n
name> 13
Type of storage to configure.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
Storage> 13
client_id> 1.....................................apps.googleusercontent.com
Google Application Client Secret
Setting your own is recommended.
Enter a string value. Press Enter for the default ("").
client_secret> .......................
Scope that rclone should use when requesting access from drive.
Enter a string value. Press Enter for the default ("").
Choose a number from below, or type in your own value
 1 / Full access all files, excluding Application Data Folder.
   \ "drive"
scope> 1
ID of the root folder
Enter a string value. Press Enter for the default ("").
root_folder_id>

Some progress:

Current remotes:

Name     Type
====     ====
tcrypt   crypt
tdrive   drive
takeover Posted March 6, 2020

If you want to add the root folder ID manually, you can find it by going to your Google Drive in your web browser; the folder ID is in the URL. Or you can skip it, and when rclone asks if it is a Team Drive, select yes and it will let you pick your Team Drive. Either way, same result.
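Either route ends up as one of two keys on the drive remote in rclone.conf. A sketch of what the finished remote looks like; all the values here are placeholders, so substitute your own IDs and secrets:

```
[tdrive]
type = drive
client_id = YOUR_CLIENT_ID.apps.googleusercontent.com
client_secret = YOUR_CLIENT_SECRET
scope = drive
# Either point at a specific folder (the ID from the Drive URL)...
root_folder_id = 0AbCdEfGhIjKlMnOpQ
# ...or at a Team Drive / Shared Drive instead:
# team_drive = 0AbCdEfGhIjKlMnOpQ
```

Comparing this against the rclone.conf on the other system that already mounts the tdrive should show which key it's using.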