stefan416

Everything posted by stefan416

  1. I was thinking about how one would implement --exclude. It seems I would have to put everything under the 'LocalFilesShare'. I understand it for use with the downloads folder, as you would want that to stay local, but I can't wrap my head around how to implement it so that folders stay local by default and only move to a separate folder for upload, because, as is, the script uploads files/folders as they are structured in 'LocalFilesShare'. (A rough sketch of an exclude-based upload step is after this list.)
  2. After thinking this all over, it would seem that having everything bound to 'mount_mergerfs' would be optimal. The only problem with that is that the original mount script is set up for a cloud-only storage solution instead of hybrid local/cloud storage like mine, right? Would it make sense, then, to have anything written to 'mount_mergerfs' copy to my 'media' folder instead of 'gdrive_media', which is the standard script config? 'upload/gdrive_media' would then be added to 'LocalFilesShare2' to combine in mergerfs, and the upload script would then point to 'upload'. It would seem that this problem might be solved by doing this inversion?
     data
     |-----> local
     |       |-----> media
     |       |       |-----> downloads
     |       |       |-----> tv
     |       |-----> upload
     |               |-----> gdrive_media
     |                       |-----> tv
     |-----> mount_mergerfs
  3. Ahhhh, so you're saying it matters which physical disk it's on? I thought that if everything was under the same share (a folder under /user), the relevant subfolders would be created on all the disks included in that share, so you wouldn't have to cross from one disk to another. Is my thinking incorrect on this, or am I still missing the point?
  4. I see. I think that's how my first attempt was set up, but then I ran into the Sonarr issue. It would be nice if Sonarr could have a series in two locations at once instead of only being able to have one root folder. Regarding having all the dockers reference the mergerfs folder: I currently have my dockers all mounting /user, which I thought would take care of how moves are actually written, but it only works when moving to and from /media and /upload, and the same with my /downloads and /upload. Maybe it doesn't matter whether I have the mergerfs folder set up under /user vs /user/data?
     user/data
     |-----> /media/tv
     |-----> /upload/gdrive_media/tv
     |-----> /mergerfs/gdrive_media/tv
     |-----> /downloads
     user/mount_rclone
  5. Hmmm. When I first set everything up in the standard config, Unraid created a share "mount_mergerfs", since mount_mergerfs was under /user. I was hoping that by having it all under /data (which is the original media share) everything would move instantaneously. Am I mistaken about how that all works?
  6. The restart did the trick! Do you know if transfers from /data/media to /data/mergerfs are supposed to be instant? Moves from /data/media to /data/uploads are.
  7. You're right, that does seem like the better way to do it, since it's one step regardless (either through Sonarr/Radarr or in Windows Explorer). How would you suggest setting up my upload script? When I changed my LocalFilesShare to /mnt/data/upload, I seem to get a "gdrive_media" folder within my gdrive_media folder along with my outlined folders (a guess at why is sketched after this list). On another note, would there be any harm in duplicating the mount script so that one instance runs at array startup (along with the unmount script, as outlined in the guide) and another instance runs on a timed interval?
  8. The former. I'd like to upload extra (lesser-watched) media to the cloud. I think I have it working for the most part, with some slight hiccups. I've set it up as follows. Since the mount script creates two shares (mount_mergerfs and mount_rclone), I have the following structure:
     *data
        ------> media (which contains my original library and download folders)
        ------> gdrive_media (functions as the "local" folder that the mount script uses)
     *mount_mergerfs
     *mount_rclone
     I use Sonarr/Radarr along with qBittorrent and SABnzbd. By default I want everything I download to be stored locally, so every download automatically goes into the data/media/ folder. I then manually choose, via my PC, what to move over to the data/gdrive_media folder, which then uploads everything correctly. I have Plex pointed to mount_mergerfs and it successfully finds all the media combined from data/media and data/gdrive_media. The hiccup I have now is that Sonarr currently has its root folder set to data/media/tv, as everything is local by default. If I then transfer something to the cloud, it breaks, because the files are now only located under mount_mergerfs. I've tried making Sonarr's root folder mount_mergerfs, but it then transfers all downloads to the data/gdrive_media folder and uploads them to the cloud. It does this with an actual move instead of a hardlink too, so it's slow. Here is my mount script:

     #!/bin/bash
     ######################
     #### Mount Script ####
     ######################
     ## Version 0.96.9.2 ##
     ######################

     ####### EDIT ONLY THESE SETTINGS #######

     # INSTRUCTIONS
     # 1. Change the name of the rclone remote and shares to match your setup
     # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
     # 3. Optional: include custom command and bind mount settings
     # 4. Optional: include extra folders in mergerfs mount

     # REQUIRED SETTINGS
     RcloneRemoteName="gdrive_media" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
     RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash e.g. /mnt/user/mount_rclone
     RcloneMountDirCacheTime="720h" # rclone dir cache time
     LocalFilesShare="/mnt/user/data" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
     RcloneCacheShare="/mnt/user/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
     RcloneCacheMaxSize="350G" # Maximum size of rclone cache
     RcloneCacheMaxAge="336h" # Maximum age of cache files
     MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
     DockerStart="sabnzbd plex sonarr radarr qbittorrentvpn emby" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
     MountFolders=\{"movies,movies4k,tv,tv4k"\} # comma separated list of folders to create within the mount
     # Note: Again - remember to NOT use ':' in your remote name above

     # OPTIONAL SETTINGS

     # Add extra paths to mergerfs mount in addition to LocalFilesShare
     LocalFilesShare2="/mnt/user/data/media" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder. Enter 'ignore' to disable
     LocalFilesShare3="ignore"
     LocalFilesShare4="ignore"

     # Add extra commands or filters
     Command1="--rc"
     Command2=""
     Command3=""
     Command4=""
     Command5=""
     Command6=""
     Command7=""
     Command8=""
  9. I posted earlier and in the meantime have tried combing through the posts here. I'm trying to set up the following:
     - A standard Unraid media setup with Plex, downloaders, and media managers, which I currently have in place (uses /mnt/user/data currently).
     - A remote library, using the guide on this forum, to which I add specific files from the local library to be uploaded and held only in the cloud.
     The question is how do I accomplish this? Do I rejig everything so that my current library is a subfolder within the "local" folder created by the mount scripts? Do I simply add my current library directory to 'LocalFilesShare2' in the optional settings of the mount script? Since I have very limited experience with this, I've tried experimenting with test files: if I add my library to 'LocalFilesShare2', the files appear in the mergerfs mount, but moving any of them to the "local" mount results in a full file move instead of a hardlink (see the device-ID check after this list). I'm guessing that's because it's traversing Unraid shares? Does anyone know the best way to accomplish this?
  10. Hi all, I'm trying to set up service accounts using the instructions found on the Autoclone git, but I can't for the life of me even complete step 1. I'm no programmer and have relied on the various guides on setting up different functions for Unraid, which have been a tremendous help given my lack of programming knowledge. I've installed python3 via NerdPack, but where do I go from there? (A rough sketch of the usual first steps is after this list.) I'd appreciate any help anyone can provide.
  11. Hi, I'm looking to run WireGuard in conjunction with the Pi-hole container and was wondering if it's possible to select another NIC other than the standard br0. Alternatively, is there a best practice for setting the two up if I have two NICs? I can connect via my phone to the tunnel, but I receive resolution errors because, I'm assuming, the remote client isn't communicating with Pi-hole (a sketch of the DNS setting involved is after this list). Thank you.
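
A rough sketch for the --exclude question in post 1, assuming a staging-folder approach: files stay local by default simply because the upload command is only pointed at one staging folder, and --exclude adds a safety net on top. The paths, remote name, and thresholds below are placeholders, not taken from the actual upload script.

     #!/bin/bash
     # Hypothetical upload step: only files placed under STAGING are uploaded;
     # everything else under /mnt/user/data stays local because it is never
     # handed to rclone. Adjust paths and remote name to your setup.
     STAGING="/mnt/user/data/upload/gdrive_media"   # placeholder staging folder
     REMOTE="gdrive_media:"                         # placeholder rclone remote

     # --exclude keeps the downloads tree out even if it lands inside the staging
     # path, --min-age skips files that may still be mid-copy, and
     # --delete-empty-src-dirs tidies the staging folder after a successful move.
     rclone move "$STAGING" "$REMOTE" \
       --exclude "downloads/**" \
       --min-age 15m \
       --delete-empty-src-dirs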
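
On the nested gdrive_media folder from post 7, a guess at the cause, assuming the upload script builds its source path by appending the remote name to LocalFilesShare (the same pattern the mount script uses for its local folder); this is an assumption, not verified against the script:

     # Assumed path derivation (placeholder logic, not quoted from the script)
     LocalFilesShare="/mnt/user/data/upload"
     RcloneRemoteName="gdrive_media"
     LocalFilesLocation="$LocalFilesShare/$RcloneRemoteName"   # -> /mnt/user/data/upload/gdrive_media

     # If that holds, media to be uploaded needs to sit in
     # /mnt/user/data/upload/gdrive_media/tv, .../movies, etc. rather than directly
     # in /mnt/user/data/upload/, otherwise the remote ends up with an extra
     # gdrive_media/gdrive_media/... level.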
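
On the instant-move / hardlink questions in posts 6 and 9: a move is only an instant rename (and a hardlink is only possible) when source and destination are on the same filesystem; anything that crosses into the mergerfs or rclone mounts is a different filesystem, so mv falls back to copy-and-delete. A quick way to see which paths share a filesystem, using standard coreutils (paths are examples):

     # Identical device IDs mean an instant rename/hardlink is possible;
     # different IDs mean a full copy.
     stat -c 'device %d -> %n' /mnt/user/data/media/tv /mnt/user/data/upload /mnt/user/mount_mergerfs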
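
For the service-account setup in post 10: the exact repository isn't confirmed here, but tools of this kind are usually plain Python projects, so step 1 normally amounts to cloning the repo and installing its requirements. A sketch, assuming the project ships a requirements.txt; the URL below is a placeholder, so substitute the repo from the guide being followed:

     # Run from an Unraid terminal after installing python3 (and pip) via NerdPack.
     cd /mnt/user/appdata
     git clone https://github.com/example/autoclone.git   # placeholder URL
     cd autoclone
     python3 -m pip install -r requirements.txt   # installs the project's Python dependencies
     # If pip is missing, "python3 -m ensurepip" may need to be run first.
     # From here, follow the project's README for creating the Google Cloud project
     # and generating the service-account JSON key files.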
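
For the WireGuard/Pi-hole question in post 11: the usual approach is to point the tunnel clients' DNS at the Pi-hole container's IP and make sure that address is reachable from the WireGuard subnet (for example, Pi-hole on br0 with a static LAN IP). A sketch of the relevant client-side config; all addresses, keys, and the hostname are placeholders:

     [Interface]
     PrivateKey = <client-private-key>
     Address = 10.253.0.2/32          # address assigned to this peer
     DNS = 192.168.1.5                # placeholder: the Pi-hole container's br0 IP

     [Peer]
     PublicKey = <server-public-key>
     Endpoint = your.ddns.hostname:51820
     AllowedIPs = 0.0.0.0/0           # route all traffic (including DNS) through the tunnel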