Posts posted by stefan416

  1. 1 hour ago, DZMM said:

    read up on --exclude on the rclone forums - you can stop files being uploaded by folder, type, age, name etc

     

    I was thinking about how one would implement --exclude. It seems I would have to put everything under the 'LocalFilesShare'.

     

    I understand it for use with the downloads folder, as you would want that to stay local. What I can't wrap my head around is how to keep folders local by default and then move them to a separate folder for upload, because, as is, the script uploads files/folders exactly as they are structured in the 'LocalFilesShare'.
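
    Edit: after reading a bit more, I think the filters belong on the upload side rather than the mount. A rough sketch of what I have in mind, using real rclone flags but guessing at where they plug in (either the upload script's extra command slots or its rclone move line directly); my remote is gdrive_media:

    # keep the downloads folder and very fresh files local; everything else uploads
    rclone move /mnt/user/data/upload gdrive_media: --exclude "downloads/**" --min-age 15m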

     

  2. On 11/15/2021 at 12:28 PM, Roudy said:

     

    Your thinking is incorrect. The easiest way to see this on Unraid is under the "Main" tab. On the right-hand side of the listed disks, you will see an icon. If you click on it for the different disks, you will see the same folders in multiple locations, because the data is split across most of those disks. It also depends on your cache settings as well.

    After thinking this all over, it would seem that having everything bound to 'mount_mergerfs' would be optimal. The only problem with that is that the original mount script is set up for a cloud-only storage solution instead of a hybrid local/cloud setup like mine, right?

     

    Would it make sense, then, to have anything written to 'mount_mergerfs' copy to my 'media' folder instead of 'gdrive_media', which is the standard script config? 'upload/gdrive_media' would then be added as 'LocalFilesShare2' so it is still combined in mergerfs, and the upload script would point to 'upload'. Would that inversion solve the problem?

     

    -data

         |-----> local

         |         |-----> media

         |         |            |-----> downloads

         |         |            |-----> tv

         |         |-----> upload

         |                      |-----> gdrive_media

         |                                      |-----> tv

         |-----> mount_mergerfs
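
    If I went that route, I picture the mount script settings ending up roughly like this (untested, just the tree above restated in the script's own variables):

    LocalFilesShare="/mnt/user/data/local/upload"      # pending uploads; the upload script would point here
    LocalFilesShare2="/mnt/user/data/local/media"      # stays-local library, still combined in the merge
    MergerfsMountShare="/mnt/user/data/mount_mergerfs"
    RcloneMountShare="/mnt/user/mount_rclone"          # unchanged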

  3. 5 minutes ago, Roudy said:

     

    I don't think you are understanding what we're saying. It has nothing to do with the folder and everything to do with which drive Unraid thinks that folder/data is located on. I appended an example of how Unraid would see your file structure, to hopefully help you understand. If you want instant transfers, you should look into the second option I recommended earlier, which I did not test. I quoted it below for you.

     

    user/data (sda1)

                |-----> /media/tv (sda1)

                |-----> /upload/gdrive_media/tv (sda1)

                |-----> /mergerfs/gdrive_media/tv (mergerfs drive)          

                |-----> /downloads (sda1)

    user/mount_rclone (rclone mount drive)

     

     

    Ahhhh, so you're saying it depends on which physical disk it's on? I thought that if everything was under the same share (a folder under /user), the relevant subfolders would be created on all the disks included in that share, so you wouldn't have to cross from one to another. Is my thinking incorrect, or am I still missing the point?
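
    For my own understanding I checked which device actually backs each path (paths are from my setup); a move is only an instant rename when the source and destination report the same filesystem:

    df -h /mnt/user/data/media/tv /mnt/user/mount_mergerfs /mnt/user/mount_rclone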

  4. 4 hours ago, DZMM said:

    Another way to update Plex (better in the long run, IMO) is to add your new merged folder to Plex, since it contains ALL your media (remote, local, and local pending upload), then scan it, and once it's finished scanning, remove your old local folders from Plex. All your media will then be in Plex with just the mergerfs file references.

     

    This is a better solution, because if you decide in the future to move a TV show from local --> local-pending --> remote, the move will be transparent to Plex and you won't have to mess around with emptying the trash or files becoming temporarily unavailable.

    I see. I think that's how my first attempt was set up, but then I ran into the Sonarr issue. It would be nice if Sonarr could have a series in two locations at once instead of only being able to have one root folder.

     

    Regarding having all the dockers reference the mergerfs folder: I currently have my dockers all mounting /user, which I thought would take care of the actual-move issue, but it only works when moving between /media and /upload, and likewise between /downloads and /upload. Maybe it doesn't matter whether I have the mergerfs folder set up under /user vs /user/data? (Rough mapping sketch after the tree below.)

     

    user/data

                |-----> /media/tv

                |-----> /upload/gdrive_media/tv

                |-----> /mergerfs/gdrive_media/tv           

                |-----> /downloads

    user/mount_rclone
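
    The rough mapping sketch I mentioned above: by "mounting /user" in the dockers I mean a single parent volume, like this hypothetical Sonarr mapping (image and paths are just examples). Per Roudy's point, the merge itself is still a different filesystem underneath, so this alone doesn't make moves into it instant:

    docker run -d --name=sonarr \
      -v /mnt/user/data:/data \
      -v /mnt/user/appdata/sonarr:/config \
      lscr.io/linuxserver/sonarr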

  5. 43 minutes ago, Roudy said:

     

    They may not be, because your system sees the mergerfs mount as a separate drive. You can move the files to /data/uploads/ if that is faster; it will upload either way. The important part is to point Plex or your media server at the mergerfs share, as it combines the local and the remote.

    Hmmm. When I first set everything up in the standard config, Unraid created a share "mount_mergerfs", since mount_mergerfs was under /user. I was hoping that by having it all under /data (which is the original media share) everything would move instantaneously. Am I mistaken about how that all works?

  6. 2 hours ago, Roudy said:

     

    You may have to provide an example of what you are referring to on this one. From your mount script, I was picturing your file structure as something like the below. If you're saying there are 2 gdrive_media folders, one inside the other, you may need to unmount and remount, or restart. It shouldn't have ended up like that.

     

    /mnt/user/data/media/movies

    /mnt/user/data/media/tv

    /mnt/user/data/upload/gdrive_media/

     

     

    The restart did the trick! Do you know if transfers from /data/media to /data/mergerfs are supposed to be instant? Moves from /data/media to /data/uploads are.

  7. 3 hours ago, Roudy said:

     

    There are going to be a couple of ways to approach this. 

     

    First, we will need to fix some parts of your script that may be causing conflicts. You will want to distinctly separate LocalFilesShare from LocalFilesShare2, because the items in LocalFilesShare are awaiting upload to the remote drive. In your case, you have LocalFilesShare2 "/mnt/user/data/media" as a subdirectory of LocalFilesShare "/mnt/user/data". You will want something more like the below.

     

    LocalFilesShare="/mnt/user/data/upload"

    LocalFilesShare2="/mnt/user/data/media"

     

    The easiest way, I think, to accomplish what you want is to have 2 separate directories so you can easily see which items are remote and which are local. With this method, you wouldn't need LocalFilesShare2 in your script. You would have your local media in "/mnt/user/data/media/tv/" and remote media in "/mnt/user/mount_mergerfs/gdrive_media/tv/". When you set that up, you will establish another root folder in Sonarr or Radarr (Settings > Media Management > Root Folders) and add both paths for your TV shows. Then, when you want to move a series to the cloud, you can just edit its path from "/mnt/user/data/media/tv/Example Show" to "/mnt/user/mount_mergerfs/gdrive_media/tv/Example Show" and Sonarr will even offer to move the data for you.

     

    Once you have established that, you will just need to add the other folder to your Plex/media manager library. For Plex, you edit the library (Manage Library > Edit...) and add the other folder under the "Add folders" section. Plex will then scan both locations for media, detect the change from one folder to another after a short time, and point the media there.

     

    The more difficult way, in my opinion, would be to use LocalFilesShare2 and merge the local and remote folders entirely, but you will have to manage each new show as it is added, deciding whether it goes to the local path or the remote one. After that, it should stay local. I use the word should* here because I didn't test this myself to verify the behavior.

     

    I hope that makes sense or helps. 

     

     

    You're right, that does seem like the better way to do it, since it's one step either way (whether through Sonarr/Radarr or in Windows Explorer).

     

    How would you suggest setting up my upload script? When I changed my LocalFilesShare to /mnt/data/upload I seem to get a "gdrive_media" folder within my gdrive_media folder, along with my outlined folders.

     

    On another note: would there be any harm in duplicating the mount script so that one copy runs at array startup (along with the unmount script, as outlined in the guide) and another instance runs on a timed interval?
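
    Partly answering my own question: if I'm reading the mount script right, it bails out when the mount already exists (the mountcheck test), so instead of duplicating it I could just give the single script a custom cron schedule in User Scripts, something like:

    # User Scripts > mount script > Schedule: Custom
    */10 * * * *    # re-run every 10 minutes; harmless if the mount is already up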

  8. 3 hours ago, Roudy said:

     

    I want to make sure I understand everything correctly. You want to have a share that has local and remote files in it, and you will choose which ones you want local and which are remote? Or are you trying to choose certain files to have a copy of in the cloud as well?

    The former. I'd like to upload extra (less-watched) media to the cloud. I think I have it working for the most part, with some slight hiccups. I've set it up as follows:

     

    Since the mount script creates two shares (mount_mergerfs and mount_rclone), I have the following structure:

     

    *data

    ------> media (which contains my original library and download folders)

    ------> gdrive_media (functions as the "local" folder that the mount script uses)

          

    *mount_mergerfs

    *mount_rclone

     

    I use Sonarr/Radarr along with qBittorrent and SABnzbd. By default I want everything I download to be stored locally, so everything automatically gets put into the data/media/ folder. I then manually choose, via my PC, what to move over to the data/gdrive_media folder, which then uploads correctly.

     

    I have Plex pointed at mount_mergerfs and it successfully finds all the media combined from data/media and data/gdrive_media.

     

    The hiccup I have now is that Sonarr currently has its root folder set to data/media/tv, since everything is local by default. If I then transfer something to the cloud, it breaks, because the files are now only located under mount_mergerfs. I've tried making Sonarr's root folder mount_mergerfs, but then it transfers all downloads to the data/gdrive_media folder and uploads them to the cloud. It does this with an actual move instead of a hardlink too, so it's slow.

     

    Here is my mount script:
     

    #!/bin/bash
    
    ######################
    #### Mount Script ####
    ######################
    ## Version 0.96.9.2 ##
    ######################
    
    ####### EDIT ONLY THESE SETTINGS #######
    
    # INSTRUCTIONS
    # 1. Change the name of the rclone remote and shares to match your setup
    # 2. NOTE: enter RcloneRemoteName WITHOUT ':'
    # 3. Optional: include custom command and bind mount settings
    # 4. Optional: include extra folders in mergerfs mount
    
    # REQUIRED SETTINGS
    RcloneRemoteName="gdrive_media" # Name of rclone remote mount WITHOUT ':'. NOTE: Choose your encrypted remote for sensitive data
    RcloneMountShare="/mnt/user/mount_rclone" # where your rclone remote will be located without trailing slash  e.g. /mnt/user/mount_rclone
    RcloneMountDirCacheTime="720h" # rclone dir cache time
    LocalFilesShare="/mnt/user/data" # location of the local files and MountFolders you want to upload without trailing slash to rclone e.g. /mnt/user/local. Enter 'ignore' to disable
    RcloneCacheShare="/mnt/user/mount_rclone" # location of rclone cache files without trailing slash e.g. /mnt/user0/mount_rclone
    RcloneCacheMaxSize="350G" # Maximum size of rclone cache
    RcloneCacheMaxAge="336h" # Maximum age of cache files
    MergerfsMountShare="/mnt/user/mount_mergerfs" # location without trailing slash  e.g. /mnt/user/mount_mergerfs. Enter 'ignore' to disable
    DockerStart="sabnzbd plex sonarr radarr qbittorrentvpn emby" # list of dockers, separated by space, to start once mergerfs mount verified. Remember to disable AUTOSTART for dockers added in docker settings page
    MountFolders=\{"movies,movies4k,tv,tv4k"\} # comma separated list of folders to create within the mount
    
    # Note: Again - remember to NOT use ':' in your remote name above
    
    # OPTIONAL SETTINGS
    
    # Add extra paths to mergerfs mount in addition to LocalFilesShare
    LocalFilesShare2="/mnt/user/data/media" # without trailing slash e.g. /mnt/user/other__remote_mount/or_other_local_folder.  Enter 'ignore' to disable
    LocalFilesShare3="ignore"
    LocalFilesShare4="ignore"
    
    # Add extra commands or filters
    Command1="--rc"
    Command2=""
    Command3=""
    Command4=""
    Command5=""
    Command6=""
    Command7=""
    Command8=""

     

  9. I posted earlier and have since tried combing through the posts here. I'm trying to set up the following:

     

    - A standard Unraid media setup with Plex, downloaders, and the media managers I currently have in place (currently using /mnt/user/data).

    - A remote library, set up using the guide on this forum, to which I add specific files from the local library so they are uploaded and held only in the cloud.

     

    The question is how I accomplish this. Do I rejig everything so that my current library is a subfolder of the "local" folder created by the mount scripts, or do I simply add my current library directory to "LocalFilesShare2" in the optional settings of the mount script?

     

    Since I have very limited experience with this, I've experimented using test files: if I add my library to "LocalFilesShare2", the files appear in the mergerfs mount, but moving any of them to the "local" mount results in a full file move instead of a hardlink. I'm guessing that's because it's traversing Unraid shares?

     

    Anyone know the best way to accomplish this?

  10. Hi all,

     

    I'm trying to set up service accounts using the instructions found on the Autoclone git, but I can't for the life of me even complete step 1. I'm no programmer and have relied on the various guides for setting up different functions on Unraid, which have been a tremendous help given my lack of programming knowledge. I've installed python3 via NerdPack, but where do I go from there? I'd appreciate any help anyone can provide.
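
    For context, this is roughly as far as I can get on my own. A generic sketch of what I assume step 1 is after (the repo URL and file names are placeholders, not the project's actual instructions):

    cd /mnt/user/appdata
    git clone <repo-url> autoclone       # placeholder: whatever the project's git address is
    cd autoclone
    pip3 install -r requirements.txt     # assuming it ships a requirements file for its Python deps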

  11. Hi,
     

    I'm looking to run WireGuard in conjunction with the Pihole container and was wondering whether it's possible to select a NIC other than the standard br0, or whether there's a better way to set everything up. I can connect from my phone to the tunnel but get resolution errors because, I'm assuming, the remote client isn't talking to Pihole. Is there a best practice for setting the two up if I have two NICs? Thank you.
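
    To be clearer about what I'm after: I want the remote peer to resolve DNS through the Pihole container, i.e. something like this in the phone's tunnel config (the address is just a placeholder for my Pihole container's IP):

    [Interface]
    # point the peer's DNS at the Pihole container (example address)
    DNS = 192.168.1.53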
