Posts posted by Roudy

  1. 19 hours ago, stefan416 said:

    The hiccup I have now is that Sonarr currently has the root folder set to data/media/tv, since everything by default is local. If I then transfer something to the cloud, it breaks, as the files are now only located at mount_mergerfs. I've tried making Sonarr's root folder mount_mergerfs, but it then transfers all downloads to the data/gdrive_media folder and uploads them to the cloud. It does this with an actual move instead of a hardlink too, so it's slow.

     

    There are going to be a couple of ways to approach this. 

     

    First, we will need to fix some parts of your script that may be causing conflicts. You will want to distinctly separate LocalFilesShare from LocalFilesShare2, because the items in LocalFilesShare are awaiting upload to the remote drive. In your case, you have LocalFilesShare2 "/mnt/user/data/media" as a subdirectory of LocalFilesShare "/mnt/user/data". You will want something more like the below.

     

    LocalFilesShare="/mnt/user/data/upload"

    LocalFilesShare2="/mnt/user/data/media"

     

    The easiest way, I think, to accomplish what you want is to have two separate directories so you can easily see which items are remote and which are local. With this method, you wouldn't need LocalFilesShare2 in your script. You would have your local media in "/mnt/user/data/media/tv/" and remote media in "/mnt/user/mount_mergerfs/gdrive_media/tv/". When you set that up, you will establish another root folder in Sonarr or Radarr (Settings > Media Management > Root Folders) and add both paths for your TV shows. Then, when you want to move something to the cloud, you can just edit the series' path from "/mnt/user/data/media/tv/Example Show" to "/mnt/user/mount_mergerfs/gdrive_media/tv/Example Show" and Sonarr will even offer to move the data for you.
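
    A rough sketch of the resulting layout (paths from your script; the show name is just an example):

        /mnt/user/data/media/tv/Example Show/                    # local library (stays on the array)
        /mnt/user/mount_mergerfs/gdrive_media/tv/Example Show/   # remote library (view of the cloud mount)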

     

    Once you have established that, you will just need to add the other folder to your Plex/media manager library. For Plex, edit the library (Manage Library > Edit...) and add the other folder under the "Add folders" section. Plex will then scan both locations for media, and after a short time it will detect the move from one folder to the other and point the media there.

     

    The more difficult way, in my opinion, would be to use LocalFilesShare2 and merge the local and remote folders entirely (see the sketch below), but you will have to manage each new show as it is added to decide whether it goes to the local path or the remote one. After that, it should stay local. I use the word should* here because I didn't test it myself to confirm the behavior.
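
    For reference, a minimal sketch of what such a merged mount can look like with mergerfs. The library paths come from your script, but the rclone mount point and the flag set are illustrative assumptions, not the full set the guide's script uses:

        # merge the local branch and the rclone mount into one view;
        # category.create=ff sends new files to the first (local) branch
        mergerfs /mnt/user/data/media:/mnt/user/mount_rclone/gdrive_media \
            /mnt/user/mount_mergerfs/gdrive_media \
            -o rw,allow_other,category.create=ff,cache.files=partial,dropcacheonclose=true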

     

    I hope that makes sense or helps. 

  2. 21 hours ago, Blindkitty38 said:

    Yes, I am on the same subnet. It's a very basic network configuration for the time being.

     

    I noticed you blacked out the IP in the photo from your original post. If it is a private IP, you don't have to black it out. What is the address you are trying to reach Deluge at? Do you have any VPNs or multiple network cards on the system you are trying to access Deluge on?

  3. On 11/12/2021 at 4:19 AM, francishefeng59 said:

    Hi, 

    I have switched from sabnzbd to sabnzbdvpn in order to use privadovpn. After configuring it and importing ovpn.config, I could not log onto the web UI, nor could Radarr and Sonarr connect to it. With the VPN disabled, it works fine. I'd appreciate it if anyone could tell me where I went wrong.

     

    You will only be able to access the interface if it makes a successful connection to the VPN provider. What do the logs say?
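
    If it helps, a quick way to pull the container's log output. The container name here is an assumption, so substitute whatever you named yours:

        # tail the last 100 lines of the container log (name is an example)
        docker logs --tail 100 sabnzbdvpn

    The openvpn handshake errors usually show up there or in the supervisord.log inside the container's /config volume.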

  4. On 11/10/2021 at 8:27 PM, stefan416 said:

    I posted earlier and in the meantime tried combing through the posts here. I'm trying to set up the following:

     

    - A standard Unraid media setup with Plex, downloaders, and the media managers that I currently have in place (uses /mnt/user/data currently).

    - Set up a remote library, using the guide on this forum, where I add specific files from the local library to be uploaded and held only in the cloud.

     

    The question is, how do I accomplish this? Do I rejig everything so that my current library is a subfolder within the "local" folder created in the mount scripts? Do I simply add my current library directory to LocalFilesShare2 in the optional settings in the mount script?

     

    Since I have very limited experience with this, I've tried experimenting with test files. If I add my library to LocalFilesShare2, the files appear in the mergerfs mount, but moving any of them to the "local" mount results in a full file move instead of a hardlink. I'm guessing that's because it's traversing Unraid shares?

     

    Anyone know the best way to accomplish this?

     

    I want to make sure I understand everything correctly. Do you want to have a share that contains both local and remote files, where you choose which stay local and which go remote? Or are you trying to keep a cloud copy of certain files as well?

  5. 3 hours ago, Playerz said:

    I am still experiencing the issue with the permissions, even after I updated the mount script and put in the /000 part.

     

    Did you restart your system after making the change? Is there a umask variable on your Sonarr docker? Can you show what an "ls -l" looks like for your gdrive directory?
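
    For example, something like the below; the mount path is an assumption, so substitute your actual mount point. With nobody:users and --umask 000 in place, you would expect wide-open entries:

        ls -l /mnt/user/mount_rclone/gdrive_media
        # healthy entries should look roughly like:
        # drwxrwxrwx 1 nobody users 0 Nov 12 10:00 tv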

     

    3 hours ago, Playerz said:

    It's true that the path Sonarr should move it to doesn't exist yet, but I think Sonarr should create that path itself?

     

    Yes, Sonarr should make the path, but it does not have the rights to do it. 

  6. On 11/6/2021 at 6:57 PM, T0rqueWr3nch said:

    Nope, I haven't noted any ill effects so far.

     

    Unraid's umask seems to be set to wide open (checked by running umask -S which returns 000), so I really don't understand what could've changed. Reviewing this stuff does make me wonder again if I should be doing more for ransomware protection...

     

    I just made the switch to nobody:users as well and will see if it causes any issues for me. I believe it's definitely a safer implementation. If there aren't any issues after a week or so, I think you should submit it as a pull request.

     

    I'm a little confused what the change was as well...

     

    If anyone wants to try the nobody:users permissions for the rclone mount, add the below to the rclone_mount script.

     

    # create rclone mount
    # (uid 99 / gid 100 map to nobody:users on Unraid)

        --uid 99 \
        --gid 100 \
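
    For context, a minimal sketch of where these flags sit in the mount command. The remote name, mount point, and the other flags shown are assumptions standing in for whatever your script already uses:

        rclone mount \
            --allow-other \
            --umask 000 \
            --uid 99 \
            --gid 100 \
            gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media &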

  7. 2 hours ago, T0rqueWr3nch said:

    While troubleshooting this, I also used this as an opportunity to update how I mount rclone. I passed the uid and gid arguments to mount as "nobody" and "users" (UID 99; GID 100). I might go back to just mounting as root again. Thoughts?

     

    I tried the same thing while troubleshooting. From what I've researched, --allow-other covers the bases for other users using the share. I was thinking of making it always use the nobody:users permissions, though. I haven't thought through whether it would cause any issues I may be overlooking. Have you had any problems since running it that way?

  8. 1 hour ago, T0rqueWr3nch said:

    I assume you added these arguments to the rclone mount command?

     

    Yes, I added them in the "# create rclone mount" section. 

     

    1 hour ago, T0rqueWr3nch said:

    This feels like a umask issue...

     

    **Correction: you only need to add the line below to the "# create rclone mount" section.**

     

        --umask 000 \

     

     

  9. I have the same issue with my Windows 10 VM. There are no logs in unRAID, libvirt, or Windows that show any kind of error. I've noticed that it seems to happen while playing a game and not really at any other time.

  10. On 4/27/2021 at 5:23 AM, INTEL said:

    Could I just create folders inside mergerfs, Downloads/completed and Downloads/incomplete, for my DelugeVPN and download torrents directly in there?

     

    You will point Deluge to those directories. I would recommend using the "Labels" plugin in Deluge so it separates your content as it downloads and puts it into the correct folder.

     

    On 4/27/2021 at 5:23 AM, INTEL said:

    Not sure what MountFolders is used for?

     

    That is just a list of folders that the script will create when it runs: the directories where your media awaiting upload (LocalFilesShare) goes, plus the base file structure for your mergerfs mount. You can add or remove folders there as needed.
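
    As an illustration, it is a comma-separated list along these lines. The folder names are only an example, and the escaped-brace syntax is how I recall the script's default being written, so check yours:

        MountFolders=\{"downloads/complete,downloads/incomplete,movies,tv"\}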

  11. 20 hours ago, wgstarks said:

    I used this docker in the past and really liked it, but I ran into a few issues with Deluge v2, so I switched to qBittorrentVPN. I saw that the app has been updated, so I thought I would give it a try, but it doesn't seem to be starting. I can't connect to the webUI. I looked at the supervisord log and it seems to be an issue with PIA. I'm using exactly the same openvpn directory with both dockers, and qBit connects without any issue, so I'm not sure what's going on. I've attached supervisord.log.

     


     

    I had a similar issue where the Deluge daemon and web UI weren't starting. I was using an older image and ended up deleting the image and its directory and starting from scratch. It worked after that, but I was unable to determine the root cause from the docker log or the logs in the container. I couldn't even start the service manually.

  12. On 4/23/2021 at 2:40 AM, INTEL said:

    It's running every hour.

    It does upload files, I'm sure; the remote server just doesn't see them (mount only).

     

    If the files are uploaded, you should see them on the remote PC. They are either there or they're not, so make sure you are mounting and uploading to the same location.
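
    A quick way to check is to list the remote directly on both machines, bypassing the mounts, and confirm the paths match. The remote name here is an assumption from the guide; substitute your own:

        # list top-level folders on the remote itself
        rclone lsd gdrive_media_vfs: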

     

    On 4/23/2021 at 2:40 AM, INTEL said:

    I don't understand that bit about the cache. Not sure how to disable the rest of the script and leave only the mount part?

     

    You can keep the cache; just be aware that if you update a file, it may appear as a duplicate. It won't really improve performance unless you access the same file multiple times: it downloads the file the first time and plays from cache after that. Hope that makes sense.

  13. 1 hour ago, INTEL said:

    Actually, what I find is that my remote server isn't aware of new uploads; I cannot figure out why.

     

    How often is your upload script running? Can you manually run it and see if it appears on your remote computer?

     

    1 hour ago, INTEL said:

    Still not sure what my mount script should look like on the remote server?

     

    You will just need the first part of the script (Create Rclone Mount); you won't need anything after that (see the sketch below). I'd also caution against using a cache with the rclone mount if you are only reading. If you update a file on your system and it doesn't have the same file extension, it won't overwrite the existing media if it's in cache, and the media server will see duplicate files for the same media. It will eventually work itself out as the cache expires, but it's something to be aware of.
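
    A minimal read-side sketch, assuming the remote name from the guide and an example mount point; both are placeholders for your own values:

        # read-only mount on the second server; no mergerfs or upload needed
        rclone mount \
            --allow-other \
            --read-only \
            gdrive_media_vfs: /mnt/user/mount_rclone/gdrive_media &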

  14. On 4/17/2021 at 2:04 PM, chris_netsmart said:

    I have tried to start again by uninstalling it and reinstalling it, but I am still not able to reach its web GUI.

     

     

    [deluge settings screenshots]

    [jdownloader settings screenshots]

    [logs]

    I'm assuming you are using Djoss's image. In the settings it states "NOTE: This applies only when Network Type is set to Bridge. For other network types, port 5800 should be used instead." Give 5800 a try. 

  15. 2 hours ago, Stephan296 said:

    @Roudy

    
    [v3.0.6.1196] System.UnauthorizedAccessException: Access to the path is denied.
      at System.IO.File.Move (System.String sourceFileName, System.String destFileName) [0x00116] in <254335e8c4aa42e3923a8ba0d5ce8650>:0 
      at NzbDrone.Common.Disk.DiskProviderBase.MoveFileInternal (System.String source, System.String destination) [0x00000] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskProviderBase.cs:268 
      at NzbDrone.Mono.Disk.DiskProvider.MoveFileInternal (System.String source, System.String destination) [0x000a3] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Mono\Disk\DiskProvider.cs:306 
      at NzbDrone.Common.Disk.DiskProviderBase.MoveFile (System.String source, System.String destination, System.Boolean overwrite) [0x000e1] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskProviderBase.cs:255 
      at NzbDrone.Common.Disk.DiskTransferService.TryMoveFileVerified (System.String sourcePath, System.String targetPath, System.Int64 originalSize) [0x00047] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskTransferService.cs:487 
      at NzbDrone.Common.Disk.DiskTransferService.TransferFile (System.String sourcePath, System.String targetPath, NzbDrone.Common.Disk.TransferMode mode, System.Boolean overwrite) [0x004b9] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Common\Disk\DiskTransferService.cs:367 
      at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.TransferFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Tv.Series series, System.Collections.Generic.List`1[T] episodes, System.String destinationFilePath, NzbDrone.Common.Disk.TransferMode mode) [0x00129] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\EpisodeFileMovingService.cs:116 
      at NzbDrone.Core.MediaFiles.EpisodeFileMovingService.MoveEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode) [0x00046] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\EpisodeFileMovingService.cs:79 
      at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeEpisodeFile (NzbDrone.Core.MediaFiles.EpisodeFile episodeFile, NzbDrone.Core.Parser.Model.LocalEpisode localEpisode, System.Boolean copyOnly) [0x001ab] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\UpgradeMediaFileService.cs:77 
      at NzbDrone.Core.MediaFiles.EpisodeImport.ImportApprovedEpisodes.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.EpisodeImport.ImportMode importMode) [0x0029b] in M:\BuildAgent\work\63739567f01dbcc2\src\NzbDrone.Core\MediaFiles\EpisodeImport\ImportApprovedEpisodes.cs:109 

     

     

    Are the files local on unRAID or on a remote system? Which Sonarr docker are you using?

     

    Can you verify that the "Root Folders" path is correct under "Settings > Media Management"?

     

    Also, you can try running "Docker Safe New Perms" under "Tools" to see if that helps as well.
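
    For reference, my understanding is that the tool roughly amounts to resetting ownership and permissions on your shares back to Unraid's defaults. A hand-rolled sketch of the idea, not the tool's exact code, with an example path:

        # reset ownership and permissions on one share (illustrative only)
        chown -R nobody:users /mnt/user/data
        chmod -R u-x,go-rwx,go+u,ugo+X /mnt/user/data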

  16. 8 hours ago, INTEL said:

    The second server doesn't pick up new movies; I can see only the data I had uploaded before running the mount script.

     

    I'm assuming you are using Plex. The remote computer won't detect the file changes on its own; you will have to scan the library manually or set it to scan periodically. There is also an "Autoscan" docker from Hotio in the app store that may trigger the scanning as well. I haven't set it up on my remote box because periodic scanning works fine for my case.

  17. 4 hours ago, INTEL said:

    To sum it up....

     

    I have 2 Unraid servers:

    Server no. 1 needs to download files with Sonarr/Radarr/DelugeVPN and move them to rclone_mount.

    Server no. 2 only needs to stream those files from rclone_mount.

     

    Any idea how to set it up?

     

    I run a similar setup. Server number 2 wouldn't need the mergerfs mount, since it is not downloading/uploading any content to cloud storage. You would just have to mount the rclone remote as read-only and use the same encryption keys (see below). I personally created separate OAuth accounts to avoid API limits. Hope that helps.
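
    To confirm the keys match, you can compare the crypt remote's settings on both servers. The remote name below is an assumption from the guide; use your own:

        # print the remote's config on each server; 'password' and 'password2'
        # (the obscured key and salt) must be identical on both machines
        rclone config show gdrive_media_vfs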
