Kaizac

Members · Posts: 470 · Days Won: 2

Posts posted by Kaizac

  1. Two PSAs:

     

    1. If you want to use multiple local folders in your union/merge mount, some of which are read-only (RO), you can use the following merge command and Sonarr will still work, with no more access denied errors. Use either mount_unionfs or mount_mergerfs as the mount point, depending on which you use.

    mergerfs /mnt/disks/local/Tdrive=RW:/mnt/user/LocalMedia/Tdrive=NC:/mnt/user/mount_rclone/Tdrive=NC /mnt/user/mount_unionfs/Tdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
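    For reference, the branch tags in that command come from mergerfs's branch-mode syntax. A commented sketch of the same mount, identical in content to the command above and only broken out for readability:

```shell
# =RW : read/write branch; with category.create=ff ("first found"),
#       new files are created on the first branch that allows creates.
# =NC : no-create branch; existing files can be read and modified,
#       but new files are never created on it.
mergerfs \
  /mnt/disks/local/Tdrive=RW:/mnt/user/LocalMedia/Tdrive=NC:/mnt/user/mount_rclone/Tdrive=NC \
  /mnt/user/mount_unionfs/Tdrive \
  -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
```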

    2. If the mount script fails at array start because the docker daemon is still starting up, put the script on a custom schedule and run it every minute (* * * * *). It will then run shortly after the array starts and will work.
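    As an alternative to the every-minute cron trick, the mount script itself can wait for the docker daemon before mounting. A minimal sketch; the wait_for helper and the 60-try limit are my own illustration, not part of the scripts in this thread:

```shell
#!/bin/sh
# Poll a command once per second until it succeeds, up to a limit.
wait_for() {
  max_tries=$1; shift
  i=0
  while [ "$i" -lt "$max_tries" ]; do
    "$@" >/dev/null 2>&1 && return 0   # command succeeded
    i=$((i + 1))
    sleep 1
  done
  return 1                             # gave up
}

# At the top of the mount script you could then gate on the daemon:
#   wait_for 60 docker info || { echo "docker daemon never came up"; exit 1; }
```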

     

    @nuhll both these fixes should be interesting for you.

  2. 5 minutes ago, Roken said:

    I'm a bit confused on how to map Sonarr et al with the updated scripts.  

    This is what I have for Sonarr, but the uploader doesn't move the files at all (it tries to delete them instead)

    
    /config <-> /mnt/user/appdata/sonarr
    /dev/rtc <- /dev/rtc
    /tv <-> /mnt/cache/local/google_vfs/tv
    /downloads <-> /mnt/cache/local/google_vfs/downloads/

    I have the local mergerfs folder on my cache drive so I can saturate my line as it's an SSD and capable of using my full gigabit.

    Am I doing something wrong here?  Seems like rclone is excluding the tv folder in /mnt/cache/local.

     

    You mount your dockers to /mnt/user/mount_mergerfs/google_vfs plus the proper subfolder (tv/movies/downloads/etc.). If you just point them at your cache, they will only see the locally stored files.

  3. I keep getting this error with both automatic and manual imports, but only when upgrading existing files:

     

    Quote

    Couldn't import episode /downloads/tv/Black.Sails.S03E01.XIX.1080p.BluRay.DD5.1.x264-SA89/121fcb5e9a1830912b96e6c54cf23baa.mkv: Access to the path "/tv/Black Sails/Season 3/Black Sails - S03E01 - XIX.mp4" is denied.

    Sab and Sonarr point to the same directory, and new series work fine; it only happens when an existing file needs to be upgraded. Below are the docker run commands for Sab and Sonarr. Hopefully someone has an idea?

    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='sonarr' --net='br0.90' --log-opt max-size='10m' --log-opt max-file='1' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/dev/rtc':'/dev/rtc':'ro' -v '/mnt/user/mount_unionfs/Tdrive/Series/':'/tv':'rw' -v '/mnt/user/mount_unionfs/Tdrive/Downloads/':'/downloads':'rw' -v '/mnt/user/mount_unionfs/':'/unionfs':'rw,slave' -v '/mnt/cache/appdata/sonarr':'/config':'rw' 'linuxserver/sonarr:preview'
    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='sabnzbd' --net='br0.90' --ip='192.168.90.10' --log-opt max-size='10m' --log-opt max-file='1' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'TCP_PORT_8080'='8080' -e 'TCP_PORT_9090'='9090' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/mount_unionfs/Tdrive/Downloads/':'/downloads':'rw' -v '/mnt/user/mount_unionfs/Tdrive/Downloads/Incompleet/':'/incomplete-downloads':'rw' -v '/mnt/user/mount_unionfs/':'/unionfs':'rw,slave' -v '/mnt/cache/appdata/sabnzbd':'/config':'rw' 'linuxserver/sabnzbd' 

     

  4. 12 minutes ago, DZMM said:

    @Kaizac - why do you need the recycling bin?   Maybe that's the problem

     

    
    docker run -d --name='sonarr' --net='br0.55' --ip='192.168.50.95' --cpuset-cpus='1,8,9,17,24,25' --log-opt max-size='50m' --log-opt max-file='3' -e TZ="Europe/London" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/dev/rtc':'/dev/rtc':'ro' -v '/mnt/user/':'/user':'rw' -v '/mnt/disks/':'/disks':'rw,slave' -v '/boot/config/plugins/user.scripts/scripts/':'/scripts':'rw' -v '/boot/config/plugins/user.scripts/scripts/unrar_cleanup_sonarr/':'/unrar':'rw' -v '/mnt/cache/appdata/dockers/sonarr':'/config':'rw' 'linuxserver/sonarr:preview'

     

    I'm not using the recycling bin, but I thought you might be. I just don't get why Sonarr can't upgrade files and gets access denied when Radarr works fine with the same settings.

     

    For downloads I point to unionfs/Tdrive/Downloads and for series to unionfs/Tdrive/Series. Both are mapped r/w, with mount_unionfs on rw,slave. I doubt I need remote path mapping, since Sonarr and Sab are on different IPs, but Radarr doesn't need that either.

  5. I'm trying to run an SSH command through User Scripts. I expect creating a .sh file is the right way to do that, but how do I then trigger it from User Scripts?

     

    And what if I want to make it more complex by running it with this environment variable and cron schedule:

    PYTHONIOENCODING=utf8
    
    * * * * * /path/to/script.sh

    How would I go about that? And what exactly should the path be if, for example, I put the .sh file in the User Scripts/scripts folder?
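    One way this could be wired up, sketched under assumptions (the run_with_utf8 helper and the example path are hypothetical, and the schedule itself is set in the User Scripts UI as a custom cron entry rather than inside the script):

```shell
#!/bin/sh
# Run a target command with PYTHONIOENCODING set, from a User Script.
run_with_utf8() {
  PYTHONIOENCODING=utf8 "$@"
}

# Example (hypothetical path to a .sh stored alongside a User Script):
#   run_with_utf8 /boot/config/plugins/user.scripts/scripts/my_task/run.sh
```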

  6. @DZMM Do you never have the problem of the docker daemon not running when the mount script runs at startup? Nuhll has the same problem as me. I've put in a sleep of 30 but that's not enough; I'll keep increasing it to try to get it fixed. I find it strange that you don't have the same issue.

     

    @nuhll unfortunately I have the permission denied error again. Did it come back for you?

  7. 5 minutes ago, nuhll said:

    Ok ive upgraded unraid and changed it to cache.

     

    Still same problem I cant delete \mount_unionfs\google_vfs\Filme\movie (1995)\file.avi

     

    root@Unraid-Server:~# rm /mnt/user/mount_unionfs/google_vfs/Filme/Jumanji\ \(1995\)/Jumanji\ 1995.avi 
    rm: cannot remove '/mnt/user/mount_unionfs/google_vfs/Filme/Jumanji (1995)/Jumanji 1995.avi': Read-only file system

    Check the r/w settings for your mappings in your docker settings: rw,slave for mount_unionfs and rw for the rest.
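    If the mappings look right and a file still reports a read-only file system, and you are on the mergerfs version of the scripts, mergerfs can report which underlying branch a path actually resolves to through its xattr interface. A diagnostic sketch (assumes a live mergerfs mount, so it cannot be run standalone):

```shell
# List every underlying branch path this pooled file resolves to,
# to see whether the rm is hitting the read-only rclone branch
# instead of a local branch.
getfattr -n user.mergerfs.allpaths \
  "/mnt/user/mount_unionfs/google_vfs/Filme/Jumanji (1995)/Jumanji 1995.avi"
```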

  8. Just now, nuhll said:

    Hmm today i found some errors in radarr:

    20-1-11 12:23:02.4|Warn|ImportApprovedMovie|Couldn't import movie /downloads/completed/Filme/movie.1995.German.AC3.BDRip.x264-DHARMA-xpost/Jumanji.1995.German.AC3.BDRip.x264-DHARMA.mkv [v0.2.0.1459] System.UnauthorizedAccessException: Access to the path "/Archiv/Filme/movie (1995)/movie 1995.avi" is denied. at System.IO.File.Delete (System.String path) [0x00073] in <254335e8c4aa42e3923a8ba0d5ce8650>:0 at NzbDrone.Common.Disk.DiskProviderBase.DeleteFile (System.String path) [0x00068] uin C:\projects\radarr-usby1\src\NzbDrone.Common\Disk\DiskProviderBase.cs:205 at NzbDrone.Core.MediaFiles.RecycleBinProvider.DeleteFile (System.String path) [0x00054] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\RecycleBinProvider.cs:90 at NzbDrone.Core.MediaFiles.UpgradeMediaFileService.UpgradeMovieFile (NzbDrone.Core.MediaFiles.MovieFile movieFile, NzbDrone.Core.Parser.Model.LocalMovie localMovie, System.Boolean copyOnly) [0x0005b] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\UpgradeMediaFileService.cs:52 at NzbDrone.Core.MediaFiles.MovieImport.ImportApprovedMovie.Import (System.Collections.Generic.List`1[T] decisions, System.Boolean newDownload, NzbDrone.Core.Download.DownloadClientItem downloadClientItem, NzbDrone.Core.MediaFiles.MovieImport.ImportMode importMode) [0x00258] in C:\projects\radarr-usby1\src\NzbDrone.Core\MediaFiles\MovieImport\ImportApprovedMovie.cs:109

     

    Any idea? it cant import some new movies? I can access that file via smb, tho. \\192.168.86.103\mount_unionfs\google_vfs\Filme\movie (1995)

    Radarr has had some issues lately. I had the same issues the last couple of days, but now seem to have fixed them. Change your appdata mapping from /user/ to /cache/.

  9. @DZMM sorry but in your first post you wrote this:

     

    Quote

    Either finish any existing uploads from rclone_upload before updating, or move pending downloads from /mnt/user/rclone_upload to the new /mnt/user/local folder, or create a version of the new upload script to upload from /mnt/user/local 

    I've tried to understand what you're saying here, but I really can't. What exactly is the difference between user/rclone_upload and user/local? They are both local shares which you include in your union/merge. Maybe I'm missing something in your changes, since my configuration was a bit different because of the extra local folders.

  10. 1 minute ago, DZMM said:

    --dir-cache-time can be large as you want - uploads flush the cache.  No real reason, just decided to put a larger number in for the (rare) days when no new content added

    --fast-list - yep, that shouldn't be there.  I forgot to delete when I removed the rc command.

     

     

    You have a duplicate --fast-list in your upload code. Probably not an issue, but you might want to remove it.

     

    So far I've just migrated everything over and it seems to be working fine! I don't understand the hardlinking much yet; since I don't use torrents much, I don't have the seeding issue. I will have to change parts of my folder structure, though, to get in line with the new standard.

  11. 1 hour ago, DZMM said:

    Everything is self-contained in the script - no need to touch CA, nerd tools etc except to install the rclone plugin.

     

    Re the mergerfs docker - I'm not an expert, but it's building it direct from the mergerfs author's repo, so the script will only need changing if he updates his build options which I think will be unlikely:

     

    https://github.com/trapexit/mergerfs#build-options

    Ok understood.

     

    In your mount command you have --dir-cache-time 720h. This used to be 72h. Why the change?

    And you've also started using --fast-list in the mount command. I thought it only worked for transfers, for example, and not in the mount command. Has that changed?

  12. 3 hours ago, sauso said:

    Maybe post the step you are having issues with.  Include screenshots of what you doing.

    I commend your effort. The total lack of research or effort put in by a lot of newcomers in this topic is totally killing my interest in helping them.

     

    I've helped a few people here set everything up through a remote session, but all of them put in a lot of work themselves to get it working.

     

    What I mostly see happening now is people coming in, saying it doesn't work, and expecting us to magically understand what the fix is. This whole solution is not mainstream, and it never will be. Even though DZMM and others here like me have it running without huge issues, there are still problems we might forget about because solving them has become second nature to us. Think of API bans, using separate mounts for different dockers, RAM filling up, etc. Something like a video guide will not give the whole picture of how and why it functions the way it does, and will just shift the stress to troubleshooting once it's running.

     

    So my advice: don't waste more time on the people who want it handed to them on a silver platter. Maybe I come across as harsh, but I'm used to trying to find solutions myself and then presenting what I've tried and where I ran into issues.

     
