Everything posted by Kaizac

  1. I'm trying to set up the Service Accounts for uploading. How does this work when you have multiple team drives for different storage, e.g. one for media, one for backups, one for cloud storage? Would you then create multiple upload scripts, each with its own project and SAs?
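Per-drive uploads like that could be sketched as separate rclone calls, each pointing at its own remote and using its own project's credentials via --drive-service-account-file (the remote names, paths, and key locations below are all made up for illustration, not from any real setup):

```shell
# Sketch only: one upload command per team drive, each remote configured
# with a service-account key from its own project. All names/paths are
# assumptions.
rclone move /mnt/user/local/media  tdrive_media_vfs:  \
  --drive-service-account-file /boot/config/sa-media/sa1.json
rclone move /mnt/user/local/backup tdrive_backup_vfs: \
  --drive-service-account-file /boot/config/sa-backup/sa1.json
```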
  2. Well, to each his own. For mainstream media, usenet is vastly superior if set up right. If you have access to private trackers and also need non-mainstream media, then torrents can bring more to the table. Either way, I think with your setup/wishes you can use rclone for your backups and replace Crashplan with it. But you don't need all this elaborate configuration for it. Just create a Gdrive/Team Drive and DO NOT mount it. Just upload to it, and let removed/older data be written to a separate folder within Gdrive. If you get infected, the malware can't directly access your mounted files. And in case encrypted/infected files get uploaded, you will have your old media to roll back to. Just remember that when you want to access your backups, you have to mount the rclone remote/Gdrive first to see the files. Or, if you don't use encryption, you can just see them through the browser.
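The upload-only idea above can be sketched with rclone's --backup-dir flag, which moves replaced or deleted files into another folder on the remote instead of erasing them (remote and folder names here are assumptions for illustration):

```shell
# Sketch: upload-only backup, no mount. Files that would be overwritten or
# deleted on the remote are moved into a dated archive folder instead.
# Remote name and folder layout are assumptions.
rclone sync /mnt/user/backups gdrive_backup:current \
  --backup-dir gdrive_backup:archive/$(date +%Y-%m-%d)
```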
  3. Why are you on torrents? Move to usenet and get rid of that seeding bullshit. Also, you can just direct play 4K from your Gdrive; I do with files up to 80GB and it's fine. You might consider a seedbox though: you can use torrents and move the files to Gdrive at gigabit speed.
  4. Two PSAs:
     1. If you want to use more local folders in your union/merge folder which are RO, you can use the following merge command and Sonarr will work, with no more access denied errors. Use either mount_unionfs or mount_mergerfs depending on what you use:
     mergerfs /mnt/disks/local/Tdrive=RW:/mnt/user/LocalMedia/Tdrive=NC:/mnt/user/mount_rclone/Tdrive=NC /mnt/user/mount_unionfs/Tdrive -o rw,async_read=false,use_ino,allow_other,func.getattr=newest,category.action=all,category.create=ff,cache.files=partial,dropcacheonclose=true
     2. If the mount script isn't working at array start because the docker daemon is still starting, put your mount script on custom settings and run it every minute (* * * * *). It will then run after the array starts and will work.
     @nuhll both these fixes should be interesting for you.
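For the second PSA, an alternative to the every-minute cron trick might be a wait loop at the top of the mount script that blocks until the docker daemon answers (a sketch only, not tested on Unraid):

```shell
# Sketch: wait for the docker daemon before mounting, instead of a fixed
# sleep or a repeating cron schedule. Gives up after ~2 minutes.
timeout=120
while ! docker info >/dev/null 2>&1; do
  sleep 5
  timeout=$((timeout-5))
  if [ "$timeout" -le 0 ]; then
    echo "docker daemon still not up, giving up" >&2
    exit 1
  fi
done
```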
  5. Asking it again cause I'm very curious. Can you share your merger command?
  6. You download to the merge folder, but it will write to your SSD. Rclone then uploads the files from your SSD.
  7. You mount your dockers to /mnt/user/mount_mergerfs/google_vfs and then the proper subfolder (tv/movies/downloads/etc.). If you just point them at your cache, they will only see the locally stored files.
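For illustration, such mappings in the docker run line might look like this (paths are examples, not taken from a working config):

```shell
# Sketch docker volume mappings: point the container at the merged folder,
# not the cache, so it sees local + cloud files together.
-v '/mnt/user/mount_mergerfs/google_vfs/tv':'/tv':'rw,slave'
-v '/mnt/user/mount_mergerfs/google_vfs/downloads':'/downloads':'rw,slave'
```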
  8. With unionfs or mergerfs? If mergerfs would you mind sharing your merger command?
  9. Yep I removed one of my local folders which was in my mergerfs and now Sonarr works. Too bad that doesn't work.
  10. I keep getting this error with both automatic and manual import, but it only happens with upgrades to existing files: Sab and Sonarr point to the same directory and new series work fine; it's only when an existing file needs to be upgraded. Below are the docker runs of Sab and Sonarr. Hopefully someone has an idea?
      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='sonarr' --net='br0.90' --log-opt max-size='10m' --log-opt max-file='1' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'TCP_PORT_8989'='8989' -e 'PUID'='99' -e 'PGID'='100' -v '/dev/rtc':'/dev/rtc':'ro' -v '/mnt/user/mount_unionfs/Tdrive/Series/':'/tv':'rw' -v '/mnt/user/mount_unionfs/Tdrive/Downloads/':'/downloads':'rw' -v '/mnt/user/mount_unionfs/':'/unionfs':'rw,slave' -v '/mnt/cache/appdata/sonarr':'/config':'rw' 'linuxserver/sonarr:preview'
      root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='sabnzbd' --net='br0.90' --ip='' --log-opt max-size='10m' --log-opt max-file='1' -e TZ="Europe/Berlin" -e HOST_OS="Unraid" -e 'TCP_PORT_8080'='8080' -e 'TCP_PORT_9090'='9090' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/mount_unionfs/Tdrive/Downloads/':'/downloads':'rw' -v '/mnt/user/mount_unionfs/Tdrive/Downloads/Incompleet/':'/incomplete-downloads':'rw' -v '/mnt/user/mount_unionfs/':'/unionfs':'rw,slave' -v '/mnt/cache/appdata/sabnzbd':'/config':'rw' 'linuxserver/sabnzbd'
  11. I'm not using the recycling bin, but I thought you might be. I just don't get why Sonarr can't upgrade files and gets an access denied, when Radarr works fine with the same settings. For downloads I point to unionfs/Tdrive/Downloads and for series to unionfs/Tdrive/Series, both on rw, with mount_unionfs on rw,slave. I doubt I need remote path mapping just because Sonarr and Sab are on different IPs; that isn't needed for Radarr either.
  12. I'm trying to run an ssh command through User Scripts, so I expect creating a .sh file is the right way to do that. But then how do I trigger it from User Scripts? And if I want to make it more complex by giving it this command when running: PYTHONIOENCODING=utf8 * * * * * /path/to/script.sh. How would I go about that? And what exactly should the path be if I, for example, put the .sh file in the User Scripts/scripts folder?
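One possible approach, assuming the User Scripts plugin simply runs each entry as a shell script (on Unraid it keeps them under /boot/config/plugins/user.scripts/scripts/&lt;name&gt;/script, as far as I know): make the User Scripts entry a small wrapper that sets the environment variable and calls the external script, and let the plugin's own schedule replace the cron part:

```shell
#!/bin/bash
# Sketch wrapper for a User Scripts entry: export the encoding variable,
# then call the external script. The target path is the placeholder from
# the question, left as-is.
export PYTHONIOENCODING=utf8
/path/to/script.sh
```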
  13. @DZMM did you configure the recycling bin in your Sonarr instance? If not, would you mind sharing your docker settings: which folders you put on rw,slave and which on normal rw? I'm still having import issues, but only when upgrading files; I'm getting an access denied error.
  14. @DZMM Do you never have the problem of the docker daemon not running yet when you run the mount script at startup? Nuhll has the same problem as me. I've put in a sleep of 30 but that's not enough; I'll increase it further to try to get it fixed. But I find it strange that you don't have the same issue. @nuhll unfortunately I have the permission denied error again. Did it come back for you?
  15. Try fixing your share settings through safe permissions under tools
  16. Check the r/w settings for your mappings in your docker settings: rw,slave for mount_unionfs and rw for the rest.
  17. Radarr has had some issues lately. I had the same issues the last couple of days, but I seem to have fixed it now. Change your appdata mapping from /user/ to /cache/.
  18. @DZMM sorry, but in your first post you wrote this: I've tried to understand what you're saying here, but I really can't. What exactly is the difference between user/rclone_upload and user/local? They are both local shares which you include in your union/merge. Maybe I'm missing something in your changes, since my configuration was a bit different because of more local folders.
  19. You have --fast-list twice in your upload command. Probably not an issue, but you might want to remove one. So far I've migrated everything over and it seems to be working fine! I don't understand the hardlinking much yet; I don't use torrents much, so I don't have the seeding issue. I will have to change parts of my folder structure though to get in line with the new standard.
  20. Ok, understood. In your mount command you have --dir-cache-time 720h; this used to be 72h. Why the change? And you also started using --fast-list in the mount command. I thought that only worked for transfers, for example, not in a mount. Has that changed?
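For context, the two flags in question would sit in the mount command roughly like this (a sketch; the remote name and mount point are assumptions). Note that rclone's documentation suggests --fast-list only affects operations that list recursively, so it may simply be ignored by mount:

```shell
# Sketch of the flags under discussion in a mount command; all other
# flags omitted, remote and mount point are assumptions.
rclone mount gdrive_media_vfs: /mnt/user/mount_rclone/google_vfs \
  --dir-cache-time 720h \
  --fast-list
```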