axeman

Members
  • Posts: 502
  • Joined
  • Last visited

Converted

  • Gender
    Undisclosed


axeman's Achievements

Enthusiast (6/14)

Reputation: 3

  1. I wonder if the band-aid answer is to have a script that runs before the upload and moves the metadata to the array. Probably not the best solution, though. It would be great if MergerFS could orchestrate which files go to which folder.
  2. I might have asked this before (can't seem to find it)... but are the exclusions in the upload script? Do you have something else to move them so that they end up somewhere on the array, or do they basically stay in /local/? I have MergerFS joining my files from my array (/mnt/user/Videos), my cache drive (/mnt/user/local), and of course the rclone cloud mount. Wondering if there's a way to force MergerFS to write certain files directly to /mnt/user/Videos.
  3. With ESXi 7, you can directly boot off the USB.
  4. I used MC, went to each drive, and created a /local/ folder, then moved any movies from whatever path they were at into the /local/ folder, so that the move stays within the same disk.
  5. Why do this on UnRaid? My Windows machine maps to the UnRaid shares for Emby, Sonarr, etc.
  6. I do have it mapped on my Windows machine as a drive letter. Not sure if that's the same - but again, no issues.
  7. My Emby server is on a Windows machine and it accesses the MergerFS share like any other UnRaid share. Zero difference: \\tower\mergerfs\Videos, etc.
  8. Just tower/mergerfs. The only(?) downside is that Emby also creates the metadata there (I have it configured to save metadata to the media folders). So all those small files count toward the 400K Team Drive file limit. If it gets to be too much, I can always create a local metadata folder on the Emby server and let it store metadata there. But right now, it's not a huge problem.
  9. Okay - so I have the script set up somewhat as intended. Tower/local is where the stuff that will get uploaded goes. Tower/Videos holds all my other "non-cloud" videos (kids' movies) that need to be available even if the cloud is down due to an ISP issue. Tower/rclone is where all my gdrive mounts are mounted directly; I don't touch this, except maybe to see what's local vs. cloud. Tower/mergerfs combines Tower/local, Tower/Videos, and Tower/rclone. So the Emby server library has paths presented as Tower/mergerfs/Videos/TV or Tower/mergerfs/Videos/kids.
  10. That's just how I have it - because of circumstance, really. UnRaid and Emby (Sonarr too) were on different VMs for ages. I just added the scripts to UnRaid and updated the existing instances to point at the mounts on UnRaid. I didn't have to do anything else. I also have non-cloud shares that I still need UnRaid for - so to me, having everything storage-related (local and cloud) on the UnRaid server, and presentation and gathering on a separate machine, is a good separation of concerns.
  11. I may be missing the mark here - but let UnRaid run the scripts and share the data. Wherever Plex is, just point it at the UnRaid shares. That's exactly how my current setup is; none of my stuff runs in the UnRaid dockers. The only downside is that if the mount goes down, your library might get wonky. Typically, Sonarr will complain about it - Emby doesn't do anything other than stall.
  12. I am certainly no expert in this - but I believe you can accomplish this by running another instance of the script that points at your 4K collection, with the option set to NOT create a MergerFS mount for that script: MergerfsMountShare="ignore" in the variables at the top. Then in the other script, the one that does have the MergerFS mount, you update LocalFilesShare2 (or whatever) to include the path you created above. I have something similar with my TV shows: shows that are in progress and shows that are completed are separated out. The completed ones are on the cloud mount; the in-progress ones are local. The scenario is different because those libraries are meant to show up separately. However, I'd imagine it'd work for your purpose as well.
  13. Bjur, if you already know what your passwords are, try creating a new remote in rclone config and see if you can see your unencrypted data.
  14. Same here. I even tried adjusting the sleep time to 120 seconds. Didn't seem to help.
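Re: the band-aid idea in post 1, a minimal sketch of what such a pre-upload script could look like. All paths and the file extensions are assumptions - adjust them to your own /local/ and array shares, and to whatever metadata files Emby actually writes for you.

```shell
#!/bin/bash
# Sketch: sweep metadata files (nfo/artwork) out of the upload share and onto
# the array, preserving folder structure, before the upload script fires.
move_metadata() {
  src="$1"   # e.g. /mnt/user/local/Videos (upload share) - assumption
  dest="$2"  # e.g. /mnt/user/Videos (array share) - assumption

  # Find typical metadata files and recreate their folder structure under
  # the destination, then move each file across.
  find "$src" -type f \( -name '*.nfo' -o -name '*.jpg' -o -name '*.png' \) |
  while IFS= read -r f; do
    rel="${f#"$src"/}"                   # path relative to the source share
    mkdir -p "$dest/$(dirname "$rel")"   # mirror the folder on the array
    mv "$f" "$dest/$rel"
  done
}

# Example: run this just before the upload script.
# move_metadata /mnt/user/local/Videos /mnt/user/Videos
```

Video files are untouched, so the uploader still sees them; only the small sidecar files stop counting against the Team Drive file limit.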
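Re: post 2 - MergerFS can't route individual files by name, but it can steer where *new* files are created via branch order and create policies. A sketch of a mount where the rclone branch is read-only and a first-found create policy sends new writes to the first writable branch listed; the paths and extra options here are assumptions, not the community script's exact mount line:

```shell
# Branches are tried in order; =RW/=RO are mergerfs branch modes, and
# category.create=ff ("first found") creates new files on the first
# writable branch with space - here /mnt/user/Videos.
mergerfs \
  -o category.create=ff,cache.files=partial,dropcacheonclose=true \
  /mnt/user/Videos=RW:/mnt/user/local=RW:/mnt/user/mount_rclone=RO \
  /mnt/user/mergerfs
```

So "force certain files to /mnt/user/Videos" becomes "write them through a mount whose first writable branch is /mnt/user/Videos" - per-file-type routing would still need a script like the one discussed in post 1.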
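Re: post 12, a sketch of the variables for the two script instances. The variable names (RcloneRemoteName, LocalFilesShare, LocalFilesShare2, MergerfsMountShare) follow the community rclone mount script mentioned in the thread; the remote names and paths are illustrative assumptions.

```shell
# Instance 1: points at the 4K collection, uploads only,
# and does NOT create a MergerFS mount of its own.
RcloneRemoteName="gdrive_4k"        # assumption
LocalFilesShare="/mnt/user/local_4k" # assumption
MergerfsMountShare="ignore"

# Instance 2 (a separate copy of the script): the main mount,
# which pulls the 4K path in as an extra local branch.
RcloneRemoteName="gdrive"            # assumption
LocalFilesShare="/mnt/user/local"    # assumption
LocalFilesShare2="/mnt/user/local_4k"
MergerfsMountShare="/mnt/user/mergerfs"
```

Each block lives at the top of its own copy of the script - the two instances run independently.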
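Re: post 13, one way to sketch that test without touching the existing remote: create a throwaway crypt remote non-interactively with the passwords you believe are correct, pointed at the same encrypted folder, then list it. The remote and path names here are assumptions; crypt passwords passed to rclone config this way must be obscured first.

```shell
# Create a second crypt remote using the passwords you think are right.
rclone config create gcrypt_test crypt \
  remote gdrive:crypt \
  password "$(rclone obscure 'your-password')" \
  password2 "$(rclone obscure 'your-salt')"

# If the passwords match, this lists your directories with decrypted names;
# if not, you'll see garbage names or decryption errors.
rclone lsd gcrypt_test:
```

If the test remote decrypts cleanly, the passwords are confirmed and the throwaway remote can be deleted with rclone config delete gcrypt_test.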