DZMM (Members) · 2,801 posts · 9 days won

Everything posted by DZMM

  1. We can't really help you if you don't post your scripts or your logs.
  2. Yes, files older than 90 days. Mergerfs handles all the upgrades: if it's a local file it just replaces it; if it's in the cloud, it deletes the cloud version and puts the new copy in the local folder for upload.
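The 90-day sweep described above is typically done with rclone's age filter. A minimal sketch, assuming illustrative names (/mnt/user/rclone_upload/google_vfs and gdrive_media_vfs: are placeholders taken from a typical setup in this thread; substitute your own):

```
# Sketch only: move local files untouched for 90+ days up to the cloud remote.
# --min-age 90d limits the move to files not modified in the last 90 days.
rclone move /mnt/user/rclone_upload/google_vfs gdrive_media_vfs: \
  --min-age 90d --fast-list --delete-empty-src-dirs
```

Mergerfs keeps presenting the local and cloud copies as one folder, which is why the move is invisible to Plex etc.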
  3. Sorry, no idea what's going on there, and I've no experience with Debian. Somebody else might be able to help you though!
  4. I did stumble onto a similar workaround: I set up a static IP on a laptop I bought a few months ago in order to access the server. I'm loving the new pools; they are a great addition. It's particularly nice to be able to see disk activity for drives that were previously UDs.
  5. Thanks, but you lost me there; I think you've exceeded my technical skills! I'm going to try creating a new VM whilst the first is running and then copy across a backup file.
  6. Thanks - that fits my scenario perfectly as my other non-cache pools contain pool only shares.
  7. I'm not sure if gdrive-based mounts have the same performance issues, but once a certain number of files are stored, the IO performance of tdrives starts to degrade - there's a post in here somewhere from @watchmeexplode5 where he did some testing. One of the advantages of moving to a tdrive is that once you've set up your first one, it's easy to create multiple tdrives so that you can spread out your files.
  8. If only the cache pool has shares that are set to yes or prefer, and the other pools are set to Only, will it work?
  9. Also, there's a problem with the logic of the green 'All Files protected' icon in Shares. I have two shares (domains and iso) set up as cache-only that use my 'Computers' pool, which is configured as a JBOD. They show up as 'protected' shares, but the files aren't protected as the pool has no redundancy. highlander-diagnostics-20200822-1021.zip
  10. New beta user. I've finished making the switch, which was time-consuming as I created new pools for my appdata (mainly Plex) and VMs, which meant moving a lot of data around, formatting drives etc. It all went fairly smoothly, but there's one new addition that really didn't help me. I pass through my primary GPU to a VM AND also run a pfSense VM, so if my array doesn't start, I have to stop it, unplug the USB, undo the VFIO-bind edits so that I have a screen, and then reboot to be able to access the server, as I can't use another computer to access it because of the pfSense VM. In 6.9.0 it looks like an unclean shutdown turns off the disk auto-start option - I'm sure this is a new feature. This means I have to go through the steps above every time, which is a pain. Is it possible to make this optional or to remove it? It's a real pain for people with headless servers AND a pfSense VM who can't get into the server over the LAN.
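As a stopgap, the auto-start flag itself is just a line in the flash drive's config, so it can be flipped back from another machine by plugging in the USB stick. A sketch, assuming a stock Unraid flash layout (verify the path and key on your own stick before relying on this):

```
# /boot/config/disk.cfg (assumption: stock Unraid flash layout)
# After an unclean shutdown Unraid can set this to "no"; editing it back to
# "yes" before putting the stick back and rebooting restores array auto-start.
startArray="yes"
```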
  11. I know how to expand a W10 vdisk in unRAID and then use computer management to expand. For a pfSense VM, after expanding the image size in unRAID, do I need to do anything similar in pfSense to utilise the extra space? Thanks
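For anyone following along, the two halves of that job look roughly like this - a sketch only, since the vdisk path, the virtio device name (vtbd0), and the partition index (3) are assumptions that vary per VM:

```
# 1) On the Unraid host, with the VM shut down: grow the vdisk image.
qemu-img resize /mnt/user/domains/pfsense/vdisk1.img +10G

# 2) Inside pfSense (FreeBSD), from a shell: grow the partition and the
#    filesystem into the new space. Check device/index with 'gpart show'.
gpart recover vtbd0        # repair the GPT backup header after the image grew
gpart resize -i 3 vtbd0    # index 3 is an assumption - confirm with gpart show
growfs /                   # expand the root UFS filesystem
```

If the install uses ZFS rather than UFS, the last step differs (a zpool expansion instead of growfs).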
  12. Nothing jumps out to me as looking wrong. @Kaizac @watchmeexplode5 ????
  13. I haven't come across this problem before - I think because I was moving my music tdrive-->gdrive. You're welcome. I'm happy to share as I'm sure it's saving people $$$ in drives etc.
  14. I think it's because you are using the remotes that decrypt the files - you need to use the 'raw' remotes that work on the encrypted files, i.e. rclone move "tdrive_upload:crypt/**************************fk40rft2t0t7neqvb9hi7psvm8cg" "tdrive_movies_bio:crypt/*********************7neqvb9hi7psvm8cg", because you are moving the files server-side, i.e. still encrypted.
  15. I mount them all in separate folders - sorry if I put them all in the same folder in my write-up; that's a mistake. Re navigation, it shouldn't matter, as you should never need to use or visit the rclone mounts unless you are troubleshooting - the mergerfs folder is the only one you use day-to-day. Sounds like you've got it working then?
  16. Can you post your mount script (haven't seen it yet) and your upload script? Somewhere there's a path difference, or it doesn't like the dash in Plex-Media and you need to use an underscore.
  17. Dunno - I was going to say, after reading your logs, that everything was 'working', so try a reboot. Sometimes when people repeatedly mount while debugging, the system seems to go a bit loopy.
  18. LocalFilesShare="ignore" - looking at your unionfs script, shouldn't this be LocalFilesShare="/mnt/user/rclone_upload/google_vfs"?? Otherwise, looks fine and your files from gdrive_media_vfs: should be showing in the mergerfs mount. If still not working, post the logs
  19. All looks fine - it even says it's overwritten the existing mountcheck file, which is a good sign. When you say 'no files', what are you using to look? Post your mount scripts for both unionfs and mergerfs, as there's something different in there.
  20. You need to run the unmount script or manually delete the /mnt/cache/appdata/other/rclone/remotes/$RcloneRemoteName/dockers_started file - somehow you've got the scripts running slightly out of sync
  21. For clarity, here's my flow:
      1. Mount tdrive_1, tdrive_2, tdrive_3 etc. - do not create a mergerfs mount for tdrive_2 and upwards
      2. Create a single mergerfs mount with the local folder and the tdrive_1 mount, tdrive_2 mount, tdrive_3 mount etc.
      3. Point Plex, Sonarr etc. at the mergerfs mount
      4. New files get added to the mergerfs mount's local folder
      5. The upload script moves ALL files from the local folder to tdrive_1
      6. I looked at tdrive_1 and worked out what the encrypted name was, e.g. for TV_Shows it's adasfghertdfghthgdf
      7. TV_Shows should be in tdrive_2, so an extra script runs overnight to move tdrive_1/crypt/adasfghertdfghthgdf to tdrive_2/crypt/adasfghertdfghthgdf
      8. The move is done by rclone, so it adjusts the tdrive_1 and tdrive_2 mounts in real time; in the mergerfs mount the files haven't 'moved' from /TV_Shows and everyone is happy
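Step 7 above can be sketched as a single server-side rclone call, reusing the example encrypted folder name from the post (the flag name assumes the Google Drive backend; check rclone's docs for your version):

```
# Sketch: move one encrypted top-level folder between tdrives server-side.
# adasfghertdfghthgdf is the example encrypted name for TV_Shows above.
rclone move tdrive_1:crypt/adasfghertdfghthgdf tdrive_2:crypt/adasfghertdfghthgdf \
  --drive-server-side-across-configs
```

Because the raw crypt remotes are used, the files are never decrypted or downloaded; Google moves them internally.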
  22. No, because rclone is moving the files - server side from tdrive_x-->tdrive_y, so it is aware of the change of location. Because both tdrive_x and tdrive_y are included in my expanded mergerfs mount, plex, sonarr etc aren't aware that the real location has moved - it's the same approach as for when files move from local to gdrive. Yes, and then use my extra script or similar to 'sort' the files server side into the right tdrives.
  23. All looks good. The reason I do a tdrive-->tdrive transfer as an extra step is that I only have one mergerfs mount that combines all my tdrives and local files. The upload script for the associated local folder uploads all the files to a single tdrive, so behind the scenes I move the files to their correct tdrive. You could run an upload script for each tdrive, but for my setup that'd be a pain to traffic manage, and the tdrive-->tdrive transfers are mega fast as they are done server-side.