Everything posted by DZMM

  1. One of my disks won't mount and I'm getting an "Unmountable disk present" error. There's an option to format the drive, but I want to see if I can re-mount it rather than going for the nuclear option. There's nothing critical on the drive, but I'd rather not have to mess around restoring files if I can avoid it. Some good news is I think it's still in warranty if it's a dud - although who knows how long a replacement will take currently... Diagnostics attached - thanks in advance for any help. highlander-diagnostics-20200422-1909.zip
  2. I only create one mergerfs mount - I mount the other remotes in other scripts, and then add those remote paths to the final script as extra "local" folders to include.
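     The "extra local folders" trick can be sketched as a single mergerfs call whose branch list includes the other, already-mounted remotes. This is only an illustration - the branch paths and option set below are assumptions, not the script's exact values:

     ```shell
     # Sketch only: layer extra, already-mounted rclone remotes in as
     # additional mergerfs branches alongside the real local folder.
     # All paths below are illustrative.
     mergerfs \
       /mnt/user/local/gdrive_vfs:/mnt/user/mount_rclone/gdrive_vfs:/mnt/user/mount_rclone/tdrive2_vfs \
       /mnt/user/mount_mergerfs/gdrive_vfs \
       -o rw,async_read=false,use_ino,func.getattr=newest,category.create=ff
     ```

     With category.create=ff, new files land in the first branch (the local folder), which is what lets the upload script pick them up later.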
  3. I just had a quick go at making the script support multiple remotes, but I couldn't find ways to do certain bits. At the moment, I just run the script once per mount - annoying, but once set up you'll forget you've done it.
  4. Correct - just make sure gdrive_vfs is using your existing teamdrive. I would also make sure all your dockers have the mapping /user --> /mnt/user, and then within each docker point them to e.g. /user/mount_mergerfs/downloads and /user/mount_mergerfs/media/tv etc or something similar. ALL mappings have to be to the mount_mergerfs mount if you want the full file transfer benefits, hardlinking etc. If you need to update Plex paths, I wrote a post a few posts up on how to preserve metadata.
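     As a hedged illustration of that single mapping (the image and container name are just examples, not part of the original setup):

     ```shell
     # Map /mnt/user on the host to /user in the container, then use
     # /user/mount_mergerfs/... paths *inside* the app. Because downloads
     # and media sit under the same container mount, hardlinks and
     # instant moves work.
     docker run -d --name=radarr \
       -v /mnt/user:/user \
       linuxserver/radarr
     ```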
  5. 1. Shouldn't matter, but I've had problems mounting to /mnt/disks in the past, so I recommend /mnt/user
     2. Correct - add files to the mergerfs location. The local location can be anywhere, so there are no files to move - just make your local path /mnt/disks/UDrive/GoogleDriveUploads, which will also be the upload folder
     3. Was just trying to help users create folders in the right place - feel free to make your own
     4. I think so - sounds like an incomplete install; it will get recreated if needed
     5. Nope, that's right - the name is probably bad as it doesn't 'unmount' anymore
     6. I think the upload will fail
     7. Yes - rclone will just resume on the next run
  6. Thanks. @jonathanm could you try as well please. @watchmeexplode5 I might roll back the discord update as I think it's causing some problems - not sure why, as I don't run it myself.
  7. Out of curiosity, can you try this version of the upload script please - https://github.com/BinsonBuzz/unraid_rclone_mount/blob/84c00906b5019a3ad873a636ad0f003e267608b8/rclone_upload It's the version before the last change, which added discord notifications. I haven't tested that change myself as I don't use discord, although the pull request looked ok.
  8. What do you have as RcloneRemoteName and RcloneUploadRemoteName in your scripts? You shouldn't be getting %remotename%
  9. @drogg if you are certain there's not an upgrade instance running, go into /mnt/user/appdata/other/rclone and your upload remote folder and manually delete the upload running checker file, or run the cleanup script. Basically the upload script didn't exit properly at some point.
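     A rough sketch of that manual cleanup - the checker file name and folder used here are assumptions, so check what the script actually created in your appdata before deleting anything:

     ```shell
     # Assumed location/name of the stale checker file - verify yours first,
     # and only delete it once you're sure no upload instance is running.
     checker="/mnt/user/appdata/other/rclone/gdrive_upload_vfs/upload_running"
     if [ -f "$checker" ]; then
       rm -f "$checker" && echo "stale checker removed"
     else
       echo "no checker file found - nothing to clean up"
     fi
     ```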
  10. Yes. Use my scripts to:
      1. In your rclone config, make sure server_side_across_configs is set to true
      2. In the mount script, mount your gdrive folder
      3. Create your service accounts. I'd add about 100 if you're doing server side copies as they blow through the 750GB in no time as you discovered - 15 to 16 are needed per Gbps transferring full pelt per day
      4. In the upload script, set your team drive as the destination and the gdrive from #2 as the source
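      For a feel of what a server-side copy with service-account rotation looks like, here's a hedged sketch - the remote names, json file naming, and the bare loop are all illustrative (the real upload script handles rotation itself):

      ```shell
      # Rotate through service-account json files so each account stays
      # under its 750GB/day quota; every name/path here is an example.
      for i in $(seq -w 1 100); do
        rclone copy gdrive: tdrive: \
          --drive-service-account-file="/mnt/user/appdata/other/rclone/service_accounts/sa_tdrive$i.json" \
          --drive-server-side-across-configs \
          --max-transfer=750G
      done
      ```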
  11. @Stinkpickle I updated #3 as you should always use the mergerfs mount in dockers
  12. Oh I see. I would do the following:
      - Set LocalFilesShare2="/mnt/user/Media" in the mount script so that files from /mnt/user/Media just appear in /mnt/user/sgdrive_media/sgdrive_media
      - Make sure Plex has 'empty trash automatically' turned off
      - In Plex add the new /mnt/user/sgdrive_media or /mnt/user/sgdrive_media_mergerfs folder (not the /mnt/user/sgdrive_media_rclone/sgdrive_media one) and remove the old /mnt/user/Media
      - Also update sonarr, radarr locations etc
      - Do a full Plex scan - it will update all the file locations and keep all your existing metadata
      - Once the scan completes, check all looks ok, then manually empty the trash
      - Move files from /mnt/user/Media to /mnt/user/sgdrive_media_local
      - Remove /mnt/user/Media from LocalFilesShare2 as you shouldn't need it now - use /mnt/user/sgdrive_media_local going forwards
      Edit: Personally I would also use this as an opportunity to change MergerfsMountShare="/mnt/user/sgdrive_media" to MergerfsMountShare="/mnt/user/sgdrive_media_mergerfs" to make things easier to follow
      Edit2: Fixed #3
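      For clarity, the mount-script variables that matter for this migration might look like this (values are illustrative, based on the share names above):

      ```shell
      # Illustrative mount-script settings for the migration described above.
      RcloneRemoteName="sgdrive_media"
      LocalFilesShare="/mnt/user/sgdrive_media_local"
      LocalFilesShare2="/mnt/user/Media"            # temporary - remove once the move is done
      MergerfsMountShare="/mnt/user/sgdrive_media_mergerfs"
      ```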
  13. Unless I'm missing something, just do LocalFilesShare="/mnt/user/Media/" and then those files will get moved to google
  14. mountcheck file either doesn't exist in the source folder to be copied to gdrive, or the target folder doesn't exist
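      If you want to test this by hand, the mountcheck round-trip is roughly the following - the local path and remote name are assumptions, so substitute your own:

      ```shell
      # Create the dummy file locally, then copy it to the remote. If either
      # the source folder or the target remote path is missing, this is the
      # step that fails.
      touch /mnt/user/local/gdrive_vfs/mountcheck
      rclone copy /mnt/user/local/gdrive_vfs/mountcheck gdrive_vfs: --no-traverse -vv
      ```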
  15. I'd move everything to the team drive - as @testdasi said, as soon as you start understanding what you can do with this setup, you'll want a teamdrive. To move:
      - stop all mounts, dockers etc
      - create a team drive and note the ID - it'll be in the url
      - add your google account as a user of the teamdrive if needed
      - within google drive, move all the files from the gdrive folder to the teamdrive folder - ensure all the paths are the same i.e. gdrive/crypt/sub-folder --> tdrive/crypt/sub-folder. This should be pretty quick for 1TB
      - change your rclone config via the plugin settings page (easiest) to:

      [tdrive]
      type = drive
      scope = drive
      service_account_file = /mnt/user/appdata/other/rclone/service_accounts/sa_tdrive.json
      team_drive = Team DRIVE ID
      server_side_across_configs = true

      if you're setting up service_accounts at the same time. I'd advise doing this if you've got a decent connection and/or will be uploading more than 750GB/day now or in the future. If not, then change to:

      [tdrive]
      type = drive
      client_id = xxxxxxxxx.apps.googleusercontent.com
      client_secret = xxxxxxxxxxxxxxxx
      scope = drive
      root_folder_id = TEAM DRIVE ID
      token = {"access_token":"xxxxxxxxxxxxxxxxxx"}
      server_side_across_configs = true

      Edit: if you're using gdrive_media_vfs for your decrypted remote, remember to change:

      remote = gdrive:crypt

      to

      remote = tdrive:crypt

      Then re-mount and if all looks ok, start your dockers.
  16. teamdrive NOT team https://support.google.com/a/users/answer/9310351#!/
  17. Something's wrong, as folders created in either /local or /mount_mergerfs should behave like normal folders, i.e. radarr/sonarr etc adding/upgrading/removing files when they want. Some apps like krusader and Plex need to be started after the mount, but that's the only problem I'm aware of. What are your docker mappings and rclone mount options?
  18. You're welcome. What are you going to do with your empty HDDs? I sold mine, even my parity drive as I don't need it now. 12 simultaneous streams is good going - I think I've only hit 10 once over Christmas.
  19. So, is everyone else's server getting hammered with everyone staying at home?
  20. Sometimes when I convert books, my docker image fills up very fast and doesn't go down. I think this means the conversion occurs within the docker. Is there a way to fix this please as sometimes I have to delete my whole image. Also, how do I stop the Trash-99 folder being created? Thanks
  21. https://nguvu.org/pfsense/pfsense-baseline-setup/ https://nguvu.org/pfsense/pfsense-inbound_vpn/
  22. Actually, any schedule is fine - that's why the checker file is there: to stop additional upload jobs starting if there's an instance already running. As long as you don't make user script changes while an instance is running, the logs should still keep updating to show you what's happening.
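      The checker-file pattern itself is simple - here's a minimal sketch (the lock file name and location are assumed, not the script's real ones):

      ```shell
      #!/bin/bash
      # Skip this run if a previous upload is still going; otherwise claim
      # the lock, do the work, and release it when done.
      lock="/tmp/upload_running"
      if [ -f "$lock" ]; then
        echo "upload already running - exiting"
        exit 0
      fi
      touch "$lock"
      echo "uploading..."   # the rclone move happens here in the real script
      rm -f "$lock"
      ```

      If the script dies between touch and rm, the lock is left behind - which is exactly the stale-checker situation the cleanup script exists for.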