Coolsaber57

Members
  • Content Count: 67
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About Coolsaber57
  • Rank: Advanced Member


  1. Where can I go to verify where the logs are being written? (A quick way to check from the host is sketched after this list.)
  2. Just a heads up: this container was filling my entire docker image with logs, and I had to cap the log size to get it to stop doing that (see the log-rotation sketch after this list). As a side note, it shows that SABnzbd v2.3.9 is now available.
  3. Hmm, I went ahead and updated to the latest version, and now I can't connect to the VPN or the WebUI page. It worked fine with the workaround posted earlier (forcing it to an earlier version), but not anymore, and I don't see any errors in the log. As a side note, if you try to go back to the previous version, it will tell you the command failed and will leave an orphaned image. I had to remove all orphaned containers (a CLI cleanup sketch follows this list), then go to CA and re-add the docker container. Luckily nothing happened to my settings, and it all works now pointing to the older version.
  4. Is there a list mapping GPUs to transcode capability that I might be able to peruse? e.g. a GTX 1060 can transcode 2x 1080p H.264 streams or 1x 4K H.265 stream. I'm thinking of picking up a used GTX card and want to make sure I get one that won't be a waste.
  5. Thank you! I'll give it a try and see if that fixes the issue I was having.
  6. Hey @binhex, I wanted to see if this was something you might have control over or if it's down to the maintainer: I've been trying to get an issue fixed with playback when my Bluetooth headset is connected to my Emby client (Nvidia Shield), and Emby is pointing out that this docker uses an FFmpeg build that is higher than what is officially supported by Emby (4.1). Is there a reason the FFmpeg in this docker container uses a different version than what's supported by Emby? Should that even matter? (A quick version-check sketch follows this list.)
  7. Hey there, quick question: where do files get transcoded to? Is it the appdata folder or the mapped folder in the Template? Edit: actually I think I was able to answer my own question:
  8. Agreed. So far, the best I've found is actually just using Emby, which I already use for audio/video.
  9. That's all I really want to do as well - sync from Google Drive into my Unraid share. I actually want to set up two: one to sync the Google Photos folder into my Photos share, and one to sync everything else. I did get it to work, and after updating my Krusader docker container to RW/Slave, I can see the files in the mounted folder. And yes, google-drive-myusername is the config name. For those other options in your sync command, are they there to speed up the sync? Are there recommended flags we should be using? (Some common ones are sketched after this list.)
  10. To clarify, for my future sync commands, should I be using rclone sync google-drive-myusername: /mnt/user/Multimedia/Photos/google-photos-myfolder/ (given that google-drive-myusername is my rclone config name)?
  11. Ok, so it appears that there's some issue with some of the files in the Google Drive that rclone doesn't like. And you were right, it seems to work better with folders. For a lot of the documents I have in my gdrive account, it shows IO errors - not sure if there's some kind of permissions setting, either in rclone or gdrive, that I need to set. Any suggestions? (A possible workaround for native Google Docs is sketched after this list.)
  12. That makes sense, so I updated it to show:

      #!/bin/bash
      #----------------------------------------------------------------------------
      #first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
      #there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
      #you only need as many as what you need to mount for dockers or a network share
      mkdir -p /mnt/disks/google-drive-myfolder
      #mkdir -p /mnt/disks/dropbox
      #mkdir -p /mnt/disks/google
      #mkdir -p /mnt/disks/secure
      #This section mounts the various cloud storage into the folders that were created above.
      rclone mount --max-read-ahead 1024k --allow-other google-drive-myusername: /mnt/disks/google-drive-myfolder &

      The myusername matches the name of the config. I then saved it to User Scripts, unmounted, then mounted. However, when I try to test it with this command:

      rclone sync /mnt/disks/google-drive-myfolder/brewing.xlsx /mnt/user/Multimedia/brewing.xlsx

      I still see these errors when trying to sync a file:

      2019/01/24 10:23:38 ERROR : : error reading source directory: directory not found
      2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting files as there were IO errors
      2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting directories as there were IO errors
      2019/01/24 10:23:38 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

      So maybe it's not actually mounting correctly, but I can see the folders using the lsd command in the terminal, which tells me that the config is correct. (A single-file sketch follows this list.)
  13. I too am struggling to get Google Drive to sync anything. I'm using the following config:

      [google-drive-folder]
      type = drive
      token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2019-01-24T02:00:33.009530267-05:00"}

      and the following mount script:

      #!/bin/bash
      #----------------------------------------------------------------------------
      #first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
      #there are 4 entries below as in the video i had 4 remotes amazon,dropbox, google and secure
      #you only need as many as what you need to mount for dockers or a network share
      mkdir -p /mnt/disks/google-drive-myfolder
      #mkdir -p /mnt/disks/dropbox
      #mkdir -p /mnt/disks/google
      #mkdir -p /mnt/disks/secure
      #This section mounts the various cloud storage into the folders that were created above.
      rclone mount --max-read-ahead 1024k --allow-other google1: /mnt/disks/google-drive-myfolder &
      #rclone mount --max-read-ahead 1024k --allow-other dropbox: /mnt/disks/dropbox &
      #rclone mount --max-read-ahead 1024k --allow-other google: /mnt/disks/google &
      #rclone mount --max-read-ahead 1024k --allow-other secure: /mnt/disks/secure &

      I commented out the stuff I'm not using (copied from the SpaceInvaderOne video). When I use the lsd command via the terminal, I see two folders (Google Photos and another I had set up previously), so I know it's mounted. However, when I try to run the following command (to sync my Google Photos folder to my Unraid share):

      rclone sync -v '/mnt/disks/google-drive-myfolder/Google Photos' /mnt/user/Multimedia/Photos/google-photos-myfolder/

      I get the following:

      2019/01/24 01:42:40 ERROR : : error reading source directory: directory not found
      2019/01/24 01:42:40 INFO : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for checks to finish
      2019/01/24 01:42:40 INFO : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for transfers to finish
      2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting files as there were IO errors
      2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting directories as there were IO errors
      2019/01/24 01:42:40 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

      Anyone see anything I'm doing blatantly wrong? (One thing to check is sketched after this list.)
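
Re the log-location question in post 1: a minimal way to check where Docker is writing a container's logs from the host terminal. The container name is a placeholder, and this assumes the default json-file logging driver.

    # Host-side path of the container's JSON log file
    docker inspect --format '{{.LogPath}}' <container-name>

    # Logging driver and options the container was started with
    docker inspect --format '{{.HostConfig.LogConfig}}' <container-name>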
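
Re the runaway log growth in post 2: Docker's json-file driver can rotate and cap logs per container. A minimal sketch; on Unraid these options would typically be added to the container's Extra Parameters, and the 10m/3 values are only example limits.

    # Keep at most 3 log files of 10 MB each for this container
    --log-opt max-size=10m --log-opt max-file=3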
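
Re the orphaned image mentioned in post 3: outside the Unraid GUI, the stock Docker CLI cleanup looks roughly like this. Both commands prompt before deleting, but double-check that nothing you still need is stopped or untagged.

    docker container prune   # remove all stopped containers
    docker image prune       # remove dangling (untagged/orphaned) images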
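
Re the FFmpeg question in post 6: one way to see which build a container actually ships is to run it inside the container. A sketch only; the container name is a placeholder, and it assumes ffmpeg is on the container's PATH (Emby bundles its own binary, which may live elsewhere).

    docker exec -it <emby-container-name> ffmpeg -version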
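
Re the flags question in post 9: these are commonly used rclone options for speeding up and previewing a sync. A sketch using the remote and destination from that post; the transfer/checker counts are just example values, and --dry-run makes it a no-op preview.

    # --transfers: files copied in parallel (default 4)
    # --checkers:  files checked in parallel (default 8)
    # --fast-list: fewer API calls on remotes that support it (uses more memory)
    rclone sync google-drive-myusername: /mnt/user/Multimedia/Photos/google-photos-myfolder/ \
      --transfers 8 --checkers 16 --fast-list -v --dry-run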
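
Re the IO errors on documents in post 11: native Google Docs/Sheets aren't ordinary files and are a frequent cause of errors when pulling from a Drive remote; rclone can simply skip them. A sketch run directly against the remote; the destination path here is only an example.

    # Skip native Google Docs/Sheets/Slides instead of trying to export them
    rclone sync -v --drive-skip-gdocs google-drive-myusername: /mnt/user/Multimedia/GoogleDrive/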
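
Re the single-file test in post 12: rclone sync expects a directory as the source, which is consistent with the "error reading source directory" message when the source is one file. For a single file, copyto against the remote itself sidesteps both that and any mount quirks. A sketch that assumes brewing.xlsx sits in the root of the remote.

    # Copy one file straight from the remote to the share (no mount involved)
    rclone copyto google-drive-myusername:brewing.xlsx /mnt/user/Multimedia/brewing.xlsx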
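
Re the setup in post 13: the config section is named [google-drive-folder] but the mount line uses google1:, so the mount may not be pointing at that remote at all; reading straight from the remote also takes the mount out of the picture. A sketch using the names from that post.

    # Sync the Google Photos folder directly from the remote defined in the config
    rclone sync -v "google-drive-folder:Google Photos" /mnt/user/Multimedia/Photos/google-photos-myfolder/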