Coolsaber57

Members
  • Content Count

    63
  • Joined

  • Last visited

Community Reputation

2 Neutral

About Coolsaber57

  • Rank
    Advanced Member

  1. Coolsaber57

    [Plugin] Linuxserver.io - Unraid Nvidia

    Is there a list mapping GPUs to transcode power that I might be able to peruse? e.g. a GTX 1060 can transcode 2x 1080p H.264 streams or 1x 4K H.265 stream. I'm thinking of picking up a used GTX card and want to make sure I get one that won't be a waste.
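(A rough sketch for anyone wondering the same thing: NVIDIA's NVENC support matrix is the usual reference for which codecs each card can encode, and GeForce cards also have a driver-imposed cap on concurrent NVENC sessions. Assuming the Unraid Nvidia plugin is installed so `nvidia-smi` is available, you can watch live encoder load while a transcode runs:)

```shell
# Show current NVENC session count and average encode FPS (hedged: field names
# as documented for nvidia-smi's --query-gpu interface).
nvidia-smi --query-gpu=name,encoder.stats.sessionCount,encoder.stats.averageFps --format=csv

# 'dmon -s u' also reports live encoder (enc) / decoder (dec) utilization per second.
nvidia-smi dmon -s u
```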
  2. Coolsaber57

    [Support] binhex - Emby

    Thank you! I'll give it a try and see if that fixes the issue I was having.
  3. Coolsaber57

    [Support] binhex - Emby

    Hey @binhex, I wanted to see if this was something you might have control over or if it was up to the maintainer: I've been trying to get an issue fixed with playback when my Bluetooth headset is connected to my Emby client (Nvidia Shield), and Emby is pointing out that this docker uses an FFmpeg build that is higher than what is officially supported by Emby (4.1). Is there a reason that FFmpeg in this docker container uses a different version than what's supported by Emby? Should that even matter?
  4. Coolsaber57

    [Support] binhex - Emby

    Hey there, quick question: where do files get transcoded to? Is it the appdata folder or the mapped folder in the Template? Edit: actually I think I was able to answer my own question:
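(For anyone else checking this: a quick way to see where transcodes land is to inspect the container's volume mappings and watch the transcode temp path during playback. This is a sketch only — the container name `binhex-emby` and the `transcoding-temp` folder are assumptions; the actual temp path is whatever is set under Emby's Transcoding settings.)

```shell
# List host path -> container path mappings for the Emby container
# (container name "binhex-emby" is an assumption; adjust to your template).
docker inspect -f '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' binhex-emby

# Then watch the transcode temp folder grow during playback; the
# "transcoding-temp" subfolder name here is hypothetical:
du -sh /mnt/user/appdata/binhex-emby/transcoding-temp 2>/dev/null
```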
  5. Coolsaber57

    [Support] Linuxserver.io - Lychee

    Agreed. So far, the best I've found is actually just using Emby, which I already use for audio/video.
  6. Coolsaber57

    [Plugin] rclone

    Got it, thank you.
  7. Coolsaber57

    [Plugin] rclone

    That's all I really want to do as well: sync from Google Drive into my Unraid share. I actually want to set up two syncs: one to sync the Google Photos folder into my Photos share, and one to sync everything else. I did get it to work, and after updating my Krusader docker container to rw slave, I can see the files in the mounted folder. And yes, google-drive-myusername is the Config name. For those other options in your sync command, are they there to speed up the sync? Are there recommended flags we should be using?
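(Sketching the two syncs described above with commonly used rclone throughput flags — `--transfers`, `--checkers`, and `--fast-list` are real rclone options, but the values and the second destination folder here are illustrative, not recommendations from this thread:)

```shell
# Sync only the Google Photos folder into the Photos share.
rclone sync -v --transfers 8 --checkers 16 --fast-list \
    "google-drive-myusername:Google Photos" /mnt/user/Multimedia/Photos/google-photos-myfolder/

# Sync everything else, excluding Google Photos (destination folder is hypothetical).
rclone sync -v --transfers 8 --checkers 16 --fast-list \
    --exclude "/Google Photos/**" \
    google-drive-myusername: /mnt/user/Multimedia/gdrive-everything-else/
```

`--fast-list` trades memory for fewer API calls on remotes that support it, which tends to help on large Google Drive trees.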
  8. Coolsaber57

    [Plugin] rclone

    To clarify, for my future sync commands, I should be using:

      rclone sync google-drive-myusername: /mnt/user/Multimedia/Photos/google-photos-myfolder/

    if google-drive-myusername is my rclone config name?
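(A quick sketch to sanity-check that form: `rclone listremotes` prints the remote names rclone actually knows about, and the bare `remote:` with nothing after the colon means the root of that remote:)

```shell
# Confirm the remote name before syncing; output lists names ending in ':'.
rclone listremotes

# If it includes "google-drive-myusername:", then a bare "google-drive-myusername:"
# source syncs the ROOT of the remote; append a path after the colon for a subfolder:
rclone sync -v "google-drive-myusername:Google Photos" /mnt/user/Multimedia/Photos/google-photos-myfolder/
```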
  9. Coolsaber57

    [Plugin] rclone

    Ok, so it appears that there's some issue with some of the files in the Google Drive that rclone doesn't like. And you were right, it seems to work better with folders. For a lot of the documents I have in my Google Drive account, it shows IO errors. I'm not sure if there's some kind of permissions setting, either in rclone or Google Drive, that I need to set. Any suggestions?
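(One possible explanation, offered as a guess: native Google files (Docs, Sheets, Slides) are not regular files — rclone has to export them on the fly, and they can produce read errors during a sync. rclone's `--drive-skip-gdocs` flag skips them entirely, which often lets the rest of the sync complete cleanly:)

```shell
# Sketch: skip Google-native documents and sync everything else
# (destination folder here is hypothetical).
rclone sync -v --drive-skip-gdocs google-drive-myusername: /mnt/user/Multimedia/gdrive-backup/
```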
  10. Coolsaber57

    [Plugin] rclone

    That makes sense, so I updated it to show:

      #!/bin/bash
      #----------------------------------------------------------------------------
      #first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
      #there are 4 entries below as in the video i had 4 remotes amazon, dropbox, google and secure
      #you only need as many as what you need to mount for dockers or a network share
      mkdir -p /mnt/disks/google-drive-myfolder
      #mkdir -p /mnt/disks/dropbox
      #mkdir -p /mnt/disks/google
      #mkdir -p /mnt/disks/secure
      #This section mounts the various cloud storage into the folders that were created above.
      rclone mount --max-read-ahead 1024k --allow-other google-drive-myusername: /mnt/disks/google-drive-myfolder &

    The myusername matches the name of the config. I then saved it to User Scripts, unmounted, then mounted. However, when I try to test it with this command:

      rclone sync /mnt/disks/google-drive-myfolder/brewing.xlsx /mnt/user/Multimedia/brewing.xlsx

    I still see these errors when trying to sync a file:

      2019/01/24 10:23:38 ERROR : : error reading source directory: directory not found
      2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting files as there were IO errors
      2019/01/24 10:23:38 ERROR : Local file system at /mnt/user/Multimedia/brewing.xlsx: not deleting directories as there were IO errors
      2019/01/24 10:23:38 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

    So maybe it's not actually mounting correctly, but I can see the folders using the lsd command in the terminal, which tells me that the config is correct.
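(Two things worth checking here, offered as a sketch: `rclone sync` expects a directory as its source, so pointing it at a single file can produce exactly this "error reading source directory" message; and copying straight from the remote takes the FUSE mount out of the equation entirely:)

```shell
# First confirm the mount is actually populated on the host side:
ls /mnt/disks/google-drive-myfolder/

# For a single file, use copyto instead of sync, and read straight from the
# remote so a broken mount can't interfere:
rclone copyto -v google-drive-myusername:brewing.xlsx /mnt/user/Multimedia/brewing.xlsx
```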
  11. Coolsaber57

    [Plugin] rclone

    I too am struggling to get Google Drive to sync anything. I'm using the following config:

      [google-drive-folder]
      type = drive
      token = {"access_token":"REDACTED","token_type":"Bearer","refresh_token":"REDACTED","expiry":"2019-01-24T02:00:33.009530267-05:00"}

    and the following mount script:

      #!/bin/bash
      #----------------------------------------------------------------------------
      #first section makes the folders for the mount in the /mnt/disks folder so docker containers can have access
      #there are 4 entries below as in the video i had 4 remotes amazon, dropbox, google and secure
      #you only need as many as what you need to mount for dockers or a network share
      mkdir -p /mnt/disks/google-drive-myfolder
      #mkdir -p /mnt/disks/dropbox
      #mkdir -p /mnt/disks/google
      #mkdir -p /mnt/disks/secure
      #This section mounts the various cloud storage into the folders that were created above.
      rclone mount --max-read-ahead 1024k --allow-other google1: /mnt/disks/google-drive-myfolder &
      #rclone mount --max-read-ahead 1024k --allow-other dropbox: /mnt/disks/dropbox &
      #rclone mount --max-read-ahead 1024k --allow-other google: /mnt/disks/google &
      #rclone mount --max-read-ahead 1024k --allow-other secure: /mnt/disks/secure &

    I commented out the stuff I'm not using (copied from the SpaceInvaderOne video). When I use the lsd command via the Terminal, I see two folders (Google Photos and another I had set up previously), so I know it's mounted.

    However, when I try to run the following command (to sync my Google Photos folder to my Unraid share):

      rclone sync -v '/mnt/disks/google-drive-myfolder/Google Photos' /mnt/user/Multimedia/Photos/google-photos-myfolder/

    I get the following:

      2019/01/24 01:42:40 ERROR : : error reading source directory: directory not found
      2019/01/24 01:42:40 INFO : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for checks to finish
      2019/01/24 01:42:40 INFO : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: Waiting for transfers to finish
      2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting files as there were IO errors
      2019/01/24 01:42:40 ERROR : Local file system at /mnt/user/Multimedia/Photos/google-photos-myfolder: not deleting directories as there were IO errors
      2019/01/24 01:42:40 ERROR : Attempt 1/3 failed with 1 errors and: not deleting files as there were IO errors

    Anyone see anything I'm doing blatantly wrong?
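(One thing worth double-checking here, as a guess: the config section is named `[google-drive-folder]` but the active mount line uses `google1:` — rclone only resolves remote names that exactly match a config section, so that mount would fail silently in the background. Syncing straight from the remote also sidesteps the mount:)

```shell
# List the remote names rclone actually has configured:
rclone listremotes

# If "google-drive-folder:" is what's listed, sync directly from the remote
# (quotes are needed because of the space in "Google Photos"):
rclone sync -v "google-drive-folder:Google Photos" /mnt/user/Multimedia/Photos/google-photos-myfolder/
```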
  12. Coolsaber57

    [Support] Linuxserver.io - Letsencrypt (Nginx)

    Yeah, not sure how many times I'm going to do that 😂
  13. Coolsaber57

    [Support] Linuxserver.io - Lychee

    Yep, I moved on from Lychee for now. I'm also concerned that it imports all of the pictures into its database, which means the files are no longer on my Share but on my Docker/VM SSD (unassigned drive). I am still looking for a solution that will let you keep the photos where they are. I'm trying Photoshow at the moment, but it's quite slow.
  14. Coolsaber57

    [Support] Linuxserver.io - Letsencrypt (Nginx)

    Ok, I've run into an odd issue and am trying to figure out where I'm going wrong. I'm trying to proxy the Photoshow docker container at https://photos.mydomain.com with the following config:

      # For Photoshow
      server {
          listen 443 ssl;
          listen [::]:443 ssl;

          server_name photos.*;

          include /config/nginx/ssl.conf;

          client_max_body_size 0;

          # enable for ldap auth, fill in ldap details in ldap.conf
          #include /config/nginx/ldap.conf;

          location / {
              # enable the next two lines for http auth
              #auth_basic "Restricted";
              #auth_basic_user_file /config/nginx/.htpasswd;

              # enable the next two lines for ldap auth
              #auth_request /auth;
              #error_page 401 =200 /login;

              include /config/nginx/proxy.conf;
              resolver 127.0.0.11 valid=30s;
              set $upstream_photos photoshow;
              proxy_pass http://$upstream_photos:8083;
          }
      }

    However, when I start up the LE container, I get an error, and the container doesn't start:

      e': "No such container: 0a7b5297b0bc"

    Here's my photoshow config for reference:

    Am I missing something really obvious?

    Edit: just realized I should have set upstream_photos and used port 80. Resolved my issue.
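(For anyone hitting the same thing, here is a sketch of what the edit describes: with the `resolver`/`set $upstream` pattern, nginx resolves the container by name on the Docker network, so `proxy_pass` should target the container's internal port (80 for Photoshow), not the host-mapped port (8083). Assuming the container is named `photoshow`:)

```nginx
location / {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_photos photoshow;
    # Internal container port, not the host-mapped 8083:
    proxy_pass http://$upstream_photos:80;
}
```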
  15. Coolsaber57

    [Support] Linuxserver.io - Letsencrypt (Nginx)

    Hey, this is not necessarily what you're looking for, but I had a much easier time proxying the Firefox container and then just accessing the Unraid UI inside the Firefox container. Much less headache IMO.