Posts posted by stefer

  1. EDIT: it works, see the edit at the bottom.

    In ddns-updater, is there a way to force IPv4 only? I use Namecheap; I provided my provider, domain, host ("*"), and my password in the JSON file. It connects to Namecheap, but says malformed IP...

    I added the IPV6_PREFIX variable with the /64 value and still no go (I use Rogers, and a quick Google search showed they use /64).

    EDIT: OK, looking at the docker's log, I see that it finds my IP fine... it's the any.mydomain.com host that it doesn't like. I replaced it with @ and that works. Will add my subdomains now.
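
    For reference, a minimal config.json sketch for this kind of setup (hedged: the field names follow ddns-updater's documented Namecheap settings, the values are placeholders, and ip_version is an assumption based on the project's per-record options):

        {
          "settings": [
            {
              "provider": "namecheap",
              "domain": "mydomain.com",
              "host": "@",
              "password": "REDACTED",
              "ip_version": "ipv4"
            }
          ]
        }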

  2. Hi. I have an issue with the files that are sorted by SongKong: they end up owned by root instead of the user nobody. Is there a way to fix that? I can change ownership with tools or on the command line, but it would be nice if the moved files got the right ownership from the get-go. Thanks!
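
    In the meantime, a one-line sketch for fixing ownership after the fact (the share path is a placeholder for wherever SongKong moves files):

        # recursively hand the sorted files back to nobody:users
        chown -R nobody:users /mnt/user/Music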

  3. On 6/17/2019 at 11:14 AM, Nischi said:

    Having the same problem with "Bus 005 Device 002: ID 046d:08c9 Logitech, Inc. QuickCam Ultra Vision"

    Did you ever get it to work? My /dev/ does not seem to have any USB entry for this camera either. I can find it under /dev/bus/usb/, though, but that's not working for passthrough to the docker.

    Nope, I gave up and just used my Raspberry Pi instead.

  4. I can't seem to pass through my camera.

    lsusb gives me this:

    Bus 002 Device 003: ID 045e:075d Microsoft Corp. LifeCam Cinema

     

    I pass through a device pointing to /dev/bus/usb/002/003.

    And the container doesn't recognize the camera (no cameras found).

     

    And in /dev there's no ttyUSB# device either...
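
    For anyone else hitting this, a sketch of the two usual ways to hand a USB camera to a container (hedged: the /dev/video0 node is an assumption; a UVC webcam like the LifeCam normally gets one once the uvcvideo driver claims it, and <image> is a placeholder):

        # option 1: pass the raw USB device (bus/device numbers can change after a reboot or replug)
        docker run --device=/dev/bus/usb/002/003 <image>

        # option 2: pass the V4L2 node the kernel creates for UVC webcams
        docker run --device=/dev/video0 <image>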


  5. Hi,

     

    I'm getting a 404 error when trying to reverse proxy this with nginx.

     

    Here's my location block:

     

    location /ttrss/ {
        proxy_pass http://192.168.1.69:7845;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    Any ideas?  My other location blocks for other dockers work fine.

     

    Edit: This is in my letsencrypt nginx config.

    Should I move it to the ttrss nginx config?
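
    One thing worth checking with sub-path proxying like this: without a trailing slash on proxy_pass, nginx forwards the /ttrss/ prefix to the backend, and most apps 404 unless they're configured for that base path. A sketch of the variant that strips the prefix:

        location /ttrss/ {
            # the trailing slash makes nginx replace /ttrss/ with / before proxying
            proxy_pass http://192.168.1.69:7845/;
            proxy_set_header Host $host;
            proxy_set_header X-Real-IP $remote_addr;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        }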

  6. OK, I think I've got it. I've added a server directive and it works. I even got a directive for port 80 to redirect to its HTTPS counterpart. But is there a way to have a catch-all port 80 directive that forwards to the HTTPS counterpart, i.e., any request to http://subdomain.domain.com to https://subdomain.domain.com? (A sketch of one approach follows the config below.)

     

    server {
        listen 80;
        server_name library.mydomain.com;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443;
        server_name library.mydomain.com;

        location / {
            proxy_bind       $server_addr;
            proxy_pass       http://<redactedIP>:<redactedPORT>;
            proxy_set_header Host            $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Scheme        $scheme;
        }
    }
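
    And the catch-all: a default server on port 80 that matches any hostname and bounces to HTTPS. A minimal sketch (server_name _ is the usual wildcard idiom):

        server {
            listen 80 default_server;
            server_name _;
            return 301 https://$host$request_uri;
        }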


  7. OK, so I managed to get letsencrypt going. I added all my subdomains in my Namecheap DNS manager, and they all register with letsencrypt. I edited my default nginx file to have Sonarr and Radarr respond at /sonarr and /radarr.

    But how do I manage to have them work as https://<service>.mydomain.com?

    EDIT:

    I've added another server block to test out one application (calibre-web), but it doesn't work. When I try to load https://library.<retracted>.com, I get the nginx welcome page... Note: before I did that, I had a location /library block with the same info, and that worked like a charm.

     

    server {
        server_name library.retracted.com;

        location / {
            proxy_bind       $server_addr;
            proxy_pass       http://192.168.1.69:8083;
            proxy_set_header Host            $http_host;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header X-Scheme        $scheme;
            proxy_set_header X-Script-Name   /library;
        }
    }

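    Likely culprit for the welcome page: this server block has no listen directive, so it defaults to port 80 and never matches the HTTPS request, which then falls through to nginx's default server. A hedged sketch of the fix (the certificate paths are assumptions based on the letsencrypt container's usual layout):

        server {
            listen 443 ssl;
            server_name library.retracted.com;

            # cert paths are an assumption; match them to your letsencrypt container
            ssl_certificate     /config/keys/letsencrypt/fullchain.pem;
            ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

            location / {
                proxy_pass       http://192.168.1.69:8083;
                proxy_set_header Host            $http_host;
                proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            }
        }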
     

  8. On 12/20/2017 at 9:54 PM, Squid said:

     

    Are your applications using any SMB mounts made by Unassigned Devices?  If so, this is the probable answer

     

    https://forums.lime-technology.com/topic/57181-real-docker-faq/?page=2&tab=comments#comment-608304

     

    Hi. I'm using SMB mounts made by Unassigned Devices... and I have the same issue he does... Thing is, I can't find how to force SMB1. I followed the instructions to go to Settings, then Unassigned Devices, and hit the "Force SMB1" setting, but it's nowhere to be found on that page...

     

    EDIT: went with NFS instead... hopefully that doesn't give me issues too, grrr... Soon I'll have enough space to move ALL of it off my ReadyNAS.

     

  9. Hi, I'm having a strange issue with NZBGet + SickBeard + nzbToMedia...

    It's related to nzbToMedia, but I figured I'd try here first, since... it looks like my nzbget.conf gets overwritten?

    When SB sends a download to NZBGet, it downloads fine, but when NZBGet calls the script, the script says that SB didn't answer...

    So I checked SB, and SB is up and running... I restarted that container for good measure... but NZBGet still says it's not answering...

    So I looked at my nzbToMedia config in NZBGet... for some reason, the SB address keeps getting rewritten to "http://192.168.1.69:8081"...

    I change it to 192.168.1.69, save, reload NZBGet, and it goes right back!

    I SSH into unRAID and edit nzbget.conf manually while the docker is off, save the file, then start NZBGet again...

    After a bit of testing I figured out the nzbToSickBeard section isn't really what matters; it's the nzbToMedia script that gets called. I noticed it's the sbhost value in that section that keeps reverting to "localhost" instead of "192.168.1.69".

    It works fine for a while (2-3 weeks) and then starts acting up again. What could make it reset itself like that?

     

    Any idea?

     

    Thanks!
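
    For the record, the manual fix described above as a shell sketch (hedged: the container name and appdata path are assumptions, and the exact option key in nzbget.conf may differ, so grep for it first):

        # stop the container so NZBGet doesn't rewrite the file on shutdown
        docker stop nzbget
        # find the exact key, then point it at the LAN IP instead of localhost
        grep -n 'sbhost' /mnt/user/appdata/nzbget/nzbget.conf
        sed -i 's/sbhost=localhost/sbhost=192.168.1.69/' /mnt/user/appdata/nzbget/nzbget.conf
        docker start nzbget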


  10. Thanks. I understand what user shares are vs. disk shares; it's the whole "adding it back to the pool" ordeal that confused me... I'm used to my NASes and RAID in general...

    Using unbalance should fix the issue, as long as I stop my dockers so they don't write anything new, and I make sure the drive is completely empty before stopping the array and formatting.
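
    A quick sketch of confirming a disk is truly empty before formatting (the disk number is a placeholder):

        # list anything left on the emptied disk; no output means it's safe to format
        find /mnt/disk3 -mindepth 1 | head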

    It's funny you replied, as I was reading this (one of your replies) in another tab:

     

     

    I'll have to run the script to fix my access rights first, though...

     

    After reading a bunch of posts, I think I will stick with btrfs... maybe down the line invest in a good APC UPS...

    I have to upgrade most of my drives sooner rather than later anyway, so I could format the new ones as XFS as I replace them.

     

  11. From what I can see when I SSH into my server, some shares have folders owned by nobody, and some are owned by a user I created just for my own personal stuff (budget, etc.)... BUT the items owned by my user are nowhere near the personal folder. I wonder if it's because I used a Windows machine to copy stuff over to those shares; I might have used that username when I mapped the drive in Explorer, which would explain the ownership.

    I COULD fix it with the Docker Safe New Permissions command...

     

    Question, when you say :

    Quote

    You can then use the newly empty XFS disk as a target for copying the contents of one of your other disks, lather, rinse, repeat.

     

    Once it's formatted... the array is started, but the disk won't be part of it? Or will it?

    And how do I copy content over? Using unbalance again?

    At what point do the XFS disks become part of the array and protected by parity? Once they've all been formatted?

  12. I have very stable power (now), but power failures do occur from time to time... twice this year so far. And my UPS is not strong enough to keep the PC up for more than 5 seconds...

    My important stuff is on Dropbox. Yeah, the conversion sounds painful... even if it's easier the way you explained it. Especially since I get this error when I run the calculate option:

     

    There are some permission issues with the folders/files you want to move
    5080 file(s)/folder(s) with an owner other than 'nobody'
    0 file(s)/folder(s) with a group other than 'users'
    268 folder(s) with a permission other than 'drwxrwxrwx'
    1356 files(s) with a permission other than '-rw-rw-rw-' or '-r--r--r--'
    You can find more details about which files have issues in the log file (/boot/logs/unbalance.log)
    At this point, you can move the folders/files if you want, but be advised that it can cause errors in the operation
    You are STRONGLY suggested to install the Fix Common Problems plugin, then run the Docker Safe New Permissions command

    I'm thinking it's either my Dropbox docker or my NZBGet/SickBeard/CouchPotato dockers... I don't really know how to find out, though. I checked the log and I'm not really sure what to look for.
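
    A sketch for tracking down the offenders directly instead of reading unbalance.log (the path and owner are just the values from the warning above):

        # sample files/dirs not owned by nobody, printing owner then path
        find /mnt/user -not -user nobody -printf '%u %p\n' | head -n 20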

     

  13. I made the mistake (from what I gather after reading some posts) of setting my default filesystem to btrfs...

    Now I came across this post and I'm wondering how to get the conversion to XFS done.

    I would use unbalance to get stuff off one disk, then format it to XFS? How? Stop the array, SSH to the server, and format it with fdisk? Or is there a tool in the webgui that I don't see, or a plugin to help?

    Then what? When I start the array, will unRAID use the newly formatted drive in the storage pool, and then I unbalance the others one by one until I have formatted all of them to XFS?

    OR does unRAID create a second pool that is XFS-formatted?

    Or should I not fret and just keep it as btrfs?

  14. This is what I have in the docker config :

     

    [Screenshot: docker config]

     

    This is what I see in the web UI:

     

    [Screenshot: Calibre web UI]

     

    When I click on All Books, I don't see anything; it's empty. Notice the /config path at the bottom... well, that one is not editable. It's almost like I'd have to move my library to /mnt/user/appdata/Calibre-server.

  15. 5 minutes ago, Eyeheartpie said:

    There's an option when you're installing to add another path. That's where you define the host path. You have to click the plus at the bottom of the server config page. You can add it after the fact by going back into the docker config page. 

    Well, I did that; I added my library location there... but where do I specify to the docker that I want to use that path for my library? Or is there a specific container path to use so it'll be automatic?
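
    The usual unRAID pattern, as a sketch (hedged: the container-side path /books is an assumption; whatever container path you pick is the one the app must be pointed at):

        Host path:      /mnt/user/Calibre   # where the library lives on the array
        Container path: /books              # what the app sees; point calibre-server here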

  16. I'm reading the instructions... I'm trying to install calibre-server. It installs fine, but the instructions say to click Install and enter the library location and port, yet I don't have any box to put the location in...

    Installing it anyway, running the webui shows me an empty library, since I didn't specify where it's located...

    I have a user share called Calibre where the library is located. After adding it as another available share, it's the same deal: I don't see where I can specify to the docker to use THAT location. Any idea what I missed?
