stefer


Posts posted by stefer

  1. EDIT: it works, see the edit at the bottom.

    In ddns-updater, is there a way to force IPv4 only? I use Namecheap; I provided my provider, domain, host ("*") and my password in the JSON file. It connects to Namecheap, but says malformed IP...

    I added the IPV6_PREFIX variable with the /64 value and still no go (I use Rogers, and a quick Google search pointed out they use /64).

    EDIT: OK, looking at the docker's log, I see that it finds my IP fine... it's the any.mydomain.com host that it doesn't like... I replaced it with @ and that works... Will add my subdomains now. (A sketch of the working config is below.)
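
    For anyone landing here with the same setup, this is roughly what the working config.json looks like. The domain and password are placeholders, and the ip_version line is only my understanding of how to pin it to IPv4, so double-check against the project's README for your version:

    {
      "settings": [
        {
          "provider": "namecheap",
          "domain": "mydomain.com",
          "host": "@",
          "password": "REDACTED",
          "ip_version": "ipv4"
        }
      ]
    }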

  2. Hi. I have an issue with the files that are sorted by SongKong. They are now owned by root and not by the user nobody. Is there a way to fix that? I can change ownership with tools or the command line (see below), but it'd be nice if the moved files got the right permissions from the get-go. Thanks!
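
    For anyone else hitting this, the workaround I use in the meantime is below. The path is just an example, so point it at whatever share SongKong writes to (nobody:users is the usual Unraid ownership for share files):

    # example path only -- adjust to your own music share
    chown -R nobody:users /mnt/user/Music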

  3. On 6/17/2019 at 11:14 AM, Nischi said:

    Having the same problem with "Bus 005 Device 002: ID 046d:08c9 Logitech, Inc. QuickCam Ultra Vision"

    Did you ever get it to work? My /dev/ does not seem to have any usb for this camera as well, can find it under /dev/bus/usb/ tho, but that's not working to passthrough to the docker.

    Nope, gave up and only used my Raspberry Pi instead.

  4. unraid-diagnostics-20180603-1438.zip

     

    I was in the middle of a parity check (it was almost done) and all of a sudden my server rebooted. I heard the fans spin down, then the BIOS beep, and it booted back up all by itself.

    Had a power failure yesterday... (I KNOW, I'm saving up for a decent UPS.)

     

    Fix Common Problems tells me that I have a hardware error and to install the NerdPack plugin with mce. Thing is, when I run mce, it tells me my AMD processor is incompatible and to use the edac amd package instead, but that isn't listed in the packages NerdPack can install...

     

    Any tips on what else I can test/try?

     

    Diagnostics attached.

     

    Thanks!

  5. I can't seem to pass through my camera.

    lsusb gives me this:

    Bus 002 Device 003: ID 045e:075d Microsoft Corp. LifeCam Cinema

     

    I pass through a device pointing to /dev/bus/usb/002/003,

    and the container doesn't recognize the camera ("no cameras found").

     

    And in /dev there's no ttyUSB# device either... What I'm trying is roughly sketched below.
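
    A rough sketch of the attempt, with a couple of assumptions: the container name and image are placeholders, and a webcam would normally show up on the host as a /dev/video# node (via the uvcvideo driver) rather than ttyUSB#, so that may be the node that actually needs mapping:

    # check whether the host exposes the webcam as a video device at all
    ls -l /dev/video*

    # if it does, map that node into the container instead of the raw USB path
    # (container name and image below are placeholders)
    docker run -d --name=camera-app \
      --device=/dev/video0:/dev/video0 \
      some/camera-image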

     

     

  6. I've had a drive x'ed out for a few days, had a replacement sent overnight, and it's pre-clearing right now. But I just noticed that the sde device, which I would think is the faulty one, shows up in Unassigned Devices as sdj...

    Now does that mean the drive is usable, but for some reason the sdX assignment got mixed up and Unraid can't find it? Should I mount it and try the scan again, or run it with the correct flag?

     

    Any idea?  

    It did have 2 unrecoverable sectors in its SMART status for a while, and I was meaning to replace it ASAP. I had an unclean shutdown last week where the whole server stopped responding and would not respond to SSH. The web interface only responded to clicking the tabs but nothing else; I could not initiate a reboot and/or an array stop.

     

    After a forced reboot :/ it did a parity check, and I never noticed that auto-correct was on in the scheduler... It did correct some errors. The drive stopped working a couple of days after the parity check ended.

    2018-04-25 20_15_30-UNRAID_Main.png

    parity-checks.log

  7. Hi,

     

    I'm getting a 404 error when trying to reverse proxy this with nginx.

     

    Here's my location block:

     

    location /ttrss/ {
        proxy_pass http://192.168.1.69:7845;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }

    Any ideas?  My other location blocks for other dockers work fine.

     

    Edit: This is in my letsencrypt nginx config.

    Should I move that to the ttrss nginx config?
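
    For anyone comparing notes, one variant worth trying (just my guess: with a trailing slash on the proxy_pass URL, nginx strips the /ttrss/ prefix before forwarding, which matters if ttrss expects to be served from its root rather than from /ttrss/):

    location /ttrss/ {
        proxy_pass http://192.168.1.69:7845/;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
    }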

  8. Ok, I think I've got it. I've added a server directive and it works. I even got a directive for port 80 to redirect to its HTTPS counterpart. But is there a way to have a catch-all port 80 directive that forwards to its HTTPS counterpart? I.e., any request to http://subdomain.domain.com goes to https://subdomain.domain.com? (A catch-all sketch follows the config below.)

     

    server {
        listen 80;
        server_name library.mydomain.com;
        return 301 https://$host$request_uri;
    }
    server {
        listen 443;
        server_name library.mydomain.com;

        location / {
            proxy_bind              $server_addr;
            proxy_pass              http://<redactedIP>:<redactedPORT>;
            proxy_set_header        Host            $http_host;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Scheme        $scheme;
        }
    }
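
    The catch-all I have in mind looks something like this. default_server makes it answer any plain-HTTP request that no other server block claims; I haven't fully tested it yet:

    server {
        listen 80 default_server;
        server_name _;
        return 301 https://$host$request_uri;
    }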

     

     

  9. Ok, so I managed to get letsencrypt going. I added all my subdomains to my Namecheap DNS manager. They all register with letsencrypt. I edited my default file for nginx to have Sonarr and Radarr respond on /sonarr and /radarr.


    But how do I get them to work with https://<service>.mydomain.com ?

    EDIT :

    I've added another server block to test out one application (calibre-web), but it doesn't work; when I try to load https://library.<retracted>.com, I get the nginx welcome page... Note: before I did that, I had a location /library block with the same info and that worked like a charm.

     

    server {
        server_name library.retracted.com;

        location / {
            proxy_bind              $server_addr;
            proxy_pass              http://192.168.1.69:8083;
            proxy_set_header        Host            $http_host;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Scheme        $scheme;
            proxy_set_header        X-Script-Name   /library;
        }
    }
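
    My guess at what's missing (landing on the nginx welcome page usually means no server block matched the request, so nginx fell back to its default): the new block never listens on 443 with SSL. Something like the sketch below is what I plan to try. The cert paths are an assumption about where the letsencrypt container keeps them, so copy whatever the default server block actually uses:

    server {
        listen 443 ssl;
        server_name library.retracted.com;

        # assumed cert locations -- mirror the default server block's paths
        ssl_certificate     /config/keys/letsencrypt/fullchain.pem;
        ssl_certificate_key /config/keys/letsencrypt/privkey.pem;

        location / {
            proxy_pass              http://192.168.1.69:8083;
            proxy_set_header        Host            $http_host;
            proxy_set_header        X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_set_header        X-Scheme        $scheme;
        }
    }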
    

     

  10. On 12/20/2017 at 9:54 PM, Squid said:

     

    Are your applications using any SMB mounts made by Unassigned Devices?  If so, this is the probable answer

     

    https://forums.lime-technology.com/topic/57181-real-docker-faq/?page=2&tab=comments#comment-608304

     

    Hi. I'm using SMB mounts made by Unassigned Devices... and I have the same issue he does... Thing is, I can't find how to force SMB1. I follow the instructions to go into Settings > Unassigned Devices and hit the "Force SMB1" setting, but it's nowhere to be found on that page...

     

    EDIT: Went with NFS instead... hopefully that doesn't give me issues too, grrr... Soon I'll have enough space to move it ALL off my ReadyNAS.

     

  11. Hi SSD, I'm in the same situation, and awaiting a brand new HDD tomorrow... So I can just start an Unraid trial on a different USB drive, boot my old Dell in the living room, and pre-clear it there? I do have a USB 2.0 enclosure, but I think it'll take way too long to pre-clear a 10TB drive in that...

     

    I'm replacing my 4TB parity drive with this 10TB, and once it has redone the parity rebuild, I'm replacing my 1.5TB with the old 4TB parity drive. I think my next upgrade will be a PCIe SATA card to add a few ports so that I can pre-clear drives more easily in the future...

     

     

  12. Hi, I'm having a strange issue with NZBGet + SickBeard + nzbToMedia...

     

    It's related to nzbToMedia, but I figured I'd try here first since... it looks like my nzbget.conf gets overwritten?

     

    When SB sends a download to NZBGet, it downloads fine, but when it calls the script, it says that SB didn't answer...

    So I checked SB, and SB is up and running... I restarted that container for good measure... but NZBGet still says it's not answering...

     

    So I looked at my nzbToMedia config in NZBGet... when I look at the nzbToMedia part, the SB address keeps getting rewritten to "http://192.168.1.69:8081"...

    I change it to 192.168.1.69, save, reload NZBGet, and it goes right back to that!

     

    I SSH to Unraid and edit nzbget.conf manually while the docker is off, save the file, then start NZBGet again (roughly as sketched below)...

     

    After a bit of testing I figured out the nzbToSickBeard section is not really important; it's the nzbToMedia script that gets called. I noticed that it's the sbhost in that section that keeps going back to "localhost" instead of "192.168.1.69".
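
    For reference, the manual fix is roughly this (the appdata path is just where my nzbget config happens to live -- adjust as needed):

    # stop the container so it doesn't rewrite the config on exit
    docker stop nzbget

    # in the nzbToMedia section, set sbhost=192.168.1.69 instead of localhost
    nano /mnt/user/appdata/nzbget/nzbget.conf

    docker start nzbget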

     

    It works fine for a while (2-3 weeks) and then starts acting up again.  What could make it reset itself like that?

     

    Any idea?

     

    Thanks!

     

     

  13. Thanks. I understand what user shares are vs. disk shares. It's the whole ordeal of adding it back to the pool that confused me... I'm used to my NASes and RAID in general...

     

    Using unBALANCE should fix the issue, as long as I stop my dockers so they don't write anything new, and make sure the drive is completely empty before stopping the array and formatting.

    It's funny you replied, as I was reading this (one of your replies) in another tab:

     

     

    I'll have to run the script to fix my access rights first, though...

     

    After reading a bunch of posts, I think I will stick with btrfs... maybe down the line I'll invest in a good APC UPS...

    I have to upgrade most of my drives sooner rather than later anyway, so I could format the new ones as XFS as I replace them.

     

  14. From what I can see when I SSH to my server, some shares have folders that are owned by nobody, and some are owned by a user that I created just for myself for personal stuff, budget, etc... BUT the items that are owned by my user are nowhere near the personal folder. I wonder if it's because I used a Windows machine to copy stuff over to those shares and I might have used that username when I mapped the drive in Explorer... which would explain the ownership.

    I COULD fix it with the Docker Safe New Permissions command... 

     

    Question, when you say:

    Quote

    You can then use the newly empty XFS disk as a target for copying the contents of one of your other disks, lather, rinse, repeat.

     

    Once it's formatted... the array is started, but the disk won't be part of it? Or will it?

    And how do you copy content over? Using unBALANCE again?

    At what point do the XFS disks become part of the array and protected by parity? Once they've all been formatted?