
NeoMatrixJR


Posts posted by NeoMatrixJR

  1. So... I did two things to get mine working.
    1.) Change the "Config File" mapping from /romm/config.yml = /some/mnt/path/appdata/romm/config.yml to /romm = /some/mnt/path/appdata/romm

    2.) That initial config generated an empty FOLDER at [...]/appdata/romm/config.yml/ -- DELETE THIS FOLDER, then copy the contents of https://github.com/zurdi15/romm/blob/release/examples/config.example.yml into a new config.yml FILE.

     

    That at least started up....now to explore from there!
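
    For anyone following along, here's roughly what that boils down to on the CLI (a sketch; the paths match the placeholders above, and the raw URL is my guess at the raw form of the github link):

    # The broken mapping pointed a host FILE path into the container, and Docker
    # created the missing host path as an empty DIRECTORY. Map the whole dir:
    #   before:  -v /some/mnt/path/appdata/romm/config.yml:/romm/config.yml
    #   after:   -v /some/mnt/path/appdata/romm:/romm

    # Remove the bogus directory (it's a dir, not a file):
    rmdir /some/mnt/path/appdata/romm/config.yml

    # Seed a real config FILE from the example:
    curl -o /some/mnt/path/appdata/romm/config.yml https://raw.githubusercontent.com/zurdi15/romm/release/examples/config.example.yml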

  2. On 7/10/2023 at 4:52 PM, ppunraid said:

    Has tcpdump been moved to another plugin? It used to be in the NerdPack. Or is it native to unRAID now? I get "command not found" when I try to run it.

    Seconding this... I could have sworn I've used tcpdump on my unRAID server before... can we get this added back?

  3. 35 minutes ago, mathgoy said:

    Hi gents,

     

    I just set up Duplicati and set the /source path to my /mnt/user folder so I can see all of it, but unfortunately the Duplicati GUI won't show any of my folders:

     

    Any idea what would be the fix?

     

    Thanks

     

    (screenshot of the Duplicati GUI attached)

    "Source data" will be the folders you pick.  Expand "Computer" and look for /source, then pick folders to back up.

  4. On 1/6/2021 at 4:10 PM, drsparks68 said:

    Is there any indication that this project is still active?  Looks like @testdasi hasn't logged in since October. 

     

    On 1/7/2021 at 12:36 PM, falconexe said:

     

     

    I sure hope he's OK. That would be a shame! I'll let you know if I hear from him at all via a DM.

    Looks like he may just be fairly inactive. Here's the recent activity on his GitHub: (screenshot of GitHub contribution activity attached)

  5. 7 hours ago, binhex said:

    Can you try a different browser? Also try an incognito tab and try the web UI in that.

    <facepalm> thanks...you can ignore me now.  It works in edge or incognito chrome...I'm guessing an add-on is mucking with it.  I thought I had disabled any of the ones that could be fooling with it but I guess not.

  6. 1 hour ago, binhex said:

    What browser are you using? It works fine with Chrome on Windows and Mac PCs in my testing. Can you also post a screenshot?

    Chrome on Windows 10. You can see the text moving off to the right... and I've started typing; the red cursor block has moved over, but no text shows up before it.

    (screenshot attached)

  7. Originally posted as a bug, but it was denied bug status. I'm seeing more posts about this than just mine here in General Support, so I hope we can garner some attention sooner or later. Re-posting here as requested...

    Getting a lot of this spammed to my logs:
     

    Apr 13 15:08:44 THECONSTRUCT nginx: 2020/04/13 15:08:44 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:44 THECONSTRUCT nginx: 2020/04/13 15:08:44 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:44 THECONSTRUCT nginx: 2020/04/13 15:08:44 [error] 27570#27570: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Apr 13 15:08:44 THECONSTRUCT nginx: 2020/04/13 15:08:44 [error] 27570#27570: *1454591 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [error] 27570#27570: nchan: Out of shared memory while allocating channel /dockerload. Increase nchan_max_reserved_memory.
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [error] 27570#27570: *1454592 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/dockerload?buffer_length=0 HTTP/1.1", host: "localhost"
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [error] 27570#27570: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Apr 13 15:08:45 THECONSTRUCT nginx: 2020/04/13 15:08:45 [error] 27570#27570: *1454593 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: *1454594 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: nchan: Out of shared memory while allocating channel /shares. Increase nchan_max_reserved_memory.
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: *1454595 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/shares?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Apr 13 15:08:46 THECONSTRUCT nginx: 2020/04/13 15:08:46 [error] 27570#27570: *1454596 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [error] 27570#27570: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [error] 27570#27570: *1454597 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [error] 27570#27570: nchan: Out of shared memory while allocating channel /dockerload. Increase nchan_max_reserved_memory.
    Apr 13 15:08:47 THECONSTRUCT nginx: 2020/04/13 15:08:47 [error] 27570#27570: *1454598 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/dockerload?buffer_length=0 HTTP/1.1", host: "localhost"
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [error] 27570#27570: nchan: Out of shared memory while allocating channel /disks. Increase nchan_max_reserved_memory.
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [error] 27570#27570: *1454599 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/disks?buffer_length=1 HTTP/1.1", host: "localhost"
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [crit] 27570#27570: ngx_slab_alloc() failed: no memory
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [error] 27570#27570: shpool alloc failed
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [error] 27570#27570: nchan: Out of shared memory while allocating channel /cpuload. Increase nchan_max_reserved_memory.
    Apr 13 15:08:48 THECONSTRUCT nginx: 2020/04/13 15:08:48 [error] 27570#27570: *1454600 nchan: error publishing message (HTTP status code 507), client: unix:, server: , request: "POST /pub/cpuload?buffer_length=1 HTTP/1.1", host: "localhost"

    Eventually this leads to /var/log being 100% full. After that, the webUI slows to a crawl or becomes unusable in some areas until I delete syslog.1.

    This can be slowed by doing:

    /etc/rc.d/rc.nginx restart
    /etc/rc.d/rc.nginx reload

    but eventually it kicks off the log spam again. Not sure what's causing it.
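
    The log's own suggestion points at nchan's shared-memory pool. As a sketch only (nchan_max_reserved_memory is a real nchan directive, taken straight from the error text, but the value and whether unRAID preserves edits to its stock nginx.conf are assumptions):

    # in the http { } block of /etc/nginx/nginx.conf (unRAID path is an assumption)
    nchan_max_reserved_memory 64m;   # raise the shared pool the errors say is exhausted

    Then restart nginx as above for it to take effect.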

    I'm trying to gather diagnostics at this time, but it's either failing or taking WAAAY too long. Will post if I'm able to get them downloaded. Otherwise, ask for specific files and I'll grab them.

     

    If anyone knows of a way to collect diagnostics from the CLI, let me know.
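
    (Aside for anyone else searching: unRAID builds from around the 6.2 era on ship a diagnostics command you can run over SSH or at the console; assuming your version has it, the zip lands on the flash drive.)

    diagnostics
    ls /boot/logs/    # the *-diagnostics-*.zip archive ends up here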

    theconstruct-diagnostics-20200423-1714.zip

  8. On 12/16/2018 at 3:41 PM, wedge22 said:

    I think it has something to do with the SabNZB docker container. I just pulled the container sizes, and I doubt it should be over 20 GB.

     

     

    (attachment: screenshot of docker container sizes)

    Where did you get this?
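
    (Asking because I want to compare against mine; for reference, stock Docker can pull similar numbers from the CLI:)

    docker ps -s          # per-container writable-layer size plus virtual size
    docker system df -v   # detailed space breakdown: images, containers, volumes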

  9. On 7/28/2018 at 9:19 AM, HocEman said:

     

    I have LetsEncrypt, Minio and Duplicati dockers running on Unraid on my local network. I have multiple remote PCs (Windows) running Duplicati and backing up to Minio on my Unraid server over the internet. I have a reverse proxy set up for Minio as a subfolder (https://mydomain/minio) and I can successfully connect to Minio via a web browser both locally and remotely using that URL.

     

    If I use SSL and mydomain/minio in Duplicati from the remote PCs I get the "Failed to connect: The request signature we calculated does not match the signature you provided. Check your key and signing method." error.  If I use that same URL on my local network I get a different error from Duplicati: "Failed to connect: Error making request with Error Code Forbidden and Http Status Code Forbidden. No further error information was returned by the service."

     

    The only way I can successfully connect to Minio through Duplicati is if I configure port forwarding on my router and use mydomain:portnumber as the URL (non SSL) both locally and remotely (I can use the IP Address:portnumber locally as well). 

     

    It would be nice to have this working via the reverse proxy so it is more secure and I do not have to have that port open and exposed, but I just can't figure it out. 

     

    Thanks in advance for any suggestions!

     

    When you set your server in Duplicati, are you adding /minio to the end? If you are, try removing that. I found it didn't need it.
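
    That would also line up with the signature error: S3-style (SigV4) signatures cover the request path, so stripping /minio at the proxy invalidates them. If you want it behind the reverse proxy anyway, a sketch of the route I'd try is a dedicated subdomain, so the path is never rewritten (the names and address here are illustrative, not from my actual config):

    server {
        listen 443 ssl;
        server_name minio.mydomain;                # illustrative subdomain
        location / {
            # pass through untouched: no path rewrite, so S3 signatures stay valid
            proxy_pass http://192.168.1.10:9000;   # assumed Minio host:port
            proxy_set_header Host $host;
        }
    }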

  10. 41 minutes ago, HocEman said:

     

    @Tango I am having this exact same problem.  Did you ever find a solution?

    Not sure if a solution was posted, but I've been doing this for a while now. Are you on the same network or server? I guess one difference is my Minio target is off-site. I also have an on-network Duplicati -> Minio from my desktop to my server, but it's not using https...

  11. My apologies if I missed this or the answer should be obvious, but is there any way to put my VPN clients on the same subnet as the rest of my LAN PCs, or assign them IPs from my router's DHCP? I have applications that expect two machines to be on the same LAN (or, as some state... on the same WiFi <facepalm>) and they won't work over VPN in my current configuration. Also, should I want to expand beyond 2 clients... is there a different, non-AS docker I should be using?

  12. Running your Transmission VPN docker... whenever it's up and running, it seems to somehow be eating traffic directed at other dockers. I'm running nginx on container port 80, host port 8081, with traffic forwarded from 80 --> 8081 on my router... no traffic ever reaches my nginx docker while Transmission is up.

     

    Oddly, it's only external traffic.  Traffic from within my network directed at 8081 gets to the appropriate docker fine.
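
    In case it helps the debugging, a quick way to check who actually owns the port (stock Docker and net-tools commands, nothing unRAID-specific):

    docker ps --format '{{.Names}}\t{{.Ports}}' | grep 8081   # which container publishes 8081?
    netstat -tlnp | grep 8081                                 # what's listening host-side?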

  13. 5 minutes ago, saarg said:

     

    Not much we can do without any info. 

    You need to say more than "it fails". Provide the docker run command, the LE container log, and pictures of your port forwarding.

    Docker Command:
    root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d \
      --name="letsencrypt" \
      --net="bridge" \
      --privileged="true" \
      -e TZ="America/Chicago" \
      -e HOST_OS="unRAID" \
      -e "EMAIL"="<redacted>" \
      -e "URL"="<redacted>.us" \
      -e "SUBDOMAINS"="www,<sd1>,<sd2>" \
      -e "ONLY_SUBDOMAINS"="false" \
      -e "DHLEVEL"="2048" \
      -e "VALIDATION"="http" \
      -e "DNSPLUGIN"="" \
      -e "HTTPVAL"="true" \
      -e "PUID"="99" \
      -e "PGID"="100" \
      -p 8080:80/tcp \
      -p 4343:443/tcp \
      -v "/mnt/user/appdata/letsencrypt":"/config":rw \
      linuxserver/letsencrypt

     

    Forwarding

    (screenshot of router port forwarding rules attached)

     

    Container log:

    [s6-init] making user provided files available at /var/run/s6/etc...exited 0.
    [s6-init] ensuring user provided files have correct perms...exited 0.
    [fix-attrs.d] applying ownership & permissions fixes...
    [fix-attrs.d] done.
    [cont-init.d] executing container initialization scripts...
    [cont-init.d] 10-adduser: executing...
    
    -------------------------------------
    _ ()
    | | ___ _ __
    | | / __| | | / \
    | | \__ \ | | | () |
    |_| |___/ |_| \__/
    
    
    Brought to you by linuxserver.io
    We gratefully accept donations at:
    https://www.linuxserver.io/donations/
    -------------------------------------
    GID/UID
    -------------------------------------
    
    User uid: 99
    User gid: 100
    -------------------------------------
    
    [cont-init.d] 10-adduser: exited 0.
    [cont-init.d] 20-config: executing...
    [cont-init.d] 20-config: exited 0.
    [cont-init.d] 30-keygen: executing...
    using keys found in /config/keys
    [cont-init.d] 30-keygen: exited 0.
    [cont-init.d] 50-config: executing...
    Variables set:
    PUID=99
    PGID=100
    TZ=America/Chicago
    URL=redacted.dns
    SUBDOMAINS=www,<sd1>,<sd2>
    EXTRA_DOMAINS=
    ONLY_SUBDOMAINS=false
    DHLEVEL=2048
    VALIDATION=http
    DNSPLUGIN=
    EMAIL=<redacted>
    STAGING=
    
    Backwards compatibility check. . .
    No compatibility action needed
    2048 bit DH parameters present
    SUBDOMAINS entered, processing
    SUBDOMAINS entered, processing
    Sub-domains processed are: -d www.redacted.dns -d <sd1>.redacted.dns -d <sd2>.redacted.dns
    E-mail address entered: <redacted>
    http validation is selected
    Generating new certificate
    Saving debug log to /var/log/letsencrypt/letsencrypt.log
    Plugins selected: Authenticator standalone, Installer None
    Obtaining a new certificate
    Performing the following challenges:
    http-01 challenge for <sd2>.redacted.dns
    http-01 challenge for <sd1>.redacted.dns
    http-01 challenge for redacted.dns
    http-01 challenge for www.redacted.dns
    Waiting for verification...
    Cleaning up challenges
    Failed authorization procedure. www.redacted.dns (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://www.redacted.dns/.well-known/acme-challenge/edxukoZRwU7EPlVpHFg142PTBBrqyU2G94dp_KmApA0: Timeout, redacted.dns (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://redacted.dns/.well-known/acme-challenge/hMixnVqD8nuDQ8WFP4rYw2NL-lWDNx-gqifbSt0Yy8Y: Timeout, <sd1>.redacted.dns (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://<sd1>.redacted.dns/.well-known/acme-challenge/gjtxzSA3iVJOr5-uEKmtJaFH_1F-u9aG1g_03Km8fYI: Timeout, <sd2>.redacted.dns (http-01): urn:ietf:params:acme:error:connection :: The server could not connect to the client to verify the domain :: Fetching http://<sd2>.redacted.dns/.well-known/acme-challenge/IzNN34bA1ALo-7NdbqWlN63vQNvvo4PP-pvea9FCpKE: Timeout
    
    IMPORTANT NOTES:
    - The following errors were reported by the server:
    
    Domain: www.redacted.dns
    Type: connection
    Detail: Fetching
    http://www.redacted.dns/.well-known/acme-challenge/<redacted>:
    Timeout
    
    Domain: redacted.dns
    Type: connection
    Detail: Fetching
    http://redacted.dns/.well-known/acme-challenge/<redacted>:
    Timeout
    
    Domain: <sd1>.redacted.dns
    Type: connection
    Detail: Fetching
    http://<sd1>.redacted.dns/.well-known/acme-challenge/<redacted>:
    Timeout
    
    Domain: <sd2>.redacted.dns
    Type: connection
    Detail: Fetching
    http://<sd2>.redacted.dns/.well-known/acme-challenge/<redacted>:
    Timeout
    
    To fix these errors, please make sure that your domain name was
    entered correctly and the DNS A/AAAA record(s) for that domain
    contain(s) the right IP address. Additionally, please check that
    your computer has a publicly routable IP address and that no
    firewalls are preventing the server from communicating with the
    client. If you're using the webroot plugin, you should also verify
    that you are serving files from the webroot path you provided.
    ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container
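
    For what it's worth, the failure is Let's Encrypt timing out on port 80: http-01 validation always connects to port 80 of the domain, so the router needs a WAN 80 -> host 8080 forward to match the container's -p 8080:80 mapping. A quick sanity probe from outside the LAN (e.g. a phone on cellular data; the path is just a test URL, not a real challenge token):

    # any HTTP response, even a 404, proves the forward works; a timeout means
    # the forward is wrong or the ISP is blocking inbound port 80
    curl -vI http://redacted.dns/.well-known/acme-challenge/test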

     
