lewispm


Posts posted by lewispm

  1. I have a similar problem to this post:

     

    My webui looks like this, but there are no warnings or errors on the screen.  It used to do this occasionally, but now it is pretty consistent.

    I have some dynamix plugins installed, and that post says one was the cause, but I'm not sure what to do since I don't have that exact plugin installed.

    Thanks for any help.

  2. I have a debian 10 vm running on my unraid box and one app (within debian) complains that there's not enough lockable memory:

    Quote

    WARNING: You may not have a high enough lockable memory limit, see ulimit -l

    the output of

    ulimit -l

    is 64.

    Google tells me to change the "max locked memory" limit in /etc/security/limits.conf, but after doing this and rebooting, the limit doesn't change.
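    For reference, the change Google suggests looks roughly like this (a sketch; the wildcard and the "unlimited" value are illustrative):

    # /etc/security/limits.conf -- raise the max locked memory limit
    # ("*" = all users; these limits apply to PAM login sessions)
    *    soft    memlock    unlimited
    *    hard    memlock    unlimited

    Even with something like this in place, ulimit -l still reports 64 after a reboot.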

     

    Is this a setting in the way I set up the VM on unraid, or is this just a debian setting?

  3. On 2/14/2020 at 6:23 PM, bling said:

    FYI you can run both the web server and consumer in a single docker container by using a bash script:

    
    #! /bin/bash
    
    /sbin/docker-entrypoint.sh document_consumer &
    /sbin/docker-entrypoint.sh runserver 0.0.0.0:8000 --insecure --noreload &
    wait

    save this file into a volume that's mounted in the container.  i just put this in the appdata directory.

    then turn on advanced view and override the entry point, e.g.

    
    --entrypoint /usr/src/paperless/data/entry.sh

    clear out the 'post arguments', since you're doing that in the bash script now.

    I did this and the document_consumer would run, but the web server wasn't running.  There was an error in the log about /etc/passwd being locked; I'm not sure if that was the problem.

    I switched the two lines in the entry.sh (listing the webserver first, then the document_consumer second, as below) and it works now. 

    #! /bin/bash
    
    /sbin/docker-entrypoint.sh runserver 0.0.0.0:8000 --insecure --noreload &
    /sbin/docker-entrypoint.sh document_consumer &
    wait

    And I also had to make the file executable (chmod +x).
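    For anyone else following along, the chmod step is just this (the host path is illustrative; use wherever you saved entry.sh in appdata):

    chmod +x /mnt/user/appdata/paperless/entry.sh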

  4. Does the consumer reach into subdirectories of the consume directory (/consume), or does it only consume files in its root?

     

    ScannerPro added a /ScannerPro directory inside my /consume directory, and I can't figure out how to remove it.

     

    And paperless hasn't consumed it yet; I assume that's why.

     

  5. 14 hours ago, Djoss said:

    Check the last few posts, @Karatekid had the same issue.

     

    But you probably need to add the following under the Advanced tab of your proxy host:

     

    
    add_header X-Frame-Options "SAMEORIGIN";

     

    The environment variable is only for the NginxProxyManager UI itself.

    This didn't work.  Here's my advanced tab.  The warning remains.

    I restarted the npm docker (not sure whether that needs to be done) and the warning still persists.  Do I need to restart nextcloud?

    Screen Shot 2020-01-08 at 9.47.21 AM.png

  6. I am getting a couple of security warnings on nextcloud, the same ones I've seen on here.

    Quote

    The "X-Frame-Options" HTTP header is not set to "SAMEORIGIN". This is a potential security or privacy risk, as it is recommended to adjust this setting accordingly.

    The "Referrer-Policy" HTTP header is not set to "no-referrer", "no-referrer-when-downgrade", "strict-origin", "strict-origin-when-cross-origin" or "same-origin". This can leak referer information.

    The project instructions at:

    Quote

    say to set the variable as follows:

    Quote

    You can configure the X-FRAME-OPTIONS header value by specifying it as a Docker environment variable. The default if not specified is deny.

    ...
    environment:
      X_FRAME_OPTIONS: "sameorigin"
    ...

    ... -e "X_FRAME_OPTIONS=sameorigin" ...

    After doing this, the security headers scan shows the same result: X-Frame-Options and Referrer-Policy are still not set.
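    A quick way to double-check, outside the Nextcloud admin scan, is to look at the response headers directly, e.g. (the domain is a placeholder):

    curl -sI https://nextcloud.yourdomain.url/ | grep -iE "x-frame-options|referrer-policy"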

     

    Is the attached screen shot the way to accomplish this?  Because it didn't work.  How should I do this?

     

    Screen Shot 2020-01-06 at 5.42.16 PM.png

  7. Ok, I just tried it again, actually read the comments at the top of the subdomain conf, and figured it out.

     

    Here's what I did, in case you want to do the same:

     

    1. Under the config for the letsencrypt docker, add plex as a subdomain. Apply, then check the logs to confirm it was accepted and that "server ready" appears at the bottom.

    2. In the config for the plex docker, select proxynet as the network.  (I think you already have this.)

    3. Edit /appdata/letsencrypt/nginx/proxy-confs/plex.subdomain.conf.sample:

     

    # make sure that your dns has a cname set for plex, if plex is running in bridge mode, the below config should work as is, for host mode,
    # replace the line "proxy_pass https://$upstream_plex:32400;" with "proxy_pass https://HOSTIP:32400;" HOSTIP being the IP address of plex
    # in plex server settings, under network, fill in "Custom server access URLs" with your domain (ie. "https://plex.yourdomain.url:443")
    
    server {
        listen 443 ssl;
        listen [::]:443 ssl;
    
        server_name plex.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
        proxy_redirect off;
        proxy_buffering off;
    
        # enable for ldap auth, fill in ldap details in ldap.conf
        #include /config/nginx/ldap.conf;
    
    
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;
    
            # enable the next two lines for ldap auth
            #auth_request /auth;
            #error_page 401 =200 /login;
    
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_plex plex;
            proxy_pass http://$upstream_plex:32400;
    
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
    
            proxy_set_header X-Plex-Client-Identifier $http_x_plex_client_identifier;
            proxy_set_header X-Plex-Device $http_x_plex_device;
            proxy_set_header X-Plex-Device-Name $http_x_plex_device_name;
            proxy_set_header X-Plex-Platform $http_x_plex_platform;
            proxy_set_header X-Plex-Platform-Version $http_x_plex_platform_version;
            proxy_set_header X-Plex-Product $http_x_plex_product;
            proxy_set_header X-Plex-Token $http_x_plex_token;
            proxy_set_header X-Plex-Version $http_x_plex_version;
            proxy_set_header X-Plex-Nocache $http_x_plex_nocache;
            proxy_set_header X-Plex-Provides $http_x_plex_provides;
            proxy_set_header X-Plex-Device-Vendor $http_x_plex_device_vendor;
            proxy_set_header X-Plex-Model $http_x_plex_model;
        }
    }

    I didn't have to change this file, but if your plex docker is named something other than "plex" (e.g. binhex-plex), you'll have to edit the "set $upstream_plex" line accordingly.
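    For example, for binhex-plex that line would become:

    set $upstream_plex binhex-plex;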

     

    4. Save this file BUT REMOVE THE .sample from the file name.
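    In other words, something like this (using the path from step 3):

    cd /appdata/letsencrypt/nginx/proxy-confs/
    cp plex.subdomain.conf.sample plex.subdomain.conf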

     

    5. As per the last line in the comments of this file, go into the plex server settings and:


    # in plex server settings, under network, fill in "Custom server access URLs" with your domain (ie. "https://plex.yourdomain.url:443")

     

    Then I navigated to plex.mydomain.com and it worked.

     

    Hope it helps!

  8. Thanks for the info, this is exactly what I am trying to do.  

     

    I have a question about your solution for Plex.  

     

    Quote

    3.  In the file in /mnt/appdata/letsencrypt/nginx/proxy-confs/plex.domain.conf > change the line "proxy_pass https://$upstream_plex:32400" to proxy_pass https://UnRaidServerIP:32400

    Doesn't this bypass the nginx proxy and just go to the plex instance on the unraid server?

     

    I got emby to work with the following nginx proxy conf:

     

    # make sure that your dns has a cname set for emby, if emby is running in bridge mode, the below config should work as is, although,
    # the container name is expected to be "emby", if not, replace the line "set $upstream_emby emby;" with "set $upstream_emby <containername>;"
    # for host mode, replace the line "proxy_pass http://$upstream_emby:8096;" with "proxy_pass http://HOSTIP:8096;" HOSTIP being the IP address of emby
    # in emby settings, under "Advanced" change the public https port to 443, leave the local ports as is, set the "external domain" to your url,
    # and set the "Secure connection mode" to "Handled by reverse proxy"
    # to enable password access, uncomment the two auth_basic lines
    
    server {
        listen 443 ssl;
    
        server_name emby.*;
    
        include /config/nginx/ssl.conf;
    
        client_max_body_size 0;
    
        location / {
            auth_basic "Restricted";
            auth_basic_user_file /config/nginx/.htpasswd;
            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_emby binhex-emby;
            proxy_pass http://$upstream_emby:8096;
            proxy_set_header Range $http_range;
            proxy_set_header If-Range $http_if_range;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection "upgrade";
        }
    }

    I'm trying to do the same with plex, but when I do, it doesn't connect remotely.

  9. 44 minutes ago, itimpi said:

    I have frequently done this without problems!  I also see no reply that suggests this would not work (since despite its name, mover does not use the ‘mv’ command to move files).  If it is not working, this suggests some other issue is coming into play.   It might be worth activating mover logging and trying again to see if useful messages saying why it is not working appear in the syslog.

    You are right, it worked.  Now I switched it back to use cache:no.

     

    Thanks, I misunderstood a reply earlier.

     

  10. 7 hours ago, itimpi said:

    It is safe to do this.  However, it might be easier to temporarily set the orphaned share to Use Cache=Yes and then run mover, which will achieve the same effect.  When mover completes the move you can set the share back to the setting you want.

    That was my plan, but a reply above says mover won't do this move.  And I can confirm this, as I already set cache=yes and ran the mover, and it's still orphaned.

  11. Ok, now I need help fixing this problem.  

     

    The data is in the correct share, but orphaned, and won't be moved. 

     

    Is it safe to move the files from disk share to disk share?  That is, copy from the correct share on the cache drive to the correct share on one of the array drives?

     

    Then I can delete the orphaned directory on the cache drive, right?

     

     That seems like my only option.

  12. On 8/17/2018 at 5:03 PM, master00 said:

    Hi guys, 

     

    has anyone tried to run preview generator on docker? Can you guide me please?

     

    I have tried to create the variable path /config/www/nextcloud so i can run occ from docker exec but i get this error on container log: 

     

    importas: fatal: unable to exec export: No such file or directory

     

    Am i missing something? I would like to pre generate all the thumbnails so mobile nextcloud goes faster

     

    I would love for preview generator to work, but I haven't gotten it working yet either.

     

    The command to run an occ command from docker console is:

     

    # sudo -u abc php /config/www/nextcloud/occ yourcommand

    If you get it to generate your previews for you, let me know how you did it!
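    If I'm reading the Preview Generator app's docs right, its occ commands would be run the same way, something like this (untested on my end, so treat it as a sketch):

    # first run: generate previews for everything already in nextcloud
    sudo -u abc php /config/www/nextcloud/occ preview:generate-all
    # afterwards (e.g. on a schedule): only handle new and changed files
    sudo -u abc php /config/www/nextcloud/occ preview:pre-generate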

     

     

  13. Quote

    And because the mv resulted in them now being in a cache-no user share, mover won't touch them, since it only moves cache-yes shares from cache to array, and cache-prefer share from array to cache.

     

    Since I get a warning from the "fix common problems" plugin about files on the cache disk in a share that has "use cache:no", wouldn't it make sense for the mover to move files from "use cache:no" shares to the array?  Would it be harmful for the mover to do this?

     

  14. Quote

    When moving between paths that are at the same mount (/mnt/user) linux simply renames them so the directories are changed on the same disk instead of copying to the destination disk and then deleting from the source disk. That explains why the files were still on cache after the mv. 

     

    And because the mv resulted in them now being in a cache-no user share, mover won't touch them, since it only moves cache-yes shares from cache to array, and cache-prefer share from array to cache.

    Makes sense.

     

    Quote

    Also it might be worth noting that "honoring" user share settings only applies to writing of new files. unRAID never automatically does anything to files that have already been written except as the result of the already explained actions of the mover.

     

    You would have to copy from source user share to destination user share to get it to honor the (write) settings of the destination user share, then delete them from the source user share.

     

    So the "mv" command doesn't write the files to the disk (in the sense that you are describing), that occurred when they were placed in the original share?

     

    But a "cp" command would cause a write action on the share, "honoring" the preferences, correct?

     

    Quote

    And since you are working at this level, you should also be aware that you must never mix disks and user shares when moving/copying since that can result in data loss when it tries to overwrite the file at the same physical disk location it is trying to copy it from. Linux doesn't know that /mnt/user/somepath might actually be /mnt/disk1/somepath, for example.

    Using /mnt/user for both source and destination is the safe and correct way to do this task, then, right?

     

    Thanks for all the info.  It is very helpful!

  15. I created a user script that invokes a "mv" command on the command line to move files from a share with a "use cache: yes" preference to a share with a "use cache: no" preference.

     

    The mv command syntax is this:  

    mv /mnt/user/shareUsingCache/folder/* /mnt/user/shareNotUsingCache/folder/

     

    I thought using /mnt/user would allow the software to place the files in their correct location based on the share rules (use cache:no).  However, after the move, the files were on the cache drive.  I thought the "mover" would fix this, but after it ran they are still on the cache.

     

    As a temporary fix, I have changed the setting on the destination share to "use cache:yes" so now the mover should move them off the next time it runs.  I would like to know what I am doing wrong, and a way to move these files via a command line script that will honor the share preferences.

     

    Thanks.

  16. Not sure if this is a question for the nextcloud docker or for nginx, but here goes.  (On a side note, a "search this thread" function on this website would help tremendously.)

     

    I am getting "error 413" on some larger file uploads from an ipad.  After research, I think its due to the "client_max_body_size" which I edited in "nginx/site-confs/nextcloud" to 16384m (and I also tried 0 to disable checking) and I still get the error.  There's nothing related in the nginx or nextcloud logs.  I also tried changing "proxy_max_temp_file_size" in the same file to 16384m to no avail.

     

    Any ideas?

     

  17. I got the UEFI shell on my first boot following that video also.  He mentions in there to "remember to hit any key to boot," and that was my problem.

     

    I rebooted, started VNC immediately, and pressed a key when the prompt came up, and it booted normally.

     

    In my case, the shell came up when I didn't "press any key," as per the video.

     

    Hope it helps.

  18. I have a server with multiple NICs that I'd like to leverage to remove some data hogs from my home network.

     

    I want to set up a Windows VM with BlueIris to be a home camera server.  I'd like to have the cameras come in on their own network into the Windows VM using one of the NICs on the server, keeping that traffic off my home network.  Then the BI server needs its webserver to have access to the internet.  Is this possible, and how would I set it up?

     

    I also have a DVR (SageTV) that uses IP-based tuners (HDHR) that I'd like to connect directly to another NIC on the server, with the same setup as above: i.e., the DVR sees the tuners on a private network and can serve the home network for viewing.