Everything posted by lewispm

  1. I started to, then misread it and thought since I didn't have the broken config file I didn't need to. After your reply, I tried it and it worked. Thanks!
  2. I have a similar problem to this post: My webui looks like this, but there are no warnings or errors on the screen. It used to do this occasionally, but now it is pretty consistent. I have some dynamix plugins installed, and that post says one was the cause, but I'm not sure what to do since I don't have that exact plugin installed. Thanks for any help.
  3. I have a debian 10 vm running on my unraid box and one app (within debian) complains that there's not enough lockable memory: the output of ulimit -l is 64. Google tells me to change the "max locked memory" limit in /etc/security/limits.conf, but after doing this and rebooting it doesn't change. Is this a setting in the way I set up the VM on unraid, or is this just a debian setting?
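     In case anyone lands here with the same question, a sketch of what I tried in limits.conf — the 1048576 KiB value is just an arbitrary example, not a recommendation:

     ```
     # /etc/security/limits.conf — raise "max locked memory" (units are KiB)
     *    soft    memlock    1048576
     *    hard    memlock    1048576
     ```

     One thing worth knowing: limits.conf is applied by pam_limits at login, so a daemon started by systemd never sees it. If the app runs as a systemd service, the equivalent would be LimitMEMLOCK=1G in the service's unit file instead.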
  4. I did this and the document_consumer would run, but the webserver wasn't running. There was an error in the log about /etc/passwd being locked; I'm not sure if that was the problem. I switched the two lines in entry.sh (listing the webserver first, then the document_consumer second, as below) and it works now:

     #!/bin/bash
     /sbin/docker-entrypoint.sh runserver 0.0.0.0:8000 --insecure --noreload &
     /sbin/docker-entrypoint.sh document_consumer &
     wait

     I also had to make the file executable (chmod +x).
  5. What is "that field" in this case? I put document_consumer in "Post_arguments".
  6. Does the consumer reach into subdirectories of the consume directory, or does it only consume files in the root (/consume)? ScannerPro added a /ScannerPro directory inside my /consume directory and I can't figure out how to remove it. Paperless hasn't consumed it yet; I assume that's why.
  7. A reverse proxy worked just fine for me. I use NginxProxyManager.
  8. I changed mine to look like yours. I can't restart nextcloud right now, but I'm sure that will fix me up. Thanks for the help!
  9. Both of the headers I have listed in my advanced tab are NOT set according to https://securityheaders.com
  10. This didn't work. Here's my advanced tab. The warning remains. I restarted the npm docker (not sure if that's necessary or not) and it still persists. Do I need to restart nextcloud?
  11. I am getting a couple of security warnings on nextcloud, the same as I've seen on here. The project instructions at: say to set the variables thusly: After doing this, the security headers scan shows the same result: x_frame_options and referrer policy are still not set. Is the attached screenshot the way to accomplish this? Because it didn't work. How should I do this?
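      For what it's worth, in plain nginx terms the two missing headers would be set like this — a sketch for the NPM "Advanced" tab; the specific values (SAMEORIGIN, no-referrer) are assumptions about what the scan expects, not taken from the project docs:

      ```nginx
      # Emit the two headers the scan reports as missing;
      # "always" makes nginx send them on every response code, not just 2xx/3xx
      add_header X-Frame-Options "SAMEORIGIN" always;
      add_header Referrer-Policy "no-referrer" always;
      ```

      One gotcha: add_header directives in a nested block replace (rather than extend) any inherited from an outer block, so an add_header inside a location can silently drop these.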
  12. Ok, just tried it again, and this time actually read the subdomain conf comments at the top, and I figured it out. Here's what I did, in case you want to do the same: 1. Under the config for the letsencrypt docker, add plex as a subdomain. Apply, then check the logs that it accepted it and says "server ready" at the bottom. 2. In the config for the plex docker, select proxynet as the network. (I think you already have this.) 3. Edit /appdata/letsencrypt/nginx/proxy-confs/plex.subdomain.conf.sample # make sure that your dns has a cname set for plex, if plex is running in br
  13. Thanks for the info; this is exactly what I am trying to do. I have a question about your solution for Plex: doesn't this bypass the nginx proxy and go straight to the plex instance on the unraid server? I got emby to work with the following nginx proxy conf: # make sure that your dns has a cname set for emby, if emby is running in bridge mode, the below config should work as is, although, # the container name is expected to be "emby", if not, replace the line "set $upstream_emby emby;" with "set $upstream_emby <containername>;" # for hos
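      For anyone following along, the overall shape of such a subdomain conf is roughly this — a heavily trimmed sketch, with the container name and port (emby, 8096) assumed from the comments above, not copied from the real file:

      ```nginx
      server {
          listen 443 ssl;
          server_name emby.*;

          location / {
              # resolve the container name via Docker's embedded DNS at request time;
              # a resolver is required because proxy_pass uses a variable
              resolver 127.0.0.11 valid=30s;
              set $upstream_emby emby;
              proxy_pass http://$upstream_emby:8096;
          }
      }
      ```

      The set $upstream_emby indirection is what lets you swap in a different container name without touching the proxy_pass line.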
  14. You are right, it worked. Now I switched it back to use cache:no. Thanks, I misunderstood a reply earlier.
  15. That was my plan, but a reply above says the mover won't do this move. And I can confirm this, as I already set cache=yes and ran the mover, and it's still orphaned.
  16. Ok, now I need help fixing this problem. The data is in the correct share, but orphaned, and won't be moved. Is it safe to move the files from disk share to disk share? That is, copy from the correct share on the cache drive to the correct share on one of the array drives? Then I can delete the orphaned directory on the cache drive, right? That seems like my only option.
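      If it helps, here's a sketch of the copy-verify-delete sequence I mean, using throwaway directories so it's safe to try anywhere; on the actual server the source and destination would be disk paths along the lines of /mnt/cache/MyShare and /mnt/disk1/MyShare (share name hypothetical):

      ```shell
      # Demonstrates: copy disk-to-disk, verify the copy, only then delete the source.
      SRC=$(mktemp -d)                  # stands in for /mnt/cache/MyShare (the orphaned copy)
      DST=$(mktemp -d)                  # stands in for /mnt/disk1/MyShare (the array target)
      echo "data" > "$SRC/file.txt"

      cp -a "$SRC/." "$DST/"            # copy contents, bypassing /mnt/user entirely
      diff -r "$SRC" "$DST" \
        && rm -rf "$SRC"                # remove the orphaned source only if the copy verified
      ```

      Doing the verify before the delete is the whole point: if diff reports a mismatch, the source is left untouched.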
  17. I would love preview generator to work, but I haven't gotten it working for me yet either. The command to run an occ command from the docker console is:

      # sudo -u abc php /config/www/nextcloud/occ yourcommand

      If you get it to generate your previews for you, let me know how you did it!
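      In case it's useful, the Preview Generator app registers its own occ commands; a hedged example of the one-time bulk run (preview:generate-all is that app's command name, and abc plus the /config path are the linuxserver container's defaults, so adjust for other images):

      ```
      # Hypothetical one-time run inside the docker console to pre-generate previews
      sudo -u abc php /config/www/nextcloud/occ preview:generate-all
      ```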
  18. Since I get a warning from the "fix common problems" plugin about files on the cache disk in a share that has "use cache:no" wouldn't it make sense for the mover to move files from "use cache:no" shares to the array? Would it be harmful for the mover to do this?
  19. Makes sense. So the "mv" command doesn't write the files to the disk (in the sense that you are describing), that occurred when they were placed in the original share? But a "cp" command would cause a write action on the share, "honoring" the preferences, correct? My /mnt/user in both source and destination is the safe and correct way to do this task, then, right? Thanks for all the info. It is very helpful!
  20. I created a user script that invokes an "mv" command to move files from a share with a "use cache: yes" preference to one with a "use cache: no" preference. The mv command syntax is:

      mv /mnt/user/shareUsingCache/folder/* /mnt/user/shareNotUsingCache/folder/

      I thought using /mnt/user would let the software place the files in their correct location based on the share rules (use cache: no). However, after the move, the files were on the cache drive. I thought the "mover" would fix this, but after it ran they are still on the cache
  21. Not sure if this is a question for the nextcloud docker or nginx, but here goes. (On a side note, a "search this thread" function on this website would help tremendously.) I am getting "error 413" on some larger file uploads from an iPad. After research, I think it's due to "client_max_body_size", which I edited in "nginx/site-confs/nextcloud" to 16384m (and I also tried 0 to disable the check), and I still get the error. There's nothing related in the nginx or nextcloud logs. I also tried changing "proxy_max_temp_file_size" in the same file to 16384m, to no avail. Any
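      One thing I've since read: when one nginx sits in front of another (the proxy in front of the container's own nginx), the smallest client_max_body_size along the chain wins, so the directive may need to be raised at every layer the upload passes through — a sketch, with the locations being my assumption:

      ```nginx
      # Needs to appear in BOTH the proxy host's config (e.g. the NPM Advanced tab)
      # and the container's own nginx/site-confs/nextcloud
      client_max_body_size 0;   # 0 disables the request-body size check entirely
      ```

      A 413 from whichever layer still has the small default would look identical from the iPad's side, which would also explain the empty logs on the layer you already changed.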
  22. I got the UEFI shell on my first boot following that video too. He mentions in there to "remember to hit any key to boot," and that was my problem. I rebooted, started the VNC session immediately, and pressed a key when the prompt came up, and it booted normally. In my case, the shell came up because I didn't "press any key," as per the video. Hope it helps.
  23. I have a server with multiple NICs that I'd like to leverage to remove some data hogs from my home network. I want to set up a Windows VM with BlueIris to be a home camera server. I'd like to have the cameras come in on their own network into the Windows VM using one of the NICs on the server, keeping that traffic off my home network. Then the BI server needs its webserver to have access to the internet. Is this possible, and how would I set it up? I also have a DVR (SageTV) that uses IP based tuners (HDHR) that I'd like to directly connect to another NIC on the serv