Everything posted by casperse

  1. Hi all, I keep having access problems caused by file permissions. I have the following in Radarr, and in the docker settings: My only option is to keep running "New Permissions" under Tools. The strange thing is that it's mostly .srt files that can't be read by the player until I update permissions?
  2. Sorry Squid, my mistake! (Long day.) I found the error: I had set some limits for logs etc. on the same docker under Extra Parameters and made an error there. Fixed, and it's running again. Syncthing is back using 100%, but now the other apps and the Unraid UI should still stay responsive, I hope! :-) Thanks - of all the dockers, Syncthing is the most demanding, even more than Plex!
  3. Every time I add this to Syncthing it will not start: --cpu-shares=2
  4. I have an example in the docker FAQ about CPU shares. For even more examples, and further options to prioritize docker apps over unRaid / VMs / etc., you need to google "docker run reference" for the parameters to pop into the Extra Parameters section. I just had a situation yesterday where Syncthing was causing 100% CPU load, making everything unresponsive! Stopping that docker dropped my CPU usage to 34%. So would the best solution here be to prioritize the other dockers above it, making sure they always have enough power to run? Or somehow cap the maximum CPU allocation Syncthing can use? (See the sketch below.)
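     For anyone else landing here, the Extra Parameters I ended up testing (a sketch - the limits are examples to adjust, not recommendations):

        # Soft priority: relative weight, only matters when containers compete for CPU (default is 1024)
        --cpu-shares=512
        # Hard cap: never use more than 2 CPU cores' worth of time, even on an idle box
        --cpus=2
        # Or pin the container to specific cores, e.g. cores 4-7
        --cpuset-cpus=4-7

     Note that --cpu-shares only throttles a container while others are contending for CPU; --cpus enforces a hard ceiling at all times.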
  5. I moved from Resilio to Syncthing because Resilio didn't work - when moving files that were already synced, it would download them all over again! Syncthing supports atomic moves! But I have a major problem with CPU usage: it spikes to 100%, and Unraid and even Plex get unresponsive. So can someone recommend some settings to "control" Syncthing's overload?
  6. Perfect, that did it! So there is NO need to change anything in the default conf for the # main server block? I thought you said that was needed? Are there any security implications? I can see that any subdomain I can think of will now always resolve: anything*.domain_1, anything*.domain_2, anything*.domain_3 all point to the domain set for the "Heimdall subfolder sample", which was for domain_1 (Nextcloud). Normally I would expect a "This site can't be reached". Or is this because each domain has an A record and a CNAME (*.domain1 -> A record), so Letsencrypt just forwards everything to domain_1? I have been playing with this all day :-) hoping to retire my old Synology setup.
[UPDATE]: Nextcloud works but the iOS app can't connect. Switching Nextcloud to domain_2 and using domain_1 for Emby resolved that - Nextcloud wants the sample file for the subdomain, not the subfolder? Everything seems to work! But I am getting a lot of Unraid log errors, and I can see the IP is from the laptop I used for testing:
Apr 12 20:52:59 SERVER nginx: 2020/04/12 20:52:59 [error] 10389#10389: *34579 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:01 SERVER nginx: 2020/04/12 20:53:01 [error] 10389#10389: *34593 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:02 SERVER nginx: 2020/04/12 20:53:02 [error] 10389#10389: *34599 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34604 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34607 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34612 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34615 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:04 SERVER nginx: 2020/04/12 20:53:04 [error] 10389#10389: *34618 recv() failed (104: Connection reset by peer) while reading upstre
Apr 12 21:59:13 SERVER nginx: 2020/04/12 21:59:13 [error] 10389#10389: *56034 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
Apr 12 21:59:13 SERVER nginx: 2020/04/12 21:59:13 [error] 10389#10389: *56036 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: ::1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost"
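     One option I am considering for the unmatched-hostname case (a sketch based on standard nginx behaviour, not something from this thread): keep a catch-all default_server, but have it drop the connection instead of serving an app, and let each real domain match its own server block:

        # catch-all: refuse any hostname we don't explicitly serve
        server {
            listen 443 ssl http2 default_server;
            server_name _;
            include /config/nginx/ssl.conf;
            return 444;   # nginx-specific: close the connection without sending a response
        }

     With this, anything*.domain_1 etc. would get "This site can't be reached" instead of falling through to Nextcloud.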
  7. OK, I have almost read through the entire thread, and on page 167 I found the missing parameter to insert the extra domain names! LOL. I now have 3 domains added and getting certificates! Domain_1 --> Nextcloud (OK), Domain_2 --> Ombi (not working), sub-domain.Domain_2 (OK), sub-domain.Domain_3 (OK). But I still can't get the two main domains to co-exist... I know it's down to how I add the two servers to the default conf? I have copied the sample heimdall.subfolder.conf.sample for the two main domains and created "nextcloud.subfolder.conf" and "ombi.subfolder.conf". I just need some help on how to define the servers in appdata\letsencrypt\nginx\site-confs\default. My addition is in yellow.
  8. So, again: copying the sample from heimdall.subfolder.conf.sample and creating "nextcloud.subfolder.conf", then adding the two servers to the appdata\letsencrypt\nginx\site-confs\default conf (removing the two lines for the htpasswd in the example below):
# auth_basic "Restricted";
# auth_basic_user_file /config/nginx/.htpasswd;
Then of course updating the Nextcloud PHP configuration to the domain and not the sub.domain. I have been reading your old posts today :-) Did I forget something? Would sub.domains still work, e.g. bitwarden.domain_2, or would I need to define them as servers as well?
Update: Adding a domain should be like this, right? I thought I had made an A record wrong, but if I just enter one domain it works; as soon as I add more domains I get this error:
ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container
On page 167 I found a note about creating an extra field for more domains, but it talks about subdomains? Would I be able to do as shown below?
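     For reference, my understanding of how the letsencrypt container's domain variables line up (a sketch - the values are placeholders for my real domains; EXTRA_DOMAINS is the "extra field" from page 167):

        URL=domain_1.com
        SUBDOMAINS=nextcloud,bitwarden
        # extra top-level domains and/or full subdomains of other domains, comma-separated, no spaces:
        EXTRA_DOMAINS=domain_2.com,domain_3.com,sub-domain.domain_2.com
        ONLY_SUBDOMAINS=false
        VALIDATION=http

     Every name listed has to already have DNS (an A record or CNAME) pointing at the WAN IP, or validation fails with exactly the "Cert does not exist!" error above.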
  9. Damn, it was right in front of me! I missed that it didn't have https! (I did try swapping ports.) I think I understand how it works now! So if I copy the Heimdall template to use with Nextcloud, how do I then set the right domain to point to each? Domain_1 --> Ombi (THIS WORKS NOW! :-) Domain_2 --> Nextcloud. I can't see how Letsencrypt can tell which domain should point to which specific docker? Thanks again! This is awesome!
  10. Yes, it's defined in "ombi.subfolder.conf" and I left it at the default, like in the Nextcloud (subdomain) video setup, with the default port 443: set $upstream_port 443; right? (I tried changing it to 3579; it makes no difference.) I just thought I would need some configuration "link" between the two dockers and the 2 domains: domain_1 --> ombi IP:3579 (I am waiting with domain_2 --> nextcloud IP (PHP config.php) until I have cracked the first main domain). Getting the sub.domain working was so simple; would it be better and easier to set up DNS validation instead, using a wildcard SSL certificate? The cert is working for both main and sub domains, so I guess it doesn't really matter.
Update: I also found this guide - https://blog.linuxserver.io/2019/04/25/letsencrypt-nginx-starter-guide/#usingheimdallasthehomepageatdomainroot - and it's exactly like you told me. I can't see any errors, but for some reason it doesn't work... I must be missing something.
  11. I have been reading! And thanks to you and this very long thread I am almost there. Exercise "Set up Ombi with the main domain":
0) Confirm in the log that Letsencrypt gets certificates for everything.
1) Change the docker to use the custom proxynet network type.
2) Use the template heimdall.subfolder.conf.sample, put in your docker name (in this case: ombi), and rename it to "ombi.subfolder.conf" at \rootshare\appdata\letsencrypt\nginx\proxy-confs\ombi.subfolder.conf:

        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;

            # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
            #auth_request /auth;
            #error_page 401 =200 /login;

            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app ombi;
            set $upstream_port 443;
            set $upstream_proto https;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }

3) Comment out location / in appdata\letsencrypt\nginx\site-confs\default:

        # main server block
        server {
            listen 443 ssl http2 default_server;
            listen [::]:443 ssl http2 default_server;
            root /config/www;
            index index.html index.htm index.php;
            server_name _;   <--- Add my domains here?

            # enable subfolder method reverse proxy confs
            include /config/nginx/proxy-confs/*.subfolder.conf;
            # all ssl related config moved to ssl.conf
            include /config/nginx/ssl.conf;
            # enable for ldap auth
            #include /config/nginx/ldap.conf;

            client_max_body_size 0;

            # location / {
            #     try_files $uri $uri/ /index.html /index.php?$args =404;
            # }

            location ~ \.php$ {
                fastcgi_split_path_info ^(.+\.php)(/.+)$;
                fastcgi_pass 127.0.0.1:9000;
                fastcgi_index index.php;
                include /etc/nginx/fastcgi_params;
            }
        }

4) Port setup on the docker: it's the proxy that forwards port 443 to the dockers, and it looks like it gets the port from the docker itself ("proxy_pass $upstream_proto://$upstream_app:$upstream_port;"), so I am not sure if I need to specify the Ombi port 3579 somewhere. But where do I specify which main domain should be used for Ombi? That should be in the # main server block in the default file above, right?

        server_name domain1;
        server_name domain2;

5) I also found this: "Add your domain name to the trusted domains array"? (I don't know what that's about.)
I apologize for not figuring this out myself - I have spent a lot of time on trial & error, and most results on Google use Linux command lines, not these very nice configuration files. (See the sketch after this post.)
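     To make my question concrete, here is my best guess at the missing piece (a sketch - domain1.com/domain2.com are placeholders; please correct me). nginx picks a server block by matching the Host header against server_name, so each main domain would get its own block instead of both names going into the one default_server:

        # domain1.com -> Ombi
        server {
            listen 443 ssl http2;
            server_name domain1.com www.domain1.com;
            include /config/nginx/ssl.conf;
            client_max_body_size 0;

            location / {
                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app ombi;
                set $upstream_port 3579;
                set $upstream_proto http;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }
        }

        # domain2.com -> Nextcloud
        server {
            listen 443 ssl http2;
            server_name domain2.com www.domain2.com;
            include /config/nginx/ssl.conf;
            client_max_body_size 0;

            location / {
                include /config/nginx/proxy.conf;
                resolver 127.0.0.11 valid=30s;
                set $upstream_app nextcloud;
                set $upstream_port 443;
                set $upstream_proto https;
                proxy_pass $upstream_proto://$upstream_app:$upstream_port;
            }
        }

     The port is whatever the app listens on inside the proxynet network (I am assuming Ombi's internal 3579 over plain http, and that the Nextcloud image serves 443 over https), not a port published on the host.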
  12. Oh, I didn't see that, thanks! Would I still be able to use subdomains for other dockers under this top domain? The current version supports multiple domains, like domain1, domain2. Adding any subdomain to the configuration would then create certs for that subdomain under both domains, correct? Is it problematic to also move Nextcloud to its own domain instead of using a subdomain? (I have read many posts in this thread saying a subdomain is the way to get Nextcloud working, and not one about using a main domain.) Again, thanks for your help! Much appreciated.
  13. Thanks! So this sample for the subfolder method would allow me to use the main domain, just updating the app name to another docker? I want to use the main domain for "Ombi", and I can see that there is a template for it, but again it's for a sub.domain. (The docker is authenticated by the Plex service, so I would not need the .htpasswd.)

        # In order to use this location block you need to edit the default file one folder up and comment out the / location
        location / {
            # enable the next two lines for http auth
            #auth_basic "Restricted";
            #auth_basic_user_file /config/nginx/.htpasswd;

            # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
            #auth_request /auth;
            #error_page 401 =200 /login;

            include /config/nginx/proxy.conf;
            resolver 127.0.0.11 valid=30s;
            set $upstream_app heimdall;   <----- "Replace with alternative docker name"
            set $upstream_port 443;
            set $upstream_proto https;
            proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        }

1000 thanks! I have googled this for hours but didn't find anything...
  14. I found this list of controllers: https://wiki.unraid.net/Hardware_Compatibility#PCI_IDE_Controllers But I am not sure I am calculating this correctly: the one above takes more drives (24 vs 16), and also has twice the per-port bandwidth, going from 6 -> 12 Gb/s.
  15. OK, I got it all working with my own subdomain and an A record pointing to my new fixed IP :-) But can I change the subdomain to the main domain? (I have set the subdomain-only option in the docker to false, and I can see that it pulls the certificate.) All the conf samples are for a subdomain, though; where and how can I set up the main domain? Strange that I can only find support for reverse proxying subdomains - my old Synology could do both subdomains and the main domain.
  16. I seem to have the same problem; my pfSense VM doesn't auto-start. Looking at the log I can see this:
2020-04-07 16:49:48.026+0000: 31248: warning : qemuDomainObjTaint:9301 : Domain id=1 name='pfSense 2' uuid=009d5016-771a-4082-09e4-25423bed1eb9 is tainted: high-privileges
2020-04-07 16:49:48.026+0000: 31248: warning : qemuDomainObjTaint:9301 : Domain id=1 name='pfSense 2' uuid=009d5016-771a-4082-09e4-25423bed1eb9 is tainted: host-cpu
But I haven't set any high privileges in the VM settings?
  17. Hi all, I just upgraded to 6.8.3 and noticed some errors in my log file:
nginx: 2020/04/06 08:56:07 [alert] 10364#10364: worker process 32447 exited on signal 6
nginx: 2020/04/06 08:56:09 [alert] 10364#10364: worker process 32470 exited on signal 6
nginx: 2020/04/06 08:56:11 [alert] 10364#10364: worker process 32583 exited on signal 6
nginx: 2020/04/06 08:56:13 [alert] 10364#10364: worker process 32608 exited on signal 6
nginx: 2020/04/06 08:56:15 [alert] 10364#10364: worker process 32612 exited on signal 6
nginx: 2020/04/06 08:56:17 [alert] 10364#10364: worker process 32751 exited on signal 6
nginx: 2020/04/06 08:56:19 [alert] 10364#10364: worker process 908 exited on signal 6
nginx: 2020/04/06 08:56:21 [alert] 10364#10364: worker process 993 exited on signal 6
nginx: 2020/04/06 08:56:23 [alert] 10364#10364: worker process 1105 exited on signal 6
nginx: 2020/04/06 08:56:25 [alert] 10364#10364: worker process 1113 exited on signal 6
nginx: 2020/04/06 08:56:27 [alert] 10364#10364: worker process 1222 exited on signal 6
nginx: 2020/04/06 08:56:29 [alert] 10364#10364: worker process 1248 exited on signal 6
nginx: 2020/04/06 08:56:31 [alert] 10364#10364: worker process 1257 exited on signal 6
nginx: 2020/04/06 08:56:33 [alert] 10364#10364: worker process 1395 exited on signal 6
kernel: python[19988]: segfault at 6e6f68747988 ip 000014c29b707ba0 sp 000014c294a06c68 error 6 in ld-musl-x86_64.so.1[14c29b6fa000+47000]
kernel: Code: 48 8b 47 10 89 f1 48 39 47 18 75 12 48 c7 c0 fe ff ff ff 48 d3 c0 f0 48 21 05 ec fe 06 00 48 8b 57 18 48 8b 47 10 48 89 42 10 <48> 89 50 18 48 8b 47 08 48 89 c2 48 83 e0 fe 48 83 ca 01 48 89 57
Log files attached here: plexzone-diagnostics-20200406-1057.zip
Anything to worry about? I did google this and found this post LINK, but I didn't have any VMs with that config...
  18. Hi all, this is more a question for the people who have been running Unraid much longer than me! 😉 Normally I manage to work with files before they are moved from my cache to the array during the night, but today I wanted to move some files (yes, I have set up the rootshare, so I don't know why it's moving any data across the array?). Anyway, I got speeds like these:
Update: they went up after 30 min... to around 88 MB/s. Still, the drives support a max sustained speed of 210 MB/s. I was moving files from a 10TB Seagate IronWolf enterprise drive ("drive 8") to my cache drive (MM2_SSD_970_EVO_Plus_2TB_S4J4NG0M919326F). So I was thinking my system might need a faster controller - 12 Gb/s? I have the slot to support it, and it would be able to run all 24 drives, not just the 16 on this controller.
My system specs:
MB - Gigabyte C246-WU4
CPU - Intel Xeon E-2176G
GPU - NVIDIA Quadro P2000 (Plex/Emby)
RAM - 64GB Crucial DDR4 ECC DIMM 288-pin 2666 MHz PC4-21300 CL19
LAN - 2 onboard + Intel Pro 1000 VT quad-port NIC (EXPI9404VTG1P20)
Controller - LSI Logic SAS 9201-16i PCIe 2.0 x8 SAS 6Gb/s HBA card (orig. old server controller, not a Chinese clone)
The rest of the drives are on the motherboard SATA controller: 10 x SATA III 6 Gb/s.
Drives:
Cache - Samsung 2TB M.2 (MM2_SSD_970_EVO_Plus_2TB_S4J4NG0M919326F)
IronWolf 10TB
Seagate Exos X14 12TB
HP 3TB (7200 rpm drives, model MB3000ECWLQ) - a few left to be replaced...
So would I benefit from buying the SAS 9305-24i host bus adapter? While the Mover is running alongside all the dockers and 1-2 VMs, I can see that the CPU load is pretty high. Also, I can't even open the log terminal on Unraid while the Mover is running - is this normal? Normal CPU load is between 15-50%.
Update: I think the high CPU is caused by the Plex docker generating thumbnails and metadata (killing it, and things seem to go back to normal). So maybe spend the same money on a bigger CPU instead? Intel Xeon E-2288G. (I really like the ratio of power consumption vs cores on the one I have: only 80W, vs 95W on the new one above, but again you can't really count on these TDP numbers!) So: upgrade the CPU or the controller? As always, your input and knowledge are much appreciated. Thanks, Casperse
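     My back-of-envelope comparison of the two cards (a sketch using link-rate numbers from the spec sheets, so take with a grain of salt - real-world throughput is lower):

        9201-16i: PCIe 2.0 x8 ≈ 4 GB/s to the host; 16 ports x 6 Gb/s
                  -> ~250 MB/s of host bandwidth per drive with all 16 busy
        9305-24i: PCIe 3.0 x8 ≈ 7.9 GB/s to the host; 24 ports x 12 Gb/s
                  -> ~330 MB/s of host bandwidth per drive with all 24 busy

     Either way, a single spinning disk tops out around 210 MB/s, so one Mover transfer from one drive never saturates either card; the upgrade would mainly pay off during parity checks/rebuilds, when many drives are read at once.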
  19. I have the same error in my log file, but I don't have any VMs with this -1 value?
  20. Hi all, my Plex docker has started just stopping? I disabled auto-update in the Unraid plugin, but I still get these "shutdowns". I grabbed the logs right after a shutdown/stop of the docker and I can see this: "Shutting down with signal 15 (Terminated)". That would normally mean the system is shutting down, but Unraid and all my other dockers run perfectly. So how do I find out what is issuing this command to Plex on Unraid?
UPDATE: I have also excluded /mnt/cache/appdata/plex/ from the appdata backup, so it shouldn't stop the Plex docker, should it?
OK, I think it's the update plugin vs the backup plugin needing a bigger time gap so they don't conflict with each other... more data = more time.
  21. @SpaceInvaderOne I finally got everything working, but I would like to know if you have any special rules for the VMs on Unraid? My Unraid server IP is used and shared by the dockers on the same gateway (subnet). Unraid server IP: 192.168.0.6, like most people have... I have VMs on the Unraid server with their own fixed IPs, like 192.168.0.18. BUT if I route any traffic through pfSense for the Unraid server IP (dockers etc. on 192.168.0.6), it overrules any traffic coming from my VM with IP 192.168.0.18 and routes everything over the rule set for the Unraid server IP 192.168.0.6 hosting the VMs! 😞 Do I need to pass through NICs to my VMs in order to separate them from the Unraid server IP?
  22. Same here, but since that router and ISP are only for this server, it doesn't really matter if the server is down. I have run into another problem that I hope you might be able to answer... The server IP is used and shared by the dockers on the same gateway (subnet). Unraid server IP: 192.168.0.10. There are VMs on the Unraid server with their own IPs, like 192.168.0.40, on br0 (bridged to the server's 192.168.0.10). If I route any traffic through pfSense for the Unraid server IP (dockers etc. on 192.168.0.10), it overrules any traffic coming from my VMs and routes everything over the rule set for the server IP? So is separately routing traffic from my VMs only possible if they have real physical NICs that I can pass through to them?
  23. I have 2 NICs on the MB for Unraid and 4 NICs on the pfSense VM; would that be enough? Update: my Unifi USG3 supports 2 x ISP, but I really like all the options I have in pfSense: VPN and alias rules, pfBlockerNG and so much more! (Also looking into a 2x10G card for pfSense when my ISP upgrades their infrastructure - the cheapest 10G router you can have :-) I think I will use pfSense with ISP2 only and keep ISP1 for my Unifi and home network. Now I just need to find a way to separate docker traffic in pfSense by port, not by IP... That should be possible?
  24. Yes, only one gateway for the Unraid server (I can manually change it if one of my ISPs goes down...). That's fine; the Unraid server can keep the same gateway to ISP2 (the pfSense VM on the Unraid server). I have created firewall aliases that route selected host IP traffic through ISP2. I just need to use the two NICs on the server for two different IPs that I can select per docker? I can see that the docker settings have this: but I can't get one docker to use 192.168.0.6 and another to use 192.168.0.7 (same gateway). Is this not possible either? (See the sketch below.) Br, Casperse
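     What I am trying to achieve, expressed as docker run flags (a sketch - the IPs are examples, and this uses Unraid's custom br0 network rather than the two physical host NICs):

        # container 1 on its own LAN address
        docker run --network br0 --ip 192.168.0.6 ... <image1>
        # container 2 on a different LAN address, same gateway
        docker run --network br0 --ip 192.168.0.7 ... <image2>

     With each container on its own br0 IP, it shows up as a separate host in pfSense, so the per-IP firewall aliases apply; binding different containers to the two physical NICs would be a separate mechanism and shouldn't be needed for this.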
  25. Does anyone know if there is a video tutorial for OpenVPN and PIA in pfSense? The introduction video talked about installing OpenVPN on pfSense, but I can't find that video. Thanks! 🙂