Posts posted by casperse
-
3 hours ago, Energen said:
You do have youtube as a subdomain in the letsencrypt docker right? That might be the cause of the cert error.
And yes, the link is for another docker. I've never used it but it seems to make nginx proxy confs easier.
Yes, youtube. is the subdomain :-)
I get the feeling this would be simple if I knew how to substitute the path to https://youtube.domain/youtube-dl
-
4 minutes ago, Energen said:
I've been playing around with this for a while now and only had various levels of success... using what you have now if you change the line to this:
proxy_pass $upstream_proto://$upstream_app:$upstream_port/youtube-dl;
I can get the page to load, but it doesn't load all the graphics, and I don't know if it would actually work either.
I'm not sure if the problem is that the youtube-dl-server container runs on http and letsencrypt makes everything https. In one configuration I was trying my nginx error log had a certificate handshake failed because youtube-dl-server had no ssl.
But however NginxProxyManager works, apparently some other guys got a subdomain to work properly.
Thanks @Energen, much appreciated, I have been trying so many things... (http is not a problem, I have other dockers on subdomains and they work fine!)
Using this with your added line (it didn't make any difference):
I still have to type https://youtube.domain.com/youtube-dl and then it works, but I get a cert. error.
My conf file is now like this:
# make sure that your dns has a cname set for youtube-dl-server and that your youtube-dl-server container is named youtube-dl-server
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name youtube.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    location / {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app youtube-dl-server;
        set $upstream_port 8080;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
        proxy_set_header Range $http_range;
        proxy_set_header If-Range $http_if_range;
    }

    location ~ (/youtube-dl/)?/socket {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app youtube-dl-server;
        set $upstream_port 8080;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port/youtube-dl;
        proxy_set_header Upgrade $http_upgrade;
        proxy_set_header Connection $http_connection;
    }
}
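[Editor's note] A minimal sketch of an alternative shape (untested, and it assumes the nbr23 fork serves all of its routes under /youtube-dl): redirect the bare subdomain root to the app's base path, then proxy that path through unchanged, so nobody has to type /youtube-dl by hand:

```nginx
# Hedged sketch, not a tested conf. Assumes the container is reachable as
# youtube-dl-server and the app expects to live under /youtube-dl.
location = / {
    # send the bare subdomain root to the app's base path
    return 302 /youtube-dl/;
}

location /youtube-dl/ {
    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app youtube-dl-server;
    set $upstream_port 8080;
    set $upstream_proto http;
    # no URI after the port: nginx forwards the original /youtube-dl/... path as-is
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
}
```

With this shape a separate socket location may not even be needed, since /youtube-dl/socket already falls under the second location block.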
The link you shared is to use another docker?
-
OK, I just need to know how to add the /youtube-dl path into the proxy-confs for this to work.
If I type https://sub1.domain.com/youtube-dl it works, but I get a cert. error.
I tried adding ~ /youtube-dl/ but that didn't work either.
-
2 hours ago, casperse said:
EDIT: Create a subdomain template for youtube-dl-server?
OK, so I found the example for a subfolder, and normally I would use the "adguard.subdomain.conf.sample" template to create the subdomain for this?
But I just can't get it working.
The subfolder template:
# Works with this youtube-dl Fork: https://github.com/nbr23/youtube-dl-server
location /youtube-dl {
    return 301 $scheme://$host/youtube-dl/;
}

location ^~ /youtube-dl/ {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /login;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app youtube-dl-server;
    set $upstream_port 8080;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    proxy_redirect off;
    rewrite /youtube-dl(.*) $1 break;
    proxy_set_header Referer '';
    proxy_set_header Host $upstream_app:8080;
}
and my efforts:
# make sure that your dns has a cname set for adguard and that your adguard container is named adguard
server {
    listen 443 ssl;
    listen [::]:443 ssl;

    server_name youtube.*;

    include /config/nginx/ssl.conf;

    client_max_body_size 0;

    # enable for ldap auth, fill in ldap details in ldap.conf
    #include /config/nginx/ldap.conf;

    # enable for Authelia
    #include /config/nginx/authelia-server.conf;

    location / {
        # enable the next two lines for http auth
        #auth_basic "Restricted";
        #auth_basic_user_file /config/nginx/.htpasswd;

        # enable the next two lines for ldap auth
        #auth_request /auth;
        #error_page 401 =200 /ldaplogin;

        # enable for Authelia
        #include /config/nginx/authelia-location.conf;

        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app youtube-dl-server;
        set $upstream_port 8080;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }

    location /control {
        include /config/nginx/proxy.conf;
        resolver 127.0.0.11 valid=30s;
        set $upstream_app youtube-dl-server;
        set $upstream_port 8080;
        set $upstream_proto http;
        proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    }
}
I know I am missing some small thing? (The log shows that the cert. and all domains are okay, and they work for my other dockers.)
I can see that the default path is "/youtube-dl" and that the subfolder template takes that into account.
-
Hi All
I have letsencrypt working with different domains and subdomains and it works great.
The problem is that I set this up some time ago and have forgotten some things...
So I just wanted to add youtube-dl-server on a subdomain, and I found this template:
# Works with this youtube-dl Fork: https://github.com/nbr23/youtube-dl-server
location /youtube-dl {
    return 301 $scheme://$host/youtube-dl/;
}

location ^~ /youtube-dl/ {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /login;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app youtube-dl-server;
    set $upstream_port 8080;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
    proxy_redirect off;
    rewrite /youtube-dl(.*) $1 break;
    proxy_set_header Referer '';
    proxy_set_header Host $upstream_app:8080;
}
But in the other templates I could define the server name "path", and in this one I can't specify the "sub.domain.com".
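[Editor's note] On the server-name question: a subfolder conf deliberately has no server block of its own. It gets included by the site's default conf, and that is where server_name lives. Roughly like this (a sketch of the usual linuxserver.io layout; file paths are assumptions, not copied from a live setup):

```nginx
# Sketch only: the default site conf owns the server block,
# e.g. /config/nginx/site-confs/default
server {
    listen 443 ssl;
    listen [::]:443 ssl;
    server_name _;    # catch-all host; subfolder apps live under it
    include /config/nginx/ssl.conf;

    # subfolder confs such as youtube-dl.subfolder.conf are pulled in here,
    # so their location blocks inherit this server's name and cert
    include /config/nginx/proxy-confs/*.subfolder.conf;
}
```

That is why the subfolder template only contains location blocks: the host name is set once, in the server block that includes it.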
If I use the above and the ports are correct (defaults, mapped 8080 -> 8080), then I get the Ombi app that I have on another main domain?
I also looked at the Ombi conf, and that works but doesn't have any server name defined either?
# In order to use this location block you need to edit the default file one folder up and comment out the / location
location / {
    # enable the next two lines for http auth
    #auth_basic "Restricted";
    #auth_basic_user_file /config/nginx/.htpasswd;

    # enable the next two lines for ldap auth, also customize and enable ldap.conf in the default conf
    #auth_request /auth;
    #error_page 401 =200 /login;

    include /config/nginx/proxy.conf;
    resolver 127.0.0.11 valid=30s;
    set $upstream_app ombi;
    set $upstream_port 3579;
    set $upstream_proto http;
    proxy_pass $upstream_proto://$upstream_app:$upstream_port;
And I just found that my old setup might have some "errors":
Domain_1.com and sub.domain_2.com both point to the Ombi webpage?
Br
Casperse
-
OK, so I had it running for:
And then I got the (freeze):
And I can't reboot, it just freezes, and I cannot access the VM tab either.
Logging in is very slow but doable...
Any fix for this? I really hate having to "CUT POWER" every 49 days....
-
-
On 7/31/2020 at 9:06 AM, johnnie.black said:
It's logged as an actual disk problem, but since it passed the extended test it's OK for now, keep monitoring, you'll need to clear the errors to stop getting the failed report.
Cache temp is normal if it's after some writes, you can set a custom temp warning for it.
Isn't this a little high in temperature? (My former Samsung 1TB NVMe ran cooler.)
Event: Unraid Cache disk temperature
Subject: Alert [PLEXZONE] - Cache disk overheated (78 C)
Description: Samsung_SSD_970_EVO_Plus_2TB_S4J4NG0M919326F (nvme0n1)
Importance: alert
-
On 4/10/2020 at 2:36 PM, johnnie.black said:
If the extended SMART passed disk is OK for now, rebooting will clear the array errors, possibly even just clicking "clear stats" button on main page, not sure.
Main page, right in front of me the whole time - Thanks!
-
1 hour ago, johnnie.black said:
It's logged as an actual disk problem, but since it passed the extended test it's OK for now, keep monitoring, you'll need to clear the errors to stop getting the failed report.
Cache temp is normal if it's after some writes, you can set a custom temp warning for it.
Where can I clear the errors? Thanks 👍
-
Hi All
I keep getting a drive-failure notification, and my disk 20 is showing read errors (I have run the extended SMART disk check).
Attached SMART report & diagnostics below.
BTW, it looks like the parity check has started again, now at 21% (the error notification e-mail said it was:
Parity check in progress.
Total size: 12 TB
Elapsed time: 21 hours, 20 minutes
Current position: 8.95 TB (74.6 %)
Estimated speed: 139.1 MB/sec
Estimated finish: 6 hours, 5 minutes
Sync errors corrected: 0
And yes, the cache drive often runs at a high temperature, but according to others that's normal for an NVMe drive?
I am considering adding this NVMe heatsink. I know it should run warm, but I think it's too high (I have 3 industrial fans at full speed in the case + 2 on the back).
Looking forward to hearing from you - should the disk be replaced?
Best regards
Casperse
Update: it looks like the error is a week old:
Can I acknowledge the error somewhere to stop getting it in the notification e-mail? (if the drive is OK now?)
-
Hi All
I have started to see some weird things: after some weeks the server "slows down", and on my main page the CPU monitoring is blank (in any browser).
The system runs with all the dockers and VMs, but I can't get a diagnostic before I do a reboot, or worst case cut the power?
(I will try to get a diagnostic after the next reboot.)
Has anyone here seen this before?
Br
Casperse
After a reboot I got the diagnostic files; I hope someone can help me find the cause.
-
5 hours ago, squelch said:
I just updated plex and now it won't start again.
Execution error
Server error
Is the window unraid spits out when attempting to start.
root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name='plex' --net='host' -e TZ="America/New_York" -e HOST_OS="Unraid" -e 'VERSION'='docker' -e 'NVIDIA_VISIBLE_DEVICES'='GPU-8746aba1-0848-abca-4528-4f640b678b58' -e 'TCP_PORT_32400'='32400' -e 'TCP_PORT_3005'='3005' -e 'TCP_PORT_8324'='8324' -e 'TCP_PORT_32469'='32469' -e 'UDP_PORT_1900'='1900' -e 'UDP_PORT_32410'='32410' -e 'UDP_PORT_32412'='32412' -e 'UDP_PORT_32413'='32413' -e 'UDP_PORT_32414'='32414' -e 'PUID'='99' -e 'PGID'='100' -v '/mnt/user/plexshares/Movies/':'/movies':'rw' -v '/mnt/user/plexshares/TV Shows/':'/tv':'rw' -v '/mnt/user/Media/Music/':'/music':'rw' -v '/tmp':'/transcode':'rw' -v '/mnt/user/appdata/plex':'/config':'rw' --runtime=nvidia 'linuxserver/plex' 854f2503e585b2eb9817e8b67040d73a6038955308f2d9b633b39bc297d1dbb1 /usr/bin/docker: Error response from daemon: OCI runtime create failed: container_linux.go:346: starting container process caused "process_linux.go:449: container init caused "process_linux.go:432: running prestart hook 0 caused \"error running hook: exit status 1, stdout: , stderr: nvidia-container-cli: initialization error: cuda error: invalid device ordinal\\n\""": unknown. The command failed.
Not saying it's the same problem, but I also couldn't start Plex (I run the official Plex docker), and after I added the log option it started again:
--runtime=nvidia --log-opt max-size=50m --log-opt max-file=1
(I also have a backup docker installed with the linuxserver Plex docker that I can start if the other one doesn't start after an update. I recommend having this setup: you can point both of them to the same media metadata folder, so it doesn't really take up much storage, and one of them always works :-)
-
11 minutes ago, hotio said:
You did enable advanced mode, right?
Yes (always 🙂 ), but there is no "EDIT" function to change the path you define to the download path?
And my download disk is an unassigned drive (no spin-up of the other drives).
-
37 minutes ago, Squid said:
I have no problem changing this in all the other dockers - but I can't see any way to change it in this one, Unpackerr?
(It seems hardcoded.)
-
Hi All
I keep getting an error in (Fix Common Problems) about my mounted unassigned drive?
I can't find a way to change the Unpackerr docker's mapping to RW/Slave.
Can anyone help me?
-
Does this indicate that it can't connect to the server, right? (Never seen this before :-)
Does anyone see the same? It's just for Sonarr & Radarr.
It's been like this all morning... all the other dockers are fine.
-
37 minutes ago, Squid said:
Deluge's and Radarr's container paths don't match. Change Radarr's container path of /downloads -> /mnt/disks/SEED/downloads to be /data -> /mnt/disks/SEED/downloads.
Deluge is telling Radarr the movie is in /data/movies/..., but Radarr doesn't have a /data, hence the message.
https://forums.unraid.net/topic/57181-docker-faq/?tab=comments#comment-566086
Argh, I need to stop thinking that these "default" docker inputs are mandatory - everything is configurable!
Thanks!
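[Editor's note] For anyone landing here later, the fix Squid describes can be sketched compose-style (hypothetical snippet; only the host path comes from this thread). Both containers must use the same container-side path, because that path is what Deluge reports to Radarr:

```yaml
# Sketch, not a full compose file: the container-side path (/data) must be
# identical in both services, since Deluge hands Radarr paths like /data/movies/...
services:
  deluge:
    volumes:
      - /mnt/disks/SEED/downloads:/data
  radarr:
    volumes:
      - /mnt/disks/SEED/downloads:/data   # was /downloads; renamed to match Deluge
```

The same idea applies when editing the Unraid docker templates directly: the "container path" fields, not the host paths, are the ones that have to agree.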
-
Hi All
So far everything worked when I set it up on an external server, but I am trying to move it to my local Unraid server.
I also read many posts about the advantages of having a separate UAD drive, so I did this too
(keeping the array from spinning up every time you download files and move them later).
Mapping is always what causes everyone problems, or it's access rights! :-)
And this is probably also the case here, I just can't put my finger on where I screwed up!
Deluge mappings: (Unassigned disk)
Radarr:
Same path!
Deluge moves them correctly from .incomplete to movies:
Radarr also shows the correct path in the UI:
But in the log files in Radarr it says:
Import failed, path does not exist or is not accessible by Radarr: /data/movies/George. Ensure the path exists and the user running Radarr has the correct permissions to access this file/folder
Then looking at the permissions I can't see anything wrong:
Summary:
Deluge: /data -> /mnt/disks/SEED/downloads/
Radarr: /downloads -> /mnt/disks/SEED/downloads/
Deluge moves files from .incomplete -> movies:
/mnt/disks/SEED/downloads/.incomplete
/mnt/disks/SEED/downloads/movies
Radarr moving files to the array... not working.
That should work - shouldn't it?
It's driving me nuts... I have tried so many options, using sublevel folders and not the root folder of a UA drive, and nothing works.
The last option is to use the path mappings! But I am running everything locally, so that shouldn't be needed, should it?
(localhost doesn't work? I am using the IP for the download clients.)
As always new eyes on the problem and inputs are most welcome!
-
Hi Everyone
I have installed Nextcloud and everything is working. I would then like to map shared Unraid drives into Nextcloud, and I think there are some "mapping" problems.
Unraid has a great feature where you can copy a share's read settings to other shares (making sure they are the same!).
Example: two shared folders with the exact same SMB settings in the Unraid configuration; one works, the other doesn't, and any other folder I try to share also does not work.
Again, the read rights are copied from the one that works?
One positive side is that there is also another option, sharing locally shared files:
In the Nextcloud Docker you set a path:
Then in Nextcloud:
And this would work...
Can anyone give me an idea of what to do next to get SMB working for more than one share?
Does anyone have this working with more than one SMB share?
Are there any docker commands I can use to see the internal mappings?
This doesn't work: docker exec -it name nextcloud config
Thanks!
-
17 hours ago, dariusz65 said:
Can you post your settings for 3615 6.2.3? I'm getting hard disk not found. I'm using 1.3b bootloader.
Then you forgot to change the controller in the XML (I had the same problem when I started):
change hdd bus sata controller from 0 to 1
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/XPEnology_3/vdisk2.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='1' bus='0' target='0' unit='3'/>
</disk>
-
Mounting unassigned devices as SMB shares?
Normally I don't have any problems with UAD shares,
and I can mount any internal Unraid shared folder in Nextcloud,
but for some reason I can't get UAD shares working with Nextcloud?
Could this be related to the SMB v1 vs v2 thing? Or is it because it's a shared drive and not a folder share?
If I type \\192.168.0.6\ I get all shares listed (except UAD drive shares), but typing the share drive name works:
\\192.168.0.6\domains_ssd\ - or is this just not possible because it's not a folder share but a whole-drive share?
Example:
-
@Squid sorry, I was just trying to point out how important autostart of the VM is...
since this is now the router to my ISP for this server only.
I can see the log file is useless, sorry. Attached my diagnostics.
Thanks
-
Hi All
I now have a pfSense router as a VM - so if this VM doesn't autostart, there is no internet for the server?
The dockers autostart perfectly.
The log from the boot:
Apr 22 19:21:44 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface veth4efa0a4.IPv6 with address fe80::a01b:a4ff:fe7d:884.
Apr 22 19:21:44 SERVER avahi-daemon[10180]: New relevant interface veth4efa0a4.IPv6 for mDNS.
Apr 22 19:21:44 SERVER avahi-daemon[10180]: Registering new address record for fe80::a01b:a4ff:fe7d:884 on veth4efa0a5.*.
Apr 22 19:23:23 SERVER kernel: veth84a7df6: renamed from eth0
Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
Apr 22 19:23:23 SERVER avahi-daemon[10180]: Interface vethd967554.IPv6 no longer relevant for mDNS.
Apr 22 19:23:23 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface vethd967554.IPv6 with address fe80::410:92ff:fe6c:114e.
Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
Apr 22 19:23:23 SERVER kernel: device vethd967554 left promiscuous mode
Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
Apr 22 19:23:23 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::410:92ff:fe6c:115e on vethd967554.
Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
Apr 22 19:23:32 SERVER kernel: veth715f555: renamed from eth0
Apr 22 19:23:32 SERVER avahi-daemon[10180]: Interface veth1d3fcb8.IPv6 no longer relevant for mDNS.
Apr 22 19:23:32 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface veth1d3fcb8.IPv6 with address fe80::5469:97ff:feac:a308.
Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
Apr 22 19:23:32 SERVER kernel: device veth1d3fcb8 left promiscuous mode
Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
Apr 22 19:23:32 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::5469:97ff:feac:a308 on veth1d3fcb8.
Apr 22 19:27:28 SERVER kernel: vfio-pci 0000:0a:00.0: enabling device (0000 -> 0003)
Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0a:00.1: enabling device (0000 -> 0003)
Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.0: enabling device (0000 -> 0003)
Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.1: enabling device (0000 -> 0003)
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered disabled state
Apr 22 19:27:31 SERVER kernel: device vnet0 entered promiscuous mode
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered forwarding state
Apr 22 19:27:32 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe2c:e872.
Apr 22 19:27:32 SERVER avahi-daemon[10180]: New relevant interface vnet0.IPv6 for mDNS.
Apr 22 19:27:32 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc54:ff:fe2c:e873 on vnet0.*.
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered disabled state
Apr 22 19:27:35 SERVER kernel: device vnet1 entered promiscuous mode
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered forwarding state
Apr 22 19:27:37 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc27:ebff:feb8:e5c9.
Apr 22 19:27:37 SERVER avahi-daemon[10180]: New relevant interface vnet1.IPv6 for mDNS.
Apr 22 19:27:37 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc28:ebef:feb8:e5c9 on vnet1.*.
Apr 22 19:28:00 SERVER root: Fix Common Problems Version 2020.04.19
I have looked on the forum, and the things I found had no effect on this.
Running: 6.8.3
Br
Casperse
-
Not sure the controller replacement will make a big difference,
so I'm going with the CPU upgrade...
Anyone on this forum who has some experience with this CPU?
Docker Version "not available" (posted in General Support)
I am also getting something like this: one is fine, the other is "Not available".
I have updated and rebooted the server; what's next?