Everything posted by casperse

  1. Hi All I have started to see some weird things: after some weeks the server "slows down", and on my main page the CPU monitoring is blank (in any browser). The system runs with all the dockers and VMs, but I can't get a diagnostic before I do a reboot or, worst case, cut the power. (I will try to get a diagnostic after the next reboot.) Anyone here who has seen this before? Br Casperse After a reboot I got the diagnostic files, hope someone can help me find the cause? plexzone-diagnostics-20200609-0910.zip
  2. Not saying it's the same problem, but I also couldn't start Plex (I run the official Plex docker). I added the log option and then it started again? --runtime=nvidia --log-opt max-size=50m --log-opt max-file=1 (I also have a backup docker installed, the linuxserver Plex docker, that I can start if the other one doesn't start after an update. I recommend having this setup: you can point both of them to the same media metadata folder so it doesn't really take up too much storage, and one of them always works :-)
  3. Yes (always 🙂 ), but there is no "EDIT" function to change the path you defined as the download path? And my download disk is an unassigned drive (no spin-up of the other drives).
  4. I have no problem changing this in all the other dockers, but I can't see any way to change it in this one, Unpackerr? (Seems hardcoded)
  5. Hi All I keep getting an error (from Fix Common Problems) on my mounted unassigned drive? I can't find a way to change the docker for Unpackerr to RW/Slave? Can anyone help me?
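For reference, a sketch of what the RW/Slave setting amounts to at the docker level. In the Unraid template you normally edit the volume mapping and set its Access Mode dropdown to RW/Slave; the path names below are from my setup, so treat this as an illustration rather than official guidance:

```
-v /mnt/disks/SEED:/downloads:rw,slave
```

If the template hardcodes the path and won't let you edit it, the same `-v` fragment can usually be placed in the Extra Parameters field instead.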
  6. Does this indicate that it can't connect to the server, right? (Never seen this before :-) Does anyone see the same? It's just for Sonarr & Radarr. It's been like this all morning... all other dockers are fine.
  7. Argh, I need to stop thinking that these "default" docker inputs are mandatory. Everything is configurable! Thanks!
  8. Hi All So far everything worked when I set it up on an external server, but I am trying to move it to my local Unraid server. I also read many posts about the advantages of having a separate UAD drive, so I did that too, keeping the array from spinning up every time you download files and move them later. Mapping is always what causes everyone problems, or it's access rights! :-) And this is probably also the case here, I just can't put my finger on where I screwed up! Deluge mappings: (unassigned disk) Radarr: same path! Deluge moves them correctly from .incomplete to movies: Radarr also shows the correct path in the UI: But in the log files Radarr says: Then looking at the permissions I can't see anything wrong: Summary: Deluge /data --> /mnt/disks/SEED/downloads/ Radarr /downloads --> /mnt/disks/SEED/downloads/ Deluge moves files from .incomplete --> movies /mnt/disks/SEED/downloads/.incomplete /mnt/disks/SEED/downloads/movies Radarr moving files to the array.... not working. That should work, shouldn't it? It's driving me nuts... I have tried so many options, using sublevel folders and not the root folder of a UA drive, nothing works. The last option is to use path mappings! But I am running everything locally, so that shouldn't be needed, should it? (localhost doesn't work? I am using IPs for the download clients) As always, new eyes on the problem and inputs are most welcome!
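A likely culprit, judging from the summary in that post: Deluge and Radarr mount the same host folder at different container paths (/data vs /downloads), so the path Deluge reports for a finished download does not exist inside the Radarr container. A minimal sketch (the file name is hypothetical) of the translation a remote path mapping would have to perform:

```shell
# Deluge reports its container-side path for the completed download:
DELUGE_PATH="/data/movies/Example.Movie.2020"
# Radarr mounts the same host folder (/mnt/disks/SEED/downloads) here instead:
RADARR_ROOT="/downloads"
DELUGE_ROOT="/data"
# Strip Deluge's root and re-prefix with Radarr's root:
TRANSLATED="${RADARR_ROOT}${DELUGE_PATH#"$DELUGE_ROOT"}"
echo "$TRANSLATED"   # /downloads/movies/Example.Movie.2020
```

The simpler fix is usually to give both containers the identical container-side path (e.g. /data in both templates), after which no mapping is needed at all.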
  9. Hi Everyone I have installed Nextcloud and everything is working. I then would like to map shared Unraid drives to Nextcloud, and I think there are some "mapping" problems. Unraid has a great feature where you can copy a share's read settings to other shares (making sure they are the same!). Example: two shared folders with the exact same SMB settings in the Unraid configuration, one works, the other doesn't, and any other folder I try to share also does not work. Again, the read rights were copied from the one that works? One positive side is that there is also another option to share locally shared files. In the Nextcloud docker you set a path: Then in Nextcloud: And this would work... Can anyone give me any idea on what to do next to get SMB working for more than one share? Has anyone got this working with more than one SMB share? Any docker commands I can use to see internal mappings? This doesn't work: docker exec -it name nextcloud config Thanks!
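On the "see internal mappings" question: docker itself can print a container's mounts from the host, regardless of the image. The container name "nextcloud" below is an assumption; substitute whatever your template named it:

```shell
# Print host-path -> container-path pairs for the running container
docker inspect -f '{{ range .Mounts }}{{ .Source }} -> {{ .Destination }}{{ println }}{{ end }}' nextcloud
```

This shows what the container can actually reach; SMB shares configured inside Nextcloud's External Storage app are separate and won't appear here.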
  10. Then you forgot to change the controller in the XML (I had the same problem when I started). Change the hdd bus SATA controller from 0 to 1:
<disk type='file' device='disk'>
  <driver name='qemu' type='qcow2' cache='writeback'/>
  <source file='/mnt/user/domains/XPEnology_3/vdisk2.img'/>
  <target dev='hdd' bus='sata'/>
  <address type='drive' controller='1' bus='0' target='0' unit='3'/>
</disk>
  11. Mounting unassigned devices as SMB shares? Normally I don't have any problems with UAD shares, and I can mount any internal Unraid shared folder in Nextcloud, but for some reason I can't get UAD shares working with Nextcloud? Could this be related to the SMB v1 or v2 thing? Or because it's a shared drive and not a folder share? If I write \\192.168.0.6\ I get all shares listed (except UAD drive shares), but writing the share drive name works: \\192.168.0.6\domains_ssd\ Or is this just not possible because it's not a folder share but a whole-drive share? Example:
  12. @Squid sorry, I was just trying to point out how important autostart of the VM is... since this is now the router to my ISP (for this server only). I can see the log file is useless, sorry. Attached my diagnostics. Thanks diagnostics-20200423-0802.zip
  13. Hi All I now have a pfSense router as a VM, so if this VM doesn't autostart there is no internet for the server? The dockers autostart perfectly. The log from the boot:
Apr 22 19:21:44 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface veth4efa0a4.IPv6 with address fe80::a01b:a4ff:fe7d:884.
Apr 22 19:21:44 SERVER avahi-daemon[10180]: New relevant interface veth4efa0a4.IPv6 for mDNS.
Apr 22 19:21:44 SERVER avahi-daemon[10180]: Registering new address record for fe80::a01b:a4ff:fe7d:884 on veth4efa0a5.*.
Apr 22 19:23:23 SERVER kernel: veth84a7df6: renamed from eth0
Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
Apr 22 19:23:23 SERVER avahi-daemon[10180]: Interface vethd967554.IPv6 no longer relevant for mDNS.
Apr 22 19:23:23 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface vethd967554.IPv6 with address fe80::410:92ff:fe6c:114e.
Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
Apr 22 19:23:23 SERVER kernel: device vethd967554 left promiscuous mode
Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
Apr 22 19:23:23 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::410:92ff:fe6c:115e on vethd967554.
Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
Apr 22 19:23:32 SERVER kernel: veth715f555: renamed from eth0
Apr 22 19:23:32 SERVER avahi-daemon[10180]: Interface veth1d3fcb8.IPv6 no longer relevant for mDNS.
Apr 22 19:23:32 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface veth1d3fcb8.IPv6 with address fe80::5469:97ff:feac:a308.
Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
Apr 22 19:23:32 SERVER kernel: device veth1d3fcb8 left promiscuous mode
Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
Apr 22 19:23:32 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::5469:97ff:feac:a308 on veth1d3fcb8.
Apr 22 19:27:28 SERVER kernel: vfio-pci 0000:0a:00.0: enabling device (0000 -> 0003)
Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0a:00.1: enabling device (0000 -> 0003)
Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.0: enabling device (0000 -> 0003)
Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.1: enabling device (0000 -> 0003)
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered disabled state
Apr 22 19:27:31 SERVER kernel: device vnet0 entered promiscuous mode
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered forwarding state
Apr 22 19:27:32 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe2c:e872.
Apr 22 19:27:32 SERVER avahi-daemon[10180]: New relevant interface vnet0.IPv6 for mDNS.
Apr 22 19:27:32 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc54:ff:fe2c:e873 on vnet0.*.
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered disabled state
Apr 22 19:27:35 SERVER kernel: device vnet1 entered promiscuous mode
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered forwarding state
Apr 22 19:27:37 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc27:ebff:feb8:e5c9.
Apr 22 19:27:37 SERVER avahi-daemon[10180]: New relevant interface vnet1.IPv6 for mDNS.
Apr 22 19:27:37 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc28:ebef:feb8:e5c9 on vnet1.*.
Apr 22 19:28:00 SERVER root: Fix Common Problems Version 2020.04.19
I have looked on the forum, and the things I found had no effect on this? Running: 6.8.3 Br Casperse
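On the autostart question: Unraid controls VM autostart through its own toggle on the VMs tab, but for reference (and assuming the domain is registered in libvirt under the name pfSense, which is a guess), the libvirt-level equivalent would be:

```shell
virsh autostart pfSense   # mark the domain for start when libvirtd starts
virsh dominfo pfSense     # "Autostart: enable" in the output confirms the flag
```

This only covers the libvirt side; if the Unraid toggle itself is failing, the diagnostics are still the right place to look.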
  14. Not sure the controller replacement will make a big difference, so going with the CPU upgrade... Anyone in this forum who has some experience with this CPU?
  15. So I went back to experimenting with this in Unraid 6.8.3, and I got the following results: DS3615xs works up to DSM_DS3615xs_25423_6.2.3! Performance seems really good, even LAN. DS3617xs I can only get to DSM_DS3617xs_23739_6.2.0; anything higher and it breaks? If anyone has had better luck then please share 😏
  16. Do you mean these lines?
# redirect all traffic to https
server {
    listen 80 default_server;
    listen [::]:80 default_server;
    server_name _;
    return 301 https://$host$request_uri;
  17. Woouu, that seems really complicated! Is this to fix the LAN speed? I just followed the guide on this page and the link here to the Xpenology forum. Yes, it's trial and error; I have tried them all, and the highest DSM I could get working was: XPEnology_3 DSM_DS3615xs_24922 v6.2.2 XPEnology_2 DSM_DS3617xs_23739_6.2 XPEnology DSM_DS3615xs_6.1.7 https://xpenology.com/forum/topic/24168-dsm-621-on-unraid-vm/ The biggest problem is LAN speed on the virtual LAN? A virtual LAN is needed for the MAC addresses if you want to use licenses with DS Cam (I have 4 LAN ports (HW) passed through to the VM, so I thought this might solve the problem, but since I need specific MAC addresses this is a no-go). QUESTION: I would really like to know how people use HD storage with an Xpenology virtual server? Do you create one large terabyte vdisk2.qcow2 and mount this? Or do you use an unassigned drive and mount that to the VM? I would like to have: DS Camera (nice app, low CPU usage; I have bought licenses for my old Synology) DS Photo/Moments DS Backup tools/DS Cloud DS Note All have nice apps and low power requirements. UPDATE: Ok, I didn't know about the docker project... https://github.com/segator/xpenology-docker But what are the advantages over building your own VM? Docker: Latest commit 3e18362 on Mar 8, 2019
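On the vdisk question in that post: if you go the single-large-vdisk route, a sparse qcow2 image only consumes host space as it fills. A sketch, with the path borrowed from the XML shown earlier in the thread (an assumption; adjust to your own share layout):

```shell
# Create a 1 TiB sparse qcow2 vdisk; actual disk usage grows with the data written
qemu-img create -f qcow2 /mnt/user/domains/XPEnology_3/vdisk2.qcow2 1T
```

Passing through an unassigned drive avoids the image layer entirely, at the cost of tying the VM to that physical disk.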
  18. Hi All I keep having access problems caused by file permissions. I have the following in Radarr: And in the docker settings: My only option is to keep running "New Permissions" under Tools. The strange thing is that it's mostly .srt files that can't be read by the player before updating permissions?
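For what it's worth, Unraid's "New Permissions" tool essentially re-applies owner and mode bits recursively. A minimal, self-contained sketch of the same idea against a throwaway directory (the paths are hypothetical stand-ins for a real share; the real tool also runs chown nobody:users, which needs root and is skipped here):

```shell
# Stand-in for /mnt/user/<share>; safe to run anywhere
mkdir -p /tmp/demo_share
touch /tmp/demo_share/subtitles.srt
chmod 000 /tmp/demo_share/subtitles.srt   # simulate an unreadable .srt

# Re-apply readable permissions recursively, as New Permissions would
chmod -R u+rw,g+rw,o+r /tmp/demo_share
stat -c %a /tmp/demo_share/subtitles.srt   # 664
```

The longer-term fix is usually to align the PUID/PGID and umask settings of whichever container writes the .srt files, so the permissions come out right in the first place.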
  19. Sorry Squid, my mistake! (long day) I found the error: I had some limits on logs etc. on the same docker under Extra Parameters and made an error; fixed, and it's running again. Syncthing is back using 100%, but now the other apps and the Unraid UI should still be responsive, I hope! :-) Thanks. Of all the dockers, Syncthing is the most demanding! Even more than Plex
  20. Every time I add this to Syncthing it will not start? --cpu-shares=2
  21. I have an example in the docker FAQ about CPU shares. For even more examples, and further options to prioritize docker apps over unRaid / VMs / etc., you need to google "docker run reference" for the parameters to pop into the Extra Parameters section. I just had a situation yesterday where Syncthing was causing 100% CPU load, making everything unresponsive! When I stopped that docker, my CPU usage fell to 34%. So would the best solution here be to prioritize the other dockers above it, making sure they always have enough power to run? Or somehow to limit the maximum CPU allocation Syncthing can use?
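For reference, the docker-run options that usually come up for this (the values are examples, not recommendations; they go into the Extra Parameters field):

```
--cpu-shares=512       relative weight vs. other containers (default 1024);
                       only takes effect when CPUs are contended
--cpus="2.0"           hard cap: at most two CPUs' worth of time
--cpuset-cpus="4-7"    pin the container to specific cores
```

A hard cap or pinning answers the "max allocation" question; shares answer the priority one, and cost nothing when the CPU is idle.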
  22. I moved from Resilio to Syncthing because Resilio didn't work: when moving files that were already synced, it would download them all over again! Syncthing supports atomic moves! But I have a major problem with CPU usage; it spikes to 100%, and Unraid and even Plex get unresponsive. So can someone recommend some settings to "control" Syncthing's overload?
  23. Perfect, that did it! So NO need to change anything in the default conf for the # main server block? I thought you said that was needed? Are there any security implications? I can see that any subdomain I can think of will now always point to domain_1: anything*.domain_1, anything*.domain_2, anything*.domain_3 all point to the domain set for the "Heimdall subfolder sample", which was for domain_1 (Nextcloud). Normally I guess you would get a "This site can't be reached". Or is this because each domain has an A record and a CNAME *.domain1 --> A record? So Letsencrypt just forwards everything to domain_1. I have been playing with this all day :-) hoping to remove my old Synology setup. [UPDATE]: Nextcloud works but can't connect to the iOS app; switching Nextcloud to Domain_2 and using Domain_1 with Emby resolved that. Nextcloud wants the sample file for the subdomain, not the subfolder? Everything seems to work! But I am getting a lot of Unraid log errors? I can see that the IP is from my laptop that I used to test with:
Apr 12 20:52:59 SERVER nginx: 2020/04/12 20:52:59 [error] 10389#10389: *34579 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:01 SERVER nginx: 2020/04/12 20:53:01 [error] 10389#10389: *34593 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:02 SERVER nginx: 2020/04/12 20:53:02 [error] 10389#10389: *34599 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34604 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34607 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34612 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:03 SERVER nginx: 2020/04/12 20:53:03 [error] 10389#10389: *34615 recv() failed (104: Connection reset by peer) while reading upstream, client: 192.168.0.34, server: , request: "GET //wsproxy/5700/ HTTP/1.1", upstream: "http://127.0.0.1:5700/", host: "192.168.0.6"
Apr 12 20:53:04 SERVER nginx: 2020/04/12 20:53:04 [error] 10389#10389: *34618 recv() failed (104: Connection reset by peer) while reading upstre
Apr 12 21:59:13 SERVER nginx: 2020/04/12 21:59:13 [error] 10389#10389: *56034 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: 127.0.0.1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "127.0.0.1"
Apr 12 21:59:13 SERVER nginx: 2020/04/12 21:59:13 [error] 10389#10389: *56036 FastCGI sent in stderr: "Primary script unknown" while reading response header from upstream, client: ::1, server: , request: "GET /admin/api.php?version HTTP/1.1", upstream: "fastcgi://unix:/var/run/php5-fpm.sock:", host: "localhost"
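On the wildcard concern in that post: unless a default server is defined, nginx serves requests for unknown hostnames from the first matching server block, which is why anything*.domain_x lands on domain_1. A sketch of a catch-all that refuses such requests instead (the cert paths are assumed from the letsencrypt container's usual layout, so check yours, and make sure no other block already claims default_server on 443):

```nginx
# Hypothetical catch-all: close the connection for hostnames you never configured
server {
    listen 443 ssl default_server;
    server_name _;
    ssl_certificate /config/keys/letsencrypt/fullchain.pem;
    ssl_certificate_key /config/keys/letsencrypt/privkey.pem;
    return 444;   # nginx-specific status: drop the connection without replying
}
```

With this in place, stray subdomains get no response at all instead of exposing whichever app happens to be first in the config.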
  24. Ok, I have almost read through the entire thread, and on page 167 I found the missing parameter to insert the extra domain names! LOL I now have 3 domains added and getting certificates! Domain_1 --> Nextcloud (OK) Domain_2 --> Ombi (not working) sub-domain.Domain_2 (OK) sub-domain.Domain_3 (OK) But I still can't get the two main domains to co-exist... I know it's in how I add the two servers to the default conf? I have created the two main domains from the Heimdall.subfolder.conf.sample and created: "nextcloud.subfolder.conf" "ombi.subfolder.conf" I just need some help on how to define the servers in the appdata\letsencrypt\nginx\site-confs\defaults (conf). My addition in yellow
  25. So again, copying the sample from Heimdall.subfolder.conf.sample and creating "nextcloud.subfolder.conf". Then adding the two servers to the appdata\letsencrypt\nginx\site-confs\defaults conf (removing the two lines for the htpasswd in the example below): # auth_basic "Restricted"; # auth_basic_user_file /config/nginx/.htpasswd; Then of course updating the Nextcloud PHP configuration to the domain and not the sub.domain. I have been reading your old posts today :-) Did I forget something? Would sub.domains still work? bitwarden.domain_2 Or would I need to define them as servers also? Update: Adding a domain should be like this, right? I thought I had made some A record wrong, but if I just enter one domain it works; if I add more domains I get this error: ERROR: Cert does not exist! Please see the validation error above. The issue may be due to incorrect dns or port forwarding settings. Please fix your settings and recreate the container On page 167 I found a note about creating this extra field for more domains? But it talks about subdomains? Would I be able to do as shown below?
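For the record, the container variables that matter for multiple top-level domains (the domain names below are the placeholders from this post; the split between URL / SUBDOMAINS / EXTRA_DOMAINS is how I understand the template, so double-check against the container's documentation):

```
URL=domain_1
SUBDOMAINS=www,nextcloud
EXTRA_DOMAINS=domain_2,sub-domain.domain_2,sub-domain.domain_3
```

Every name listed must already resolve (A or CNAME record) to your WAN IP with port 80/443 forwarded, or validation fails with the "Cert does not exist!" error quoted above.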