
casperse

Members
  • Content Count

    322
  • Joined

  • Last visited

Community Reputation

6 Neutral

About casperse

  • Rank
    Advanced Member


  1. Not saying it's the same problem, but I also couldn't start Plex (I run the official Plex docker). I added the log options and then it started again: --runtime=nvidia --log-opt max-size=50m --log-opt max-file=1
     I also have a backup docker installed (the linuxserver Plex docker) that I can start if the other one doesn't start after an update. I recommend this setup: you can point both of them to the same media metadata folder, so it doesn't really take up much storage, and one of them always works :-)
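     A minimal sketch of the full docker run those flags belong to, with placeholder names and paths (in the Unraid GUI the three flags go into the container's "Extra Parameters" field):

        # --runtime=nvidia       -> expose the NVIDIA GPU for hardware transcoding
        # --log-opt max-size=50m -> cap the container's JSON log file at 50 MB
        # --log-opt max-file=1   -> keep a single log file
        docker run -d --name plex \
          --runtime=nvidia \
          --log-opt max-size=50m \
          --log-opt max-file=1 \
          -v /mnt/user/appdata/plex:/config \
          -v /mnt/user/media:/data \
          plexinc/pms-docker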
  2. Yes (always 🙂 ). There is no "EDIT" function to change the path you defined as the download path? And my download disk is an unassigned drive (no spin-up of the other drives).
  3. I have no problem changing this in all the other dockers, but I can't see any way to change it in this one (Unpackerr)? Seems hardcoded.
  4. Hi all. I keep getting an error from Fix Common Problems about my mounted unassigned drive? I can't find a way to change the Unpackerr docker's mount to RW/Slave. Can anyone help me?
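     For anyone searching later: the docker-level equivalent of Unraid's RW/Slave access mode is slave mount propagation on the volume. A sketch, with the container name, image, and path assumed rather than taken from my template:

        # ':rw,slave' = read-write bind mount with slave propagation, so
        # (un)mounts of the unassigned drive on the host propagate into
        # the running container
        docker run -d --name unpackerr \
          -v /mnt/disks/SEED/downloads:/downloads:rw,slave \
          golift/unpackerr

     In the Unraid template the same setting is the "Access Mode" dropdown on the path mapping, when the template exposes the path for editing.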
  5. This indicates that it can't connect to the server, right? (Never seen this before :-) Does anyone see the same? It's just for Sonarr & Radarr. Been like this all morning... all the other dockers are fine.
  6. Argh, I need to stop thinking that these "default" docker inputs are mandatory - everything is configurable! Thanks!
  7. Hi all. So far everything works when I set it up on an external server, but I am trying to move it to my local Unraid server. I also read many posts about the advantages of a separate UAD (unassigned device) download drive, so I did that too; it keeps the array from spinning up every time you download files and move them later.
     Mapping is always what causes everyone problems, or it's access rights :-) and that is probably also the case here; I just can't put my finger on where I screwed up.
     Deluge mappings (unassigned disk) and Radarr use the same path. Deluge moves files correctly from .incomplete to movies, and Radarr also shows the correct path in the UI, but the Radarr log says otherwise. Looking at the permissions, I can't see anything wrong.
     Summary:
     Deluge /data --> /mnt/disks/SEED/downloads/
     Radarr /downloads --> /mnt/disks/SEED/downloads/
     Deluge moves files from .incomplete --> movies (/mnt/disks/SEED/downloads/.incomplete to /mnt/disks/SEED/downloads/movies)
     Radarr moving files to the array... not working
     That should work - shouldn't it? It's driving me nuts... I have tried so many options, including sub-folders instead of the root of the UAD drive; nothing works. The last option is path mappings, but I am running everything locally, so that shouldn't be needed, should it? (localhost doesn't work? I am using the IP for the download clients.) As always, new eyes on the problem and inputs are most welcome!
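     For anyone hitting the same wall: in a setup like the above, the usual culprit is that the two containers use different container-side paths for the same host folder. When Radarr asks Deluge where a finished download lives, Deluge answers with its own container path (/data/movies/...), which does not exist inside the Radarr container. A sketch of the consistent mapping, with container names and images assumed:

        # give BOTH containers the SAME container-side path for the download
        # folder, so paths reported by Deluge resolve inside Radarr too
        docker run -d --name deluge \
          -v /mnt/disks/SEED/downloads:/data \
          linuxserver/deluge

        docker run -d --name radarr \
          -v /mnt/disks/SEED/downloads:/data \
          linuxserver/radarr

     The alternative is Radarr's Settings -> Download Clients -> Remote Path Mappings, translating Deluge's /data to Radarr's /downloads.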
  8. Hi everyone. I have installed Nextcloud and everything is working. I would now like to map shared Unraid drives into Nextcloud, and I think there are some mapping problems.
     Unraid has a great feature where you can copy a share's read settings to other shares (making sure they are the same!). Example: two shared folders with the exact same SMB settings in the Unraid configuration; one works, the other doesn't, and any other folder I try to share also does not work, even though the read rights are copied from the one that works?
     One positive is that there is another option for sharing local files: in the Nextcloud docker you set a path, then map it in Nextcloud, and that works...
     Can anyone give me an idea of what to try next to get SMB working for more than one share? Has anyone got this working with more than one SMB share? Are there any docker commands I can use to see the internal mappings? This doesn't work: docker exec -it name nextcloud config
     Thanks!
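     A pointer on that last question: Nextcloud's admin CLI is occ, not a nextcloud config subcommand. Assuming the official image and a container actually named nextcloud (swap in your own container name), something like:

        # list the external storage mounts Nextcloud knows about
        docker exec -u www-data nextcloud php occ files_external:list

        # dump the files_external app's stored config values
        docker exec -u www-data nextcloud php occ config:list files_external

     (The linuxserver image runs as a different user and wraps occ, so there you would drop the -u www-data and call occ directly.)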
  9. Then you forgot to change the controller in the XML (I had the same problem when I started). Change the hdd bus SATA controller from 0 to 1:

        <disk type='file' device='disk'>
          <driver name='qemu' type='qcow2' cache='writeback'/>
          <source file='/mnt/user/domains/XPEnology_3/vdisk2.img'/>
          <target dev='hdd' bus='sata'/>
          <address type='drive' controller='1' bus='0' target='0' unit='3'/>
        </disk>
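     If you edit outside the Unraid VM GUI's XML view, the standard libvirt route is virsh edit (the domain name here is assumed from the vdisk path):

        # opens the domain XML in $EDITOR and validates it on save
        virsh edit XPEnology_3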
  10. Mounting unassigned devices as SMB shares? Normally I don't have any problems with UAD shares, and I can mount any internal Unraid shared folder in Nextcloud, but for some reason I can't get UAD shares working with Nextcloud? Could this be related to the SMB v1 vs v2 thing? Or is it because it's a shared drive and not a folder share? If I browse \\192.168.0.6\ I get all shares listed (except the UAD drive shares), but typing the share name directly works: \\192.168.0.6\domains_ssd\ Or is this just not possible because it's not a folder share but a whole-drive share? Example:
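      A quick way to separate "not announced" from "not accessible", from any Linux box (the user name is a placeholder):

         # list the shares the server announces
         smbclient -L //192.168.0.6 -U someuser

         # try opening the UAD share directly, even if it isn't announced
         smbclient //192.168.0.6/domains_ssd -U someuser -c 'ls'

      If the direct connection works while the share is missing from the listing, it's a browse/announce problem rather than an access-rights problem.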
  11. @Squid sorry, I was just trying to point out how important autostart of the VM is, since it is now the router to my ISP for this server. I can see the log file is useless, sorry. Attached my diagnostics. Thanks diagnostics-20200423-0802.zip
  12. Hi all. I now have a pfSense router as a VM - so if this VM doesn't autostart, there is no internet for the server? The dockers autostart perfectly. The log from the boot:

        Apr 22 19:21:44 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface veth4efa0a4.IPv6 with address fe80::a01b:a4ff:fe7d:884.
        Apr 22 19:21:44 SERVER avahi-daemon[10180]: New relevant interface veth4efa0a4.IPv6 for mDNS.
        Apr 22 19:21:44 SERVER avahi-daemon[10180]: Registering new address record for fe80::a01b:a4ff:fe7d:884 on veth4efa0a5.*.
        Apr 22 19:23:23 SERVER kernel: veth84a7df6: renamed from eth0
        Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
        Apr 22 19:23:23 SERVER avahi-daemon[10180]: Interface vethd967554.IPv6 no longer relevant for mDNS.
        Apr 22 19:23:23 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface vethd967554.IPv6 with address fe80::410:92ff:fe6c:114e.
        Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
        Apr 22 19:23:23 SERVER kernel: device vethd967554 left promiscuous mode
        Apr 22 19:23:23 SERVER kernel: docker0: port 2(vethd967554) entered disabled state
        Apr 22 19:23:23 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::410:92ff:fe6c:115e on vethd967554.
        Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
        Apr 22 19:23:32 SERVER kernel: veth715f555: renamed from eth0
        Apr 22 19:23:32 SERVER avahi-daemon[10180]: Interface veth1d3fcb8.IPv6 no longer relevant for mDNS.
        Apr 22 19:23:32 SERVER avahi-daemon[10180]: Leaving mDNS multicast group on interface veth1d3fcb8.IPv6 with address fe80::5469:97ff:feac:a308.
        Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
        Apr 22 19:23:32 SERVER kernel: device veth1d3fcb8 left promiscuous mode
        Apr 22 19:23:32 SERVER kernel: docker0: port 3(veth1d3fcb8) entered disabled state
        Apr 22 19:23:32 SERVER avahi-daemon[10180]: Withdrawing address record for fe80::5469:97ff:feac:a308 on veth1d3fcb8.
        Apr 22 19:27:28 SERVER kernel: vfio-pci 0000:0a:00.0: enabling device (0000 -> 0003)
        Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0a:00.1: enabling device (0000 -> 0003)
        Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.0: enabling device (0000 -> 0003)
        Apr 22 19:27:29 SERVER kernel: vfio-pci 0000:0b:00.1: enabling device (0000 -> 0003)
        Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
        Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered disabled state
        Apr 22 19:27:31 SERVER kernel: device vnet0 entered promiscuous mode
        Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered blocking state
        Apr 22 19:27:31 SERVER kernel: br0: port 2(vnet0) entered forwarding state
        Apr 22 19:27:32 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet0.IPv6 with address fe80::fc54:ff:fe2c:e872.
        Apr 22 19:27:32 SERVER avahi-daemon[10180]: New relevant interface vnet0.IPv6 for mDNS.
        Apr 22 19:27:32 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc54:ff:fe2c:e873 on vnet0.*.
        Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
        Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered disabled state
        Apr 22 19:27:35 SERVER kernel: device vnet1 entered promiscuous mode
        Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered blocking state
        Apr 22 19:27:35 SERVER kernel: br0: port 3(vnet1) entered forwarding state
        Apr 22 19:27:37 SERVER avahi-daemon[10180]: Joining mDNS multicast group on interface vnet1.IPv6 with address fe80::fc27:ebff:feb8:e5c9.
        Apr 22 19:27:37 SERVER avahi-daemon[10180]: New relevant interface vnet1.IPv6 for mDNS.
        Apr 22 19:27:37 SERVER avahi-daemon[10180]: Registering new address record for fe80::fc28:ebef:feb8:e5c9 on vnet1.*.
        Apr 22 19:28:00 SERVER root: Fix Common Problems Version 2020.04.19

      I have looked on the forum, and the things I found had no effect on this? Running: 6.8.3
      Br Casperse
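      A CLI check that may help while debugging (the domain name pfsense is a placeholder; take the real one from virsh list --all):

         # see whether libvirt has autostart enabled for the domain
         virsh dominfo pfsense | grep -i autostart

         # enable libvirt-level autostart
         virsh autostart pfsense

      Note that Unraid manages VM autostart through its own GUI toggle, so treat this as a diagnostic rather than the fix.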
  13. Not sure the controller replacement will make a big difference, so I'm going with the CPU upgrade... Anyone in this forum who has some experience with this CPU?
  14. So I went back to experimenting with this on Unraid 6.8.3, and I got the following results: DS3615xs works up to DSM_DS3615xs_25423_6.2.3! Performance seems really good, even over LAN. DS3617xs I can only get to DSM_DS3617xs_23739_6.2.0; anything higher and it breaks? If anyone has had better luck, please share 😏
  15. Do you mean these lines?

        # redirect all traffic to https
        server {
            listen 80 default_server;
            listen [::]:80 default_server;
            server_name _;
            return 301 https://$host$request_uri;
        }
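      If you change that block, a safe way to apply it (assuming nginx runs in a container named letsencrypt; adjust the name to your setup):

         # validate the configuration, then reload without dropping connections
         docker exec letsencrypt nginx -t
         docker exec letsencrypt nginx -s reload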