Everything posted by Jorgen

  1. One thing that could lead to some of those symptoms is running out of space in the /downloads directory. For example: I have my downloads go to a disk outside the array, mounted by Unassigned Devices, and somehow the disk got unmounted but the mount point remained. This caused Deluge to write all downloads into a temporary RAM area in unraid, which filled up quickly and caused issues. I never found any logs showing this problem, just stumbled upon it by chance while troubleshooting.
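A quick way to catch that failure mode is to check which filesystem actually backs the download path. This is only a sketch (the `check_download_mount` helper name and the example path are my own, not from the post): if the Unassigned Devices disk has dropped off, the directory falls through to unraid's RAM-backed root filesystem.

```shell
#!/bin/sh
# check_download_mount PATH: report which filesystem backs PATH.
# If the disk behind the mount point is gone, the path falls through
# to the RAM-backed root filesystem (tmpfs/rootfs) and fills up fast.
check_download_mount() {
  fstype=$(findmnt -n -o FSTYPE -T "$1") || return 1
  case "$fstype" in
    tmpfs|rootfs) echo "WARNING: $1 is backed by RAM ($fstype)" ;;
    *)            echo "OK: $1 is on $fstype" ;;
  esac
}

# Example (path is an assumption, adjust to your own mount point):
# check_download_mount /mnt/disks/downloads
```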
  2. Since you're new to unraid, have you looked at Spaceinvaderone's video guides? There are tweaks you can do on the Windows side to get it to work better as an unraid VM. I had similar CPU spiking issues until I tweaked the MSI interrupt settings inside Windows. The hyper-v changes in this thread also helped, of course. I'm not actually sure if the MSI interrupts were covered in this video series; they could also have been in:
  3. 1. Stop the container
     2. Back up the Prowlarr appdata folder
     3. Delete everything in the Prowlarr appdata folder
     4. Start the container
     You can also uninstall the container after step 1 and reinstall it after step 3.
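The backup-and-wipe part of those steps can be sketched as a small shell helper. This is only illustrative: the `reset_appdata` name and the example paths are assumptions, and steps 1 and 4 are simply `docker stop prowlarr` / `docker start prowlarr`.

```shell
#!/bin/sh
# reset_appdata APPDATA BACKUP_TAR: steps 2-3 above.
# Backs the appdata folder up to a tarball, then empties it so the
# container recreates a fresh config on next start.
reset_appdata() {
  appdata=$1
  backup=$2
  tar czf "$backup" -C "$appdata" .      # step 2: back up everything
  # step 3: wipe contents; ${var:?} aborts if the variable is empty,
  # so this can never expand to `rm -rf /*`
  rm -rf "${appdata:?}"/* "${appdata:?}"/.[!.]* 2>/dev/null
  return 0
}

# Example (paths are assumptions):
# docker stop prowlarr
# reset_appdata /mnt/user/appdata/prowlarr /mnt/user/backups/prowlarr.tar.gz
# docker start prowlarr
```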
  4. I had problems with this inotify command. It ran and created the text file, but nothing was ever logged to it. I can only get it to log anything by removing *[!ramdisk] AND pointing it to /mnt/cache/appdata. Just curious if anyone can explain why this is? I have my appdata share defined (cache: prefer setting) as /mnt/cache/appdata for all containers and as the default in docker settings, but I would have thought that /mnt/user/appdata should still work?
  5. Yes, deluge only does torrents, you’ll need something like NZBget for Usenet Sent from my iPhone using Tapatalk
  6. Use localhost instead of docker network IP Sent from my iPhone using Tapatalk
  7. Sounds like the motherboard, but it's definitely not certain. First step would be to hook up a monitor and keyboard directly to the server, power up, and see if it gets past the POST stage. If it doesn't, you'll need to dig into the beep codes to identify which component is faulty. This will be your first hurdle: I have the same mobo and it doesn't have a built-in speaker, so you'll need to rig something up yourself...
  8. Yeah, that should work. Looks like other Ubiquiti products auto-renew the self-signed cert on boot if it's within a certain number of days from expiry. Not sure if unifi does the same? Sent from my iPhone using Tapatalk
  9. Ah ok. The controller already ships with a self-signed cert; you should be able to extract it from /config/data/keystore or even download it from the controller web page using the browser's "inspect cert" functions. I assume Safari has those somewhere. Unless you need it for your own domain name, in which case you'll need to create it with pfSense and import it into the keystore as per above. Sent from my iPhone using Tapatalk
  10. Depends on your situation. To start with you need your own domain, pointing to your unifi controller IP. This guide will walk you through creating a new cert specifically for your unifi domain/sub-domain: I think you need to register for the unifi forum to access it. It also has info on how the default keystore works. For this docker the files are in /config/data (which is also mapped to your appdata share). You need to create a new keystore using the "unifi" alias and the default password "aircontrolenterprise". All commands can be run from the docker console. If you already have an existing wildcard cert for your domain you should be able to import it. You'll need to turn it into a PKCS12, then convert that to a keystore that unifi will accept. Something like this if you have a private key and signed cert: Caveat: I never got it to work for me. My controller is only available on my LAN, I don't have an existing wildcard cert for my domain and I didn't want to pay for one, and using the free certs from Let's Encrypt required a public IP + a refresh every 90 days, which seems complicated for this use case. So I put it in the too-hard basket.
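As a rough sketch of that private-key-plus-cert route (the `make_unifi_p12` helper and file names are my own; the `unifi` alias and `aircontrolenterprise` password are the defaults mentioned above): first bundle the pair into a PKCS12, then import it over the keystore from the docker console.

```shell
#!/bin/sh
# make_unifi_p12 KEY CERT OUT_P12: bundle an existing private key and
# signed cert into a PKCS12 using the alias and password the unifi
# keystore expects.
make_unifi_p12() {
  openssl pkcs12 -export -inkey "$1" -in "$2" \
    -name unifi -password pass:aircontrolenterprise \
    -out "$3"
}

# Then, from the docker console, import it over the default keystore
# (keystore path per the post; the .p12 location is an assumption):
# keytool -importkeystore \
#   -srckeystore /config/data/unifi.p12 -srcstoretype pkcs12 \
#   -srcstorepass aircontrolenterprise \
#   -destkeystore /config/data/keystore -deststorepass aircontrolenterprise \
#   -srcalias unifi -noprompt
```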
  11. Since the topic of mismatched docker path mappings comes up quite often with Radarr/Sonarr and download clients, maybe this diagram helps to visualize the three levels of folder config and how they interact? It's important to realize that the application running inside the docker container knows nothing about the Unraid shares. It can ONLY access folders you have specifically added as a container path in the docker config.
  12. Q25 here: Sent from my iPhone using Tapatalk
  13. Nice solution @TurboStreetCar and thanks for sharing! Sent from my iPhone using Tapatalk
  14. Oh ok, that file is in the docker image and needs to be patched with your changes every time you update the container. I was thinking of scripting the change via "extra parameters", but after some research it appears that is not available. See this thread for background and a potential workaround using user scripts: /topic/58700-passing-commandsargs-to-docker-containers-request/?do=findComment&comment=670979 The deluge daemon needs to be started with the --logrotate option for it to work, and it's started by one of binhex's scripts that is part of the image. So you're in the same situation as with your log modifications: either binhex updates the image to support logrotate, or you need to patch that script yourself. For persistent logs, I think logrotate would be the better option, but there are other ways. Here are some random thoughts, in no particular order of suitability or ease of implementation:
      - a user script parses the logs on a schedule and writes the required data into a persistent file outside the container
      - a user script simply copies the whole log file into persistent storage (you'll end up with lots of duplication though)
      - write your own deluge plug-in to export the data to a persistent file
      - identify another trigger to script your own log file, e.g. are the torrents added by Radarr, which might have better script support?
      Sent from my iPhone using Tapatalk
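The second idea above (copy the whole log file into persistent storage) could be sketched as a scheduled User Scripts job. The `backup_deluge_log` name and both paths are assumptions for illustration, not from the post:

```shell
#!/bin/sh
# backup_deluge_log SRC DEST_DIR: copy the current deluge log out of
# the container's appdata into persistent storage with a timestamp,
# so the data survives container rebuilds.
backup_deluge_log() {
  src=$1
  dest_dir=$2
  mkdir -p "$dest_dir"
  cp "$src" "$dest_dir/deluged-$(date +%Y%m%d-%H%M%S).log"
}

# Example scheduled invocation (paths are assumptions):
# backup_deluge_log /mnt/user/appdata/binhex-delugevpn/deluged.log \
#                   /mnt/user/logs/deluge
```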
  15. --sysctl="net.ipv4.conf.all.src_valid_mark=1" is only needed for WireGuard. Since you're using OVPN, change this back to the default: --cap-add=NET_ADMIN Can you double-check the LAN_NETWORK value, specifically the mask, using Q4 here: I actually think it's correct, but you might be using another mask on your network. While you're on that page, check if any other FAQs match your case, if you haven't done that already. Your log still looks like a successful start of the VPN tunnel and the applications, but it's hard to tell as it's not a complete log. If the changes above don't fix the problem, can you upload complete debug logs please? Remember to redact any usernames and passwords from the logs before uploading here; they could be in multiple places.
  16. If you describe what you did to configure this, we'll have a better chance of answering that question. Generally, anything you add to a running container, via the container CLI for example, is purged when you rebuild the container. But there are ways to pass in extra commands on start of a new container. So it depends on how you are making those changes to the logging function. Looks like what you want is to start the deluge daemon with the option --logrotate, see below. This is not currently possible as far as I know, but binhex COULD add another environment variable to control this (or just turn it on by default for everyone), similar to how we can already control the log level for the daemon. You might have to bribe him with some beer money though...
  17. Don't port-forward on your router; it has no effect on the VPN tunnel, it just adds security risks. The logs show a successful start, so the VPN tunnel should be up. The symptoms you describe definitely sound like a mismatch between the LAN_NETWORK range on the container and the computer you're accessing it from. What's the IP of your PC and the unraid server?
  18. While the first one sounds like a good idea, you need to request that from the app developers, not the container dev. So add your feature request here: For the second one, you can do this already with a custom filter
  19. Wow, that is an awesome plug-in, not sure how I’ve managed to miss it! Thanks @SimonF Sent from my iPhone using Tapatalk
  20. BTW, my reply only applies to this part; I have no idea what happens if the VM boots without the device present and you then add it later. But I assume that the plugins Squid mentions would solve that part. I've raised a feature request to have the optional startup policy added to the VM form view:
  21. Currently, unraid requires all USB devices added to a VM to be present on startup of that VM, or the startup fails. Libvirt actually supports adding USB devices as optional for VM startup using the hostdev startupPolicy, see details below. But this can only be done in the XML view. It would be great if we could have another checkbox in the form view to specify that we want a specific USB device to be included with startupPolicy = optional. This would reduce the need to edit the XML directly, which always risks breaking the VM and deters many users. Something like this perhaps: Libvirt supports adding USB devices as optional for VM startup, using the hostdev source startupPolicy = optional. For example:
      <hostdev mode='subsystem' type='usb'>
        <source startupPolicy='optional'>
          <vendor id='0x1234'/>
          <product id='0xbeef'/>
        </source>
      </hostdev>
      It seems to work well in my limited testing, after some initial unrelated problems, see:
  22. Or: /topic/71159-auto-vm-start-interrupted-by-missing-usb-device/?do=findComment&comment=654162 Sent from my iPhone using Tapatalk
  23. Yes, add another variable to the DelugeVPN docker:
      Type = Path
      Name = completed
      Container path = /completed
      Host path = /volume1/videos
      Access mode = Read/Write
      Open the Deluge web UI, go to Preferences > Downloads, enable "Move completed to" and type in /completed.
      Important: if you use any other integrated docker apps like Radarr, Sonarr etc., you'll need to add the same path variable to those dockers with the exact same name and capitalization. This is because when Deluge has completed a download and moved it to the new destination, it will tell Radarr that the file is available at /completed/ If you don't add the matching path variable to the Radarr docker, Radarr has no way to access /completed and the import will fail. This is all working on unraid; I assume Synology works the same way...
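Why the mapping names must match can be shown with a tiny path-translation helper (`to_container_path` is purely illustrative, not anything docker provides): every container that defines the mapping sees the host folder /volume1/videos as /completed, so a path Deluge hands to Radarr only resolves if Radarr has the identical mapping.

```shell
#!/bin/sh
# to_container_path HOST_PATH: mimic what the Path mapping above does.
# Inside any container with "/volume1/videos -> /completed", a host
# file appears under /completed; in a container without the mapping,
# that path simply doesn't exist.
to_container_path() {
  case "$1" in
    /volume1/videos*) echo "/completed${1#/volume1/videos}" ;;
    *)                echo "$1" ;;   # no mapping: path unchanged
  esac
}

# Example: Deluge reports a finished download to Radarr as
# to_container_path /volume1/videos/movie.mkv   -> /completed/movie.mkv
```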
  24. I think you need to bypass the proxy for access to devices on your local LAN, otherwise the requests are proxied through the VPN tunnel which doesn't have access to your local devices. Add your local LAN network range (e.g. to the Ignore Hosts field in the Ubuntu Proxy manager.
  25. Ha ha, to be fair it’s a very long FAQ! And you’re venturing into the most complicated scenario with your setup Sent from my iPhone using Tapatalk