Rick_Sanchez

Everything posted by Rick_Sanchez

  1. Could you post a working configuration file for gluetun? Preferably one using a WireGuard configuration.
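While waiting on that, here is a rough, hedged sketch of what a gluetun WireGuard setup generally looks like as a docker run. The provider, key, addresses, and country below are placeholders; the exact variables you need depend on your VPN provider (the gluetun wiki lists them per provider):

    # Sketch only: gluetun with a WireGuard provider config (all values are placeholders)
    docker run -d --name=gluetun \
      --cap-add=NET_ADMIN \
      --device /dev/net/tun \
      -e VPN_SERVICE_PROVIDER=mullvad \
      -e VPN_TYPE=wireguard \
      -e WIREGUARD_PRIVATE_KEY=<your-private-key> \
      -e WIREGUARD_ADDRESSES=10.64.222.21/32 \
      -e SERVER_COUNTRIES=Netherlands \
      qmcgaw/gluetun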
  2. I've got this working when using the path /dev/bus/usb/001/002. But after creating a symlink (in case the bus and device numbers change after a restart), the print jobs are not sent. The symlink is /dev/printer1. Any ideas on how to fix this?
  3. How did you all keep your USB device path static in the container? I think it resets after a reboot. I am unable to find this printer in /dev/.
  4. This is how I keep mine persistent:
     Part 1:
     # lsusb --> find the device's vendor:product ID
     # udevadm info -a -n /dev/ttyUSB0 | grep '{serial}' | head -n1 --> where ttyUSB0 is the device you are looking for
     # OR: udevadm info -a -n /dev/bus/usb/000/000 | grep '{serial}' | head -n1 --> if you can't find the USB device, search for it on the bus
     # nano /etc/udev/rules.d/99-usb-rules.rules --> make your own rules file
     Part 2:
     # cp /etc/udev/rules.d/99-usb-rules.rules /boot/config/rules.d/99-usb-rules.rules --> copy your rules to the boot config
     # nano /boot/config/go --> add and save: cp /boot/config/rules.d/99-usb-rules.rules /etc/udev/rules.d/99-usb-rules.rules
     # chmod 644 /etc/udev/rules.d/99-usb-rules.rules
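For completeness, here is a hedged sketch of what the rule file itself might contain. The vendor/product IDs, serial, and symlink name are placeholders you would swap for the values found with lsusb and udevadm:

    # Example rule for /etc/udev/rules.d/99-usb-rules.rules (IDs, serial, and symlink name are placeholders):
    SUBSYSTEM=="tty", ATTRS{idVendor}=="0403", ATTRS{idProduct}=="6001", ATTRS{serial}=="A12345B", SYMLINK+="ttyUSB-printer"
    # Then reload and re-trigger udev so the symlink shows up without a reboot:
    udevadm control --reload-rules && udevadm trigger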
  5. Why not docker Duplicacy? Can you expand on what you mean about it not supporting local storage? In the docker container you should map the internal path to the share so it can "see" the share to back it up. I've got Borg + Vorta running and it's a pretty nice setup. I was able to set up Borg with a simple config file, and it's been rock solid since. I'm not much of a CLI user myself, but this one is user friendly. You can also get a free 20GB Borgmatic repo to practice uploading to. This would be my first recommendation just because of how powerful the deduplication is. Duplicacy is my secondary that I'm playing with, backing up to the cloud. The GUI was semi-intuitive but I had to do some research on how to use it. rsync is an easy CLI option that you can set up with a userscript, but I'm not sure about versioning; it seems like a straight copy (see the sketch below). Duplicati can burn in a dumpster fire.
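Since the rsync option above is only mentioned in passing, here is a hedged sketch of what such a User Scripts job might look like. The share paths are placeholders, and as noted it is a straight mirror with no version history:

    #!/bin/bash
    # Sketch of a simple Unraid User Scripts backup job (paths are placeholders).
    # -a preserves permissions and timestamps, --delete mirrors deletions,
    # so this is a plain copy with no versioning.
    rsync -a --delete /mnt/user/appdata/   /mnt/user/backups/appdata/
    rsync -a --delete /mnt/user/documents/ /mnt/user/backups/documents/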
  6. Can anyone muster up a step-by-step install guide for this? And then maybe pin it to the thread?
  7. That's a great question that maybe one of the devs can answer. It would depend on whether /etc/udev/ is overwritten during the update process.
  8. Has anyone found a solution for running Home Assistant Supervised in a Docker container? I've attempted the VM method, but this always seems to cause mounting issues for my Zigbee and Z-Wave sticks.
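For comparison, this is roughly how a plain (non-Supervised) Home Assistant container gets the sticks passed through. The device paths and appdata location are placeholders, and stable udev symlinks like the ones discussed above are preferable to raw bus paths:

    # Sketch only: pass the Zigbee/Z-Wave sticks straight into a Home Assistant container
    docker run -d --name=homeassistant \
      --network=host \
      -v /mnt/user/appdata/homeassistant:/config \
      --device /dev/ttyUSB-zigbee:/dev/ttyUSB0 \
      --device /dev/ttyUSB-zwave:/dev/ttyUSB1 \
      ghcr.io/home-assistant/home-assistant:stable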
  9. *** [ DIAGNOSING ]: Networking
     [✓] IPv4 address(es) bound to the eth0 interface: 192.168.20.100/24 does not match the IP found in /etc/pihole/setupVars.conf (https://discourse.pi-hole.net/t/use-ipv6-ula-addresses-for-pi-hole/2127)
     [✗] No IPv6 address(es) found on the eth0 interface.
     *** [ DIAGNOSING ]: Name resolution (IPv4) using a random blocked domain and a known ad-serving domain
     [✗] Failed to resolve kerebro.com via localhost (127.0.0.1)
     [✗] Failed to resolve kerebro.com via Pi-hole (192.168.20.100)
     [✓] doubleclick.com is 216.58.195.14 via a remote, public DNS server (8.8.8.8)
     *** [ DIAGNOSING ]: Discovering active DHCP servers (takes 10 seconds)
     /opt/pihole/piholeDebug.sh: line 1228: 27046 Killed pihole-FTL dhcp-discover
     *** [ DIAGNOSING ]: Pi-hole processes
     [✗] lighttpd daemon is inactive
     [✗] pihole-FTL daemon is inactive
     I'm curious if anyone has run into these issues recently and how to solve them.
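If it helps anyone hitting the same two inactive daemons at the bottom of that output, a hedged first step is to check the container itself (the container name "pihole" below is an assumption; adjust to yours):

    docker logs --tail 50 pihole              # look for startup errors from lighttpd / pihole-FTL
    docker exec -it pihole pihole status      # report DNS/blocking service status
    docker exec -it pihole pihole restartdns  # restart pihole-FTL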
  10. The indexer is your news host. Sabnzbd is the download client. The URL base is the IP address where you are hosting it.
  11. NZB just seems to “work” and has great speeds. I’d probably stick with them and use Deluge for manual grabs or as a backup, but I’ve turned mine off to make life easier. You might be able to join a private indexer for niche stuff!
  12. Nzbplanet is pretty good. If you can afford it, a lifetime pass is better than month-to-month.
  13. I’d recommend that setup. Give it a try and let us know what you think. Make sure to turn off deluge in sonarr after you set it up to test it out!
  14. I think our options are to try the Auto Remove Plus plugin, or you can venture into the world of NZBs (much, much faster, and it does in fact auto-delete). Otherwise you may have to do the ol' manual method or find a script to fill the gap.
  15. What are your volume mappings? Does the container path match between sonarr and deluge? Are you using a socks5 proxy? And is your sonarr spitting out any errors?
  16. Here is one I like that has really good reviews: APC 1500VA UPS Battery Backup. It should give enough power in an outage for a safe shutdown of at least a few devices.
  17. That's good to hear. I think you had the torrents seeding for a long time, or for a large ratio, so they would just sit in Deluge. You could toy with the values and try setting them to 0.1 while testing to see if that's what's causing them to sit in Deluge.
     Here is the hail-mary. This may not be the most "secure" setup, but you can try running the Sonarr / Radarr dockers with "Privileged" turned on. Some people get really upset about turning this option on, but hey, if we're going to try everything we may as well give that a shot too. Alternatively, try Deluge in "Privileged" mode as well. Let me know if this sticks.
     Also, there is an addon for Deluge called "Auto Remove Plus" that you could try to install, but that might get hairy.
     Another thought: make sure your paths match in both Sonarr and Deluge for downloads. And I think we already reviewed having the proper label applied for Sonarr. Did you right-click the label in Deluge and set the path (i.e. Downloads/Complete/tv)?
     Have you looked into NZBs? They are fast and "just work." Might be something else to consider, although it comes with more costs to sign up for those services.
  18. How about the Tools -> Docker Safe New Permissions option? Maybe this will fix a permission issue that might be stopping us from importing. The other settings SHOULD work for importing... but use what works for now. Who are the user:owners of the downloads folder? (Open a terminal, cd to the parent folder, then run "ls -al".)
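A hedged example of that check, plus the usual Unraid-style fix if the listing looks wrong. The share path is a placeholder, and nobody:users is just the typical Unraid default ownership:

    ls -al /mnt/user/downloads                  # path is a placeholder; use your downloads share
    # Only if ownership looks wrong; nobody:users is the usual Unraid default
    chown -R nobody:users /mnt/user/downloads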
  19. It sounds like this issue is popping up for some folks. You may want to check the dev's GitHub for more answers, or they may need to create a hotfix.
  20. Are you using a proxy by chance to access Deluge? Did you download, install, and enable the WebAPI plugin / egg file? Also, you may not need the WebUI checkbox checked in order to get it into Organizr. Alternatively, you can install Heimdall - it was pretty quick and easy to set up.
  21. Alright, let's try to change one thing at a time, test it, and see if it works.
     Deluge:
     Downloads: make sure the folder structure is downloads/incomplete, and "move completed to" downloads/complete
     Queue: Share Ratio 0; Time Ratio 0; Time 1
     Queue: Share Ratio Reached 0; pause torrent
     Restart and test if it's working.
     If not, then try Sonarr:
     Permissions: uncheck the set permissions option (it works for me without having to change permissions)
     If none of those work, let's go back to the drawing board.
  22. Hey, a watch folder is not a bad idea (although I don't use one). The simplest way may be to add a folder in your downloads directory. I.e. let's say you have /download/complete and /download/incomplete. Create a /download/watch folder, enable the watch-folder functionality in Deluge, and add the '/download/watch' folder path to auto-add .torrent files. Now move your .torrent files to this folder, and Deluge should add them to the download queue. A quick sketch of the host side is below.
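The host side of that, as a hedged sketch; it assumes the downloads share lives at /mnt/user/downloads on Unraid and is mapped to /download inside the Deluge container:

    # Create the watch folder inside the existing downloads share (paths are assumptions)
    mkdir -p /mnt/user/downloads/watch
    # In Deluge, point the watch-folder path at the container-side path, e.g. /download/watch,
    # then drop .torrent files into the host-side folder:
    mv ~/some-release.torrent /mnt/user/downloads/watch/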
  23. Hey for posterity - did you also enter http://? i.e. http://deluge.ip.address:delugeport
  24. It would be easier to just WireGuard into your home network / Unraid machine and access Deluge. Would this option work for you? You can set up remote tunneled access this way.
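For anyone curious what the client side of that looks like, here is a minimal sketch of a peer config for reaching the home LAN. Every key, address, and port is a placeholder, and Unraid's built-in WireGuard support can generate the real thing for you:

    # Sketch of a WireGuard client config (e.g. wg-home.conf); keys, addresses, and port are placeholders
    [Interface]
    PrivateKey = <client-private-key>
    Address = 10.253.0.2/32

    [Peer]
    PublicKey = <server-public-key>
    Endpoint = your-home-wan-or-ddns-name:51820
    # Route just the home LAN through the tunnel
    AllowedIPs = 192.168.20.0/24
    PersistentKeepalive = 25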
  25. If you are leaning towards an Unraid issue, I would check your Unraid logs and post them if anything seems out of place. I'm curious whether any of your drives are having issues or errors.