Rick_Sanchez

Everything posted by Rick_Sanchez

  1. I prefer Visual Studio Code.
  2. This solved a lot of headaches for me. I'm not sure why the device name isn't resolving, but entering the IP address fixed my issue.
  3. I am running into a permissions error with Docker containers on Unraid accessing a shared NFS folder on a Synology NAS. The architecture I currently have is:
     Synology -> user 'docker', UID 1038, GID 100
     Synology -> shared folder 'media' -> allow user 'docker' read/write permissions
     Synology -> shared folder 'media' -> NFS permissions -> allow IP address of Unraid, squash: no mapping, enable asynchronous
     Unraid -> NFS shares -> share mounted and accessible via CLI in /mnt/remotes/share
     Unraid -> Docker container
     I've tried UID 99 / GID 100 and UID 1038 / GID 100, but the Arr apps are still getting permissions issues. Does the UID on Unraid need to match the UID on Synology while using NFS?
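     For comparison, this is roughly how I've been checking the mount and launching the container; it's a sketch, and the container name, image and paths are just examples from a linuxserver-style setup, not a known-good config:
        # See which numeric UID:GID the NFS mount actually presents to Unraid
        ls -ln /mnt/remotes/share

        # Run the Arr container with PUID/PGID matching what the mount shows
        # (1038:100 here, i.e. the Synology 'docker' user; swap in your own values)
        docker run -d --name=sonarr \
          -e PUID=1038 -e PGID=100 \
          -v /mnt/remotes/share:/media \
          lscr.io/linuxserver/sonarr:latest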
  4. Could anyone share their working changedetection.io + browserless config? I'm getting error 200 when searching for a product restock with the Chrome browser, and I'm curious whether this is an issue with my template or with the website I'm trying to track.
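     For anyone comparing notes, this is the rough shape of what I'm running now (and which isn't working for that site); the IP, port mapping and appdata path are examples, so please don't treat it as a known-good config:
        # Browserless Chrome instance that changedetection.io will drive
        docker run -d --name=browserless-chrome -p 3000:3000 browserless/chrome

        # changedetection.io pointed at browserless via PLAYWRIGHT_DRIVER_URL
        docker run -d --name=changedetection \
          -p 5000:5000 \
          -e PLAYWRIGHT_DRIVER_URL="ws://192.168.1.10:3000" \
          -v /mnt/user/appdata/changedetection:/datastore \
          dgtlmoon/changedetection.io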
  5. Has anyone found a way to run Gravity Sync alongside Pi-hole in a Docker image?
  6. I'm trying to build from scratch with:
     HDD: 3x 20TB, 2x 14TB
     SSD: 4TB, 2TB, 1TB
     NVMe: two slots available
     What is the best way to configure this to have:
     1) a media-only pool (don't care if this is not backed up or on parity; size ~20TB)
     2) a personal file pool (want this backed up with high fault tolerance to protect data, but infrequently accessed; size ~5TB)
     3) a photography pool (backed up; fast read access from a networked PC; ~6TB)
     4) a cache pool for Docker and Plex, either combined or separate (fastest; ~??? TB required?)
     I have the option to purchase more hard drives to make the layout work from the get-go. I'm looking for help deciding how to mix/match drives, and ZFS vs. Btrfs vs. XFS. I am building a PC for this purpose and will try to have 10GbE and USB4 capability so I can add a DAS or something like that down the road if needed. Thank you for your help!!
  7. I'm still having a slight issue with this. I've applied the changes above, but something continues to change my Pictures SMB folder from 0777 to 0770. Is there some way to create a new user that can't modify folder permissions?
  8. Could you post a working configuration file for gluetun? Preferably one using a WireGuard configuration?
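     In case it helps frame an answer, this is roughly the shape I've been trying; the provider, private key and addresses below are placeholders taken from my provider's WireGuard .conf, not known-good values:
        docker run -d --name=gluetun \
          --cap-add=NET_ADMIN \
          --device /dev/net/tun \
          -e VPN_SERVICE_PROVIDER=mullvad \
          -e VPN_TYPE=wireguard \
          -e WIREGUARD_PRIVATE_KEY="<private key from the provider's .conf>" \
          -e WIREGUARD_ADDRESSES="10.64.0.2/32" \
          qmcgaw/gluetun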
  9. I've got this working when using the path /dev/bus/usb/001/002. But after creating a symlink (in case the bus and device numbers change after a restart), the print jobs are not sent. The symlink is /dev/printer1. Any ideas on how to fix this?
  10. How did you all keep your USB device path static in the container? I think it resets after a reboot. I am unable to find this printer in /dev/.
  11. This is how I keep mine persistent:
      Part 1:
      # lsusb
        (to find the vendor:product idProduct)
      # udevadm info -a -n /dev/ttyUSB0 | grep '{serial}' | head -n1
        (where ttyUSB0 is the device you are looking for; if you can't find the USB device, search the bus instead with udevadm info -a -n /dev/bus/usb/000/000 | grep '{serial}' | head -n1)
      # nano /etc/udev/rules.d/99-usb-rules.rules
        (make your own rules file)
      Part 2:
      # cp /etc/udev/rules.d/99-usb-rules.rules /boot/config/rules.d/99-usb-rules.rules
        (copy your rules to the boot config)
      # nano /boot/config/go
        (add and save: cp /boot/config/rules.d/99-usb-rules.rules /etc/udev/rules.d/99-usb-rules.rules)
      # chmod 644 /etc/udev/rules.d/99-usb-rules.rules
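      For reference, the entry inside that rules file could look something like this; the idVendor/idProduct/serial values are placeholders, so use whatever lsusb and udevadm reported for your printer and pick your own symlink name:
         # /etc/udev/rules.d/99-usb-rules.rules -- example entry, values are placeholders
         SUBSYSTEM=="usb", ATTRS{idVendor}=="04f9", ATTRS{idProduct}=="0042", ATTRS{serial}=="A1B2C3", SYMLINK+="printer1", MODE="0666"

         # Reload the rules and re-trigger so the symlink appears without a reboot
         udevadm control --reload-rules && udevadm trigger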
  12. Why not Docker Duplicacy? Can you expand on what you mean about it not supporting local storage? In the Docker container you should map the internal path to the share so it can "see" the share to back it up.
      I've got Borg + Vorta running and it's a pretty nice setup. I was able to set up Borg with a simple config file, and it's been rock solid since. I'm not much of a CLI user myself, but this one is user friendly. You can get a free 20GB Borgmatic repo to practice uploading to as well. This would be my first recommendation just because of how powerful the deduplication is.
      Duplicacy is my secondary that I'm playing with, backing up to a cloud. The GUI was semi-intuitive, but I had to do some research on how to use it.
      rsync is easy CLI that you can set up with a userscript, but I'm not sure about versioning; this seems like a straight copy (see the sketch below).
      Duplicati can burn in a dumpster fire.
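      Since versioning came up for rsync: one rough way to get it is hard-linked snapshots with --link-dest. This is only a sketch, and the paths are examples rather than anything from my actual setup:
         #!/bin/bash
         # Versioned rsync backup using hard links: unchanged files are linked,
         # so each dated folder looks like a full copy but costs little extra space.
         SRC="/mnt/user/photos/"
         DEST="/mnt/remotes/backup/photos"
         TODAY=$(date +%Y-%m-%d)

         rsync -a --delete --link-dest="$DEST/latest" "$SRC" "$DEST/$TODAY"

         # Point "latest" at the newest snapshot so the next run links against it
         ln -sfn "$DEST/$TODAY" "$DEST/latest"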
  13. Can anyone put together a step-by-step install guide for this? And then maybe pin it to the thread?
  14. That's a great question that maybe one of the devs can answer. It would depend on whether /etc/udev/ is overwritten during the update process.
  15. Has anyone found a solution for running Home Assistant Supervised in a Docker container? I've attempted the VM method, but it always seems to cause mounting issues for my Zigbee and Z-Wave sticks.
  16. *** [ DIAGNOSING ]: Networking
      [✓] IPv4 address(es) bound to the eth0 interface:
          192.168.20.100/24 does not match the IP found in /etc/pihole/setupVars.conf (https://discourse.pi-hole.net/t/use-ipv6-ula-addresses-for-pi-hole/2127)
      [✗] No IPv6 address(es) found on the eth0 interface.
      *** [ DIAGNOSING ]: Name resolution (IPv4) using a random blocked domain and a known ad-serving domain
      [✗] Failed to resolve kerebro.com via localhost (127.0.0.1)
      [✗] Failed to resolve kerebro.com via Pi-hole (192.168.20.100)
      [✓] doubleclick.com is 216.58.195.14 via a remote, public DNS server (8.8.8.8)
      *** [ DIAGNOSING ]: Discovering active DHCP servers (takes 10 seconds)
      /opt/pihole/piholeDebug.sh: line 1228: 27046 Killed pihole-FTL dhcp-discover
      *** [ DIAGNOSING ]: Pi-hole processes
      [✗] lighttpd daemon is inactive
      [✗] pihole-FTL daemon is inactive
      I'm curious if anyone has run into these issues recently and how to solve them.
  17. The indexer is your news host. SABnzbd is the download client. The URL base is the IP address of where you are hosting it.
  18. NZB just seems to "work" and has great speeds. I'd probably stick with them and use Deluge for manual grabs or as a backup, but I've turned mine off to make life easier. You might be able to join a private indexer for niche stuff!
  19. NZBPlanet is pretty good. If you can afford it, a lifetime pass is better than month-to-month.
  20. I'd recommend that setup. Give it a try and let us know what you think. Make sure to turn off Deluge in Sonarr after you set it up to test it out!
  21. I think our options are to try the Auto Remove Plus plugin, or you can venture into the world of NZBs (much, much faster, and it does in fact auto-delete). Otherwise you may have to do the ol' manual method or find a script to fill the gap.
  22. What are your volume mappings? Does the container path match between Sonarr and Deluge? Are you using a SOCKS5 proxy? And is Sonarr spitting out any errors?
  23. Here is one I like that has really good reviews: APC 1500VA UPS Battery Backup. It should give enough power in an outage for a safe shutdown of at least a few devices.
  24. That's good to hear. I think you had the torrents seeding for a long time, or to a large ratio, so they would just sit in Deluge. You could toy with those values and try setting them to 0.1 while testing to see if that's what is causing them to sit in Deluge.
      Here is the hail-mary. This may not be the most "secure" setup, but you can try running the Sonarr / Radarr dockers with "Privileged" turned on. Again, some people get really upset about turning this option on, but hey, if we're going to try everything, we may as well give that a shot too. Alternatively, try Deluge in "Privileged" mode as well. Let me know if this sticks.
      Also, there is an add-on for Deluge called "Auto Remove Plus" that you could try to install, but that might get hairy.
      Another thought - make sure your paths match in both Sonarr and Deluge for downloads (see the example mappings below). And I think we already reviewed having the proper label applied for Sonarr. Did you right-click on the label in Deluge and set the path (i.e. Downloads/Complete/tv)?
      Have you looked into NZBs? They are fast and "just work." Might be something else to consider, although this comes with more costs to sign up for their services.
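      Since mismatched paths are such a common cause of failed imports, here's an example of what "matching" means; the host path and images are just examples, the point is that both containers see the exact same container-side path:
         # Deluge: downloads land under /downloads inside the container
         docker run -d --name=deluge \
           -v /mnt/user/downloads:/downloads \
           lscr.io/linuxserver/deluge:latest

         # Sonarr: mount the same host folder at the same container path,
         # so the path Deluge reports back is one Sonarr can actually open
         docker run -d --name=sonarr \
           -v /mnt/user/downloads:/downloads \
           lscr.io/linuxserver/sonarr:latest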
  25. How about the Tools -> Docker Safe New Permissions option? Maybe that will fix a permissions issue that might be stopping us from importing. The other settings SHOULD work for importing... but use what works for now. Who are the user:owners of the downloads folder? (Terminal, then cd to the parent folder, then "ls -al".)
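      For anyone checking this later, this is roughly what I'd look for, assuming the usual Unraid convention of nobody:users (99:100); your path and owners may differ:
         # Show the owners of the downloads folder and its contents
         ls -al /mnt/user/downloads

         # If the files belong to some other user, hand them back to nobody:users
         chown -R nobody:users /mnt/user/downloads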