Rick_Sanchez


Posts posted by Rick_Sanchez

  1. 2 hours ago, Greygoose said:

     

    Hi sdub,

     

    Thanks for the reply, mate.

     

    Yes, I edited using Notepad++ in Windows 11. I have done this with other files and it's been OK, but it certainly could be the issue.

     

    EDIT: How do you edit the YAML files, i.e. what software?

    I prefer Visual Studio Code. One thing worth checking with Notepad++ edits is Windows line endings; a quick check is below.
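
    A minimal sketch of that check; config.yml is a placeholder for your actual file. Notepad++ on Windows can save CRLF line endings, which some YAML parsers on Linux choke on.

        file config.yml
        # "ASCII text, with CRLF line terminators" means Windows line endings
        tr -d '\r' < config.yml > config.unix.yml
        # strips the carriage returns; dos2unix does the same where available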

  2. On 2/8/2019 at 3:49 PM, dorian said:

    @casperse

     

    Been busy at work.

     

    Ok, so it was actually @ken-ji who figured it out. When assigning a Host, use the IP; DO NOT use the hostname, as it doesn't work (shares won't load). However, you've said that you've tried this. Can you confirm that in your case you've tried //<IP>/software?

     

    Two other things to consider are:

    1) That you'll want your NAS to have a static IP, as you're not using a hostname.

    2) That you don't want reserved characters in your password, as the system management uses a bash shell; so avoid things like | & ; ( ) < > space tab $ # * @

     

    Hope that helps, as that's pretty much all I did.

    This solved a lot of headaches for me. I'm not sure why the device name isn't resolving, but entering the IP address solved my issue. A CLI sketch of the equivalent mount is below for reference.
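
    Roughly the CLI equivalent of what the GUI mount does, with placeholder values (the IP, share name, and credentials are illustrative):

        mount -t cifs //192.168.1.50/software /mnt/remotes/software \
          -o username=myuser,password='mypass',vers=3.0
        # quote the password: bash treats | & ; ( ) < > $ # * specially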

  3. I am running into a permissions error with Docker containers on Unraid accessing a shared NFS folder on a Synology NAS.

     

    The architecture I currently have is:

    Synology -> user 'docker' UID 1038 GID 100

    Synology -> shared folder 'media' -> allow user 'docker' read/write permissions

    Synology -> shared folder 'media' -> NFS permissions -> Allow IP address of Unraid, squash: no mapping, enable asynchronous

     

    Unraid -> NFS shares -> share mounted and accessible via CLI in /mnt/remotes/share

    Unraid -> docker container

    I've tried UID 99 GID 100

    I've tried UID 1038 GID 100

     

    The Arr apps are still getting permission errors.

     

    Does the UID on Unraid need to match the UID on Synology while using NFS? (A quick write test is sketched below.)
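
    One way to test this empirically from the Unraid shell, assuming Docker can pull a stock alpine image (the test file name is illustrative): with squash set to "no mapping", the container's UID is sent to the Synology unchanged, so it must be a UID the NAS has granted rights to (1038 here).

        # try writing as the Synology-side UID/GID from the setup above
        docker run --rm -u 1038:100 -v /mnt/remotes/share:/data alpine \
            touch /data/.writetest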

  4. Could anyone share their working changedetection.io + browserless config? I am getting an error 200 when searching for a product restock with the Chrome browser, and I'm curious whether this is a template issue or an issue with the website I'm trying to track.

  5. I'm trying to build from scratch with

     

    HDD: 3x20TB, 2x14TB

    SSD: 4TB, 2TB, 1TB

    NVMe: two slots available

     

    What is the best way to configure these drives to have:

    1) a media only pool (don't care if this is not backed up or on parity. Size ~20TB)

    2) a personal file pool (want this backed up with high fault tolerance to protect data, but infrequently accessed. Size ~5TB)

    3) a photography pool (backed up. Fast read access from networked PC. ~6TB)

    4) a cache pool for docker and plex, either combined or separate (fastest. ~??? TB required?)

     

    I have the option to purchase more hard drives to make the layout work from the get-go.

    I'm looking for help deciding how to mix and match drives, and whether to use ZFS vs. btrfs vs. XFS.

    I am building a PC for this purpose. I will try to have 10GbE and USB4 capability so I can add a DAS or something like that down the road if needed.

    Thank you for your help!!

  6. On 1/28/2021 at 4:27 PM, froland80 said:

    After a bit more digging, I found several related posts (after understanding that this is about forcing a user over SMB):

    The solution is to add the following lines to /boot/config/smb-extra.conf:

     

    [Global]

      force create mode = 0666

      force directory mode = 0777

      force user = nobody

      force group = users

      create mask = 0666

     

    Marking as solved.

    I'm still having a slight issue with this: I've applied the changes above, but something continues to change my Pictures SMB folder from 0777 to 0770. Is there some way to create a new user that can't modify folder permissions? (A per-share sketch follows.)
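
    One thing I may try, sketched under the assumption that a per-share section in smb-extra.conf overrides whatever keeps resetting the mode (the share name matches my Pictures share):

        [Pictures]
          force directory mode = 0777
          directory mask = 0777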

  7. On 6/23/2021 at 5:53 PM, jj_uk said:

    So if I add this to the go file, will that work at restart?


     

    echo "SUBSYSTEM==\"tty\", ATTRS{idVendor}==\"0451\", ATTRS{idProduct}==\"16a8\", ATTRS{serial}==\"serial num removed\", SYMLINK+=\"zigbee2mqtt\"" > /etc/udev/rules.d/99-usb-serial.rules
    

     

    This is how I keep mine persistent:

     

    Part 1 (identify the device and write the rule):

    lsusb
    # find your stick's vendor:product IDs

    udevadm info -a -n /dev/ttyUSB0 | grep '{serial}' | head -n1
    # where ttyUSB0 is the device you are looking for

    # OR, if you can't find the device node, search for it on the bus:
    udevadm info -a -n /dev/bus/usb/000/000 | grep '{serial}' | head -n1

    nano /etc/udev/rules.d/99-usb-rules.rules
    # create your own rules file

     

    Part 2 (persist it, since /etc is rebuilt on every Unraid boot):

    mkdir -p /boot/config/rules.d
    cp /etc/udev/rules.d/99-usb-rules.rules /boot/config/rules.d/99-usb-rules.rules
    # copy your rules to the boot config
    nano /boot/config/go
    # add these two lines to the go file and save:
    cp /boot/config/rules.d/99-usb-rules.rules /etc/udev/rules.d/99-usb-rules.rules
    chmod 644 /etc/udev/rules.d/99-usb-rules.rules
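
    One more step that may help (an assumption on my part, not something I've needed myself): after the go file copies the rule back, udev may need a nudge to apply it without replugging the stick.

        udevadm control --reload-rules
        # re-read rules from /etc/udev/rules.d
        udevadm trigger
        # replay device events so the SYMLINK shows up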

  8. Why not Duplicacy in Docker? Can you expand on what you mean about it not supporting local storage?

    In the Docker container, you should map an internal path to the share so it can "see" the share to back it up; a sketch of that mapping is below.
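
    As a rough sketch of what I mean (the image name and paths are placeholders, not a specific recommendation):

        # map the share read-only so the container sees it at /data/media
        docker run -d --name backup \
            -v /mnt/user/media:/data/media:ro \
            -v /mnt/user/appdata/backup:/config \
            some/backup-image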

     

    I've got Borg + Vorta running and it's a pretty nice setup. I was able to set up Borg with a simple config file, and it's been rock solid since. I'm not much of a CLI user myself, but this one is user-friendly. You can get a free 20GB Borgmatic repo to practice uploading to as well. This would be my first recommendation, just because of how powerful the deduplication is.

     

    Duplicacy is my secondary, which I'm playing with for backing up to the cloud. The GUI was semi-intuitive, but I had to do some research on how to use it.

     

    rsync is an easy CLI option that you can set up with a user script, but I'm not sure about versioning; it seems to be a straight copy.

     

    Duplicati can burn in a dumpster fire.

  9. Has anyone found a solution for running Home Assistant Supervised in a Docker container?

    I've attempted the VM method, but it always seems to cause mounting issues for my Zigbee and Z-Wave sticks.
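
    For what it's worth, the plain (non-Supervised) container route can pass the sticks straight through. A minimal sketch, assuming the coordinator shows up as /dev/ttyUSB0 (check with ls /dev/tty*):

        docker run -d --name homeassistant \
            --device=/dev/ttyUSB0:/dev/ttyUSB0 \
            -v /mnt/user/appdata/homeassistant:/config \
            ghcr.io/home-assistant/home-assistant:stable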

  10. *** [ DIAGNOSING ]: Networking
    [✓] IPv4 address(es) bound to the eth0 interface:
       192.168.20.100/24 does not match the IP found in /etc/pihole/setupVars.conf (https://discourse.pi-hole.net/t/use-ipv6-ula-addresses-for-pi-hole/2127)
    
    [✗] No IPv6 address(es) found on the eth0 interface.
    
    *** [ DIAGNOSING ]: Name resolution (IPv4) using a random blocked domain and a known ad-serving domain
    [✗] Failed to resolve kerebro.com via localhost (127.0.0.1)
    [✗] Failed to resolve kerebro.com via Pi-hole (192.168.20.100)
    [✓] doubleclick.com is 216.58.195.14 via a remote, public DNS server (8.8.8.8)
    
    *** [ DIAGNOSING ]: Discovering active DHCP servers (takes 10 seconds)
    /opt/pihole/piholeDebug.sh: line 1228: 27046 Killed                  pihole-FTL dhcp-discover
    
    *** [ DIAGNOSING ]: Pi-hole processes
    [✗] lighttpd daemon is inactive
    [✗] pihole-FTL daemon is inactive

    I'm curious if anyone has run into these issues recently and how to solve them.

  11. 8 hours ago, vid1953 said:

    I am trying to set up Sonarr. Indexer: SABnzbd. What is a URL BASE? Also, when you try to watch a video for help, most are from years ago and the newest setup is completely different.

     The indexer is where you search for releases; your actual news host is configured inside SABnzbd.

     

    SABnzbd is the download client. The URL Base is the sub-path it's served under, not the IP (e.g. the /sabnzbd in http://192.168.1.10:8080/sabnzbd); the Host field is where the IP goes, and URL Base can usually stay blank.

  12. 10 hours ago, yanksno1 said:

    @LintHart When I tried those on my 2 categories (one for tv-sonarr, one for radarr) it wouldn't import. I'm about to give up there haha. Hopefully it'll start working again one day.

     

    @Rick_Sanchez Able to fully set up everything for Usenet! Got NZBGet's network path going through my Deluge VPN setup, so that's awesome (minus losing the webui link). Found a few indexers (def costly, but worth it in my opinion). Download speeds are nice! Now I need to find an indexer for some older shows (a couple from '07). Weirdly, one of them did have the first 2 seasons but not the last 2. But NZBGet does seem to auto-delete and move everything properly. Still wondering what's going on with Deluge.

     

    NZBs just seem to “work” and have great speeds. I’d probably stick with them and use Deluge for manual grabs or as a backup, but I’ve turned mine off to make life easier. You might be able to join a private indexer for the niche stuff!

  13. 3 minutes ago, yanksno1 said:

    @LintHart Glad it's working for you now. What setting is it, and where? Just tested a file again and it still won't auto-delete the torrent (in Deluge) or the original file.

     

    I might sign up for an account with NewsHosting and use this setup to dive into the world of NZBs. Just a bit wary that it'll solve my issue, since I'm having the same issue here.

    I’d recommend that setup. Give it a try and let us know what you think. Make sure to turn off Deluge in Sonarr after you set it up, to test it out!

  14. 1 hour ago, yanksno1 said:

     

    Yup, they match and seem to be working between Sonarr and Deluge. It's just that when it finishes, it doesn't delete (in Deluge, nor the original downloaded file after it imports).

     

    Deluge:

    [screenshot: Deluge settings, 2021-02-18 11:06 AM]

     

    Sonarr:

    [screenshot: Sonarr settings, 2021-02-18 11:07 AM]

     

    Had to look up what a socks5 proxy was, so def not using that (just straight OpenVPN with Deluge and PIA).

     

    Now my memory usage is up to 99%. Kinda freaking me out haha. 

     

    UPDATE: Memory spiked and then Overall Load completely spiked and everything crashed. Rebooted and Memory usage seems to be back at a normal state around 22%. No idea what caused that. Tested downloading a new torrent and same thing though. 

    I think our options are to try the Auto Remove Plus plugin, or you can venture into the world of NZBs (much, much faster, and it does in fact auto-delete). Otherwise we may have to do the ol' manual method, or find a script to fill the gap; a rough sketch of one is below.
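
    If it comes to the script route, something like this could run as a scheduled user script. A rough sketch with an illustrative path, assuming Sonarr has already imported everything older than the cutoff:

        # prune completed downloads older than 7 days that Deluge left behind
        find /mnt/user/downloads/complete/tv -mindepth 1 -mtime +7 -delete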

  15. 6 hours ago, yanksno1 said:

    I now seem to have a memory issue I haven't had before. It's up to 94-95%, and I shouldn't be that high with no VMs running or anything like that. Can you guys take a look at my logs to see if you spot anything?

     

    [screenshot: Screen Shot 2021-02-17 at 22.33.10]

     

    [attachment: tower-syslog-20210218-0330.zip]

    What are your volume mappings? Does the container path match between Sonarr and Deluge? (A quick way to compare them is sketched below.)

     

    Are you using a SOCKS5 proxy?

     

    And is Sonarr spitting out any errors?
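
    To compare the mappings from the Unraid terminal (container names assumed; use whatever yours are called):

        docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' sonarr
        docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' deluge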

  16. 1 hour ago, yanksno1 said:

     

    Okay, tried it again with your settings and now it's importing! No idea why it wasn't before and now it is; perhaps it was from running Docker Safe New Permissions? But it's still not auto-deleting the torrent in Deluge, or the original file.

     

    [screenshot: Screen Shot 2021-02-17 at 11.34.07 AM]

     

    Tried the user/owner Terminal lookup, and I was able to do that, I think. Here's a screenshot of that for you.


    [screenshot: Screen Shot 2021-02-17 at 5.28.40 PM]

     

    Anything else you want me to try?

    That's good to hear.

     

    I think you had the torrents set to seed for a long time, or to a large ratio, so they would just sit in Deluge. You could toy with the values (try setting them to 0.1 while testing) to see if that's what's causing them to sit there.

     

    Here is the hail-mary. This may not be the most "secure" setup, but you can try running the Sonarr / Radarr Docker containers with "Privileged" turned on. Again, some people get really upset about turning this option on, but hey, if we're going to try everything we may as well give that a shot too. Alternatively, try Deluge in "Privileged" mode as well. Let me know if this sticks.

     

    Also, there is an addon for Deluge called "Auto Remove Plus" that you could try to install, but that might get hairy. 

     

    Another thought: make sure your download paths match in both Sonarr and Deluge. And I think we already reviewed having the proper label applied for Sonarr. Did you right-click the label in Deluge and set the path (i.e. Downloads/Complete/tv)?

     

    Have you looked into NZBs? They are fast and "just work." Might be something else to consider, although it comes with the added cost of signing up for those services.

  17. 11 hours ago, yanksno1 said:

     

    Tried it with those settings and they didn't import! It freaked me out a bit when I switched back, because at first they didn't import either, but after a second try they did. Then I tried switching from my cache drive to one of my array shares, and that didn't do it either. I turned off the permissions settings also. The Downloads folder paths are always working right; it's just that when the torrent finishes in Deluge, it won't delete the torrent there, nor delete the original file on the cache Downloads share. I should also mention I did replace the Disk 1 HDD that failed; best part of Unraid, not losing any data haha. Any other things/screenshots you want me to try/take? Really appreciate the help!

     

    [screenshot: Screen Shot 2021-02-16 at 5.22.37 PM]

    How about the Tools -> Docker Safe New Permissions option? Maybe this will fix some permission issue that might be stopping us from importing.

     

    The other settings SHOULD work for importing... but use what works for now. Who is the user:owner of the downloads folder? (Open a terminal, cd to the parent folder, then run "ls -al"; an example is below.)
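
    For example (the path is a guess; use your actual downloads share). The -n flag shows numeric IDs, which makes a UID/GID mismatch easy to spot; Unraid's default share owner is nobody:users (99:100).

        ls -aln /mnt/user/downloads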