4554551n

Members
  • Posts: 53
  • Joined
  • Last visited


4554551n's Achievements: Rookie (2/14)

Reputation: 4

  1. I'll see if he responds; maybe he knows something offhand. If not, I'll post it. I've got a lot of Dockers and plugins, and I don't know what personal info they could leak.
  2. You can't see it in the screenshot, but my mouse pointer is hovering over Cache_vms, and you can see it's greyed out and can't be selected as an option.
  3. Hi, could anyone please advise how to configure the Edit Docker settings to use a DoH (DNS over HTTPS) provider? I would like to use the Mullvad server for all upstream checks. I've been trying to use the DoH client Docker with the following config:

         ## Google's resolver, good ECS, good DNSSEC
         #[[upstream.upstream_ietf]]
         #    url = "https://doh.mullvad.net/dns-query"
         #    weight = 50

         ## CloudFlare's resolver, bad ECS, good DNSSEC
         ## ECS is disabled for privacy by design: https://developers.cloudflare.com/1.1.1.1/nitty-gritty-details/#edns-c>
         [[upstream.upstream_ietf]]
         url = "https://cloudflare-dns.com/dns-query"
         weight = 50

     But when I enable the Mullvad ones, I get the following in the logs:

         HTTP error from upstream https://doh.mullvad.net/dns-query: 400 Bad Request

     ALTERNATIVELY: is there a way to feed the Pi-hole container through the binhex-deluge-vpn Docker? I've got my Jackett going through it. If I could feed Pi-hole through it, I would be able to use Mullvad's regular DNS that they use for users on the VPN. But there's a lot more port management with Pi-hole than there is with Jackett. I tried and failed.
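     A minimal sketch of that alternative route, for anyone attempting it: a container can be joined to another container's network namespace so all its traffic goes through the VPN. The container names, image, and timezone below are assumptions for illustration, not confirmed from this thread:

         # run Pi-hole inside the network namespace of the VPN container
         # (assumes the VPN container is named binhex-delugevpn)
         docker run -d --name pihole \
           --network=container:binhex-delugevpn \
           -e TZ=Europe/London \
           pihole/pihole:latest

         # note: with a shared network namespace, Pi-hole's ports (53/tcp+udp, 80/tcp)
         # must be published on the binhex-delugevpn container, not on pihole itself

     That "publish the ports on the VPN container instead" detail is likely the extra port management referred to above.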
  4. Yes, I have one drive in the cache_vms pool, and it's set to XFS, no problem. The issue is that I am then unable to allocate that pool to a share; that is where it is greyed out.
  5. I followed SpaceInvader One's instructions here to create an XFS cache disk. I cannot add it as a cache pool for a share. It's greyed out and cannot be changed for an existing share or added for a new share. Deleting the pool and recreating it with BTRFS works as expected. Trying again with XFS consistently reproduces the issue where it's greyed out for selection (on the share screen, where you allocate a specific pool to a share). Also, weirdly, it's showing as using 14GB when it's only just been created, unused, with nothing on it (image attached). Could someone please advise?
  6. No. Depending on the VPN you're using (presumably PIA), you need to go into your PIA account and forward a port from there. They will give you a port to use, and you plug that port into Deluge. Then you can add this magnet link as a torrent to Deluge:

         magnet:?xt=urn:btih:4d60844af7e42602086266fcde02971a4fea29cf&dn=checkmyiptorrent%20Tracking%20Link&tr=http%3a%2f%2f143.110.208.40%2f

     It'll create a torrent that, in the Status tab under Tracker Status, will give you an error containing the IP of your VPN connection. Then you can use something like https://portchecker.co/ to plug in the IP from the error and the port from PIA that you've put into Deluge, to see if it's working.

     You don't need to worry about your router, because the router isn't the endpoint of the connection; the exit server of the VPN connection is the endpoint that other torrent clients will see and try to connect to. Though depending on how you have your VPN set up, the server you connect to *may* change when you restart the Docker/server or something else. So just keep an eye on that, and if it does change, use the port checker again. PIA should be able to keep the port forwarded regardless of external IP; just check it periodically.
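     A possible shortcut for finding the VPN exit IP without the tracker-error trick, assuming the Deluge container is named binhex-delugevpn and has curl available (both are assumptions):

         # print the external IP as seen from inside the VPN tunnel
         docker exec binhex-delugevpn curl -s ifconfig.io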
  7. Hey all. I found about 15 threads on this topic but no solutions. I'm using all @binhex containers: Deluge, Sonarr, Radarr. The issue is that when a torrent completes, Deluge pauses it. Radarr/Sonarr both import the completed torrent, but they do not delete the torrent from cache, and they do not delete the torrent and the extra files that come with it from Deluge.

     I've confirmed that they can download the file. I've confirmed that Deluge stops the torrent. I've confirmed that Sonarr/Radarr both have the tickbox set to remove the completed torrent. I am not seeing any errors in any logs.

     I have recently rebuilt the server. I've moved to a new server with more drives etc. and migrated everything, including restoring my Dockers. Prior to this, Sonarr and Radarr were deleting the files from cache upon ingestion, but they still weren't deleting the torrent and extra files. Now I appear to have gone back a step and they're not even deleting the file on ingestion.

     Please can I get some help with this? I must have read every thread on this topic and looked at every setting. (I've even tried the Label add-on in Deluge and setting labels in the connection settings in Radarr/Sonarr. That didn't work, and it caused errors on restart of Deluge, as Deluge seems to disable the Label add-on on restart; neither here nor there, really.) Spent days on this. Please, someone help.
  8. Ok, I have a fix, with the help of one Spad from the Discord chat (a shell sketch of steps 1-4 follows this list):

         1. Stop the Nextcloud Docker.
         2. Create a new appdata folder as /appdata/nextcloud-custom.
         3. Move /appdata/nextcloud/custom-cont-init.d and custom-services.d there with their contents, but make sure they're root-owned.
         4. Delete the readme files from both of those folders.
         5. In the Docker settings, create two new volume mounts pointing to those locations:
            - container path /custom-services.d mapped to /mnt/user/appdata/nextcloud-custom/custom-services.d-fix/
            - same for cont-init
            - make sure they are both read-only
         6. Start the Docker.

     They've made a change where they now look for files in that old location and, if anything is there, generate that log message. There is a bug where that log message gets read as a script, fails to execute, and we get what we get. A fix is in the pipe.

     Be sure you deleted the readme files from the folders, or it'll generate a slightly different but related error. If you skipped that step, do it, then go into the Docker settings and force an update on the container to get unraid to recreate it.
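     The promised sketch of steps 1-4 from a terminal; the /mnt/user/appdata/... paths and the container name nextcloud are assumptions based on unraid defaults, so adjust to your setup:

         # step 1: stop the container
         docker stop nextcloud
         # steps 2-3: create the new folder, move both dirs, make them root-owned
         mkdir -p /mnt/user/appdata/nextcloud-custom
         mv /mnt/user/appdata/nextcloud/custom-cont-init.d /mnt/user/appdata/nextcloud-custom/
         mv /mnt/user/appdata/nextcloud/custom-services.d /mnt/user/appdata/nextcloud-custom/
         chown -R root:root /mnt/user/appdata/nextcloud-custom
         # step 4: remove the readme files, or you'll hit the related error mentioned above
         rm -f /mnt/user/appdata/nextcloud-custom/custom-cont-init.d/*README*
         rm -f /mnt/user/appdata/nextcloud-custom/custom-services.d/*README*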
  9. I am having a similar issue, probably with the same cause. The Docker won't start, and I'm getting errors about this file. It looks like it wants /etc/s6-overlay/s6-rc.d/user/contents.d/custom-svc-README.txt/run to be an executable script, shebang and all, but it's just that message and it can't execute it. I THINK. Though how to fix it, no idea. Still struggling. If I delete it and restart the container, it comes straight back. Deleting it, or the entire /etc/s6-overlay/s6-rc.d/user/contents.d/custom-svc-README.txt, while it's running also doesn't work. I would really like to know the fix too.
  10. Something I've just discovered: if you do this, you end up with the issue described here: https://github.com/rustdesk/rustdesk/issues/499 The solution, as mentioned there, is that you need to copy the key files from hbbs to hbbr, then restart hbbr.
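      A hedged sketch of that key copy; id_ed25519/id_ed25519.pub is the key pair hbbs generates, but the appdata paths and the relay container name below are assumptions, not taken from this thread:

          # copy the hbbs key pair into the relay's data dir, then restart the relay
          cp /mnt/user/appdata/rustdeskserver/id_ed25519* /mnt/user/appdata/rustdeskserver-relay/
          docker restart RustDeskServer-relay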
  11. Yeah, I got the pub key all sorted, and encryption is working well; it's just mandating it, so I can open it to the world and not have randoms try to use it.

      Ok, so there are two containers, RustDeskServer and RustDeskServer-relay. I seem to need both (internally at least; it doesn't run with just the server, don't know why). So I can go into RustDeskServer > Edit and, in Post Arguments where it has /usr/bin/hbbs, change it to /usr/bin/hbbs -k _, and in RustDeskServer-relay > Edit, /usr/bin/hbbr becomes /usr/bin/hbbr -k _. Is that correct?

      Also, if I do this and open it to the world, that would stop randoms from using it, legitimately. Assuming there's some issue/vulnerability in RustDesk (I'm paranoid), how much protection does unraid offer me if someone connects on one of those ports and starts doing shit? I don't think I can even use port triggering for bonus protection, because it starts with an inbound connection, yeah?

      Also, looking at the instructions, it seems to need ./hbbs -r <relay-server-ip[:port]> -k _ So if I set hbbs -k _, how is the relay server IP being worked into it?

      Also, just for the record, you're amazing and thank you for your time!
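      For reference, a hedged sketch of what the combined post arguments could look like if the -r flag from the RustDesk docs is included as well; the 192.168.1.10 address is a placeholder, and 21117 is the default hbbr relay port from those docs:

          # RustDeskServer post arguments: advertise the relay and mandate the key
          /usr/bin/hbbs -r 192.168.1.10:21117 -k _
          # RustDeskServer-relay post arguments
          /usr/bin/hbbr -k _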
  12. Could you please help me with exactly what I should be putting there? I don't really know Docker. It's extra confusing because it wouldn't be an "extra" parameter; it would be modifying part of the existing Docker yml, wouldn't it? Because the command to run the program would already be in there, wouldn't it?
  13. How can I perform the steps found here: https://rustdesk.com/docs/en/self-host/install/#key to mandate encryption using your containers? I'd like to stop anyone else who finds the open ports from using them. Also, I'm still unclear on relay vs server; could you please explain in a little more detail? I currently have it set up using both internally, and encrypted and unencrypted both work. I haven't opened the ports yet, and won't until I can enforce encryption, not just allow it.
  14. YES! OK! If anyone is having problems with the SpaceInvader One guide and getting the 400 error:

          proxy_set_header Host $host;

      ^^^ This right here is the problem! Comment that out and it'll work. I don't know why, and I don't know what it does, but that's the problem! You'll also need to comment out:

          proxy_redirect off;

      because it gives that line 19 error. Not sure what that's about either, but the logs point straight to it, so I can see people have worked it out. But

          # proxy_set_header Host $host;

      is the poison causing the 400 error.

      For bonus points, those wanting to secure their OnlyOffice need to add the following to the OnlyOffice Docker:

          Variable Name: secret
          Key: JWT_ENABLED
          Value: true

      and

          Variable Name: Secret Key
          Key: JWT_SECRET
          Value: password123

      Then you can add password123 in your secret key right under the ONLYOFFICE Docs address for a secure connection. *mic drop* That took me like 5 hours, enjoy it!

      PS: Change "password123" to something else; that's my password and no one else is allowed to use it, I own the NFT.
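      A hedged excerpt of what the relevant proxy block might look like after those edits; the location block, backend address, and the other headers shown are assumptions added for context, not lines taken from the guide:

          location / {
              proxy_pass http://192.168.1.10:8082;   # assumed OnlyOffice backend address
              # proxy_set_header Host $host;         # commented out: caused the 400 error
              # proxy_redirect off;                  # commented out: caused the line 19 error
              proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
              proxy_set_header X-Forwarded-Proto $scheme;
          }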