naxos

  1. I'm trying to connect this to an existing Backblaze B2 bucket and failing somewhere. In my template I have:

     Host path 2 = /mnt/user/data-media/
     RCLONE_MEDIA_SHARES = /media/<folder-with-same-name-as-b2-bucket>
     RCLONE_REMOTE_NAME = b2:<existing-bucket-name>
     RCLONE_OPERATION = copy
     RCLONE_DIRECTION = both

     I went through the initial config in the console. It said no remotes were found and asked if I wanted to create a new one. I said yes and entered the info for my existing B2 bucket: the bucket's applicationKeyId and Application Key, as mentioned here. I can access the UI, and in the Explorer I can see my existing bucket and its contents. I tried deleting a file through the Explorer, but it remains in the bucket. I tried adding a file to /media/<folder-with-same-name-as-b2-bucket> on Unraid, but it doesn't get copied to the bucket.

     Both the local-to-remote and remote-to-local reports say:

        Failed to create file system for "b2:<bucket-name>/media/<bucket-name>": didn't find section in config file

     The rclone.conf file exists in the right place and contains only the details for my B2 bucket. Running rclone ls <bucket-name>: returns:

        NOTICE: Config file "/home/nobody/.config/rclone/rclone.conf" not found - using defaults

     Can anyone see where I'm screwing this up?
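
     For reference, a working B2 section in rclone.conf looks roughly like the sketch below, and rclone always addresses the remote by the section name, not the bucket name, so the ls test would be run against <remote-name>:<bucket-name>. The "Config file not found" notice suggests the command was run from a location rclone isn't looking at, which pointing at the file explicitly with --config rules out. Names and paths here are placeholders, not this container's actual values:

        # rclone.conf section for a B2 remote (section name is what you use on the command line)
        [b2remote]
        type = b2
        account = <applicationKeyId>
        key = <applicationKey>

        # list buckets, then list the contents of one bucket, using an explicit config path
        rclone --config /path/to/rclone.conf lsd b2remote:
        rclone --config /path/to/rclone.conf ls b2remote:<bucket-name>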
  2. Looks like a permissions issue. Did you see this post?
  3. Update: After a lot more work I'm now able to access my containers through the Cloudflare tunnel. Game changer! I think my main problems were some sloppy copy/paste in my nginx.conf file and not paying close enough attention to which port numbers were mapped to http and https in Swag.

     What's interesting is that I can only get this to work with a tunnel created on the CLI. I've tried twice to use a tunnel created through the Cloudflare UI and can't get that to work. I feel like I've tried all the config options in the UI, like different IP addresses and localhost, but I must be missing something. The tunnel is active and I can see in the logs when requests come through, but I just get various error messages depending on which IP I use, or localhost. I don't think it really matters, but it bothers me when I can't figure something out. If anyone has this working with a UI-created tunnel, please let me know. Next challenges are figuring out how to enable LAN access with my custom domain and properly securing the external access.

     -------------------------------------------

     After a solid 6 hours on this, I throw myself on the mercy of the good people in this great community. Most of this stuff is new to me, so it is 100% likely to be user error. I followed Ibracorps' guides for setting up the Cloudflare tunnel and configuring Swag. I think I have scrapped everything and started over 4 times now. I've also carefully read through this whole topic and tried all the suggestions.

     I'm getting "unable to reach the origin service" messages, and in a browser mydomain.com gives me a 502 error:

        2022-10-04T05:14:46Z ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" cfRay=754b6f95f8fb7ab7-LAX originService=https://192.168.1.107:8001
        2022-10-04T05:14:46Z ERR Request failed error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" connIndex=3 dest=https://mydomain.com/favicon.ico ip=xxx.xx.xxx.77 type=http

     Things I've tried:
       • Setting noTLSVerify: true
       • Setting originServerName to subdomain.mydomain.com (the subdomain has a valid CNAME record)
       • Trying both http and https for the service

     One very simple thing I'd like to confirm is exactly which IP address I should use for the service in the cloudflared config file. My Swag port mappings are:

        172.18.0.5:443/TCP --> 192.168.1.107:8001
        172.18.0.5:80/TCP  --> 192.168.1.107:44301

     I access my Unraid UI at http://192.168.1.107, and I've been assuming I should use the 192.168 IPs as the service (edit: I've also tried the 172.18 IPs). Is that correct?

     Because most of this is new to me, I'm limited in my ability to troubleshoot: I don't know how to tell exactly where in the chain the problem is. It seems like I'm either pointing the tunnel at Swag improperly, or I've got a problem with my Swag setup that's causing it not to respond. I welcome any and all suggestions.
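
     For reference, a CLI-created tunnel is driven by a config.yml along these lines; the service line is where the host IP and published https port go. The tunnel ID and credentials path are placeholders, and if cloudflared shares a custom Docker network with Swag, the container name and internal port (e.g. https://swag:443) can be used instead of the host IP:

        tunnel: <tunnel-id>
        credentials-file: /home/nonroot/.cloudflared/<tunnel-id>.json
        ingress:
          # send requests for the domain to the host port Swag's https side is published on
          - hostname: mydomain.com
            service: https://192.168.1.107:8001
            originRequest:
              noTLSVerify: true    # Swag's certificate is for the domain, not the raw IP
          # required catch-all rule
          - service: http_status:404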
  4. I've also been trying to figure this out. Data will persist through restarts, but it's gone after a container update. Do you have to store the data in an external DB to keep it through updates?
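
     My guess is the app is writing this data inside the container filesystem rather than to a mapped volume, which would explain why it survives a restart but not the re-create that an update does. Mapping the app's data directory into appdata along these lines keeps it on the host (paths are examples, and /config is just a common convention, not necessarily what this image uses):

        # anything written under the mapped path lands in appdata and survives container re-creation
        docker run -v /mnt/user/appdata/<app>:/config <image>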
  5. Thank you very much! That worked for me too. I'll have to go back and compare your steps to what I originally did so I can understand what I got wrong.
  6. I guess I'm just too far out of my depth. I followed those instructions, but the netdata.conf I can access on the host is not the netdata.conf that the container is using. If anyone else has or gets this working, I'd greatly appreciate any details on how you did it.
  7. I've got this running well and monitoring my Docker containers, but I can't get it to see my customized netdata.conf. In the template I've got:

     host path: /mnt/cache/appdata/netdata/netdata.conf
     container path: /etc/netdata/netdata.conf
     mounted as rw

     I've edited the netdata.conf in /mnt/cache, but when I go to ip-address:19999/netdata.conf it shows the default file with everything commented out. What do I have wrong?
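
     For reference, one commonly suggested approach is to seed the host-side copy from the config the container is actually using before restarting with the mapping in place, so the file being mounted starts from the container's defaults rather than an empty or hand-made file. The container name "netdata" and the paths are assumptions taken from the template above:

        # dump the container's active config to the host path used in the template
        docker exec netdata cat /etc/netdata/netdata.conf > /mnt/cache/appdata/netdata/netdata.conf
        # or grab the live config that the web UI serves
        curl -o /mnt/cache/appdata/netdata/netdata.conf http://<ip-address>:19999/netdata.conf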
  8. Well, it just started working today, with both the Mullvad and Cloudflare DNS servers. I have no idea why. I guess I'll take it.
  9. I just started having a DNS problem with this container and I've maxed out my limited Linux chops trying to figure it out. I've spent about 2 hours on this and done a lot of internet searching, but haven't found anyone having this problem recently. Odds are I'm missing something basic.

     SABnzbd's "Status and Interface Options" screen says "Connection Failed!" for Public IPv4 address and Nameserver / DNS Lookup. I turned Debug logging on, but the logfile doesn't seem to be telling me anything: sabnzbd.log

     Relevant details:
       • The container worked flawlessly until yesterday, though I probably hadn't used it in about a week before that
       • My binhex-deluge container has no problems using the same VPN setup, and I have no problems with any of my other containers or with the host
       • I'm using Mullvad VPN
       • I've always used the Mullvad DNS servers, but have also tried 1.1.1.1,1.0.0.1
       • I've restarted the container at least 20 times in the course of troubleshooting
       • ifconfig inside the container shows I have an IPv4 address from Mullvad, which SABnzbd shows as the local IPv4 address
       • I can ping the Mullvad and Cloudflare DNS server IPs from inside the container
       • Running dig inside the container yields "connection timed out; no servers could be reached"

     Anyone have any thoughts on how I can fix this?
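
     For what it's worth, the checks below narrow down whether the problem is the resolvers the container is handed or DNS traffic being dropped inside the tunnel. All commands run inside the container, and the NAME_SERVERS variable is how the binhex VPN images pass resolvers in, if I've understood the template correctly:

        cat /etc/resolv.conf            # which nameservers the container is actually using
        dig @1.1.1.1 example.com        # query one server directly over UDP
        dig +tcp @1.1.1.1 example.com   # retry over TCP in case UDP port 53 is blocked on the tunnel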
  10. This is probably a Docker noob question, but I haven't run into this problem with other containers yet. The container will start, but the log says:

      ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong?

      I assume this is because I'm not mapping my volumes correctly. I've read through this thread and done some internet searching, but can't figure it out. Can anyone set me straight? Here are my volume mappings:
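
      One general Docker pitfall worth ruling out, whatever the exact mappings: bind-mounting an empty host directory over /opt/splunk/etc hides everything the image ships at that path, including splunk-launch.conf, which produces exactly this error. Seeding the host path from the image first avoids it (image name and host paths below are placeholders):

         # copy the image's stock /opt/splunk/etc out to the host before mapping over it
         docker create --name splunk-seed <splunk-image>
         docker cp splunk-seed:/opt/splunk/etc /mnt/user/appdata/splunk/
         docker rm splunk-seed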
  11. I've got an i5-11400. At this point it seems like my one remaining problem is that all remote streaming, even with clients capable of direct streaming, is forced to transcode at 2 Mbps, which takes it down to SD. I assume that's because they're being forced through the Plex relay, which caps at 2 Mbps. This did not happen on my HP Mini with an i5-8500T and a bare-metal install of Plex. Looks like I'm going to have to give myself a crash course on Docker and Unraid networking to figure this out. I'm also going to get my HP server back online and confirm that it will still let remote clients direct stream.
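
      For anyone hitting the same thing: as I understand it, the relay only kicks in when the server's remote-access port can't be reached directly, so the first thing to confirm is that TCP 32400 is published by the container and forwarded by the router. Ports shown are the defaults and the public IP is a placeholder:

         # with bridge networking the container needs Plex's remote-access port published
         docker run -p 32400:32400/tcp ...
         # from outside the LAN, confirm the router forward actually reaches the host
         nc -zv <public-ip> 32400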
  12. Just an update in case it helps anyone else, unchecking "Enable HDR tone mapping" in Settings > Transcoder appears to have fixed the problem for me. I tested with three simultaneous 1080p transcodes and they all did hardware transcoding with no problem. This is with /dev/dri added to the Plex container as a device.
  13. Interestingly, when I removed /dev/dri as a device and added it as a path I wasn't getting any hardware transcoding. Changing it back to a device re-enabled hardware transcoding.
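
      That matches the difference between the two mappings as I understand it: a path mapping is just a bind mount, while adding /dev/dri as a device also grants the container permission to use the render nodes, which is what Quick Sync needs. The docker-run equivalent looks like this (container and image names are examples):

         # pass the iGPU through as a device, not a volume/path mapping
         docker run --device /dev/dri:/dev/dri ... <plex-image>
         # confirm the render nodes are visible inside the container
         docker exec plex ls -l /dev/dri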
  14. I forced transcoding on the Shield for testing; I can definitely direct play on the LAN. I was testing with Edge Chromium on a Windows 10 laptop and an Android 11 phone with the Plex app, and both can direct play with no problem. I had started digging into the Plex logs, but wanted to make sure the problem wasn't related to adding /dev/dri as a device rather than a path. Thank you very much for all your help! I'll work my remaining transcoding issue over on the Plex forums.