naxos

Everything posted by naxos

  1. I'm trying to connect this to an existing Backblaze B2 bucket and failing somewhere. In my template I have:
     - Host path 2 = /mnt/user/data-media/
     - RCLONE_MEDIA_SHARES = /media/<folder-with-same-name-as-b2-bucket>
     - RCLONE_REMOTE_NAME = b2:<existing-bucket-name>
     - RCLONE_OPERATION = copy
     - RCLONE_DIRECTION = both
     I went through the initial config in the console. It said no remotes were found and asked if I wanted to create a new one. I said yes and entered the info for my existing B2 bucket: the bucket's applicationKeyId and Application Key, as mentioned here. I can access the UI, and in the Explorer I can see my existing bucket and its contents. However, when I tried deleting a file through the Explorer it remained in the bucket, and when I tried adding a file to /media/<folder-with-same-name-as-b2-bucket> on Unraid it didn't get copied to the bucket. Both the local-to-remote and remote-to-local reports say: Failed to create file system for "b2:<bucket-name>/media/<bucket-name>": didn't find section in config file. The rclone.conf file exists in the right place and contains only the details for my B2 bucket. rclone ls <bucket-name>: returns: NOTICE: Config file "/home/nobody/.config/rclone/rclone.conf" not found - using defaults. Can anyone see where I'm screwing this up?
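     For reference, here's roughly what I'd expect the relevant rclone.conf section to look like, going by the rclone docs for the B2 backend (values are placeholders; the section name has to match the remote name the error is looking for, i.e. b2):

         [b2]
         type = b2
         account = <applicationKeyId>
         key = <applicationKey>

     And if I'm reading the two messages right, "didn't find section in config file" means rclone found a config but no [b2] section in it, while the NOTICE from my manual command suggests it was reading a different config path entirely. Something like rclone --config /path/to/rclone.conf ls b2:<bucket-name> should test against the exact file the container wrote.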
  2. Looks like a permissions issue. Did you see this post?
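     (For anyone who lands here after that link dies: the usual Unraid appdata permission fix I've seen suggested looks like the below - the path is a placeholder, and 99:100 is the default nobody:users user/group on Unraid:

         chown -R 99:100 /mnt/user/appdata/<container>
         chmod -R u+rwX,g+rwX /mnt/user/appdata/<container>

     Not guaranteed to be the fix for every container, but it's the first thing worth checking.)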
  3. Update: After a lot more work I'm now able to access my containers through the Cloudflare tunnel. Game changer! I think my main problems were some sloppy copy/paste in my nginx.conf file and not paying close enough attention to which port numbers were mapped to http and https in Swag. What's interesting is that I can only get this to work with a tunnel created on the CLI. I've tried twice to use a tunnel created through the Cloudflare UI and I can't get that to work. I feel like I've tried all the config options in the UI, like different IP addresses and localhost, but I must be missing something. The tunnel is active and I can see in the logs when requests come through, but I just get various error messages depending on which IP (or localhost) I use. I don't think it really matters, but it bothers me when I can't figure something out. If anyone has this working with a UI-created tunnel, please let me know. Next challenges are figuring out how to enable LAN access with my custom domain and properly securing the external access.
     -------------------------------------------
     After a solid 6 hours on this, I throw myself on the mercy of the good people in this great community. Most of this stuff is new to me, so it is 100% likely to be user error. I followed Ibracorps' guides for setting up the Cloudflare tunnel and configuring Swag. I think I have scrapped everything and started over 4 times now. I've also carefully read through this whole topic and tried all the suggestions. I'm getting the "unable to reach the origin service" messages, and in a browser mydomain.com gives me a 502 error:
     2022-10-04T05:14:46Z ERR error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" cfRay=754b6f95f8fb7ab7-LAX originService=https://192.168.1.107:8001
     2022-10-04T05:14:46Z ERR Request failed error="Unable to reach the origin service. The service may be down or it may not be responding to traffic from cloudflared: EOF" connIndex=3 dest=https://mydomain.com/favicon.ico ip=xxx.xx.xxx.77 type=http
     Things I've tried:
     - Setting noTLSVerify: true
     - Setting originServerName to subdomain.mydomain.com (subdomain has a valid CNAME record)
     - Trying both http and https for the service
     One very simple thing I'd like to confirm is exactly which IP address I should use for the service in the cloudflared config file. My Swag port mappings are:
     - 172.18.0.5:443/TCP --> 192.168.1.107:8001
     - 172.18.0.5:80/TCP --> 192.168.1.107:44301
     I access my Unraid UI at http://192.168.1.107. I've been assuming I should use the 192.168 IPs as the service (edit: I've also tried the 172.18 IPs). Is that correct? Because most of this is new to me, I'm limited in my ability to troubleshoot; I don't know how to tell exactly where in the chain the problem is. It seems like I'm either pointing the tunnel to Swag improperly, or I've got a problem with my Swag setup that's causing it not to respond. I welcome any and all suggestions.
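     In case it helps anyone, the CLI-created setup that ended up working for me follows the standard cloudflared config.yml layout; a rough sketch (tunnel ID, credentials path, and port are placeholders for my actual values) looks like:

         tunnel: <tunnel-uuid>
         credentials-file: /home/nonroot/.cloudflared/<tunnel-uuid>.json
         ingress:
           - hostname: mydomain.com
             service: https://192.168.1.107:8001
             originRequest:
               noTLSVerify: true
           - service: http_status:404

     My understanding is that noTLSVerify (or a matching originServerName) is needed because cloudflared connects to Swag by IP, so the hostname on Swag's certificate won't match; the final http_status:404 rule is the catch-all that cloudflared requires at the end of the ingress list.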
  4. I've also been trying to figure this out. Data will persist through restarts, but it's gone after a container update. Do you have to store the data in an external DB to keep it through updates?
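     My rough understanding of why, in case I have it right: anything the app writes inside the container's own filesystem layer is discarded when the container is recreated from the new image on update, so data that needs to survive has to live under a host path mapping (or in an external DB). In template terms, something like (paths hypothetical):

         Host path: /mnt/user/appdata/<container>/data
         Container path: <the directory the app writes its data to>

     Restarts reuse the existing container, which would be why the data survives those but not updates.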
  5. Thank you very much! That worked for me too. I'll have to go back and compare your steps to what I originally did so I can understand what I got wrong.
  6. I guess I'm just too far out of my depth. I followed those instructions, but the netdata.conf I can access on the host is not the netdata.conf the container is using. If anyone else has this working (or gets it working), I'd greatly appreciate any details on how you did it.
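     (For what it's worth, the way I convinced myself the two files differ was comparing them directly, with something like the below - container name assumed to be netdata:

         docker exec netdata cat /etc/netdata/netdata.conf

     which shows the stock commented-out defaults rather than my edits.)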
  7. I've got this running well and monitoring my docker containers, but I can't get it to see my customized netdata.conf. In the template I've got:
     - Host path: /mnt/cache/appdata/netdata/netdata.conf
     - Container path: /etc/netdata/netdata.conf
     - Mounted as rw
     I've edited the netdata.conf in /mnt/cache, but when I go to ip-address:19999/netdata.conf it shows the default file with everything commented out. What do I have wrong?
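     For anyone comparing notes, my template entry should be equivalent to a plain single-file bind mount, roughly (the image name here is the official one, which may not match what the template uses):

         docker run -d --name netdata \
           -p 19999:19999 \
           -v /mnt/cache/appdata/netdata/netdata.conf:/etc/netdata/netdata.conf:rw \
           netdata/netdata

     One gotcha I've since read about: bind-mounting a single file pins its inode, so an editor that saves by writing a new file and renaming it over the old one can leave the container looking at a stale copy until it restarts. I don't know yet whether that's what's happening to me.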
  8. Well, it just started working today, both with the Mullvad and Cloudflare DNS servers. I have no idea why. I guess I'll take it.
  9. I just started having a DNS problem with this container and I've maxed out my limited Linux chops trying to figure it out. I've spent about 2 hours on this and done a lot of internet searching, but haven't found anyone having this problem recently. Odds are I'm missing something basic. The SABnzbd "Status and Interface Options" screen says "Connection Failed!" for Public IPv4 address and Nameserver / DNS Lookup. I turned debug logging on, but the logfile doesn't seem to be telling me anything: sabnzbd.log
     Relevant details:
     - The container worked flawlessly until yesterday, though I probably hadn't used it in about a week before that
     - My binhex-deluge container has no problems using the same VPN setup, and I have no problems with any of my other containers or with the host
     - I'm using Mullvad VPN
     - I've always used the Mullvad DNS servers, but have also tried 1.1.1.1,1.0.0.1
     - I've restarted the container at least 20 times in the course of troubleshooting
     - ifconfig inside the container shows I have an IPv4 address from Mullvad, which SABnzbd shows as the local IPv4 address
     - I can ping the Mullvad and Cloudflare DNS server IPs from inside the container
     - Running dig inside the container yields "connection timed out; no servers could be reached"
     Anyone have any thoughts on how I can fix this?
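     For completeness, these are the kinds of checks I ran inside the container (paraphrasing from memory; the container name is a guess and will vary by setup):

         docker exec -it binhex-sabnzbdvpn bash
         ping -c 3 1.1.1.1            # raw IP reachability over the tunnel - this worked
         dig @1.1.1.1 example.com     # DNS query against a specific server - this timed out

     The fact that pings to the DNS IPs succeed while dig times out makes me think port 53 traffic specifically is being dropped somewhere, rather than the tunnel itself being down.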
  10. This is probably a docker noob question, but I haven't run into this problem with other dockers yet. The container will start, but the log says: ERROR: Couldn't read "/opt/splunk/etc/splunk-launch.conf" -- maybe $SPLUNK_HOME or $SPLUNK_ETC is set wrong? I assume this is because I'm not mapping my volumes correctly. I've read through this thread and done some internet searching, but can't figure it out. Can anyone set me straight? Here are my volume mappings:
  11. I've got an i5-11400. At this point it seems like my one remaining problem is that all remote streaming, even with clients capable of direct streaming, is forced to transcode at 2 Mbps, which takes it down to SD. I assume they're being forced through the Plex relay, which caps at 2 Mbps. This did not happen on my HP Mini with an i5-8500T and a bare-metal install of Plex. Looks like I'm going to have to give myself a crash course on docker and Unraid networking to figure this out. I'm also going to get my HP server back online and confirm that it will still let remote clients direct stream.
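     My plan for confirming the relay theory is a simple reachability test from outside the LAN (placeholder IP; assumes Plex's default port 32400 is forwarded straight through):

         curl -sk https://203.0.113.10:32400/identity

     If that returns nothing, Plex can't negotiate a direct connection and, as I understand it, silently falls back to the 2 Mbps relay - which would explain the forced SD transcodes.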
  12. Just an update in case it helps anyone else, unchecking "Enable HDR tone mapping" in Settings > Transcoder appears to have fixed the problem for me. I tested with three simultaneous 1080p transcodes and they all did hardware transcoding with no problem. This is with /dev/dri added to the Plex container as a device.
  13. Interestingly, when I removed /dev/dri as a device and added it as a path I wasn't getting any hardware transcoding. Changing it back to a device re-enabled hardware transcoding.
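     My loose understanding of why this matters, for anyone else confused by it: adding /dev/dri as a device uses docker's --device flag, which both exposes the device nodes and grants the container's device cgroup permission to use them; a plain path mapping is just a bind mount (-v), so the files are visible but opening them is denied. Roughly:

         --device=/dev/dri            # nodes exposed + cgroup access granted: HW transcode works
         -v /dev/dri:/dev/dri         # bind mount only: nodes visible but not usable

     Happy to be corrected if I've got that wrong.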
  14. I forced transcoding on the Shield for testing. I definitely direct play on the LAN. I was testing with Edge Chromium on a Windows 10 laptop and an Android 11 phone with the Plex app. Both can direct play with no problem. I had started digging into the Plex logs, but wanted to make sure the problem wasn't related to adding /dev/dri as a device and not a path. Thank you very much for all your help! I'll work my remaining transcoding issue over on the Plex forums.
  15. Apparently I am not able to follow simple instructions very well this week. I removed the lines from my go file and rebooted. ls -la /dev/dri now looks good. Earlier you said to add /dev/dri as a path to the Plex container, but I haven't been able to find any specific details. I assume /dev/dri is the host path, but what should the container path be? I tried adding it as a device, as many other posts suggest, and it seems like I am getting hardware transcoding on my Shield, but on my phone app and laptop browser, when I force transcode I just get a spinning wheel after clicking play. P.S. Thanks for the reminder about the Bluetooth issue. I had it on my list of things to look at after I get this transcoding issue fixed, but I disabled it in the BIOS when I rebooted.
  16. I installed the Intel GPU TOP plugin. ls -la /dev/dri returns "cannot access '/dev/dri': No such file or directory." Diagnostics attached. galaga-diagnostics-20211209-1629.zip EDIT: Attached a picture with confirmation that the iGPU is enabled in the BIOS:
  17. I completed all the above steps and attached are my diagnostics after a reboot. Thank you for trying to help me. I don't have a dedicated GPU installed. The one brief time I had a /dev/dri folder I tried mounting it to the Plex container as a path. galaga-diagnostics-20211208-1456.zip
  18. Yes, the iGPU is enabled in the BIOS, set to GPU (and not PEG). I had gone through the thread you linked yesterday, but went through it again today. These are the steps I did:
     - Removed all text from i915.conf and removed all GPU-related lines from syslinux.conf
     - Uninstalled the Intel GPU TOP plugin
     - Rebooted
     - Reinstalled the Intel GPU TOP plugin
     - Rebooted - still no /dev/dri folder
     - Added "options i915 force_probe=4c8a" to i915.conf
     - Rebooted - still no /dev/dri folder
     - Added "i915.force_probe=4c8B" to syslinux.conf
     - Rebooted - now there was a /dev/dri folder, but ls -la /dev/dri/ gave me a message about no i915 being present (can't remember the exact message)
     So, this is where I am currently. Any other thoughts? It may be that I just need to sit on my hands until Unraid and/or Plex have releases that make this work out of the box.
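     For anyone following along, the two override mechanisms I've been trying look like this (my best guess at the right device ID for the UHD 730 in an i5-11400 is 4c8b, but I've seen both 4c8a and 4c8b quoted, so treat that as unverified; the file paths are the ones I understand Unraid to use):

         # /boot/config/modprobe.d/i915.conf - module option form
         options i915 force_probe=4c8b

         # added to the append line in /boot/syslinux/syslinux.cfg - kernel parameter form
         i915.force_probe=4c8b

     Both should do the same thing: tell the i915 driver to bind to a device ID it doesn't yet claim by default.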
  19. Sorry. Of course I forgot to include any details on my very first forum post.
     - Unraid v6.9.2
     - CPU: i5-11400
     - Mobo: MSI B560M Mortar WiFi
     I have read through so many posts on this that my brain is soup ... and I may well have missed a post that already has the solution I need.
  20. My current problem is that the Intel GPU Top plugin is not creating the /dev/dri directory. I'm new to Unraid and just built my first server. Loving it so far, but I can't get hardware transcoding in Plex. Last night I installed the Intel GPU Top plugin and I swear it created the /dev/dri directory; I remember being surprised it was that easy. But I still couldn't get hardware transcoding going. Today I realized I had missed the instructions for adding the /dev/dri path to the Plex container. After I added it and tried to start the container, it failed and the logs showed that there is no /dev/dri path. Sure enough, it's gone. I did reboot this morning before doing this, so I'm assuming that's when I lost it. I've tried uninstalling/reinstalling the plugin, but still no /dev/dri. I'm way out of my depth here, so I would appreciate any thoughts on what to try next.
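     For anyone else hitting this, these are the checks I've collected from other threads to confirm whether the i915 driver ever loads (plain console commands on the host):

         ls -la /dev/dri          # device nodes should appear here once the iGPU driver is up
         lsmod | grep i915        # confirms whether the i915 module is loaded at all
         dmesg | grep -i i915     # surfaces any driver errors from boot

     In my case /dev/dri simply isn't there.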