shooga

Members
  • Posts: 184

  1. Thanks for the response. I'm getting a ton of:

     May 30 09:36:21 Bunker move: move: file <filename>
     May 30 09:36:21 Bunker move: move_object: <filename> File exists

     And it's actually happening for other shares that I wasn't trying to relocate to the other cache. Strangely, when I look at the disks directly (on the array), I don't see all of the files that it's referring to. I've seen other threads where the suggestion is to delete the files on the array and trigger mover again, but in this case I don't see the files to delete; in most cases, I can't find the conflict at all.

     Another note (probably just an unrelated bug): the GUI displays the wrong cache name on the Shares tab (I changed the name of the primary cache).
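A side note on the "File exists" errors above: mover refuses to overwrite, so this usually means the same relative path already exists on both the source pool and the destination. A minimal sketch of finding such collisions, run here against throwaway directories standing in for the real /mnt/cache/<share> and /mnt/disk*/<share> paths (all names are illustrative):

```shell
# Simulate a cache pool and an array disk that share one path.
tmp=$(mktemp -d)
mkdir -p "$tmp/cache/media" "$tmp/disk1/media"
echo a > "$tmp/cache/media/movie.mkv"
echo b > "$tmp/disk1/media/movie.mkv"   # same relative path: a mover collision
echo c > "$tmp/cache/media/show.mkv"    # only on cache: moves cleanly

# List relative paths present in BOTH trees -- the files mover
# would report with "File exists".
dupes=$(comm -12 \
    <(cd "$tmp/cache/media" && find . -type f | sort) \
    <(cd "$tmp/disk1/media" && find . -type f | sort))
echo "$dupes"

rm -rf "$tmp"
```

On a real server the same comparison would be repeated for each /mnt/diskN/<share> against the cache copy.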
  2. I'm trying to do basically the same thing (move a share from one cache pool to another), so I thought I'd try this thread rather than starting a new one. I have tried the suggestions above:

     • Change the share from 'prefer cache' to 'yes'
     • Disable Docker and VMs in settings
     • Trigger mover
     • Change the share back to 'prefer cache' with the new cache
     • Trigger mover
     • Re-enable Docker and VMs

     However, my share is left unchanged, with all of the files remaining on the old cache; they never even get moved to the array. I'm not aware of anything holding the files open. What could be going wrong? Thanks!
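When mover silently leaves files behind, something often still has them open (a container, VM, or open shell). One way to spot open files without extra tools is to scan /proc directly. A minimal, self-contained sketch using a throwaway directory in place of the real /mnt/cache/<share> (paths are illustrative; Linux-only):

```shell
tmp=$(mktemp -d)
touch "$tmp/held.bin"

# Hold the file open on fd 3, the way a running container or VM might.
exec 3< "$tmp/held.bin"

# Every open fd appears as a symlink under /proc/<pid>/fd; list the
# ones that point into the directory we care about.
open_files=$(find /proc/[0-9]*/fd -lname "$tmp/*" 2>/dev/null \
             | xargs -r readlink 2>/dev/null | sort -u)
echo "$open_files"

exec 3<&-      # release the file
rm -rf "$tmp"
```

Any path printed here is a file mover will skip until the process holding it lets go.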
  3. Thanks @bigmak for the response. I had added :443 while trying different things I found in my research; it didn't work and I've removed it now. Turns out I didn't need to add a location for esphome specifically (/a0d7b954_esphome), but needed to add the /api/hassio_ingress location. Saw that in your config and thought it was worth a try. That fixed it! Now it works for esphome and vscode. Thanks again!

     Just to be clear for anyone else looking for help, this is the section that I needed to add. Maybe it's in the latest config sample with the container, but it wasn't in mine:

     location /api/hassio_ingress {
         resolver 127.0.0.11 valid=30s;
         set $upstream_app 192.168.1.205;
         set $upstream_port 8123;
         set $upstream_proto http;
         proxy_pass $upstream_proto://$upstream_app:$upstream_port;

         proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
         proxy_set_header Host $host;
         proxy_http_version 1.1;
         proxy_set_header Upgrade $http_upgrade;
         proxy_set_header Connection "upgrade";
     }
  4. I'm using this container successfully as a proxy for several other containers and also for a VM running Home Assistant. I modified the included config so that it would work with the VM, and it seems fine except for websockets in some of the hassio add-ons. Websockets work fine via the local IP address, but not via the proxy.

     Is there a reason that I can't simply add the necessary websocket config lines to the / location? That seems to kill the whole thing. As it is, I have tried to add another location for the base URL of the add-on that I'm trying to enable websockets for (esphome here, but I've also tried vscode). It's not working, and I believe it's most likely because I'm not configuring the proxy correctly. Proxy config is below. Any help would be greatly appreciated!

     server {
         listen 443 ssl;
         listen [::]:443 ssl;
         server_name homeassistant.*;
         include /config/nginx/ssl.conf;
         client_max_body_size 0;

         location / {
             include /config/nginx/proxy.conf;
             resolver 127.0.0.11 valid=30s;
             set $upstream_app 192.168.1.205;
             set $upstream_port 8123;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;
         }

         location /api/websocket {
             resolver 127.0.0.11 valid=30s;
             set $upstream_app 192.168.1.205;
             set $upstream_port 8123;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;

             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header Host $host;
             proxy_http_version 1.1;
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection "upgrade";
         }

         location /a0d7b954_esphome/ {
             resolver 127.0.0.11 valid=30s;
             set $upstream_app 192.168.1.205;
             set $upstream_port 8123;
             set $upstream_proto http;
             proxy_pass $upstream_proto://$upstream_app:$upstream_port;

             proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
             proxy_set_header Host $host:443;
             proxy_http_version 1.1;
             proxy_set_header Upgrade $http_upgrade;
             proxy_set_header Connection "upgrade";
         }
     }
  5. Thanks for all of the work on this plugin. My question is about HW decoding, which I understand isn't strictly about the plugin, but hopefully it's ok to ask here. This is my first update since adding a P2000 to my server. I expected to need to re-enable HW decoding after the update (manually or via a script), but when I run 'nvidia-smi dmon -s u' it looks like the decoder is still in use. So now I'm confused. Am I missing something?
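For anyone double-checking the same thing: in 'nvidia-smi dmon -s u' output the "dec" column is NVDEC utilization, so sustained nonzero values there mean hardware decoding really is active. A small sketch of reading that column, run against a mocked sample since the real command needs an NVIDIA GPU (the gpu/sm/mem/enc/dec column layout is an assumption here):

```shell
# Mocked `nvidia-smi dmon -s u` output; the real command streams live
# samples (use `-c 5` to stop after five).
sample='# gpu    sm   mem   enc   dec
# Idx     %     %     %     %
    0     4     2     0    23
    0     5     2     0    25
    0     4     2     0    22'

# Average the "dec" (NVDEC) column; anything above 0 means the
# hardware decoder is in use.
avg=$(echo "$sample" | awk '!/^#/ { sum += $5; n++ } END { printf "%.1f", sum/n }')
echo "avg dec%: $avg"
```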
  6. PM with shipping cost sent. Please let me know if you're interested.
  7. I thought it was sold too, but the buyer went radio silent on me. It's still available. I'll PM you the shipping cost in a few minutes.
  8. I haven't gotten around to parting this out yet. Anyone interested at $250?
  9. I did something very similar to my own BPN-DE350SS cages - even used the same fans. The only difference is that I used a flat sheet of foam that I cut holes in. Got this at a local hardware store and it worked great. I also saw great improvements in temps and noise levels (these cages were pretty terrible stock). Pics are here: https://photos.app.goo.gl/gpkAiMNTGXSQds4x6 Incidentally, mine are for sale if anyone wants them.
  10. Ok. Quick update. I was trying to assign a reserved IPv6 address to my server (in pfsense) and may have done that wrong. After removing the reservation and using DHCP, I now see IPv6 showing up in the docker settings. But my containers are still not getting addresses assigned. Feels like progress, but maybe not. The subnet for Unraid still shows as /128, but now br0 is showing /64.
  11. Follow-up question here as I'm digging into this. I'd prefer to stick with automatic IPv6 settings on Unraid. Should the subnet that is passed to Unraid be configured on my router? I'm using pfsense and it looks like it's a /64 mask everywhere I can find it. Totally possible I'm missing something there, so if it's really a pfsense question then I'll ask on their forums.
  12. Ah, ok. That makes sense. Does that setting reference the subnet that I've set up via my router (which is /64)? Looks like I need to switch to a static address and manual settings to change the subnet. I'll have to do that later when I'm home and have direct access to everything; I don't want things to go wrong while I'm remote. Thanks again for your help!
  13. Hmm. How do I expose those settings? Here's all I have, even with Docker stopped.
  14. Ok, thanks for the response. Here are the two settings pages. I've blocked out my IPv6 address for the server in case it's publicly addressable (I'm still pretty new to IPv6).
  15. I have Unraid configured for IPv6 in the network settings, and that seems to be working fine. However, none of my docker containers are given IPv6 addresses, and they show no IPv6Gateway when using docker inspect. It doesn't seem to matter whether the container is using host, bridge, or br0 for its network. What could I be doing wrong?
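As a general note on this question, the per-container IPv6 state can be pulled out of docker inspect with a Go template, and empty values confirm no address was assigned. The template fields below are the standard inspect field names; the container name and the mocked output are illustrative, since docker itself may not be available:

```shell
# On the server, the real check would be (container name hypothetical):
#   docker inspect -f '{{range .NetworkSettings.Networks}}{{.GlobalIPv6Address}} / {{.IPv6Gateway}}{{end}}' my-container
# Empty output means the container received no IPv6 address.

# Mocked fragment of `docker inspect` JSON for illustration:
inspect='"GlobalIPv6Address": "",
"GlobalIPv6PrefixLen": 0,
"IPv6Gateway": ""'

# Count IPv6 fields that are empty or zero; 3 of 3 means IPv6
# networking is not reaching this container.
missing=$(echo "$inspect" | grep -cE '": ""|: 0')
echo "$missing"
```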