BurntOC

Members
  • Posts: 74
  • Joined
  • Last visited


BurntOC's Achievements

Rookie (2/14)

1 Reputation

  1. @Josh.5 I just stumbled across this and I'm looking forward to trying it out. I'm running the latest Unraid with an Nvidia RTX. Do I need to use the --runtime=nvidia and GPU UUID or not? Your note under deprecated seems to suggest not, but I see it referenced almost everywhere else so it's been confusing to me. TIA.
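For context, the settings referenced "almost everywhere else" are the commonly documented Unraid Nvidia-in-Docker template fields. A sketch of that setup (the UUID is a placeholder — yours comes from the Nvidia Driver plugin page or `nvidia-smi -L`):

```shell
# "Extra Parameters" field in the Unraid container template:
--runtime=nvidia

# Template environment variables (placeholder UUID):
NVIDIA_VISIBLE_DEVICES=GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
NVIDIA_DRIVER_CAPABILITIES=all
```

Whether a given container still needs all of these depends on its image and the deprecation note in question; when in doubt, follow the container author's current README.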
  2. I searched this thread and generally online for an answer to this, but I don't see it or I missed it. I've been running swag to front-end a couple of dozen containers for a year or so, and it has worked great. I tried adding another one today, and when I went to SSH in to modify the config file, I got an error that the target actively refused the connection. I've made no changes to my network, and I've restarted the container and even rebooted Unraid, but I'm still getting the same error. Any ideas on what I might be missing? NVM - needed more coffee. I remembered that I SSH into Unraid itself and go to the appdata from there, rather than SSH into the swag container's IP.
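For anyone who hits the same wall, the working flow amounts to SSH-ing into the Unraid host and editing swag's config under its appdata mapping. A sketch, assuming default paths and hostname (yours may differ):

```shell
# SSH to the Unraid host, not to the swag container's IP:
ssh root@tower.local

# swag's nginx config lives in its appdata mapping on the host:
cd /mnt/user/appdata/swag/nginx/proxy-confs
ls *.sample                 # bundled example confs for common apps

# After editing or copying a conf, restart the proxy to pick it up:
docker restart swag
```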
  3. As I posted this on a long holiday weekend and hadn't seen responses in about a week, I thought I'd try bumping it once to see if anybody has any ideas. Sent from my GM1917 using Tapatalk
  4. So I've spent probably 2-3 hours searching, looking through posts, and double-checking my container paths. While I do have 21 containers currently, I keep hearing that's fairly normal and that 15GB-20GB should work for most people. I'd gotten notifications it was filling up in the past, so I'd upped the size to 30GB, but a couple of days ago, while on a trip, I got a notice it was almost full again. I'm at 40GB now, but it sounds like this should be unnecessary. I'm hoping one of the experts here can spot something I'm obviously missing. I've attached a diagnostics.zip. Most things have been running since I expanded the size again, but I had stopped the Grafana, Influxdb, Telegraf, and Splunk containers for a bit, because they'd been the most recent additions and I was trying to make doubly sure I didn't run out of space while I was on the road. Just wanted to provide all the info in case any of it is relevant. unraid1-diagnostics-20210601-0617.zip
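When the docker image keeps growing like this, a couple of stock docker commands (run from the Unraid console) show where the space is going — a general diagnostic sketch, not tied to this particular diagnostics file:

```shell
# Summary of space used by images, containers, and volumes:
docker system df

# Per-container size, including each container's writable layer,
# which is where misdirected writes accumulate:
docker ps -s
```

A common culprit is a container path (downloads, transcode directory, database) that isn't mapped out to `/mnt/user/...` and therefore writes into the container's writable layer inside the docker image.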
  5. Very much appreciated. Sent from my GM1915 using Tapatalk
  6. Thank you, @doron. This may be the path I take. Sent from my GM1915 using Tapatalk
  7. So I've had a great experience with Unraid as my core system, running Windows as a VM for some of my gaming, but with some of these games blocking play on VMs, I'm considering running Windows 10 on ESXi and virtualizing Unraid. My system has an AMD Ryzen 5900X on an X570 motherboard, with a 3060 Ti GPU passed through to the VM and a 1660 being used - maybe unnecessarily - for things in Unraid like Plex transcoding. Most of the time, my CPU load with the one VM and about 20 containers is in the 4-11% range (when not gaming). I use the X570 SATA connectors for all my array drives, parity, and even the dedicated SSD I pass through to the VM. I have an LSI SAS card with SAS-to-SATA cables, but I'd rather not use it if I can keep using the X570 SATA connections - I don't really have a slot free, and I'd have to give up my quad NIC or the 1660 to make room. What I've read led me to believe I need to use a PCI controller card for things like spindown, SMART, etc. to work. I believe that's because the controller has to be actually passed through, which would make it unavailable to the Windows host. Is that correct?
  8. Thank you very much, @IFireflyl. This is exactly what I needed to know.
  9. So I read the OP and several pages of this thread, including the latest posts, but I don't see any posts speaking to an issue of concern for me. I don't have my server accessible except via VPN tunnel, and that's how I like it. As with my Unifi gear, I don't want to enable remote access, because I don't trust that a compromise on the vendor side won't facilitate access to my equipment. Am I correct in understanding that you don't have to enable remote access mode? If so, and if online flash backup is still supported in that scenario, I take it there isn't any major security risk in what's on the flash drive (IIRC it's primarily boot and array config info). Thanks for any insights here. I was using the Appdata backup plugin to handle my flash drive as well, but it looks like that's no longer supported, so it's forcing me to look at other options, unfortunately.
  10. I tried following the steps posted elsewhere to rename the repository and use the fix plugin to fix the XML file. I can no longer access my container login page via https. If I try via http, I get the error page someone posted about it only being accessible via https, even though I have a trusted cert available - it just isn't listening yet. If I go to the /admin URL, I get the same thing you're showing here, and I have no idea where to get that authentication key from. Any help would be appreciated. This was working just fine until I tried to "fix" it to the new repository. EDIT - figured it out. Primarily, I needed to update my reverse proxy to account for the new app name; that fixed it. As for the "authentication key", I remembered it was in the container settings, and I have it written down in case I ever need it. I think I'm good to go now.
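The reverse-proxy fix described in the EDIT amounts to pointing the proxy conf at the renamed container. In a swag/nginx proxy conf, that change looks roughly like this (app names here are hypothetical placeholders, not the actual container in question):

```shell
# In the app's proxy conf under proxy-confs/, update the upstream name.
# Old line:
#   set $upstream_app myapp;
# New line, matching the renamed container:
#   set $upstream_app myapp-renamed;

# Then restart the proxy so nginx re-resolves the container name:
docker restart swag
```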
  11. Thank you for the prompt reply, and please excuse my delayed follow-up. I guess it doesn't auto-follow topics I create, so I just saw your message. I'll dive into that and take a look. I didn't spot the macvlan message in my diags, but I must've missed it.
  12. Running the latest version of Unraid on a pretty new 5900X/X570 build. Per the title, the system becomes unstable after a while, and this has been going on for some time. I usually realize it when I'm trying to access one of my containerized apps from another computer and it doesn't respond. When I go to Unraid, the dashboard will usually come up after a much longer than usual wait, but none of my containers show on the Dashboard and the Docker tab is missing. I can SSH in, but attempting to restart or shut down via the console, dashboard, or SSH commands never gets it done, and I have to hard power off, which is an unclean shutdown. This last time it finally caused some parity errors I'm having to correct, so it's time to escalate to the smart people. I've attached a diagnostics file I pulled this last time while the system was in this "Docker hung, refusing to shut down" state. Hope someone can help me, and that this in turn can identify whatever problem is going on. unraid1-diagnostics-20210424-1336.zip
  13. I've read through the Video Guide post on this, but I know the method changed a bit in 6.9 vs. S1's video, and I'm going to be using the rebuild-dndc container as well (immaterial to my core question here, but it might help someone in the thread piggyback in the future), so I thought I'd create a new post. I'm trying to route some -arr and sabnzbd containers through a VPN container, and I think I only have one question left. FWIW, while I'm a huge fan of binhex, I'm testing with hotio's qbittorrent-vpn container because it doesn't require privileged mode for wireguard and it doesn't reset my wg0.conf permissions from 600 to 755 on every start like the others I've tested. I'm clear on creating the container network and on identifying the master container in rebuild-dndc. I understand the core part about removing a port like 8989 from my Sonarr container and adding it to the VPN container, but here's what I don't understand: what do I do about running multiple containers through it if more than one uses 8080, for example, for its web GUI? How will I add them and still access each app's GUI? As a note, qbittorrent-vpn itself uses 8080 as well - does the master container using the port complicate things any more? Really excited to try this.
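The general shape of this setup, sketched as plain docker commands (container names, images, and ports are illustrative, not exact Unraid template settings): every container that joins the VPN container's network namespace publishes its ports on the VPN container, not on itself.

```shell
# The VPN/master container publishes its own webui port AND the ports of
# every container that will route through it:
docker run -d --name qbittorrent-vpn \
  -p 8080:8080 \
  -p 8989:8989 \
  ghcr.io/hotio/qbittorrent

# A joined container gets no -p flags of its own; it shares the VPN
# container's entire network stack:
docker run -d --name sonarr --network=container:qbittorrent-vpn ghcr.io/hotio/sonarr
```

Because they share one network namespace, two apps cannot both listen on 8080 internally; the usual approach is to change one app's listening port in its own settings and publish that new port on the VPN container instead.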
  14. Is this not a security concern? Nothing to be concerned about?
  15. NOTE: I posted this in the binhex-qbittorrent thread as well. This one gets more traffic, but if I get an answer in either, I'll post the update in both for others who may run across the issue in the future. I finally had the chance to set up this and the binhex-qbittorrent containers for evaluation, as they're the last major containers I wanted that I'd not gotten around to yet. Most everything looks good, but whenever I launch either container I get this in the logs: Warning: `/config/wireguard/wg0.conf' is world accessible I've seen some people include that in their log captures here, but I've not found the resolution. I thought that deleting perms.txt and restarting the containers would address it, but the behavior is the same. Whether I delete perms.txt or leave it be, it changes the 600 permissions I'd set on the file manually back to 755. Can someone help me resolve this?
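For anyone debugging the same warning: it is wireguard's standard complaint about a config file whose mode is looser than owner-only, so the thing to verify is the actual mode the file ends up with after the container starts. A minimal demonstration using a scratch file as a stand-in for wg0.conf (the real file would live under the container's appdata mapping):

```shell
# Create a scratch file standing in for wg0.conf:
CONF=$(mktemp)

# 755 is what the container init resets the file to - world-readable,
# which is exactly what triggers the "world accessible" warning:
chmod 755 "$CONF"
stat -c '%a' "$CONF"    # prints 755

# 600 (owner read/write only) is what wireguard wants:
chmod 600 "$CONF"
stat -c '%a' "$CONF"    # prints 600

rm -f "$CONF"
```

If the container's own startup keeps resetting the mode, re-running chmod from the host only helps until the next restart, so the fix ultimately has to come from the container's init behavior or its documented settings.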