BurntOC

Everything posted by BurntOC

  1. @Josh.5 I just stumbled across this and I'm looking forward to trying it out. I'm running the latest Unraid with an Nvidia RTX. Do I need to use the --runtime=nvidia and GPU UUID or not? Your note under deprecated seems to suggest not, but I see it referenced almost everywhere else so it's been confusing to me. TIA.
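     (For anyone finding this later, here's roughly what I mean by those two settings, expressed as a plain docker run rather than the Unraid template - the image name and UUID below are placeholders, not anything from my setup.)
        # placeholder image and UUID only - the UUID would come from nvidia-smi -L
        docker run -d --name=example-app \
          --runtime=nvidia \
          -e NVIDIA_VISIBLE_DEVICES='GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx' \
          -e NVIDIA_DRIVER_CAPABILITIES=all \
          some/image:latest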
  2. I searched this thread and generally online for an answer to this, but I don't see it or I missed it. I've been running swag to front end a couple of dozen containers for a year or so and it has worked great. I tried adding another one today and I went to ssh into it to modify the config file and I'm getting an error that the target actively refused it. I've made no changes to my network, and I've restarted the container and even rebooted Unraid but I'm still getting the same error. Any ideas on what I might be missing? NVM - Needed more coffee. I remembered I ssh into Unraid and then go to the appdata from there rather than ssh into the swag container IP.
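     (In case anyone else has the same brain-fog moment, this is roughly the workflow I meant - the proxy-conf filename is just an example, and the path assumes a default swag appdata location.)
        ssh root@<unraid-ip>
        cd /mnt/user/appdata/swag/nginx/proxy-confs
        cp someapp.subdomain.conf.sample someapp.subdomain.conf   # filename is only an example
        nano someapp.subdomain.conf
        docker restart swag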
  3. As I posted this on a long holiday weekend and hadn't seen responses in about a week, I thought I'd try bumping it once to see if anybody has any ideas. Sent from my GM1917 using Tapatalk
  4. So I've spent probably 2-3 hours searching, looking through posts, and double-checking my container paths. While I do have 21 containers currently, I keep hearing that's fairly normal and 15GB-20GB should work for most people. I got notifications it was filling up in the past so I'd upped the size to 30GB, but a couple of days ago while on a trip I got notice it was almost full again. I'm at 40GB but it sounds like this should be unnecessary. I'm hoping one of the experts here can spot something I'm obviously missing. I've attached a diagnostics.zip. Most things have been running since I expanded the size again, but I had stopped the Grafana, Influxdb, Telegraf, and Splunk containers for a bit because they'd been the most recent additions and I was trying to make doubly sure I didn't run out of space while I was on the road. Just wanted to provide all the info in case any of it is relevant. unraid1-diagnostics-20210601-0617.zip
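     (If it helps anyone looking at the diags, this is the kind of thing I can run from the Unraid console to narrow down what's eating the image - plain docker commands, nothing Unraid-specific.)
        docker system df -v   # space used per image, container, and volume
        docker ps -s          # running containers with the size of their writable layers
        # a large SIZE value usually means an app is writing inside the docker image
        # instead of to a mapped /mnt/user/appdata path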
  5. Very much appreciated. Sent from my GM1915 using Tapatalk
  6. Thank you, @doron. This may be the path I take. Sent from my GM1915 using Tapatalk
  7. So I've had a great experience with Unraid as my core system running Windows as a VM for some of my gaming, but with some of these games blocking running in a VM I'm considering running Windows 10 with ESXi and virtualizing Unraid. My system has an AMD Ryzen 5900x and an X570 motherboard, with a 3060 Ti GPU passed through to the VM and a 1660 being used - maybe unnecessarily - for things in Unraid like Plex transcoding. Most of the time my CPU load with the one VM and about 20 containers is in the 4-11% range (when not gaming). I use the X570 SATA connectors for all my array drives, parity, and even the dedicated SSD I pass through to the VM. I have an LSI SAS card with SAS-to-SATA cables, but I'd rather not use it if I can use the X570 SATA connections. I don't really have access to a slot, and I'd have to give up my quad NIC or the 1660 to make room. What I've read led me to believe I need to use a PCIe controller card for things like spindown, SMART, etc. to work. I believe that's because it has to be actually passed through, which would make it unavailable to the Windows host. Is that correct?
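     (For reference, this is how I'd look at the onboard controller from the current bare-metal Unraid install before deciding anything - standard commands only, and they say nothing about how ESXi itself would handle the passthrough.)
        lspci -nn | grep -i sata          # identify the X570 SATA controller and its IDs
        # list IOMMU groups, relevant if the controller ever has to be passed through whole
        for g in /sys/kernel/iommu_groups/*; do
          echo "IOMMU group ${g##*/}:"
          for d in "$g"/devices/*; do
            lspci -nns "${d##*/}"
          done
        done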
  8. Thank you very much, @IFireflyl. This is exactly what I needed to know.
  9. So I read the OP and several pages of this thread, including the latest posts, but I don't see any posts speaking to an issue of concern for me. I don't have my server accessible except via VPN tunnel, and that's how I like it. As with my Unifi gear, I don't want to enable remote access because I don't trust that a compromise on the vendor side won't facilitate access to my equipment. Am I correct in understanding that you don't have to enable remote access mode? If so, and if online flash backup is still supported in that scenario, I take it there isn't any major security risk in what's on the flash drive (IIRC it's primarily boot and array config info). Thanks for any insights here. I was using the Appdata Backup plugin to handle my flash drive as well, but it looks like that's no longer supported, so it's forcing me to look at other options, unfortunately.
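     (One of the "other options" I'm looking at is just a scheduled local copy of the flash drive - a rough sketch below; the destination share is only an example.)
        # run on a schedule via the User Scripts plugin; destination path is an example
        BACKUP_DIR="/mnt/user/_backups_unraid1/flash/$(date +%Y%m%d)"
        mkdir -p "$BACKUP_DIR"
        rsync -a --delete /boot/ "$BACKUP_DIR/"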
  10. I tried following the steps posted elsewhere to rename the repository and use the fix plugin to fix the XML file. I can no longer access my container login page via https. If I try via http I get the error page someone posted about it only being accessible via https, even though I have a trusted cert available. It just isn't listening yet. If I go to the /admin URL then I get the same thing you're showing here, and I have no idea where to get that authentication key from. Any help would be appreciated. This was working just fine until I tried to "fix" it to the new repository. EDIT - figured it out. Primarily it was because I needed to update my reverse proxy to account for the new app name; that fixed it. With respect to the "authentication key", I remembered it was in the container settings, and I have it written down in the event I needed it anyway. I think I'm good to go now.
  11. Thank you for the prompt reply, and please excuse my delayed follow up. I guess it doesn't auto-follow topics I create so I just saw your message. I'll dive into that to take a look. I didn't see the part in my diags with a macvlan msg, but I must've missed it.
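     (For my own notes, the quick way to check for it next time rather than digging through the full diags - just standard commands from the Unraid console.)
        grep -i macvlan /var/log/syslog    # look for the macvlan messages in the live syslog
        dmesg | grep -iA5 macvlan          # and for any kernel call traces mentioning it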
  12. Running the latest version of Unraid on a pretty new 5900x X570 build. Per the title, the system becomes unstable after a while. This has been going on for some time. I usually realize it when I'm trying to access one of my containerized apps from another computer and it doesn't respond. When I go to Unraid, the dashboard will usually come up eventually after a much longer than usual wait, but none of my containers show on the Dashboard and the Docker tab is missing. I can SSH in, but attempting to restart or shut down via the console, dashboard, or SSH commands never gets it done, and I have to hard power off, which is an unclean shutdown. This last time it finally caused some parity errors I'm having to correct, so it's time to escalate to the smart people. I've attached a diagnostics file I pulled this last time while the system was in this "Docker hung, refusing to shut down" state. Hope someone can help me, and that this in turn can identify whatever problem is going on. unraid1-diagnostics-20210424-1336.zip
  13. I've read through the Video Guide post on this, but I know that the method changed a bit in 6.9 vs S1's video, and I'm going to be using the rebuild-dndc container as well (though that's immaterial to my core question here, it might help someone in the thread piggyback in the future), so I thought I'd create a new post. I'm trying to route some -arr and sabnzbd containers through a VPN container, and I think I only have one question left. FWIW, while I'm a huge fan of binhex, I'm testing with hotio's qbittorrent-vpn container because it doesn't require privileged mode for wireguard and it doesn't reset my wg0.conf permissions from 600 to 755 on every start like the others I've tested. I'm clear on creating the container network and on identifying the master container in rebuild-dndc. I understand the core part about removing a port like 8989 from my Sonarr container and adding it to the VPN container, but here's what I don't understand: what do I do about running multiple containers through it if more than one of them uses 8080, for example, for their web GUI access? How will I add them and still access each app's GUI? As a note, the qbittorrent-vpn container itself uses 8080 as well - does the master container using that port complicate things any more? Really excited to try this.
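     (To show what I mean in plain docker terms - my understanding of what the template ends up doing - with the container names, images, and ports below being examples rather than anything definitive.)
        # the VPN container owns the network namespace, so it publishes every
        # web UI port the routed apps will use
        docker run -d --name=qbittorrent-vpn -p 8080:8080 -p 8989:8989 -p 8082:8082 <vpn-image>
        # routed apps attach to that namespace and publish no ports of their own
        docker run -d --name=sonarr --net=container:qbittorrent-vpn lscr.io/linuxserver/sonarr
        # two apps that both default to 8080 can't share one namespace; one of them
        # has to move its web UI (in its own app settings) to something like 8082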
  14. Is this not a security concern? Nothing to be concerned about?
  15. NOTE: I posted this in the binhex-qbittorrent thread as well. This one gets more traffic, but if I get to an answer in either I'll post the update in both for others who may run across the issue in the future. I finally had the chance to set up this and the binhex-qbittorrent containers for evaluation, as they're the last major containers I wanted that I hadn't gotten around to yet. Most everything looks good, but whenever I launch either container I get this in the logs: Warning: `/config/wireguard/wg0.conf' is world accessible I've seen some people include that in their log captures here, but I've not found the resolution. I thought that deleting perms.txt and restarting the containers would address it, but the behavior is the same. Whether I delete perms.txt or leave it as is, it changes the 600 permissions I'd set on the file manually back to 755. Can someone help me resolve this?
  16. I finally had the chance to set up this and the binhex-delugevpn containers for evaluation, as they're the last major containers I wanted that I hadn't gotten around to yet. Most everything looks good, but whenever I launch either container I get this in the logs: Warning: `/config/wireguard/wg0.conf' is world accessible I've seen some people include that in their log captures here, but I've not found the resolution. I thought that deleting perms.txt and restarting the containers would address it, but the behavior is the same. Whether I delete perms.txt or leave it as is, it changes the 600 permissions I'd set on the file manually back to 755. Can someone help me resolve this?
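     (For anyone else who hits this: the kind of stopgap I'm considering while I wait for a real answer - just a sketch, and the appdata path below is an example that may not match your container name.)
        # example path for a binhex-qbittorrentvpn container; adjust to your appdata layout
        WG_CONF="/mnt/user/appdata/binhex-qbittorrentvpn/wireguard/wg0.conf"
        chmod 600 "$WG_CONF"
        ls -l "$WG_CONF"   # should now show -rw-------
     Of course, if the container flips it back to 755 on every start, this only helps between restarts, which is really what I'm asking about.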
  17. I couldn't leave well enough alone, and I tried to install VM Backup from CA over the version I'd installed per my posts a few days ago. It failed, as the poster above indicated, with various issues, from the one he/she described to an md5 check on a file it pulled. AND I started seeing the problem you're describing above. Unfortunately, if I delete the plugin out of /config/plugins it still won't let me go back to my previous version either. I'm going to try to delete the vmbackup folder out of plugins and just leave it be until @JTok has some time to put that new version out. Too many hiccups at the moment.
  18. Thanks for this. I know this is a weird issue, but if I remove the VirtIO Drivers ISO from my VM then it starts crashing within a minute - two at the most. Over and over. If I put that back, it works fine. I know it shouldn't do that, but it does, and it doesn't matter whether I store the ISO on the array or on a UA drive; the VM Backup script gives this error and fails if that ISO is in the settings: /tmp/vmbackup/scripts/default/user-script.sh: line 424: vdisk_types["$vdisk_path"]: bad array subscript Would it be difficult to add a toggle to ignore those, or make it do that by default, as most people wouldn't want those included anyway?
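     (To illustrate the toggle idea: I assume that message comes from the script indexing an associative array with a key it can't use, which is what a cdrom/ISO entry seems to produce, so a guard roughly like the sketch below - purely hypothetical, not the plugin's actual code - is what I'm imagining for skipping ISO entries instead of dying.)
        #!/bin/bash
        # hypothetical sketch only, not the plugin's actual code
        declare -A vdisk_types
        vdisk_paths=("/mnt/user/domains/GamingPC/vdisk1.img" "" "/mnt/user/isos/virtio-win.iso")
        for vdisk_path in "${vdisk_paths[@]}"; do
          # skip empty entries and ISO images (like the VirtIO drivers disc) rather than
          # letting them hit the associative-array lookup and kill the backup
          [[ -z "$vdisk_path" || "$vdisk_path" == *.iso ]] && continue
          vdisk_types["$vdisk_path"]="raw"
        done
        printf 'would back up: %s\n' "${!vdisk_types[@]}"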
  19. Per the title, basically. I’m running 6.9 RC2 and it’s been rock-solid. I have about 15 containers and 1 VM that I use for gaming. The VM has been solid as well, except for one thing. I tried removing the VirtIO drivers from settings because the VM Backup plugin was erroring on it for some reason, but when I do that the VM typically crashes within 45 seconds to 2 minutes of me logging in. If I add them again then it is stable again. I thought that was just for mounting the VirtIO drivers as a CD drive for the initial drivers install and after that was done it could be “ejected” or removed. Is this a likely bug, or does someone here know why it would behave this way?
  20. Last update of the day. I found others with this issue in the thread, and it led me to believe it was dying on an ISO I still had mounted and/or the SSD I pass through directly by-id. I wanted to keep the latter if possible, so I removed the VirtIO Drivers ISO and ran it again, and the backup appears to have completed just fine. Haven't tried a restore, but it looks really promising. EDIT - adding that a normal backup of this VM was about 107 GB in size, but with zstandard it is 39 GB. That sounds a little too small, but there are no errors thrown. EDIT2 - welp, now the VM BSODs a lot. Re-adding the VirtIO Drivers ISO in the VM settings seems to have stabilized it, but the VM Backup script is failing again with the error above.
  21. I just had a chance to config and attempt a backup. I'd had this working in the past, though I'm pretty sure I'd tweaked a few things because of open issues mentioned in the OP and throughout the thread. FWIW, my initial backup attempt with the latest version and basically stock mostly works (at least there are few errors), but this single error seems to be bypassing the actual backup: 2021-03-01 10:12:49 information: /mnt/user/_backups_unraid1/VMs/GamingPC exists. continuing. /tmp/vmbackup/scripts/default/user-script.sh: line 424: vdisk_types["$vdisk_path"]: bad array subscript
  22. @Roman Melnik @CS01-HS Okay, I got the new version installed on 6.9 RC2 and I expanded my comfort zone a bit to boot. Here's how I did it - anyone following these steps is doing so at their own risk.
      Step 1 - Went to the OP's beta GitHub repo for this here.
      Step 2 - Forked a copy for me to edit.
      Step 3 - Edited the vmbackup.plg file to change the max to 6.13 (in my case arbitrary, just to get me through the next several revs; hopefully official support will resume).
      Step 4 - Downloaded the zip of the repository files, extracted the zip, and copied the files to /boot/temp.
      Step 5 - Used Install Plugins to browse to /boot/temp and selected the vmbackup.plg file.
      Step 6 - Done. VM Backup is back in my Settings area.
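     (Roughly the same steps from the console, for anyone who prefers the command line - everything in angle brackets is a placeholder for your own fork, and the grep is just to find the version ceiling since I don't remember the exact entity name.)
        # placeholders only - substitute your own fork URL and branch
        mkdir -p /boot/temp && cd /boot/temp
        wget -O vmbackup.zip "https://github.com/<your-fork>/<repo>/archive/refs/heads/<branch>.zip"
        unzip vmbackup.zip
        find . -name "*.plg"                 # locate the .plg inside the extracted folder
        grep -in "max" ./*/vmbackup.plg      # name of the ceiling is from memory - verify before editing
        # bump that value by hand (I used 6.13), save, then Plugins > Install Plugin and
        # browse to the edited vmbackup.plg under /boot/temp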
  23. Ah, I should've been more patient, probably. Thankfully it's just a single VM I don't mess with much except for gaming, LOL. Sent from my GM1915 using Tapatalk
  24. I probably screwed up the process somehow, but using the link wouldn't complete because of a URL error somewhere in the script. I foolishly uninstalled my existing version, and I tried the v21 and v22 file URLs but couldn't get those to work either. I will probably try downloading them to install locally b/c I want to get it back. If that fails, can the dev post the steps needed to bypass that check for those of us who are willing to take the risk until a tested version is available?
  25. So I got an initial response on Reddit that seems to make a lot of sense. In a nutshell, the response was: yes, with a 5900x I could probably power through, but there may be a more elegant way. This would involve:
      1. Swap the 1660 to slot 1 and the 3060 Ti to slot 2, to address the POST issue and because the 1660 will be "always available".
      2. Install the Nvidia driver plugin (which I'd not done, as I thought it would conflict with the VM GPU passthrough for some reason).
      3. Reconfigure the VM as needed to address any bus/slot changes.
      4. Configure containers to leverage the Nvidia driver builds where available (rough sketch of grabbing the GPU UUID below).
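     (For step 4, this is how I'd expect to grab the 1660's UUID once the driver plugin is in - the output line is from memory, so treat it as approximate.)
        nvidia-smi -L
        # GPU 0: GeForce GTX 1660 (UUID: GPU-xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx)
        # that UUID goes in each container's NVIDIA_VISIBLE_DEVICES variable, next to
        # --runtime=nvidia in the template's Extra Parameters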