kennelm

Everything posted by kennelm

  1. Is the issue here the difference between dockerhub and github? Not everything in github is in dockerhub, correct?
  2. I'm an idiot. Somehow, my vdisk migrated from the cache drive to the array. I moved it back and that did the trick. Squid, to your point about the protected array being 4x slower, I was seeing two orders of magnitude slower progress with inserts/updates to the database. Sent from my SM-G975U using Tapatalk
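A quick way to confirm where a vdisk actually lives, since /mnt/user hides the underlying device, is a check along these lines (a minimal sketch; the domain folder and image name are placeholders):

```bash
# Show which backing device holds the image: the cache pool vs. an array disk.
# "Ubuntu/vdisk1.img" is a placeholder for the real VM folder and file name.
ls -lh /mnt/cache/domains/Ubuntu/vdisk1.img /mnt/disk*/domains/Ubuntu/vdisk1.img 2>/dev/null
```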
  3. Yes, I should have been more clear that I've been experimenting, moving shares back and forth from cache to the array (and forcing the mover to run after making changes). Am I wrong to believe the mover will handle the shift from one place to the other based on the cache settings for the share? I thought I observed that behavior when I changed the settings and forced the mover to run. Sounds like you are pretty certain that the slow performance is due to NOT being on the cache drive? I wondered if there are other settings I need to look into. Larry
  4. Hello, I have terrible MariaDB insert/update performance on an unRaid VM running Ubuntu 20.04 LTS. The virtual image is stored at /mnt/user/domains/, which I've tried both on the cache disk (btrfs) and within the array (xfs). Is there something obvious I should be looking for? As a test, I installed the MariaDB docker image with its data directory on spinning disks (/mnt/user/appdata). Tests reveal this database is much faster than the same database on the VM. In both cases, the same unRaid server (i7-8700K, 32GB RAM) is the host machine. Thoughts appreciated. Larry tower-diagnostics-20220618-1155.zip
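For comparing the VM against the Docker instance, a rough benchmark like the following can help isolate storage latency. It is only a sketch: the database name, table, and credentials are placeholders, and it assumes the sequence engine MariaDB ships by default (the seq_1_to_N virtual tables).

```bash
# Create a throwaway table, then time 10,000 inserts committed as one transaction.
mysql -u root -p -e "
  CREATE DATABASE IF NOT EXISTS bench;
  CREATE TABLE IF NOT EXISTS bench.t (id INT AUTO_INCREMENT PRIMARY KEY, v VARCHAR(64));"
time mysql -u root -p bench -e "
  SET autocommit = 0;
  INSERT INTO t (v) SELECT REPEAT('x', 64) FROM seq_1_to_10000;
  COMMIT;"
```

Running the same commands inside the VM and inside the container should show whether the gap comes from the storage path or from MariaDB tuning.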
  5. I've been running OpenVPN via Unraid docker for some time and it works great. I just noticed that WireGuard is being offered as a preferred alternative so I decided to install that and try it out. I have to say the install and client setup with QR Code was a breeze. I want to use WireGuard as a tunnel into my LAN, so I set it up that way. Now, I'm reading that in order to do this and play nice with my VMs and other docker stuff, I need to define a static route in my router that sends the traffic over to WireGuard. I cannot do this with my Eero mesh router. Am I correct that a static route is needed for my use case? Other than installing another device that can receive the traffic and forward to WireGuard, is there another way? Do I have to move off of OpenVPN, assuming the docker might eventually be pulled from the unraid marketplace? Thanks!
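For what it's worth, the static route in question would look roughly like the line below; the tunnel subnet and the Unraid LAN address are placeholders. If the Eero can't hold the route, the same line can be added per-machine on a Linux client instead of at the router.

```bash
# Send traffic for the WireGuard tunnel subnet via the Unraid server's LAN IP.
# 10.253.0.0/24 and 192.168.1.10 are example values only.
ip route add 10.253.0.0/24 via 192.168.1.10
```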
  6. Thanks for the list. I'll be ordering a new card from it. Can installation of a driver at the O/S level affect the BIOS? Seems like any awareness of hardware at the BIOS boot screen would require flashing the BIOS itself? Just for my sanity, is it possible for the Marvell 9215 to cause the flaky behavior I described? I preclear the drive, add it to the array, parity rebuilds it, and then it pretty much immediately flips to a red X. This seems to be the pattern now with drives larger than 4TB.
  7. I need your advice on debugging a problem with my server. I have a Marvell SATA controller to expand the server from 6 to 8 hard drives (I now know that I shouldn't be using such a cheap controller, so I plan to look into that). Here is the hardware report from lspci:

     # lspci | grep SATA
     00:17.0 SATA controller: Intel Corporation 200 Series PCH SATA controller [AHCI mode]
     02:00.0 SATA controller: Marvell Technology Group Ltd. 88SE9215 PCIe 2.0 x1 4-port SATA 6 Gb/s Controller (rev 11)

     I noticed that in the BIOS, the 6 drives connected directly to the motherboard are reported, but the 2 drives connected to the Marvell controller are not. Yet, upon boot, fdisk -l reports all 8 drives (plus the USB flash and M.2 cache). This seems odd, no? Shouldn't the BIOS see all eight drives? I'm pretty sure the Marvell controller was working with 4TB drives, but recently, when I attempted to upgrade to an 8TB drive, the drive keeps going offline with a red X. Do we know if this Marvell 9215 SATA controller has trouble with 8TB drives? If this is likely the issue I am having, then I'll move quickly to locate and install something better. Is there a SATA controller that is recommended? Or should I be looking at a SAS HBA like the LSI 9211-8i? Thanks, Larry
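Before swapping hardware, the kernel log and SMART data are worth a look; repeated link resets combined with a clean SMART report tend to implicate the controller rather than the disk. A generic sketch (/dev/sdX is a placeholder for the 8TB drive):

```bash
# Look for link resets or I/O errors on the SATA ports.
dmesg | grep -iE 'ata[0-9]+' | grep -iE 'error|reset|fail'
# Check the drive itself behind the Marvell card.
smartctl -a /dev/sdX
```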
  8. I'm interested in running a squid proxy server I found at docker hub: https://hub.docker.com/r/sameersbn/squid There's no unraid template for this container that I can find, so I'm looking for guidance on how I can get this up and running. I was able to install the container, but I haven't configured it properly, so it won't respond to requests. For example, I am not sure what kind of network config it needs: host, bridge, or custom. I can see that the documentation suggests it needs at least two parameters: --publish 3128:3128 --volume /srv/docker/squid/cache:/var/spool/squid I think I can pass these in as extra parameters in the unraid docker template? Or should I just set up a mapping in the template? Can I assume that /srv/docker/squid/cache is intended to be a path on the host and should be adapted as /mnt/cache/appdata/squid? Advice appreciated!
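Translated into Unraid terms, the two parameters from the image's documentation amount to something like the run command below; the appdata path is an assumption, and in a template the same values would normally be entered as a port mapping and a path mapping rather than Extra Parameters.

```bash
# Sketch of what the Unraid template needs to reproduce.
docker run -d --name squid \
  --publish 3128:3128 \
  --volume /mnt/cache/appdata/squid/cache:/var/spool/squid \
  sameersbn/squid
```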
  9. Mine is not running automatically. Is there a way to schedule it?
  10. Good idea. None found. It very well could have been a docker container I misconfigured, tested, and then deleted.
  11. Not sure how that got there, but it was mostly empty, so I renamed it to see what happens. Meanwhile, the warning went away.
  12. I noticed I have a /mnt/user/cache directory. Maybe that is it? tower-diagnostics-20201126-1904.zip
  13. I just noticed this warning on my server:

      "Share cache is identically named to a disk share. While there *may be* (doubtful) valid use cases for having a share named identically to a disk share, this is only going to cause some confusion, as if disk shares are enabled, then you will have duplicated share names, and possibly if disk shares are not enabled, then you might not be able to gain access to the share. This is usually caused by moving the contents of one disk to another (XFS conversion?) and an improperly placed slash. The solution is to move the contents of the user share named cache to be placed within a validly named share. Ask for assistance on the forums for guidance on doing this."

      I checked the Shares tab on the console and I do not see a share named "cache." I do have a device named "cache" on the Main tab. Thoughts?
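If the stray "cache" share does turn out to hold real data, the usual cleanup is along these lines (a sketch only; the destination share is a placeholder, and the copy should be verified before deleting anything):

```bash
# Move the contents of the misnamed user share into a validly named share,
# then remove the leftover tree interactively after verifying the copy.
rsync -av /mnt/user/cache/ /mnt/user/misc/from-cache-share/
rm -ri /mnt/user/cache
```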
  14. Yes, the router needs the MAC address of the NIC to make the reservation. I'm comfortable making the change -- just verifying before I get into it and take down all my stuff.
  15. I got a new Eero mesh router and I later noticed that my OpenVPN docker container was not responding. Turns out, I forgot to port-forward, so I hop on the router and find the right spot to do that. After several tries, I realized that this stupid router will not forward a port without first setting up an IP reservation (but I don't need that since I have unraid set to a static IP locally). So I try and try and realize this will not work on a device already set to a static IP. My question is...before I stop all my VMs and Docker containers and make the network change to DHCP, will unraid (more specifically, my VMs and containers) be happy with a "static" IP handed out via DHCP IP reservation? Just confirming before I shut 'er down and try. Larry
  16. Assuming you had OpenDNS running and that caused your issue, and it has been turned off, check the DNS servers being used on your clients and servers: cat /etc/resolv.conf on Linux, ipconfig /all on Windows. Maybe the OpenDNS servers are still in the cache? Larry
  17. First, I replaced the OpenDNS servers on my router. In my case, I used Google's: 8.8.8.8 and 8.8.4.4. You can also default to the ones provided by your ISP. Then, on the Windows machine where I access the unraid console, I had to flush the DNS cache: ipconfig /flushdns After that, I reinstalled the container and it worked. I have not taken the time to understand why, but I plan to, or if someone already knows, please weigh in.
  18. OK, I figured this out. I had configured OpenDNS at my router to experiment with parental controls and that definitely interfered with this container. Larry
  19. Thanks for the reply. That is exactly what I decided to do, so your reply validates my approach. I was just a little leery of doing anything that might ruin the image and prevent a recovery. I remain confused as to how all this happened. Does disabling VMs in the settings menu cause this kind of destruction? I did notice that the VM came up with a different network configuration. It was originally set to a static IP, and it came up with DHCP. Not sure why. Otherwise, it seems to be working (fingers crossed). My next step is to look into best practices for backing up the configuration as well as the VM vdisk image. I found the user script from squid and got that set up. Appears to be working and making a backup of both my XML and nvram files. Now, I just need to figure out the best way to back up the vdisk image. Maybe that's just a simple cp of the file once the VM is stopped. I see spaceinvader is suggesting Krusader from sparkyballs... Larry
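For the vdisk itself, a plain copy with the VM shut down is usually sufficient; a minimal sketch, with the VM name and paths as placeholders:

```bash
# Stop the VM cleanly, then copy its disk image to a backup share.
virsh shutdown Ubuntu        # or stop it from the VMs tab
cp /mnt/user/domains/Ubuntu/vdisk1.img /mnt/user/backups/Ubuntu-vdisk1.img.bak
```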
  20. All, I have a major problem I could use assistance with. I have been running an Ubuntu VM that works great. In the course of playing around with my network bridging, I disabled VMs in the settings menu, and now that I have turned that back on, the Ubuntu VM no longer appears in the console. I know the VM image still exists, but the configuration is gone. What is the best way to recover the settings so that the VM can be restarted? The last thing I want to do is something silly that overlays the existing image with a new install. Unfortunately, it looks like a new libvirt image was created and replaced the old one, for which I don't have a backup. Would it be possible to just create a new Ubuntu VM that matches, as best I can recall, the configuration of the one I already have, and then edit the XML to pick up the vdisk of the original VM?
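That recovery approach amounts to creating a placeholder VM without letting it build a new disk, then pointing its definition at the existing image; a sketch, with the domain name and path as assumptions:

```bash
# Open the new VM's libvirt definition and aim its <disk> source at the old vdisk.
virsh edit Ubuntu
#   ...inside the <disk type='file' device='disk'> element, set:
#   <source file='/mnt/user/domains/Ubuntu/vdisk1.img'/>
```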
  21. OK, I've been running this container with success for many months, and then earlier this week, I tried to VPN into my unraid server and found that the container is no longer working. Before I keep digging, did something change? Is there a known issue? Initially, I found that the WebUI doesn't work. So I poked around and I see this in the container log:

      [cont-init.d] 50-interface: executing...
      /var/run/s6/etc/cont-init.d/50-interface: line 9: /usr/local/openvpn_as/scripts/confdba: No such file or directory
      /var/run/s6/etc/cont-init.d/50-interface: line 10: /usr/local/openvpn_as/scripts/confdba: No such file or directory
      /var/run/s6/etc/cont-init.d/50-interface: line 11: /usr/local/openvpn_as/scripts/confdba: No such file or directory
      /var/run/s6/etc/cont-init.d/50-interface: line 12: /usr/local/openvpn_as/scripts/confdba: No such file or directory
      [cont-init.d] 50-interface: exited 127.
      [cont-init.d] 99-custom-scripts: executing...
      [custom-init] no custom files found exiting...
      [cont-init.d] 99-custom-scripts: exited 0.
      [cont-init.d] done.
      [services.d] starting services
      ./run: line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory
      [services.d] done.
      ./run: line 3: /usr/local/openvpn_as/scripts/openvpnas: No such file or directory

      My /config path on the host for this container is not /usr/local/openvpn-as; it's /mnt/cache/appdata/openvpn-as/. So that looks weird to me. Since the last time I checked this container, I stood up a VM and changed the network settings, so maybe that is related? Not sure. I'm guessing this is something really silly, but so far I haven't cracked the code... Diagnostics attached. Appreciate any guidance on getting this back up and running. tower-diagnostics-20200611-1954.zip
  22. OK, I'll try running it again to see what happens. In this case, I wasn't aware there was anything wrong until after I rebooted, and of course the logs were flushed when it came back up. Had I known there was a problem, I would have captured the diagnostics. LK
  23. So, I noticed the warning from Unraid that I was not running the Dynamix SSD TRIM Plugin, so I set about installing it and having it run on my SSD. I restarted the Unraid server via the console to trigger the unraid warning message again, only to see that the cache drive was declared missing! It took a cold power-down and restart to get it back in the list and get it back into the server config. Should I turn this thing off?
  24. If I exclude a disk that already has files for a given share, does that interfere with my ability to read the files that are already on the excluded disk? Or does an exclusion only affect write operations? Update: I just found this statement in another post: Includes/Excludes -- whether local or global -- only apply to writes. No disks are excluded when showing the contents of a share for reading.