DougP

Everything posted by DougP

  1. My apologies for not getting back to this swiftly; I no longer work for the company and am now freelancing support for this Unraid server. Here is the latest syslog, captured immediately after a reboot. tower-syslog-20210621-1311.zip
  2. I ran the scrub and here are the results (not much):

     Scrub started: Thu Apr 29 14:41:28 2021
     Status: finished
     Duration: 0:01:54
     Total to scrub: 270.54GiB
     Rate: 2.37GiB/s
     Error summary: verify=13 csum=4
     Corrected: 0
     Uncorrectable: 17
     Unverified: 0

     I'll reboot, verify the BIOS settings mentioned in JorgeB's post (I'm on a Ryzen X570/3950X - thank you for that, Jorge), and then rebuild and upload the diagnostics. I suspect the corruption happened when I was fighting hardware pass-through to a VM and had to hard-reset the machine a few times; that matches the info in the Ryzen stability thread. To be continued! Many thanks for your help thus far.
  3. I've noticed my log file filling up for seemingly no reason and found it full of the following error:

     BTRFS error (device nvme0n1p1): bad tree block start, want 31566561280 have 16777216

     The numbers on each line in the log file are different. I can't see any other symptoms; all of my VMs, shares, and Docker containers are functioning as far as I can tell. I've attached the diagnostics archive. Should I run a btrfs scrub operation on the cache drive? Is this a sign of an impending hardware failure? tower-diagnostics-20210429-1343.zip
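For anyone who lands here with the same errors: a btrfs scrub can be started and checked from the Unraid console. A minimal sketch, assuming the cache pool is mounted at the default /mnt/cache path:

```shell
# Start a scrub of the cache filesystem; it verifies checksums and
# repairs where redundancy allows. Path assumes the stock cache mount.
btrfs scrub start /mnt/cache

# Check progress and the final error summary (verify/csum counts).
btrfs scrub status /mnt/cache
```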
  4. I found the solution to this particular problem: I had to enable "Host access to custom networks" in the Advanced Docker settings and enable the desired Subnet.
  5. I ended up wiping all of the network settings by deleting the network configuration files and starting from scratch. That worked through a couple of reboot cycles but, unfortunately, the problem has resurfaced. The two networks I'm having trouble with are on USB 3 Ethernet dongles, which, I suspect, could be part of the problem. Unfortunately, all of my PCIe slots are already in use, so USB 3 is the only way I can get onto these networks.
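For reference, the "starting from scratch" step above was deleting Unraid's persisted network settings from the flash drive and rebooting; a sketch, assuming the stock flash-drive config paths:

```shell
# Unraid persists network settings on the flash drive; removing these
# and rebooting regenerates defaults. Paths assume a stock USB layout.
rm /boot/config/network.cfg        # interface and bridge settings
rm /boot/config/network-rules.cfg  # persistent NIC-name assignments
reboot
```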
  6. br2 is the network I'm trying to attach the Docker container to. This would be subnet 172.17.0.0/24. I currently have no Docker containers attached to that bridge (since it's not an option). I assume that the 172.17.0.0/16 route connected to "docker0" is illegitimate, so I tried to delete it - to no avail; the delete button asks me to confirm but never actually removes the route. I suspect this might help point towards a solution. EDIT: I deleted the 172.17.0.0/16 route using the command line: ip route del 172.17.0.0/16 The route was removed from the table, but it hasn't helped me apply br2 to a Docker container.
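For anyone following along, the command-line deletion above in full; a minimal sketch for inspecting and removing the stale route (the docker0 /16 is the stock Docker bridge default, which overlaps my br2 subnet):

```shell
# List the routing table to find the conflicting docker0 entry.
ip route show

# Remove the stale /16 route that shadows the 172.17.0.0/24 subnet on br2.
ip route del 172.17.0.0/16
```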
  7. I am trying to assign a Unifi Controller Docker container to an external network so it can manage our access points. The NIC is attached to the external network and was able to receive an IP address via DHCP. The interface (br2, 172.17.0.0/24) is enabled in the Docker settings page. Unfortunately, I've seen two issues: at boot time I see an error message that says "br2 not found", and br2 isn't available in the Docker settings for Docker images. I'm at a loss here. What do I need to do to enable br2 on my Docker images?
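If br2 never shows up in the UI, a custom Docker network can also be created by hand from the console; a sketch, assuming br2 carries 172.17.0.0/24 and a hypothetical gateway of 172.17.0.1 (the network name and container name below are made up for illustration):

```shell
# Create a macvlan network bound to the br2 bridge so containers get
# addresses directly on the external subnet. Gateway is an assumption.
docker network create -d macvlan \
  --subnet=172.17.0.0/24 \
  --gateway=172.17.0.1 \
  -o parent=br2 br2net

# Attach the controller container to it (container name assumed).
docker network connect br2net unifi-controller
```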
  8. Ryzen 3950X, X570 AORUS ELITE, GTX 1060, GT 710, unRAID 6.9.0-beta35. The 1060 is dedicated to a Windows VM and the GT 710 is assigned to a Linux Mint VM. The Windows system is superbly stable, with the VM running for months without rebooting. Unfortunately, this is only partly true of the Linux machine: if I run the VM with non-accelerated drivers (xserver-xorg-video-nouveau) it is perfectly stable, but performance is, as you would expect, less than stellar, and as soon as I install/enable accelerated video drivers, the VM becomes unusable. Upon reboot, it will sometimes make it to the desktop before crashing; other times it crashes as soon as the video drivers are initialized. When the machine crashes, the CPU usage as indicated on the unRAID dashboard shows a single core running at 100%, jumping to and from seemingly random cores, and the display-out is disabled entirely. If I force-close the VM from the dashboard, I am able to restart it, and the display works properly until the GPU drivers initialize. Does anybody have any suggestions on how I can get the video drivers to work with pass-through?
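To confirm which driver the Mint guest is actually binding to the passed-through GT 710 at the moment of the crash, the kernel's view can be checked from inside the VM; a quick sketch:

```shell
# Show the GPU and the kernel driver currently in use for it.
lspci -nnk | grep -A3 VGA

# List loaded DRM modules; 'nouveau' appearing here means the
# accelerated open-source driver initialized (the crash trigger above).
lsmod | grep nouveau
```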