About civic95man

  1. I'm not sure how true this is, but I was under the impression that it was broadcast traffic, in conjunction with static IPs, that was causing macvlan to s*** the bed and trigger kernel panics. I'm not sure if this holds for every vendor, but Ubiquiti switches/routers (so I was told/read) do not forward broadcast packets between networks, which was the idea behind creating VLANs for docker containers with static IPs. I also think it was suggested that certain network adapters may be more prone to this than others. This is a very good point since it could b
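  The VLAN-for-docker idea above can be sketched as a `docker network create` with the macvlan driver parented on a VLAN sub-interface. This is only an illustration: the interface name (`br0.10`), VLAN id, subnet, and gateway are all made up and would need to match your own network and switch config.

```
# Hypothetical sketch: a macvlan network on VLAN 10, so containers with
# static IPs live on their own broadcast domain away from the host.
docker network create -d macvlan \
  --subnet=192.168.10.0/24 \
  --gateway=192.168.10.1 \
  -o parent=br0.10 \
  vlan10
```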
  2. That is very strange, but if it seems to be the root cause then it's good to know. I guess if you stress test your system without it installed and don't see any call traces, then you've found your solution!
  3. I would be surprised if that was the cause; specifically, the VM is isolated from your unraid system. That Corsair pump is just another USB peripheral as far as unraid, the VM, and Windows are concerned (much like a mouse or keyboard). If you want to pursue this route, though, it can't hurt anything.
  4. Just do a flash backup from the web UI (Main -> Boot Device -> Flash -> Flash Backup) and keep it somewhere safe. If things go sideways, you just copy everything back to the flash drive. I would then update from within unraid.
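  If the drive ever does need rebuilding, restoring is basically just extracting that backup zip back onto a freshly prepared flash drive. A rough sketch (the mount point and backup filename here are hypothetical; the drive would first be re-created with the usual USB creator / make-bootable steps):

```
# Hypothetical paths: extract the web-UI backup zip onto the
# re-created flash drive, overwriting the stock config.
unzip -o ~/unraid-flash-backup.zip -d /mnt/usb/
```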
  5. I was thinking of trying the 1G option since your processor supports it, but 2M still works. You basically tell the kernel to set aside XXXX contiguous blocks of memory, each 2M in size (instead of the default 4K page). The kicker here is that some applications besides the VM can and will use the hugepages, so plan accordingly. If you want 16G set aside as hugepages then you would put "hugepages=8192", since 16GB / 2MB = 8192. I think the misconfigured hugepages were causing the OOM killing spree in those diagnostics. This is in regard to the transition from pri
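  The arithmetic above fits in a couple of lines of shell, if you want to double-check a different reservation size (2 MiB pages assumed, as discussed):

```shell
# hugepages= takes a page COUNT, not a size: desired reservation
# divided by the hugepage size. 16 GiB of 2 MiB pages:
reserve_gib=16
page_mib=2
echo "hugepages=$(( reserve_gib * 1024 / page_mib ))"   # prints: hugepages=8192
```

  With 1 GiB pages the divisor becomes 1024 MiB, so the same 16 GiB reservation would be `hugepages=16` instead.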
  6. That's good then. Sounds like you're all set. I could try to describe the process here, but it's easier to just reference another post. I am assuming that you have the nvme mounted by Unassigned Devices. Basically just copy the VM image file from the cache to the nvme using your method of choice, although I recommend using the --sparse=always option as it keeps the image size smaller. Then edit the VM to point to the new disk location (the XML editor may be easier). If you have any questions, feel free to come back here.
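  As a quick demo of why --sparse=always matters (the filenames below are made up for illustration; on the server the source would live under the cache mount and the destination on the UD-mounted nvme):

```shell
# A sparse 100M "image" occupies almost no real disk space;
# --sparse=always preserves the holes in the copy instead of
# writing out 100M of zeros at the destination.
truncate -s 100M vdisk-demo.img
cp --sparse=always vdisk-demo.img vdisk-copy.img
du -k vdisk-copy.img    # actual usage stays near 0, not 102400
```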
  7. Looks like you've made a few steps in the right direction. Now for the bad news: I see an issue right away with that screenshot. The php warning at the top (Warning: parse_ini_file.........) seems to indicate that your flash drive dropped offline again. One of two things comes to mind: 1. you should use a USB2 port for the flash drive. 2. when you passed through the USB device for Blue Iris in your windows VM, you may have passed through the entire USB controller that the flash drive is on, or the drive itself, by accident. If you just need a single USB device, then
  8. I looked through your diagnostics (both) and still see the OOM errors. The first set of diagnostics, covering about 2 days of uptime, was full of them, as you stated. I find it very odd that your memory seems so fragmented that it can't allocate an order-4 block of contiguous memory, especially after a fresh reboot. Here is a suggestion: have you tried using hugepages for your VM? They're typically only needed for very large capacities, or if you are suffering performance issues; however, in this case, it's worth a shot. Here is a post about how to utilize it:
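  For anyone curious what an "order 4" allocation actually means: the kernel allocates physically contiguous memory in power-of-two multiples of the base page size, so order N is 2^N contiguous 4 KiB pages. Working it out in shell:

```shell
# order-4 block = 2^4 pages * 4 KiB/page = 64 KiB of
# physically contiguous memory
order=4
echo "$(( (1 << order) * 4 )) KiB"   # prints: 64 KiB
```

  So even a modest 64 KiB request can fail on a badly fragmented system, despite plenty of free memory overall.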
  9. I'm still on the 6.8 series, but 6.9 seems to have what you need. Autostarting the VM is always a risky proposition, as you could run into problems like the ones you've seen. My personal preference is that unless it's running some kind of critical task (such as pfsense), I don't see any reason to autostart. Again, that is just my personal preference. Last I looked at your logs (and in the screenshot), the audio device was already split into its own group. I would probably make sure the VM is set to manually start. Then install the nvme (d
  10. Yes, the problem is that I had "stubbed" several components and when the new GPU was added, the PCIe assignments changed but the stubbed assignments didn't - meaning that several items (disk controllers, network adapters) disappeared. I just had to edit my vfio-pci file and I was good
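  For reference, the vfio-pci config on the flash drive is essentially just a list of PCI addresses to bind to the vfio-pci driver at boot, which is why a shift in PCIe assignments silently stubs the wrong devices. The address below is made up, and the exact format may vary between unraid versions (newer builds may also record the vendor:device id), so treat this as a rough sketch:

```
# /boot/config/vfio-pci.cfg (hypothetical address)
BIND=0000:0a:00.0
```

  After any hardware change it's worth re-checking that every address in this file still points at the device you intended.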
  11. So it looks like we've found the root cause of the problem. By any chance did you start a VM in those diagnostics? What it looks like to me in this snippet is that something took control of the USB controller at 0000:09:00.3, which appears to be where your unraid flash drive is located:

Jan 18 19:57:21 PCServer kernel: xhci_hcd 0000:09:00.3: remove, state 1
Jan 18 19:57:21 PCServer kernel: usb usb6: USB disconnect, device number 1
Jan 18 19:57:21 PCServer kernel: usb 6-4: USB disconnect, device number 2
Jan 18 19:57:21 PCServer kernel: sd 1:0:0:0: [sdb] Synchronizing SCSI cach
  12. So it seems to be VM related. It might be good to post diagnostics AFTER this happens again; that snippet of the syslog left out a lot of details, and I saw a reference to another OOM error. Also, I looked into your previous OOM error from the first post one last time, and I can *kinda* see how it gave you the error. If anyone is curious: technically, you ran out of memory in the Normal zone and couldn't assign a contiguous block (order 4). I don't know why it didn't use the DMA32 zone; maybe someone else can answer that. Seems to be related to intel integrated graphic
  13. I'm assuming that you are passing some hardware through to the VMs, such as a GPU. I don't see any mention of stubbing hardware via the vfio-pci.cfg file, but I can only assume that you are (I can't remember if that shows up in the diagnostics). You will most likely need to update that config file as well as your VMs once you install your nvme. Whenever you install new hardware that interfaces with the PCIe bus, it can shift the existing allocations around. This can cause issues with stubbed hardware, where something that shouldn't have been stubbed suddenly is (i.e. USB ports with the unraid flash dr
  14. you'll want to look into the 6.9 beta which allows multiple pools. Although it's still a beta, it seems really stable with many people using it.
  15. That's good to know, and good to point out. I don't boot into GUI mode, but it's a nice option to have if ever required. I run an X10SRA-F and remember seeing a note that a new VGA driver was required when updating the BMC to 3.80 or later (I'm on 3.88 now).