
JasonM

Members
  • Content Count: 45
  • Joined
  • Last visited

Community Reputation: 2 Neutral

About JasonM
  • Rank: Advanced Member

  1. Interesting. Every setup is a little different and there may be some variations in processor and motherboard capability. AFAIK, there isn't a specific disadvantage to using stub. If it works and you're getting the game performance you want, then I'd say you're all set. Until the next problem, of course. Living that homelab life.
  2. Using stub employs a generic placeholder driver. Since you have the required virtualization hardware, vfio will bind the dedicated vfio-pci driver instead. The boot parameter is different, too: you'd use vfio-pci.ids in place of pci-stub.ids (see the sketch below). Also, make sure you're stubbing all the functions of the card. You're showing two, but the card may or may not have more; the two you have are likely the graphics and sound controllers integral to the card. My card also has USB and serial controllers that I have to stub and pass to the VM for it to start properly. This may not be an issue for your card, but it's something to look for.
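     As a sketch, the relevant append line in syslinux.cfg might look like the following. The vendor:device IDs here are placeholders; substitute the pairs for every function of your card as reported by lspci -nn.

       kernel /bzimage
       append vfio-pci.ids=10de:1b81,10de:10f0 initrd=/bzroot

     If the card exposes USB or serial controller functions as well, add those IDs to the same comma-separated list.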
  3. See this thread: You’ll need to modify your Syslinux file to isolate the card. Use vfio instead of stub.
  4. I have a similar setup. I never encountered this issue, but I've always had the gaming GPU stubbed and passed to the VM. If you stub the card, the NVIDIA plugin won't see it at all, which eliminates any possibility of it using the wrong card. I suggest trying that if you don't need the 1070 outside of the VM.
  5. Not sure if this is technical enough, but it does have the full HTML login screen, which does not exist in versions earlier than 6.8. The kernel version I have is 5.3.6.
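     For reference, the kernel version can be read straight from the console; the output shown here is what I'd expect on this build, though the exact suffix may vary:

       uname -r
       5.3.6-Unraid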
  6. Thanks. Did that twice already, but I'll give it another shot. The issue may actually be specific to 802.3ad bonding: I tried bonding the two 10G ports and got the same result I was getting with the bonded 1G ports. I'm going to stop using 802.3ad for the time being.
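     For anyone else chasing 802.3ad problems: the bonding driver's status file is a quick way to confirm whether LACP actually negotiated with the switch (mode 4 requires a matching LAG/LACP group on the switch side):

       cat /proc/net/bonding/bond0

     Look for "Bonding Mode: IEEE 802.3ad Dynamic link aggregation" and check that all slaves report the same Aggregator ID; slaves stuck in different aggregators usually mean the switch side never formed the group.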
  7. I did try all 4 ports with the same result. The 10G ports (which are onboard) work fine. I was hesitant to post diags since there was a *possibility* it was related to the known issue, and I didn't want anyone to waste time digging through them. Y'all have more important things to do. I'm up and running on a 10G port. Like I said, the only reason I raised the bug report was because the quad card's ports were detected and reported as cable connected, full duplex, etc., so it didn't seem to be a "no driver" issue. At this point, it appears it is indeed related. In case it helps in the dev effort, I'm posting diags here, but please be assured I'm not expecting any specific help or troubleshooting. unraid-diagnostics-20191013-1701.zip
  8. I did see that; however, I did not immediately assume it was related, since the NIC is detected by unRAID. With no driver at all, I'd think the card would not be seen at all. In my case, it's there and seems to be working normally, but doesn't get an IP. I do have 10G ports also, but haven't had a chance to try them.
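     One quick way to tell "detected" apart from "driven" is to check which kernel driver, if any, actually bound to the NIC:

       lspci -k | grep -A 3 -i ethernet

     Each port should show a "Kernel driver in use:" line (for Intel gigabit parts, typically igb or e1000e); if that line is missing, the card is visible on the bus but has no working driver.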
  9. After updating from 6.7.2 to 6.8.0-rc1, an Intel PRO/1000 VT Quad Port (EXPI9404VT) fails to get an IP address. The server is not reachable via its static address, and switching to DHCP results in a self-assigned IP.
  10. Unfortunately, there is no fix for this. The 750 has a Marvell controller, which has known and irreconcilable issues with the kernel. It's time to start looking at new controllers; I have two 750s collecting dust in my office which I can't even give away. The LSI 9207-8i or 9207-16i can be had on eBay for very reasonable prices, and you can get them pre-flashed into IT mode, so they'll work OOB.
  11. When running nvidia-smi dmon, I can see both enc and dec activity (sample below). Interestingly, it seems Tautulli isn't caught up yet: it still shows "hw" for encoding only, not decoding. Is anyone seeing "hw" for both with this new version of Plex and 6.7.2?
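      For anyone who wants to check on their own box, this is the command I'm using; the utilization numbers below are just illustrative:

        nvidia-smi dmon -s u
        # gpu    sm   mem   enc   dec
        # Idx     %     %     %     %
            0    15     9    22    27

      Non-zero values in both the enc and dec columns while a transcode is running mean the GPU is handling both sides, regardless of what Tautulli reports.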
  12. Is it your observation from the logs that Mover is the problem? I changed it to monthly and will see how things look in the morning.
  13. Yes, I do have cache drives, but I only use cache-only shares, so there is nothing for Mover to move. I wish there were a way to disable Mover completely, but there is no readily apparent way to do so. Mover is set for 1 AM, and I have scheduled tasks spread out enough that they should not conflict with each other:
      1 AM - Mover
      2 AM - Auto update plugins
      3 AM - Auto update containers
      4 AM - CA Backup
      5 AM - SSD TRIM
  14. I've been trying to isolate this issue for a while. It has survived hardware changes and a complete USB rebuild. At some point overnight, the system becomes unresponsive: Dockers and VMs cease to operate. The web UI is sometimes available, and when it is, it shows 2-3 CPUs pegged at 100%. Interacting with the web UI in this state works for a few navigation clicks before it completely locks up. On other mornings, the web UI won't load at all. In all cases, a hard reboot is required. The server runs all day without issues. In an attempt to get more data, I installed a user script that tails the syslog onto the flash drive, since I'm not able to get a log at the time of the crash (a sketch of that script is below). Regular diags as well as this log are attached. Around line 460, the system seems to enter an endless loop. On one occasion, I noticed a strange message about CA Backup in my browser status bar, of which I took a screenshot. All three files are attached. Any help nailing down this issue is greatly appreciated. unraid-diagnostics-20190911-1207.zip syslog-2019-09-09_0637.txt
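      For reference, the user script is essentially a one-liner; the paths and file naming here are my own convention, not anything Unraid-specific:

        #!/bin/bash
        # Follow the syslog and append it to the flash drive so it survives a hard reboot.
        mkdir -p /boot/logs
        tail -n +1 -f /var/log/syslog >> /boot/logs/syslog-$(date +%Y-%m-%d_%H%M).txt

      Be aware that continuously writing to the flash drive adds wear, so I only run it while chasing this problem.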
  15. Thanks for the shout-out; helping each other is what this community is all about. Also, I see above that you're passing your keyboard and mouse from the host to the VM. Since you're handing off the USB controller from your GPU, you can connect a USB hub to the USB-C port on the back of the GPU and plug the input devices into that. This way, you don't have to pass them through at all, and you also get true USB plug-and-play support in Windows: you can connect flash drives and the like without unRAID even knowing they're there. I do this with a dedicated PCIe USB card, but it works with the port on the back of the GPU just as well.