Report Comments posted by jonp
-
On 4/17/2022 at 8:17 AM, agarkauskas said:
Ladies and gentleman! Problem solved with RC4. 😃
WOO HOO!! So glad to hear this!!
-
Wow thanks for catching this and reproducing. @Squid has made us aware and we will investigate.
-
Hi there,
Unfortunately, these types of issues can happen when you use AMD-based devices (CPU/GPU) with VFIO. The experience is simply inconsistent across kernel and package updates, and these issues don't seem to plague NVIDIA users. There is a limit to how much we can do to support AMD when they aren't supporting this themselves. I wouldn't call this a "bug" in Unraid so much as in the kernel itself; from our perspective, problems with AMD-based GPUs and GPU pass through are a known issue and limitation of AMD. Hopefully AMD will do a better job supporting VFIO in the future.
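For anyone attempting AMD GPU pass through anyway, one common mitigation is binding the GPU to vfio-pci at boot so the host driver (amdgpu/radeon) never claims it. A rough sketch of what that looks like in Unraid's syslinux configuration; the vendor:device IDs below are placeholders, so substitute your own:

```shell
# Find the vendor:device IDs for your GPU and its HDMI audio function
lspci -nn | grep -i -E "vga|audio"

# In Main > Flash > Syslinux Configuration, add the IDs to the 'append'
# line so vfio-pci grabs the device before the host driver does, e.g.:
#   append vfio-pci.ids=1002:731f,1002:ab38 initrd=/bzroot
# (IDs shown are examples only; use the ones lspci reported.)
```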
-
This is going to be a real bear to figure out ;-). I definitely think this is related to CPU assignments and NUMA nodes. Will keep investigating...
-
This system has two physical socketed CPUs though, yes? And did you have to do anything special to align the GPU and memory allocations on the previous versions of Unraid?
-
@agarkauskas really hoping to hear back from you on these questions. In addition, we're starting to wonder if this might be due to the dual CPUs you're using in your setup. Did you perhaps change any of your configurations with CPU assignments or isolations? Did you ever align your memory allocation for your VMs to the right NUMA nodes in alignment with your GPUs?
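For anyone wanting to check this themselves on a dual-socket box, here's a rough sketch using standard tools (numactl and libvirt); the PCI address and node number are illustrative:

```shell
# Show NUMA topology: which CPUs and how much memory belong to each node
numactl --hardware

# Which node a given PCI device (e.g. the GPU at 0000:41:00.0) hangs off of
cat /sys/bus/pci/devices/0000:41:00.0/numa_node

# In the VM's libvirt XML, restrict memory allocation to that node, e.g.:
#   <numatune>
#     <memory mode='strict' nodeset='1'/>
#   </numatune>
```

Keeping the VM's memory and vCPUs on the same node as the GPU avoids cross-socket traffic, which is often where dual-CPU pass-through performance falls apart.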
-
Included TPM support as of 6.10-rc2
-
Thank you for reporting. I am making the dev team aware.
-
Hi @Mirai Misaka, if you try to install a new Windows VM from scratch, do you still get the Code 43 error? We need to isolate whether this is an issue within the guest OS or something to do with the host configuration. @bigbangus had a similar issue after updating to 6.10, but found the problem was an outdated NVIDIA driver in the guest.
Another possible issue is whether or not this system features an integrated graphics device and whether that device is enabled or not. Generally speaking when you want to pass through a GPU, you need one GPU per guest and one for the host. In your setup, you seem to only have the two GPUs.
I know that some users have found workarounds for passing through the primary GPU, but as an FYI, NVIDIA does not officially support that configuration: https://nvidia.custhelp.com/app/answers/detail/a_id/5173/~/geforce-gpu-passthrough-for-windows-virtual-machine-(beta)
Quote: "Do you need to have more than one GPU installed or can you leverage the same GPU being used by the host OS for virtualization?
One GPU is required for the Linux host OS and one GPU is required for the Windows virtual machine."
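If you do retry with a fresh VM, it's also worth confirming the GPU sits in its own IOMMU group, since a Code 43 can mask a grouping problem. A quick diagnostic sketch using the commonly shared IOMMU listing loop:

```shell
# List every IOMMU group and the PCI devices inside it; the GPU and its
# audio function should ideally share a group with nothing else
for d in /sys/kernel/iommu_groups/*/devices/*; do
    n=${d#*/iommu_groups/}; n=${n%%/*}
    printf 'IOMMU group %s: ' "$n"
    lspci -nns "${d##*/}"
done
```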
-
And are both of your VMs experiencing the same issue with each GPU? If performance is bad in Windows outside of games as well, this is unlikely to be a bug with GPU pass through; more likely something is amiss with CPU virtualization/pinning. Can you confirm that both of your Windows VMs experience the same issue, and that it persists even with only one VM running at a time?
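For reference, checking CPU pinning roughly comes down to matching the VM's vCPUs to dedicated host cores and their hyperthread siblings. A sketch (core numbers are examples; check yours first):

```shell
# Show core/thread topology so you can identify hyperthread sibling pairs
lscpu --extended

# In the VM's libvirt XML, pin each vCPU to a dedicated host thread, e.g.:
#   <vcpu placement='static'>4</vcpu>
#   <cputune>
#     <vcpupin vcpu='0' cpuset='2'/>
#     <vcpupin vcpu='1' cpuset='10'/>  <!-- sibling of core 2 -->
#     <vcpupin vcpu='2' cpuset='3'/>
#     <vcpupin vcpu='3' cpuset='11'/>  <!-- sibling of core 3 -->
#   </cputune>
```

Two VMs pinned to overlapping cores will fight each other, which is why testing with only one VM running at a time matters.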
-
@nickp85 and @JesterEE please attach your system diagnostics when you have a moment so I can review those. Again, I have been unable to recreate the issue in the lab, so we're going to need more help from you guys to figure out what's going on.
-
On 11/4/2021 at 9:33 AM, bigbangus said:
I'm experiencing the same issue with 6.10rc2 trying to pass my primary GPU 1660Ti to VM. Code 43 all day. Roll back to 6.9.2 and it's fine. Tried a bunch of common sense stuff like you did but still no success.
Please attach your system diagnostics.
-
Hi there,
For those having this issue, please report back with a copy of your system diagnostics, and any relevant A/B testing you've done comparing 6.9-series VM performance to 6.10. For game-specific testing, please list the games you've tried. We're actively trying to recreate these issues in the lab, but so far, no dice.
-
Hi there,
I've been trying to recreate these reported performance issues, but in my own testing, I'm not able to. Can you confirm this performance drop is occurring with other games as well or just Cyberpunk?
-
Hi there,
One issue could be that if you're loading the GPU driver to use the GPU with a Docker container, then you can't also use that GPU with a virtual machine. GPUs need to have their driver stubbed in order to be used in a VM. When you use the NVIDIA plugin, it installs the driver for the card which prevents you from using it in a VM. This is not a bug.
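You can confirm which driver currently owns the card before troubleshooting further; a quick sketch:

```shell
# "Kernel driver in use:" shows who has claimed the GPU
lspci -nnk | grep -A3 -i vga

# If it shows 'nvidia', the host driver has the card and it cannot be
# passed to a VM. Bind it to vfio-pci at boot instead (the checkbox in
# Tools > System Devices on Unraid 6.9+, or vfio-pci.ids=... on the
# kernel command line), reboot, and verify it now shows 'vfio-pci'.
```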
-
Just want you guys to know that we're tracking all of these VM issues and I'm looking into them this week. Thank you for the reports!
-
7 hours ago, jj666 said:
Hello there,
Like mentioned in the recent unraid podcast, I am wondering if the SMB Multichannel is enabled in this release candidate.
If so, second question, I have two servers both with bonded network connections (802.3ad), is it recommended to keep bonded to use multichannel, or remove?
Cheers,
-jj-
Ready for these uber-complicated instructions? Just kidding! It's easy!
First you'll need to stop the array, then navigate to the Settings > SMB Settings page. From here, modify the SMB Extras section and add the following:
server multi channel support = yes
aio read size = 1
aio write size = 1
Save the changes and then start the array.
WARNING: THIS IS STILL CONSIDERED EXPERIMENTAL! We haven't done sufficient testing with this yet, so feel free to use it, but do so at your own risk.
Something else worth mentioning: according to the Samba project, Samba 4.15-rc2 was released just a few days ago, and its release notes include this interesting item about multi-channel: https://wiki.samba.org/index.php/Samba_4.15_Features_added/changed#.22server_multi_channel_support.22_no_longer_experimental
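After restarting the array, you can sanity-check that Samba actually picked the setting up. A quick check, assuming the standard Samba tools are on the path:

```shell
# Verify the parameter made it into the effective config
testparm -s 2>/dev/null | grep -i "multi channel"

# On a Windows 10/11 client, an elevated PowerShell prompt can confirm
# multiple active channels to the server:
#   Get-SmbMultichannelConnection
```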
-
Please retest this issue on 6.10-rc1.
-
Hey everyone, just a quick update on this issue. The main problem we've faced is the inability to recreate this issue in our labs. We are still actively working on it, but if anyone here knows the full solution, we are open to providing a bounty for it. Just PM me and so long as the fix isn't a hack or workaround, we will gladly compensate you for your time and work.
-
Thanks for submitting this and including your diagnostics. I will make an effort to reproduce this on my end.
-
Hi there and thanks for completing that task and getting the diagnostics updated. I've honestly never seen those particular messages in a server before, so we will need to do some investigating to see what's going on.
-
Hi there,
As an FYI, I wouldn't have reported this as a "bug" just yet, as this could be something amiss with your system. If this was a bug related to Samba, I'm fairly certain we'd have a plethora of users reporting the same thing.
The first thing I'd like you to try is rebooting in safe mode. See if you can reproduce the issue in that mode. If so, please reattach your diagnostics after you've reproduced it in safe mode so we can review again. Thanks.
-
Seconding what @itimpi has stated here, we will be updating docs to reflect best practices. Thanks for bringing this to our attention and please, continue to make us aware of anything that is confusing, inaccurate, or out of date with the wiki.
-
Hi everyone,
Thank you for your patience with us on this and @bonienl for taking point on trying to recreate the issue. We are discussing this internally and will continue to do so until we have something to share with you guys. Issues like these can be tricky to pin down, so please bear with us while we attempt to do so.
[6.10.rc7] No login web UI after trying to start new VM 11?
in Prereleases
Posted
So glad to hear this. Was in the middle of researching more on this last night and had to turn in before I could figure out a solution for you. You can probably imagine my surprise and excitement when today I see you have fixed it on your own!!