Report Comments posted by jbartlett
-
I have an Intel i211 onboard NIC plus an Aquantia. When this happens to me, I typically get a clean boot after pressing the reset button.
-
6.8.1 RC1
My passed-through USB keyboard was still returned to the host OS during the Windows 10 boot process.
-
Changed Status to Open
-
Changed Status to Retest
-
Adding for posterity that RC8 also updated the Aquantia 10G drivers (which is a good thing), probably as a result of the kernel downgrade. I no longer have the network drop-out issues.
RC7:
root@VM1:~# ethtool -i eth0
driver: atlantic
version: 5.3.8-Unraid-kern
firmware-version: 1.5.44

RC8:
root@VM1:~# ethtool -i eth0
driver: atlantic
version: 2.0.3.0-kern
firmware-version: 3.1.44
-
I had a similar issue with my Asus Zenith Extreme Alpha Threadripper motherboard, except that I couldn't power it down and have it stay off; it would immediately power itself back on. Resetting the BIOS to defaults and then reapplying the changes needed for VMs fixed it.
-
3 hours ago, navilov said:
To incorporate this driver as a module, pick M here: the module will be called ipvlan.
Is there a link missing or a post to more information? There's not much to go on here.
-
2 hours ago, Can0nfan said:
it does seem most likely to be an issue introduced in unRAID 6.7 RC'S and the microsfot beta RDP client for my 27" 5K iMac, while still sluggish via a windows 10 laptop i have its not nearly as bad as when i use the RDP for my iMac.
I've concurrently remoted in using RDP (from Windows) to seven Win10 VMs on RC8 with no lag at all on any of them, and the same with noVNC. The difference in my case is that the VMs run on two off-array SSDs.
Is it sluggish using noVNC? Any cross-pinned CPUs (I don't even know if it allows that)? Try creating a new VM template with the same settings and drives using the GUI to rule out some kind of XML issue.
-
1 hour ago, Can0nfan said:
my three are however my windows 10 is awwwfuly sluggish and slow..still trying to see if its the mac rdp client or vm itself
Are you running an overclocked Threadripper? My Windows 10 VMs are extremely sluggish, to the point of not even being viable, with even a very modest overclock, but rock-solid at stock clocks. I haven't identified the reason yet.
-
Have you tried the last stable version?
-
Yup, I'll end up going that route for the time being.
Forgot to add that the keyboard was lost at the exact moment the loading logo vanished and the video driver initialized the display.
-
It's still happening on RC7. I booted the Win10 VM four times; the passed-through mouse worked all four times, but the keyboard only on the third boot (it failed again on the fourth).
-
3 minutes ago, dedi said:
I reverted back to 6.7.2 and the problem REMAINS.
You'll want to try reseating or replacing your SATA cable. The problem likely existed prior to upgrading, but you didn't notice it.
-
I benchmarked my VM (AIDA64) with and without the cpu/numa block, with the physical RAM assigned to one NUMA node. The read/write/copy speeds were comparable, as expected, but there was a 0.4 ns decrease in memory latency using the numa block.
Unless the physical RAM is split up too, the only advantage of the cpu/numa block is matching the physical CPU/NUMA configuration, which does provide a performance improvement in benchmarks.
-
5 hours ago, bastl said:
@testdasi I know it always depends on the workload. My question was if there is a benefit to "trick" VM into thinking it runs on multiple nodes
Some programs are NUMA-aware in that they'll prioritize their threads on one node vs. another. If the node assignment matches the physical server's, you will see a benefit. I did on mine, but I don't recall the percentages.
-
It's the following block (not in the <cpu> XML tree) that causes the RAM to be split up between physical NUMA nodes. My understanding is that the <numa> block makes the guest OS think the RAM is split between the virtual nodes.
<numatune>
  <memory mode='interleave' nodeset='0,2'/>
</numatune>
I got at least a 19% improvement in memory operations (read/write/copy) at the cost of higher latency, which showed as a 2% increase in CPU load in my use case.
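For contrast with the <numatune> host-placement block above, a guest-visible split is declared inside the <cpu> element in the libvirt domain XML. A sketch with placeholder cell sizes and CPU ranges (not my actual config), splitting 16 GiB of guest RAM across two virtual nodes:

```xml
<cpu>
  <topology sockets='1' cores='8' threads='1'/>
  <numa>
    <!-- each cell is one virtual NUMA node the guest OS will see -->
    <cell id='0' cpus='0-3' memory='8388608' unit='KiB'/>
    <cell id='1' cpus='4-7' memory='8388608' unit='KiB'/>
  </numa>
</cpu>
```

To get the benchmark gains discussed above, the cell-to-CPU mapping would need to mirror the physical node layout of the pinned host CPUs.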
-
Able to reproduce in RC6.
-
I guess support could be added by removing the same number of CPUs from the different NUMA IDs. Take 1 off: take 1 off ID=1. Take 2 off: take 1 off ID=1 and 1 off ID=2. Etc.
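That even-removal idea can be sketched in a few lines of Python. This is a hypothetical helper, not actual Unraid code; the dict shape (node ID to vCPU list) is an assumption for illustration:

```python
def remove_vcpus(numa_nodes, count):
    """Remove `count` vCPUs round-robin across NUMA nodes so the
    nodes stay as balanced as possible.

    numa_nodes: dict mapping node id -> list of vCPU ids.
    Returns a new dict with `count` vCPUs removed.
    """
    nodes = {nid: list(cpus) for nid, cpus in numa_nodes.items()}
    order = sorted(nodes)               # cycle node ids in order
    i = 0
    while count > 0 and any(nodes.values()):
        nid = order[i % len(order)]
        if nodes[nid]:                  # skip nodes already emptied
            nodes[nid].pop()            # drop that node's last vCPU
            count -= 1
        i += 1
    return nodes
```

For example, removing 2 vCPUs from `{1: [0, 1, 2, 3], 2: [4, 5, 6, 7]}` takes one from ID=1 and one from ID=2, matching the scheme described above.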
-
I'm able to reproduce it. I had missed a custom configuration that causes the error. I've updated the entry above to include the cpu/numa tree.
The CPU Pinning editor preserves the <cpu> block, but the presence of the numa tags causes an invalid CPU assignment error when removing a pinned CPU: internal error: Number of CPUs in <numa> exceeds the <vcpu> count
At this point, the Cam 1 VM no longer existed.
Since the GUI editor doesn't understand the hardware NUMA assignments I'm duplicating inside the VM, it can't properly edit this cpu/numa tree. I recommend checking whether this XML tree exists and, if so, not allowing an edit via the CPU Pinning page.
I've attached the full VM XML.
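The mismatch libvirt rejects could be detected with a quick sanity check before saving an edit. A hypothetical sketch using Python's standard ElementTree, with element names per the libvirt domain XML format (the function name and overall approach are my own, not anything in Unraid):

```python
import xml.etree.ElementTree as ET

def numa_vcpu_mismatch(domain_xml):
    """Return True when the <cpu><numa> cells reference more CPUs
    than <vcpu> allows -- the condition behind libvirt's
    'Number of CPUs in <numa> exceeds the <vcpu> count' error."""
    root = ET.fromstring(domain_xml)
    vcpus = int(root.findtext("vcpu"))
    numa_cpus = set()
    for cell in root.findall("./cpu/numa/cell"):
        for part in cell.get("cpus", "").split(","):   # e.g. "0-3,8"
            if "-" in part:
                lo, hi = part.split("-")
                numa_cpus.update(range(int(lo), int(hi) + 1))
            elif part:
                numa_cpus.add(int(part))
    return len(numa_cpus) > vcpus
```

Run against the edited XML before writing it back, the editor could refuse the change (or leave the cpu/numa tree untouched) instead of producing a VM that libvirt rejects.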
-
Downgraded to RC5, no issue with the keyboard passthrough.
-
Changed Status to Closed
-
The "-kern" in "5.3.8-Unraid-kern" is appended to the driver version by the driver itself (ver.h), so the driver in use is "5.3.8-Unraid".
I see tags for 5.3 with RCs up to 5.3-RC8. There's a 5.4 RC branch but no 5.4 release yet. I guess I need to look into how to update the firmware on the card itself, if possible.
Marking this as closed, as it seems I have a firmware error in the NIC itself rather than a driver issue.
-
To clarify: it's a dealbreaker for using the 10G card or not.
-
22 hours ago, limetech said:
That's a pretty old report and you're running "5.3.8-Unraid-kern" which I don't know that that is.
What exactly do you want us to do?
I'm not loading anything for the LAN card; whatever driver is used is decided by the Unraid OS. I don't know whether "version" or "firmware-version" represents the driver version, but if I had to choose, it's 1.5.44. My understanding is that the bundled driver is an older version, which seems to be the same with other Linux distributions, and the old version has network drops.
My goal is to have the driver updated to a version equal to or greater than 1.6.13 so I can use my 10-gig LAN card without network drops. I plan to use Unraid as a VM host running 4-6 Windows 10 VMs, each pushing NDI video streams to other PCs and to YouTube for my Foster Kitten Cam, so connection breaks are a dealbreaker. The motherboard has a 1G NIC that I could use, but I'd rather use the 10G one.
I had better Google foo on my phone just now than I did on my PC and found what looks to be the source. The stuff I found yesterday had "beta" all over it.
https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/aquantia
6.6.0-rc1 Gui Boot - locked at low resolution
-
Posted in Prereleases
Stumbled across this thread. GUI boot has been 1024x768 for me since forever, or however long I've been using GUI mode for the added NUMA support tools. I'd always assumed that was just how it is. I also have two video cards: a GT 1030 for Unraid and a Quadro P2000 for a VM.
Is the Unraid GUI supposed to run at 1920x1080?