jbartlett

Community Developer
Report Comments posted by jbartlett

  1. Stumbled across this thread. GUI boot has been 1024x768 for me for as long as I've been using GUI mode for the added NUMA support tools. I'd always assumed that was just how it is. I also have two video cards: a GT 1030 for Unraid and a Quadro P2000 for a VM.

     

    Is the Unraid GUI supposed to be at 1920x1080?

  2. Adding for posterity that RC8 also updated the Aquantia 10G driver (which is a good thing), probably as a result of the kernel downgrade. I no longer have the network drop-out issues.

     

    RC7
    root@VM1:~# ethtool -i eth0
    driver: atlantic
    version: 5.3.8-Unraid-kern
    firmware-version: 1.5.44
    
    RC8
    root@VM1:~# ethtool -i eth0
    driver: atlantic
    version: 2.0.3.0-kern
    firmware-version: 3.1.44


  3. 2 hours ago, Can0nfan said:

    it does seem most likely to be an issue introduced in the unRAID 6.7 RCs and the Microsoft beta RDP client for my 27" 5K iMac; while it's still sluggish via a Windows 10 laptop I have, it's not nearly as bad as when I use the RDP client on my iMac.

    I've concurrently remoted in using RDP (from Windows) to seven Win10 VMs on RC8 with no lagging at all on any of them. Same with using NoVNC. The difference in my case is that the VMs are running on two off-array SSDs.

     

    Is it sluggish using NoVNC? Any cross-pinned CPUs (I don't even know if it allows that)? Try creating a new VM template with the same settings and drives using the GUI to rule out some kind of XML issue; comparing the two definitions, as sketched below, can highlight any differences.
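
    A rough way to compare the old and new definitions is to dump both with virsh and diff them; the domain names here are placeholders, not actual VM names:

    # Win10-old / Win10-new are placeholder domain names
    virsh dumpxml Win10-old > old.xml
    virsh dumpxml Win10-new > new.xml
    diff -u old.xml new.xml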

  4. 1 hour ago, Can0nfan said:

    my three are, however my Windows 10 is awfully sluggish and slow.. still trying to see if it's the Mac RDP client or the VM itself

    Are you running an overclocked Threadripper? My Windows 10 VMs are extremely sluggish, to the point of not even being viable, with even a very modest overclock, but they're rock-solid at stock clocks. I haven't identified a reason why yet.

  5. I benchmarked (AIDA64) my VM with and without the cpu/numa block, with the physical RAM assigned to one NUMA node. The read/write/copy speeds were comparable, as expected, but there was a 0.4 ns decrease in memory latency using the numa block.

     

    Unless the physical RAM is split up too, the only advantage of having the cpu/numa block is matching the physical CPU/NUMA configuration, which does provide an improvement in benchmark performance. A quick way to confirm the physical layout is sketched below.
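
    As a quick sketch (output will vary per server), the host's physical NUMA layout can be confirmed before mirroring it in the VM definition:

    # list each physical node's CPUs and memory
    numactl --hardware
    # condensed NUMA summary (node count and per-node CPU lists)
    lscpu | grep -i numa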

  6. 5 hours ago, bastl said:

    @testdasi I know it always depends on the workload. My question was whether there is a benefit to "tricking" the VM into thinking it runs on multiple nodes.

    Some programs are NUMA-aware in that they'll prioritize their threads on one node vs. another. If the node assignment matches the physical server, then you will see a benefit. I did on mine, but I don't recall the percentages.

  7. It's the following block (not in the <cpu> XML tree) that causes the RAM to be split up between physical NUMA nodes. It's my understanding that the <numa> block makes the guest OS think the RAM is split up between the virtual nodes.

    <numatune>
      <memory mode='interleave' nodeset='0,2'/>
    </numatune>

    I got at least a 19% improvement in memory operations (read/write/copy) at the cost of higher latency. This showed up as a 2% increase in CPU load in my use case. A sketch of the guest-side <numa> block is below for comparison.
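
    For comparison, a minimal sketch of the guest-side <numa> block, which lives inside the <cpu> tree; the CPU ranges, topology, and memory sizes here are placeholders rather than values from my VM:

    <cpu mode='host-passthrough'>
      <topology sockets='1' cores='8' threads='2'/>
      <numa>
        <!-- two virtual cells: the guest sees its 16 vCPUs and RAM split across them -->
        <cell id='0' cpus='0-7' memory='8388608' unit='KiB'/>
        <cell id='1' cpus='8-15' memory='8388608' unit='KiB'/>
      </numa>
    </cpu>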

  8. I'm able to reproduce it. I had missed a custom configuration that causes the error to happen. I've updated the above entry to include the cpu/numa tree.

     

    The CPU Pinning editor preserves the <cpu> block, but the existence of the numa tags causes an invalid CPU assignment error when removing a pinned CPU: "internal error: Number of CPUs in <numa> exceeds the <vcpu> count"

     

    At this point, the Cam 1 VM no longer existed.

     

    Since the GUI editor doesn't understand the hardware NUMA assignments I'm duplicating inside the VM, it can't properly edit this cpu/numa tree. I recommend checking whether this XML tree exists and, if it does, not allowing an edit via the CPU Pinning page. An illustration of the mismatch that triggers the error follows.
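
    As an illustration only (the counts and memory sizes here are made up, not taken from Cam1VM.xml): removing a pinned CPU lowers the <vcpu> count below the total number of CPUs defined in the <numa> cells, which libvirt then rejects:

    <vcpu placement='static'>6</vcpu>  <!-- reduced by the pinning editor -->
    <cpu mode='host-passthrough'>
      <numa>
        <!-- cells still define 8 CPUs (0-7), exceeding the vcpu count of 6 -->
        <cell id='0' cpus='0-3' memory='4194304' unit='KiB'/>
        <cell id='1' cpus='4-7' memory='4194304' unit='KiB'/>
      </numa>
    </cpu>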

     

    I've attached the full VM XML.

    Cam1VM.xml

  9. The "-kern" in "5.3.8-Unraid-kern" is appended onto the driver version string by the driver itself (ver.h), so the driver in use is "5.3.8-Unraid".

     

    I see tags for 5.3 with RCs up to 5.3-RC8. There's a 5.4 RC branch but no 5.4 release yet. I guess I need to look into how to update the firmware on the card itself, if that's possible.

     

    Marking this as closed, as it seems I have a firmware issue in the NIC itself rather than a driver issue.

  10. 22 hours ago, limetech said:

    That's a pretty old report, and you're running "5.3.8-Unraid-kern", and I don't know what that is.

    What exactly do you want us to do?

    I'm not loading anything for the LAN card; whichever driver gets used is decided by the Unraid OS. I don't know whether the version or the firmware version represents the driver version, but if I had to choose, it's 1.5.44. It's my understanding that the bundled driver is an older version, which seems to be the case with other Linux distributions as well, and the old version has the network drops.

     

    My goal for this is to have the driver updated to a version equal to or greater than 1.6.13 so I can use my 10 gig LAN card without network drops. I plan to use Unraid as a VM host running 4-6 Windows 10 VMs, each pushing NDI video streams to other PCs and to YouTube for my Foster Kitten Cam, so connection breaks are a deal breaker. The motherboard has a 1G NIC that I could use, but I'd rather use the 10G one.

     

    I had better Google-fu on my phone just now than I did on my PC and found what looks to be the driver source. The stuff I found yesterday had beta all over it.

     

    https://github.com/torvalds/linux/tree/master/drivers/net/ethernet/aquantia