Ph9214

Members
  • Content Count

    66
  • Joined

  • Last visited

Community Reputation

0 Neutral

About Ph9214

  • Rank
    Newbie

Converted

  • Gender
    Male


  1. Yes, I have the latency spikes too. Although it is mostly unnoticeable, it is annoying as h*ll that I can't isolate the memory when it is a listed feature of KVM and libvirt from Red Hat, the lead developers!
  2. I am setting up a Threadripper 1900X 8-core, 16-thread, 2-NUMA-node system that I am trying to split into two VMs, each using resources from only one NUMA node, for hopefully lower latency than bare metal. However, I am having difficulty isolating the memory, as my attempts so far have failed. I have disabled memballoon and followed some instructions from Red Hat, putting <numatune> <memory mode='strict' nodeset='1'/> </numatune> in my XML, but it has been ignored, as revealed by running "numastat" against qemu-kvm. If you have set up a Threadripper system and have managed to isolate VMs, please share how.
  3. Systems using the Threadripper platform have more than one NUMA node. Unraid currently ignores this when creating VMs, which leads to higher latency. I would recommend that a warning appear if resources cross NUMA nodes while creating a VM, and that memory be assigned on the NUMA node that the VM will be using.
  4. When using the following in my Windows 10 XML... <numatune> <memory mode='strict' nodeset='1'/> </numatune> ...the option is ignored and memory access is still split across NUMA nodes.
  5. I was interested in making something similar and that would be a great starting point. Sent from my XT1687 using Tapatalk
  6. Linus Tech Tips found this one in this video, but it's $100, yikes.
  7. Sorry, I'm pretty inexperienced with Unraid, but what I probably should have said is: don't use a cache drive on a share where you store vdisks that are larger than the cache drive. And you actually do use parity drives in RAID 5 and 6; they are usually the main limiting factor on writes. In RAID 5 the parity is striped across all the disks in the array, so a file you write is written simultaneously to all drives in the array. In effect the filesystem is split over a multitude of disks, where the data needed to recover from a failed disk (parity info) is also written across all drives.
  8. If anyone has a SUPERMICRO MBD-X10SRL-F LGA 2011 R3, could you post your system devices page, ideally with all the PCIe slots filled, so we can see how well it handles IOMMU grouping?
  9. Would really like to see some community testing on this thread. The ASRock Z170 Extreme7+ and ASRock Z170 Gaming i7 (which are basically identical) have 3 isolated x16 slots, but the 4th (top middle) x16 slot and all the other PCIe lanes are grouped with the chipset and SATA controllers, and the USB ports are also bonded to the chipset's groups. I can post a full copy-paste of the system devices page when I get home.
  10. So-Very-True!!! I would agree that you have both identified the problem. However, I did fix it (with "documents" as a share name) by creating a new share (identical config) named Documents2, verifying it worked, deleting the documents share, and then renaming Documents2 to Documents. And everything is working. Given that I agree with your assessments of the 'default' names, why would it now work? It seems like as soon as I renamed Documents2 to Documents, the conflict should show up again... Thoughts? It may be that the actual path, and not the name, is still D
  11. "Oops, forgot the link. And you are right, but just in case you had insane amounts of RAM this could be useful; I thought it would also be interesting." "So did you forget it again? I assume you meant to include it with this comment, but there's still no link 8)" You just missed it, Gary; it's a RAM link, it's not persistent.... ;D
  12. "Oops, forgot the link. And you are right, but just in case you had insane amounts of RAM this could be useful; I thought it would also be interesting." "So did you forget it again? I assume you meant to include it with this comment, but there's still no link 8)" Sorry, I just updated the original post and didn't think about putting it in the reply.
  13. "I have a Command Line plugin that backs up /root on system shutdown then restores it on startup. It's useful for bash history, SSH authorized keys, mc and htop settings, and any scripts you have there. It also includes shellinabox, which is a web-based terminal. And an awesome ASCII lime and system info when you log in." What is it? http://lime-technology.com/forum/index.php?topic=42683.msg406446.msg#406446 Thanks, I'll try that when I get home.
  14. They will cause latency in some situations, putting it up from the standard ~0.5 to ~100!!! You can check this with LatencyMon. You may also want to disable thermal throttling.
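For anyone attempting the NUMA isolation described in posts 2–4, here is a minimal sketch of what the libvirt domain XML might look like. It is illustrative only: the cpuset ranges and node numbers are assumptions for a 1900X-style two-node layout, not a verified Unraid configuration. Note that `<numatune>` must sit directly under `<domain>`, and pinning the vCPUs to cores on the same node with `<cputune>` is usually needed as well:

```xml
<domain type='kvm'>
  <!-- Assumption: NUMA node 1 owns host CPUs 8-15 on this box -->
  <vcpu placement='static'>8</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='8'/>
    <vcpupin vcpu='1' cpuset='9'/>
    <!-- ...one vcpupin per guest vCPU, all on node-1 cores... -->
  </cputune>
  <numatune>
    <!-- strict: fail the allocation rather than spill onto node 0 -->
    <memory mode='strict' nodeset='1'/>
  </numatune>
  <devices>
    <!-- no balloon, so memory stays where it was allocated -->
    <memballoon model='none'/>
  </devices>
</domain>
```

After starting the guest, `numastat -p <qemu pid>` should show essentially all of the guest's memory on node 1; if it is still split, check that the `<numatune>` element really is a direct child of `<domain>`, since libvirt silently drops elements it finds in unexpected places.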
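The RAID 5 description in post 7 boils down to XOR parity: the parity stripe is the XOR of the data stripes, so any single lost stripe can be rebuilt from the survivors. A toy sketch in Python (purely illustrative; block names and sizes are made up, and this is not Unraid's or md's actual code):

```python
# Toy illustration of RAID 5-style parity: parity is the byte-wise
# XOR of the data blocks, so any one missing block is recoverable.
def xor_parity(blocks):
    """Return the byte-wise XOR of equally sized data blocks."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, b in enumerate(block):
            parity[i] ^= b
    return bytes(parity)

data = [b"disk0", b"disk1", b"disk2"]   # equal-sized "stripes"
p = xor_parity(data)

# Simulate losing disk 1: XOR the survivors with the parity block.
recovered = xor_parity([data[0], data[2], p])
assert recovered == data[1]
```

The same XOR trick is why a single-parity array survives exactly one failed disk: with two stripes missing, the XOR of the survivors no longer pins down either one.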