mazice

Members · 6 posts

Everything posted by mazice

  1. It's not on by default. You can enable it by editing the container, enabling Advanced view, and setting the Extra Parameters field like so: mongo --auth (see the docker-run sketch after this list).
  2. I'm loving your concept! Passing through a PCI-USB card to each of the 2 VMs might be an adequate solution. I suspect that you'll need PCI risers given your MoBo layout and how the GPUs are blocking the PCI slots (and most likely a chassis modification). Some questions out of pure curiosity: how is your core assignment for 4 VMs with 6 cores? 1 core each, leaving 2 free? Does a single core suffice for your gaming needs when all the VMs are in use?
  3. Almost the exact same thing here. I have very few plugins, none of them resource intensive at all (all of them are Dynamix extensions), not a single Docker, and 3 VMs running W10 with 4GB+4GB+3GB of RAM, out of 16GB of available RAM. If I set them up with 4GB+4GB+4GB, all of them immediately shut down once they're all up. I've done a full memtest and everything is correct, 0 errors. It seems that the RAM overhead needed for each VM is roughly ~25% of its allocated memory. unRAID reports 200MB of used RAM and 1.4GB of cached RAM when all VMs are off, which should leave plenty of room to allocate 12GB to VMs. (I can provide any logs required.) I think there needs to be at least a tool, or a configuration option, that lets unRAID users reserve some amount of RAM for KVM as a whole or for each VM in particular, so that unRAID at least logs some kind of error or halts the startup of a VM instead of shutting everything down (a rough sketch of what I mean is below, after this list). Edit: here's a snapshot of RAM usage with all 3 VMs up (4GB+4GB+3GB), and one with all VMs off: [screenshots attached]
  4. Hi! Are you willing to ship internationally via USPS to Argentina? I'm mostly interested in the small parts (everything but the case and the mobo). Thanks.
  5. I can see where this may be applicable. Sharing, say, a games library between multiple VMs would be a painless process if this could be achieved. As of this time, setting up a Share as a Windows network drive works half of the time for things like Steam libraries, and it doesn't work at all for Battle.net games (they totally forbid games on network drives). In the case of Steam libraries: roughly half the games I've tested (~100) work perfectly when they sit on a network drive and are opened, and read, by Steam instances in multiple VMs at the same time (since they mostly read stuff and save their settings to each VM's Documents folder). But the other half, the ones that have to make changes to their game files, fail to even launch most of the time, since they dislike being on a network drive. In those cases the games have to be installed 'locally' on each VM. Somehow mounting a Share as a local drive might 'trick' those programs into behaving as if installed on a local drive (one idea worth testing is sketched after this list). However, I suspect we're out of luck in the cases where a program locks files to itself. So far I haven't had any luck on this subject: software that maps network drives as local drives usually has lots of issues as far as I've tested, and the most useful workaround I've found is to sync a local folder to its network drive (there are plenty of tools like that; I'm using DSynchronize right now), at the cost of disk space.
  6. Interestingly enough, I've seen this same issue happen with virtual network adapters. I've noticed that virtual adapters have a delayed start (sometimes up to 5 seconds) after Windows boots; maybe something about the virtio drivers? One (kind-of) workaround is to have software that maps your network drives as soon as the LAN is reachable, which could be done with a simple ping batch that loops till it gets a response and then executes the drive mapping (see the batch sketch after this list). Note that passing through a NIC to a Windows 10 VM works flawlessly, so this points even harder to it being a virtio driver issue.
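
On post 1: a rough docker-run equivalent of that container edit, for anyone doing it outside the unRAID UI. This is a sketch, not the template itself; the container name and appdata path are assumptions, adjust them to your setup.

```bash
# Equivalent of the container edit described in post 1: arguments placed
# after the image name are handed to the mongo image's entrypoint, which
# prepends mongod, so the server starts with authentication enabled.
docker run -d \
  --name=mongodb \
  -p 27017:27017 \
  -v /mnt/user/appdata/mongodb:/data/db \
  mongo --auth
```

Remember that with --auth enabled you'll need credentials to connect; MongoDB's localhost exception lets you create the first admin user from inside the container.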
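On post 3: since that reservation tool doesn't exist, here's a minimal sketch of the kind of guard I mean, run on the unRAID host before starting a VM. The VM name, allocation, and the ~25% overhead figure from my own tests are assumptions; tune them to your setup.

```bash
#!/bin/bash
# Hypothetical pre-start guard: only start the VM if enough RAM is
# available, instead of letting overcommit take every VM down.
VM="Windows10-3"      # example VM name
WANT_MB=4096          # RAM allocated to this VM
OVERHEAD_PCT=25       # rough per-VM overhead observed above
NEED_MB=$(( WANT_MB * (100 + OVERHEAD_PCT) / 100 ))

# MemAvailable is reported in kB; convert to MB.
AVAIL_MB=$(awk '/MemAvailable/ {print int($2/1024)}' /proc/meminfo)

if [ "$AVAIL_MB" -lt "$NEED_MB" ]; then
  echo "Not starting $VM: need ~${NEED_MB}MB, only ${AVAIL_MB}MB available" >&2
  exit 1
fi

virsh start "$VM"
```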
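On post 5: one trick I'd test for the 'local drive' illusion, hedged because every launcher behaves differently: an NTFS directory symlink pointing at the share, which some programs treat as a plain local folder. The share path and target folder are examples.

```bat
:: Run in an elevated command prompt inside the VM.
:: \\TOWER\games is an example unRAID share; adjust to your own.
mklink /D "C:\Games" "\\TOWER\games"
```

Programs that lock files to themselves will still clash across VMs, so at best this helps with the 'refuses to run from a network drive' class of failures.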
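On post 6: the ping batch I described, roughly. The server name, drive letter, and share are examples; drop something like this in the VM's startup folder.

```bat
@echo off
:: Loop until the server answers, then map the drive.
:: Checking for "TTL=" avoids false positives from
:: "destination host unreachable" replies, which still exit with 0.
:wait
ping -n 1 TOWER | find "TTL=" >nul
if errorlevel 1 (
    timeout /t 2 /nobreak >nul
    goto wait
)
net use Z: \\TOWER\share /persistent:no
```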