kode54

Members

  • Content Count: 246
  • Joined
  • Last visited

Community Reputation: 10 Good

1 Follower

About kode54

  • Rank: Member
  • Gender: Undisclosed

  1. Bumping to prove that, under certain conditions, great things can happen for no apparent reason other than (hopefully) more optimal configuration settings. After adding the QEMU namespace to my domain tag:

       <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>

     and adding the following -cpu override at the end, just before the closing domain tag:

       <qemu:commandline>
         <qemu:arg value='-cpu'/>
         <qemu:arg value='host,kvm=off,hv_relaxed,hv_spinlocks=0x1fff,hv_vapic,hv_time,hv_vendor_id=Microsoft'/>
       </qemu:commandline>
  2. Bumping this old topic to get it some more attention, especially since a branch with Nexenta ZFS-based TRIM support is waiting to be accepted into the main line. I for one would love to see ZFS support replace BTRFS use: create an n-drive zpool from the current cache drive setup, and create specialized, quota-limited ZFS datasets for the Docker and libvirt configuration mount points (a rough sketch follows this list). Yes, Docker supports ZFS. And from what I've seen, one only wants to stick with BTRFS on a system they're ready to nuke at a moment's notice.
  3. There is a trick to switching over (see the sketch after this list):
     1) Add another hard drive, maybe 1 MB or slightly larger, and make it VirtIO.
     2) Boot the VM.
     3) Install the viostor driver for your OS so it can use the tiny image you attached above.
     4) Shut down the VM.
     5) Delete the temporary mini VirtIO drive.
     6) Change your boot image to VirtIO.
     7) It should boot fine now.
  4. -Delete-

     Whatever it was, it obviously wasn't important enough to share.
  5. To install the driver in unRAID, he'd need a version of ALSA built for the correct kernel.
  6. Updated CrystalDiskMark shots, with a new VM backed by an XFS cache drive. The first one is using cache='none' io='native'; the second one is using the defaults that always get overwritten by the template editor, cache='writeback' with no io parameter (see the sketch after this list). The io='native' mode appears to reflect the actual drive performance, with the overhead of XFS and virtualization factored in.
  7. Wtf?

     Dec 12 18:23:50 unraid root: plugin: running: /boot/packages/python-2.7.5-x86_64-1.txz
     Dec 12 18:23:50 unraid root:
     Dec 12 18:23:50 unraid root: +==============================================================================
     Dec 12 18:23:50 unraid root: | Upgrading python-2.7.9-x86_64-1 package using /boot/packages/python-2.7.5-x86_64-1.txz
     Dec 12 18:23:50 unraid root: +==============================================================================
     Dec 12 18:23:50 unraid root:
     Dec 12 18:23:50 unraid root: Pre-installing package python-2.7.5-x86_64-1...
     Dec 12 18:23:55 unraid root:
  8. 6.3.0-rc already knows about the newer VirtIO ISO downloads. Presumably none of these were added to the newer 6.2 releases because they were presumed to be incompatible with the older QEMU?
  9. Proposing the inclusion of the atop package from SlackBuilds, for moments when it is useful to monitor which resources are maxing out in a system. It handily displays color-coded load percentages for memory and disks, and blinks a status line red when a resource is maxed out. It may be useful in tracking down the overburdening issues some people are experiencing.
  10. It is possible if you use virt-manager to configure your VMs (see the sketch after this list), but you won't be able to connect to them from the WebUI, since there is no web SPICE client in unRAID. There are already performance issues with the stock noVNC client, so you may experience better performance with a native client anyway.
  11. Try deleting network.cfg, rebooting, and reconfiguring your network settings from scratch? That fixed my Docker issues after downgrading, and it may fix your Samba issues after upgrading. Maybe also keep a backup copy and compare how the original and the recreated version differ.
  12. Found the Dockerfile, which imports from this base image. It looks like you just mount anything you want to be accessible to the image. A simple start is to mount /mnt as /mnt, read-only. Then you forward whichever port(s) you want to access to whatever public ports you want to reach them on (a rough sketch follows this list). The base image exposes an HTTPd with a web RDP client on port 8080, and a direct RDP service on port 3389.
  13. My results are also somewhat different. I've also taken to isolcpus and using only two cores / four threads for the virtual machine. I also have the two disk images hosted on an lz4-compressed ZFS pool on my SSDs.

      <domain type='kvm' id='1'>
        <name>Windows 10</name>
        <uuid>xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx</uuid>
        <metadata>
          <vmtemplate xmlns="unraid" name="Windows 10" icon="windows.png" os="windows10"/>
        </metadata>
        <memory unit='KiB'>8388608</memory>
        <currentMemory unit='KiB'>8388608</currentMe
  14. Quick add on scrapping the optical drives: I tend to keep exactly one optical drive around after I'm fully installed, with no <source> line, implying an empty drive (see the sketch after this list). This way, I can use virsh or virt-manager to hot-mount images if necessary. You can't hot-attach or detach drives, though.
  15. Note that this only works for snapshots taken while the VM is powered off (see the sketch after this list). Live snapshots will require the memory and system state data from the /var/lib/libvirt/images directory. Of course, if you're using passthrough of anything but USB devices, you're already prevented from taking live snapshots.
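
A rough sketch of what item 2 proposes, assuming a hypothetical pool named cache built from the existing cache devices; the device names, quota sizes, and mount points below are illustrative, not taken from the post:

     zpool create cache mirror /dev/sdb /dev/sdc
     # quota-limited datasets for the service mount points
     zfs create -o compression=lz4 -o quota=100G cache/docker
     zfs create -o compression=lz4 -o quota=20G cache/libvirt
     zfs set mountpoint=/var/lib/docker cache/docker
     zfs set mountpoint=/etc/libvirt cache/libvirt

Docker's zfs storage driver can then use the docker dataset directly, which is the "Yes, Docker supports ZFS" part of the post.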
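
For the driver-bootstrapping trick in item 3, the throwaway disk can be created with qemu-img; the path and file name are placeholders:

     qemu-img create -f raw /mnt/user/domains/virtio-dummy.img 1M
     # attach it to the VM as an extra disk with bus='virtio' (VM editor or virsh edit),
     # boot, install viostor from the virtio-win ISO, shut down, delete this dummy disk,
     # then switch the real boot disk's bus to virtio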
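
The two disk <driver> settings compared in item 6 look roughly like this in the domain XML (type='raw' is an assumption; match it to your image format):

     <driver name='qemu' type='raw' cache='none' io='native'/>   <!-- first run -->
     <driver name='qemu' type='raw' cache='writeback'/>          <!-- template default, no io attribute -->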
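
For item 10, pointing a VM at SPICE instead of VNC via virt-manager comes down to a <graphics> element along these lines (the listen address is only an example):

     <graphics type='spice' autoport='yes' listen='0.0.0.0'/>

A native client such as virt-viewer can then connect to the assigned port; the unRAID WebUI's noVNC client cannot.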
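
A possible run command for item 12; the image name is a placeholder and the host-side ports are arbitrary, while 8080 and 3389 are the container ports the post says the base image exposes:

     docker run -d \
       -v /mnt:/mnt:ro \
       -p 8080:8080 \
       -p 3389:3389 \
       the-rdp-image
     # 8080 = HTTPd with the web RDP client, 3389 = direct RDP service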
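
The empty optical drive from item 14 would look something like this; the target name and bus are guesses, and the key point is the missing <source> element:

     <disk type='file' device='cdrom'>
       <driver name='qemu' type='raw'/>
       <target dev='hda' bus='ide'/>
       <readonly/>
     </disk>

An ISO can then be loaded into it at runtime with virt-manager or virsh change-media.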
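
For item 15, an offline snapshot round-trip with virsh looks roughly like this, assuming qcow2-backed disks; the VM and snapshot names are placeholders:

     virsh shutdown "Windows 10"
     virsh snapshot-create-as "Windows 10" clean-install "before driver experiments"
     virsh snapshot-list "Windows 10"
     virsh snapshot-revert "Windows 10" clean-install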