Posts posted by kode54

  1. I downgraded, installed the ZFS plugin, and replaced my cache partition with a ZFS pool. Then I moved all my data, which I had backed up to the array, back onto the new ZFS pool at /mnt/zfs.

     

    Then I realized I should remake my Docker image, since the existing one would be too new for the version of Docker shipped in this release.

     

    It won't start, though:

     

    Dec 3 20:38:37 Tower root: starting docker ...
    Dec 3 20:38:37 Tower avahi-daemon[4866]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Dec 3 20:38:37 Tower avahi-daemon[4866]: New relevant interface docker0.IPv4 for mDNS.
    Dec 3 20:38:37 Tower avahi-daemon[4866]: Registering new address record for 172.17.0.1 on docker0.IPv4.
    Dec 3 20:38:37 Tower avahi-daemon[4866]: Withdrawing address record for 172.17.0.1 on docker0.
    Dec 3 20:38:37 Tower avahi-daemon[4866]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Dec 3 20:38:37 Tower avahi-daemon[4866]: Interface docker0.IPv4 no longer relevant for mDNS.
    Dec 3 20:38:37 Tower emhttp: shcmd (1723): umount /var/lib/docker |& logger
    Dec 3 20:39:01 Tower emhttp: shcmd (1735): set -o pipefail ; /usr/local/sbin/mount_image '/mnt/zfs/system/docker/docker.img' /var/lib/docker 20 |& logger
    Dec 3 20:39:01 Tower kernel: BTRFS info (device loop0): disk space caching is enabled
    Dec 3 20:39:01 Tower kernel: BTRFS: has skinny extents
    Dec 3 20:39:01 Tower root: Resize '/var/lib/docker' of 'max'
    Dec 3 20:39:01 Tower emhttp: shcmd (1736): /etc/rc.d/rc.docker start |& logger
    Dec 3 20:39:01 Tower kernel: BTRFS info (device loop0): new size for /dev/loop0 is 21474836480
    Dec 3 20:39:01 Tower root: starting docker ...
    Dec 3 20:39:01 Tower avahi-daemon[4866]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Dec 3 20:39:01 Tower avahi-daemon[4866]: New relevant interface docker0.IPv4 for mDNS.
    Dec 3 20:39:01 Tower avahi-daemon[4866]: Registering new address record for 172.17.0.1 on docker0.IPv4.
    Dec 3 20:39:01 Tower avahi-daemon[4866]: Withdrawing address record for 172.17.0.1 on docker0.
    Dec 3 20:39:01 Tower avahi-daemon[4866]: Leaving mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
    Dec 3 20:39:01 Tower avahi-daemon[4866]: Interface docker0.IPv4 no longer relevant for mDNS.
    Dec 3 20:39:02 Tower emhttp: shcmd (1738): umount /var/lib/docker |& logger

     

    It just stops instantly. Attempting to start the Docker daemon manually gives:

     

    INFO[0000] [graphdriver] using prior storage driver "btrfs" 
    INFO[0000] Graph migration to content-addressability took 0.00 seconds 
    INFO[0000] Firewalld running: false                     
    INFO[0000] Default bridge (docker0) is assigned with an IP address 172.17.0.0/16. Daemon option --bip can be used to set a preferred IP address 
    FATA[0000] Error starting daemon: Error initializing network controller: Error creating default "bridge" network: Failed to setup IP tables, cannot acquire Interface address: Interface docker0 has no IPv4 addresses

     

    Attaching diagnostics.

     

    E: New symptom. Bridged networking isn't working for VMs, either. E2: Needed to reboot the VM.
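
    For anyone hitting the same FATA error: a common recovery (a sketch I haven't verified on this box) is to delete the half-configured docker0 bridge so the daemon can recreate it with an address on the next start:

    # remove the stale bridge; Docker recreates docker0 when it starts
    ip link set docker0 down
    ip link delete docker0
    /etc/rc.d/rc.docker start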

    tower-diagnostics-20161203-2039.zip

  2. My Windows 10 VM suffered a MEMORY_MANAGEMENT BSOD during a gaming session several hours long. Then the UI and console both locked up, so I cold rebooted the machine. Later, during a routine check, I noticed my cache volume had a mess of errors.

     

    I attempted to use btrfs restore, but it only produced about 650MB of sparse data out of over 300GB.

     

    So I nuked my VMs and Docker containers, reformatted the cache, restored my AppData backup, and set up a new, much smaller Windows 10 VM. It's already up and running, with the video card passed through and Steam installed again. It's now re-fetching the game I left behind, and I'm hoping at least some of my saves synchronized over Steam Cloud. I've also given the VM 12GB of RAM instead of just 8.

     

    If only I had been running some sort of VM backup schedule.
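
    Something like this on a cron schedule would have done it; a minimal sketch (the domain name and paths are hypothetical, and it assumes the VM can afford to be shut down for the copy):

    #!/bin/bash
    # cleanly stop the guest, copy its vdisk to the array, then restart it
    virsh shutdown Windows10
    while virsh list --name | grep -q '^Windows10$'; do sleep 5; done
    cp --sparse=always /mnt/cache/domains/Windows10/vdisk1.img \
        /mnt/user/backups/Windows10/vdisk1.img
    virsh start Windows10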

  3. Full libvirt snapshots are created under /var/lib/libvirt/images, which lives in RAM and is lost on reboot.

     

    The only snapshots you'll be able to keep between reboots, for now, are qcow2 snapshots taken while the images are not in use. You'll have to manage those with qemu-img:

     

    qemu-img snapshot -c "new snapshot" <path to .img>
    qemu-img snapshot -l <path to .img>
    qemu-img snapshot -a "apply this snapshot" <path to .img>
    qemu-img snapshot -d "delete this snapshot" <path to .img>

     

    So: create, list, apply, delete. All except list take a snapshot name after their respective switches. I assume you'll want to do this with an image that isn't in use, or one belonging to a VM that's suspended to disk.
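
    For example, a round trip might look like this (the image path and snapshot name are hypothetical; these internal snapshots only work on qcow2 images):

    qemu-img snapshot -c "pre-update" /mnt/user/domains/Windows10/vdisk1.qcow2
    qemu-img snapshot -l /mnt/user/domains/Windows10/vdisk1.qcow2
    qemu-img snapshot -a "pre-update" /mnt/user/domains/Windows10/vdisk1.qcow2
    qemu-img snapshot -d "pre-update" /mnt/user/domains/Windows10/vdisk1.qcow2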

  4. No, it does not provide sound support.

     

    If you are on a Windows machine and need sound, you could try connecting via Remote Desktop / Terminal Services.

     

    Or you could try Splashtop Personal, or NoMachine, or X2Go.

  5. Looks like either your Samba client or your terminal client is using CP1251 or latin1 instead of UTF-8, so your Unicode is coming out broken. Byte 233 is the right character under a latin1 code page, but not under UTF-8, which encodes that character as two bytes.
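
    A quick illustration (any shell with iconv and xxd will do):

    # U+00E9 'é' is a single byte, 233 (0xE9), in latin1, but two bytes
    # in UTF-8 -- decode with the wrong code page and you get mojibake
    printf 'é' | xxd                              # c3 a9 (UTF-8)
    printf 'é' | iconv -f UTF-8 -t LATIN1 | xxd   # e9 (latin1)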

     

    It's a pity that even after all these years, PuTTY doesn't default to UTF-8 encoding.

  6. This is different from KVM+Qemu doing "save" on the VM state. This is ordering the guest to perform its own hibernate or suspend to disk. So long as the guest can save and restore the state of its own hardware, and so long as you feed it the same configuration on next boot, it should be perfectly safe. In fact, the XML should even maintain the exact same hardware layout, as long as you don't do anything to regenerate the PCI bus/slot/etc values.

     

    E: dompmsuspend -> domain power management suspend. Basically, it tells the guest, through the guest agent or ACPI, to perform its own S3 or S4 suspend. Adding the "disk" target makes it an S4 suspend. The guest takes care of shutting down the passed-in hardware, including saving any state it needs via the device drivers.
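
    A minimal sketch (assumes a domain named Windows10 with a working guest agent or ACPI support):

    virsh dompmsuspend Windows10 --target mem    # S3 / sleep
    virsh dompmwakeup Windows10                  # resume from S3
    virsh dompmsuspend Windows10 --target disk   # S4 / hibernate
    virsh start Windows10                        # after S4 the domain is off; booting it resumes from hibernate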

     

    The reason Qemu cannot reliably save its own state with passthrough is that it would require full support for every piece of hardware passed through, in order to know how and where to interrupt the guest connection and save the state data. Doing an S4 suspend in the guest OS takes that out of Qemu's hands. The guest has full drivers for every passed-through device, and as long as those devices supported hibernation on bare metal, they should also work under virtualization, assuming each device supports its own soft reset. Devices so flaky that they can't even reset to a usable state within a VM reboot cycle would of course break if the VM restarts without the host OS being rebooted, but I don't know which devices are that flaky.

  7. Kode,

     

    I have been using unRAID for ages... I cannot tell you how many HDDs have failed on me. More than one would think. Parity is what has saved me every time. I am a dual parity user now.

     

    Beg, borrow or steal for that parity drive.... AND make sure the data you are storing is copied elsewhere.

     

    I cannot say I disagree with you. Perhaps I shall ask for a 5TB hard drive for Christmas and employ it as a parity drive. And maybe also remove this 640GB drive, as it's kind of a joke to waste a port on it.

  8. Upgraded smoothly, and I'm preparing to remove the VFS Fruit settings from my Samba extras configuration, since they should be redundant now. Also, thanks to my new host reboot and array restart procedures, my Windows 10 VM now has several days more uptime than the host machine.

     

    That would be another useful thing, but I think it's already in a feature request topic? Doing dompmsuspend <domain> disk, which performs a suspend-to-disk, or else a shutdown if the guest isn't configured for it. That's acceptable so long as the virtual hardware configuration doesn't change (noticeably) between shutdown and restart. And it's possibly faster, since it won't trigger Windows Update if it's actually hibernating.

  9. E: Please move this to the CA Application Auto Update topic.

     

    The new version creates this crontab entry:

     

    0 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >dev/null 2>&1

     

    Note the missing leading slash on dev/null. It results in a notice to my email:

     

    Subj: cron for user root /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >dev/null 2>&1

    Message: /bin/sh: dev/null: No such file or directory
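
    The fixed entry just needs the slash restored:

    0 0 * * * /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >/dev/null 2>&1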

     

  10. Have you tried passing a VGA BIOS for the 550?

     

    Look, I even found your card. Unless you would rather dump it yourself.
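
    If you do want to dump it yourself, the usual sysfs approach looks something like this (a sketch; it assumes the card sits at 0000:01:00.0 and is not the GPU the host booted with; the output filename is up to you):

    # expose, read, and re-hide the card's ROM via sysfs
    echo 1 > /sys/bus/pci/devices/0000:01:00.0/rom
    cat /sys/bus/pci/devices/0000:01:00.0/rom > /tmp/vbios550.rom
    echo 0 > /sys/bus/pci/devices/0000:01:00.0/rom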

     

    You'll have to look a bit further in the forum or on Google or Bing if you want to research how to configure a BIOS with a passthrough card in libvirt XML syntax.

     

    Okay, for example, here is a device passed through with a ROM file, taken from a GitHub repository readme:

     

    <hostdev mode='subsystem' type='pci' managed='yes'>
      <source>
        <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
      </source>
      <rom file='/home/maikel/bios7850random.rom'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </hostdev>

     

    I'm not sure if 5xx cards have the same trouble with SeaBIOS as they do with OVMF. Your troubles would seem to indicate that it doesn't matter whether it's UEFI or not; 5xx just doesn't work with passthrough.

  11. I also cannot afford additional storage, and even if I could, I would need to upgrade my unRAID license to fit it into this machine. The machine is already nearly maxed out on SATA capacity, so I'd be looking at buying interface cards, and I couldn't use anything x16, since this particular board lumps that slot into the same unbreakable IOMMU group as the video card I am passing through to my Windows installation.

     

    Basically, I'm doing everything, flying by the seat of my pants, using what I've already got.

     

    It's incredibly convenient being able to create arbitrary file separation points or shares within a large merged storage set, such as the 8TB I have now, and to share it with my whole network. It's also very convenient having Windows running under a hypervisor, and having all the Docker services for random things I choose to run.

     

    I'm not quite a Mr. Moneybags Data Hoarder yet. I was lucky enough to scramble together the money for that second 4TB drive when I needed to convert data from one drive partition format to another. And I've been lucky that I haven't come anywhere near capacity with what I have now.

  12. I was having issues, so I (temporarily?) disabled all scanning engines; I had been using Defender. While it may be one of the best free options, I was bothered by its constant real-time scans of frequently accessed and modified files, which slowed the whole system down. I may back down and enable it again some day, if I can bring myself to ignore the scanning engine munching whole cores for minutes at a time throughout the day.

     

    Note that I do not consider any other engine to be any less of a burden on system resources. Use or do not use, both with ample caution.

  13. Those VPN dockers appear to be for running a local service behind a remote VPN. This docker is for running a local VPN for connecting back into your network: say, to access services within your own network from a remote host, or to protect your traffic while behind an open WiFi access point, without having to pay for a separate VPN service.

  14. If I understand correctly, it had not "disabled" the drive (since no writes had been attempted) ... so simply reconnecting the cable would have resolved everything with no further action needed.

    Only, I have hotplug disabled for my SATA ports in the BIOS settings, so I don't think it would have remounted.

  15. The solution is to not install Avast, or somehow ask those idiots at Avast to provide you a way to disable their shitty hypervisor mode before you install their shitty product.

     

    Or you could just use the AV product that comes bundled with Windows. You know, the one that isn't half bad but, like all other AV products, isn't half as good as some good old Common Sense? I find that most brand-new in-the-wild stuff either won't be detected before it infects you, or won't infect you at all unless you start opening email attachments without looking first, or start downloading your software from questionable places.
