kode54

Everything posted by kode54

  1. I cannot say I disagree with you. Perhaps I shall ask for a 5TB hard drive for Christmas, and employ it as a parity drive. And maybe also remove this 640GB drive, as it's kind of a joke to waste a port with that.
  2. E: Please move this to the CA Application Auto Update topic. The new version creates a crontab entry that is missing the leading slash on /dev/null, which results in a notice to my email:

     Subj: cron for user root /usr/local/emhttp/plugins/ca.update.applications/scripts/updateApplications.php >dev/null 2>&1
     Message: /bin/sh: dev/null: No such file or directory
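A minimal shell sketch of the failure described above, assuming only a POSIX shell (the temp directory is just a stand-in for cron's working directory): redirecting to the relative path dev/null fails unless a dev/ directory happens to exist there, while the absolute /dev/null always works.

```shell
# Demonstrate the missing-slash bug: "dev/null" is a relative path, so the
# shell tries to open a file named "null" inside a "dev" directory, which
# usually does not exist in cron's working directory.
cd "$(mktemp -d)"                      # stand-in for cron's working directory
if echo test 2>/dev/null > dev/null; then
  echo "relative dev/null worked"
else
  echo "relative dev/null failed: no dev/ directory here"
fi
echo test > /dev/null && echo "absolute /dev/null worked"
```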
  3. Have you tried passing a VGA BIOS for the 550? Look, I even found your card. Unless you would rather dump it yourself. You'll have to look a bit further in the forum or on Google or Bing if you want to research how to configure a BIOS with a passthrough card in libvirt XML syntax. Okay, for example, this device passed through from a GitHub repository readme:

     <hostdev mode='subsystem' type='pci' managed='yes'>
       <source>
         <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
       </source>
       <rom file='/home/maikel/bios7850random.rom'/>
       <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
     </hostdev>

     I'm not sure if 5xx cards have the same trouble with SeaBIOS as they do with OVMF. Your troubles would seem to indicate it doesn't matter whether it's UEFI or not: 5xx doesn't work with passthrough.
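Since the post mentions dumping the BIOS yourself, here is a commonly used sysfs sketch (hedged: dump_vbios is a hypothetical helper name, the PCI address is only a placeholder, and the card must not be bound to a driver or in use by a VM while you read the ROM):

```shell
# Hypothetical helper: read a GPU's ROM via sysfs. Pass the device's sysfs
# path, e.g. /sys/bus/pci/devices/0000:01:00.0 (find yours with lspci).
dump_vbios() {
  dev="$1"
  echo 1 > "$dev/rom"          # enable ROM reads for this device
  cat "$dev/rom" > vbios.rom   # copy the ROM image to the current directory
  echo 0 > "$dev/rom"          # disable ROM reads again
}
# Usage (as root): dump_vbios /sys/bus/pci/devices/0000:01:00.0
```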
  4. And how do you plan to reduce it? Remove one of the DIMMs, crippling it to single channel? Or actually swap both out for 2x2GB? You are aware that it will be using all of the memory available to it as cache, right?
  5. I also cannot afford additional storage, and even if I could, I would need to upgrade my unRAID license to fit it into this machine, which is already almost maxed out on SATA capacity. Then I'd be looking at buying interface cards, and I couldn't use anything that's x16, since this particular board lumps that slot into the same unbreakable IOMMU group as the video card I am passing through to my Windows installation. Basically, I'm doing everything flying by the seat of my pants, using what I've already got. It's incredibly convenient being able to create arbitrary file separation points or shares within a large merged storage set, such as the 8TB I have now, and sharing it with my whole network. It's also very convenient having Windows running under a hypervisor, and having all the Docker services for random things I choose to run. I'm not quite yet a Mr. Moneybags Data Hoarder. I was already lucky enough to have scrambled together enough to buy that second 4TB drive when I needed to convert data from one drive partition format to another. I've just been lucky I haven't even gone near capacity with what I have now.
  6. I was having issues, so I (temporarily?) disabled all scanning engines, but I was using Defender. While it may be one of the best free options, I was being bothered by constant real time scans of frequently accessed and modified files, which led to system slowdown overall. I may decide to back down on that and enable it again some day, if I can bother myself to ignore the scanning engine munching whole cores to itself for minutes at a time throughout the day. Note that I do not consider any other engine to be any less of a burden on system resources. Use or do not use, both with ample caution.
  7. That 8W may just be the CPU alone, not counting the entire system.
  8. Those VPN dockers appear to be for running a local service behind a remote VPN. This docker is for running a local VPN for connecting back in to your network, say, to access services within your own network from a remote host, or protecting your traffic while behind an open WiFi access point without having to pay for a separate VPN service.
  9. Only, I have hotplug disabled for my SATA ports in the BIOS settings, so I don't think it would have remounted.
  10. The solution is to not install Avast, or somehow ask those idiots at Avast to provide you a way to disable their shitty hypervisor mode before you install their shitty product. Or you could just use the AV product that comes bundled with Windows. You know, the one that isn't half bad, but like all other AV products, isn't half as good as some good old Common Sense? I find that most brand new in-the-wild stuff either isn't going to be detected before it infects you, or isn't going to infect you at all unless you start opening email attachments without looking first. Or start downloading your software from questionable places.
  11. I just had an incredible experience. I received a notification from my Tower that disk 1 was triggering read errors. It reported that it could not read the SMART data without spinning the drive up, and the array stats were reporting some 140 trillion read errors for that drive. So I took the array offline and powered the machine off. I attempted to coax some life out of the drive using its power cables, to feel for some sign of life. It didn't even spin up. So I removed it, and chucked it aside.

      Then I proceeded to power the machine up and initiated New Config, retaining the cache pool settings. Then I noticed: one of my SSDs was missing as well. I opened the case again, remembering that SSD shared its power cable with the drive I had just mercilessly chucked aside onto the carpeted floor. I traced it through a haphazard attempt at cable routing I made when I first installed that power cable, to find that the power cable had become disconnected from the power supply. Whoops.

      So, as coolly as I removed the drive and set things aside, I put it back in, adjusted the power cables into a more sensible configuration, closed it all up, and booted it once again. With the array set to not auto-start, I popped into maintenance mode, mounted the drive to replay its journal, unmounted it, and checked it; it came back all clean. I also ran a btrfsck on the cache pool, since half of it was disconnected while the VM inside was running. No problems there, either. Live and learn. Maybe now I'll set aside some money for a 4TB or 5TB parity drive, before something really does die.
  12. Why would you have an Apple iOS device in recovery mode permanently attached to your virtual machine? Clearly you meant to remove that from the configuration after you were done recovering it, right? Maybe this is one more plus for hotplug capability? There's a hotplug plugin, but I don't know if it automatically detaches devices which have been unplugged from the host machine.
  13. And you're not the first to have troubles with Samba after upgrading from 6.1.x. Try completely obliterating your share settings from the config folder, rebooting, and creating everything again from scratch. A little drastic, but perhaps necessary due to incompatible settings files.
  14. Why? If it works for you, it's perfectly safe to continue using it indefinitely.
  15. Possible addition to this script: Enabling Hibernate / suspend-to-disk inside respective VMs, then using virsh dompmsuspend <domain> disk to save for the backup. This does, of course, assume your VMs are in a position to be suspended, and if they are not configured to suspend, they will likely shut down anyway.
  16. I just fed double commands to mine; it seems to be a synchronization issue with libvirtd?

      root@Tower:~# virsh dompmsuspend "Windows 10" disk
      Domain Windows 10 successfully suspended
      root@Tower:~# virsh dompmsuspend "Windows 10" disk
      error: Domain Windows 10 could not be suspended
      error: Guest agent is not responding: QEMU guest agent is not connected
      root@Tower:~# virsh list
       Id    Name                           State
      ----------------------------------------------------

      root@Tower:~#
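The suspend-for-backup idea from the two posts above can be sketched as a small wrapper (hedged: suspend_vm_for_backup is a hypothetical helper, the backup step is a placeholder, and it assumes the guest has suspend-to-disk enabled and the QEMU guest agent connected; it also issues dompmsuspend only once, since the transcript shows a second invocation failing):

```shell
# Hypothetical wrapper: hibernate a VM to disk, do the backup, start it again.
suspend_vm_for_backup() {
  domain="$1"
  # Issue the suspend once only; a repeat fails after the agent disconnects.
  virsh dompmsuspend "$domain" disk || return 1
  # ...copy the VM's vdisk images and XML here (placeholder step)...
  virsh start "$domain"
}
# Usage: suspend_vm_for_backup "Windows 10"
```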
  17. Comcast is now throttling everybody, so trouble downloading this does not surprise me.
  18. Technically, you can modify the XML to turn a Seabios VM into an OVMF VM, but that won't boot. You can still perform a repair install on it, though, or at least I think you can.
  19. Appdata lives on your cache volume, not in the Docker image. The important part is the /dev/sdj1.
  20. Seems the code 43 thing is common even with Tesla and GRID cards, you have to use NVidia's virtualization platform instead.
  21. MTU is usually lower for PPPoE framing (typically 1492 rather than 1500, since the PPPoE header takes 8 bytes). Strange that unRAID did not pick that up in the DHCP advertisement.
  22. Those warnings are for your cache pool, not the Docker (or possibly libvirt) images.
  23. At least until it dies of bit rot, which it may do faster when it has the power removed for an extended period.
  24. Fix your crontab, or reinstall the Dynamix system stats plugin. It would appear that some part of it was removed, and it did not clean up its crontab entry.