
bland328

Members
  • Content Count: 86
  • Joined
  • Last visited

Community Reputation: 13 Good

About bland328
  • Rank: Advanced Member


  1. Did you end up opening an issue on this? If so, I'd love to contribute to it, at least in the form of a +1, as I really need to get IOMMU going again and patching the kernel myself sounds at least...unwise. I mean, it sounds kinda fun, but mostly unwise.
  2. I somehow missed that, so I did not. Thanks for the tip! I'll give it a try when I have a few free minutes and report back.

     I tried it and got an immediate failure of:

     Backupfolder not set

     I don't have time at the moment to look into that deeply, but I did find food for thought at https://forums.urbackup.org/t/urbackup-mount-helper-test-backupfolder-not-set/5271.
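     For anyone else hitting this, here's a minimal sketch of the fix discussed in that thread, assuming (as the thread suggests) the helper reads the storage path from /etc/urbackup/backupfolder. The path below is just an example; use your actual backup storage, and note I haven't verified this end to end yet:

     # echo "/mnt/backups/urbackup" > /etc/urbackup/backupfolder
     # urbackup_mount_helper test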
  3. I'm running into this (often? always?) after upgrades, as well. If I do this from the container console...

     # dpkg --configure -a

     ...and restart the container, it fixes me up until the next upgrade. Though I'm keeping an eye out for a better solution, naturally.
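     If you'd rather not babysit it, here's a trivial guard you could run at container start. This is just a sketch, relying on the fact that dpkg --audit prints nothing when the package database is healthy:

     if dpkg --audit 2>/dev/null | grep -q .; then
         dpkg --configure -a
     fi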
  4. bland328

    Restic for backups

    @maxse, FYI, I'm just getting started with restic. To install it, I downloaded the 'linux_amd64' build from the restic releases page on GitHub, and I have a script called from /boot/config/go that (along with plenty of other boot-time tweaks) handles copying it to /usr/local/bin.

    I'll also mention that my startup scripts set the XDG_CACHE_HOME environment variable to point to a dir I made on a nice, speedy Unassigned Device (though you could also use /mnt/cache/.cache or wherever you like), so that restic writes all its cache files somewhere persistent, instead of in the RAM disk, where they'd be lost on a reboot, which almost certainly isn't what you want!

    The restic Docker container may be great, but it sounded like an unnecessary layer of complication to me, so I approached it this way.
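    A rough sketch of the go-script approach, in case it helps; the binary location on flash and the cache dir are my own choices, not requirements:

    #!/bin/bash
    # Called from /boot/config/go at boot time.
    # Copy the restic binary (grabbed from the GitHub releases page and
    # stashed on the flash drive) into the RAM-backed rootfs.
    cp /boot/extra/restic /usr/local/bin/restic
    chmod +x /usr/local/bin/restic
    # Keep restic's cache somewhere persistent instead of the RAM disk.
    # (Login shells pick this up from /etc/profile; a backup script can
    # export the same variable itself.)
    echo 'export XDG_CACHE_HOME=/mnt/disks/speedy/.cache' >> /etc/profile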
  5. I'm curious about this, too, @binhex! I've been attempting incremental backups (directly to a /mnt/cache/... path within a cache-only share folder on a BTRFS cache drive) and finding that UrBackup is making none of the expected BTRFS subvols or snapshots.

     I'm absolutely not up to speed on what's involved in a Docker container performing BTRFS-specific operations on an "external" BTRFS volume. So, following up on @SuperDan's question: might this be because certain BTRFS resources are excluded from the binhex-urbackup image? And, if so, is that strategic? If it isn't strategic, it would be lovely to see them added. And when I have a bit of free time, if I won't be duplicating someone else's efforts, I'll take a shot at it myself.

     EDIT: After some consideration and experimentation, I'm not even sure I'm thinking about this correctly. As an experiment, I installed btrfs-progs within the binhex-urbackup container...

     # pacman -Fy && pacman -S btrfs-progs

     ...but my next incremental backup still didn't create the BTRFS subvolume I was hoping for. The UrBackup Developer Blog says that "[e]very file backup is put into a separate sub-volume" if "the backup storage path points to a btrfs file system and btrfs supports cross-sub-volume reflinks." So, admitting I'm more than a touch out of my depth here, perhaps:

     1) Unraid btrfs doesn't support cross-sub-volume reflinks for some reason, or
     2) I shouldn't expect it to work from within a Docker container accessing a filesystem that's outside the container, or
     3) ...something else.

     Any insight is appreciated, and I'll post here if I happen to get it figured out.
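     If anyone wants to test the cross-sub-volume reflink condition by hand, here's a rough sketch; run it on the host, and note that /mnt/cache/urbackup is just my example path on the btrfs cache drive:

     # btrfs subvolume create /mnt/cache/urbackup/subvol_a
     # btrfs subvolume create /mnt/cache/urbackup/subvol_b
     # dd if=/dev/urandom of=/mnt/cache/urbackup/subvol_a/testfile bs=1M count=1
     # cp --reflink=always /mnt/cache/urbackup/subvol_a/testfile /mnt/cache/urbackup/subvol_b/testfile

     If that cp fails, cross-sub-volume reflinks aren't available on that filesystem, which (per the blog post quoted above) would explain UrBackup falling back to plain directories.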
  6. Sorry...should've explained! I was lucky enough to have recently migrated the VM in question to a second, non-Unraid box to use as a template for another project, so I was able to simply go grab a copy of the OVMF_VARS.fd file from there. Had that not been possible, I suppose I would've grabbed a clean copy of that file from here or here, the downside being the loss of my customized NVRAM settings. I didn't notice if any cores were pegged when this happened, but I rather doubt it, because in my case there was no boot activity--I didn't get to the Tianocore logo, nor even to the point of generating any (virtual) video output for noVNC to latch onto.
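     In case it saves someone a migration: the restore boils down to overwriting the VM's NVRAM file with a pristine copy, with the VM shut down first. On Unraid the paths are typically (but verify on your own box):

     # ls /etc/libvirt/qemu/nvram/
     # cp /usr/share/qemu/ovmf-x64/OVMF_VARS-pure-efi.fd /etc/libvirt/qemu/nvram/<uuid>_VARS.fd

     ...accepting that any customized NVRAM settings (boot entries, etc.) are reset.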
  7. I humbly nominate the moreutils collection for inclusion in NerdPack. I'm particularly interested in the sponge and pee commands, but there's a variety of good stuff in there. moreutils is a nice complement to coreutils, which I believe is already included in Unraid. Thanks for considering, @dmacias, and for your generous work on NerdPack!
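     For anyone unfamiliar, quick illustrations of the two commands I mentioned (the file names are made up):

     # grep -v 'stale' config.txt | sponge config.txt
     (sponge soaks up all of stdin before writing, so you can safely rewrite a file that's also being read earlier in the pipeline)

     # cat access.log | pee 'gzip > access.log.gz' 'wc -l'
     (pee is tee for pipes: it feeds stdin to several commands instead of several files)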
  8. For the record, I solved this...but I'm not sure what to make of it. And it almost surely has nothing to do with the OP's problem, but I'll leave the solution here anyway, in case it helps someone else: apparently, that VM's 'OVMF_VARS.fd' file (the OVMF NVRAM backing store) became corrupt. The file lives on an unassigned btrfs volume on an NVMe drive, and the btrfs volume itself does not appear to be corrupt. I've no idea what happened there, but hopefully (and probably) it has nothing to do with Unraid 6.8.1.
  9. I'm having a somewhat similar problem. My trusty macOS VM that I've been running 24/7 for years suddenly won't start. And by that I mean that the VM claims to have started (the green 'play' button lights up in the Unraid GUI), but nothing ever happens--I don't get even as far as being able to VNC in (I get the "Guest has not initialized the display (yet)" message).

     In the log file for the VM, not much appears--when I try to start the VM, it spits out the long set of qemu args, then this standard stuff:

     2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: high-privileges
     2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: custom-argv
     2020-01-15 01:06:05.240+0000: Domain id=4 is tainted: host-cpu
     char device redirected to /dev/pts/1 (label charserial0)

     ...and then nothing. I know this may not actually have anything to do with 6.8.1, but in the name of research...is anyone else having VM woes under Unraid 6.8.1?
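     For anyone poking at a VM stuck in this state, the virsh basics I've been using from the Unraid console (substitute your own domain name for "macOS"):

     # virsh list --all
     # virsh domstate "macOS" --reason
     # virsh qemu-monitor-command "macOS" --hmp 'info status'

     The last one asks the qemu monitor directly whether the guest is actually running or paused, which is more than the green play button will tell you.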
  10. Thanks for the update, @ryoko227. If/when I get some time, I'll also put some work into this, and will post here. EDIT: For the record, I'm on an ASUS PRIME X370-PRO + AMD Ryzen 5 2600, with a 500GB Kingston NVMe drive in the motherboard slot, formatted BTRFS, and unassigned. Turning off IOMMU in the BIOS does stop the flood of page faults, but I need to turn that back on soon 😅
  11. Thanks for the info, @Gitchu. I've upgraded to Unraid 6.8.0 final, and now find that VMXNET3 (+Q35-3.1, which may or may not have anything to do with it) is working great with Catalina, as well.

      Though, to be fair, I haven't freshly logged into iCloud services (iMessage, App Store, iCloud Drive, etc.) recently. For anyone else reading this: in the past, VMXNET3 has worked with those services only after I first logged in successfully using a different (e1000 or passed-through) NIC. Fingers crossed that those days are now behind us, but I don't feel like logging out of what's now working just to test it. 😉
  12. Just learned I'm also affected by this, and do need to keep IOMMU turned on. @ryoko227, did you have any luck with the patch?
  13. For the record, I'm currently experimenting with running Catalina under qemu/kvm/libvirt on a non-Unraid Linux box (still an Unraid fan here...this is just a side project!), and I find that, when using qemu-4.1.1_1, a virtual e1000-82545em NIC works just great with br0 bridging. To be fair, this br0 bridge is one I configured myself, and I'm not currently quite savvy enough to know if that could be the difference...but I doubt it.

      So, I'm looking forward to Unraid 6.8.0, suspecting an updated qemu will fix everything for me, as it did for @Gitchu. Assuming it does, I'll stop using qemu's virtual e1000 NIC + e1000 kext, and return to using e1000-82545em.
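      For reference, the relevant slice of the qemu command line on that box looks roughly like this (the bridge name and MAC address are mine; adjust to taste):

      -netdev bridge,id=net0,br=br0 \
      -device e1000-82545em,netdev=net0,mac=52:54:00:c0:ff:ee

      The bridge netdev does require qemu-bridge-helper to be set up on the host, so treat this as a sketch of the shape, not a drop-in line.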