About glennv


  1. Ok, it's working now. It was indeed, as you suggested, "adding" the new Penryn patch to the plist instead of replacing the existing patch as I did before. Thanks again!!!
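For anyone else hitting this: the change that worked was appending a second patch entry under Kernel → Patch while leaving the old one in place. A hypothetical skeleton only — the key layout assumes an OpenCore config.plist, and the actual Base/Count/Find/Identifier/Replace byte values come from the patch posted earlier in the thread, not from here:

```xml
<key>Kernel</key>
<dict>
    <key>Patch</key>
    <array>
        <!-- existing Penryn patch entry: leave it in place -->
        <dict>
            <!-- ... unchanged ... -->
        </dict>
        <!-- new patch entry: ADDED alongside the old one, not replacing it -->
        <dict>
            <key>Comment</key>
            <string>Big Sur 11.3 Penryn patch (values from the thread)</string>
            <key>Enabled</key>
            <true/>
            <!-- Base, Count, Find, Identifier, Replace, ... as posted -->
        </dict>
    </array>
</dict>
```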
  2. Cool. Will await your experiments. My XML of the working 11.2 VM is pretty standard, only increased cores and memory. The other one I won't even post, as it is a fresh macinabox install I did today and I have not touched the XML yet, but it is also failing with a boot loop when picking Big Sur as the install. <?xml version='1.0' encoding='UTF-8'?> <domain type='kvm' xmlns:qemu=''> <name>BigSur - MacInABox v2 VNC</name> <uuid>a2497553-6159-3dd9-29a7-95df03aa7cb7</uuid> <metadata> <vmtempl
  3. They are unmodified (besides the serials etc.) macinabox plist files, so nothing interesting. The existing one was from a Catalina macinabox, upgraded to Big Sur 11.2 (running fine on 11.2 but boot-looping during the 11.3 upgrade), and the other is a fresh Big Sur macinabox installed just today. Both failing. But I will try later with both patches in a single plist file. I only replaced the existing patch in the standard macinabox plist after your post, but did not think of putting them both in.
  4. Thanks, but somehow I'm stuck with the same issues (just boot-looping when upgrading from 11.2 to 11.3.1, or in the case of a new install, just looping from the start). Will give up for now, as I'm not sure what is going on here. Will wait for a macinabox upgrade.
  5. Anyone want to share a working EFI qcow image to boot Big Sur 11.3 (or 11.3.1)? I tried to update my old 11.2, which did not work even after applying the earlier mentioned Penryn passthrough fix. Tried a fresh macinabox, but it's also not booting (same as previous posters).
  6. Has anyone tried to update the standard macinabox-created Big Sur VM to 11.3? Any tips? I tried to just blindly run an update (on a cloned ZFS snapshot of it, to see what happens) but ended up in a boot loop. Otherwise, does a new macinabox result in 11.3, or still the old version? Just want to set one up fast for some testing purposes. Whatever is fastest (am in a lazy mood).
  7. Update: I guess I'll stick with the kernel build, as I can not get the plugin to load on a compiled 6.9.2 kernel (clean, with only the gnif vendor-reset as an extra module), whereas on the stock 6.9.2 kernel the plugin works fine. Maybe some slight version difference or something, but during the modprobe load of the ZFS module the kernel crashes. If I include ZFS during the build, ZFS works fine (with the plugin removed, of course). Never mind. I will stick with the slightly more complex workflow then. No huge issue, as updates are not that frequent.
  8. Ok, thanks. That's what I thought, but better to ask than assume.
  9. Just to confirm: is there "any" difference between running ZFS compiled into the kernel via ich777's kernel builder vs. using the ZFS plugin? Reason behind the question: as I need the gnif vendor-reset for my AMD GPU, which is only available via the kernel-build option, I moved ZFS to the kernel-build option some time ago for convenience. Going through a few version upgrades, during the process I lose ZFS (as step one is to install a clean Unraid build and only then compile a new kernel with the docker), and all my dockers/VMs etc. are on ZFS, which is then unavailable...
  10. How do I use the docker to build a 6.9.1 kernel? I have issues where my GPU passthrough stopped working completely since I moved from 6.9.1 to 6.9.2 (both compiled with gnif/vendor-reset and ZFS only), and I only discovered this recently, after discarding my 6.9.1 backup (yeah, I know, duhhh). So I need to rebuild with 6.9.1/ZFS/gnif to determine if it's just the kernel version or if the vendor-reset patch broke. But I don't see any options to select the kernel version. I installed 6.9.1 standard, but the docker still pulls 6.9.2.
  11. Aha, so that's why I suddenly had SSH issues as well. Fixing the mentioned permissions on the flash drive (after I found it following a long, fruitless SSH troubleshooting session) solved it, but I had no idea why this suddenly happened after years of working fine without issues or changes. But indeed, I had also recently installed this plugin. Good to know. I don't like unsolved mysteries.
  12. An ignore option for a specific state/status message that resets when the state and/or status message changes. That is an interesting idea. That way you will still notice when it changes to a different state than the one you ignored.
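The "ignore resets on change" idea could be sketched roughly like this — a hypothetical sh snippet, where the ignored status text is remembered verbatim and the alert is suppressed only while the current status is identical to what the user ignored:

```shell
#!/bin/sh
# Hypothetical sketch: remember the ignored status text; suppress the alert
# only while the current status matches it byte for byte.
IGNORE_FILE="${TMPDIR:-/tmp}/zfs_ignored_status"

notify() {
  current="$1"                                   # current pool status text
  if [ -f "$IGNORE_FILE" ] && [ "$(cat "$IGNORE_FILE")" = "$current" ]; then
    echo "suppressed"
  else
    rm -f "$IGNORE_FILE"                         # status changed: reset ignore
    echo "alert: $current"
  fi
}

# Simulate the user clicking "ignore" on the current (hypothetical) message:
printf '%s' "DEGRADED: device offline" > "$IGNORE_FILE"
notify "DEGRADED: device offline"    # -> suppressed (same message)
notify "FAULTED: pool unavailable"   # -> alert: ... (message changed)
rm -f "$IGNORE_FILE"
```

The design choice is that any change in the message clears the ignore flag automatically, so a pool that moves to a new problem state alerts again without the user having to un-ignore anything.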
  13. I would not overthink it. My 2 cents: if `zpool status -x` does not show that all is healthy, just flag it as not healthy and additionally show the contents of the status and action fields, which are designed to tell you what is going on. So rather than trying to interpret and grade the level of severity, you just spit out what ZFS gives us.
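That suggested logic could look roughly like this — a sketch only, where the sample status strings are hypothetical and a real plugin would capture the `zpool status -x` output itself:

```shell
#!/bin/sh
# Sketch of the suggested dashboard logic: no severity grading, just pass
# through the status:/action: fields that ZFS itself prints.
check_pool() {
  out="$1"                        # captured output of: zpool status -x
  if [ "$out" = "all pools are healthy" ]; then
    echo "HEALTHY"
  else
    echo "NOT HEALTHY"
    # forward zfs's own explanation instead of interpreting it
    printf '%s\n' "$out" | grep -E '^ *(status|action):'
  fi
}

check_pool "all pools are healthy"   # -> HEALTHY
check_pool "  pool: tank
 state: DEGRADED
status: One or more devices has been taken offline by the administrator.
action: Online the device using 'zpool online'."
```

The second call prints `NOT HEALTHY` followed by the status and action lines verbatim, which is exactly the "just show what ZFS says" behaviour the post argues for.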
  14. Personally I think the plugin is correct, as your pool is not 100% healthy, as the message indicates. It's working, but it is not as it should be. This is how it should look: But what would be good is that "if" the status is not 100% healthy, the plugin shows the status message, so you know why and can act on it, or ignore it if that's ok with you. That's the whole purpose of a dashboard. So your situation should "not" be reported as healthy, but maybe as a warning or attention.