Everything posted by glennv

  1. I don't use Krusader, but I installed the binhex Krusader docker for a quick test and I just defined the path to a newly created test ZFS dataset under the Host Path variable. Then click the edit button next to it and set the access mode to Read/Write - Slave. Then when you start the docker, you will find the content under the /media folder in the docker. All working as normal. The trick may be the access mode. I forget exactly why, but I remember I needed it for anything outside the array.
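The setup above boils down to one docker flag. A sketch of the equivalent plain docker command (untested; the dataset path and container name are made up for illustration). The important part is the ",slave" mount-propagation suffix, which is what the "RW/Slave" access mode in the Unraid template sets:

```shell
# Hypothetical ZFS dataset path -- adjust to your own pool/dataset.
HOST_PATH="/mnt/zfs/testpool/krusader-test"

# Build the equivalent docker run command. ":rw,slave" = read/write with
# slave mount propagation, so mounts created on the host after the docker
# service starts still show up under /media inside the container.
DOCKER_CMD="docker run -d --name=krusader -v ${HOST_PATH}:/media:rw,slave binhex/arch-krusader"

# Print it rather than running it, since this is only a sketch.
echo "${DOCKER_CMD}"
```

Without the slave propagation, a dataset mounted after the container starts would simply appear empty inside the container.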
  2. Nope, can't say I have that. Seems all fluid. Did a quick performance test with Davinci Resolve and the passed-thru 5700XT and it seems exactly the same or even slightly faster than my normal production Resolve render VM on Catalina. Have not spent much time with it yet. Just for base testing my own code against the new OS, but as little seems to have changed compared to BigSur all is fine there. I remember from my old Windows 10 VMs when I used them for gaming I did have these stutters, but they typically were related to passed-thru USB and/or network polling latency issues. On OSX i
  3. Solved the mysterious VNC bootloop issue.
     1. Log in with a GPU passed thru (I used RDP as my GPU showed black at the time).
     2. Set autologin in user options (seems to be an issue with the login process looping in some cases).
     3. Reboot without the GPU passed thru and now VNC works fine.
  4. Ah, just when I wanted to call it a day, it started working. The issue was related to me using/trying to boot from an earlier created Monterey update partition, from before I messed with opencore/nvram clearing etc etc while booting in the BigSur partition. So likely not valid anymore (nvram, seal, stuff like that). I just booted into BigSur again and re-ran the updater, which recreated that partition and voila. The update is running fine now and no boot issues. The real issue in "my" opencore config was, as I mentioned, the max value limit (MinKernel/MaxKernel) on the kernel patches. The rest was no
  5. Maybe we hit the same VNC bootloop then, who knows. My GPU is in use for a while in a productive VM so I can't test that. Some other time...
  6. Mind sharing your (neutralised) EFI? Have Big Sur running fine, but after the update it does not boot (kernel panics). Did update Lilu/WEG etc before (and did set -lilubetaall), but something is off. Probably too old an OC or something. edit: updated to OC 0.7.0 but still the same. Boots fine in BigSur but not in the Monterey mode. edit2: came a bit further. Found that kernel patches were limited by the set max release value (20.99.99); changing that to 21.99.99 moved the boot process further. Still crashing at the end in a boot loop with mentions of an SMC issue, but out of time this week to
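For anyone hitting the same wall: the limit lives in the Kernel patch entries of the OpenCore config.plist. A sketch of the relevant keys only (the entry itself is illustrative; 21.99.99 covers Monterey, whose Darwin kernel is version 21.x):

```xml
<key>MinKernel</key>
<string>20.0.0</string>
<!-- MaxKernel was 20.99.99, which caps the patch at Big Sur (Darwin 20.x);
     raising it to 21.99.99 lets the patch also apply on Monterey -->
<key>MaxKernel</key>
<string>21.99.99</string>
```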
  7. Have the exact same messages. Thought I was alone and never found the reason and just ignored it, but if you find it I am interested.
  8. Just found out today from ich777 that the gnif/vendor-reset patch, when compiled into the kernel, prevents plugin kernel modules such as ZFS from loading (and can, as in my case, crash the kernel). This may change soon depending on ongoing dev/testing efforts. So for now a good-to-know gotcha!! If you need ZFS together with the gnif patch, you need to compile them both into the kernel at the same time for now.
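A quick way to see which situation you are in before trying to load the plugin (a generic sketch, not Unraid-specific; the variable name is made up):

```shell
# Determine whether zfs is compiled into the running kernel, available as
# a loadable module, or absent. modules.builtin lists everything that was
# compiled in; modinfo finds loadable modules for the running kernel.
if grep -q 'zfs\.ko' "/lib/modules/$(uname -r)/modules.builtin" 2>/dev/null; then
  ZFS_MODE="built-in"
elif modinfo zfs >/dev/null 2>&1; then
  ZFS_MODE="module"
else
  ZFS_MODE="absent"
fi
echo "zfs: ${ZFS_MODE}"
```

If it reports "built-in" you don't want the plugin loading a second copy; if it reports "absent" on a gnif-patched kernel, that matches the gotcha described above.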
  9. Totally 100000% understand. I can be a patient man when I need to be. (and can help as a guinea pig if needed at some point)
  10. Ahaaaaaa. That also explains my issues with the ZFS plugin when I tried (and failed) to use it in combination with a kernel built with gnif/vendor-reset only, instead of including ZFS in the build (as I run now). I tried that some time ago to make version upgrades easier (to not lose ZFS functionality, which also houses my dockers etc, during an upgrade/rebuild phase). Unfortunately I 100% need the gnif patch as it is the only solution for my reset-bug-ridden AMD GPU. Pity, would have loved to use this plugin as it looks amazing and amazingly useful for me. Do you think there w
  11. Done. Yes. Gnif and ZFS included in the 6.9.2 build (using your builder docker) tach-unraid-diagnostics-20210527-1216.zip
  12. Looks nice. Only getting this error when calling corefreq-cli (via ssh): Checked the logs and am seeing a dump there on the plugin install: output.log
  13. You can use Syncovery in a docker. I use it with Backblaze B2 (legacy and now also in S3-format buckets) for my main cloud backups from Unraid, and it also supports AWS S3 buckets and lots more. You do need to buy a Syncovery for Linux license to use it.
  14. Basically open the config.plist in the EFI, search for "PENRYN" and after that <dict> blabla </dict> section add the following <dict>...</dict> section from below:

     <dict>
       <key>Arch</key>
       <string>x86_64</string>
       <key>Base</key>
       <string></string>
       <key>Comment</key>
       <string>DhinakG - cpuid_set_cpufamily - force CPUFAMILY_INTEL_PENRYN - 11.3b1</string>
       <key>Count</key>
       <integer>1</integer>
       <key>Enabled</key>
  15. Just my 2 cents. Maybe it works for you (and not immediately for us on 11.3.1) as you seem to run on Ryzen. I believe the Penryn passthru patch for the config.plist is for Intel CPUs. Once I added that patch all works fine. But as it is not part of the current standard Macinabox, it won't work for fresh 11.3.1 installs out of the box on Intel hardware with CPU passthru. I see the same threads happening in the German forum sections. Bootloops all over the place
  16. Ok, it's working now. It was indeed, as you suggested, "adding" the new Penryn patch to the plist instead of, as I did before, replacing the existing patch. Tnx again!!!
  17. Cool. Will await your experiments. My XML of the working 11.2 is pretty standard, only increased cores and mem. The other one I don't even post as it is a fresh install of Macinabox I did today and I have not touched the XML yet, but it is also failing with a boot loop when picking BigSur as install.

     <?xml version='1.0' encoding='UTF-8'?>
     <domain type='kvm' xmlns:qemu='http://libvirt.org/schemas/domain/qemu/1.0'>
       <name>BigSur - MacInABox v2 VNC</name>
       <uuid>a2497553-6159-3dd9-29a7-95df03aa7cb7</uuid>
       <metadata>
         <vmtempl
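To be concrete about "only increased cores and mem": those are the only two elements that differ from the stock Macinabox domain XML. The values below are just an illustration, not my actual settings:

```xml
<!-- libvirt domain XML fragment: 16 GiB of RAM and 8 vCPUs -->
<memory unit='KiB'>16777216</memory>
<vcpu placement='static'>8</vcpu>
```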
  18. They are unmodified (besides the serials etc) Macinabox plist files so nothing interesting. The existing one was from a Catalina Macinabox, upgraded to BigSur 11.2 (running fine on 11.2 but bootlooping during the 11.3 upgrade), and the other a fresh new BigSur Macinabox installed just today. Both failing. But will try later having both patches in a single plist file. I only replaced the existing patch in the standard Macinabox plist after your post, but did not think of putting them both in.
  19. Thanks, but somehow stuck with the same issues (just bootlooping when upgrading from 11.2 to 11.3.1, or in case of a new install just looping from the start). Will give up for now as not sure what is going on here. Will wait for a Macinabox upgrade
  20. Anyone want to share a working EFI qcow image to boot BigSur 11.3 (or 11.3.1)? Tried to update my old 11.2, which did not work even after applying the earlier mentioned Penryn passthru fix. Tried a fresh Macinabox, but also not booting (same as previous posters).
  21. Anyone tried to update the standard Macinabox-created BigSur VM to 11.3? Any tips? Tried to just blind-run an update (on a cloned snapshot of it on ZFS) to see what happens, but end up in a boot loop. Otherwise, does a new Macinabox result in 11.3 or still the old version? Just want to set one up fast for just some testing purposes. Whatever is the fastest (am in a lazy mood
  22. update: guess I'll stick with the kernel build, as I cannot get the plugin to load on a compiled 6.9.2 kernel (clean, with only the gnif vendor-reset as an extra module) vs the stock 6.9.2 kernel, where the plugin works fine. Maybe some slight version diff or something, but during the modprobe load of the zfs module the kernel crashes. If I include ZFS during the build, ZFS works fine (of course with the plugin removed then). Nevermind. Will stick with a slightly more complex workflow then. No huge issue as updates are not that often.
  23. Ok, thanks. That's what I thought, but better to ask than assume
  24. Just to confirm, is there "ANY" difference between running ZFS compiled into the kernel via the ich777 kernel builder vs using the ZFS plugin? Reason behind the question: As I need the gnif vendor-reset for my AMD GPU, which is only available via the kernel build option, I moved ZFS to the kernel build option some time ago at the same time for convenience. After going through a few version upgrades, where during the process I lose ZFS (as step one is to install a clean Unraid build and only then compile a new kernel with the docker) and all my dockers/VMs etc are on ZFS which is then unavailable,