ofawx

Members
  • Posts: 14
  • Joined
  • Last visited


  1. No luck upgrading to Ventura on my hardware yet. I think I'm having this issue, as the verbose output is almost identical (in my case, X99 QEMU 5.1 w/ RX 6600 XT on OC 0.8.5). I have tried QEMU 6.x, but not yet swapping out the Navi for an old RX 580; that will be the next step when I have a chance. Monterey 12.6 is working fine, however, so I'm happy for now.
  2. For any of you looking to enable Content Caching, I've updated my kernel patch to support Monterey here. Let me know if it works for you, especially if you're still on Big Sur!
  3. Hi, how are you deleting it, and is there an error or a message when you try? If it is in its own share, it should be straightforward to delete from the Unraid GUI. If there is still invisible data in the share, use the command line to delete everything in the share, like this:
     $ rm -rf /mnt/user/share/bitcoind/*
     and then delete the share from the GUI. Be careful with the above command: it will delete everything specified without confirming! If it is a folder within another share, you can use the same command, like:
     $ rm -rf /mnt/user/myshare/some/folder/bitcoind
     but again, be sure to specify the correct folder, as there is no undo.
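     As a self-contained illustration of the delete pattern above (run against a throwaway temporary directory rather than a real share, so nothing of value is at risk):

     ```shell
     # Demo of the rm -rf pattern on a throwaway directory (not a real share):
     demo=$(mktemp -d)
     mkdir -p "$demo/bitcoind/blocks"
     touch "$demo/bitcoind/blocks/blk00000.dat"
     rm -rf "$demo/bitcoind"   # removes the folder and everything inside, with no confirmation
     ls "$demo"                # prints nothing: the folder is gone
     rmdir "$demo"             # clean up the empty temp directory
     ```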
  4. I've been trying to make sense of what is required to get OTA updates working in Monterey beta with my MacPro7,1 SMBIOS. So far I've tried SecureBootModel Disabled, Default (> MacPro7,1 per OC 0.7.4) and x86legacy, and nothing seems to work (all without the kext, since the patch you linked above was removed from release). Any ideas?
  5. Thanks for the quick debug! Not sure why this suddenly stopped working but have removed it from the default config, which will appear in CA in the next few hours. If txindex is required (which is for my Electrum server containers for example), please add it manually in your bitcoin.conf file.
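     For reference, enabling the transaction index manually is a one-line addition to bitcoin.conf (txindex is a standard Bitcoin Core option; note that enabling it triggers a reindex on next startup):

     ```
     # bitcoin.conf
     txindex=1
     ```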
  6. I used to have this issue too which was also solved by the VTI trick. Maybe worth trying anyway?
  7. For those suffering issues during restart/shutdown, I can share what worked for me. In config.plist, under NVRAM > boot-args, add vti=12 (in reality, you can pick any number you want).
     VTI is a multiplier the macOS kernel uses to extend the allowable timeout when waiting for cores to synchronise while running in a virtual machine (this is an oversimplification, but conveys the idea). This isn't needed when running on bare metal, as the kernel can reasonably expect all cores to behave (relatively) predictably. As our VMs share CPU resources with the host, this assumption doesn't hold, so we can extend the allowable time before the macOS kernel panics. Nice that it's built in, too.
     The number you choose is the number of times the default is multiplied by two (binary left-shifts). Don't set the number too large (keep it well below 32), or it will eventually overflow, causing the effective value to be zero, which is worse than the default.
     See here for where I found this solution originally, with more technical details: https://www.nicksherlock.com/2020/08/solving-macos-vm-kernel-panics-on-heavily-loaded-proxmox-qemu-kvm-servers/
     This resolved all of my occasional panics, both during day-to-day use of the VM and during shutdown/restart. I think that on my machine, these panics were leaving my PCIe devices (GPU [RX 580] & USB [ASM3142]) in some undefined state, in turn causing crashes of the Unraid host. Very occasionally, the hardware would detect this (PCIe bus error warnings from iDRAC), allowing Unraid to keep working, but a full restart was still required to reset the devices and restore the bus. Good luck!
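     The left-shift arithmetic can be sketched in a couple of lines of shell (the base value of 1 is a placeholder for illustration, not the kernel's actual default timeout):

     ```shell
     # vti scales the timeout as: effective = default << vti, i.e. default * 2^vti
     vti=12
     default=1                              # illustrative base, not the real kernel value
     echo $(( default << vti ))             # 4096: the default is doubled 12 times
     # A vti of 32 or more overflows a 32-bit value; truncated, the result is zero:
     echo $(( (1 << 32) & 0xFFFFFFFF ))     # 0: worse than not setting vti at all
     ```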
  8. It sounds more like a Docker error than a container error, to be honest, but I'm happy to have a look. Could you share:
     • Which container you're having issues with
     • Full logs since startup for that container
     • The container config, with any private details redacted, of course
     Cheers
  9. Just published electrs to Community Apps. Give it a go and feel free to let me know if you encounter any issues.
  10. It's unfortunately not possible to connect Electrum directly to Bitcoin Core (bitcoind). Bitcoin Core has a built-in wallet, but without a GUI here it is unwieldy to use. I hope to release container templates soon implementing both ElectrumX and electrs servers. These connect to Bitcoin Core via the RPC interface, and serve Electrum clients. I can update you here when these become available.
  11. FAQ: Should I use ElectrumX or electrs?
      ElectrumX is a high-performance server appropriate for serving many clients, including yourself and optionally the wider public. It is faster (<1 second responses), but consumes more resources.
      electrs is a low-footprint server appropriate for personal use or for a small number of trusted friends. Requests are served more slowly (3+ seconds), but the server consumes fewer resources.
      Or run both! I run a public ElectrumX server to support the network, and retain a personal electrs server to ensure uptime and privacy for myself.
  12. OFAWX/DOCKER-TEMPLATES: Support thread for my Docker templates. Let me know of any issues and I will get back ASAP. Please share at minimum:
      • Which container you're having issues with
      • Full logs since startup for that container
      • The container config, with any private details redacted
      Currently supported containers:
      • bitcoind: Don't trust, verify! A full Bitcoin Core node to maintain your autonomy. Based on trustless docker-bitcoind.
      • electrs: A lightweight personal Electrum server. Based on docker-electrs.
      • electrumx: A high-performance Electrum server. Based on docker-electrumx.
  13. Most likely, the USB controller chip on that card is not supported by macOS. The eBay listing doesn't specify which it uses; are you able to read the printing on the chip? There is a long post in the MacRumors > Mac Pro forum about supported USB3 cards; the current recommendation is those based on the ASM3142 chip. There are a few around on eBay/AliExpress. I have one of these in the mail, so I can definitely advise whether it works correctly when it arrives.
  14. I've got a simpler kernel patch that will remove the VMM flag without breaking sysctl, see here. It works well for me, please try it out.