geekazoid

Members · 61 posts

  1. It is done. Sorry I forgot about this until I got an update on the thread.
  2. PiHole Browser Extension supports multiple PiHoles. For the other stuff: you can run dockers on ESXi, then deploy pihole in docker and use the shared storage to run pihole-sync. There may be licensing requirements from VMware; my last certification was vSphere 5, so I don't know. Otherwise you'd need to set up some kind of shared storage (NFS) that your linux pihole VMs can mount. Note: this would not be mounted to the hypervisor but to the VMs directly. Basically a bunch of work that adds points of failure. Then you could fork the project and adapt the pihole-sync scripts to whatever linux you want to run, etc. A rough sketch of the NFS idea is below.
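     This is untested on my end; the hostname nfs-server and export path /srv/pihole-sync are placeholders, and I'm assuming Debian/Ubuntu guest VMs:

        # on each pihole VM: install the NFS client and mount the shared export
        sudo apt install nfs-common
        sudo mkdir -p /mnt/pihole-sync
        sudo mount -t nfs nfs-server:/srv/pihole-sync /mnt/pihole-sync
        # make the mount persistent across reboots
        echo 'nfs-server:/srv/pihole-sync /mnt/pihole-sync nfs defaults 0 0' | sudo tee -a /etc/fstab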
  3. Another configuration nugget for those using pihole's dhcp service. In dnsmasq.d, create a file called 99-extra-dns.conf with these contents:

        dhcp-option=6,pihole1_ip,pihole2_ip

     So your dhcp clients are aware of the primary and secondary dns servers.
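     A quick sanity check from a Linux client (my own sketch; assumes dhclient manages eth0 and resolv.conf):

        # release and renew the lease, then confirm both DNS servers arrived
        sudo dhclient -r eth0 && sudo dhclient eth0
        cat /etc/resolv.conf    # should list pihole1_ip and pihole2_ip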
  4. I built this again from scratch and it went off without a hitch using the procedure I laid out in my comment above. What a delight!

     I also figured out what causes the root permissions bug: if you create the path manually before installation, it doesn't happen (see the sketch below). So I think I will update my guide to move those steps to the beginning and include the symlink steps as well.

     I also found a little nugget people might appreciate: if you want to set the hostname of your pihole, select Advanced View on the Edit page and append " --hostname=your_hostname" to Extra Parameters.
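     By "create the path manually" I mean something like this before clicking Apply in the Add Container dialog (paths as in my Quick Start Guide; adjust to your layout):

        # pre-creating the appdata tree is what avoids the root-permissions bug
        mkdir -p /mnt/user/appdata/pihole-sync-sender/root/.ssh     # on unraidA
        mkdir -p /mnt/user/appdata/pihole-sync-receiver/root/.ssh   # on unraidB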
  5. The script didn't work at all for my unraid 6.11.5 host and pihole container. Should it?

     Thanks for the script. It works great after a reboot!
  6. I've fixed the ssh pubkey auth issue:

        Authentication refused: bad ownership or modes for directory /root

        cd /
        chown root root
        chgrp root root

     I made an issue on github for this, and I submitted an edit to the README. Above I've written a Quick Start Guide which I'll probably build upon later when I get the whole stack dialed in. When done I'll bring the final product to github, but I figure being on the first page it will be helpful for the next guy here. Have a Happy New Year!
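     For reference, sshd's StrictModes check cares about ownership and write permissions, so an equivalent one-liner (my phrasing, same effect plus clearing stray write bits) would be:

        chown root:root /root && chmod go-w /root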
  7. I've deleted the verbose detail of my troubleshooting because it's not relevant to the community. Instead I'm going to turn my findings into a Quick Start Guide for pihole-sync:

     Quick Start Guide

     Environment

     The primary pihole on unraid docker host A (unraidA) typically has its files in /mnt/user/appdata/pihole. There are two important directories there that this app is going to sync:

        /mnt/user/appdata/pihole/pihole
        /mnt/user/appdata/pihole/dnsmasq.d

     The pihole-sync sender is going to reside on the same docker host as the primary pihole server, as it has direct access to the same filesystem. A sensible path for this would be /mnt/user/appdata/pihole-sync-sender (on unraidA).

     The secondary pihole on unraid docker host B (unraidB) should have a similar filesystem setup for consistency, so the path for the receiver should be /mnt/user/appdata/pihole-sync-receiver (on unraidB).

     Networking

     If you use Bridge mode for your containers, the sync will use TCP port 22222, and the defaults will work for you. If you use br0 mode for a dedicated IP per container, you can just use the standard ssh port 22 (change 22222 to 22). Remember that with br0, your sync hosts are going to be on different IPs than your piholes, so don't confuse that in your setup.

     Operations: you should be able to ssh from unraidA to unraidB. Otherwise you can use the clipboard and web terminal.

     Install the Receiver

     On unraidB, use Community Apps to install Pihole-Sync-Receiver. In the Add Container dialog you are going to fill in the following fields:

        Name: I recommend renaming the docker image to pihole-sync-receiver (lowercase) for consistency (this will make it less tedious in the CLI later)
        Network: as discussed above in Networking
        Remote Host IP: as discussed above in Networking
        Root directory: /mnt/user/appdata/pihole-sync-receiver/root
        Etc-ssh: /mnt/user/appdata/pihole-sync-receiver/ssh
        Pi-Hole Path: /mnt/user/appdata/pihole-sync-receiver/pihole (we're going to change this to a symlink later)
        Pi-Hole DNSmasq path: /mnt/user/appdata/pihole-sync-receiver/dnsmasq.d (we're going to change this to a symlink later)
        <click Show more settings>
        Note Type: receiver
        Remote SSH port: as discussed above in Networking
        <Apply>

     Stop the Receiver container for now, then open a terminal on unraidB:

        cd /mnt/user/appdata/pihole-sync-receiver
        mkdir root/.ssh

     Later you will symlink the pihole and dnsmasq.d directories here to your secondary pihole server's paths one level up. Let's get this working first.

     Note: We installed the Receiver first because the Sender is going to thrash against the Receiver trying to connect until we complete the ssh key installation.

     Install the Sender and copy the ssh key to the Receiver

     On unraidA, use Community Apps to install Pihole-Sync-Sender. In the Add Container dialog you are going to fill in the following fields:

        Name: I recommend renaming the docker image to pihole-sync-sender for consistency (this will make it less tedious in the CLI later)
        Network: as discussed above in Networking
        Remote Host IP: as discussed above in Networking
        Root directory: /mnt/user/appdata/pihole-sync-sender/root
        Etc-ssh: /mnt/user/appdata/pihole-sync-sender/ssh
        Pi-Hole Path: /mnt/user/appdata/pihole-sync-sender/pihole (we're going to change this to a symlink later)
        Pi-Hole DNSmasq path: /mnt/user/appdata/pihole-sync-sender/dnsmasq.d (we're going to change this to a symlink later)
        <click Show more settings>
        Note Type: sender
        Remote SSH port: as discussed above in Networking
        <Apply>

     On first startup, the container is going to generate an ssh host key. If you used the paths defined above, it will be located in /mnt/user/appdata/pihole-sync-sender/root/.ssh/. Open a terminal on unraidA:

        docker logs pihole-sync-sender

     There is a message early in the logs that describes the steps to copy the ssh key to the receiver. I'll reiterate those instructions below:

        cd /mnt/user/appdata/pihole-sync-sender/root/.ssh/
        scp id_ed25519.pub root@unraidB:/mnt/user/appdata/pihole-sync-receiver/root/.ssh/authorized_keys

     Enter the root password for unraidB when prompted to authenticate this secure copy.

     Now start the Receiver on unraidB, then open a terminal on unraidB:

        docker exec -it pihole-sync-receiver /bin/bash
        cd /
        chown root root
        chgrp root root
        exit

     On unraidA:

        cd /mnt/user/appdata/pihole-sync-sender
        touch pihole/psynctestfile
        touch dnsmasq.d/dsynctestfile

     Now start the Sender on unraidA:

        docker logs pihole-sync-sender

     On unraidB:

        docker logs pihole-sync-receiver
        cd /mnt/user/appdata/pihole-sync-receiver
        ls -a pihole       (do you see your psynctestfile?)
        ls -a dnsmasq.d    (do you see your dsynctestfile?)

     Once you know it's syncing properly, the setup is ready to be connected to your primary and secondary piholes: stop both the Sender and Receiver containers and delete the sync test files at both ends. Then use symlinks. (Remove the now-empty pihole and dnsmasq.d directories in the sync folders first; if a directory with that name still exists, ln -s will create the link inside it instead of replacing it.)

     On unraidA (sender) it would be something like this:

        ln -s /mnt/user/appdata/pihole/pihole /mnt/user/appdata/pihole-sync-sender/pihole
        ln -s /mnt/user/appdata/pihole/dnsmasq.d /mnt/user/appdata/pihole-sync-sender/dnsmasq.d

     On unraidB (receiver) it would be something like:

        ln -s /mnt/user/appdata/pihole/pihole /mnt/user/appdata/pihole-sync-receiver/pihole
        ln -s /mnt/user/appdata/pihole/dnsmasq.d /mnt/user/appdata/pihole-sync-receiver/dnsmasq.d

     Start the Receiver, then the Sender... and check the logs! This is where the more specific pihole configuration begins (beyond the scope of this quick start guide).
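     For one last end-to-end check after the symlink step, I'd do something like this from unraidA (sync-canary is just a placeholder filename; adjust the sleep to your sync interval):

        touch /mnt/user/appdata/pihole/pihole/sync-canary
        sleep 60
        ssh root@unraidB 'ls -l /mnt/user/appdata/pihole/pihole/sync-canary'
        # clean up on both ends when done
        rm /mnt/user/appdata/pihole/pihole/sync-canary
        ssh root@unraidB 'rm /mnt/user/appdata/pihole/pihole/sync-canary'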
  8. Hi, I'm just wondering how this is supposed to be operable. When I deploy the images, they just drop into a reboot loop forever, and I can't get a console up long enough to set up ssh.
  9. I agree with DaKarli. Samba usermap is the right way.
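     For anyone searching later, a Samba username map looks roughly like this (a sketch; the usernames are placeholders):

        # /etc/samba/smb.conf, in the [global] section:
        username map = /etc/samba/usermap.txt

        # /etc/samba/usermap.txt - unix user on the left, client login(s) on the right:
        unraiduser = "Windows User" winuser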
  10. Having this issue with a VM as well. It's an anomaly: Win10, 6 cores. The CPU usage reported is host-only; in the VM the CPUs are idle. So this is a hypervisor/VM configuration issue. I'm chasing it down as a libvirt/qemu issue, since I'm not seeing a solution in the unraid searches. Check this thread out for example: link.
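     For the record, this is how I'm watching it on the host (my own commands, assuming the VM is named Win10):

        # find the qemu process for the VM and watch its per-vCPU threads
        top -H -p "$(pgrep -f 'qemu.*Win10' | head -n1)"
        # guest idle + busy vCPU threads here points at hypervisor config
        # (timers/polling), not the guest workload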
  11. Attached. It's been a while, so I will quickly recap:

     GPU C - onboard VGA to 1080@60 (for UNRAID console only)
     GPU 1 - Quadro M4000 DisplayPort 1.2 to 4K@60 (for PCI Passthrough only - workstation 1)
     GPU 2 - GT 1030 HDMI 2.0 to 4K@60 (for PCI Passthrough only - workstation 2)

     In BIOS, the primary graphics is set to the onboard (GPU C). The other option is to set it to PCIe, and then it will likely go to the first PCI ID, which is GPU 1. I only use the first: onboard. As desired, when the console is on GPU C, I have remote access to it via the integrated IPMI board.

     If GPU 1's or GPU 2's cables are connected to a display, the console will always go to GPU 1. If I disconnect those cables, it will go to GPU C as desired. After boot I can connect the cables and Passthrough will work as normal. If I let the console go to GPU 1 and then attempt PCI Passthrough, it will work, but I will lose the console - and of course any access to the console from the IPMI management agent.

     I'm running 6.8.2 at this point, very stable. This issue emerged 2 yrs ago after a BIOS update, and subsequent BIOS updates have not changed it back. We could blame ASUS if we want, although my feeling is that if you tell the kernel to boot and use one device and ignore another, and it does not do that, there is a bug.

     Right now we are using this UNRAID machine as a dual-headed (soon triple) workstation and it is very stable and reliable. The GPU cable issue on reboot is inconvenient at worst. I feel that I wasted my money buying a server board for UNRAID when really it can't support the server features, but UNRAID brings so many other conveniences that I can't complain too loudly.

     fluffy-diagnostics-20200224-1232.zip
  12. Cool. Well, what it changes is this:

     - The iommu group numbers change for my passthrough devices, possibly more. The total number of iommu groups is diminished. All my passthrough devices are listed, though.
     - My PCIe USB cards are still not showing up as shareable in the VM manager.
     - The devices connected to my passthrough USB cards are still connected to my host despite the blacklist.
     - This appears in dmesg:

        root@fluffy:~# dmesg | grep vfio
        [ 0.000000] Command line: BOOT_IMAGE=/bzimage vfio-pci.ids=10de:13f1,10de:1d01,1912:0014,1b73:111i intel_iommu=on initrd=/bzroot
        [ 0.000000] Kernel command line: BOOT_IMAGE=/bzimage vfio-pci.ids=10de:13f1,10de:1d01,1912:0014,1b73:111i intel_iommu=on initrd=/bzroot
        [ 12.411973] vfio-pci 0000:01:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
        [ 12.424129] vfio_pci: add [10de:13f1[ffffffff:ffffffff]] class 0x000000/00000000
        [ 12.424473] vfio-pci 0000:81:00.0: vgaarb: changed VGA decodes: olddecodes=io+mem,decodes=io+mem:owns=none
        [ 12.436168] vfio_pci: add [10de:1d01[ffffffff:ffffffff]] class 0x000000/00000000
        [ 12.436405] vfio_pci: add [1912:0014[ffffffff:ffffffff]] class 0x000000/00000000
        [ 12.436621] vfio_pci: add [1b73:0111[ffffffff:ffffffff]] class 0x000000/00000000

     Also, this does nothing either way:

        root@fluffy:~# cat /boot/config/vfio-pci.cfg
        BIND=0000:0100 0000:8100 0000:0200 0000:0300

     All of my test cycles feature a full cold start. I'm going to start regression testing on 6.6 soon because this is messing with my workaday life.
  13. Oh, in fact I believe that you meant this. Thus the correct form would be:

        append vfio-pci.ids=10de:13f1,10de:1d01,1912:0014,1b73:111 intel_iommu=on initrd=/bzroot

     Is this right?
  14. Sorry, I think I forgot to enable notify-on-reply. Thanks for this important comment on my syntax. By "provider" I assume that you mean domain? So basically use the notation from dmesg. So if I follow you, this:

        append vfio-pci.ids=01:00 81:00 02:00 03:00 intel_iommu=on initrd=/bzroot

     should be this:

        append vfio-pci.ids=0000:0100 0000:8100 0000:0200 0000:0300 intel_iommu=on initrd=/bzroot
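     For comparison, here is my current understanding of the two notations (treat this as a sketch, not confirmed in-thread): vfio-pci.ids takes vendor:device ID pairs of four hex digits each (so 1b73:0111, not 1b73:111i), while address-based binding takes full domain:bus:device.function addresses. Assuming all four devices are at function .0:

        append vfio-pci.ids=10de:13f1,10de:1d01,1912:0014,1b73:0111 intel_iommu=on initrd=/bzroot

        # /boot/config/vfio-pci.cfg
        BIND=0000:01:00.0 0000:81:00.0 0000:02:00.0 0000:03:00.0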