murkus

Members
  • Posts: 87
  • Joined
Everything posted by murkus

  1. If bridging is on, the ipvlan driver is used. If bridging is off, vhost is used. I assume each active hardware interface simply gets its own vhost interface.
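If that guess is right, it can be checked with iproute2. A hedged sketch — on a live host you would grep the real interface list, but here the count runs on made-up sample output so it is executable anywhere:

```shell
#!/bin/sh
# Sketch: count vhost interfaces, one expected per active NIC when
# bridging is off. On a live host: `ip -br link | grep -c '^vhost'`.
# Here the count runs on sample (assumed) output instead.
sample='eth0             UP
vhost0@eth0      UP
eth1             DOWN'
printf '%s\n' "$sample" | grep -c '^vhost'
```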
  2. You are saying that you still see the duplicate warnings although you followed my post? Then I have no better idea, sorry. Regarding the HA VM, you may try enabling only one of the two settings (just try both alternatives): - IPv4 custom network on interface eth0 (optional) (default is ON) - Host access to custom networks (default is OFF)
  3. Sounds awesome, thanks for the effort. Having said that: sometimes Bacularis dies, but I don't want to restart the container while Bacula jobs are still running. What would be the best way to start Bacularis from within the container console?
  4. I believe I wrote something about that in my thread. You have two parameters with four possible value combinations, and the combination where both are enabled leads to the creation of vhost0@eth... To avoid vhost0@eth0 using the same IP as eth0 (which will trigger alarms in arpwatch, pfSense, TrueNAS, etc.): do NOT enable BOTH of IPv4 custom network on interface eth0 (optional) (default is ON) and Host access to custom networks (default is OFF). Have you tried enabling one and disabling the other? From what I am reading, I believe you run HAOS in a VM, not in a container on Unraid. The two parameters above should not have an impact on VMs, only on Docker containers, but I am no expert. The VM Manager settings use different mechanisms (br0 or virbr0) to ensure communication between host and guest VMs. Maybe check out the "Default network source:" in the VM Manager settings; this is the help text copied from the UI: Select the name of the network you wish to use as default for your VMs. You can choose between 'bridges' created under network settings or 'libvirt' networks created with the virsh command in the terminal. The bridge 'virbr0' and the associated virtual network 'default' are created by libvirt. Both utilize NAT (network address translation) and act as a DHCP server to hand out IP addresses to virtual machines directly. More optional selections are present for bridges under network settings or for libvirt networks created with the virsh command in the terminal. If you are unsure, choose 'virbr0' as the recommended Unraid default. NOTE: You can also specify a network source on a per-VM basis. IMPORTANT: Neither libvirt nor Unraid automatically brings up an interface that is assigned to a libvirt network. Before you use a libvirt network, please go to Settings -> Network Settings and, if necessary, manually set the associated interface to up.
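A hedged sketch of how to inspect the libvirt side mentioned in that help text (assumes virsh from libvirt is installed; the sample line imitates `virsh net-list` output so the check itself is runnable):

```shell
#!/bin/sh
# Sketch (assumes libvirt): on a live host you would run
#   virsh net-list --all        # lists the 'default' network on virbr0
#   ip -br link show virbr0     # the bridge must be UP before VMs use it
# Here the state check runs on a sample line imitating `virsh net-list`
# output (columns: Name / State / Autostart / Persistent).
sample='default   active   yes   yes'
printf '%s\n' "$sample" | awk '$1=="default" {print $2}'
```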
  5. I am not using bond0 because bonding makes no sense in my environment. If I used bonding, I would simply have replaced eth0 with bond0 everywhere in my description.
  6. No problems here so far, but note that I have enabled bridging and I use the ipvlan driver for Docker. The recommendation you mentioned is for the case where people use the macvlan driver for Docker without bridging, because they otherwise had problems with network equipment (Fritzbox, Unifi). However, I do not experience problems with the Unifi network application, and the IP-flapping issue with arpwatch and TrueNAS is resolved now, although I am using ipvlan.
  7. @ljm42 @JorgeB thanks, I got that working now. For the benefit of everyone who had the same problem and posted to a bunch of other threads (which have been linked to this thread now), here is a summary of what I have learnt: To use the macvlan driver for Docker: disable bridging in the network settings; then eth0 and possibly vhost0@eth0 have the IP address of the Unraid host. To use the ipvlan driver for Docker: enable bridging in the network settings; then eth0 has no IP address but is connected to br0, which has the IP address of the Unraid host. To avoid vhost0@eth0 using the same IP as eth0 (which will trigger alarms in arpwatch, pfSense, TrueNAS, etc.): do NOT enable BOTH of IPv4 custom network on interface eth0 (optional) (default is ON) and Host access to custom networks (default is OFF). I am currently using these settings successfully (eth0-eth3 have no IP address but are connected to the bridge br0, which has the IP address of the Unraid host): network settings: Bridging: yes; Docker settings: Docker custom network type: ipvlan; Host access to custom networks: disabled; IPv4 custom network on interface eth0 (optional): enabled.
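A hedged way to verify such a combination (the command names assume iproute2 and the docker CLI; the sample output is made up so the executable part of the sketch can run anywhere):

```shell
#!/bin/sh
# On a live Unraid host you would check directly:
#   ip -br -4 addr show                           # br0 holds the host IP, eth0 none
#   docker network inspect -f '{{.Driver}}' br0   # expect "ipvlan"
# Below, the "who holds the IP" check runs on sample (assumed) output
# of `ip -br -4 addr show` instead.
sample='br0     UP    192.168.1.10/24
eth0    UP'
# Print only interfaces that actually carry an IPv4 address (>=3 fields).
printf '%s\n' "$sample" | awk 'NF>=3 {print $1}'
```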
  8. I just updated to the latest image and see that Bacula has been updated to 3.0.4 and Bacularis to 2.6.0. Thanks for this! So far it works for me. The S3 driver would also be great to have at some point (I actually already bought an extra NAS for offsite backup and installed MinIO for S3 - I erroneously thought that the S3 cloud support was part of Bacula's default feature scope).
  9. There are useful answers in this new thread:
  10. There are useful answers in this new thread:
  11. There are useful answers in this new thread:
  12. And I thought I had enabled ipvlan, but actually macvlan is enabled. ipvlan is shown as an option, but it is greyed out and cannot be selected. Why would that be the case?
  13. I investigated further with different settings for Docker and found that vhost0@eth0 only gets an IP (and then it is the same IP as eth0) if both of these are enabled (if only one of them is enabled, vhost0 does not get an IP): Host access to custom networks and IPv4 custom network on interface eth0 (optional). I have no idea why both were enabled on my server (it is quite possible I did this myself without knowing what I was doing). Just to clarify whether I really need them, could someone confirm or correct my assumptions: IPv4 custom network on interface eth0 (optional): I only need this if I want a container to use an IP from the subnet in which the IP of eth0 is located. Correct? Host access to custom networks: I only need this if the Unraid host should access a service provided by one of the containers running on it, e.g. an agent running natively on the Unraid host needs to connect to a service (like NMS or backup) in a container. Correct?
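A hedged way to spot the duplicate-address condition described above (the sample lines imitate `ip -br -4 addr show` output; the addresses are made up):

```shell
#!/bin/sh
# Sketch: flag any IPv4 address that appears on more than one
# interface, which is exactly the eth0/vhost0 condition above.
# On a live host, feed real `ip -br -4 addr show` output instead.
sample='eth0             UP    192.168.1.10/24
vhost0@eth0      UP    192.168.1.10/24'
# Column 3 is the address; uniq -d keeps only duplicated ones.
dupes=$(printf '%s\n' "$sample" | awk '{print $3}' | sort | uniq -d)
echo "${dupes:-none}"
```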
  14. I am not the first: when I searched for this problem in the forum, I found 5+ threads of people reporting it and asking for help, with others commenting +1. Unfortunately, none of those threads received a working answer.
  15. arpwatch and TrueNAS are complaining that the IP of my Unraid server is flapping between the MAC of its eth0 (one of 4 interfaces; eth1-3 are unused) and vhost0@eth0. From all I know about IP networking, having two interfaces with different MACs using the same IP is not a healthy condition. I have investigated a little, and it looks like vhost0@eth0 exists even if Docker is off, but then it has no IP assigned. If I turn Docker on, it gets the IP of eth0 assigned. (The VM Manager uses virbr0, which has a different IP and also goes away when turning the VM Manager off.) The curious thing is that vhost0 has a routing metric of 0 and is thus preferred over eth0 (metric 1007). I have never seen such a construct on any of my Debian Docker hosts, so it is certainly possible to have Docker working fine without such an unsound setup. Is it possible to run Unraid with Docker in a way that each interface has its own IP (without assigning individual non-Docker-network IPs to the containers), and how?
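The metric preference can be illustrated with a small sketch (the route lines are sample data imitating `ip route` output; on a live host you would inspect the real table):

```shell
#!/bin/sh
# Sketch: given two routes to the same subnet, the kernel prefers the
# lower metric, so vhost0 (metric 0) wins over eth0 (metric 1007).
routes='192.168.1.0/24 dev vhost0 metric 0
192.168.1.0/24 dev eth0 metric 1007'
# Sort numerically by the metric value (field 5) and print the
# winning interface (field 3).
printf '%s\n' "$routes" | sort -k5 -n | head -n1 | awk '{print $3}'
```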
  16. I configured a copy job for the catalog to be copied to my S3 MinIO, but it seems this container does not contain the S3 cloud driver: Fatal error: init_dev.c:505 [SF0020] dlopen of SD driver=cloud at /opt/bacula/plugins/bacula-sd-cloud-driver-13.0.3.so failed: ERR=/opt/bacula/plugins/bacula-sd-cloud-driver-13.0.3.so: cannot open shared object file: No such file or directory Therefore I would suggest including this driver in the container, if possible. AFAIK it is supported for the Community Edition.
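For reference, once the driver is present, pointing the SD at a MinIO endpoint takes a Cloud resource plus a cloud-type Device in bacula-sd.conf. This is only a hedged sketch: the resource names, host, bucket, and credentials below are made-up placeholders, not values from this container.

```conf
# bacula-sd.conf fragment (sketch; names and credentials are placeholders)
Cloud {
  Name = MinIOCloud
  Driver = "S3"                      # needs bacula-sd-cloud-driver-*.so
  HostName = "nas.example.lan:9000"  # MinIO endpoint (assumption)
  BucketName = "bacula-offsite"
  AccessKey = "CHANGE_ME"
  SecretKey = "CHANGE_ME"
  Protocol = HTTPS
  UriStyle = Path                    # path-style URLs, typical for MinIO
}

Device {
  Name = CloudStorage
  Device Type = Cloud
  Cloud = MinIOCloud
  Archive Device = /opt/bacula/cloudcache  # local cache before upload
  Media Type = CloudType
  LabelMedia = yes
  Random Access = yes
  AutomaticMount = yes
  RemovableMedia = no
  AlwaysOpen = no
}
```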
  17. It is supposedly used for virtualization. I will try to disable virtualization once the parity check has finished; then I will see whether vhost0 goes away.
  18. I am planning to add offsite backup to my backup strategy and to use the Bacula S3 driver to implement it - actually using minIO as S3 server on a remote NAS. Does this container already include the S3 driver for Bacula SD?
  19. I am seeing the same issue (no bridging enabled): eth0 and vhost0 have the same IP and different MACs, so there are 2 route entries here, too
  20. I am seeing the same issue (no bridging enabled): eth0 and vhost0 have the same IP and different MACs
  21. I am seeing the same issue (no bridging enabled): eth0 and vhost0 have the same IP and different MACs
  22. I am seeing the same issue (no bridging enabled): eth0 and vhost0 have the same IP and different MACs