
wierdbeard65

Members
  • Posts

    18
  • Joined

  • Last visited

Everything posted by wierdbeard65

  1. I don't want to sound like an ass here, but I'm trying to figure out what the libvirt.img file actually is, beyond "it's used by VMs", which seems to be the stock answer whenever the question is asked. I get it, it's used by the VM subsystem. But what is it FOR? What does it contain? What happens if it is damaged or deleted? Does it need backing up? Does it benefit from being on high-speed drives? Etc., etc. I'm sorry, but the stock response isn't answering the question....
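For anyone wanting to check for themselves: libvirt.img is, as far as I understand it, a loopback filesystem image that Unraid mounts to hold libvirt's configuration, including the XML definitions of the VMs (the vdisks live elsewhere). A minimal sketch of how one might peek inside it; the path assumes Unraid's default share layout, and mounting requires root:

```shell
# Sketch only: the path below assumes Unraid's default location for the image.
IMG=/mnt/user/system/libvirt/libvirt.img
if [ -f "$IMG" ]; then
  mkdir -p /tmp/libvirt-view
  # Mount the image read-only via a loop device and list its contents;
  # the VM definitions should show up under qemu/*.xml.
  mount -o loop,ro "$IMG" /tmp/libvirt-view \
    && ls -R /tmp/libvirt-view \
    && umount /tmp/libvirt-view \
    || echo "mount failed (root required?)"
else
  echo "libvirt.img not found at $IMG"
fi
```

If the VM definitions are all that lives in there, losing the file would mean re-creating them by hand, which is presumably why backing it up is suggested; that last part is an inference, not an official statement.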
  2. I wish this command allowed you to select where to run it. I have a failing drive and am trying to use unBALANCE to move everything off it before removing it. Obviously I want to do this ASAP, but this command won't let me run it on just that one drive! I am also concerned that, despite it being "Docker safe", some of my containers (Poste in particular) may not be happy afterwards....
  3. OK, after much digging, as well as not a little hair-pulling, I now have it working. If anyone else stumbles across this post with the same problem: it seems that when you access the web interface on port 8280, it redirects the browser to HTTPS on port 443 and changes the location to /webmail. I hadn't noticed the switch.... The Let's Encrypt challenge works over HTTP on port 80. So what I did was set my reverse proxy to forward all HTTP requests for mail.<mydomain> on port 80 to HTTPS on my Unraid box, and it was then able to verify everything....
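A rough sketch of what that forwarding rule might look like in NGINX; the server name and upstream address are placeholders, not the poster's actual values:

```nginx
# Sketch only: mail.example.com and 192.168.1.10 are stand-ins.
server {
    listen 80;
    server_name mail.example.com;

    # Send all plain-HTTP traffic (including /.well-known/acme-challenge/)
    # on to the Unraid box, matching the http -> https forward described above.
    location / {
        proxy_pass https://192.168.1.10;
        proxy_set_header Host $host;
    }
}
```

The key point is that the ACME HTTP-01 challenge arrives on port 80, so that path must reach the Poste.io container without being redirected elsewhere first.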
  4. Hi, I have Poste.io set up on my Unraid server, which is behind my firewall. I have a reverse proxy (NGINX) set up for web access, and the mail ports are forwarded. All this is working. My problem is with using Let's Encrypt for certificates. I created a wildcard cert for my domain and, if I manually copy the certs, it "kind of" works. The problem is that I have to manually renew it and then copy a bunch of certificates around. If I try to use Poste.io's internal certificate setup, I hit a roadblock. I don't know how this is working, but if I go to the URL for my Unraid box, I get the Unraid interface; the same URL with /webmail or /admin gets me Poste.io. I have no idea where this redirect is being set up. I don't really care, but when I use the cert setup, it tries to set up a challenge at http://<mydomain>/.well-known/acme-challenge/IyfGN5K7ZHtYnV198g5g-phW219wh73eMjddgVvhrmg and that is NOT redirected, so it fails. Can anyone help?
  5. Hi, this is (at least as far as I'm concerned) a really odd one... I have a VM on my Unraid 6.5.3 system which is running Ubuntu and which I use as my mail server. (I intend to move it over to Docker, but that's a project for another day.) Anyway, I am using a bridged network and I have a single subnet for my entire network. I have a variety of other (physical) machines, including a couple of Windows 10 boxes, an Ubuntu-based router/firewall and some iOS devices. From either my Unraid host machine or my Ubuntu router, I can ping the VM in question (and vice versa). From the Windows machines, I cannot (it times out). Inbound e-mail gets delivered to the mail server (via the firewall) and outbound e-mail gets sent. I can check my e-mail from my iPhone, but not from my Windows desktop. On the host, I keep getting the error "Tower kernel: br0: received packet on eth0 with own address as source address (addr:9c:8e:99:0b:e2:0a, vlan:0)" in syslog. This is driving me completely insane! Can anyone suggest what the issue might be, or how to proceed with troubleshooting? Oh, and just to further upset everything, this was all working fine and, as far as I'm aware, nothing has changed. It just stopped a little after midday, 9 days ago! TIA Paul
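One way to narrow this down is to check when the bridge errors began and confirm which MAC is involved by grepping syslog. A self-contained sketch; the sample lines below are made up to mirror the error in the post, and on the real box the greps would point at /var/log/syslog instead:

```shell
# Fabricated sample lines mirroring the error above; on the actual server,
# point the greps at /var/log/syslog instead of this sample file.
cat > /tmp/syslog.sample <<'EOF'
Jun  1 12:01:02 Tower kernel: br0: received packet on eth0 with own address as source address (addr:9c:8e:99:0b:e2:0a, vlan:0)
Jun  1 12:01:05 Tower kernel: br0: received packet on eth0 with own address as source address (addr:9c:8e:99:0b:e2:0a, vlan:0)
EOF

# How many times has it happened, and with which MAC address?
grep -c 'own address as source' /tmp/syslog.sample
grep -o 'addr:[0-9a-f:]*' /tmp/syslog.sample | sort -u
```

The timestamps on the first matching lines would show whether the errors began around the time things stopped working. This particular kernel message generally indicates the bridge is seeing its own frames come back in, i.e. a loop or a bond/bridge misconfiguration somewhere, so capturing traffic from that MAC with tcpdump on br0 would be a reasonable next step.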
  6. OK, I think I follow! I wasn't aware that I could have two slaves in one Jenkins job, so I will look into this. I wholeheartedly agree that if I could do it all with Docker, I should, but unless something has changed, I don't believe I can: I am developing .NET applications which, I believe, need the Windoze OS to compile and test. Incidentally, I am not wanting to create or delete the VM (which would, as you say, be slow), only start it up and shut it down. I don't want it left running 24x7, just fired up to do a build and then shut down again afterwards. Thank you once again for your support.
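The start/stop part, at least, is straightforward from the host's side with virsh. A dry-run sketch of the flow: the VM name is hypothetical, and VIRSH is deliberately set to echo so the script only prints what it would do; on the real Unraid host it would be VIRSH=virsh:

```shell
# Dry run: VIRSH is "echo virsh" so nothing is actually started or stopped.
# On the Unraid host itself this would be VIRSH=virsh.
VIRSH="echo virsh"
VM="builder-win10"   # hypothetical VM name

$VIRSH start "$VM"      # boot the build agent before the job
# ... Jenkins would run the build on the agent here ...
$VIRSH shutdown "$VM"   # ask the guest for a clean shutdown afterwards
```

Note that `virsh shutdown` sends an ACPI power-button request, so the guest OS has to honour it (Windows does by default); `virsh destroy` is the hard-power-off fallback.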
  7. First up, thanks for the prompt response! I completely understand the theory of what you suggest, as well as the logic behind it, however I am not sure how to proceed in order to make it happen! The plugin for Jenkins that allows me to control KVM does not, as far as I am aware, support this kind of "one step removed" process, unless I am missing something? The plugin you referenced is pretty cool and, if I am reading it correctly, does with Docker what I am trying to do with KVM (i.e. allows a Jenkins job to dynamically provision a Docker container). I guess my question is, how would I then make the library installed in the second container available to the first (Jenkins) one? Or am I missing something here?
  8. Hey, thanks for yet another great plugin! OK, so after many hours on Google I have been unable to find the answer to my question, so my apologies if I just failed to enter the correct search string! I have an UnRaid server (6.5.1) which runs both Docker and KVM. One of my Dockerized (is that a word?) applications is binhex-Jenkins. I want to run my build/test agent(s) as VMs within KVM (since some of them are Windows-based). So far, so good. I would, however, like Jenkins to be able to start the VM, execute the job, then shut the VM down again afterwards. To do this, I need to use the Libvirt Slaves plugin (I think). One of its requirements is that the libvirt library is installed on the Jenkins machine. This is where I am stuck. Can anyone point me at documentation for how to do this? Everything I can find either assumes Jenkins is running directly on the host, as opposed to in Docker, or that the base OS is Ubuntu, not UnRaid. Alternatively, would it be a big job (and popular, perhaps) to pre-add this library to the main Docker image? Of course, I am assuming that's even possible, so I guess another approach would be to allow the container to access the host OS's copy of the library? Needless to say, I don't know where to start with that, either! TIA
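For what it's worth, one possible route, sketched here as an assumption rather than a tested recipe: binhex's images are Arch Linux-based, so extending the image with the libvirt client package might be enough for the plugin's Java binding to find libvirt.so. The image name/tag below is an assumption:

```dockerfile
# Untested sketch: extends the binhex Jenkins image (assumed Arch-based)
# with the libvirt client library for the Libvirt Slaves plugin.
FROM binhex/arch-jenkins:latest
RUN pacman -Sy --noconfirm libvirt
```

The other idea from the post, exposing the host's copy of the library to the container via a volume mapping, could work too, but mixing host and container library versions tends to be fragile, so baking the package into a derived image is probably the safer experiment.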