wierdbeard65

Members
  • Posts: 18
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed


wierdbeard65's Achievements

Noob (1/14)

Reputation: 0

  1. I don't want to sound like an ass here, but I'm trying to figure out what the libvirt.img file actually is, beyond "it's used by VMs", which seems to be the stock answer whenever the question is asked. I get it, it's used by the VM subsystem. But what is it FOR? What does it contain? What happens if it is damaged or deleted? Does it need backing up? Does it benefit from being on a high-speed drive? And so on. I'm sorry, but the stock response isn't answering the question...
  2. I wish this command allowed you to select where to run it. I have a failing drive and am trying to use UnBalance to move everything off it before removing it. Obviously I want to do this ASAP, but this command won't let me run it on just that one drive! (A per-disk sketch follows this list.) I am also concerned that, despite it being "Docker safe", some of my containers (Poste in particular) may not be happy afterwards...
  3. OK, after much digging, as well as not a little hair-pulling, I now have it working. If anyone else stumbles across this post and has the same problem: it seems that when you access the web interface on port 8280, it redirects the browser to https on port 443 and changes the location to /webmail. I hadn't noticed the switch... The Let's Encrypt challenge works over http on port 80. So what I did was set my reverse proxy to forward all http requests for mail.<mydomain> on port 80 to https on my Unraid box, and it was then able to verify everything. (A sketch of the proxy rule appears after this list.)
  4. Hi, I have Poste.io set up on my Unraid server, which is behind my firewall. I have a reverse proxy (NGINX) set up for web access, and the mail ports are forwarded. All of this is working. My problem is with using Let's Encrypt for certificates. I created a wildcard cert for my domain and, if I manually copy the certs, it "kind of" works. The problem is that I have to renew it manually and then copy a bunch of certificates around. If I try to use Poste.io's internal certificate setup, I hit a roadblock. I don't know how this is working, but if I go to the URL for my Unraid box, I get the Unraid interface; the same URL with /webmail or /admin gets me Poste.io. I have no idea where this redirect is being set up. I don't really care, however when I use the cert setup it tries to set up a challenge on http://<mydomain>/.well-known/acme-challenge/IyfGN5K7ZHtYnV198g5g-phW219wh73eMjddgVvhrmg, and that is NOT redirected, so it fails. Can anyone help?
  5. Hi, this is (at least as far as I'm concerned) a really odd one... I have a VM on my Unraid 6.5.3 system which is running Ubuntu and which I use as my mail server. (I intend to move it over to Docker, but that's a project for another day.) Anyway, I am using a bridged network and I have a single subnet for my entire network. I have a variety of other machines (physical), including a couple of Windows 10 boxes, an Ubuntu-based router/firewall and some iOS devices. If I go to either my Unraid host machine or my Ubuntu router, I can ping the VM in question (and vice versa). From the Windows machines, I cannot (it times out). Inbound e-mail gets delivered to the mail server (via the firewall) and outbound e-mail gets sent. I can check my e-mail from my iPhone, but not from my Windows desktop. On the host, I keep getting the error "Tower kernel: br0: received packet on eth0 with own address as source address (addr:9c:8e:99:0b:e2:0a, vlan:0)" in syslog. This is driving me completely insane! Can anyone suggest what the issue might be, or how to proceed with troubleshooting? (A few diagnostics are sketched after this list.) Oh, and just to further upset everything, this was all working fine and, as far as I'm aware, nothing has changed. It just stopped a little after midday, 9 days ago! TIA Paul
  6. OK, I think I follow! I wasn't aware that I could have two slaves in one Jenkins job, so I will look into this. I wholeheartedly agree that if I could do it all with Docker, I should, but unless something has changed, I don't believe I can: I am developing .NET applications which, I believe, need Windows to compile and test. Incidentally, I am not wanting to create or delete the VM (which would, as you say, be slow), only start it up and shut it down. I don't want it left running 24x7, just fired up to do a build and then shut down again afterwards. (A wrapper sketch follows this list.) Thank you once again for your support.
  7. First up, thanks for the prompt response! I completely understand the theory of what you suggest, as well as the logic behind it; I am just not sure how to proceed to make it happen. The plugin for Jenkins that allows me to control KVM does not, as far as I am aware, support this kind of "one step removed" process, unless I am missing something? The plugin you referenced is pretty cool and, if I am reading it correctly, does with Docker what I am trying to do with KVM (i.e. it allows a Jenkins job to dynamically provision a Docker container). I guess my question is: how would I then make the library installed in the second container available to the first (Jenkins) one? Or am I missing something here?
  8. Hey, thanks for yet another great plugin! OK, so after many hours on Google I have been unable to find the answer to my question, so my apologies if I just failed to enter the correct search string! I have an UnRaid server (6.5.1) which runs both Docker and KVM. One of my Dockerized (is that a word?) applications is binhex-Jenkins. I want to run my build/test agent(s) as VMs within KVM (since some of them are Windows-based). So far, so good. I would, however, like Jenkins to be able to start the VM, execute the job, then shut the VM down again afterwards. To do this, I need to use the Libvirt Slaves plugin (I think). One of the requirements for that is that the libvirt library is installed on the Jenkins machine. This is where I am stuck. Can anyone point me at documentation on how to do this? Everything I can find assumes either that Jenkins is running directly on the host, as opposed to in Docker, or that the base OS is Ubuntu, not UnRaid. Alternatively, would it be a big job (and popular, perhaps) to pre-add this library to the main Docker image? Of course, I am assuming that is even possible, so I guess an alternative approach would be to allow the container to access the host OS's copy of the library? (A socket bind-mount sketch follows this list.) Needless to say, I don't know where to start with this, either! TIA
  9. Hi, I know this is an old thread, but just in case... I now use BitSync to sync my various machines (including my UnRaid NAS) and it works like a charm!
  10. Hi, apologies if this is the wrong forum; please redirect if necessary! OK, I'm a current Unraid 5 user and absolutely love it. I run Plex / SickBeard / CouchPotato / SabNzbD and it takes care of our family entertainment needs. Anyway, I think the hardware I'm using is starting to show signs of being under-powered. I also want to add OwnCloud, Minecraft servers and possibly Asterisk to the mix, so Docker support would be great. I have a spare HP380GL server which I'd like to use. It has a built-in RAID array which it would be a shame to waste; I'm thinking of using it as a cache drive (possibly the one used by the VMs). I then plan to install SATA cards to move over my existing storage. Anyway, I don't believe Unraid comes with the CCISS drivers. I saw some threads on re-compiling Unraid 5 kernels and wanted to get some information on doing this with 6:
      • Do I need to bother, or is this already built in? (A quick check is sketched after this list.)
      • Any general advice / thoughts / pointers?
      • What kernel version should I use? (The wiki doesn't mention Unraid 6 and Slackware versions.)
      • Do I have to compile on Slackware, or will any 64-bit Linux work?
      Thanks!
  11. Hi, I'm shuffling my disks around: I want to take one of my current data drives, replace it with a bigger one, and then use the old one as a cache drive. I'm not sure what the "correct" way to do this is... Should I just replace it and allow parity to rebuild onto the new disk, or copy the data off it to another disk in the array first? How do I prep the drive (if at all) for use as cache? My concern is that if I simply remove it from the array and let parity rebuild, then when I add it as cache there will be some form of conflict, as all the files will still be on it. Or am I worrying about nothing? Thanks :-)
  12. Thanks, both of you! That certainly makes some sense now! For me, it isn't so much about performance (especially as I was planning on using the SSD as cache eventually) as it is about capacity and ease of expansion. My DL380 doesn't have any spare room inside, nor does it have any SATA slots, so I will need to get a SATA card and then figure out a way to hook up the drives as eSATA. I reckon an old PC case full of drives hooked up to a SATA port multiplier connected to my server may well do the trick :-)
  13. Thanks for the response! Point 1 is a real bummer. I guess I was confused by the number of people flagging up "good deals" on large USB drives. Why would you install them if not as part of the array? If I were to use the full Slackware version (I read on the wiki that "pretty much anything is then possible"), could I then make the USB drives part of the array?
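
For post 2: there is no per-disk option in the GUI tool, but a rough equivalent can be run by hand against a single disk. The sketch below is an assumption about what such a pass might look like, not the exact recipe the built-in tool uses; /mnt/disk3 and the appdata path are placeholders, and the owner/mode values should be checked against the rest of the array before running anything.

    # Hypothetical per-disk permission pass (post 2). Placeholders: /mnt/disk3
    # is the failing disk, /mnt/disk3/appdata is wherever container data lives.
    # The nobody:users ownership and rw modes mirror typical Unraid share
    # permissions but are an assumption -- verify against your array first.
    DISK=/mnt/disk3
    SKIP=/mnt/disk3/appdata   # left untouched, same idea as the "Docker safe" tool

    find "$DISK" -path "$SKIP" -prune -o -exec chown nobody:users {} +
    find "$DISK" -path "$SKIP" -prune -o -type d -exec chmod u+rwx,go+rwx {} +
    find "$DISK" -path "$SKIP" -prune -o -type f -exec chmod u+rw,go+rw {} +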
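
For posts 3 and 4: a minimal sketch of the kind of NGINX rule described in post 3, passing plain-HTTP traffic for mail.<mydomain> (including the /.well-known/acme-challenge/ requests) through to the box running Poste.io. The upstream address 192.168.1.10 is a placeholder; post 3 forwarded to HTTPS on the Unraid box, so that is what the sketch does, but adjust the scheme and port to your own setup.

    # Sketch only -- hostname and challenge path come from the posts,
    # the upstream address is a placeholder.
    server {
        listen 80;
        server_name mail.<mydomain>;

        location / {
            proxy_set_header Host $host;
            # Post 3 forwards plain HTTP on to HTTPS on the Unraid box.
            proxy_pass https://192.168.1.10;
        }
    }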
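
For post 5: the "received packet on eth0 with own address as source" message usually points at traffic being reflected back at the bridge, so before changing anything it is worth watching where the pings actually die. A few standard, non-destructive diagnostics, with 192.168.1.50 as a placeholder for the VM's address (tcpdump may need to be installed separately on a stock Unraid box):

    # On the Unraid host: watch ICMP between a Windows machine and the VM on
    # the bridge.
    tcpdump -ni br0 icmp and host 192.168.1.50

    # On the Unraid host: dump the bridge's learned MAC table and check whether
    # the host's own MAC (9c:8e:99:0b:e2:0a from the syslog line) shows up
    # where the VM's MAC should be.
    bridge fdb show br br0

    # On a Windows machine that cannot reach the VM (run in cmd.exe):
    #   arp -a   -- compare the cached MAC for the VM's IP with the VM's real MAC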
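
For post 6: the start-up/shut-down part does not strictly need the Libvirt Slaves plugin; a Jenkins job can simply wrap its build steps in virsh calls against the Unraid host. A rough sketch, assuming SSH access to the host; the qemu+ssh URI and the VM name "WinBuild" are placeholders:

    #!/bin/bash
    # Hypothetical wrapper a Jenkins job could run: boot the build VM, run the
    # build against it, then shut it down again afterwards.
    set -e
    URI="qemu+ssh://root@tower/system"   # placeholder Unraid host
    VM="WinBuild"                        # placeholder VM name

    virsh -c "$URI" start "$VM" || true  # ignore "domain is already active"
    # ... wait for the guest to come up and run the build via your agent here ...
    virsh -c "$URI" shutdown "$VM"       # graceful ACPI shutdown when done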
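
For post 8: rather than baking libvirt into the Jenkins image, one common workaround is to let the container talk to the libvirt daemon Unraid already runs, either over SSH (as in the sketch above) or by bind-mounting the host's libvirt socket into the container. The socket path below is the usual default but should be confirmed on the host, the image name is a placeholder for whatever your template installs, and the libvirt client tools still have to exist inside the container.

    # Hypothetical: expose the host's libvirt socket to the Jenkins container
    # so a libvirt client inside it can manage the host's VMs. Keep whatever
    # ports and volumes your existing template already maps.
    docker run -d --name jenkins \
      -v /var/run/libvirt/libvirt-sock:/var/run/libvirt/libvirt-sock \
      <your-jenkins-image>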
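
For post 10: before going anywhere near a custom kernel build, it is worth simply booting the Unraid 6 USB stick on the HP server and checking whether the Smart Array controller is already driven (these controllers are handled by either the cciss or the newer hpsa driver, depending on kernel and controller generation). A quick, read-only check from the console or SSH:

    # Non-destructive: only reads kernel and device information.
    dmesg | grep -iE 'cciss|hpsa'   # did a Smart Array driver bind at boot?
    lsmod | grep -iE 'cciss|hpsa'   # is the driver loaded as a module?
    lsblk                           # do the array's logical drives show up?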