CHBMB

Community Developer

Everything posted by CHBMB

  1. Decided to do a bit more with my WHS2011 install. Firstly, the VM's disk was 160GB, which I don't need; although the qcow2 image isn't that big initially, I didn't like the idea of it getting out of hand. WHS partitions the disk into two parts:

     C: OS install - 60GB
     D: User shares - 100GB

     My plan was to try and shrink it down to a more manageable 40GB. Began by opening the dashboard in WHS and moving all the user shares to the OS disk. You get a warning that it's not recommended, but just click through that. Then open Computer Management and go to Storage > Disk Management. Once there, delete the D: partition, then shrink the C: partition to a size smaller than you want your final partition to be, e.g. I wanted a 40GB final partition, so I shrunk C: to 25GB (going smaller than the target means no data is left beyond the point where the image gets cut). I now had the System Reserved partition at 100MB, a C: partition at 25GB, and a lot of empty space. Shut down the VM.

     Copy the qcow2 file to another location - I used my cache disk, as you're about to need more space to convert it to a RAW file. I then used my Linux VM, mounted my cache drive, opened a terminal and ran:

     qemu-img convert -O raw WHS2011.qcow2 WHS2011.raw

     I then shrunk the RAW file:

     qemu-img resize WHS2011.raw -120G

     Then converted it back to qcow2:

     qemu-img convert -O qcow2 WHS2011.raw WHS2011.qcow2

     I then copied the WHS2011.qcow2 file back to the VM folder for my WHS2011 install, after renaming the original just in case. Booted it up, went back into Computer Management > Storage > Disk Management, and extended the C: partition to fill the empty space. Robert's your Mother's Brother!

     I've only tried this with WHS2011, but I don't see any reason why it wouldn't work with any other Windows OS. The only thing in the back of my mind is that this was a fresh install that hadn't been used other than to make all the changes I've documented in the thread thus far. I wonder, if it were a heavily used VM, whether the resize command might truncate the filesystem if there was data written "towards the end of it", if you see what I mean...
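     Putting the disk juggling together in one place, it boils down to the sequence below. This is a rough sketch rather than a script I've run end-to-end: the filenames are from my setup, it assumes the guest partitions have already been shrunk as described above, and newer qemu-img builds may want a --shrink flag on the resize step.

     # Keep the original safe before touching anything
     cp WHS2011.qcow2 WHS2011.qcow2.bak

     # qcow2 -> raw (needs free space for the full 160G virtual size)
     qemu-img convert -O raw WHS2011.qcow2 WHS2011.raw

     # Chop 120G off the end of the raw image (160G -> 40G);
     # only safe because the guest partitions were shrunk first
     qemu-img resize WHS2011.raw -120G

     # raw -> qcow2 again; the new file only stores blocks in use
     qemu-img convert -O qcow2 WHS2011.raw WHS2011.qcow2

     # Sanity-check format and sizes before swapping it in
     qemu-img info WHS2011.qcow2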
  2. Replying to another poster, who wrote:

     "You don't have to pass any ports to the host to access the VM from the outside. The VM gets its own IP, so you just use that to connect to it. I have a Tvheadend server set up on an Ubuntu VM here and it works without any port redirecting. Or maybe I misunderstood what you want?"

     Yep, that's the way I have found it to work. My router's LAN page is starting to look very, very full! More like a medium-sized business than a home network for two people...
  3. So thought I'd update this post with some XML files to help anyone else. Jbartlett has done a great job solving this one, but I find that if I have a couple of XML files to peruse, it enables me to see which bits are relevant and generic and which are specific to my install.

     Bit of background: my KVM folder is mounted on a non-array 250GB SSD named, aptly enough, KVM, which is why I needed to create WHS2011.img on the cache drive, as I didn't have enough space there due to WHS requiring a 160GB drive as a prerequisite to install.

     In summary, what I did is:

     1. Used a Linux VM. Opened a terminal and ran:

     sudo apt-get install qemu-utils

     Then, in my KVM share, from a terminal ran:

     qemu-img create -f raw WHS2011.img 160G

     2. Created my VM in the plugin in the normal way (I used q35). The relevant part I then pasted in (just below the <os> section) to get the VM to work was:

     <disk type='file' device='disk'>
       <driver name='qemu' type='raw'/>
       <source file='/mnt/cache/WHS2011.img'/>
       <target dev='hda' bus='ide'/>
       <address type='drive' controller='0' bus='0' target='0' unit='0'/>
     </disk>

     I also altered the disk entry for the WHS2011.qcow2 file, changing the target dev from hda to vda (not sure if this is necessary, but it seemed sensible), so it looked like this:

     <disk type='file' device='disk'>
       <driver name='qemu' type='qcow2'/>
       <source file='/mnt/disk/KVM/VM/WHS2011/WHS2011.qcow2'/>
       <target dev='vda' bus='virtio'/>
       <address type='pci' domain='0x0000' bus='0x02' slot='0x03' function='0x0'/>
     </disk>

     3. Booted the VM and installed WHS2011; along the way you need to install the virtio network driver and balloon driver. Once it was all installed, I deviated a little from jbartlett's instructions at this point, as I already had a qcow2 HDD that had been created with the VM by the plugin, and I left it in the XML file, so once WHS2011 was installed I could install the virtio SCSI driver straight away, meaning my WHS2011 install now had the ability to see qcow2 images. Once that's done, shut down the WHS2011 VM.

     4. Booted my Linux VM, mounted the cache drive, opened a terminal there and ran:

     qemu-img convert -O qcow2 WHS2011.img WHS2011.qcow2

     That takes a while to run...

     5. Once it's finished, copy over the WHS2011.qcow2 file, overwriting the existing file, then remove the reference to the WHS2011.img disk and change the target dev line in the qcow2 entry to this:

     <target dev='hda' bus='virtio'/>

     6. Within the plugin, change the boot device to HD and there you go...

     Once again, this was all jbartlett's work; I'm just documenting the way I did it. Hope it helps someone along the way. I'm very new to KVM and still finding my feet, so I figure there must be others like me out there, and documenting the problems as I find them seems a good idea. Of course, if anyone thinks it's spam then let me know and I'll stop!
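     One extra step I'd suggest before overwriting anything: qemu-img can report an image's format, virtual size and actual size on disk, which is a quick way to confirm the conversion did what you expect. Assuming qemu-utils is installed as above:

     # Check the converted image before copying it over the original
     qemu-img info WHS2011.qcow2

     # Compare against the raw source it came from
     qemu-img info WHS2011.img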
  4. Noctua fans are amazing. They're all I ever use, though I'm not keen on the colour!
  5. I see what you're saying. I found the problem and will update tonight with the other fixes. Thanks for all your hard work. I'm turning into a VM fanatic. This is probably the greatest thing to hit UnRAID since I started out with V4.7!
  6. I haven't tested this at all, but I think where I'd start is:

     On your Windows VM:

     1. Download plink from here onto your Windows VM and place it wherever you want it to stay - I would suggest C:\Windows\System32.

     2. Create a file called plink.bat on your Windows machine (use Notepad++) containing:

     C:\Windows\System32\plink.exe root@<your Unraid IP> -m C:\UPS_Shutdown.sh

     Might as well put it in C:\.

     3. Create a file called UPS_Shutdown.sh (again using Notepad++) on your Windows box and copy it to C:\. In that file I would put:

     powerdown

     On your Unraid box:

     4. Install the powerdown plugin from dlandon & Weebotech: https://github.com/dlandon/unraid-snap/raw/master/powerdown-x86_64.plg (that's the V6 plugin - I'm assuming you are on V6 as you have a VM).

     Back to your Windows box & UPS program:

     5. Finally, get your UPS software to execute plink.bat in the event of a power cut with:

     C:\plink.bat

     Refer to page 35 of the user manual you posted.

     Like I said, I haven't tested any of this as I have an APC UPS, so I think it will work but might need a bit of tweaking, and I'm afraid I can't try it myself at the moment as I'm in the middle of 2 x 4TB preclears! Once that's finished I'm happy to try it out and see, so let me know if you have any problems. I'm sure some Unraid guru will come along, check the above at some point, and offer some advice and tweaking.
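     One plink gotcha I'm aware of (untested here, like the rest of this): the first connection to a new server asks you to accept the SSH host key, and an unattended UPS-triggered run has no console to answer that prompt, so it's worth running the command once by hand first. A sketch of what I mean, where root@<your Unraid IP> and YOURPASSWORD are obviously placeholders, and -pw is plink's password flag (key-based auth would be tidier, but this is the simplest version):

     REM Run this once by hand and answer "y" to cache the host key
     C:\Windows\System32\plink.exe root@<your Unraid IP> -pw YOURPASSWORD exit

     REM After that, the UPS software can run it unattended
     C:\Windows\System32\plink.exe root@<your Unraid IP> -pw YOURPASSWORD -m C:\UPS_Shutdown.sh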
  7. I'm comfortable using SSH, but I couldn't find the location of the qemu commands to run them. Care to share? As you may have noticed from my previous reply, I went somewhat round the houses to get to the solution, and I'd prefer not to have to do all that again!
  8. Right, managed to sort it out, and the solution was staring me in the face the whole time. Feel a bit of an idiot, as it was so obvious. Fired up a Linux Mint VM and mounted my KVM share. Opened a terminal and ran:

     sudo apt-get install qemu-utils

     Then, in my KVM share, from a terminal ran:

     qemu-img create -f raw WHS2011.img 160G

     The rest is just following jbartlett's post with a few things changed to suit my config. So I want to say a big thank you to jbartlett; I don't think I'd ever have got things going without his post. All of this is only to run an IIS server while I try and get my head around Apache or Nginx! Damn you, Windows!
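     For anyone following along, mounting the share from the Linux VM is the only fiddly bit. Here's roughly what I mean, assuming the KVM share is exported over SMB as //tower/KVM with guest access allowed - both of those are assumptions, so adjust to your setup:

     # cifs-utils provides the SMB mount helper
     sudo apt-get install cifs-utils

     # Mount the Unraid share inside the VM
     sudo mkdir -p /mnt/kvm
     sudo mount -t cifs //tower/KVM /mnt/kvm -o guest

     # Then create the raw image directly on the share
     cd /mnt/kvm
     qemu-img create -f raw WHS2011.img 160G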
  9. The passwords feature is a great addition, although not one I need (at the moment), but is there any way to connect via the web VNC without having to send a blank password? If I click send password with a blank password, then I can connect fine, but the pop-up box doesn't disappear. Not a massive problem, but I can't seem to work out how to close it without disconnecting.
  10. I do understand that, but as yet I haven't needed to install anything from any repositories other than those created by posters here. I did play around with Docker on a Linux Mint VM a few months ago, but the plugin here has made it much easier to implement and helped me get my head around the whole concept a bit better too.
  11. I've got some 5-in-3 cages, different to yours, but I replaced all the fans with some Noctua ones. Still not silent by any stretch of the imagination. You could remove the fans and see what your temps do, but my suspicion is that your hard drives will get a little too toasty for comfort. You may just be better off seeing if you can replace the fans.
  12. Just reread your posts, Luca. If you've got it installed on your Win8 VM and it's working and shutting down your VM, then, assuming you run your VM constantly whilst your UnRAID host is on, why don't you just set up a script within the Win8 VM to send a powerdown command to your UnRAID host via SSH? That would bypass any need to use the Linux configuration of your UPS.
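     The script itself could be a one-liner with plink, something like the below - same caveats as my earlier post: untested by me, root@<your Unraid IP> and YOURPASSWORD are placeholders, and it assumes the powerdown plugin is installed on the host.

     REM shutdown_unraid.bat - point the UPS software in the Win8 VM at this
     C:\Windows\System32\plink.exe root@<your Unraid IP> -pw YOURPASSWORD powerdown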
  13. It is possible to import some config files, but sometimes the paths are different depending on how you run the docker. I have to say, I remember running plugins before I left UnRAID for a while, and I have found it much easier to get dockers up and going. I'd back up your flash drive contents with your V5 install, start afresh with V6, and then try importing your configs once you've got the docker apps up and going. You can always go back to V5 if it doesn't work out. I'd recommend spending a bit of time getting used to dockers and experimenting. Also, for what it's worth, I'd take this opportunity to migrate from SABnzbd to NZBGet (lighter on resources and, in my experience, faster downloads) and to NZBDrone/Sonarr (just fantastic, way better than Sickbeard in my humble opinion).
  14. Yes, on V6. Not V5. Sorry, you're right, I assumed that as luca2 was using a VM he'd be using one of the V6 betas, but I don't know whether he is or not. Come to think of it, I don't even know if virtualisation is possible in V5?
  15. Aren't we using a 64-bit Slackware kernel? That unfortunately doesn't seem to appear there.
  16. I'm trying to install WHS 2011 as a VM. From what I've been reading, it's not possible to install to a qcow container, so I've been trying to follow various guides around, and then stumbled upon this one from jbartlett on the forums here! Unfortunately I'm stuck at the first hurdle... Anyone know how best to run this command?

     qemu-img create -f raw WHS2011.img 160G

     I think once I've managed to create the img file I'll be good to go. I've tried using the plugin to create raw, qcow and qcow2 images, but WHS2011 won't load the drivers to install to any of these, and it's a well-documented issue. Thanks for any help.
  17. I know what you mean, but I found that once I got 6b12 up and running, using Docker became very easy. To be honest, I had already read about Docker and been playing around with it in a Mint install a month or so ago, but the current plugin makes it very easy. What I'd suggest is get stuck in, if you haven't already, and see how you get on. The needo, smdion and gfjardim repositories make life very easy. I haven't yet progressed onto using "non-Unraid" dockers, but I will with time, and I think I've got my head around how to do so now.
  18. So it finally started working at some point between 22:00 on 4th February and 08:00 on 5th February (forum time). There was an issue for others, as documented here, but it seems sporadic and uncommon. Posting back so that if anyone else has similar issues, they will know the cause. As gfjardim said, it is being looked into, and as there isn't a timeout on curl in the plugin, it was causing my webgui to hang. Problem is, there was no way to check if it was back up, and I found myself restarting my server after each failed attempt. If you think this may be your problem, then a simple way to check is to SSH into your box and run a docker pull, for example:

     docker pull needo/mariadb

     If you're having the same problem I had, then after about a minute you'll get an error message along the lines of:

     Get https://index.docker.io/v1/repositories/... read tcp 162.242.195.84:443: i/o timeout

     Once the problem has disappeared and you can pull successfully, you're good to go with using the webgui again. Thanks to gfjardim for telling me the issue, as I don't think I'd have figured it out by myself!
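     If you'd rather not keep re-running that by hand, something like this little loop would poll until the registry responds again (a sketch, using the same example image as above):

     #!/bin/bash
     # Retry the pull every 5 minutes until the registry is reachable
     while ! docker pull needo/mariadb; do
         echo "Registry still unreachable, retrying in 5 minutes..."
         sleep 300
     done
     echo "Registry is back - safe to use the webgui again"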
  19. I spotted this deal on eBay. Bought one for myself, but there are still six left. Thought the community might find it useful. For the record, I have no relationship with, or personal interest in, the seller.
  20. I'm getting timeouts when trying to access the registry. Gfjardim has confirmed it's a problem. Maybe it's geographical? I'm in the UK for what it's worth.
  21. Thanks to everyone who contributed to the guides posted here; all very useful.
  22. Think I've got it. If I SSH in and run

     docker pull needo/mariadb

     and it hangs and nothing happens, the problem is still there. Figure that when that works, I'm good to go again.
  23. In "vpn help", replying to another poster who wrote:

     "Yes, I currently am running a pfSense VM and then run an ethernet cable to my E3000 and have that act as the wifi. The VM is not running "perfectly" yet; I am planning on posting more on this once I have done other tests, but that will probably be a few weeks. The reason I went to pfSense was that my E3000 router was not powerful enough to give me my full download speed when using a VPN. The E3000 would only give me around 7 Mbps download speed when using the VPN, and I get 4x that speed. I then moved to pfSense so that I could get my full download speed when sending data through the VPN. I would guess that you will run into the same limitations, unless you own a very high-end (commercial grade) router or your download speed is lower than that threshold."

     I'll look forward to that write-up. Got a dual Intel NIC waiting to go in my server, but a few more pressing issues to sort out at the moment.
  24. In "vpn help":

     Not to digress from the original subject matter, but I'm keen on getting pfSense up and running as a VM on my Unraid box. I'd quite like to use it as a VPN gateway and keep my current router/modem combo as is. Is that possible?