planetwilson

Members
  • Posts

    242
  • Joined

  • Last visited

Everything posted by planetwilson

  1. I created a new VM recently using a common Windows Server backing file I have and I got EFI boot failures like it couldn't see the boot device. Other VMs using the backing file were okay. I realised this new one was using version 2.7 of the i440fx chipset emulation and the others were all on 2.3 having been created a while back. After I downgraded to 2.3 it booted fine. Anyone else had issues like that?
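In case it helps anyone trying the same fix: the machine type can be pinned in the VM's XML via virsh edit. A sketch of the relevant fragment (this is only the <os> element of the domain definition, not a complete config):

```xml
<os>
  <!-- pin the chipset emulation to i440fx 2.3 instead of 2.7 -->
  <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
</os>
```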
  2. Didn't go with it in the end. Good price but no good if not plug and play. Shame as Sandisk seem to have abandoned them. They should just open source the drivers. On the hunt for a decent priced PCIe / NVMe drive now...
  3. I have bought a Fusion-io 1.6TB card from eBay assuming it would just work, but it turns out there might be driver issues. Aaaanyway... I thought I would look into perhaps building a custom kernel, but I am not seeing Slackware on this list of driver packs supplied by Sandisk (attached). Can I use any of these or do I need to just sell it on?
  4. Is this not where you need to dump the card bios and refer to it in the VM xml? I only have a single card and have to do that in order to not have a blank screen. Sent from my iPad using Tapatalk
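For reference, the dumped BIOS gets referenced with a rom element inside the passed-through hostdev. A rough sketch (the PCI address and file path are just examples, yours will differ):

```xml
<hostdev mode='subsystem' type='pci' managed='yes'>
  <source>
    <!-- the card's address from lspci, example only -->
    <address domain='0x0000' bus='0x01' slot='0x00' function='0x0'/>
  </source>
  <!-- point the VM at the dumped video BIOS; path is an example -->
  <rom file='/mnt/user/vbios/gtx950.rom'/>
</hostdev>
```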
  5. Can I ask why btrfs is a bad idea for running VMs off? I am wondering if some of my VM issues are down to the images being on my cache drive, which is btrfs?...
  6. How can I set up my VMs so that I can have them all go to sleep at say 10pm at night and wake up at 8am? My disks could spin down then. I don't really want to shut down/start up again; a sleep would be fine. I am guessing some sort of libvirt command line on a schedule? Actually I wonder if the hard part is the scheduling; I'm not familiar enough with all the flavours of Unix to know how to schedule scripts at specific times in unRAID...
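In case it helps, here's a rough sketch of how I imagine it could work with cron plus virsh (the script name and paths are made up; virsh suspend pauses the guest in RAM, which should be enough to let the disks spin down):

```shell
#!/bin/bash
# vm-sleep.sh - pause or resume all libvirt guests (illustrative helper)
case "$1" in
  suspend)
    for vm in $(virsh list --name); do
      virsh suspend "$vm"          # pause the guest in RAM
    done ;;
  resume)
    for vm in $(virsh list --name --state-paused); do
      virsh resume "$vm"           # un-pause it again
    done ;;
esac

# crontab entries (crontab -e as root), times as in the post:
# 0 22 * * * /boot/custom/vm-sleep.sh suspend
# 0 8  * * * /boot/custom/vm-sleep.sh resume
```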
  7. So I finally embarked on a build, got cold feet, abandoned it, sold/returned the bits, got regrets and embarked again. The original build I was going to do is similar to what I ended up with, but this time around I was a little more patient and found second hand / almost new parts on eBay for the most part.
     PSU: EVGA G2 850W PSU from eBay (about £20 cheaper than the less powerful PSU I bought first time). Seems like a really high quality modular PSU.
     Motherboard: ASRock Extreme 6 X99 motherboard from eBay (paid 60 quid less this time round than buying the ASRock X99 Taichi new first time).
     RAM: 64GB Crucial ECC DDR4 from eBay (paid less this time round than first time).
     Case: Phanteks Evolv ATX case, "b grade"/customer return from Overclockers (half price but basically new). Previously I had bought the tempered glass version for twice the price. It was beautiful but slightly impractical, and the hinged doors on the non-glass version are a lot more practical to work with.
     Cooling: Noctua NH-D15S (the only thing brand new in this build, as it was not much more than second hand), plus the Phanteks case comes with 3 decent fans. *Very* quiet cooler. I had gone for an H75 AIO cooler, but given my server will sit under a desk I might as well go for the huge, quiet, but slightly ugly Noctua.
     Storage: 3x 3TB WD Red drives from previous machine (one is parity).
     Cache: 500GB Crucial SSD cache/VM drive from previous machine.
     Graphics: Asus GTX 950 from previous machine.
     USB card: added an additional Inateck PCI-E -> USB card for passthrough to OSX.
     The only thing I have not been able to get a good deal on this time round was the processor. But I lucked out previously, managing to get an E5-2683v3 retail for £250. I can't get anything anywhere near as good as that this time, so have gone for a "holding pattern" in the form of an E5-2630L v3 ES for 60 quid (update... currently awaiting an ES version of the E5-2695 to arrive).
     Next steps: upgrade the processor when it arrives; increase my storage from the piddly 6TB I currently have; increase my fast SSD-based VM storage, or maybe even try out NVMe passthrough since this motherboard has an M.2 slot.
  8. +1 here as well so we can make v3 Xeons boost on all cores max
  9. Ran a qemu-img check on the file and got this (lots of lines like these):-
     ERROR OFLAG_COPIED data cluster: l2_entry=800000008ac00000 refcount=0
     ERROR OFLAG_COPIED data cluster: l2_entry=800000008ad80000 refcount=0
     ERROR OFLAG_COPIED data cluster: l2_entry=800000008ad90000 refcount=0
     ERROR OFLAG_COPIED data cluster: l2_entry=800000008ada0000 refcount=0
     2911 errors were found on the image. Data may be corrupted, or further writes to the image may corrupt it.
     36022/1638400 = 2.20% allocated, 15.18% fragmented, 0.00% compressed clusters
     Image end offset: 2344484864
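For anyone else hitting this: qemu-img can attempt to repair those refcount errors itself. Back the image up first, as repair can make a badly damaged image worse (the filename here is illustrative):

```shell
# take a copy first - repair on a badly damaged image is risky
cp vm.qcow2 vm.qcow2.bak

# -r leaks fixes leaked clusters only; -r all also repairs refcount errors
qemu-img check -r all vm.qcow2

# re-check afterwards to see what is left
qemu-img check vm.qcow2
```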
  10. I have a bunch of Windows Server VMs which are all based off a base VM backing file. That is, I created a base Windows install configured how I like, then sysprepped it, shut it down and made it the base or backing file for all subsequent VMs (using details in this blog post: http://www.greenhills.co.uk/2013/03/24/cloning-vms-with-kvm.html). That way I only use the space for the basic Windows install once. I have recently started to have big issues with this though. It has worked fine for months, but recently my VMs will randomly fail to start with an error:-
      Booting from Hard Disk...
      Boot failed: not a bootable disk
      No bootable device.
      Once it does this it is broken completely; I have to restore or recreate it. I've tried partition repair, Windows repair etc. but it just doesn't work. If I restore my VM file from backup then it works fine again (for a while). Anyone else do this and noticed an issue?
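For context, the overlays were created along these lines (paths are examples, not my real ones). Worth noting that newer qemu-img versions want the backing format stated explicitly with -F; older ones accept -o backing_fmt= instead:

```shell
# create a qcow2 overlay on top of the sysprepped base image
qemu-img create -f qcow2 \
    -b /mnt/cache/VM/win-base.qcow2 -F qcow2 \
    /mnt/cache/VM/new-vm.qcow2

# confirm the backing chain is intact
qemu-img info --backing-chain /mnt/cache/VM/new-vm.qcow2
```

One obvious gotcha with this setup: if the base file is ever written to or moved, every overlay that points at it breaks at once.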
  11. Yeah, I bailed in the end. Bought all the bits, built it, and realised I'd built a massive server with more space than I'd ever use, louder than my TS140, with a lot of memory, basically just for the ability to have more VMs than I have now, most of which would normally be switched off... Sent it all back
  12. I think I might have got it all confused with the cache drive. I was trying to move some of the VM images to the array as I was running out of space, but then I think I created a share without specifying no cache... Anyway, I moved them all off to an external drive, re-created the shares and copied them back. I'll shout if I have any more issues. My log seems to be full of DHCP stuff - I am running dnsmasq on my domain, and unRAID is acting as a DHCP server (my router won't let me specify custom external DNS servers but will let me switch DHCP off/on). I need to see if I can stop it from spamming the log. Thanks!
  13. As noted in the OSX thread I am having issues spinning up VMs. When I look at the main dashboard my cache SSD has 80GB free on it. Yet when I run "df" I see this:-
      Filesystem      1K-blocks       Used  Available Use% Mounted on
      rootfs           16381948     425680   15956268   3% /
      tmpfs            16456420        280   16456140   1% /run
      devtmpfs         16381964          0   16381964   0% /dev
      cgroup_root      16456420          0   16456420   0% /sys/fs/cgroup
      tmpfs              131072     131072          0 100% /var/log
      /dev/sda1        30835984     322608   30513376   2% /boot
      /dev/md1       2928835740 2116488000  812347740  73% /mnt/disk1
      /dev/md2       2928835740 1479892052 1448943688  51% /mnt/disk2
      /dev/sdf1       488386552  396351360          0 100% /mnt/cache
      shfs           5857671480 3596380052 2261291428  62% /mnt/user0
      shfs           6346058032 3992731412 2261291428  64% /mnt/user
      /dev/loop0       20971520    5458104   13940376  29% /var/lib/docker
      shm                 65536          0      65536   0% /var/lib/docker/containers/ba364dde68d8e6749adf4f8dc94b8c735aed6ffc62d32e1fe4538d58ce541a1f/shm
      shm                 65536          0      65536   0% /var/lib/docker/containers/77f344e4d5a81a5dc31c127b6f3ca56df668a5b5ae1d1f62d2358a8923c87767/shm
      shm                 65536          0      65536   0% /var/lib/docker/containers/27acdf1b59f3982628809427fc0b9753fd7cbfc640f90c75aa38a123d8a67e30/shm
      shm                 65536          0      65536   0% /var/lib/docker/containers/0b2f313f7598ae2c2ca44830dd723925f8447b0deef4a2560efce87ab814972d/shm
      shm                 65536          0      65536   0% /var/lib/docker/containers/e6e05b33116c672f179607085a187b19a6e3e68ec1f44d4250ba5ee556bff347/shm
      shm                 65536          0      65536   0% /var/lib/docker/containers/d9a73a66bbd72a751ceb8b237e157a5260cf4dadc647656d716a80ed8a179217/shm
      /dev/loop1        1048576      18248     924856   2% /etc/libvirt
      which looks like there is no room in /var/log and also no room on /dev/sdf1, which appears to be the cache drive. So a little confused... anyone got any ideas?
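A couple of read-only checks that might narrow it down (nothing here modifies anything; the /mnt/cache path is the unRAID one from the df output above):

```shell
# what is filling the 128MB log tmpfs?
du -sh /var/log/* 2>/dev/null | sort -h | tail -5

# compare the dashboard figure with what df reports for the cache
df -h /var/log /mnt/cache
```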
  14. Most common cause seems to be a disk filling up, I have a share set to cache disk only (my SSD) called VM which is where I store all the disk images. There is plenty of space on there. I was trying to move it across to the user share on the array the other day but couldn't simply move it like that. I wonder if I have broken something somewhere...
  15. Yep, reboot has sorted it. Very strange though, nothing of any note in any of the libvirt logs either. Spoke too soon, started doing it again....damn.
  16. Okay scratch that - all my VMs are doing it. Not OSX specific...
  17. So my OSX VM that I created last week and has been working fine is suddenly behaving very strangely. I have a GTX 950 passed through, and now during boot it will pause, and refreshing the list of VMs in unRAID shows it as paused or suspended. Hit play again and the progress bar goes a little further before it shows suspended again. I managed to eventually get it started, but it runs for a few seconds at most before becoming suspended again. Strangely, the monitor still shows the desktop at this point, but frozen in time.
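One thing that might shed light when a guest keeps flipping to paused: libvirt records why it paused. Something along these lines (the domain name here is just an example):

```shell
# show the domain state together with the reason libvirt recorded;
# an I/O error (e.g. the underlying disk filling up) shows as "paused (I/O error)"
virsh domstate --reason Sierra

# the hypervisor log for the domain may have more detail
tail -n 50 /var/log/libvirt/qemu/Sierra.log
```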
  18. Aah, I didn't know you could expand them through the UI. Good to know. Expanding the partition internally I did struggle with at first. I booted into recovery thinking perhaps it was like Windows and you couldn't expand the boot partition but that had issues too. In the end I think it was the poor UX of Disk Utility. I had to click on the "free space" then click remove! then I could expand the partition into the actual free space. It was almost like Disk Utility was presenting free space as a special partition called "free space". Very odd.
  19. One thing that tripped me up for a while and is worth noting: when I went back to expand my drive later on using:-
      qemu-img resize Sierra.img +30G
      I found that it wouldn't boot afterwards. Only when I went back to check the output of the above command did I see it had a warning about not detecting the image format correctly:
      WARNING: Image format was not specified for 'x86-64.img' and probing guessed raw. Automatically detecting the format is dangerous for raw images, write operations on block 0 will be restricted. Specify the 'raw' format explicitly to remove the restrictions.
      i.e. it had made the image read only. I ran the command again, expanding it by another 1GB but this time also specifying the format with -f raw, and after that it all worked fine. I could then use Disk Utility inside OSX to expand into the newly available space...
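So for raw images the safe form is to name the format explicitly, e.g.:

```shell
# grow a raw image by 30GB, naming the format so qemu-img
# doesn't probe (and write-protect) it
qemu-img resize -f raw Sierra.img +30G

# check the new virtual size afterwards
qemu-img info Sierra.img
```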
  20. If you have the NVMe drive attached through a PCI card, would that make much of a difference? or do the PCI adapter cards basically pass through the NVMe card?
  21. Brilliant video as usual. I am actually surprised that the non-passthrough speeds are as high as they are. I am assuming that there is no way for this to work with OSX currently? It certainly wouldn't put me off grabbing an NVMe drive anyway with the first set of speeds, to be honest.
  22. Okay - got it working by going into recovery mode and doing "csrutil disable" in Terminal in order to turn off SIP. Worked after that
  23. I tried the new way of doing the VM using Fusion but when it comes to installing your version of Clover R3974 that you attached to the notes of the video it tells me it is incompatible with this version of OSX. I carry on anyway and the install fails. I notice someone else in the video comments has the same issue. I am running OSX 10.12.3
  24. Great stuff, sounds straightforward.