Pascal51882 Posted November 16, 2019

Hello,

I'm currently testing the free version of Unraid 6.7.2. I really like the idea of Unraid compared with my ESXi host and its 5 VMs (NAS, Pi-hole, home automation, Plex, ...). It would suit my needs much better, and I don't need to build a RAID because the array with a parity disk is enough for me. I need to migrate some VMs because they have to be up and running quickly; later I want to recreate their services as Docker containers or plugins.

I installed Unraid in a VM on my ESXi host, passed my HDDs through raw via a SATA/SAS controller, and created an array. Now I want to copy my ESXi Pi-hole VM (.vmdk, Debian 9) to Unraid and run it there. Copying over NFS was easy, but I just can't get it running at all. I can't even create a new VM from an ISO file. At the end I always get the error:

VM creation error: invalid argument: could not find capabilities for arch=x86_64 domaintype=kvm

I read that some 6.x.x versions are not that stable. Could that be the problem here, or is it already fixed? The posts I found on Reddit are over a year old. Please tell me if you need any further information from me. Thanks a lot for your help.
trurl Posted November 16, 2019

Let me see if I understand. You are running Unraid itself as a VM in ESXi, and you are trying to set up a VM with that virtual Unraid as the host? So a VM within a VM? Running Unraid itself as a VM is not supported, but we do have a subforum for people who are virtualizing Unraid, and I can move your post there if you want. I don't know if anybody there can help you or not. You will probably get more help if you simplify and run Unraid bare metal.
Pascal51882 Posted November 16, 2019

Yes, it's in a VM. I have now tried it bare metal, but it won't boot at all on my server, neither with the newest version nor the oldest one I can choose in the USB tool. I tried 3 different USB sticks. On my normal PC it booted, although the BIOS settings are the same and both are AMD systems. Only the BIOS dates differ: my server's is from late 2018, my main PC's from this year.

Server: Ryzen 2200G, 16 GB RAM, Asus TUF Gaming B450, SVM on
PC: Ryzen 3600, 16 GB RAM, Asus ROG Strix X470, SVM on/off makes no difference

Is there anything I can try?
bastl Posted November 16, 2019

@Pascal51882 Nested virtualisation is a pain in the butt. Mixing different hypervisors, each with its own tweaks and patches needed to make it run properly; good luck with that. Each layer you add will not only reduce performance, it will also show weird behaviour if you don't know what you're doing. There is no easy way to configure this.
Pascal51882 Posted November 16, 2019

Quoting bastl: "Nested virtualisation is a pain in the butt. [...]"

Did you read my last post? I'm trying Unraid in a test VM first, before I roll it out on my physical server. I looked at the VM settings again and found the problem quickly: hardware virtualization was disabled for the Unraid VM (in ESXi, the "Expose hardware assisted virtualization to the guest OS" CPU option). Now I need to sort out the physical boot problem.
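For anyone who hits the same "could not find capabilities" error: after enabling that ESXi option, you can verify from the Unraid console that KVM is actually usable inside the VM. These are standard Linux checks, nothing Unraid-specific, just a quick sketch:

    # Count the CPU virtualisation flags visible to the guest (svm = AMD, vmx = Intel).
    # The result should be greater than 0, otherwise nested VMs cannot use KVM.
    grep -cE 'svm|vmx' /proc/cpuinfo

    # The KVM device node must exist for domaintype=kvm to work.
    ls -l /dev/kvm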
bastl Posted November 16, 2019

@Pascal51882 Sorry, as I was posting, your comment hadn't shown up for me yet. 😂 For your boot problem: did you try booting as non-UEFI? Usually you see two options to boot from your USB stick; select the one not labeled UEFI.
Pascal51882 Posted November 16, 2019

No problem, haha. Yes, I normally boot non-UEFI, and I tried UEFI as well.

Edit: I could try a BIOS update, but that always carries some risk.
trurl Posted November 16, 2019

Have you tried a USB2 port? USB2 is often more reliable for the Unraid boot flash. What exactly is the result when you try to boot?
Pascal51882 Posted November 16, 2019

Yes, I tried the external motherboard ports and my case ports, both 3.0/3.1 and 2.0.

With the Unraid stick: start -> boot screen -> HBA controller -> black screen -> back to BIOS
With my ESXi stick: start -> boot screen -> HBA controller -> brief black screen -> ESXi boots
trurl Posted November 16, 2019

Quoting Pascal51882: "start -> boot screen -> HBA controller -> black screen -> back to BIOS"

This suggests it isn't finding anything to boot. Make sure the USB stick is the only bootable device.
Pascal51882 Posted November 16, 2019

Yes, I removed all other USB devices.
trurl Posted November 16, 2019

Are you sure the flash drive has only one partition? Maybe try preparing the Unraid flash again, or use the manual method: https://wiki.unraid.net/UnRAID_6/Getting_Started#Manual_Method_.28Legacy.29
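If the USB Creator keeps failing, the manual method in that link boils down to roughly this on a Linux machine (I'm going from memory here, and the device name and zip filename are only examples, so double-check yours):

    lsblk                                       # identify the stick, e.g. /dev/sdX
    mkfs.vfat -F 32 -n UNRAID /dev/sdX1         # FAT32; the volume label must be UNRAID
    mkdir -p /mnt/usb && mount /dev/sdX1 /mnt/usb
    unzip unRAIDServer-6.7.2-x86_64.zip -d /mnt/usb
    cd /mnt/usb && bash ./make_bootable_linux   # script shipped in the zip; makes the stick bootable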
Pascal51882 Posted November 17, 2019

I don't know how, but recreating the USB stick with a freshly downloaded USB tool/ZIP file worked. Now I can boot both legacy and UEFI.

The next thing I need to do is import 2 existing VMs, which are .vmdks. Is this possible in general?
itimpi Posted November 17, 2019

Quoting Pascal51882: "The next thing I need to do is import 2 existing VMs, which are .vmdks. Is this possible in general?"

It is possible to use .vmdk vdisks directly in KVM, but since this is not supported by the Unraid GUI, you have to manually enter the full path to such vdisk files to use them.
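One caveat: an ESXi vdisk is usually a small descriptor .vmdk plus a large ...-flat.vmdk extent, and the path you enter should point at the descriptor, with the flat file kept next to it. You can check that qemu can read the file with something like this (the path is just an example):

    # Should report "file format: vmdk" and a sensible virtual size
    qemu-img info /mnt/user/domains/pihole/pihole.vmdk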
Pascal51882 Posted November 17, 2019

As a test I'm trying to get a small VM running with Debian 9 and Pi-hole, but it doesn't start. I copied the whole folder from my ESXi into my array, then created the VM in Unraid with the Debian template and selected my virtual disk with these paths: [screenshot] Those are all the .vmdk files I can find. I'm stuck at the UEFI Interactive Shell; with "exit" I can select a disk, but I end up back at the same point, and I can't find any boot files to choose. Do I need to change something else?
itimpi Posted November 17, 2019

Is the VM in question set up for UEFI boot? If not, then when creating the VM make sure you select the SeaBIOS option.
Pascal51882 Posted November 18, 2019

Quoting itimpi: "Is the VM in question set up for UEFI boot? [...]"

I tried both, but it's the same problem: with SeaBIOS it doesn't find a boot device. However, I got my Windows Server 2019 VM running with your guide from another topic:

- create a new VM
- set the BIOS to SeaBIOS
- set the primary vdisk to SATA, manual mode, and point the path at the .vmdk file
- set the network to a bridged connection (br0 in my case)
- create the VM

One more question: how do I get the network working? Is there a driver file I can mount as an ISO?
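PS, for anyone else importing .vmdks: converting the vdisk to Unraid's native raw format should also work and lets the GUI manage it normally. Untested on my side, and the paths are just examples:

    # -p shows progress; source format vmdk, output format raw
    qemu-img convert -p -f vmdk -O raw /mnt/user/domains/win2019/win2019.vmdk /mnt/user/domains/win2019/vdisk1.img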
itimpi Posted November 18, 2019

Quoting Pascal51882: "How do I get the network working? Is there a driver file I can mount as an ISO?"

There should be a virtio ISO image file available which has the drivers to support virtio networking (the most efficient option). Alternatively, you can change the network card type in the VM definition to one that Windows already has drivers for. You could also use the virtio image to install the virtio disk drivers, if you want to use those instead of SATA.
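If your Unraid version doesn't offer the download itself, the ISO normally comes from the upstream virtio-win project. Something like this should fetch the stable build (that URL is the commonly used alias and may change over time):

    wget https://fedorapeople.org/groups/virt/virtio-win/direct-downloads/stable-virtio/virtio-win.iso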
Pascal51882 Posted November 18, 2019

Quoting itimpi: "There should be a virtio ISO image file available [...]"

Where can I find those drivers? Are they already in Unraid?
itimpi Posted November 18, 2019

Quoting Pascal51882: "Where can I find those drivers? Are they already in Unraid?"

When you edit a VM configuration, there is a standard setting for the virtio ISO file.
Pascal51882 Posted November 18, 2019

Thanks for the hint; I found it after some research. You have to download the driver ISO through the VM Manager, and then it shows up.

Are there any settings I can try for my Linux VM to get it up and running?