unRAID Server Release 6.2.0-beta18 Available



 

Is there any reason why, when creating a vdisk for a VM via the GUI, you are limited to raw and qcow2?  Couldn't any of the other KVM-supported types be listed?  For instance, I regularly use .vdi files, which seem to provide a good compromise between space and performance and allow me to easily interchange the vdisk files with VirtualBox on my desktop PC.

There is also LVM :-)

There are several other KVM-supported formats.  I just asked why the list was limited to the two I quoted.
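In the meantime, qemu-img (which ships alongside KVM/QEMU) can convert between formats outside the GUI. A minimal sketch, assuming a raw vdisk under the usual unRAID domains share (the VM name and paths are illustrative):

# Convert a raw unRAID vdisk to VDI for use in VirtualBox on the desktop
qemu-img convert -p -f raw -O vdi /mnt/user/domains/MyVM/vdisk1.img /mnt/user/domains/MyVM/vdisk1.vdi

# ...and convert it back to raw afterwards
qemu-img convert -p -f vdi -O raw /mnt/user/domains/MyVM/vdisk1.vdi /mnt/user/domains/MyVM/vdisk1.img

The -p flag just prints progress; -f and -O name the source and output formats.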

My second issue is the following:

When installing Win10 as a VM on unRAID 6.2.0 beta, the installation cannot update to the newest build (1511) without crashing with a BSOD afterwards.

The BSOD shows various driver-related issues. I tried different driver versions under the VM tab, including the latest version, "virtio-win-0.1.113".

 

I tried both IDE and AHCI, with the new BIOS as well as the previous one.

 

Sometimes the VM shuts down randomly and shows that the process was terminated.

 

I've spent weeks on this previously with an HDD array, still with the same result.

 

Does anyone know the answer, or has anyone experienced similar problems?

How many CPUs do you have assigned to the VM?  There have been reports that you often need to reduce it to 1 while doing the update, and can then set it back to a larger number afterwards.
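If you would rather not redo the template, the count can also be changed from the console with virsh. A minimal sketch, assuming the VM is named "Windows10" (the name is illustrative, and note the unRAID GUI may rewrite the XML if you later edit the VM there):

# Persistently drop the VM to a single vCPU before running the 1511 update
virsh setvcpus Windows10 1 --config

# ...run the update inside Windows, then restore the original count, e.g. 6
virsh setvcpus Windows10 6 --config

The --config flag applies the change to the stored definition, taking effect on the next boot of the VM.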

 

I will try that, itimpi :-)

 

 

I have the 5960X; it shows up as 16 cores (8 physical cores plus 8 HT threads). I've used between 4 and 6 cores.


I had a similar issue but have yet to see if 1 core resolves it (same CPU).

 

One issue I am having is that I am unable to get my second VM to pick up its GPU properly.

My main VM has my 780 attached; my second has been set to have a 750 Ti attached to it, but it has no display output at all.

 

I am using VNC to do the install and to see if installing drivers helps, but these were not needed for my 780 install.

 

Regards,

Jamie

 

 

 

Edit:

Just a quick edit: these cards had been working under SeaBIOS for 4 months until I did the upgrade, so I'm unsure why this card is now not working. I have tried recreating the VM a number of times and have tried different outputs, but no luck yet.

 

Edit 2:

As another update, the device is present in Windows: it is picked up in Device Manager and the Nvidia drivers install fine (Windows itself was unable to install any drivers for the card). But it still has no video out, even via cables I know work.
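For what it's worth, when a passed-through card is visible to the guest but gives no output, one standard check is whether it shares an IOMMU group with other devices on the host. A minimal sketch using the standard sysfs layout (not unRAID-specific):

#!/bin/bash
# List every IOMMU group and the PCI devices it contains; the 750 Ti
# ideally sits in a group by itself (or only with its own audio function)
for g in /sys/kernel/iommu_groups/*; do
  echo "IOMMU group ${g##*/}:"
  for d in "$g"/devices/*; do
    echo -e "\t$(lspci -nns "${d##*/}")"
  done
done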


Hi bigjme :-)

 

I have just tested and confirmed that it worked with only one core assigned to the VM. It updated to 1511 without any issues.


All I can say is WOW! Thank you for all your hard work in bringing all the new features and improvements. I am really excited to try this out, and disappointed that I have to wait until it's out of beta... my wife would kill me if I nuked our media server/PVR.  :)


Before I attempt to convert my two unassigned SSD/ZFS pool drives to a cache pool: are we allowed to set and stick with the RAID mode?  I want to combine the two 240 GB drives into a single unprotected 480 GB pool.

 

Thanks

Myk

 

Yes, you can post-configure your cache pool as raid0.

 

After assigning both SSDs to your cache pool and starting the array, you can click on the first Cache disk and Balance with the following options for raid0:

-dconvert=raid0 -mconvert=raid0
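For reference, the equivalent from the console, assuming the pool is mounted at the usual /mnt/cache:

# Convert both data and metadata profiles to raid0
# (no redundancy: losing either SSD loses the whole pool)
btrfs balance start -dconvert=raid0 -mconvert=raid0 /mnt/cache

# Confirm the new profiles and usable space
btrfs filesystem df /mnt/cache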

 

Oh, here it is :-)

 

Just have to figure out how to actually do it


So I am still unable to get my 750 Ti to pass through properly at all. I have even re-seated it on the motherboard.

 

Is there some limitation allowing only 1 VM with GPU passthrough at a time? If I post my diagnostics, will that help?

 

Edit:

So I just set up a new Windows 7 VM under SeaBIOS with my 750 Ti and it boots instantly with video. My 750 Ti has a BIOS toggle, so I'm going to see if that makes any difference with OVMF.

 

Edit:

I have switched the toggle and tried again, and it seems that the 750 Ti does not work at all under OVMF. Windows 10 under SeaBIOS works fine, however (this is the basic Windows 10 template but set to SeaBIOS).

 

I have no idea why this is happening. jonp, it would be great if you could help me on this one.
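One thing worth ruling out here: OVMF needs a video BIOS with UEFI (GOP) support, while SeaBIOS does not, which would match these symptoms exactly. A minimal sketch for checking, using Alex Williamson's rom-parser tool (https://github.com/awilliam/rom-parser); the PCI address 02:00.0 is illustrative, so substitute the 750 Ti's address from lspci:

# Dump the card's video ROM via sysfs (run as root, with the card idle)
cd /sys/bus/pci/devices/0000:02:00.0
echo 1 > rom
cat rom > /tmp/vbios.rom
echo 0 > rom

# rom-parser lists the code images in the ROM; a "type 3 (EFI)" entry
# means the card can initialise under OVMF, otherwise only SeaBIOS will work
/tmp/rom-parser /tmp/vbios.rom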


Regarding the earlier question about limiting vdisk creation to raw and qcow2: other vdisk types may be added in the future.  The reality is that just because KVM "supports" something doesn't mean it will necessarily work in all configurations.  We certainly don't want to get dragged down the path of supporting various vdisk types, with their performance and other issues, when raw and qcow2 work well enough for most.

 

Managing LVM for virtual machines and snapshots would require a lot more custom work, for not all that much benefit.  There are no plans at this time to implement LVM support in unRAID.  If someone wants to make a plugin that does it, we will gladly consider it for incorporation in the future, but putting time and effort into supporting that feature right now is just not going to happen.  The same can be said for NFS v4 support.


A quick update guys!  We are making good progress on patching the bugs thanks to all your feedback.  The NFS bug has already been figured out along with many others.  We hope to have another beta release out very soon with these fixes and more!


Please explain the logic of expanding the number of drives allowed in Trial to 6. I mean, aren't they restricted to 250 GB per drive already? Who would want dual parity on four 250 GB drives beyond verifying that it "works"? Is that the only reason? And then if they actually pay $29 they get, ta-da, 6 drives?

 

Maybe the 250 GB limitation was removed? Just thinking I am missing the point...  Seems like Pro users made out like bandits and the rest of us, not so much?


There is no 250 GB limit to trial anymore.  The purpose of expanding the trial to include more devices was so folks could test out a wider array configuration before committing to a purchase.  It also made sense to match the number of devices allowed for trial with that of Basic.

 

Pro users definitely got a nice uplift to their license capabilities.  Perhaps upgrading to Pro is in your future?


To expand on Jon's reply: there was never a 250 GB limit imposed on Trial.  We talked about it, and actually implemented it, but decided against releasing it.


I almost regret buying Basic now.  I went to check out the upgrade yesterday and realized it costs more to upgrade than if I had just bought Pro initially.  Doh  :-\


If it's any consolation, I upgraded two Plus licenses to Pro just before they increased the number of Plus devices a while back. I probably could have lived with Plus fine if I had waited. I never even mentioned it at the time; I have always felt like I got my money's worth from unRAID.

This is the second time this has happened: I have a file in one share and am trying to move it to another. During the transfer the speed will drop to 0, and the OpenELEC VM that's running will freeze.

 

When I then try to stop the array, it looks like it shuts down all the Dockers fine, but the webUI never loads back up; I think it gets stuck on shutting down the frozen VM. Last time, I tried to load the Tower/VMs page and that made the webUI stop loading.

 

I don't know what's happening. Maybe Samba is crashing, since both times OpenELEC was also watching video files using SMB. On Windows I can browse the SMB shares and see the files, but I can't get them to open.

 

Also, when I SSH in and type reboot, it won't reboot.

 

On 6.1 I did this many times and never had a problem.

 

Edit:

Transferring the file first to my computer and then to the share I want works. I tested again to see if I can transfer directly between the shares; this time the OpenELEC VM was not playing any files, but the speed still dropped to 0.

 

Trying to go to the VMs page does not load it and causes the webUI to never load again, and the SMB shares stop working.
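One way to see whether Samba itself is what has hung: check it from an SSH session while the transfer is stalled. A minimal sketch (the syslog path is the usual unRAID location):

# Show current Samba sessions, share connections and file locks;
# sessions stuck here while speeds sit at 0 point at smbd rather than the array
smbstatus

# Check the tail of the system log for smbd or filesystem errors around the stall
tail -n 50 /var/log/syslog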


Regarding the 750 Ti passthrough problem: why not open up a separate support topic in the KVM Hypervisor board?


And we have gotten our money's worth from all your support and help; you have given back hugely to the community!


Major issue when upgrading through the GUI: upon reboot after the install I'm getting a default install, with eth0 now on DHCP, the web server not running, and, more importantly, all 10 TB of data, Dockers, VMs etc. not showing. Any ideas? Being a dumbass, I didn't back up my configuration files, assuming that since it's beta there wouldn't be a showstopper like this.  :-\

 

Anyone have any ideas?

 

 

Edit: that was scary. With the USB taken off the ESXi header, the data was as expected when plugged into a laptop. The issue still remains that the server won't load as expected.
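For anyone else about to upgrade: the entire configuration lives in the config folder on the flash drive, so a copy taken beforehand makes this sort of thing recoverable. A minimal sketch (the destination path is illustrative):

# Back up the unRAID flash configuration before upgrading
mkdir -p /mnt/user/backups/flash
cp -r /boot/config /mnt/user/backups/flash/config-$(date +%Y%m%d)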

 


A note to anyone posting issues with 6.2.

 

If you do not include your diagnostics, your post will probably go unanswered and your issue unresolved.  Describing an issue you are having in a reply on this thread is not sufficient to report a bug.  We need a copy of your system diagnostics after the bug occurred.

 

The diagnostics zip file contains critical information we need to help debug any issues discovered.
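The zip can be generated from Tools > Diagnostics in the webGui, or, if the webGui is unreachable, from a console/SSH session (assuming the diagnostics command present in recent 6.x builds):

# Collect system logs and settings into a zip on the flash drive
diagnostics

# The file lands in the logs folder of the flash device
ls /boot/logs/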

This topic is now closed to further replies.