unRAID Server Release 6.2.0-beta21 Available



Will this fix the issue of unRAID not appearing in the Windows 10 network view? Currently I have to use the IP address to browse unRAID from my Windows 10 machine.

 

Odd, it shows up fine for me on 6.1.9.

 

This is often a router issue. If you are using a static IP address, in many cases you will have to program your router so that it knows that IP address is associated with the name of your server.

Link to comment

"iommu=nopt"didn't work out...

 

instead of a full crash I now get a freeze on both vm and no response from server.

 

I found out that one of my *VM is faulty and the other is running pretty good (but the faulty one crash the good one)

 

I also tried to reinstall graphics driver on the faulty one but windows stop working everytime on the installation.

 

Unraid shows no errors on the hard drive and it's pretty much a factory new (slow but new)

 

here's my syslinux in case:

 

default /syslinux/menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append iommu=nopt initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest

Link to comment

"iommu=nopt"didn't work out...

 

instead of a full crash I now get a freeze on both vm and no response from server.

 

I found out that one of my *VM is faulty and the other is running pretty good (but the faulty one crash the good one)

 

I also tried to reinstall graphics driver on the faulty one but windows stop working everytime on the installation.

 

Unraid shows no errors on the hard drive and it's pretty much a factory new (slow but new)

 

here's my syslinux in case:

 

default /syslinux/menu.c32
menu title Lime Technology, Inc.
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append iommu=nopt initrd=/bzroot
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest

 

Hmm, the plot thickens.  Let's try something else.  Similar to how you did iommu=nopt, I want you to add this to your syslinux.cfg (you can get rid of iommu=nopt).

 

vfio_iommu_type1.allow_unsafe_interrupts=1

 

Apply that change, reboot your system, and see if that helps.
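For reference, a rough sketch of what the relevant section of syslinux.cfg would look like with that change (based on the config posted above, with iommu=nopt removed and the new parameter added to the append line):

label unRAID OS
  menu default
  kernel /bzimage
  append vfio_iommu_type1.allow_unsafe_interrupts=1 initrd=/bzroot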

Link to comment

Will this fix the issue of unRAID not appearing in the Windows 10 network view? Currently I have to use the IP address to browse unRAID from my Windows 10 machine.

 

Odd, it shows up fine for me on 6.1.9.

 

This is often a router issue. If you are using a static IP address, in many cases you will have to program your router so that it knows that IP address is associated with the name of your server.

 

Interesting. I'll give that a shot. No issues with Windows 7 accessing unRAID though (same network, same router), so it's something about Windows 10.

Link to comment

Will this fix the issue of unRAID not appearing in the Windows 10 network view? Currently I have to use the IP address to browse unRAID from my Windows 10 machine.

 

Odd, it shows up fine for me on 6.1.9.

 

This is often a router issue. If you are using a static IP address, in many cases you will have to program your router so that it knows that IP address is associated with the name of your server.

 

Interesting. I'll give that a shot. No issues with Windows 7 accessing unRAID though (same network, same router), so it's something about Windows 10.

 

Are you talking about the shares or the GUI?  If it is the shares, make sure the Workgroup is properly set and matches in both spelling and capitalization on your Win 10 computer.  I would also install the Dynamix Local Master plugin and see what computer is serving as the local master.  Most folks have found that you really want your unRAID server to be the local master, as most of us have our servers running 24/7.  (In case you did not know, the local master performs the same function for the Samba network as a DNS server does for the Internet.)
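If you want to see which machine is currently winning the local master election, one rough check (assuming nmblookup is available on your box, which it normally is since unRAID ships Samba) is to run it from the unRAID console; WORKGROUP below is a placeholder for your actual workgroup name:

# find the local master browser for a specific workgroup
nmblookup -M WORKGROUP
# or list master browsers for all workgroups on the subnet
nmblookup -M -- -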

Link to comment

Hi team,

 

One thing I've noticed (not a major issue, but I've seen it for a while in the betas) is that certain elements of my VM configuration are not retained when I edit a VM. Each time I edit a VM the disk location reverts to none and I need to manually specify the vdisk location. Not a huge issue, but kind of a pain in the neck when you want to tweak something and forget to reset the disk location. This issue was not evident in the 6.1 releases.

Apologies if it has been brought up before; I would love to know the fix if it exists.

 

 

hamburgler-diagnostics-20160408-0849.zip

Link to comment

Hi team,

 

One thing I've noticed (not a major issue, but I've seen it for a while in the betas) is that certain elements of my VM configuration are not retained when I edit a VM. Each time I edit a VM the disk location reverts to none and I need to manually specify the vdisk location. Not a huge issue, but kind of a pain in the neck when you want to tweak something and forget to reset the disk location. This issue was not evident in the 6.1 releases.

Apologies if it has been brought up before; I would love to know the fix if it exists.

 

I can confirm this bug.

Link to comment

Jonp, what does iommu=nopt actually do?

 

The opposite of this, as 6.2 now has iommu=pt as the default (it was not previously, so you're basically turning it back to "normal" mode):

"Sets the IOMMU into passthrough mode for host devices.  This reduces the overhead of the IOMMU for host owned devices, but also removes any protection the IOMMU may have provided against errant DMA from devices.  If you weren't using the IOMMU before, there's nothing lost.  Regardless of passthrough mode, the IOMMU will provide the same degree of isolation for assigned devices."

http://vfio.blogspot.com/2015/05/vfio-gpu-how-to-series-part-3-host.html
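If you want to sanity-check how the IOMMU actually came up on a given boot, one rough check from the console is to grep the kernel log; the exact messages vary by hardware and kernel version, so treat this as a starting point rather than a definitive test:

dmesg | grep -i -e dmar -e iommu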

Link to comment

I had 2 issues upgrading from 6.1.9 to 6.2b21 (not sure I have the logs; I can look if you really want).

Updated from the GUI/plugin, all was well, rebooted the server.

On boot I got a "boot failed" message (not a BIOS "no boot device", but an actual statement of boot failed).

Checked to make sure the boot order was right and it was a legacy (non-UEFI) boot; all was well, still boot failed.

Removed the flash drive, popped it into a Windows computer, re-ran make_bootable.bat (as admin); it finished correctly and fixed the booting issue.

 

Previous VMs are listed correctly, but will not start by default.

On VM edit the Primary vDisk is set to none (or auto, if that was an initial option); I have to set it to manual, and it then knows the right location.

That works, but if I stop the VM and go to edit, it is (always) listed as Primary vDisk Location: none again. I edit it to manual, it pops up with the right location, and I save. All is good, but if I edit again it's back to none until I change it back to manual.
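If it helps with debugging, one way to confirm that the vdisk path is still present in the underlying libvirt definition (even while the edit form shows none) is to dump the domain XML from the console; "My VM" below is just a placeholder for the actual VM name:

virsh dumpxml "My VM" | grep -A2 "<disk"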

Link to comment

Anyone with an HBA with the Marvell 9215 chipset having issues with seeing the drives?

Relatively classic Marvell bug behavior; please see Marvell disk controller chipsets and virtualization.  There's a workaround in the first post, and I think another in the thread somewhere.  Do try updating the firmware for the cards.

 

You have 4 of them, with a drive connected to the second and a drive connected to the third (ata15 and ata19).  The cards were recognized and set up without issue, and their SCSI and ATA channels were set up, but the following occurs when the kernel attempts to set up the 2 connected drives:

Apr  6 21:55:39 Tower kernel: ata15: link is slow to respond, please be patient (ready=0)
Apr  6 21:55:39 Tower kernel: ata19: link is slow to respond, please be patient (ready=0)
Apr  6 21:55:39 Tower kernel: ata15: softreset failed (device not ready)
Apr  6 21:55:39 Tower kernel: ata19: softreset failed (device not ready)
Apr  6 21:55:39 Tower kernel: ata15: softreset failed (1st FIS failed)
Apr  6 21:55:39 Tower kernel: ata19: softreset failed (1st FIS failed)
Apr  6 21:55:39 Tower kernel: ata15: softreset failed (1st FIS failed)
Apr  6 21:55:39 Tower kernel: ata15: limiting SATA link speed to 3.0 Gbps
Apr  6 21:55:39 Tower kernel: ata19: softreset failed (1st FIS failed)
Apr  6 21:55:39 Tower kernel: ata19: limiting SATA link speed to 3.0 Gbps
Apr  6 21:55:39 Tower kernel: ata15: softreset failed (1st FIS failed)
Apr  6 21:55:39 Tower kernel: ata15: reset failed, giving up
Apr  6 21:55:39 Tower kernel: ata19: softreset failed (1st FIS failed)
Apr  6 21:55:39 Tower kernel: ata19: reset failed, giving up

And that's all the attention the drives got.  You'll recognize that response in the first post of the Marvell thread.  Hopefully a firmware update or other workaround will help you.
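Two rough checks that may help while chasing this (plain Linux tooling, nothing unRAID-specific assumed): confirm the kernel still sees the Marvell controllers, and pull the failing links out of the syslog:

# list the Marvell controllers the kernel can see
lspci -nn | grep -i marvell
# pull the ata15/ata19 errors (from the log above) out of the syslog
grep -E 'ata(15|19)' /var/log/syslog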

Link to comment

 

Hmm, the plot thickens.  Let's try something else.  Similar to how you did iommu=nopt, I want you to add this to your syslinux.cfg (you can get rid of iommu=nopt).

 

vfio_iommu_type1.allow_unsafe_interrupts=1

 

Apply that change, reboot your system, and see if that helps.

 

Tried it; sadly it still crashed pretty fast.

Only this time just one of the VMs crashed, but it seems to be different every time, so...

Edit: never mind, they both crashed, and the second time it froze unRAID and I had to force power off the PC.

Link to comment

On VM edit the Primary vDisk is set to none (or auto, if that was an initial option); I have to set it to manual, and it then knows the right location.

That works, but if I stop the VM and go to edit, it is (always) listed as Primary vDisk Location: none again. I edit it to manual, it pops up with the right location, and I save. All is good, but if I edit again it's back to none until I change it back to manual.

I also have this behaviour (also having upgraded from 6.1.9 to 6.2b21). This is both on a pre-existing W10 VM I had and on a new W10 VM I created post-upgrade.

 

Don't have access to the machine at present but can provide logs/diagnostics if required later.

Link to comment

I don't suppose anyone knows why, with GPU passthrough with an AMD card, the dedicated display doesn't initialize until the graphics drivers for the card have loaded in the OS?

I.e. I don't see the UEFI splash screen or boot animation for Windows, but when it reaches the login screen, the screen comes on and is working.

Link to comment

I don't suppose anyone knows why, with GPU passthrough with an AMD card, the dedicated display doesn't initialize until the graphics drivers for the card have loaded in the OS?

I.e. I don't see the UEFI splash screen or boot animation for Windows, but when it reaches the login screen, the screen comes on and is working.

 

That's the way mine works with UEFI boot.  Nothing displays until Windows has the display drivers loaded.  I believe that is normal.

Link to comment

I have found that every time I try to shut down or kill my Windows 10 VM with passthrough, the system has a hissy fit and pretty much becomes unusable.

Trying to kill, shut down or even force stop a VM with passthrough results in a soft lockup (the webGui stops working, and I can't shut down or restart unRAID without hitting the reset button).

Looking at the SSH terminal, I see that the qemu process goes zombie:

root@USS-Enterprise:~# ps ax | grep qem
1942 pts/0    S+     0:00 grep qem
2638 ?        Zl   173:12 [qemu-system-x86] <defunct>

 

Running virsh destroy "Gaming Rig" results in:

error: Failed to destroy domain Gaming Rig
error: Failed to terminate process 61222 with SIGKILL: Device or resource busy
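
For what it's worth, a couple of diagnostic checks for the next time it wedges (a sketch, not a fix): a <defunct> process is already dead and only lingers until its parent reaps it, and an unkillable helper is usually stuck in uninterruptible (D) sleep waiting on hardware:

# which parent should be reaping the defunct qemu process (2638 in the ps output above)?
ps -o ppid=,comm= -p 2638
# any processes stuck in uninterruptible sleep?
ps -eo pid,stat,comm | awk '$2 ~ /D/'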

 

I've tried the following IOMMU options to try to resolve it, but none have been successful (both on their own and together):

iommu=nopt

vfio_iommu_type1.allow_unsafe_interrupts=1

 

Diagnostics attached.

 

I also have this annoying white section coming up on my display every time I click anywhere on the unRAID webGui.

Here everything is fine

unraid1.png

 

If I click somewhere on the page, this white section comes up, blocking the view of whatever is normally in that spot (In the VM editor and Docker manager, this is extremely annoying)

unraid2.png

 

I dug around in the Chrome inspector and this element seems to be the cause (deleting it makes the box go away):

unraid3.png

uss-enterprise-diagnostics-20160408-2353.zip

Link to comment

Quick tangential question: Over in the Preclear Plugin conversation someone made the comment that the primary purpose of the script is largely moot now, since the 6.2 beta can zero new drives in the background while the array is active. I thought I was keeping pretty close tabs on the beta, and usually comb through release notes with each new announcement... but this was news to me.

 

Is this a confirmed feature of the 6.2 beta now? A quick search of the forum didn't turn anything up, but is there a discussion somewhere with more details? I don't have a spare drive to add to my test rig or I'd just try it for myself... ;)

 

-A

Link to comment

Hi team,

 

One thing I've noticed (not a major issue, but I've seen it for a while in the betas) is that certain elements of my VM configuration are not retained when I edit a VM. Each time I edit a VM the disk location reverts to none and I need to manually specify the vdisk location. Not a huge issue, but kind of a pain in the neck when you want to tweak something and forget to reset the disk location. This issue was not evident in the 6.1 releases.

Apologies if it has been brought up before; I would love to know the fix if it exists.

 

I can confirm this bug.

 

Thanks for confirming. Are there any known workarounds/temp fixes other than the manual method which I outlined?

Link to comment

Quick tangential question: Over in the Preclear Plugin conversation someone made the comment that the primary purpose of the script is largely moot now, since the 6.2 beta can zero new drives in the background while the array is active. I thought I was keeping pretty close tabs on the beta, and usually comb through release notes with each new announcement... but this was news to me.

 

Is this a confirmed feature of the 6.2 beta now? A quick search of the forum didn't turn anything up, but is there a discussion somewhere with more details? I don't have a spare drive to add to my test rig or I'd just try it for myself... ;)

 

-A

 

This was talked about in one of the previous betas as a discovery by a user (JohnnieBlack, maybe).

Anyhow, from what I recall this is a big deal, as the array is available while clearing (this was not the case previously); however, it only clears the drive.

Preclear (by default) also does a post-read after the clear to verify the SMART parameters have not changed; a change there is a good indication of a problematic drive.

(someone else will likely have more/better things to add to this clarification)
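If you only want the SMART-comparison part, that can also be done by hand with smartctl (a rough sketch; sdX and the file names are placeholders):

# snapshot the SMART attributes before the stress test
smartctl -A /dev/sdX > /boot/smart-before.txt
# ...run whatever stress test or clear you like, then snapshot again and compare
smartctl -A /dev/sdX > /boot/smart-after.txt
diff /boot/smart-before.txt /boot/smart-after.txt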

Link to comment

Hi team,

 

One thing I've noticed (not a major issue, but I've seen it for a while in the betas) is that certain elements of my VM configuration are not retained when I edit a VM. Each time I edit a VM the disk location reverts to none and I need to manually specify the vdisk location. Not a huge issue, but kind of a pain in the neck when you want to tweak something and forget to reset the disk location. This issue was not evident in the 6.1 releases.

Apologies if it has been brought up before; I would love to know the fix if it exists.

 

I can confirm this bug.

 

Thanks for confirming. Are there any known workarounds/temp fixes other than the manual method which I outlined?

 

Not that I am aware of. Out of curiosity, are your images saved on a cache drive or a drive that is mounted outside of the array?

Link to comment

Quick tangential question: Over in the Preclear Plugin conversation someone made the comment that the primary purpose of the script is largely moot now, since the 6.2 beta can zero new drives in the background while the array is active. I thought I was keeping pretty close tabs on the beta, and usually comb through release notes with each new announcement... but this was news to me.

 

Is this a confirmed feature of the 6.2 beta now? A quick search of the forum didn't turn anything up, but is there a discussion somewhere with more details? I don't have a spare drive to add to my test rig or I'd just try it for myself... ;)

 

-A

 

This was talked about in one of the previous betas as a discovery by a user (JohnnieBlack, maybe).

Anyhow, from what I recall this is a big deal, as the array is available while clearing (this was not the case previously); however, it only clears the drive.

Preclear (by default) also does a post-read after the clear to verify the SMART parameters have not changed; a change there is a good indication of a problematic drive.

(someone else will likely have more/better things to add to this clarification)

 

Many of us feel that running two or three preclear cycles will get the drive past the 'infant mortality' portion of the bathtub curve (google for further discussion).  Uncovering an early HD failure before putting that drive into an array is much less stressful than finding a compromised array in the first week after introducing a new drive into the mix. 

 

PS: I could tell a story about how the concept of infant mortality became general knowledge in the military during WWII, but that would be completely off topic...

Link to comment

 

Many of us feel that running two or three preclear cycles will get the drive past the 'infant mortality' portion of the bathtub curve (google for further discussion).  Uncovering an early HD failure before putting that drive into an array is much less stressful than finding a compromised array in the first week after introducing a new drive into the mix. 

 

PS: I could tell a story about how the concept of infant mortality became general knowledge in the military during WWII, but that would be completely off topic...

 

I agree that there is value in stress testing the drive and checking to make sure nothing is failing after the first few writes.

 

That said, maybe this signals that a new plugin needs to be made that removes the clearing portion of the plugin and instead focuses entirely on stress testing. Leave the clearing entirely to the OS since that's not an issue anymore.

 

This should allow more cycles of stress testing without that long post-read cycle (which verifies the drive is zeroed), meaning you can do more cycles faster... I think.
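As a generic example of what a stress-only pass could look like today, stock badblocks can do a destructive write-and-verify run across the whole disk. To be clear, this is not what the preclear plugin or unRAID itself does, just ordinary Linux tooling, and it destroys all data, so only use it on an empty, unassigned disk (sdX is a placeholder):

# destructive write-mode test: writes and reads back four patterns over the entire disk
badblocks -wsv /dev/sdX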

Link to comment

 

Many of us feel that running two or three preclear cycles will get the drive past the 'infant mortality' portion of the bathtub curve (google for further discussion).  Uncovering an early HD failure before putting that drive into an array is much less stressful than finding a compromised array in the first week after introducing a new drive into the mix. 

 

PS: I could tell a story about how the concept of infant mortality became general knowledge in the military during WWII, but that would be completely off topic...

 

I agree that there is value in stress testing the drive and checking to make sure nothing is failing after the first few writes.

 

That said, maybe this signals that a new plugin needs to be made that removes the clearing portion of the plugin and instead focuses entirely on stress testing. Leave the clearing entirely to the OS since that's not an issue anymore.

 

This should allow more cycles of stress testing without that long post-read cycle (which verifies the drive is zeroed), meaning you can do more cycles faster... I think.

 

I think you are missing a part of the equation. It is not only the stress introduced by the testing; the elapsed time is an integral part of the entire process.

Link to comment
This topic is now closed to further replies.