Unraid OS version 6.6.2 available


limetech

Recommended Posts

19 hours ago, johnnie.black said:

This is likely a problem with libvirt.img and not directly related to this release, you should start a thread on the general support forum and don't forget to please post your diagnostics: Tools -> Diagnostics

Too late; after the parity sync finished, I found out that HVM was disabled in the BIOS (I don't know how that happened), and I think the VM got removed because I had set one VM to autostart. The other VM's XML was back after a reboot. I just created a new VM, then replaced the 'new' .img file with the old one, and it boots with no errors.

 

Another thing that works again: the Plugin, Docker, and OS update checks! Previously only the Auto Update plugin would update them, while a manual search just returned the status 'unknown'.

After 10 hours, NFS still works.

 

Link to comment
Just now, limetech said:

Why don't you try it?

You would think I would just try it, but I have some rclone transfers going and my unRAID box also hosts my pfSense VM, so I was hoping someone else would confirm the fix before I go through the rigmarole of upgrading and potentially having to downgrade again. If no one else confirms it, I can probably test it myself this weekend. I was just hoping this had a good chance of fixing it, making it worth trying. 😜

Link to comment

Still having a weird issue with slow VM boot times; this issue did not exist in 6.5.3 (I downgraded and checked). The VM pegs a single core at 100% utilization and takes several minutes to boot. Afterwards the VM runs at normal speed/CPU utilization. There are also VM crashes, but I believe those are something else's fault.

This issue has existed in 6.6.0rc1 all the way to now, with the same VM config.

 

Here's a sample of one of the VM configs I'm using.

 

 

sampleVM.txt

Link to comment
On 10/14/2018 at 11:56 PM, hawihoney said:

Couldn't wait. Need these servers.

 

I deleted all .txz files in /boot/config/plugins/[DevPack|NerdPack]/[6.5|6.6]. User shares are back again. Thanks for now.
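The workaround above can be sketched as a couple of shell commands. This is a hedged illustration only: it runs in a scratch directory with made-up package names, since on a live server the path would be /boot/config/plugins/NerdPack/6.6 (and DevPack likewise), and deleting the wrong files there could break things.

```shell
# Sketch of the workaround described above: remove the cached .txz package
# archives so NerdPack/DevPack re-download builds that match the running
# Unraid version. Demonstrated in a scratch directory with example package
# names; on a real server you would target /boot/config/plugins/NerdPack/6.6
# (and the DevPack equivalent) instead.
plugdir=$(mktemp -d)
touch "$plugdir/perl-5.26.1-x86_64-1.txz" "$plugdir/screen-4.6.2-x86_64-1.txz"

rm -f "$plugdir"/*.txz        # the actual workaround step
ls "$plugdir" | wc -l         # prints 0: the cached packages are gone
```

On the next array start the plugins should re-fetch packages built against the new release, which is what restored the user shares here.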

 

The DevPack/NerdPack plugins are a great help, but they are always a bundle of problems. Is there a way to tie them more closely to the main Unraid release with respect to version compatibility and compile options?

 

 

Thanks for this. I was tearing my hair out as to why none of my shares were mounting. As soon as I deleted all the .txz's for DevPack, everything was fine. (Note: this was an upgrade to 6.6.2 from 6.6.1.)

Link to comment
On 10/14/2018 at 8:10 PM, jwiese997 said:

I finally figured it out. It was my rclone script that was causing it to go bonkers. I've corrected that and have updated to 6.6.2 with no problems now. Thanks for the help.

@jwiese997

What did you have to do with the rclone script to get the server working properly? And what did the rclone script do, if I may ask? I had severe issues with the update on my backup array, and I would like to shut off or uninstall anything that is going to mess up my primary server.

Link to comment
9 hours ago, sentein said:

@jwiese997

What did you have to do with the rclone script to get the server working properly? And what did the rclone script do, if I may ask? I had severe issues with the update on my backup array, and I would like to shut off or uninstall anything that is going to mess up my primary server.

I just uninstalled rclone and User Scripts. I haven't set those back up yet, so I haven't attempted it again.

Link to comment

I'm on 6.5.3 and, looking at the release notes, I cannot make heads or tails of most of it. What is important, what is breaking, what is security-related, what has changed or moved?

 

Where can I find a concise list of updates/fixes? Just the important parts, please.

Link to comment
On 10/16/2018 at 8:13 PM, thenonsense said:

Still having a weird issue with slow VM boot times; this issue did not exist in 6.5.3 (I downgraded and checked). The VM pegs a single core at 100% utilization and takes several minutes to boot. Afterwards the VM runs at normal speed/CPU utilization. There are also VM crashes, but I believe those are something else's fault.

This issue has existed in 6.6.0rc1 all the way to now, with the same VM config.

+1 with 2990WX (VM 48GB of RAM + 24 cores)

 

The difference is that in my case, it didn't get back to normal speed after the slow boot. It stayed laggy for well over 5 minutes, so I downgraded.

 

I tried every version since 6.6.0. I originally thought it was a BIOS-related problem, with Gigabyte being behind on their AGESA, but even with the latest BIOS (same AGESA version as ASRock's), the problem persisted.

 

Probably still a Gigabyte-specific bug, but Unraid doesn't care enough to help out because we aren't that important, at least a lot less important than the GUI whiners. 👎

 

We'll both be on 6.5.3 for the foreseeable future, mate, or pay up for a new motherboard. 👊

 

 

Link to comment
2 hours ago, testdasi said:

+1 with 2990WX (VM 48GB of RAM + 24 cores)

 

The difference is that in my case, it didn't get back to normal speed after the slow boot. It stayed laggy for well over 5 minutes, so I downgraded.

 

I tried every version since 6.6.0. I originally thought it was a BIOS-related problem, with Gigabyte being behind on their AGESA, but even with the latest BIOS (same AGESA version as ASRock's), the problem persisted.

 

Probably still a Gigabyte-specific bug, but Unraid doesn't care enough to help out because we aren't that important, at least a lot less important than the GUI whiners. 👎

 

We'll both be on 6.5.3 for the foreseeable future, mate, or pay up for a new motherboard. 👊

Downgrading seems to be the only viable option.  Unfortunately I played with fire too long while upgrading from 6.6.1 to 6.6.2, so now I have to retrofit instead of downgrading via the Update OS tool.  Not to mention losing every benefit that came with 6.6.0 (I'm a big fan of the VM configs retaining various XML-only settings).

 

As for ponying up for a motherboard, I'm not sure that's the problem.  More research is needed.

Link to comment
4 hours ago, testdasi said:

+1 with 2990WX (VM 48GB of RAM + 24 cores)

 

The difference is that in my case, it didn't get back to normal speed after the slow boot. It stayed laggy for well over 5 minutes, so I downgraded.

 

I tried every version since 6.6.0. I originally thought it was a BIOS-related problem, with Gigabyte being behind on their AGESA, but even with the latest BIOS (same AGESA version as ASRock's), the problem persisted.

 

Probably still a Gigabyte-specific bug, but Unraid doesn't care enough to help out because we aren't that important, at least a lot less important than the GUI whiners. 👎

 

We'll both be on 6.5.3 for the foreseeable future, mate, or pay up for a new motherboard. 👊

 

 

Ok so...

 

First, Linux as a whole is still patching in Threadripper 2 support at the kernel level.

 

Windows is having issues with the higher-core-count chips as well.

 

Second, AMD has released buggy/broken AGESA for us Threadripper users on Linux over the past few months. It's mostly fair to point at them to fix their stuff.

 

Third, following on from that, claiming the developers here don't care is a weak accusation.

 

We are pretty much running the latest kernel with current security fixes, and they ARE listening to feedback on the GUI. You can't please everyone.

 

Again, the blame for your woes is spread several ways, with other companies deserving more of it.

 

Plus, if you live on the bleeding edge of hardware, sometimes you can slip. :)

Edited by Dazog
Link to comment

"In terms of code changes, this is a very minor release; however, we changed a significant linux kernel CONFIG setting that changes the kernel preemption model.  This change should not have any deleterious effect on your server, and in fact may improve performance in some areas, certainly in VM startup (see below).  This change has been thoroughly tested - thank you! to all who participated in the 6.5.3-rc series testing.

 

Background: several users have reported, and we have verified, that as the number of cores assigned to a VM increases, the POST time required to start a VM increases seemingly exponentially with OVMF and at least one GPU/PCI device passed through.  Complicating matters, the issue only appears for certain Intel CPU families.  It took a lot of work by @eschultz in consultation with a couple linux kernel developers to figure out what was causing this issue.  It turns out that QEMU makes heavy use of a function associated with kernel CONFIG_PREEMPT_VOLUNTARY=yes to handle locking/unlocking of critical sections during VM startup.  Using our previous kernel setting CONFIG_PREEMPT=yes makes this function a NO-OP and thus introduces serious, unnecessary locking delays as CPU cores are initialized.  For core counts around 4-8 this delay is not that noticeable, but as the core count increases, VM start can take several minutes(!)."

 

From the 6.5.3 release notes here: 

I'm not a Linux dev, but just reading this, it sounds like a plausible cause, and it's too coincidental to overlook.  Maybe we should test compiling with this toggled?  Has it been changed recently?
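For anyone wanting to see which preemption model a kernel was built with, here is a minimal sketch. It greps a sample config snippet rather than a real one; the values shown are what the quoted release notes describe for 6.5.3 (voluntary preemption), and whether 6.6.x reverted to full preemption is exactly the open question.

```shell
# The two settings the release notes contrast, illustrated against a sample
# kernel config snippet (not your real config). Per the quoted notes, 6.5.3
# switched to CONFIG_PREEMPT_VOLUNTARY; a kernel with the older behavior
# would show CONFIG_PREEMPT=y instead.
cat > /tmp/sample_kernel_config <<'EOF'
# CONFIG_PREEMPT_NONE is not set
CONFIG_PREEMPT_VOLUNTARY=y
# CONFIG_PREEMPT is not set
EOF

# Only the enabled option matches; the "is not set" lines are comments.
grep '^CONFIG_PREEMPT' /tmp/sample_kernel_config
# prints: CONFIG_PREEMPT_VOLUNTARY=y

# On a live system you could try the same grep against the running kernel's
# config, assuming CONFIG_IKCONFIG_PROC was enabled in that build:
#   zcat /proc/config.gz | grep PREEMPT
```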

 

It's not us vs each other, it's us vs the problem.  Don't bite the hand that feeds you.  @limetech Jon, I remember having you on the phone in January and we talked about how many fixes came from the community vs from AMD themselves.  If AMD sees this and helps, all the better, but I think we as the community need to take the first step.

 

I'll look at building a kernel.  Everyone else, do what you can for whatever problems you may have.

Link to comment
On 10/16/2018 at 12:13 PM, thenonsense said:

Still having a weird issue with slow VM boot times; this issue did not exist in 6.5.3 (I downgraded and checked). The VM pegs a single core at 100% utilization and takes several minutes to boot. Afterwards the VM runs at normal speed/CPU utilization. There are also VM crashes, but I believe those are something else's fault.

This issue has existed in 6.6.0rc1 all the way to now, with the same VM config.

 

Here's a sample of one of the VM configs I'm using.

Is there a Bug Report for this?

Link to comment

Upgraded without an issue.

 

I'm still very satisfied with the new look of unRAID! In my opinion it looks more professional.

 

[offtopic]

Are there plans to integrate a firewall into unRAID? I'd like to host an unRAID server in a data center.

[/offtopic]

Link to comment
  • limetech unfeatured and unpinned this topic
