unRAID OS version 6.5.0 Stable Release - Update Notes



On 2/4/2018 at 8:08 PM, CHBMB said:

 

It's pretty valid "hysteria" as far as I can tell...

Admittedly a home server running unRAID may not be the most obvious of targets, but I think the concern has been pretty justified.

 

I don't know how it would be, as it's pretty much a back-end machine with no user download activity... and it runs Linux.

 

Anyway, the update installed OK and took a few minutes.


FYI - This was an upgrade from v6.3.5 direct to v6.4.1.

 

Someone pointed me to this thread (really wish I'd seen it before upgrading!) and the Aspeed IPMI fix got video working again for my ASRock motherboard. It would boot up, but at the point where the login prompt should appear the screen would just go blank, which was irritating, though at least the web GUI and SSH still worked. The fix has you add the "nomodeset" parameter to the append line of the main/default unRAID OS label/boot option. Do I need to, or should I, add that "nomodeset" parameter to the other labels as well (GUI and Safe Mode)? I just want to be sure I won't run into the problem again, especially if I ever need to boot into safe mode.
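For reference, that change lives in syslinux/syslinux.cfg on the flash drive. Below is a rough sketch of what the file might look like with "nomodeset" added to every label, not just the default one; the exact label names and append contents vary between unRAID versions, so treat this as an illustration rather than a copy-paste template.

# /boot/syslinux/syslinux.cfg (illustrative excerpt)
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot nomodeset
label unRAID OS GUI Mode
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui nomodeset
label unRAID OS Safe Mode (no plugins, no GUI)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode nomodeset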

 

Another issue I have is that after the upgrade I can't get a VM to boot. My VM Manager starts up fine, so it isn't the common problem of the paths getting lost or messed up mentioned in the OP. But when I try to start my Windows 10 VM after the upgrade, I get a bluescreen/frowny face with "your pc ran into a problem and needs to restart" in the VM. It proceeds to restart but ends up at a "your pc did not start correctly" screen. Shutting down and rebooting, the same thing happens again and again. I tried a startup repair from that screen, but that didn't work. The only other thing I've tried changing so far is bumping the machine type from q35-2.7 to the latest q35-2.10. No luck there either. Attaching the [removed xml] for that VM. Frustrating, as that VM was chugging along fine just a couple of hours ago under v6.3.5.

 

My Windows 7 VM seems to be doing just fine, but it is running SeaBIOS+i440fx rather than OVMF+q35 like the Win10 one above, though I'm not sure whether that makes a difference. I also have a CentOS 7 VM that comes up fine, and it uses SeaBIOS/q35. None of them have any physical hardware passed through. I've made a quick copy of my disk *.img files for the VMs, so if anything corrupts my Windows install I can hopefully revert if needed.
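As an aside, that kind of pre-upgrade safety copy is just a straight file copy of the vdisk image taken while the VM is shut down. A minimal sketch; the paths below are only the typical unRAID domains share layout and are an assumption for any particular setup:

# copy the vdisk before experimenting so the VM can be rolled back later
# (paths are illustrative; adjust to wherever your vdisks actually live)
cp -v /mnt/user/domains/Windows10/vdisk1.img /mnt/user/domains/Windows10/vdisk1.img.bak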

 

edit: Got it to boot by deleting the VM and recreating it, pointing back to the same .img file for the C: drive. I remembered I had made some tweaks to that VM a while back, from a thread about SSD optimization, and my best guess is those settings didn't agree with the new version of unRAID/KVM. Leaving this here in case someone runs into the same issue.

 

58 minutes ago, Squid said:

You're getting the message because the upgrade notes recommend adding that line to all Ryzen systems.  Whether or not you actually need to, I cannot answer.

 

@killeriq, if you don't add it, write yourself a BIG note and post it on your server saying that you didn't.  That way, if you do run into a problem in the future, you will have a reminder of the first thing to try!

26 minutes ago, killeriq said:

1. Is there anything to install, or is adding that line to the config the "installation"?

2. Does this have anything to do with the C-states enable/disable setting in the BIOS?

 

Your go file in the config directory/folder needs to look like this.  Be sure that you use a Linux-aware editor (not the Notepad that comes with Windows!).

 

#!/bin/bash
# Start the Management Utility
# disable the C6 C-state on Ryzen CPUs (recommended in the 6.4+ update notes to avoid lock-ups)
/usr/local/sbin/zenstates --c6-disable
/usr/local/sbin/emhttp &

 

For number 2: this setting is somewhere in your BIOS, which you access before the unRAID boot process starts.  You may have to read the motherboard manual to find the proper key to press to enter the BIOS.
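For anyone who wants to confirm the change took effect, here is a rough sketch of what you might run from the unRAID terminal after a reboot. The cat check is safe anywhere; the --list flag is an assumption carried over from the upstream zenstates.py script and may not exist in the copy unRAID bundles.

# confirm the flash copy of the go file contains the zenstates line
cat /boot/config/go

# if the bundled zenstates matches upstream zenstates.py, this should
# report the C6 state as disabled after the next reboot
/usr/local/sbin/zenstates --list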

1 minute ago, Frank1940 said:

Your go file in the config directory/folder needs to look like this.  Be sure that you use a Linux-aware editor (not the Notepad that comes with Windows!). [...]

1. Thanks!

2. I have C-states ENABLED in the BIOS (I know where the setting is ;) ), which is why I'm wondering whether the "/usr/local/sbin/zenstates --c6-disable" line is related to that. I know there were issues when C-states were enabled on versions below v6.4.0-RC, but now it's fine; no system freezes.


Hi folks, please use this thread to discuss the update notes (i.e. the OP) themselves. If you need support or have questions about the update, the actual release thread would be a better place to post.

3 hours ago, Frank1940 said:

It seems like more and more motherboards are having problems booting in legacy mode.  Pointing it out could save a lot of threads and posts about the issue...

 

Interestingly, with all prior versions of unRAID, booting in legacy mode with this board (ASRock C236 WSI) was no problem.  Something changed in unRAID 6.5.0 that no longer allows legacy boot.  Limetech is not sure what changed as they did nothing that, to their knowledge, would impact that.  However, since a roll-back to 6.4.1 once again boots the board in legacy mode, something must have changed in the Linux kernel with the current release.  Perhaps other boards will be similarly affected in the future.

 

The fix for this board involves making sure UEFI boot is the ONLY boot option in the BIOS.  If other boot options exist, it fails to boot even if UEFI is the first priority.
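One related detail for anyone switching a server over to UEFI boot: on recent unRAID releases the flash drive ships with its EFI boot files in a folder named EFI- (with a trailing dash) when UEFI boot is not permitted, and renaming it to EFI is what makes the stick show up as a UEFI boot device. Newer releases expose a "permit UEFI boot" style checkbox on the Flash device page that performs the same rename, if I recall correctly. A minimal sketch, assuming the flash is mounted at /boot as usual:

# rename the EFI- folder on the flash so the firmware offers the stick as a UEFI boot option
mv /boot/EFI- /boot/EFI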

On 3/20/2018 at 8:34 AM, Hoopster said:

The fix for this board involves making sure UEFI boot is the ONLY boot option in the BIOS.  If other boot options exist, it fails to boot even if UEFI is the first priority.

That's interesting, because for my board pure EFI or pure legacy booting causes problems with my virtual machines; setting the BIOS to "EFI mode with Legacy support" (CSM on some boards) and then making sure unRAID boots as a legacy OS is what makes my setup happy. Then again, I've only been on this (new-to-me) motherboard since the 6.4.0-rc's.

 

I would agree that the kernel (or some feature of it) is hunting for something from the BIOS, and the boot mode seems to affect whether or not it gets that resource.
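If it helps anyone confirm which way their server actually came up, a standard Linux check from the unRAID terminal is to look for the EFI firmware directory; a quick sketch:

# /sys/firmware/efi only exists when the kernel was booted via UEFI
[ -d /sys/firmware/efi ] && echo "booted UEFI" || echo "booted legacy (CSM)"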

5 minutes ago, Jcloud said:

That's interesting, because for my board pure EFI or pure legacy booting causes problems with my virtual machines; setting the BIOS to "EFI mode with Legacy support" (CSM on some boards) and then making sure unRAID boots as a legacy OS is what makes my setup happy. [...]

 

Just to clarify, I am referring to UEFI: (flash drive) boot being the only option in the boot priority settings. CSM is enabled and the boot option filter is set to "UEFI and Legacy."

 

My full findings are here:

 

 


Hi,

I've run the upgrade assistant to upgrade from 6.4.1 to 6.5 and it gives me this message:

Issue Found: Cache drive partition doesn't start on sector 64. You will have problems. See here https://lime-technology.com/forums/topic/46802-faq-for-unraid-v6/?tab=comments#comment-511923 for how to fix this.

This link tells me how to change some of my share settings, but I don't really understand what I'm trying to do; it doesn't move everything off the cache drive, so I don't want to proceed with removing the cache drive and reformatting it. Is this an existing issue, or has it just been introduced with 6.5? The instructions look as if they have been around a while.

 

 

2 minutes ago, PhilDG said:

I've run the upgrade assistant to upgrade from 6.4.1 to 6.5 and it gives me this message: "Issue Found: Cache drive partition doesn't start on sector 64. You will have problems." [...]

What the instructions are telling you is how to move the data off the cache by setting the cache preferences on the shares.  Then you can reformat the cache disk and move the data back onto the newly formatted cache.  Read the instructions carefully.
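For orientation, here is a rough outline of that procedure. The share settings themselves are changed in the webGUI; the mover path below is the usual unRAID location and the device name is just a placeholder, so treat both as assumptions for your system:

# 1. in the webGUI, set each share that lives on the cache to "Use cache disk: Yes"
#    so the mover will migrate its files to the array, then run the mover:
/usr/local/sbin/mover

# 2. once the cache is empty, stop the array, reformat the cache from the webGUI,
#    then restore the original share cache settings and run the mover again:
/usr/local/sbin/mover

# 3. confirm the new partition now starts on sector 64:
fdisk -l /dev/sdX    # replace sdX with your cache device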

11 minutes ago, PhilDG said:

Fix Common Problems? Yes, I think so.

Then do me a favour and post your diagnostics, and I'll give you a command to run to see why it's failing here.

11 minutes ago, PhilDG said:

Thanks Squid

 

I have just run FCP in extended mode and got some SMB file permission issues, but it doesn't report anything about the cache.

 

Here is the diagnostics

tower-diagnostics-20180324-2124.zip

Give me the output of this command from a terminal if you're so inclined:

fdisk -l /dev/sdb
1 minute ago, Squid said:

Give me the output of this command from a terminal if you're so inclined:


fdisk -l /dev/sdb

root@tower:/# fdisk -l /dev/sdb
Disk /dev/sdb: 232.9 GiB, 250059350016 bytes, 488397168 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start       End   Sectors   Size Id Type
/dev/sdb1          63 488397167 488397105 232.9G 83 Linux
root@tower:/#
