unRAID Server Version 6.2.4 Available


limetech

Download

 

Clicking 'Check for Updates' on the Plugins page is the preferred way to upgrade.
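For reference, a minimal sketch of the manual alternative, assuming the release zip has already been downloaded. The filename and the /boot flash mount point below are the usual unRAID conventions, stated here as assumptions, not taken from this post. unRAID boots entirely from the flash drive, so replacing the bz* files there and rebooting mirrors what the plugin update does.

```shell
# Hedged sketch of a manual upgrade. Assumptions: the release zip filename
# below is illustrative, and the flash drive is mounted at /boot on a
# running server (standard for unRAID).
ZIP="unRAIDServer-6.2.4-x86_64.zip"   # illustrative local filename
DEST="${DEST:-/boot}"                 # flash mount point (assumption)
if [ -f "$ZIP" ]; then
    unzip -o "$ZIP" 'bz*' -d "$DEST"  # bzimage, bzroot, ...
else
    echo "release zip not found: $ZIP"
fi
```

Keep a backup of the flash drive before replacing files on it.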

 

Upgrade Notes

 

If you are upgrading from unRAID Server 6.1.x or earlier, please refer to the Version 6.2.0 release post for important upgrade information.

 

Release Notes

 

This release updates the curl package, which was recently updated with a number of security fixes.

 

Everyone is strongly encouraged to update to this release.

 

Known Issues

  • NFS connectivity issues with some media players.  This is solved in 6.3.0-rc4 and later.
  • Reported performance issues with some low/mid range hardware.  No movement on this yet.
  • Spindown of SAS and other pure-SCSI devices assigned to the parity-protected array does not work (but SATA devices attached to a SAS controller do work).  No movement on this yet.

 

unRAID Server OS Change Log
===========================

Version 6.2.4 2016-11-05
------------------------

Base distro:

- curl: version 7.51.0 (CVE-2016-8615, CVE-2016-8616, CVE-2016-8617, CVE-2016-8618, CVE-2016-8619, CVE-2016-8620, CVE-2016-8621, CVE-2016-8622, CVE-2016-8623, CVE-2016-8624, CVE-2016-8625)
- glibc-zoneinfo: version 2016i

Linux kernel:

- added config options:
  - CONFIG_CFQ_GROUP_IOSCHED: CFQ Group Scheduling support
  - CONFIG_CGROUP_PIDS: PIDs cgroup subsystem
  - CONFIG_BLK_DEV_THROTTLING: Block layer bio throttling support
  - CONFIG_XFRM_USER: Transformation user configuration interface
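To verify that a running kernel actually has options like these enabled, a quick hedged check can be run on the console. This assumes the kernel exposes its build config at /proc/config.gz (which depends on CONFIG_IKCONFIG_PROC being set); the loop degrades gracefully when it does not.

```shell
# Check whether the newly added options are enabled in the running kernel.
# Assumption: /proc/config.gz is available (CONFIG_IKCONFIG_PROC); if not,
# each option is simply reported as unverifiable.
for opt in CONFIG_CFQ_GROUP_IOSCHED CONFIG_CGROUP_PIDS \
           CONFIG_BLK_DEV_THROTTLING CONFIG_XFRM_USER; do
    if [ -r /proc/config.gz ]; then
        zcat /proc/config.gz | grep -E "^${opt}=" || echo "${opt}: not set"
    else
        echo "${opt}: /proc/config.gz unavailable"
    fi
done
```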

Version 6.2.3 2016-11-01
------------------------

Base distro:

- php: version 5.6.27

Linux kernel:

- version 4.4.30
- md/unraid version: 2.6.8
  - make the 'check' command "correct"/"nocorrect" argument case insensitive
  - mark superblock 'clean' upon initialization
    
Management:

- emhttp: ensure disk shares have proper permissions set even if not being exported
- emhttp: fix detection of unclean shutdown to trigger automatic parity check upon Start if necessary
- emhttp: unmount docker/libvirt loopback if docker/libvirt fail to start properly
- update: hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt} smartmontools/drivedb.h
- webGui: correct handling of unclean shutdown detection
- webGui: fixed some help text typos
- webGui: Fixed: Web UI Docker Stop/Restart does not cleanly stop container

Version 6.2.2 2016-10-21
------------------------

Linux kernel:

- version 4.4.26 (CVE-2016-5195)
- reiserfs: incorporated patch from SUSE to fix broken extended attributes bug

Management:

- shutdown: save diagnostics and additional logging in event of cmdStop timeout
- webGui: Fixed: Windows unable to extract diagnostics zip file

Version 6.2.1 2016-10-04
------------------------

Base distro:

- curl: version 7.50.3 (CVE-2016-7167)
- gnutls: version 3.4.15 (CVE-2015-6251)
- netatalk: version 3.1.10
- openssl: version 1.0.2j (CVE-2016-7052, CVE-2016-6304, CVE-2016-6305, CVE-2016-2183, CVE-2016-6303, CVE-2016-6302, CVE-2016-2182, CVE-2016-2180, CVE-2016-2177, CVE-2016-2178, CVE-2016-2179, CVE-2016-2181, CVE-2016-6306, CVE-2016-6307, CVE-2016-6308)
- php: version 5.6.26 (CVE-2016-7124, CVE-2016-7125, CVE-2016-7126, CVE-2016-7127, CVE-2016-7128, CVE-2016-7129, CVE-2016-7130, CVE-2016-7131, CVE-2016-7132, CVE-2016-7133, CVE-2016-7134, CVE-2016-7416, CVE-2016-7412, CVE-2016-7414, CVE-2016-7417, CVE-2016-7411, CVE-2016-7413, CVE-2016-7418)

Linux kernel:

- version 4.4.23 (CVE-2016-0758, CVE-2016-6480)
- added config options:
  - CONFIG_SENSORS_I5500: Intel 5500/5520/X58 temperature sensor

Management:

- bug fix: For file system type "auto", only attempt btrfs,xfs,reiserfs mounts.
- bug fix: For docker.img and libvirt.img, if the path is on /mnt/, check for a mountpoint on any subdirectory component, not just the second subdir.
- bug fix: During shutdown, force continue if the array stop is taking too long.
- bug fix: Handle the case in 'mover' where rsync may move a file but also return an error status.
- md/unraid: Fix bug where the case of no data disks was improperly detected.
- webGui: Add "Shutdown time-out" control on Disk Settings page.

Version 6.2 2016-09-15
----------------------

- stable release version


Just went from 6.2.3 to 6.2.4 using built in updater.

 

WebGUI isn't coming up, and I'm getting this notification sent to my gmail every minute:

 

/bin/sh: line 1: 22637 Segmentation fault      /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null

 

All my dockers are functioning and my shares are available.  What's my next step?

 

UPDATE:

Running 'shutdown -r now' from the command line allowed me to reboot, and the WebGUI was up upon reboot.  I did save a copy of the syslog to disk1 before rebooting in case you're interested.


Upgraded from 6.2.0 and my Windows 10 VM will no longer boot.  It starts the boot process and then switches to "preparing automatic repair".  I've toyed around with some of the options there but none have worked.  It thinks I need to reinstall Windows. 

 

Anyone have a suggestion?

 

Here's my qemu log for the VM:

 

2016-11-06 19:14:24.957+0000: starting up libvirt version: 1.3.1, qemu version: 2.5.1, hostname: MooreunRaid

LC_ALL=C PATH=/bin:/sbin:/usr/bin:/usr/sbin HOME=/ QEMU_AUDIO_DRV=none /usr/local/sbin/qemu -name 'Cottage Win 10' -S -machine pc-i440fx-2.5,accel=kvm,usb=off,mem-merge=off -cpu host,hv_time,hv_relaxed,hv_vapic,hv_spinlocks=0x1fff,hv_vendor_id=none -drive file=/usr/share/qemu/ovmf-x64/OVMF_CODE-pure-efi.fd,if=pflash,format=raw,unit=0,readonly=on -drive file=/etc/libvirt/qemu/nvram/ee9ea7e9-9968-30dd-95a5-a6eaa4d6cfac_VARS-pure-efi.fd,if=pflash,format=raw,unit=1 -m 3072 -realtime mlock=on -smp 3,sockets=1,cores=3,threads=1 -uuid ee9ea7e9-9968-30dd-95a5-a6eaa4d6cfac -no-user-config -nodefaults -chardev 'socket,id=charmonitor,path=/var/lib/libvirt/qemu/domain-Cottage Win 10/monitor.sock,server,nowait' -mon chardev=charmonitor,id=monitor,mode=control -rtc base=localtime -no-hpet -no-shutdown -boot strict=on -device ich9-usb-ehci1,id=usb,bus=pci.0,addr=0x7.0x7 -device ich9-usb-uhci1,masterbus=usb.0,firstport=0,bus=pci.0,multifunction=on,addr=0x7 -device ich9-usb-uhci2,masterbus=usb.0,firstport=2,bus=pci.0,addr=0x7.0x1 -device ich9-usb-uhci3,masterbus=usb.0,firstport=4,bus=pci.0,addr=0x7.0x2 -device virtio-serial-pci,id=virtio-serial0,bus=pci.0,addr=0x4 -drive 'file=/mnt/user/domains/Cottage Win 10/vdisk1.img,format=raw,if=none,id=drive-virtio-disk2,cache=writeback' -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk2,id=virtio-disk2,bootindex=1 -drive file=/mnt/user/ISO/Win10_1511_English_x64.iso,format=raw,if=none,id=drive-ide0-0-0,readonly=on -device ide-cd,bus=ide.0,unit=0,drive=drive-ide0-0-0,id=ide0-0-0,bootindex=2 -drive file=/mnt/user/ISO/virtio-win-0.1.118-1.iso,format=raw,if=none,id=drive-ide0-0-1,readonly=on -device ide-cd,bus=ide.0,unit=1,drive=drive-ide0-0-1,id=ide0-0-1 -netdev tap,fd=22,id=hostnet0,vhost=on,vhostfd=23 -device virtio-net-pci,netdev=hostnet0,id=net0,mac=52:54:00:32:7a:57,bus=pci.0,addr=0x3 -chardev pty,id=charserial0 -device isa-serial,chardev=charserial0,id=serial0 -chardev 'socket,id=charchannel0,path=/var/lib/libvirt/qemu/channel/target/domain-Cottage Win 10/org.qemu.guest_agent.0,server,nowait' -device virtserialport,bus=virtio-serial0.0,nr=1,chardev=charchannel0,id=channel0,name=org.qemu.guest_agent.0 -device usb-tablet,id=input0 -vnc 0.0.0.0:0,websocket=5700 -k en-us -device qxl-vga,id=video0,ram_size=67108864,vram_size=67108864,vgamem_mb=16,bus=pci.0,addr=0x2 -device vfio-pci,host=01:00.1,id=hostdev0,bus=pci.0,addr=0x6 -device virtio-balloon-pci,id=balloon0,bus=pci.0,addr=0x8 -msg timestamp=on

Domain id=4 is tainted: high-privileges

Domain id=4 is tainted: host-cpu

char device redirected to /dev/pts/0 (label charserial0)

 

 

 

And syslog attached...

Syslog.txt


Just went from 6.2.3 to 6.2.4 using built in updater.

 

WebGUI isn't coming up, and I'm getting this notification sent to my gmail every minute:

 

/bin/sh: line 1: 22637 Segmentation fault      /usr/local/emhttp/plugins/dynamix/scripts/monitor &> /dev/null

 

All my dockers are functioning and my shares are available.  What's my next step?

 

UPDATE:

Running 'shutdown -r now' from the command line allowed me to reboot, and the WebGUI was up upon reboot.  I did save a copy of the syslog to disk1 before rebooting in case you're interested.

 

Whenever you report an issue in the release threads, you should attach either the syslog or the diagnostics file without being asked.  (I am posting this not only for you but for some other people who are neglecting to do so.)  It is simple for the poster to do and saves everyone time and effort.
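For anyone unsure how to grab the logs before a reboot wipes them, here is a hedged sketch. The 'diagnostics' collector command and its behavior are assumptions about a stock 6.2 install, and the fallback just copies the raw syslog to the flash drive at /boot so it survives the reboot.

```shell
# Hedged sketch: capture logs to the flash drive before rebooting.
# Assumptions: 'diagnostics' is unRAID's built-in collector on 6.2, and
# /boot is the flash mount; the fallback copies the raw syslog there.
if command -v diagnostics >/dev/null 2>&1; then
    diagnostics
elif [ -f /var/log/syslog ]; then
    cp /var/log/syslog "/boot/syslog-$(date +%Y%m%d-%H%M%S).txt" 2>/dev/null \
        || echo "copy failed (is /boot mounted?)"
fi
echo "log capture attempted"
```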


Quick and uneventful upgrade via the Web GUI.

 

Same here. But is it just me or does it seem like reboots are taking longer since 6.2.2? Unmounting drives seems to take forever now.

 

EDIT: Just wanted to add that I do have the Powerdown plugin active on my server during these delayed reboots.


Same here. But is it just me or does it seem like reboots are taking longer since 6.2.2? Unmounting drives seems to take forever now.

 

I haven't noticed that. Are you sure you don't have any files open?

 

I don't think so, but it's a good question. Is it best practice to shut down any running VMs manually prior to reboot?  Unless I know it's doing something (like downloading), I usually let the shutdown script take care of it.  Perhaps something was running in the background?


Same here. But is it just me or does it seem like reboots are taking longer since 6.2.2? Unmounting drives seems to take forever now.

 

I haven't noticed that. Are you sure you don't have any files open?

 

I don't think so, but it's a good question. Is it best practice to shut down any running VMs manually prior to reboot?  Unless I know it's doing something (like downloading), I usually let the shutdown script take care of it.  Perhaps something was running in the background?

I normally shut vm's down manually as I've had the shutdown process hang a few times when I forgot to do that first.


Same here. But is it just me or does it seem like reboots are taking longer since 6.2.2? Unmounting drives seems to take forever now.

 

I haven't noticed that. Are you sure you don't have any files open?

 

I don't think so, but it's a good question. Is it best practice to shut down any running VMs manually prior to reboot?  Unless I know it's doing something (like downloading), I usually let the shutdown script take care of it.  Perhaps something was running in the background?

I normally shut vm's down manually as I've had the shutdown process hang a few times when I forgot to do that first.

 

Curious how many others have had this as an issue before...


Upgraded from previous to this, no issues so far.

 

My process:

Stop dockers

Shutdown VMs

Stop array

Apply update

Reboot
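The first two of the steps above can be sketched as a script. This is a hedged illustration assuming the stock docker and libvirt (virsh) CLIs are present on the host; stopping the array, applying the update, and rebooting remain web-GUI actions and are not scripted here.

```shell
# Hedged sketch of the manual pre-update routine: stop all running
# containers, then ask each libvirt guest to shut down. 'virsh shutdown'
# sends an ACPI request, so the guest OS must cooperate.
if command -v docker >/dev/null 2>&1; then
    running=$(docker ps -q)
    [ -n "$running" ] && docker stop $running   # stop all running containers
fi
if command -v virsh >/dev/null 2>&1; then
    for vm in $(virsh list --name); do
        [ -n "$vm" ] && virsh shutdown "$vm"    # ACPI shutdown request
    done
fi
echo "containers stopped and VMs asked to shut down"
```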

 

Is all this necessary, out of curiosity?

 

It doesn't matter where in the process you "Apply update"; that just changes the files on the flash drive so that the next time you boot, the newer version will be loaded.  My understanding of the current state of the shutdown code is that you don't have to manually shut down the Dockers and VMs, but it certainly doesn't hurt to do so.

 


I normally shut vm's down manually as I've had the shutdown process hang a few times when I forgot to do that first.

 

Curious how many others have had this as an issue before...

I definitely had issues in the past stopping the array without manually shutting down VMs/Docker. (Once the array stopped, I never had issues, though.)

 

If I remember correctly, it was a thing in 6.1. Spinning up all disks and shutting down every VM and container one by one has become a habit.

I think "Stop array" did not work when I was logged into the console with the prompt somewhere in the array (/mnt/user/...).

The web GUI was not responding until the array was stopped, which never happened, so I could not restart the array or cleanly shut down the server.

I couldn't even tell what state the VMs/containers were in, so at first I thought they were causing the issue.

 

And because of the 6.2 (beta) issues with the complete array lockup ("num_stripes"), I still do it as a precaution.

So, if anything happens after I press "Stop array", at least I know there is no VM/container running or shutting down that could be the cause.

 

The system usually runs 24/7, so it's not a big deal, but since you asked... :)

Due to that habit, I can't tell if it's still an issue or not.

 

That being said, no issues in 6.2.4 that I can speak of  8)

 

I had a system freeze due to a GPU-overclock attempt and fat-fingering a '9' instead of a '0'...  :-[

The unclean shutdown was correctly recognized, a parity check started, and 9 errors were corrected at normal speed.

That did not happen during the beta array lockups and may be a result of the fixes in 6.2.3.


There have been various improvements to the shutdown processes, so it would be useful to hear if there are still shutdown issues, using this latest version, especially from those who have had issues with shutting down in the past.

What information would be useful? Mine seems to hang since I no longer have Powerdown. Still on 6.2.2 while I wait for a parity rebuild and check to finish. Are any logs/diagnostics preserved now on reboot?

