unRAID Server Version 6.2.3 Available


limetech

Recommended Posts

Download

 

Clicking 'Check for Updates' on the Plugins page is the preferred way to upgrade.

 

Upgrade Notes

 

If you are upgrading from unRAID Server 6.1.x or earlier, please refer to the Version 6.2.0 release post for important upgrade information.

 

Release Notes

 

The primary change in this release fixes an issue where an "unclean shutdown" was not detected properly: if the server unexpectedly resets or loses power, a parity-check operation should be initiated upon reboot when the array is Started.  In previous 6.2.x releases this was not happening.
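For readers curious about the mechanics: the general pattern behind this kind of detection is a persistent "dirty" marker that is set while the array is running and cleared only on a clean Stop, so its presence at the next Start implies an unclean shutdown. Below is a minimal shell sketch of that pattern, purely illustrative; the marker path and function names are hypothetical and this is not unRAID's actual code.

#!/bin/bash
# Illustrative sketch only -- NOT unRAID's actual implementation.
MARKER=/boot/config/unclean-shutdown.marker   # hypothetical marker location

array_start() {
    if [ -f "$MARKER" ]; then
        echo "Unclean shutdown detected: scheduling a parity check"
        # unRAID would queue a parity-check operation at this point
    fi
    touch "$MARKER"   # array is now 'dirty' until it is cleanly stopped
}

array_clean_stop() {
    rm -f "$MARKER"   # removed only when the array stops cleanly
}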

 

Everyone is strongly encouraged to update to this release.

 

Known Issues

  • NFS connectivity issues with some media players.  This is solved in 6.3.0-rc4 and later.
  • Reported performance issues with some low/mid-range hardware.  No movement on this yet.
  • Spindown of SAS and other pure-SCSI devices assigned to the parity-protected array does not work (SATA devices attached to a SAS controller do work).  No movement on this yet; see the sketch below for the distinction.
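For background on that last item: SATA drives accept ATA standby commands (which is what spindown ultimately relies on, even when the drive hangs off a SAS HBA), while pure-SCSI/SAS drives need a SCSI START STOP UNIT command instead. A rough illustration from the command line, with example device paths; this is not something unRAID 6.2.x issues automatically for SAS drives.

# SATA disk (works even behind a SAS controller): ATA standby-immediate
hdparm -y /dev/sdb

# Pure SCSI/SAS disk: ATA commands don't apply; a SCSI STOP is needed,
# e.g. via sg3_utils
sg_start --stop /dev/sdc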

 

unRAID Server OS Change Log
===========================

Version 6.2.3 2016-11-01
------------------------

Base distro:

- php: version 5.6.27

Linux kernel:

- version 4.4.30
- md/unraid version: 2.6.8
  - make the 'check' command "correct"/"nocorrect" argument case insensitive
  - mark superblock 'clean' upon initialization
    
Management:

- emhttp: ensure disk shares have proper permissions set even if not being exported
- emhttp: fix detection of unclean shutdown to trigger automatic parity check upon Start if necessary
- emhttp: unmount docker/libvirt loopback if docker/libvirt fail to start properly
- update: hwdata/{pci.ids,usb.ids,oui.txt,manuf.txt} smartmontools/drivedb.h
- webGui: correct handling of unclean shutdown detection
- webGui: fixed some help text typos
- webGui: Fixed: Web UI Docker Stop/Restart does not cleanly stop container

Version 6.2.2 2016-10-21
------------------------

Linux kernel:

- version 4.4.26 (CVE-2016-5195)
- reiserfs: incorporated patch from SUSE to fix broken extended attributes bug

Management:

- shutdown: save diagnostics and additional logging in event of cmdStop timeout
- webGui: Fixed: Windows unable to extract diagnostics zip file

Version 6.2.1 2016-10-04
------------------------

Base distro:

- curl: version 7.50.3 (CVE-2016-7167)
- gnutls: version 3.4.15 (CVE-2015-6251)
- netatalk: version 3.1.10
- openssl: version 1.0.2j (CVE-2016-7052, CVE-2016-6304, CVE-2016-6305, CVE-2016-2183, CVE-2016-6303, CVE-2016-6302, CVE-2016-2182, CVE-2016-2180, CVE-2016-2177, CVE-2016-2178, CVE-2016-2179, CVE-2016-2181, CVE-2016-6306, CVE-2016-6307, CVE-2016-6308)
- php: version 5.6.26 (CVE-2016-7124, CVE-2016-7125, CVE-2016-7126, CVE-2016-7127, CVE-2016-7128, CVE-2016-7129, CVE-2016-7130, CVE-2016-7131, CVE-2016-7132, CVE-2016-7133, CVE-2016-7134, CVE-2016-7416, CVE-2016-7412, CVE-2016-7414, CVE-2016-7417, CVE-2016-7411, CVE-2016-7413, CVE-2016-7418)

Linux kernel:

- version 4.4.23 (CVE-2016-0758, CVE-2016-6480)
- added config options:
  - CONFIG_SENSORS_I5500: Intel 5500/5520/X58 temperature sensor

Management:

- bug fix: For file system type "auto", only attempt btrfs,xfs,reiserfs mounts.
- bug fix: For docker.img and libvirt.img, if path on /mnt/ check for mountpoint on any subdir component, not just second subdir.
- bug fix: During shutdown force continue if array stop taking too long.
- bug fix: Handle case in 'mover' where rsync may move a file but also return error status.
- md/unraid: Fix bug where case of no data disks improperly detected.
- webGui: Add "Shutdown time-out" control on Disk Settings page.

Version 6.2 2016-09-15
----------------------

- stable release version

Link to comment

Used the plugin to upgrade from 6.2.2 to 6.2.3; it says I need to reboot now. Went to stop the server via the GUI and all I see is an endless:

 

Spinning up all drives...Stopping services...Sync filesystems...Unmounting disks...Retry unmounting disk share(s)...Unmounting disks...Retry unmounting disk share(s)...Unmounting disks...Retry unmounting disk share(s)... (and so on, repeating endlessly)

 

The system log shows this repeating, but each time it's a different disk:

Nov  3 07:39:25 husky emhttp: shcmd (129543): umount /mnt/cache |& logger
Nov  3 07:39:25 husky root: umount: /mnt/cache: target is busy
Nov  3 07:39:25 husky root:         (In some cases useful info about processes that
Nov  3 07:39:25 husky root:          use the device is found by lsof(8) or fuser(1).)
Nov  3 07:39:25 husky emhttp: Retry unmounting disk share(s)...
Nov  3 07:39:30 husky emhttp: Unmounting disks...
Nov  3 07:39:30 husky emhttp: shcmd (129544): umount /mnt/disk1 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk1: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129545): rmdir /mnt/disk1 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk1': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129546): umount /mnt/disk2 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk2: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129547): rmdir /mnt/disk2 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk2': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129548): umount /mnt/disk3 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk3: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129549): rmdir /mnt/disk3 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk3': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129550): umount /mnt/disk4 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk4: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129551): rmdir /mnt/disk4 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk4': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129552): umount /mnt/disk5 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk5: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129553): rmdir /mnt/disk5 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk5': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129554): umount /mnt/disk8 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk8: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129555): rmdir /mnt/disk8 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk8': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129556): umount /mnt/disk9 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk9: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129557): rmdir /mnt/disk9 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk9': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129558): umount /mnt/disk13 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk13: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129559): rmdir /mnt/disk13 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk13': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129560): umount /mnt/disk14 |& logger
Nov  3 07:39:31 husky root: umount: /mnt/disk14: mountpoint not found
Nov  3 07:39:31 husky emhttp: shcmd (129561): rmdir /mnt/disk14 |& logger
Nov  3 07:39:31 husky root: rmdir: failed to remove '/mnt/disk14': No such file or directory
Nov  3 07:39:31 husky emhttp: shcmd (129562): umount /mnt/cache |& logger
Nov  3 07:39:31 husky root: umount: /mnt/cache: target is busy
Nov  3 07:39:31 husky root:         (In some cases useful info about processes that
Nov  3 07:39:31 husky root:          use the device is found by lsof(8) or fuser(1).)
Nov  3 07:39:31 husky emhttp: Retry unmounting disk share(s)...

 

 

I just noticed that I had an SSH session open to the box; I closed out of that... looks like that was what was blocking it. Rebooting the box now.

 

Alright, back up... so far so good on 6.2.3.

Link to comment

All of that was normal; that's what happens when something is still open on an array drive.  Perhaps a message could pop up after the third or fourth retry: "A file or session is still open on the following: list_of_still_mounted_drives".

I couldn't tell you how many times I have struggled to figure out in which VM I left a folder or file open, or which SSH session I accidentally left sitting in a directory on one of the mounts.  This would be extremely helpful to implement.
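Until something like that exists in the webGui, the usual way to hunt down the culprit by hand is fuser or lsof from a console session; the share path below is just an example.

# Show processes (user, PID, access type) holding anything open on the mount
fuser -vm /mnt/cache

# lsof equivalent; also catches a shell whose working directory is on the mount
lsof +f -- /mnt/cache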

Link to comment

All of that was normal; that's what happens when something is still open on an array drive.  Perhaps a message could pop up after the third or fourth retry: "A file or session is still open on the following: list_of_still_mounted_drives".

I couldn't tell you how many times I have struggled to figure out in which VM I left a folder or file open, or which SSH session I accidentally left sitting in a directory on one of the mounts.  This would be extremely helpful to implement.

 

+1! Anything to make the shutdown process less stressful. It always feels like a roll of the dice when I need to reboot...

Link to comment

All of that was normal; that's what happens when something is still open on an array drive.  Perhaps a message could pop up after the third or fourth retry: "A file or session is still open on the following: list_of_still_mounted_drives".

I couldn't tell you how many times I have struggled to figure out in which VM I left a folder or file open, or which SSH session I accidentally left sitting in a directory on one of the mounts.  This would be extremely helpful to implement.

+1! Anything to make the shutdown process less stressful. It always feels like a roll of the dice when I need to reboot...

You can type the 'reboot' command and it will reboot without being stuck in that loop.  Of course, if the loop is being caused by unsaved data in open files, that data will probably be lost.
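A slightly gentler option, if you can get to a console, is to kill whatever is still using the share and then let the normal unmount retry finish on its own; note this can lose unsaved data in those processes just the same (share path is an example).

# Forcibly kill every process still using the mount, after which the GUI's
# "Retry unmounting disk share(s)" pass should complete on its own
fuser -km /mnt/cache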

Link to comment

Not sure how to report this other than to just describe it. Got an email that a new version of Plex was available. Logged into unRAID and went to the Docker tab. Tried to restart the docker to get the update. No reaction; Chrome just sat spinning, waiting for the server to respond. I looked over and the hard drive lights were lighting up in sequence like the array was spinning up ALL drives. Finally all drives were awake and I regained control of the Docker restart command. Updated Plex and was able to log in there and verify it has the new version.

My dockers are on a 2-SSD cache pool. I've never seen it require the array to be awake for dockers to be restarted. Diags attached.

tower-diagnostics-20161103-1257.zip

Link to comment

I just upgraded from 6.2.2 to 6.2.3. I shut down all my VMs first, plus Plex, rebooted the server and let it boot from the console rather than the GUI, as I don't really use the GUI that often. It gets to the bzroot..... line and then continually reboots in an endless loop; the server never comes back online. However, if I select boot in GUI mode, it boots up just fine. Any ideas? Diagnostics attached.

core-diagnostics-20161103-2222.zip

Link to comment

All of that was normal; that's what happens when something is still open on an array drive.  Perhaps a message could pop up after the third or fourth retry: "A file or session is still open on the following: list_of_still_mounted_drives".

I couldn't tell you how many times I have struggled to figure out in which VM I left a folder or file open, or which SSH session I accidentally left sitting in a directory on one of the mounts.  This would be extremely helpful to implement.

+1! Anything to make the shutdown process less stressful. It always feels like a roll of the dice when I need to reboot...

You can type the 'reboot' command and it will reboot without being stuck in that loop.  Of course, if the loop is being caused by unsaved data in open files, that data will probably be lost.

Thanks! I'll try that next time it happens!

Link to comment

I had a small hiccup with my 6.2.3 upgrade from 6.2.2.  After installing the upgrade via the plugin I rebooted the server.  I have my unRAID Tower set up to automatically start the array, then several dockers, and then two VMs.  At first everything looked fine: unRAID booted and both VMs started. Just after the VM was up I opened Chrome and started to browse, and the VM went blank.  I browsed to the unRAID webGui and the array was stopped.  I restarted the array and the same thing happened.  I tried one more time, and on this array start I immediately went to one of the VMs and stopped it.  The other VM spun up and has been running for a day.  I then spun the second VM up again and it has been running alongside the first one for a while now.

 

Here is the entry in the VM log that concerns me:

 

2016-11-03T18:10:58.258022Z qemu-system-x86_64: terminating on signal 15 from pid 14432

 

For each of the VM crashes there is one of these entries; each VM failed with the same signal 15, but from three different PIDs.
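For reference, signal 15 is SIGTERM, i.e. something deliberately asked qemu to shut down rather than qemu crashing on its own; stopping the array also tears down libvirt and its guests, which would line up with the array going down at the same time. If the sending process is still alive you can check what it is (the PID below is just the one from the log above):

kill -l 15                      # translates the signal number: prints TERM
ps -p 14432 -o pid,comm,args    # shows what that PID is, if it still exists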

 

I have attached diagnostics.

 

Mods - feel free to move if you think this belongs in the KVM forum.

 

Any thoughts?

 

Dan

 

tower-diagnostics-20161103-1421.zip

Link to comment

Not sure how to report this other than to just describe it. Got an email that a new version of Plex was available. Logged into unRAID and went to the Docker tab. Tried to restart the docker to get the update. No reaction; Chrome just sat spinning, waiting for the server to respond. I looked over and the hard drive lights were lighting up in sequence like the array was spinning up ALL drives. Finally all drives were awake and I regained control of the Docker restart command. Updated Plex and was able to log in there and verify it has the new version.

My dockers are on a 2-SSD cache pool. I've never seen it require the array to be awake for dockers to be restarted. Diags attached.

It's happening again right now with the latest Plex update. This new behaviour started when I added a 2nd parity drive. How about some help here?

Link to comment

I just upgraded from 6.2.2 to 6.2.3. I shut down all my VMs first, plus Plex, rebooted the server and let it boot from the console rather than the GUI, as I don't really use the GUI that often. It gets to the bzroot..... line and then continually reboots in an endless loop; the server never comes back online. However, if I select boot in GUI mode, it boots up just fine. Any ideas? Diagnostics attached.

 

Also displaying some errors in the log as follows.

 

Nov  3 22:12:36 Core kernel: ACPI: Early table checksum verification disabled
Nov  3 22:12:36 Core kernel: spurious 8259A interrupt: IRQ7.
Nov  3 22:12:36 Core kernel: ACPI Error: [\_SB_.PCI0.XHC_.RHUB.HS11] Namespace lookup failure, AE_NOT_FOUND (20150930/dswload-210)
Nov  3 22:12:36 Core kernel: ACPI Exception: AE_NOT_FOUND, During name lookup/catalog (20150930/psobject-227)
Nov  3 22:12:36 Core kernel: ACPI Exception: AE_NOT_FOUND, (SSDT:xh_rvp08) while loading table (20150930/tbxfload-193)
Nov  3 22:12:36 Core kernel: ACPI Error: 1 table load failures, 8 successful (20150930/tbxfload-214)
Nov  3 22:12:36 Core kernel: acpi PNP0A08:00: _OSC failed (AE_ERROR); disabling ASPM
Nov  3 22:12:36 Core kernel: usb 1-7: device descriptor read/64, error -71
Nov  3 22:12:36 Core kernel: usb 1-7: device descriptor read/64, error -71
Nov  3 22:12:36 Core kernel: usb 1-7: device descriptor read/64, error -71
Nov  3 22:12:36 Core kernel: usb 1-7: device descriptor read/64, error -71
Nov  3 22:12:36 Core kernel: usb 1-7: device not accepting address 4, error -71
Nov  3 22:12:36 Core kernel: usb 1-7: device not accepting address 5, error -71
Nov  3 22:12:36 Core kernel: floppy0: no floppy controllers found
Nov  3 22:12:36 Core rpc.statd[1651]: Failed to read /var/lib/nfs/state: Success
Nov  3 22:12:51 Core avahi-daemon[8718]: WARNING: No NSS support for mDNS detected, consider installing nss-mdns!

Link to comment

Not sure how to report this other than to just describe it. Got an email that a new version of Plex was available. Logged into unRAID and went to the Docker tab. Tried to restart the docker to get the update. No reaction; Chrome just sat spinning, waiting for the server to respond. I looked over and the hard drive lights were lighting up in sequence like the array was spinning up ALL drives. Finally all drives were awake and I regained control of the Docker restart command. Updated Plex and was able to log in there and verify it has the new version.

My dockers are on a 2-SSD cache pool. I've never seen it require the array to be awake for dockers to be restarted. Diags attached.

It's happening again right now with the latest Plex update. This new behaviour started when I added a 2nd parity drive. How about some help here?

If you edit the Plex Docker container and toggle on the Advanced View, are there any folder mappings to just '/mnt/user'?
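If it is easier than digging through the template, the same information can be pulled from the command line; a quick sketch, assuming the container is named 'plex' (substitute your own container name):

# List host -> container volume mappings for the container
docker inspect --format '{{range .Mounts}}{{.Source}} -> {{.Destination}}{{println}}{{end}}' plex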

Link to comment
