unRAID Server Release 6.2.0-beta18 Available


Recommended Posts

NOTE:  PLEASE READ THIS POST IN ITS ENTIRETY BEFORE UPGRADING!  There are special instructions for taking advantage of new features as well as one-time procedures you may have to perform with virtual machines / docker containers.

 

While every effort has been made to ensure no data loss, THIS IS BETA SOFTWARE.... use at your own risk...

 

To upgrade your server to this release, navigate to Plugins/Install Plugin, copy this text into the box and click Install:

https://raw.githubusercontent.com/limetech/unRAIDServer-6.2/master/unRAIDServer.plg

 

Alternatively, you may download the release and perform a fresh install.

 

Important: Your server will require internet access upon boot in order to validate with the LimeTech key server.

 

For -beta and -rc releases of all key types, the server must validate at boot time.  The reason is that this lets us "invalidate" a beta release.  That is, if a beta gets out there with a major snafu, we can prevent it from being run by new users who stumble upon the release zip file, for example if there is a bug in P+Q handling.  Remember the reiserfs snafu last year?  We want to minimize that.

 

For stable releases, Basic/Plus/Pro keys do not validate at boot time; that is, it works the same as it always has.

 

Starting with 6.2, Trial keys will require validation with the key server.  This is in preparation for making the Trial experience easier.

 

Why is this -beta18?  Where are -beta1 through -beta17?  Those were private releases; this is the first public release.

 

Major Highlights

  • licensing:  increased device limits for Trial and Pro
  • array:  dual parity support
  • array:  turbo write support
  • boot:  GUI boot mode
  • docker:  automatic volume mapping preference for /config
  • docker:  simplified views for basic view
  • docker:  do not force containers to start/run when updated
  • shfs:  automatic system shares for docker and virtual machines
  • virtualization:  simplified vdisk storage management
  • virtualization:  improved OVMF support
  • virtualization:  hyper-v enlightenment support for NVIDIA GTX GPUs
  • virtualization:  assign stubbed PCI devices to virtual machines
  • virtualization:  misc. GPU pass through improvements
  • virtualization:  support for USB 3 controller emulation
  • virtualization:  OpenELEC 6.0.0 virtual machine template
  • virtualization:  improved VNC quality/performance with QXL support
  • virtualization:  download VirtIO drivers from within the unRAID webGui
  • virtualization:  nested virtualization support
  • array/cache:  general storage device performance improvements

 

For a complete breakdown of these new features, see our blog post on the subject.

 

Guides for New Features

 

Dual Parity

 

Assigning a second parity disk

 

  • Stop the array
  • Navigate to the Main tab
  • Assign a new storage device to the Parity2 disk slot (must be equal to or larger than your largest data disk)
  • Start the array

 

A parity sync will begin automatically after starting the array with a new storage device assigned to the Parity2 slot.

 

Replacing a failed disk

 

NOTE:  swap disable functionality has not yet been added for dual parity, which means you cannot reassign a parity disk to the array while adding a new larger parity disk at the same time.

 

The procedure to replace a failed disk is therefore the same as it is with single parity.

 

Upgrading a parity disk (to a larger size)

  • Stop the array
  • Assign the new larger disk to the parity slot you wish to upgrade
  • Start the array

 

Downgrading to 6.1.x from 6.2

 

If you decide to roll-back to a previous 6.1.x release of unRAID after setting up dual-parity on 6.2, you will no longer have dual-parity on the 6.1.x release.  If you then later upgrade again to 6.2, you will once again need to assign your secondary parity disk and start the array to begin another parity sync.

 

GUI boot mode

 

To access the GUI boot mode, attach a monitor, mouse, and keyboard to your system and boot it up with 6.2.  When the boot menu loads, you can select the option to boot into the GUI (no, this is not the default boot mode).

 

To make this boot mode the default, perform the following steps:

 

  • Navigate to the Main tab
  • Click on the flash device to go to its settings page
  • Under Syslinux Configuration, move the line "menu default" so that it sits under the GUI boot mode option (see the example below)
  • Click apply and future reboots will automatically launch the GUI boot mode
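
For reference, here is a minimal sketch of what the relevant portion of the Syslinux Configuration might look like after the change (the exact label names and append lines on your flash device may differ slightly):

label unRAID OS
  kernel /bzimage
  append initrd=/bzroot

label unRAID OS GUI Mode
  menu default
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui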

 

Note: GUI boot mode requires about 200MB additional RAM at run-time.

 

Automatic System Shares

 

When starting the array with at least one cache device, a share called "system" will be automatically created.  Inside, two subfolders will be created (docker and libvirt respectively).  Each of these folders will contain a loopback image file for each service. 
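
Assuming the default image names (the image locations are configurable under Settings), the resulting layout looks roughly like this:

/mnt/user/system/docker/docker.img    <- loopback image for the Docker service
/mnt/user/system/libvirt/libvirt.img  <- loopback image for the libvirt service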

 

Upon adding your first application via Docker, the appdata share will be created (set to cache=only).

 

Upon downloading your first VirtIO driver ISO through the webGui (under Settings -> VM Manager), the isos share will be created (set to cache=no).  If you do not need the VirtIO drivers (meaning you are only using Linux-based guest VMs), you will need to manually add the isos share.

 

Upon adding your first VM with a virtual disk, the domains share will be created automatically (set to cache=only).

 

NOTE:  If you do NOT have a cache device, you will need to add those three shares manually if you wish to utilize apps or VMs on unRAID.

 

Default Docker /config Volume Mapping

 

You can adjust this setting under the Settings -> Docker page.  Whatever path is specified here will automatically have sub-folders created underneath it whenever new containers that specify a /config volume mapping are added to the system.
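
For example, assuming the default path of /mnt/user/appdata/ and a hypothetical container named "plex" that declares a /config volume, the container would automatically receive a mapping along the lines of:

/config  ->  /mnt/user/appdata/plex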

 

Docker 1.10 Container Update Process

 

The reason we had not upgraded Docker in unRAID 6.1 was a significant change to the Docker Hub API.  Legacy versions of Docker can still talk to the legacy API, but newer versions must talk through the new API.  The new API broke a number of functions in the Docker Manager that ships with unRAID 6.1, but since the legacy API still functions against that release, it has continued to serve its purpose for the community.  In unRAID 6.2, we are using the latest release of Docker (1.10.2) and we've resolved all of the API-related issues so that Docker Manager works correctly.  However, there is a one-time update procedure that each container will need to go through in order to point it at the new API going forward, even if the container itself isn't actually in need of an update.

 

From the Docker 1.10 release notes:

https://github.com/docker/docker/releases/tag/v1.10.0

 

IMPORTANT: Docker 1.10 uses a new content-addressable storage for images and layers.

A migration is performed the first time docker is run, and can take a significant amount of time depending on the number of images present.

 

Refer to this page on the wiki for more information: https://github.com/docker/docker/wiki/Engine-v1.10.0-content-addressability-migration

 

Virtual Machines

 

NOTE:  before upgrading to 6.2, please be sure to back up any VMs you have AND disable them from auto-starting.  This will give you the opportunity to perform the post-upgrade procedures before starting them.

 

NOTE:  In addition, any edits you make in 6.2 to your VMs will not be present if you roll back to 6.1.x at a later point in time.  When you roll back, your VM configurations (XML) will be in the state they were in prior to the upgrade.  This also means that new VMs you create in 6.2 will NOT show up under VM manager if you roll back to 6.1.x.  This doesn't affect virtual disk images, only the VM configurations themselves.

 

Post-Upgrade Procedures

 

A number of "under the hood" changes have occurred for virtualization in this release.  To ensure that your VMs take advantage of all these changes and continue to function properly, the following one-time actions should be performed before starting your VMs for the first time.

 

NOTE:  Any custom XML edits you have made will be lost after performing this procedure.

 

  • For each VM, go to the VMs tab, click the VM's icon, and select the Edit option
  • Turn on "Advanced View" in the top right of the Edit VM page
  • If you are using VNC for the primary graphics card, adjust the VNC Video Driver field to QXL
  • Click Apply

 

Even if your VM isn't using VNC (for example, if it uses GPU pass through), you should still perform the above procedure.

 

Upgrading OVMF VMs from 6.1.x to 6.2

After upgrading to 6.2 and performing the procedure above against an OVMF VM that was created under a 6.1.x version of unRAID, the first time you boot that VM in 6.2 you will likely be taken to a UEFI shell prompt instead of booting your OS.  To resolve this, simply type the following commands:

 

fs0:
cd efi
cd boot
bootx64.efi

 

Adjust the first command to fs1: if fs0: doesn't work.  This only needs to be performed once per OVMF VM.
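
If neither fs0: nor fs1: turns out to be the right filesystem, you can rescan and list all available mappings from the UEFI shell with the standard map command, then retry the sequence above with whichever fsN: it reports:

map -r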

 

Stubbing and assigning other PCI devices

 

To stub PCI devices so they can be assigned to VMs through the webGui directly, you'll need to follow these steps:

 

  • Log in to your server using the unRAID webGui
  • Navigate to the Tools -> System Devices page
  • Locate the PCI device you wish to stub and copy the vendor and product IDs shown in brackets near the end of its row.  Example (the vendor/product ID is the bracketed pair at the end of the line):

 

01:00.0 Ethernet controller [0200]: Intel Corporation Ethernet Controller 10-Gigabit X540-AT2 [8086:1528] (rev 01)

 

  • Navigate to the Main tab and click on your Flash device
  • Under the Syslinux Configuration section, locate the line that says "menu default"
  • Beneath that line, you will see the following:

 

append initrd=/bzroot

 


  • Change the line, adding the vfio-pci.ids parameter as shown in the example below:

 

append vfio-pci.ids=8086:1528 initrd=/bzroot

 


  • Click Apply and then reboot your system.

 

When adding / editing VMs from this point forward, the stubbed PCI device(s) will show up at the bottom with checkboxes that you can click to assign them.
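
If you want to double-check that the stub took effect after rebooting, one way (using the example X540-AT2 device above) is to run the following from a terminal session; assuming the vfio-pci.ids entry was picked up, the output should list vfio-pci as the kernel driver in use:

lspci -nnk -d 8086:1528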

 

Nested virtualization support

 

Now you can install VMWare on unRAID as a VM itself!  Yup!  It really works!
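
If you want to verify that nested virtualization is actually exposed on your host, a quick check from a terminal session is to read the KVM module parameter (kvm_intel on Intel CPUs, kvm_amd on AMD); a value of Y or 1 means nesting is enabled:

cat /sys/module/kvm_intel/parameters/nested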

 

OpenELEC 6.0.0 Support

 

After starting up your 6.2 release for the first time, edit your existing OpenELEC VM(s) and you will find a new version available to download in the webGui.  Download it to your system and start up your VM(s) to begin using this upgrade!

 

NOTE:  We fully plan on upgrading to 6.0.3 before 6.2 final.

 

Turbo Write

To enable this setting, visit the Settings -> Disk Settings page and set Tunable (md_write_method) to "reconstruct write".
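
As an aside, some users prefer to toggle this from a terminal session rather than the webGui.  Assuming the mdcmd utility in this release accepts the same values as the webGui setting, the equivalent commands would be along these lines:

mdcmd set md_write_method 1   # reconstruct write ("turbo write")
mdcmd set md_write_method 0   # back to the default read/modify/write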

 

 

unRAID Server OS Change Log
===========================

THIS IS BETA SOFTWARE
--------------------

While every effort has been made to ensure no data loss, **use at your own risk!**

Version 6.2-beta18 2016-03-11
-----------------------------

Changes vs. unRAID Server OS 6.1.9.

Base distro:

- switch to 'slackware64-current' base packages
- avahi: version 0.6.32
- beep: version 1.3
- docker: version 1.10.2
- eudev: version 3.1.5a: support NVMe
- fuse: version 2.9.5
- irqbalance: version 1.1.0
- jemalloc: version 4.0.4
- libestr: version 0.1.10
- liblogging: version 1.0.5
- libusb: version 1.0.20
- libvirt: version 1.3.1
- lshw: version B.02.17 svn2588
- lz4: version r133
- mozilla-firefox: version 44.0.2 (console GUI mode)
- netatalk: version 3.1.8
- numactl: version 2.0.11
- php: version 5.6.19
- qemu: version 2.5.0
- rsyslog: version 8.16.0
- samba:
  - version: 4.3.5
  - enable asynchronous I/O in /etc/samba/smb.conf
  - remove 'max protocol = SMB3' from /etc/samba/smb.conf (automatic negotiation chooses the appropriate protocol)
- spice: version 0.12.6
- xorg-server: version 1.18.1
- yajl: version 2.1.0

Linux kernel:

- version 4.4.4
- default iommu to passthrough (iommu=pt)
- kvm: enabled nested virtualization
- unraid: array PQ support (dual-parity)

Management (emhttp):

- Trial key now supports 6 devices, validates with limetech keyserver
- Pro key supports max 30 array devices, unlimited attached devices
- add 10Gb ethernet tuning in /etc/sysctl.conf
- add tunable: md_write_method (so-called "turbo write")
- array PQ support (dual-parity)
- do not auto-start parity operation when Starting array in Maintenance mode
- libvirt image file handling
- stop md/unraid driver cleanly upon system poweroff/reset
- support NVMe storage devices assignable to array and cache/pool
- support USB storage devices assignable to array and cache/pool
- system shares handling
- misc other improvements and bug fixes

webGui:

- all fixes and enhancements from 6.1.9
- added hardware profile page
- added service status labels to docker and vm manager settings pages
- docker: revamped docker container edit page (thanks gfjardim!)
- docker: now using docker v2 index/repos
- docker: updating a stopped container will keep it stopped upon completion
- dynamix-6.2: version 2016-03-11
- reverse the negative logic in docker and libvirt image fsck confirmation
- support user specified network MTU value
- vm manager: usb3 controller support, improved usb device sorting and display
- vm manager: integrated virtio driver iso downloader
- vm manager: support nvidia with hyper-v for windows guests
- vm manager: added auto option for vdisk location
- misc other improvements and bug fixes

Link to comment

A note to anyone posting issues with 6.2.

 

If you do not include your diagnostics, your post will probably go unanswered and your issue unresolved.  Describing an issue you are having in a reply on this thread is not sufficient to report a bug.  We need a copy of your system diagnostics after the bug occurred.

 

The diagnostics zip file contains critical information we need to help debug any issues discovered.

 

Known Issues

 

NFS Not Working

 

Not much to say just yet; we are investigating this issue.

 

VM vdisk assignments lost in edit mode

 

Some users have reported that when editing their existing VMs using the webGui editor, the virtual disks that were previously assigned may not appear correctly.  The solution is to simply add them back using the "manual" drop-down option and pick the same vdisks using the file picker.

 

Q35 Guest VMs Need OS Install ISO set to SATA

 

If you are attempting to install a Linux OS in a VM and you have the Q35 machine type selected, then you need to manually toggle the ISO bus type from IDE to SATA if you wish for your installation media to work properly.  This is a bug that will be addressed in a future release.

 

Red X Appears Next to Parity 2

 

By default, if you do not have a second parity disk assigned, you will see a red X next to this slot when starting the array and you will receive notifications about the parity2 disk being disabled.  This is a known issue and will be addressed in a future release.

 

Q Sync Errors with Valid Parity Selected

 

Some users have reported that with single- or dual-parity configurations, if "valid parity" is selected before starting the array, sync errors will be reported in the log.  We are currently investigating this issue.

 

GPU Pass Through CPU Overhead

 

Some users are reporting an overly high amount of CPU usage when running Windows guest VMs with GPU pass through and opening applications that engage 3D graphics, video, or audio.  The usage doesn't appear to actually affect the performance of the guest, but the utilization reported on the host clearly doesn't match what the guest itself reports.

 

We are actively investigating this issue, but know that it shouldn't affect the functionality or usability of your system, only the CPU utilization reported on the host.

Link to comment

Important: Your server will require internet access upon boot in order to validate with the LimeTech key server.

 

Is this a one-time validation OR is this new version crippled unless it phones home at EVERY reboot???????

HOPEFULLY this only will apply to the beta and evaluation versions. If I can't start my server because my internet happens to be down... >:( >:(
Link to comment

so with turbo write, do we even really need to use a cache disk for anything other than cache-only shares for docker/vms?

 

If you don't mind all your disks spinning when writes are occurring, some users can forego using cache-enabled shares because turbo write can oftentimes saturate the performance of a network when using fast-enough devices.  However, the slowest-performing disk in your array becomes your performance bottleneck with this feature enabled.  It is also possible that when the array is wide enough (# of disks in the array), the performance of this feature may be even LESS than with it turned off.

 

In short, Turbo Write is a great feature when you have to bulk-copy large amounts of data to the array directly.  Enable it, do your big bulk copies, and then turn it back off.  That's its current recommended use.  We have plans to incorporate Turbo Write in other ways in the future, but it'd be premature to discuss details just yet.

Link to comment

Pretty excited about this!

 

NOTE:  If you do NOT have a cache device, you will need to add those three shares manually if you wish to utilize apps or VMs on unRAID.

 

This confuses me a bit. I don't use a cache disk, and run docker from a mounted SSD. What do I need to do to add these shares? Are they SMB Shares? User Shares?

Link to comment

Holy Cr@p!!! Pro supports 54 total drives??? That's more than a storinator pod!

 

Pro users will now benefit from no attached device limits whatsoever, and additionally will be able to assign up to a total of 30 devices in the array (2 parity + 28 data) in addition to another 24 devices in the cache pool.
Link to comment


 

ah thanks for the clarification

Link to comment


 

I could see this being advantageous when the mover script is running possibly..

 

Link to comment


 

What is the benefit and/or usage scenario for such a large cache pool?  Video editing?

Link to comment