
unRAID Server Release 6.2 Stable Release Available


My cache drive did not get assigned after update.  It is shown on the main page as a new device.

 

I've attached my diagnostics.

 

EDIT: this is even weirder. I just noticed that I didn't actually get updated. I hit update; it downloaded, extracted, and said to reboot. I stopped the array, powered down, powered up, and I'm still at 6.1.9 with no cache drive assigned. I can stop the array and assign the cache drive, and then Docker starts.

 

I noticed that my cache is set for 2 slots, but only one is assigned, the other is unassigned.  Maybe that is the issue?

 

I guess I'll re-apply the update and try again.

 

EDIT2: After my second upgrade attempt it worked. Here's what I did differently: on the first attempt, after the update said to reboot, I remembered that 6.2 doesn't need the powerdown plugin, so I removed it, then powered down and powered up. The second time I didn't remove that plugin, because it was already gone. Can removing a plugin after updating, but before rebooting, revert the update?

 

For the cache drive, I set it to 1 slot, and upon upgrading it came up without issue.

 

Docker ran for a few minutes before the webpage came up, but things seem to be working now.

david

tower-diagnostics-20160916-0842.zip


Got an email this morning that the powerdown plugin has an update. Checked the 'what's new' info and it says it will no longer install on 6.2 as it's not necessary... are there details on this anywhere?

 

Noticed the same thing.


Got an email this morning that the powerdown plugin has an update. Checked the 'what's new' info and it says it will no longer install on 6.2 as it's not necessary... are there details on this anywhere?

 

Noticed the same thing.

 

Thanks to Squid for providing a link: in 6.2 the powerdown functions built into unRAID are much more robust, and the powerdown plugin really just duplicates that work (from what I gather).


Running into an issue with an Ubuntu VM after the upgrade from 6.1.9. The VM comes up, but is unable to get a network connection: I can't ping it or SSH into it, and when I connect through the built-in VNC, ifconfig shows the IP listed as 127.0.0.1.

 

Diagnostics attached

unraid-server1-diagnostics-20160916-0854.zip


I have one 6.1.9 server where I attempted to set the share permissions to private, giving read/write access to one user (not root). This worked for a bit, but suddenly I could no longer access the server. After fighting with it for hours, even setting every share back to public did not help. Finally it just started working again, even though it had been set back to public hours earlier...

 

Today I updated this server to 6.2, and I can no longer access the SMB shares from Windows 10 on any client. It asks for a login and tries to use the user I had previously set up for read/write access. Logging in to unRAID with that user doesn't help.

 

Likely a Windows issue more than an unRAID one, but let me ask anyway...


Today I updated this server to 6.2, and I can no longer access the SMB shares from Windows 10 on any client. It asks for a login and tries to use the user I had previously set up for read/write access. Logging in to unRAID with that user doesn't help.

Likely a Windows issue more than an unRAID one, but let me ask anyway...

You probably want to run Windows Credential Manager and remove any cached credentials for the unRAID server. It's a fairly typical Windows issue: it tries to use cached credentials that are no longer correct.

NOTE: before upgrading to 6.2, please be sure to back up any VMs you have AND disable them from auto-starting. This will give you the opportunity to perform the post-upgrade procedures before starting them.

 

So... I want to update, but I can't find instructions anywhere on exactly what the procedure is for backing up a VM. Can someone in the know please provide a link? Or the instructions?

 

Many thanks in advance!


NOTE: before upgrading to 6.2, please be sure to back up any VMs you have AND disable them from auto-starting. This will give you the opportunity to perform the post-upgrade procedures before starting them.

So... I want to update, but I can't find instructions anywhere on exactly what the procedure is for backing up a VM. Can someone in the know please provide a link? Or the instructions?

I don't have an answer for backing up your VMs, but I wanted to comment that it would be nice if there were some sort of upgrade advice page collecting the various tips, issues, and recommendations for the 6.2 upgrade. I don't feel qualified to write it; I have no direct experience with VMs and little with Docker.

 

A suggestion for Tom for the future: when it's a relatively major upgrade like this, could you reserve a post immediately following the announcement post, just for moderators to collect all of the advice given and put it in one place? Call it the 'moderator' post, or the 'upgrade advice' post.

 

I think I would already add these to it:

- Squid's Docker updating advice (and a pointer to the Docker FAQ)

- best practices for the Docker and system path changes

- SparklyBalls' Docker upgrade notes

- the powerdown problem

- the flash drive redo

- more detail on stubbing

- Windows login and credentials issues

- methods for backing up your VMs

- etc


NOTE: before upgrading to 6.2, please be sure to back up any VMs you have AND disable them from auto-starting. This will give you the opportunity to perform the post-upgrade procedures before starting them.

So... I want to update, but I can't find instructions anywhere on exactly what the procedure is for backing up a VM. Can someone in the know please provide a link? Or the instructions?

I don't have an answer for backing up your VMs, but I wanted to comment that it would be nice if there were some sort of upgrade advice page collecting the various tips, issues, and recommendations for the 6.2 upgrade. I don't feel qualified to write it; I have no direct experience with VMs and little with Docker.

 

A suggestion for Tom for the future: when it's a relatively major upgrade like this, could you reserve a post immediately following the announcement post, just for moderators to collect all of the advice given and put it in one place? Call it the 'moderator' post, or the 'upgrade advice' post.

 

I think I would already add these to it:

- Squid's Docker updating advice (and a pointer to the Docker FAQ)

- best practices for the Docker and system path changes

- SparklyBalls' Docker upgrade notes

- the powerdown problem

- the flash drive redo

- more detail on stubbing

- Windows login and credentials issues

- methods for backing up your VMs

- etc

 

Good idea.  Please use the existing post #2 for this purpose.


I have one 6.1.9 server where I attempted to set the share permissions to private, giving read/write access to one user (not root). This worked for a bit, but suddenly I could no longer access the server. After fighting with it for hours, even setting every share back to public did not help. Finally it just started working again, even though it had been set back to public hours earlier...

 

Today I updated this server to 6.2, and I can no longer access the SMB shares from Windows 10 on any client. It asks for a login and tries to use the user I had previously set up for read/write access. Logging in to unRAID with that user doesn't help.

Likely a Windows issue more than an unRAID one, but let me ask anyway...

 

It actually isn't all the clients, only the clients that had saved the credentials for the read/write user while it was in use. Now that the server is completely wide open again, credentials shouldn't be needed on any client, yet only clients that had never logged in while it was private are able to get in today. Blame Windows 10, or unRAID?


It actually isn't all the clients, only the clients that had saved the credentials for the read/write user while it was in use. Now that the server is completely wide open again, credentials shouldn't be needed on any client, yet only clients that had never logged in while it was private are able to get in today. Blame Windows 10, or unRAID?

 

This is a Windows issue. If your shares are set to public, type a random username without any password, e.g., "user", and it should work; if it does, save the credentials.


Got the same problem as others who had to remove the dynamix.plg file.

 

1. Removed the file

2. Rebooted and my disk config was missing (a problem with the parity disk missing, I think)

3. Thought I might as well upgrade, since the system was broken and I had to reinstall from backup anyway

4. Upgrade to 6.2 via the update, did not upgrade the dynamix plugin

5. Rebooted

6. WebUI would not load, server was able to be reached via ssh

7. Restored from prior version, rebooted

8. Unable to connect to the WebUI; the network interface appears to be down.

9. Sigh

 

I am troubleshooting all this remotely, so I will have to wait until I'm onsite to troubleshoot further.

 

The irony is that the backup unit upgraded without a problem, but the primary unit went to hell in a handbasket.

 


NOTE: before upgrading to 6.2, please be sure to back up any VMs you have AND disable them from auto-starting. This will give you the opportunity to perform the post-upgrade procedures before starting them.

So... I want to update, but I can't find instructions anywhere on exactly what the procedure is for backing up a VM. Can someone in the know please provide a link? Or the instructions?

 

Many thanks in advance!

 

I shall try and help, but it is early in the morning and I have not had coffee yet so apologies if it is a bit rushed.

 

*Note: you HAVE to do this while the array is running, as LT doesn't allow us to run Docker and KVM without the array first being started*

 

To backup your Virtual Machines do the following:

 

Prepare: Go to your unRAID shares and, using Explorer (Windows) or Finder (OSX), create a folder to store the backup.

Let's name this folder VMBackup. We shall use this throughout.

 

The system address of this folder (skipping cache) is:

 

/mnt/user0/<share>/VMBackup/

 

*Note: we are going to use the path to this share that skips Cache as we don't want unprotected backup files while we are doing an upgrade*

 

Step 1: Drop to the command line (I shall assume you're using Telnet) using either PuTTY (Windows) or Terminal (OSX):

 

PuTTY: http://www.chiark.greenend.org.uk/~sgtatham/putty/download.html

Terminal: Go to Launchpad > type "Terminal"

 

Command:

telnet -l root <nameorIPofserver>

 

Sample Command and Output:

root@main:~# telnet -l root backup.danioj.lan
Trying 192.168.1.12...
Connected to backup.danioj.lan.
Escape character is '^]'.
Linux 4.1.18-unRAID.
Last login: Tue Sep  6 22:40:08 +1000 2016 on /dev/tty1.

 

Step 2: Get a list of your installed VMs:

 

Command:

virsh list

 

Sample Command and Output:

root@backup:~# virsh list
Id    Name                           State
----------------------------------------------------
2     windows-vm-backup              running
3     pfsense-vm-backup              running

 

Step 3: Shut down each VM.

 

Command:

virsh shutdown <vmname>

 

Sample Command and Output:

root@backup:~# virsh shutdown windows-vm-backup
Domain windows-vm-backup is being shutdown

 

*Note: if for whatever reason the VM does not shut down, you can use the command "destroy" instead of "shutdown", but this is your call, and REMEMBER it is akin to yanking the power cord out of the back of your PC*
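If you want to script the graceful-first approach, a sketch like the following (my own, not part of unRAID; the 90-second timeout is an arbitrary choice) polls `virsh domstate` and only falls back to "destroy" after the wait expires:

```shell
# Hedged sketch: ask for a clean shutdown, poll the domain state,
# and only force power-off if the VM hasn't stopped after ~90 seconds.
wait_shutdown() {
    local vm="$1" tries=0
    virsh shutdown "$vm"
    while [ "$(virsh domstate "$vm")" != "shut off" ]; do
        tries=$((tries + 1))
        if [ "$tries" -ge 18 ]; then       # 18 polls x 5s = 90s
            echo "Timed out waiting for $vm; forcing power-off"
            virsh destroy "$vm"            # same as pulling the plug
            break
        fi
        sleep 5
    done
}

# Example: wait_shutdown windows-vm-backup
```

Run it once per VM before taking backups; "destroy" stays the last resort the note above warns about.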

 

Step 4: Backup the XML.

 

Command:

virsh dumpxml <vmname> > /mnt/user0/<share>/VMBackup/<vmname>.xml

 

Sample Command and Output:

root@backup:~# virsh dumpxml windows-vm-backup > /mnt/user0/nas/VMBackup/windows-vm-backup.xml

 

*Note: there is no output for this command, but you can check that it has worked either by looking in the share or by running the command below*

 

Command:

cat /mnt/user0/<share>/VMBackup/<vmname>.xml

 

Sample Command and Output:

root@backup:~# cat /mnt/user0/nas/VMBackup/windows-vm-backup.xml
<domain type='kvm' id='2'>
  <name>windows-vm-backup</name>
  <uuid>cbd394b6-d8c2-5baa-f628-490e83518bc0</uuid>
  <description>Windows 10 (Upgraded from 7) VM on unRAID Backup</description>
  <metadata>
    <vmtemplate name="Custom" icon="windows.png" os="windows"/>
  </metadata>
  <memory unit='KiB'>4194304</memory>
  <currentMemory unit='KiB'>4194304</currentMemory>
  <memoryBacking>
    <nosharepages/>
    <locked/>
  </memoryBacking>
  <vcpu placement='static'>2</vcpu>
  <cputune>
    <vcpupin vcpu='0' cpuset='0'/>
    <vcpupin vcpu='1' cpuset='1'/>
  </cputune>
  <resource>
    <partition>/machine</partition>
  </resource>
  <os>
    <type arch='x86_64' machine='pc-i440fx-2.3'>hvm</type>
  </os>
  <features>
    <acpi/>
    <apic/>
    <hyperv>
      <relaxed state='on'/>
      <vapic state='on'/>
      <spinlocks state='on' retries='8191'/>
    </hyperv>
  </features>
  <cpu mode='host-passthrough'>
    <topology sockets='1' cores='2' threads='1'/>
  </cpu>
  <clock offset='localtime'>
    <timer name='hypervclock' present='yes'/>
    <timer name='hpet' present='no'/>
  </clock>
  <on_poweroff>destroy</on_poweroff>
  <on_reboot>restart</on_reboot>
  <on_crash>restart</on_crash>
  <devices>
    <emulator>/usr/bin/qemu-system-x86_64</emulator>
    <disk type='file' device='disk'>
      <driver name='qemu' type='raw' cache='writeback'/>
      <source file='/mnt/cache/app/kvm/windows-vm-backup/vdisk1.img'/>
      <backingStore/>
      <target dev='hdc' bus='virtio'/>
      <boot order='1'/>
      <alias name='virtio-disk2'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x05' function='0x0'/>
    </disk>
    <controller type='usb' index='0'>
      <alias name='usb'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x01' function='0x2'/>
    </controller>
    <controller type='pci' index='0' model='pci-root'>
      <alias name='pci.0'/>
    </controller>
    <controller type='virtio-serial' index='0'>
      <alias name='virtio-serial0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x04' function='0x0'/>
    </controller>
    <interface type='bridge'>
      <mac address='52:54:00:d2:a4:38'/>
      <source bridge='br0'/>
      <target dev='vnet2'/>
      <model type='virtio'/>
      <alias name='net0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x03' function='0x0'/>
    </interface>
    <serial type='pty'>
      <source path='/dev/pts/1'/>
      <target port='0'/>
      <alias name='serial0'/>
    </serial>
    <console type='pty' tty='/dev/pts/1'>
      <source path='/dev/pts/1'/>
      <target type='serial' port='0'/>
      <alias name='serial0'/>
    </console>
    <channel type='unix'>
      <source mode='bind' path='/var/lib/libvirt/qemu/channel/target/windows-vm-backup.org.qemu.guest_agent.0'/>
      <target type='virtio' name='org.qemu.guest_agent.0' state='disconnected'/>
      <alias name='channel0'/>
      <address type='virtio-serial' controller='0' bus='0' port='1'/>
    </channel>
    <input type='tablet' bus='usb'>
      <alias name='input0'/>
    </input>
    <input type='mouse' bus='ps2'/>
    <input type='keyboard' bus='ps2'/>
    <graphics type='vnc' port='5901' autoport='yes' websocket='5701' listen='0.0.0.0' keymap='en-us'>
      <listen type='address' address='0.0.0.0'/>
    </graphics>
    <video>
      <model type='vmvga' vram='16384' heads='1'/>
      <alias name='video0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x02' function='0x0'/>
    </video>
    <memballoon model='virtio'>
      <alias name='balloon0'/>
      <address type='pci' domain='0x0000' bus='0x00' slot='0x06' function='0x0'/>
    </memballoon>
  </devices>
</domain>

 

Step 5: Get a list of all vDisks attached to the VM you're backing up.

 

Command:

virsh domblklist <vmname>

 

Sample Command and Output:

root@backup:~# virsh domblklist windows-vm-backup
Target     Source
------------------------------------------------
hdc        /mnt/cache/app/kvm/windows-vm-backup/vdisk1.img

 

Copy down the full path of each vDisk, or paste the list into Notepad.

 

*Note: if your VM has multiple vDisks then you will have multiple entries in the list.*

 

Step 6: Backup each vDisk for the VM you are backing up.

 

Command:

cp </path/to/vdisk/vdisk1.img> /mnt/user0/<share>/VMBackup/

 

Sample Command and Output:

root@backup:~# cp /mnt/cache/app/kvm/windows-vm-backup/vdisk1.img /mnt/user0/nas/VMBackup/

 

*Note: as you do this for each VM, and for each vDisk used by each VM, you may find that different VMs use the same vDisk name. It might therefore be worthwhile making your backup folder structure a bit more granular, e.g. a subfolder per VM*

 

If you do the above for each VM, backing up its XML and each vDisk attached to it, you should be fine!
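The whole sequence can also be scripted. The sketch below is my own consolidation of Steps 2 through 6 (it assumes the VMs are already shut down; `BACKUP_ROOT` and the `nas` share name are just the examples from the samples above, so change them to suit):

```shell
#!/bin/bash
# Hedged sketch: dump the XML and copy every vDisk for every defined VM
# into a per-VM subfolder, as suggested in the note above.
# BACKUP_ROOT is an assumption -- point it at your own user0 share.
BACKUP_ROOT="${BACKUP_ROOT:-/mnt/user0/nas/VMBackup}"

backup_vms() {
    local vm disk
    for vm in $(virsh list --all --name); do      # one VM name per line
        mkdir -p "$BACKUP_ROOT/$vm"
        virsh dumpxml "$vm" > "$BACKUP_ROOT/$vm/$vm.xml"   # Step 4
        # Steps 5-6: skip the two header lines of domblklist and any
        # non-path sources (an empty cdrom shows up as "-")
        virsh domblklist "$vm" | awk 'NR > 2 && $2 ~ /^\// {print $2}' |
            while read -r disk; do
                cp "$disk" "$BACKUP_ROOT/$vm/"
            done
    done
}

# Run it (after shutting the VMs down):
backup_vms
```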


It actually isn't all the clients, only the clients that had saved the credentials for the read/write user while it was in use. Now that the server is completely wide open again, credentials shouldn't be needed on any client, yet only clients that had never logged in while it was private are able to get in today. Blame Windows 10, or unRAID?

 

This is a Windows issue. If your shares are set to public, type a random username without any password, e.g., "user", and it should work; if it does, save the credentials.

 

I had the same issue with all my shares. They are all set to public, but after the upgrade from RC5 to stable I was no longer able to access them from my Windows 10 desktop; it kept prompting for a username and password. I even checked Credential Manager and there are no saved credentials.

 

I did end up trying your solution of typing in "user" for the username with no password, and now I'm back in. Very odd.


Update on my issue.

 

1. Had to force restart the server (thank you helping hands)

2. Able to get to the WebUI after resetting to 6.1.9

3. Updated all the plugins

4. Update to 6.2

5. rebooted

6. WebUI is unresponsive; SSH is alive.

7. SSHed into the box; the system is online.

 

I'm stumped as to where I need to go from here.


Upgrade from 6.1.9 went fine; I have now added a second parity HDD and am rebuilding parity.

 

I only had one Docker image, Plex, but I wanted to make a number of configuration changes, so I created a new image rather than upgrading it.


I did that; it is only appearing under limetech's version of Plex...

 

Pulling image: limetech/plex:latest

IMAGE ID [latest]: Pulling from limetech/plex.

IMAGE ID [6ffe5d2d6a97]: Already exists.

IMAGE ID [f4e00f994fd4]: Already exists.

IMAGE ID [e99f3d1fc87b]: Already exists.

IMAGE ID [a3ed95caeb02]: Already exists.

IMAGE ID [ededd75b6753]: Already exists.

IMAGE ID [1ddde157dd31]: Already exists.

IMAGE ID [79321844ebba]: Pulling fs layer. Downloading 100% of 772 B. Verifying Checksum. Download complete. Extracting. Pull complete.

IMAGE ID [ebe499b4c161]: Pulling fs layer. Downloading 100% of 119 MB. Download complete.

IMAGE ID [cb387480a3c1]: Pulling fs layer. Download complete.

 

TOTAL DATA PULLED: 119 MB

 

Command:

root@localhost:# /usr/local/emhttp/plugins/dynamix.docker.manager/scripts/docker run -d --name="PlexMediaServer" --net="host" --privileged="true" -e TZ="America/Los_Angeles" -e HOST_OS="unRAID" -v "/mnt/user/appdata/plexmediaserver/":"/config":rw limetech/plex

Unable to find image 'limetech/plex:latest' locally

latest: Pulling from limetech/plex

6ffe5d2d6a97: Already exists

f4e00f994fd4: Already exists

e99f3d1fc87b: Already exists

a3ed95caeb02: Already exists

a3ed95caeb02: Already exists

ededd75b6753: Already exists

1ddde157dd31: Already exists

a3ed95caeb02: Already exists

79321844ebba: Already exists

ebe499b4c161: Pulling fs layer

cb387480a3c1: Pulling fs layer

cb387480a3c1: Download complete

ebe499b4c161: Download complete

ebe499b4c161: Pull complete

ebe499b4c161: Pull complete

cb387480a3c1: Pull complete

cb387480a3c1: Pull complete

docker: layers from manifest don't match image configuration.

See '/usr/bin/docker run --help'.

 

The command failed.
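For what it's worth, the "layers from manifest don't match image configuration" error usually points at a stale or corrupt locally cached image, and the common workaround (an educated guess here, not an official fix) is to delete the local copy and pull it fresh:

```shell
# Hedged sketch: drop the cached limetech/plex image and re-pull it.
# If this still fails, recreating the docker.img file is the
# heavier-handed fallback.
refresh_image() {
    docker rmi "$1"          # remove the locally cached layers
    docker pull "$1:latest"  # fetch a clean copy from the registry
}

# Example: refresh_image limetech/plex
```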

