unRAID Server Release 6.2.0-rc3 Available



OK, so I upgraded to 6.2-rc3 and figured I would stop my array and add the second parity disk. When I try to stop the array, the browser just keeps repeating "unmounting disks... retrying unmounting disks" at the bottom, forever and ever. I try to refresh my browser and load other pages, but nothing responds; eventually I hard-reboot my unRAID machine and it comes up OK.

 

I tried a second time to stop the array... same thing, ending in a hard reboot. Everything runs and looks good, but I can't stop the array :(

 

Try this plugin to see what files are open:

http://lime-technology.com/forum/index.php?topic=42881.0


I installed the Open Files plugin, and the only open files listed belong to the various Docker apps that are running. Should I try shutting Docker down and then stopping the array?

 

Docker is really the only thing my unRAID system does: apps like delugevpn, plex, sonarr, couchpotato, ubooquity, calibre-server and plexrequests.net.

 

TIA


I installed the Open Files plugin, and the only open files listed belong to the various Docker apps that are running. Should I try shutting Docker down and then stopping the array?

 

Docker is really the only thing my unRAID system does: apps like delugevpn, plex, sonarr, couchpotato, ubooquity, calibre-server and plexrequests.net.

 

TIA

Do you have a telnet/SSH/console session with a current working directory on one of the drives? That will also keep the drive from unmounting, as Squid mentioned.
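
For anyone hitting this, a quick way to check from a console or SSH session (a minimal sketch using standard Linux tools; /mnt/disk1 is just an example path, so substitute your own drives):

lsof /mnt/disk1 2>/dev/null                    # processes with files open on disk1
lsof 2>/dev/null | grep ' cwd ' | grep /mnt    # shells whose working directory sits on an array drive
fuser -vm /mnt/disk1                           # everything using the disk1 mount point

Any process these turn up will block the unmount until it is stopped or moved off the drive.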

Hello everyone

 

I want to try 6.2, but there is a little problem. The USB stick works fine on my desktop: 6.2-rc3 boots up as expected. But if I try to boot my unRAID server with the rc3 stick, it boots 6.1.9 instead of rc3. What am I doing wrong?

 

greets

Ronny


Hello everyone

 

I want to try 6.2, but there is a little problem. The USB stick works fine on my desktop: 6.2-rc3 boots up as expected. But if I try to boot my unRAID server with the rc3 stick, it boots 6.1.9 instead of rc3. What am I doing wrong?

 

greets

Ronny

Do you have a stick with 6.1.9 on it?

Sure. Two sticks, one with 6.1.9 and one with rc3. I pulled the 6.1.9 stick and replaced it with the rc3 stick, but the server boots up 6.1.9. I tried the rc3 stick on my desktop PC and there rc3 boots fine...

The only thing I can think of is that the server is not actually booting off the USB stick that you think it is. There is no way it can really be booting off an rc3 stick and then say that it is running 6.1.9.

OK, so I upgraded to 6.2-rc3 and figured I would stop my array and add the second parity disk. When I try to stop the array, the browser just keeps repeating "unmounting disks... retrying unmounting disks" at the bottom, forever and ever. I try to refresh my browser and load other pages, but nothing responds; eventually I hard-reboot my unRAID machine and it comes up OK.

 

I tried a second time to stop the array... same thing, ending in a hard reboot. Everything runs and looks good, but I can't stop the array :(

 

Same problem.

I first stopped a Windows 7 VM and one Docker container, Logitech Media Server, then upgraded to 6.2-rc3.

I can stop the array and assign the empty precleared/formatted 8TB HDD that was in disk position 6 as the #2 parity drive.

Since you cannot move two drives without a reconfiguration, and I wanted to refill position 6 (vacated by the 8TB #2 parity drive), I went to Tools > Reconfig to assign another empty precleared/formatted 6TB HDD from position 16 to position 6.

I assigned all the remaining drives and then went to stop and reboot after running the reconfig.

unRAID then enters a never-ending "unmounting disks" loop; it hangs in that state and will not stop.

I can reboot using IPMI or the hard reset button on the case, but it enters the same unmounting-disks loop whenever I try to stop and reboot. I then have to temporarily boot 6.1.9 again from a different flash drive to be able to use the server.

I ran Open Files and nothing is open. I also used Open Files to stop all processes; no processes are running. How do I determine what is preventing unRAID from stopping?

I now have the new shares (Domains, ISOs, and System) in my Shares tab under 6.1.9. The only difference I observed is that when stopping under 6.1.9, libvirt is the second item to stop, and I don't see it stopping under 6.2.0-rc3.

I could not capture a diagnostics log with the system hung at stopping. There is one for 6.1.9 attached.

tower-diagnostics-20160729-2103.zip


Is the mover supposed to be accessing everything with a "." in the path?

e.g. /mnt/cache/./SortThru

It used to be /mnt/cache/SortThru.

It now tries to remove non-empty directories, e.g.:

Jul 29 17:00:12 Tower move: rmdir: /mnt/cache/./SortThru Directory not empty

Myk

 


Is the mover supposed to be accessing everything with a "." in the path?

e.g. /mnt/cache/./SortThru

It used to be /mnt/cache/SortThru.

It now tries to remove non-empty directories, e.g.:

Jul 29 17:00:12 Tower move: rmdir: /mnt/cache/./SortThru Directory not empty

Myk

It's the same behavior as before - the /./ is just equivalent to /

The difference is that it's reporting directories on the cache that cannot be deleted because there are still file(s) in them. This can happen if the "next" file to be moved is "in_use", meaning open by some process or mounted on a loopback device. Your system log would tell the tale.
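
To confirm the two forms name the same directory, and to see what is still holding files open under it, something like this works from the console (a sketch reusing the SortThru path from the post above):

ls -di /mnt/cache/./SortThru /mnt/cache/SortThru   # both print the same inode number
lsof +D /mnt/cache/SortThru                        # list anything with files open under that directory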


I must have missed a debate or discussion on here (I guess that's a good sign of unRAID being rock solid).

Why is there a move to /mnt/user/appdata instead of /mnt/cache/appdata? What are the benefits either way? I've always kept it on the cache to keep the stuff that runs 24x7 separate from the array.


After the update, eth1 seems to have been renamed to eth119 and is no longer connected (I did a remote update).

 

eth0: 1000 Mb/s, full duplex, mtu 1500

eth119: not connected

 

Gonna check cables anyhow once I'm on site.

 

Same issue here: eth1 doesn't show up in ifconfig.

 

Network bond0 IEEE 802.3ad Dynamic link aggregation, mtu 1500

                eth0         1000 Mb/s, full duplex, mtu 1500

                eth118 not connected

                eth2         1000 Mb/s, full duplex, mtu 1500

                lo         loopback


I must have missed a debate or discussion on here (I guess that's a good sign of unRAID being rock solid).

Why is there a move to /mnt/user/appdata instead of /mnt/cache/appdata? What are the benefits either way? I've always kept it on the cache to keep the stuff that runs 24x7 separate from the array.

I think the idea is to standardize some system-critical paths regardless of whether the user has a cache drive or not. You can still keep the appdata user share cache-only, so it will still work the way you want, assuming the hardlinks-in-user-shares issue has been fixed.

I must have missed a debate or discussion on here (I guess that's a good sign of unRAID being rock solid).

Why is there a move to /mnt/user/appdata instead of /mnt/cache/appdata? What are the benefits either way? I've always kept it on the cache to keep the stuff that runs 24x7 separate from the array.

Fundamentally, the real reason is that there is no requirement that unRAID have a cache drive. So there is now a new Use Cache setting of "Prefer" for those who do not have a cache drive but may want one in the future.

 

The downside until now of using /mnt/user/appdata instead of /mnt/cache/appdata (or /mnt/diskX/appdata) has been that hardlinks would not work at all, which directly impacted the ability of a number of Docker apps to run correctly.

 

It appears that hardlink support now works in /mnt/user (at least CA Backup and Restore doesn't return any errors like it would pre-RC2 when using /mnt/user/ as a destination), although I haven't tested the mover script.
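
If you want to check hardlink support on a user share yourself, a quick test from the console looks like this (a sketch; the hl_test file names are made up for illustration):

touch /mnt/user/appdata/hl_test
ln /mnt/user/appdata/hl_test /mnt/user/appdata/hl_test.link   # fails outright if hardlinks are unsupported
stat -c '%i %h %n' /mnt/user/appdata/hl_test*                 # same inode and link count 2 = working hardlinks
rm /mnt/user/appdata/hl_test /mnt/user/appdata/hl_test.link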


Thanks. It's hard keeping up with the standards; I only just moved all my app data from /apps to /appdata.

I'll keep mine where it is for now, as I really don't understand the difference nor the consequences. Thanks.

 

Yes, we are moving to have all standard paths specified on 'user shares' because this eases configuration and lessens the learning curve before getting started. If you want to override the standard paths, that's OK and not a problem for advanced users. Cache mode 'prefer' was added for just this purpose: if someone starts out with only one or two array disks and later adds a cache disk, they can move shares to the cache by clicking 'Move Now'. This feature is not 100% refined but forms the basis for a more generalized mover/rebalancer in the future.


After the update, eth1 seems to have been renamed to eth119 and is no longer connected (I did a remote update).

 

eth0: 1000 Mb/s, full duplex, mtu 1500

eth119: not connected

 

Gonna check cables anyhow once I'm on site.

 

Same issue here: eth1 doesn't show up in ifconfig.

 

Network bond0 IEEE 802.3ad Dynamic link aggregation, mtu 1500

                eth0         1000 Mb/s, full duplex, mtu 1500

                eth118 not connected

                eth2         1000 Mb/s, full duplex, mtu 1500

                lo         loopback

 

It doesn't refer to eth1, but your config has eth118 (eth119). Do you have something in the go file configuring this Ethernet port?
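
A few console commands may help narrow down where the odd interface name is coming from (a sketch; /boot/config/network.cfg is where unRAID normally keeps its network settings, and any rename should show up in the kernel log):

ip link show                    # list every interface the kernel currently sees
dmesg | grep -i renamed         # kernel messages about interface renames at boot
cat /boot/config/network.cfg    # the network settings stored on the flash drive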


OK, so I upgraded to 6.2-rc3 and figured I would stop my array and add the second parity disk. When I try to stop the array, the browser just keeps repeating "unmounting disks... retrying unmounting disks" at the bottom, forever and ever. I try to refresh my browser and load other pages, but nothing responds; eventually I hard-reboot my unRAID machine and it comes up OK.

 

I tried a second time to stop the array... same thing, ending in a hard reboot. Everything runs and looks good, but I can't stop the array :(

 

Same problem

 

After attempting to stop the array and seeing "retrying unmounting disks" in a loop, try these commands in a console/SSH terminal:

umount /var/lib/docker/btrfs
umount /var/lib/docker

 

Do any error messages come back from either of those commands? Does the array actually stop a few seconds after they are executed?
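
If those unmounts fail with "target is busy", it can help to first see what is actually attached there (a sketch; on unRAID the docker.img file is normally mounted through a loop device):

losetup -a                # list loop devices and the image files behind them
mount | grep docker       # confirm what is mounted at /var/lib/docker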


OK, so I upgraded to 6.2-rc3 and figured I would stop my array and add the second parity disk. When I try to stop the array, the browser just keeps repeating "unmounting disks... retrying unmounting disks" at the bottom, forever and ever. I try to refresh my browser and load other pages, but nothing responds; eventually I hard-reboot my unRAID machine and it comes up OK.

 

I tried a second time to stop the array... same thing, ending in a hard reboot. Everything runs and looks good, but I can't stop the array :(

 

Same problem

 

After attempting to stop the array and seeing "retrying unmounting disks" in a loop, try these commands in a console/SSH terminal:

umount /var/lib/docker/btrfs
umount /var/lib/docker

 

Do any error messages come back from either of those commands? Does the array actually stop a few seconds after they are executed?

 

 

As a last resort I sometimes have to use -l (lowercase L) in the umount command to get things under control.

 

 

Myk
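
For reference, that would look like this (a lazy unmount detaches the mount point immediately and finishes the cleanup once it is no longer busy; treat it as a last resort, since it hides which process was actually holding things up):

umount -l /var/lib/docker/btrfs
umount -l /var/lib/docker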


Drive failure on a massive scale....

 

Some of this is related to 6.2-rc2. Twice when I tried to shut the server down on rc2, it took a very long time, and when it did stop the array, it showed a large number of missing disks. Since this was the second time, I grabbed diagnostics, and I think the relevant part is an mpt2sas_cm1 failure. The server has two IBM 1015s installed and an Areca 1231ML.

There was no indication that anything was wrong while the server was running, but just pressing the stop-array button resulted in this massive failure both times. The first time I restarted, the server came up with all the drives again except parity1 and disk2. Since this is running dual parity, that was fine, and it rebuilt parity and disk2 without incident. This time, disks 1-6 failed as well as parity1, but after a reboot they all came back except parity and disk2 again.

I was just in the process of restarting after installing rc3, so rc3 is running now. I immediately stopped the array. This is a backup server, so I can explore a few things if anyone has ideas....

 

Diagnostics attached.

 

Jul 29 14:19:53 Server1 kernel: mpt2sas_cm1: port enable: FAILED
Jul 29 14:19:53 Server1 kernel: mpt2sas_cm1: host reset: FAILED scmd(ffff8805243a5500)
Jul 29 14:19:53 Server1 kernel: sd 9:0:0:0: Device offlined - not ready after error recovery
Jul 29 14:19:53 Server1 kernel: sd 9:0:1:0: Device offlined - not ready after error recovery
Jul 29 14:19:53 Server1 kernel: sd 9:0:4:0: Device offlined - not ready after error recovery
Jul 29 14:19:53 Server1 kernel: sd 9:0:0:0: [sdg] tag#0 UNKNOWN(0x2003) Result: hostbyte=0x08 driverbyte=0x00
Jul 29 14:19:53 Server1 kernel: sd 9:0:0:0: [sdg] tag#0 CDB: opcode=0x28 28 00 ae bc 7d c8 00 00 08 00
Jul 29 14:19:53 Server1 kernel: blk_update_request: I/O error, dev sdg, sector 2931588552
Jul 29 14:19:53 Server1 kernel: sd 9:0:1:0: [sdh] tag#1 UNKNOWN(0x2003) Result: hostbyte=0x08 driverbyte=0x00
Jul 29 14:19:53 Server1 kernel: sd 9:0:1:0: [sdh] tag#1 CDB: opcode=0x28 28 00 ae bc 7d c7 00 00 08 00
Jul 29 14:19:53 Server1 kernel: blk_update_request: I/O error, dev sdh, sector 2931588551

server1_6.2.rc2-diagnostics-20160729-1424.zip
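
For anyone digging into the attachment, the controller errors can be pulled out without unpacking the whole zip (a sketch; it assumes the syslog inside the archive has "syslog" somewhere in its file name):

unzip -p server1_6.2.rc2-diagnostics-20160729-1424.zip '*syslog*' | grep -Ei 'mpt2sas|blk_update_request|offlined'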

This topic is now closed to further replies.