
unRAID Server Release 6.2.0-rc4 Available


I'll try NFS on my 600x3d over the weekend.

I can confirm it's the same problem here.

What player are you using?  Can you obtain any logs or other info on why the player is not displaying file names?

It's the same player as spl147's, except without wifi (if I remember correctly).

I played with different settings on both the media player and unRAID, but got the same result every time: an empty folder.

But... I didn't do an exportfs -a after editing /etc/exports.
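For anyone following along: after hand-editing /etc/exports, the NFS server has to be told to re-read it. A minimal sketch of the commands (run as root; share contents are whatever your exports file defines):

```shell
# Re-read /etc/exports: -r re-exports all entries and drops stale ones
# (the unRAID rc scripts use exportfs -r; exportfs -a only adds entries).
exportfs -r

# Show what the kernel is actually exporting now, with options:
exportfs -v
```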

After I turned on the extra logging I had a look at the log and found this entry when browsing the NFS share from the media player. You might have it already from spl147's logs.

 

Aug 28 15:35:23 Server1 root: Starting NFS server daemons:
Aug 28 15:35:23 Server1 root:   /usr/sbin/exportfs -r
Aug 28 15:35:23 Server1 root:   /usr/sbin/rpc.nfsd 8
Aug 28 15:35:23 Server1 root:   /usr/sbin/rpc.mountd
Aug 28 15:35:23 Server1 rpc.mountd[10882]: Version 1.3.3 starting
Aug 28 15:36:04 Server1 rpc.mountd[10882]: authenticated mount request from 192.168.1.114:728 for /mnt/user/Movies (/mnt/user/Movies)
Aug 28 15:41:41 Server1 root: exportfs: Failed to resolve -async,no_subtree_check
Aug 28 15:41:41 Server1 root: exportfs: Failed to resolve -async,no_subtree_check
Aug 28 15:41:42 Server1 root: exportfs: Failed to resolve -async,no_subtree_check
Aug 28 15:41:42 Server1 root: exportfs: Failed to resolve -async,no_subtree_check
Aug 28 15:42:01 Server1 root: exportfs: Failed to resolve -async,no_subtree_check
Aug 28 15:42:01 Server1 root: exportfs: Failed to resolve -async,no_subtree_check
Aug 28 15:44:59 Server1 rpc.mountd[10882]: No host name given with /mnt/user/Movies (ro,async,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=100,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash), suggest *(ro,async,wdelay,hide,nocrossmnt,secure,root_squash,no_all_squash,no_subtree_check,secure_locks,acl,no_pnfs,fsid=100,anonuid=65534,anongid=65534,sec=sys,ro,secure,root_squash,no_all_squash) to avoid warning
Aug 28 15:44:59 Server1 rpc.mountd[10882]: authenticated mount request from 192.168.1.114:749 for /mnt/user/Movies (/mnt/user/Movies)
Aug 28 15:45:06 Server1 rpc.mountd[10882]: authenticated mount request from 192.168.1.114:771 for /mnt/user/Movies (/mnt/user/Movies)

 

I'm not sure if I can get any logs from the media player. I'll check if it's somehow possible.

 

If the exports file was not right those messages are to be expected.
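For reference, "Failed to resolve -async,no_subtree_check" usually means exportfs parsed that option string as a hostname, i.e. the client/options part of the line was malformed, and the "No host name given" warning means no client spec was present at all. A well-formed line looks something like this (the path matches the log above, but the client range and options here are only an example, not the actual unRAID config):

```shell
# /etc/exports - one share per line: <path> <client>(<options>)
# Note: no space between the client spec and the parenthesized options;
# a stray space makes the options apply to the whole world instead.
/mnt/user/Movies 192.168.1.0/24(ro,async,no_subtree_check)
```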

 

I'll do it on a virgin export just to be sure it wasn't me.

 

I found this on the mede8er forum about the NFS version.

 

I now have NFS working with Server 2012 by using NFSv2 (the folders show as empty on v3), but I still have some other problems.

 

So does it have anything to do with which version is used?
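It might. If you have any Linux client handy, a quick way to compare protocol versions is to mount the share with each version forced explicitly (server name and mount point here are placeholders):

```shell
# Force NFSv3, list the share, then retry with v2 and compare:
mount -t nfs -o vers=3 tower:/mnt/user/Movies /mnt/test
ls /mnt/test
umount /mnt/test

mount -t nfs -o vers=2 tower:/mnt/user/Movies /mnt/test
ls /mnt/test
umount /mnt/test
```

If v2 lists files and v3 comes back empty, that would line up with the mede8er report above.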


I have absolutely no idea if this is related to RC4 or not but figured I would post it here since I did not see it in previous RCs.  Diagnostics are attached.

 

Aug 29 01:47:16 unRAID kernel: ------------[ cut here ]------------
Aug 29 01:47:16 unRAID kernel: WARNING: CPU: 2 PID: 24059 at ./arch/x86/include/asm/thread_info.h:236 SyS_rt_sigsuspend+0x8f/0x9e()
Aug 29 01:47:16 unRAID kernel: Modules linked in: xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables vhost_net tun vhost macvtap macvlan xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod bonding igb ptp pps_core i2c_algo_bit coretemp kvm_intel kvm ahci i2c_i801 libahci mvsas libsas scsi_transport_sas ipmi_si acpi_cpufreq [last unloaded: pps_core]
Aug 29 01:47:16 unRAID kernel: CPU: 2 PID: 24059 Comm: Threadpool work Not tainted 4.4.18-unRAID #1
Aug 29 01:47:16 unRAID kernel: Hardware name: Supermicro X8DTH-i/6/iF/6F/X8DTH, BIOS 2.1b 05/04/12 
Aug 29 01:47:16 unRAID kernel: 0000000000000000 ffff880c0591fee0 ffffffff8136a2c7 0000000000000000
Aug 29 01:47:16 unRAID kernel: 00000000000000ec ffff880c0591ff18 ffffffff8104a39a ffffffff81055502
Aug 29 01:47:16 unRAID kernel: fffffffffffffdfe 00000000000000be 00002b8e2600a240 0000000000000001
Aug 29 01:47:16 unRAID kernel: Call Trace:
Aug 29 01:47:16 unRAID kernel: [<ffffffff8136a2c7>] dump_stack+0x61/0x7e
Aug 29 01:47:16 unRAID kernel: [<ffffffff8104a39a>] warn_slowpath_common+0x8f/0xa8
Aug 29 01:47:16 unRAID kernel: [<ffffffff81055502>] ? SyS_rt_sigsuspend+0x8f/0x9e
Aug 29 01:47:16 unRAID kernel: [<ffffffff8104a457>] warn_slowpath_null+0x15/0x17
Aug 29 01:47:16 unRAID kernel: [<ffffffff81055502>] SyS_rt_sigsuspend+0x8f/0x9e
Aug 29 01:47:16 unRAID kernel: [<ffffffff816237ae>] entry_SYSCALL_64_fastpath+0x12/0x6d
Aug 29 01:47:16 unRAID kernel: ---[ end trace ea2867ef7be5c954 ]---

unraid-diagnostics-20160829-1920.zip


I have absolutely no idea if this is related to RC4 or not but figured I would post it here since I did not see it in previous RCs.  Diagnostics are attached.

 

That's a busy machine!  I don't see anything serious, just a hiccup at a very low level.  Any time I see something like that though, I'd want to reboot, "just in case".

 

I did see a few BIOS issues, so to cover the basics, I'd check for a newer BIOS, yours is from 2012.  Never know, could be a BIOS related hiccup.

 

It happened on CPU 2, so you might check to see if you know what CPU 2 was working on, and whether it was pinned to something.  If this issue happens again, you'll want to note which clues are the same.


Thanks Rob.

 

That's a busy machine!  I don't see anything serious, just a hiccup at a very low level.  Any time I see something like that though, I'd want to reboot, "just in case".

 

I did see a few BIOS issues, so to cover the basics, I'd check for a newer BIOS, yours is from 2012.  Never know, could be a BIOS related hiccup.

 

I rebooted and will keep an eye on the syslog for new occurrences.

 

Unfortunately, that is the latest BIOS available and has not been updated for some time.

 

My two OE VMs are pinned to 12/13 and 14/15 respectively (which I assume is CPU2).

 

John


Can someone expand on:

 

...

 

Slackware64-14.2

We are "in sync" with the recent release of Slackware64-14.2, meaning all unRAID OS packages are either the same as in the official Slackware64-14.2 distro or newer.  However, in general, we track Slackware64 "current".

...

 

What exactly does "track" mean in terms of packages? E.g. does it mean all packages are kept on the Slackware64 "current" track, or just that when packages are selectively updated they come from the Slackware64 "current" track?

 

At a guess, it means we started with Slackware64-14.2 and cherry-picked some package upgrades for 6.2?


Can someone expand on:

 

...

 

Slackware64-14.2

We are "in sync" with the recent release of Slackware64-14.2, meaning all unRAID OS packages are either the same as in the official Slackware64-14.2 distro or newer.  However, in general, we track Slackware64 "current".

...

 

What exactly does "track" mean in terms of packages? E.g. does it mean all packages are kept on the Slackware64 "current" track, or just that when packages are selectively updated they come from the Slackware64 "current" track?

 

At a guess, it means we started with Slackware64-14.2 and cherry-picked some package upgrades for 6.2?

 

It means we track slackware "current".  When packages are updated in slackware "current", we check whether they are used in unRAID OS, and if so, we add them to "unRAID-next" and test.  They will then appear in the next release of unRAID OS.  In some cases, notably the kernel, we tend to keep ahead of slackware "current", though in 6.2 we are tracking very closely since slackware will likely stay on kernel 4.4.x for quite some time (it's the latest LTS kernel).  With unRAID 6.3 we will upgrade to the latest stable kernel, which today would be 4.7.  Sometimes a fix comes out in a package and we update it before it appears in slackware "current".


So in this context would "unRAID-next" be 6.3, 7.0 or 6.2 RC5?


any movement on the NFS share issues?

In the absence of more info, this is going to have to wait until 6.2 'stable' is released.


any movement on the NFS share issues?

In the absence of more info, this is going to have to wait until 6.2 'stable' is released.

i can send you a player if it will help?


any movement on the NFS share issues?

In the absence of more info, this is going to have to wait until 6.2 'stable' is released.

i can send you a player if it will help?

 

That's alright, what I need to do is publish a series of test releases that generate more debugging output and/or downgrade certain components in an attempt to bisect the release to find out where the problem got introduced.  Normally if this affected all NFS clients it would be something I'd jump on right away (and in that case we could probably easily reproduce).  But we cannot delay the 6.2 'stable' release any further, and this will have to be a "known issue" until we can get it sorted.


any movement on the NFS share issues?

In the absence of more info, this is going to have to wait until 6.2 'stable' is released.

i can send you a player if it will help?

That's alright, what I need to do is publish a series of test releases that generate more debugging output and/or downgrade certain components in an attempt to bisect the release to find out where the problem got introduced.  Normally if this affected all NFS clients it would be something I'd jump on right away (and in that case we could probably easily reproduce).  But we cannot delay the 6.2 'stable' release any further, and this will have to be a "known issue" until we can get it sorted.

Agreed


 

I don't suppose you noticed whether the network activity LEDs on your router were going nuts, did you? I've seen issues in the past where network hosts can get stuck continuously sending the same packet out their network interface at full line rate, which at 100Mbps or 1Gbps would probably choke a residential router's packet processor. I've personally only seen this with lab equipment at work, never with unRAID. But if we're theorizing that the trigger was a power surge/lightning strike, then I'd wager it's possible.

 

Just a thought,

 

-A

 

Unfortunately I did not, but that would seem the most likely culprit, as I did check the modem (which was up) and nothing, including AP routers, could connect through it. As I haven't seen this before and haven't been able to reproduce it, I'll just have to call it a fluke at this point. I suppose it's unrelated to this thread now, but I do worry about the speculation relating it to a power surge. I have both of my servers behind their own individual APC RS 550 UPS, neither of which loads beyond 50% of nominal power, so I'm hoping this wasn't a power supply issue. Anyway, thanks to you and John_M for the help  :)


I did not get the chance to read this thread in its entirety. Please ignore this if the bug has already been reported.

 

With RC4, the "make_bootable_mac" script renders the flash drive un-bootable and corrupts its contents. See the attached screenshots. The RC4 version of the script encounters an error and doesn't seem to complete all steps, in comparison to the RC3 version. Also attached is a screenshot of the disk initialization error, which appears when re-inserting the flash drive on the Mac after the script has completed and ejected the flash.

 

The issue appears to be caused by a typo in the device identifier that hdutil is trying to unmount: it should be "/dev/disk1s1" instead of "/dev/disk1s1s1".
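If it really is just the doubled partition suffix, the fix in the script could be as small as trimming it before the unmount call. A hypothetical illustration of the string fix in shell (the variable names are mine, not the script's):

```shell
# The rc4 script ends up with a doubled partition suffix; strip it.
dev="/dev/disk1s1s1"       # what the rc4 script tries to unmount
fixed="${dev%s1s1}s1"      # trim the duplicated suffix -> /dev/disk1s1
echo "$fixed"
```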

make_bootable_mac_rc4.png.f10a6508be8ff7b490a8c50a3999fe17.png

make_bootable_mac_rc3.png.d308dee1942e12ea5a79a10355aa22ae.png

disk_intialization_error.png.fdf74d53c0c86a8bdae0b1bb646a9b12.png


But we cannot delay the 6.2 'stable' release any further, and this will have to be a "known issue" until we can get it sorted.

 

:)  ;D  ;)


I am having problems with the VM (metadata) disappearing in the web browser with 6.2.0-rc4. This seems to be weird behavior, and I have to use the existing image file to create a new VM. It seems a little buggy for this to happen, and to start happening with the latest update.


I am having problems with the VM (metadata) disappearing in the web browser with 6.2.0-rc4. This seems to be weird behavior, and I have to use the existing image file to create a new VM. It seems a little buggy for this to happen, and to start happening with the latest update.

I am not sure I understand what the issue is that you have on your system.


I am having problems with the VM (metadata) disappearing in the web browser with 6.2.0-rc4. This seems to be weird behavior, and I have to use the existing image file to create a new VM. It seems a little buggy for this to happen, and to start happening with the latest update.

I am not sure I understand what the issue is that you have on your system.

Also, you should include a Diagnostic file with any issue reports.


unRAID 6.2.0-rc4 does not finish booting and halts with a kernel panic if this card is installed in the primary PCI-E slot of an Asus P5Q Deluxe: http://www.sybausa.com/index.php?route=product/product&path=64_77_90&product_id=818 (IOCrest USB 3.1 MultiPort Card, Part Number: SI-PEX20189).

 

If I move it to one of the other PCI-E slots it sometimes finishes booting but is not stable.

 

I have two of those cards and they both appear to be working OK in Windows 10 x64. I'll see if some other USB 3.1 Full Duplex 10Gbps card will work instead in unRAID.

It probably conflicts with other interfaces on the motherboard; did you try disabling them in the BIOS?


Sorry for the noise; I removed my posts until I have definite proof that my problems have something to do with unRAID. Thank you for your help, dikkiedirk - I think you're probably on the right track. All motherboard features have been disabled, but I think my issues are caused by the old P5Q Deluxe not being able to handle my Dell H310 or the USB 3.1 card. Sounds like it's time to invest in a new motherboard.


I have noticed a difference in behaviour between 6.1.9 and 6.2 RC4. I am not reporting it as a bug as I am not sure it is. Perhaps it is expected behaviour that 6.2 only shows bridges configured in unRAID and not via the CLI.

 

For context, my setup requires me to add 3 more custom bridges to my network config. These bridges are directly associated with their own eth interface. I achieve this by using the brctl command.
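The kind of manual setup I mean looks like this (the bridge and interface names below are examples; substitute your own):

```shell
# Create a bridge and attach a dedicated NIC to it:
brctl addbr br1
brctl addif br1 eth1
ip link set br1 up

# Confirm the bridge exists and has its member interface:
brctl show
```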

 

In 6.1.9 I assigned each of these bridges to a VM using the VM GUI, allowing the VM to use the bridges.

 

The problem in 6.2 RC4 (I didn't test in previous RCs or betas) is that once the bridges are added to the OS via brctl, the unRAID VM GUI drop-down does not contain them for me to assign to a VM. On 6.1.9, however, it does. This can be seen below.

 

6.2 RC4:

 

Screen_Shot_2016_09_03_at_11_43_33_AM.png

 

6.1.9:

 

Screen_Shot_2016_09_03_at_11_43_47_AM.png

 

Just to reinforce this, here is the output of "brctl show" on both systems.

 

6.2 RC4:

 

Screen_Shot_2016_09_03_at_11_46_12_AM.png

 

6.1.9:

 

Screen_Shot_2016_09_03_at_11_46_23_AM.png

 

Has anyone else come across this before or know of an explanation or fix? Unfortunately a forum search has not helped me.


I finally upgraded from 6.1.9 to 6.2rc4, and had some issues.  System was fine with no known issues before upgrade.  On first boot of 6.2rc4, XFS errors began spewing to syslog, rapidly growing it.  CPU stayed near 100%, essentially hanging everything.  And I made the mistake of going to Tools->Syslog and loading it, well over a megabyte already, which pretty well hung Firefox and finally brought on the dreaded "script is running..." popup.  Took me a few minutes to finally get the array stopped.  Not an auspicious beginning.

 

* I figured out that the XFS in 6.2rc4 is newer than the one in 6.1.9, and it found an issue on one drive's file system, an issue that 6.1.9 didn't care about.  It proceeded to notify the syslog thousands of times that I needed to run xfs_repair 4.3.  Happily, that's what is included in 6.2rc4.  Running it apparently fixed the issue, because it no longer complains.

-- The lesson: when the rest of the users upgrade to 6.2 final, there are likely to be some that have brand new XFS errors, where they had none before.  Tell them to run xfs_repair from the Disk menu.

-- Suggestion: add the -v option to all runs of xfs_repair.  Where it now says just -n, change it to -nv.  If you need to repair, just put -v.  Perhaps the developers could make -nv the new default?

-- Important change: I want to strongly recommend my feature request to check the return from the xfs_repair -n check.  It's the *only* way to determine if there are corruptions in an XFS file system.  I believe this should be a critical fix.
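To illustrate the idea: xfs_repair run with -n is documented to exit with status 1 when filesystem corruption is detected and 0 when the filesystem is clean, so the check is just a matter of looking at the exit status. A sketch of the logic (the xfs_repair function here is a stand-in so the flow can be shown anywhere; on a real system you would call the actual binary against your device, e.g. /dev/md1):

```shell
# Stand-in for the real binary: pretend corruption was found (exit 1).
xfs_repair() { return 1; }

check_xfs() {
    # xfs_repair -n exits 0 = clean, 1 = corruption detected
    if xfs_repair -n "$1" >/dev/null 2>&1; then
        echo "clean: $1"
    else
        echo "CORRUPT: $1 - run xfs_repair without -n"
    fi
}

check_xfs /dev/md1
```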

 

* The above fixed the syslog growth, but didn't bring the CPU usage down, still above 98%.  Load averages were well above 3.0, on a dual core system.  I finally discovered various settings that had to be reapplied, various plugins to uninstall, and various dependencies that had to be upgraded/re-downloaded.  I don't know which change finally brought the CPU usage back to normal, a low idle.

-- The lesson: the upgrading users may need to be told to be ready to go re-apply various settings, check that their plugins are running correctly, and redo dependencies from the NerdPack.  On the NerdTools screen, you have to toggle the tools you want *off* first, then back *on*, to make them re-download with the correct version.

-- An appeal to plugin authors (and possibly other addon authors?): please consider saving the unRAID version used to your persistent storage, so that you can detect unRAID upgrades on load, and trigger the appropriate conversions and 're-applies'.  When 6.2 goes final, we're going to get many users upgrading and running into issues with this.  It would be greatly appreciated if on the first boot of 6.2 your plugin detects that it's the first boot of 6.2, and automatically does whatever is needed.  I particularly would like to encourage the NerdPack author to consider this, since a number of the tools change versions between 6.1 and 6.2.

 

* System I/O seems much slower now.  It's too soon to blame 6.2rc4, without reloading 6.1.9 and retesting, but so far I'm seeing considerably slower I/O.  I did a parity check, and it finished in 21 hours 10 minutes.  Previously, they generally run between 14 and 15 hours.  Another operation was very slow too.  I need to do more research, and tweak the tunables, but so far, I can't account for the slowdown.  There are no SMART changes, or new errors.  Memory usage is fine, lots of unused memory.  CPU usage seems very low, now.

 

* The new feature to restore selected assignments in the New Config tool is very nice!  Saves time, and limits mistakes.  However, it missed one drive of mine, did not restore Disk 5, of my 12 drives.

