unRAID OS version 6.4.0 Stable Release Available


limetech

Recommended Posts

9 minutes ago, detz said:

Why can't I see 6.4? I'm on 6.2.4 and it only shows I can upgrade to 6.3.2, which I can't because it kernel panics. I've tried refreshing the plugins multiple times on different days, same result.

 

Sounds like it is set up so you have to upgrade in steps to reduce upgrade issues. 6.3.5 is the release prior to 6.4, so you were already a few releases behind.

Maybe back up your config and data and do a clean install of unRAID onto your unRAID USB stick, then restore the configs you need.
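Something along these lines would cover the flash config first (paths are just examples, point the destination at wherever you keep backups):

# grab the flash config (key, disk assignments, shares, docker templates)
mkdir -p /mnt/user/backups/flash-$(date +%Y%m%d)
cp -r /boot/config /mnt/user/backups/flash-$(date +%Y%m%d)/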

Link to comment
25 minutes ago, detz said:

Why can't I see 6.4? I'm on 6.2.4 and it only shows I can upgrade to 6.3.2, which I can't because it kernel panics. I've tried refreshing the plugins multiple times on different days, same result.

Have you seen this- https://lime-technology.com/wiki/UnRAID_OS_version_6_Upgrade_Notes

 

If you scroll down a little, it shows the steps for a manual upgrade, along with the precautions and prerequisites to check before upgrading.

Link to comment
On 1/22/2018 at 12:59 PM, jbartlett said:

I have UNRAID 6.4.0 running under VirtualBox to develop a plugin to handle NVMe drives. I have two such drives configured in the VM but UNRAID is only reporting on the 2nd device.

 

I booted my main PC from the unRAID stick because it has two NVMe drives, and both were picked up.

Link to comment

Has the way the mover writes to the syslog changed in this release?  I know the notes say the mover has been improved, but I am seeing odd timings in the syslog as the mover runs.

 

i.e. 

Jan 24 08:13:55 Tower root: mover: started
Jan 24 08:41:28 Tower root: move: file /mnt/cache/Backups/Backup 2018-01-22 [Mon] 10-59 (Full).7z
Jan 24 08:41:28 Tower root: move: file /mnt/cache/Backups/Backup 2018-01-24 [Wed] 08-00 (Full).7z

Above you can see the mover started at 08:13, then wrote nothing to the log until 08:41.  If the mover logged each file AFTER completing it, I could understand that, as the backup file mentioned is 60GB in size.

 

However, the second backup file is also 60GB in size, yet it is logged at exactly the same time?

 

Does it just write to the log in batches rather than as it is actually working?
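If you want to see exactly when the entries land, something like this will timestamp each mover line as it arrives in the syslog (standard /var/log/syslog location assumed, just a quick observation trick):

tail -f /var/log/syslog | grep --line-buffered 'move' | \
  while read line; do echo "$(date '+%H:%M:%S') | $line"; done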

Edited by dvd.collector
Link to comment
Jan 23 23:29:56 Raptor kernel: ------------[ cut here ]------------
Jan 23 23:29:56 Raptor kernel: WARNING: CPU: 4 PID: 0 at net/netfilter/nf_conntrack_core.c:769 __nf_conntrack_confirm+0x97/0x4d6
Jan 23 23:29:56 Raptor kernel: Modules linked in: xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables ip6table_filter ip6_tables vhost_net tun vhost tap veth xt_nat macvlan ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod nct6775 hwmon_vid igb ptp pps_core i2c_algo_bit x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc ttm aesni_intel aes_x86_64 drm_kms_helper crypto_simd ipmi_ssif glue_helper drm cryptd intel_cstate agpgart syscopyarea intel_uncore ahci sysfillrect i2c_i801 video sysimgblt intel_rapl_perf i2c_core libahci backlight fb_sys_fops ie31200_edac acpi_pad button thermal fan ipmi_si [last unloaded: pps_core]
Jan 23 23:29:56 Raptor kernel: CPU: 4 PID: 0 Comm: swapper/4 Not tainted 4.14.13-unRAID #1
Jan 23 23:29:56 Raptor kernel: Hardware name: To Be Filled By O.E.M. To Be Filled By O.E.M./E3C226D2I, BIOS P3.30 06/04/2015
Jan 23 23:29:56 Raptor kernel: task: ffff88040d5bc600 task.stack: ffffc90001934000
Jan 23 23:29:56 Raptor kernel: RIP: 0010:__nf_conntrack_confirm+0x97/0x4d6
Jan 23 23:29:56 Raptor kernel: RSP: 0018:ffff88041fd03908 EFLAGS: 00010202
Jan 23 23:29:56 Raptor kernel: RAX: 0000000000000188 RBX: 00000000000066a8 RCX: 0000000000000001
Jan 23 23:29:56 Raptor kernel: RDX: 0000000000000001 RSI: 0000000000000001 RDI: ffffffff81c0954c
Jan 23 23:29:56 Raptor kernel: RBP: ffff88038a672600 R08: 0000000000000101 R09: ffff88026181d600
Jan 23 23:29:56 Raptor kernel: R10: ffff880409cfa84e R11: 0000000000000006 R12: ffffffff81c8af00
Jan 23 23:29:56 Raptor kernel: R13: 0000000000007fd3 R14: ffff88031766fcc0 R15: ffff88031766fd18
Jan 23 23:29:56 Raptor kernel: FS:  0000000000000000(0000) GS:ffff88041fd00000(0000) knlGS:0000000000000000
Jan 23 23:29:56 Raptor kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
Jan 23 23:29:56 Raptor kernel: CR2: 000014c077257000 CR3: 0000000001c0a006 CR4: 00000000001606e0
Jan 23 23:29:56 Raptor kernel: Call Trace:
Jan 23 23:29:56 Raptor kernel: <IRQ>
Jan 23 23:29:56 Raptor kernel: ipv4_confirm+0xac/0xb4 [nf_conntrack_ipv4]
Jan 23 23:29:56 Raptor kernel: nf_hook_slow+0x31/0x90
Jan 23 23:29:56 Raptor kernel: ip_local_deliver+0xab/0xd3
Jan 23 23:29:56 Raptor kernel: ? inet_del_offload+0x3e/0x3e
Jan 23 23:29:56 Raptor kernel: ip_sabotage_in+0x25/0x2b
Jan 23 23:29:56 Raptor kernel: nf_hook_slow+0x31/0x90
Jan 23 23:29:56 Raptor kernel: ip_rcv+0x2ef/0x343
Jan 23 23:29:56 Raptor kernel: ? ip_local_deliver_finish+0x1b2/0x1b2
Jan 23 23:29:56 Raptor kernel: __netif_receive_skb_core+0x58b/0x6f4
Jan 23 23:29:56 Raptor kernel: netif_receive_skb_internal+0xbb/0xd0
Jan 23 23:29:56 Raptor kernel: br_pass_frame_up+0x12d/0x13a
Jan 23 23:29:56 Raptor kernel: ? br_port_flags_change+0xf/0xf
Jan 23 23:29:56 Raptor kernel: br_handle_frame_finish+0x41a/0x44a
Jan 23 23:29:56 Raptor kernel: ? br_pass_frame_up+0x13a/0x13a
Jan 23 23:29:56 Raptor kernel: br_nf_hook_thresh+0x91/0x9c
Jan 23 23:29:56 Raptor kernel: ? br_pass_frame_up+0x13a/0x13a
Jan 23 23:29:56 Raptor kernel: br_nf_pre_routing_finish+0x268/0x27a
Jan 23 23:29:56 Raptor kernel: ? br_pass_frame_up+0x13a/0x13a
Jan 23 23:29:56 Raptor kernel: ? nf_nat_ipv4_fn+0x114/0x164 [nf_nat_ipv4]
Jan 23 23:29:56 Raptor kernel: ? nf_nat_ipv4_in+0x21/0x68 [nf_nat_ipv4]
Jan 23 23:29:56 Raptor kernel: br_nf_pre_routing+0x2d8/0x2e8
Jan 23 23:29:56 Raptor kernel: ? br_nf_forward_ip+0x32c/0x32c
Jan 23 23:29:56 Raptor kernel: nf_hook_slow+0x31/0x90
Jan 23 23:29:56 Raptor kernel: br_handle_frame+0x29d/0x2d0
Jan 23 23:29:56 Raptor kernel: ? br_pass_frame_up+0x13a/0x13a
Jan 23 23:29:56 Raptor kernel: ? br_handle_local_finish+0x31/0x31
Jan 23 23:29:56 Raptor kernel: __netif_receive_skb_core+0x43c/0x6f4
Jan 23 23:29:56 Raptor kernel: ? inet_gro_receive+0x258/0x26d
Jan 23 23:29:56 Raptor kernel: netif_receive_skb_internal+0xbb/0xd0
Jan 23 23:29:56 Raptor kernel: napi_gro_receive+0x42/0x76
Jan 23 23:29:56 Raptor kernel: igb_poll+0xb63/0xb89 [igb]
Jan 23 23:29:56 Raptor kernel: net_rx_action+0xf6/0x24a
Jan 23 23:29:56 Raptor kernel: __do_softirq+0xc7/0x1bc
Jan 23 23:29:56 Raptor kernel: irq_exit+0x4f/0x8e
Jan 23 23:29:56 Raptor kernel: do_IRQ+0x9f/0xb5
Jan 23 23:29:56 Raptor kernel: common_interrupt+0x98/0x98
Jan 23 23:29:56 Raptor kernel: </IRQ>
Jan 23 23:29:56 Raptor kernel: RIP: 0010:cpuidle_enter_state+0xde/0x130
Jan 23 23:29:56 Raptor kernel: RSP: 0018:ffffc90001937ef8 EFLAGS: 00000246 ORIG_RAX: ffffffffffffff7c
Jan 23 23:29:56 Raptor kernel: RAX: ffff88041fd20900 RBX: 0000000000000000 RCX: 000000000000001f
Jan 23 23:29:56 Raptor kernel: RDX: 00019500b79fa02a RSI: 0000000000020140 RDI: 0000000000000000
Jan 23 23:29:56 Raptor kernel: RBP: ffff88041fd28800 R08: 00055e0defac1cf8 R09: 0000000000000078
Jan 23 23:29:56 Raptor kernel: R10: ffffc90001937ed8 R11: 0000000000000000 R12: 0000000000000005
Jan 23 23:29:56 Raptor kernel: R13: 00019500b79fa02a R14: ffffffff81c592b8 R15: 00019500b6cc4c9e
Jan 23 23:29:56 Raptor kernel: ? cpuidle_enter_state+0xb6/0x130
Jan 23 23:29:56 Raptor kernel: do_idle+0x11a/0x179
Jan 23 23:29:56 Raptor kernel: cpu_startup_entry+0x18/0x1a
Jan 23 23:29:56 Raptor kernel: secondary_startup_64+0xa5/0xb0
Jan 23 23:29:56 Raptor kernel: Code: 48 c1 eb 20 89 1c 24 e8 24 f9 ff ff 8b 54 24 04 89 df 89 c6 41 89 c5 e8 a9 fa ff ff 84 c0 75 b9 49 8b 86 80 00 00 00 a8 08 74 02 <0f> ff 4c 89 f7 e8 03 ff ff ff 49 8b 86 80 00 00 00 0f ba e0 09 
Jan 23 23:29:56 Raptor kernel: ---[ end trace 09e3cd5708ced719 ]---

Call trace on 6.4.0, after just over 5 days of uptime. Nothing seems broken to me, but it's peculiar that I'm still seeing call traces on a 'stable' build ;)

raptor-diagnostics-20180124-1035.zip

Edited by nexusmaniac
Link to comment

An interesting anomaly with v6.4 and the SuperMicro X7SPA-H Atom D525 motherboard. See the posts towards the end of the thread I referenced below by landS and the dialogue we had about his issue. Quick summary: he noticed VERY high disk temps on his first parity check on v6.4. My initial suggestion was that the fans blowing air over the disks had failed, but when he checked he found they were still spinning, just at a very low rpm. I suggested he connect the fans via molex -> fan adapters (i.e. not using the PWM control on the motherboard), and that resolved the issue. Everything had worked fine until 6.4, so apparently something in the update has caused a "glitch" in the PWM control.

 

While landS's issue has been resolved (by not using PWM for the fans), I have to wonder whether this is the only motherboard the issue affects.

 

FWIW, I also have a system using that board, but my fans aren't PWM fans so I don't have the issue.

 

 

Link to comment
10 hours ago, nexusmaniac said:

Call trace on 6.4.0, after just over 5 days of uptime. Nothing seems broken to me, but it's peculiar that I'm still seeing call traces on a 'stable' build ;)

 

I got similar call traces on 6.4.0 stable after assigning an IP address to a docker.  It looks like your call traces are IP related as well.  When I removed the IP address assignment from the docker and let it go back to the same IP as the server, the call traces went away.  Have you done anything lately to modify any of the IP addresses on your server, VMs, dockers, etc.?
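For reference, giving a container its own LAN IP like that is roughly equivalent to a macvlan network on the docker command line; the network name, subnet and parent interface below are only illustrative, not taken from anyone's actual server:

# create a macvlan network attached to the host NIC (example values)
docker network create -d macvlan \
  --subnet=192.168.1.0/24 --gateway=192.168.1.1 \
  -o parent=br0 homenet

# run a container with its own LAN address instead of sharing the host's IP
docker run -d --name test --network homenet --ip 192.168.1.50 alpine sleep infinity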

Link to comment
11 hours ago, garycase said:

An interesting anomaly with v6.4 and the SuperMicro X7SPA-H Atom D525 motherboard. See the posts towards the end of the thread I referenced below by landS and the dialogue we had about his issue. Quick summary: he noticed VERY high disk temps on his first parity check on v6.4. My initial suggestion was that the fans blowing air over the disks had failed, but when he checked he found they were still spinning, just at a very low rpm. I suggested he connect the fans via molex -> fan adapters (i.e. not using the PWM control on the motherboard), and that resolved the issue. Everything had worked fine until 6.4, so apparently something in the update has caused a "glitch" in the PWM control.

 

While landS's issue has been resolved (by not using PWM for the fans), I have to wonder whether this is the only motherboard the issue affects.

 

FWIW, I also have a system using that board, but my fans aren't PWM fans so I don't have the issue.

 

 

 

I wonder if I'm having a similar issue. I noticed that temps were getting high.
Trying to look into it, I installed "Dynamix System Temperature"; however, the settings just blank out after I set it up, so where it detected fan speeds, the selection boxes suddenly go empty.
I went to the support thread for the plugin, but there's been no word from the maker of the plugins. Maybe he's on holiday >.<


I also installed "Dynamix Auto Fan Control" to manage the fan speeds, but it doesn't display speeds or anything. So while it says it's on, I can't tell if it actually is.

Link to comment
2 hours ago, Ryonez said:

 

I wonder if I'm having a similar issue. I noticed that temps were getting high.
Trying to look into it, I installed "Dynamix System Temperature"; however, the settings just blank out after I set it up, so where it detected fan speeds, the selection boxes suddenly go empty.
I went to the support thread for the plugin, but there's been no word from the maker of the plugins. Maybe he's on holiday >.<


I also installed "Dynamix Auto Fan Control" to manage the fan speeds, but it doesn't display speeds or anything. So while it says it's on, I can't tell if it actually is.

I can confirm I'm having the same issues with my SuperMicro board; I just set the PWM to 120 using the shell for the time being. A bit more fan noise, but rather that than the disks heating up.
It's a community plugin, and the fan control is in beta I think, so sometimes you have to have patience or be creative yourself.

Link to comment
16 hours ago, landS said:

Oi!  Would you be so kind as to share the Magick it takes to accomplish such a feat!   Or the terminal command in Lieu of that :)

http://kmwoley.com/blog/controlling-case-fans-based-on-hard-drive-temperature/

I used the first steps; before you can run sensors-detect you have to install the NerdPack plugin and perl from within the pack.

I didn't use the script myself, I just set the PWM value by hand to get stable 30°C disk temps in my array.

Depending on your pwmconfig output, you will know which pwm to configure.
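A rough sketch of doing it by hand through sysfs; the hwmon number and pwm channel differ per board, so confirm them with pwmconfig first (hwmon2/pwm2 below are placeholders):

# find the Super I/O sensor chip
grep . /sys/class/hwmon/hwmon*/name

# switch the header to manual control, then set a fixed duty cycle (0-255)
echo 1   > /sys/class/hwmon/hwmon2/pwm2_enable
echo 120 > /sys/class/hwmon/hwmon2/pwm2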

Link to comment

Noticed a STRANGE display anomaly this afternoon ...

 

The following snippets are from my 3 servers ...  (all have been updated to 6.4 in the past couple weeks)

 

This one has some disks spun up, some spun down, and shows the correct indicators ...

 

Disk Status snippet 1.jpg

 

This one has all disks spun up, and also shows the correct indicators ....

 

Disk Status snippet 2.jpg

 

This one has all disks spun up (note the temps are displayed) ... but is missing the "green ball" to show the spun up status.

 

Disk Status snippet 3.jpg

 

... but the same server DOES show the white ball to indicate spun down status (note I spun up disk 2 after spinning them all down to see if that would change the spun up indicator back to normal -- it did not).

 

Disk Status snippet 4.jpg

 

I've never seen this before -- and the server that had this issue has been working just fine.

 

FWIW I powered down the server and then rebooted it, and the display is now working normally.   Just curious if anyone has any idea about what might have happened.

 

Link to comment
11 hours ago, SiNtEnEl said:

http://kmwoley.com/blog/controlling-case-fans-based-on-hard-drive-temperature/

I used the first steps; before you can run sensors-detect you have to install the NerdPack plugin and perl from within the pack.

I didn't use the script myself, I just set the PWM value by hand to get stable 30°C disk temps in my array.

Depending on your pwmconfig output, you will know which pwm to configure.

 

Lovely.  I installed the original fans back on the headers, and the results are one fan at pwm 90 and the other at pwm 127... under full disk load. 255 is the maximum PWM value, so this explains why I saw 700rpm on one fan under IPMI... and the big jump in disk temps.

 

Back to running the fans directly from a SATA power adapter for now.

Edited by landS
Link to comment
On 1/24/2018 at 9:28 PM, Hoopster said:

 

I got similar call traces on 6.4.0 stable after assigning an IP address to a docker.  It looks like your call traces are IP related as well.  When I removed the IP address assignment from the docker and let it go back to the same IP as the server, the call traces went away.  Have you done anything lately to modify any of the IP addresses on your server, VMs, dockers, etc.?

 

Oh right... I had no idea that was the cause! I've been getting call traces on and off for a little while now (I've been on all the beta releases of 6.4).

I have a macvlan set up on my 2nd Plex docker, but that's been there since 6.2/6.3 and I shan't be removing it! I need it haha :D

 

It seems peculiar that that would be the cause of call traces though, wouldn't you say? Maybe there's a better solution than macvlan for my docker container, but I haven't found one haha (it's set up through the GUI on a bridged connection, IIRC).

Link to comment
1 hour ago, nexusmaniac said:

It seems peculiar that that would be the cause of call traces though, wouldn't you say? Maybe there's a better solution than macvlan for my docker container, but I haven't found one haha (it's set up through the GUI on a bridged connection, IIRC).

 

All I know is that the call traces appeared on my server after assigning an IP address to a docker, and they went away when I removed the IP assignment.  Why that causes call traces I do not know, but there are other reports of the same in the general support forum.

 

Certainly it should work, and removing the IP address cannot be the real solution.  I have not followed the guide by bonienl as, for now, I am working on other things with my server, but perhaps the thread below will be helpful to you:

 

 

Link to comment

Hello, I have a problem with an upgrade from 6.3.5 to 6.4.0.

A Slackware64 14.2 VM has a kernel panic at startup.

 

The server is new; I just created two VMs, a Debian and a Slackware, and they were both up and running on 6.3.5.

That's all; I had not even started doing anything with them, just created and started them.

With 6.4.0, the Debian VM is still running fine,

but not the Slackware one. The boot screen appears and the kernel starts, but it panics immediately.

If I try to create another VM, I get the GRUB page and nothing more after that.

 

Here are the diagnostics and screenshots.

 

Pascal.

 

           

 

 


boot page (Slackware) .png

failed boot(Slackware) .png

aserv1-diagnostics-20180126-1411.zip

Link to comment
On 1/12/2018 at 12:23 PM, limetech said:

Add this line just before emhttp is invoked:

zenstates --c6-disable

 

Based on my testing with another file in the /usr/local/sbin directory, you have to specify the full path to zenstates or it won't have any effect:

/usr/local/sbin/zenstates --c6-disable

This was also mentioned by someone here:

   https://lime-technology.com/forums/topic/66327-unraid-os-version-640-stable-release-update-notes/?tab=comments#comment-621611
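For anyone unsure where that goes: it ends up in /boot/config/go, just above the emhttp line. The stock file is tiny, so after the edit it would look roughly like this:

#!/bin/bash
# disable C6 states on Ryzen before the array/webGUI start
/usr/local/sbin/zenstates --c6-disable

# Start the Management Utility
/usr/local/sbin/emhttp &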

 

Link to comment

Google "Unraid - upgrade to 6.4 best practice(s)" and see the result(s).

 

As a newbie I find the result puzzling and unrewarding.

 

I (naively) simply clicked "update" on the OS server plugin and (very much) wish I hadn't.  No warnings, cautions, or caveats were displayed.  "What?  Me worry?"

 

Color me unhappy.

 

Call me  

 

WTF?

 

 

 

 

Link to comment
3 minutes ago, WTF? said:

Google "Unraid - upgrade to 6.4 best practice(s)" and see the result(s).

 

As a newbie I find the result puzzling and unrewarding.

 

I (naively) simply clicked "update" on the OS server plugin and (very much) wish I hadn't.  No warnings, cautions, or caveats were displayed.  "What?  Me worry?"

 

Color me unhappy.

 

Call me  

 

WTF?

 

 

 

 

If you are saying that you did not want to update, you can easily revert. The previous system files are stored in /boot/previous on the flash drive. Just shut down the server and copy those files back to the root of your flash drive.
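Roughly, from the console (or with the stick mounted on another machine), assuming the standard bz* boot files are what's in that folder:

# copy the previous release's boot files back over the current ones
cp /boot/previous/bz* /boot/
# then power the server down and boot it again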

Link to comment
  • limetech unpinned and locked this topic
This topic is now closed to further replies.