unRAID Server Release 4.6-rc3 Available


limetech


Before doing that, just type:

cat /boot/syslinux.cfg

 

If it says something like this, then it might just be your browser cache that needs to be cleared:

root@Tower:/boot# cat syslinux.cfg

default menu.c32

menu title Lime Technology LLC

prompt 0

timeout 100

label unRAID OS

 menu default

 kernel bzimage

 append initrd=bzroot

label Memtest86+

 kernel memtest

 

You can also type this command to see which version is showing in your system log:

grep -i unraid /var/log/syslog*

 

Link to comment

I know that this release wasn't intended to be a major revision, but I wonder whether it would be possible to address the issue I raised here.

 

I believe that the crucial point is in the last post ... a non-zero/unique fsid needs to be generated for the nfs cache share, in the same way that it is for user shares.
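
For illustration only (this is generic Linux NFS export syntax, not necessarily what emhttp writes): a per-share fsid is normally set with the fsid= export option, so an exports entry for the cache share could look something like this, with the path and number as placeholders:

/mnt/cache *(rw,fsid=101)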

 

Let's move this discussion to the thread you indicate above.

Link to comment


Joe,

 

Thanks for the help

 

This is what I get:

 

root@storage:/boot#

root@storage:/boot# cat syslinux.cfg

default menu.c32

menu title Lime Technology LLC

prompt 0

timeout 60

label unRAID OS

  kernel bzimage

  append initrd=bzroot rootdelay=10

 

label Memtest86+

  kernel memtest

 

label BubbaRaid

  menu default

  kernel bu_image

  append initrd=bu_root rootdelay=10
root@storage:/boot#

root@storage:/boot#

root@storage:/boot#

root@storage:/boot# grep -i unraid /var/log/syslog*

Nov 24 22:41:36 storage kernel: Linux version 2.6.27.7-unRAID-Bubba (root@d5) (gcc version 4.2.3) #1 SMP Wed Jan 7 13:47:04 GMT+5 2009

Nov 24 22:41:36 storage emhttp: unRAID System Management Utility version 4.4.2

Nov 24 22:41:36 storage kernel: md: unRAID driver 0.95.0 installed

Nov 24 22:41:38 storage kernel: unraid: allocated 7096kB

root@storage:/boot#

 

It's not a browser cache issue, as I am seeing it in different browsers.

Link to comment

You are running BubbaRAID; you need to disable BubbaRAID before you can run ANY version of unRAID besides 4.4.2.
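
For what it's worth, the "menu default" line in the syslinux.cfg you posted sits under the BubbaRaid label, which is why the BubbaRaid kernel is booting. A sketch of the stock entry, based on your own output (adjust or remove the BubbaRaid section as you see fit), would be:

label unRAID OS
  menu default
  kernel bzimage
  append initrd=bzroot rootdelay=10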

Link to comment

Got the fan control working again. So would I just put those two lines in my go file and be good?

 

Yup.

 

Release -rc4 will have the same kernel as 4.5.6, so this will be unnecessary for you.

 

Why will we go back to an older kernel? So far 4.6-rc3 seems very good.

 

Thank you Tom.

Link to comment

It's the same kernel.  Just some hwmon drivers were restored as built-ins.  All I wanted to do was fix the kernel oops.
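
(If I'm reading the workaround right, the interim fix is just loading those hwmon drivers from /boot/config/go until -rc4. The module names below are placeholders only; use whichever sensor modules your board actually needs.)

# appended to /boot/config/go -- example module names only
modprobe it87
modprobe k8temp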

Link to comment

Joe, prostuff1 & gfjardim: thanks for your help. Disabling BubbaRAID did the trick.

 

J

Link to comment

Hello all, I've been experiencing a lot of kernel panics lately, even with the recent 4.6-rc2 and rc3 builds. Unfortunately, I'm coming pretty late to this discussion, so I don't know if my errors are the same "kernel oops" that these builds are supposed to address, but my errors certainly do have a line that starts with "Oops" in them...

 

FYI, hdc is my 80GB PATA cache drive (all other drives are SATA):

 

Nov 27 03:40:01 AJs_Unraid logger: mover started
Nov 27 03:40:01 AJs_Unraid logger: ./Matrix/Movies (By Genre)/Comedy/Office Space [1999].iso
Nov 27 03:40:01 AJs_Unraid logger: .d..t...... ./
Nov 27 03:40:01 AJs_Unraid logger: .d..t...... Matrix/
Nov 27 03:40:01 AJs_Unraid logger: .d..t...... Matrix/Movies (By Genre)/
Nov 27 03:40:01 AJs_Unraid logger: .d..t...... Matrix/Movies (By Genre)/Comedy/
Nov 27 03:40:01 AJs_Unraid logger: >f+++++++++ Matrix/Movies (By Genre)/Comedy/Office Space [1999].iso
Nov 27 03:41:17 AJs_Unraid kernel: BUG: unable to handle kernel paging request at 80000004
Nov 27 03:41:17 AJs_Unraid kernel: IP: [] find_get_page+0x38/0x79
Nov 27 03:41:17 AJs_Unraid kernel: *pdpt = 000000002ccd7001 *pde = 0000000000000000 
Nov 27 03:41:17 AJs_Unraid kernel: Oops: 0000 [#1] SMP 
Nov 27 03:41:17 AJs_Unraid kernel: last sysfs file: /sys/devices/pci0000:00/0000:00:14.1/ide1/1.0/block/hdc/stat
Nov 27 03:41:17 AJs_Unraid kernel: Modules linked in: md_mod xor ide_gd_mod atiixp ahci r8169
Nov 27 03:41:17 AJs_Unraid kernel: 
Nov 27 03:41:17 AJs_Unraid kernel: Pid: 8859, comm: shfs Tainted: G W (2.6.32.9-unRAID #7) A760G M2+
Nov 27 03:41:17 AJs_Unraid kernel: EIP: 0060:[] EFLAGS: 00010a83 CPU: 2
Nov 27 03:41:17 AJs_Unraid kernel: EIP is at find_get_page+0x38/0x79
Nov 27 03:41:17 AJs_Unraid kernel: EAX: 7fffffff EBX: 80000000 ECX: 80000000 EDX: 00000000
Nov 27 03:41:17 AJs_Unraid kernel: ESI: f07ef820 EDI: 00000000 EBP: eccdbcac ESP: eccdbc94
Nov 27 03:41:17 AJs_Unraid kernel: DS: 007b ES: 007b FS: 00d8 GS: 0033 SS: 0068
Nov 27 03:41:17 AJs_Unraid kernel: Process shfs (pid: 8859, ti=eccda000 task=c215e940 task.ti=eccda000)
Nov 27 03:41:17 AJs_Unraid kernel: Stack:
Nov 27 03:41:17 AJs_Unraid kernel: f7284b2c 0000820c 073e56bc 073e56bc 00000000 00000000 eccdbcd0 c10861b8
Nov 27 03:41:17 AJs_Unraid kernel: <0> f7284a78 f7284b28 10000000 05040000 073e56bc 00000000 f7284a00 eccdbce4
Nov 27 03:41:17 AJs_Unraid kernel: <0> c1086cbe 00000000 eb218e80 00000001 eccdbd44 c1087297 c143a780 fffff000
Nov 27 03:41:17 AJs_Unraid kernel: Call Trace:
Nov 27 03:41:17 AJs_Unraid kernel: [] ? __find_get_block_slow+0x42/0xf8
Nov 27 03:41:17 AJs_Unraid kernel: [] ? unmap_underlying_metadata+0x1c/0x4e
Nov 27 03:41:17 AJs_Unraid kernel: [] ? __block_prepare_write+0x16e/0x30c
Nov 27 03:41:17 AJs_Unraid kernel: [] ? add_to_page_cache_locked+0x63/0x95
Nov 27 03:41:17 AJs_Unraid kernel: [] ? block_write_begin+0x75/0xce
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_get_block+0x0/0x10a3
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_write_begin+0x118/0x197
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_get_block+0x0/0x10a3
Nov 27 03:41:17 AJs_Unraid kernel: [] ? generic_file_buffered_write+0xb9/0x1de
Nov 27 03:41:17 AJs_Unraid kernel: [] ? __generic_file_aio_write+0x3ca/0x404
Nov 27 03:41:17 AJs_Unraid kernel: [] ? generic_file_aio_write+0x54/0x95
Nov 27 03:41:17 AJs_Unraid kernel: [] ? do_sync_write+0xbb/0xf9
Nov 27 03:41:17 AJs_Unraid kernel: [] ? do_sync_read+0xbb/0xf9
Nov 27 03:41:17 AJs_Unraid kernel: [] ? autoremove_wake_function+0x0/0x30
Nov 27 03:41:17 AJs_Unraid kernel: [] ? autoremove_wake_function+0x0/0x30
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_file_write+0x6b/0x74
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_file_write+0x0/0x74
Nov 27 03:41:17 AJs_Unraid kernel: [] ? vfs_write+0x8c/0x116
Nov 27 03:41:17 AJs_Unraid kernel: [] ? sys_pwrite64+0x44/0x5d
Nov 27 03:41:17 AJs_Unraid kernel: [] ? syscall_call+0x7/0xb
Nov 27 03:41:17 AJs_Unraid kernel: Code: 55 f0 89 45 e8 8b 55 f0 8b 45 e8 e8 75 ab 12 00 85 c0 89 c6 74 44 8b 08 83 cb ff f6 c1 01 0f 44 d9 8d 43 ff 89 d9 83 f8 fd 77 da <8b> 53 04 85 d2 74 d3 8d 42 01 89 c7 89 d0 f0 0f b1 79 04 39 d0 
Nov 27 03:41:17 AJs_Unraid kernel: EIP: [] find_get_page+0x38/0x79 SS:ESP 0068:eccdbc94
Nov 27 03:41:17 AJs_Unraid kernel: CR2: 0000000080000004
Nov 27 03:41:17 AJs_Unraid kernel: ---[ end trace 4a3417395aeccedb ]---
Nov 27 03:41:17 AJs_Unraid kernel: ------------[ cut here ]------------
Nov 27 03:41:17 AJs_Unraid kernel: WARNING: at kernel/exit.c:895 do_exit+0x2b/0x508()
Nov 27 03:41:17 AJs_Unraid kernel: Hardware name: A760G M2+
Nov 27 03:41:17 AJs_Unraid kernel: Modules linked in: md_mod xor ide_gd_mod atiixp ahci r8169
Nov 27 03:41:17 AJs_Unraid kernel: Pid: 8859, comm: shfs Tainted: G D W 2.6.32.9-unRAID #7
Nov 27 03:41:17 AJs_Unraid kernel: Call Trace:
Nov 27 03:41:17 AJs_Unraid kernel: [] warn_slowpath_common+0x60/0x77
Nov 27 03:41:17 AJs_Unraid kernel: [] warn_slowpath_null+0xd/0x10
Nov 27 03:41:17 AJs_Unraid kernel: [] do_exit+0x2b/0x508
Nov 27 03:41:17 AJs_Unraid kernel: [] ? print_oops_end_marker+0x1e/0x23
Nov 27 03:41:17 AJs_Unraid kernel: [] oops_end+0x75/0x7c
Nov 27 03:41:17 AJs_Unraid kernel: [] no_context+0x14b/0x155
Nov 27 03:41:17 AJs_Unraid kernel: [] __bad_area_nosemaphore+0xe0/0xe8
Nov 27 03:41:17 AJs_Unraid kernel: [] __bad_area+0x2e/0x37
Nov 27 03:41:17 AJs_Unraid kernel: [] bad_area+0xd/0x10
Nov 27 03:41:17 AJs_Unraid kernel: [] do_page_fault+0x135/0x1e4
Nov 27 03:41:17 AJs_Unraid kernel: [] ? do_page_fault+0x0/0x1e4
Nov 27 03:41:17 AJs_Unraid kernel: [] error_code+0x66/0x6c
Nov 27 03:41:17 AJs_Unraid kernel: [] ? do_page_fault+0x0/0x1e4
Nov 27 03:41:17 AJs_Unraid kernel: [] ? find_get_page+0x38/0x79
Nov 27 03:41:17 AJs_Unraid kernel: [] __find_get_block_slow+0x42/0xf8
Nov 27 03:41:17 AJs_Unraid kernel: [] unmap_underlying_metadata+0x1c/0x4e
Nov 27 03:41:17 AJs_Unraid kernel: [] __block_prepare_write+0x16e/0x30c
Nov 27 03:41:17 AJs_Unraid kernel: [] ? add_to_page_cache_locked+0x63/0x95
Nov 27 03:41:17 AJs_Unraid kernel: [] block_write_begin+0x75/0xce
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_get_block+0x0/0x10a3
Nov 27 03:41:17 AJs_Unraid kernel: [] reiserfs_write_begin+0x118/0x197
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_get_block+0x0/0x10a3
Nov 27 03:41:17 AJs_Unraid kernel: [] generic_file_buffered_write+0xb9/0x1de
Nov 27 03:41:17 AJs_Unraid kernel: [] __generic_file_aio_write+0x3ca/0x404
Nov 27 03:41:17 AJs_Unraid kernel: [] generic_file_aio_write+0x54/0x95
Nov 27 03:41:17 AJs_Unraid kernel: [] do_sync_write+0xbb/0xf9
Nov 27 03:41:17 AJs_Unraid kernel: [] ? do_sync_read+0xbb/0xf9
Nov 27 03:41:17 AJs_Unraid kernel: [] ? autoremove_wake_function+0x0/0x30
Nov 27 03:41:17 AJs_Unraid kernel: [] ? autoremove_wake_function+0x0/0x30
Nov 27 03:41:17 AJs_Unraid kernel: [] reiserfs_file_write+0x6b/0x74
Nov 27 03:41:17 AJs_Unraid kernel: [] ? reiserfs_file_write+0x0/0x74
Nov 27 03:41:17 AJs_Unraid kernel: [] vfs_write+0x8c/0x116
Nov 27 03:41:17 AJs_Unraid kernel: [] sys_pwrite64+0x44/0x5d
Nov 27 03:41:17 AJs_Unraid kernel: [] syscall_call+0x7/0xb
Nov 27 03:41:17 AJs_Unraid kernel: ---[ end trace 4a3417395aeccedc ]---

 

FYI, I've also posted my syslog (zipped), which admittedly has a lot of messages dealing with duplicate files.  I have since removed the dups, which I believe were "created" by recovering from a few forced reboots with reiserfsck (essentially, I moved the files to a different disk and subsequently had a couple of forced reboots, ran reiserfsck, and I think it recovered the "deletes after move")...

syslog-2010-11-27.txt.zip

Link to comment

Hey Brit,

 

Thanks for your reply!  Before I upgraded to the 4.6rc builds, I was running 4.5.6 (and still experiencing quite a few crashes).  I ran memtest for at least 8 hours a couple weeks ago, probably closer to 10 (I let it run before I headed to work).  It ran without any errors and so I canceled out and proceeded to do my reiserfsck's...

 

I can certainly try again if you think I should be on the lookout for something specific...

 

thx

-alex

Link to comment

Alright, will do.

 

A quick question while I'm performing the reiserfsck's...  What is the suggested course of action after running "--fix-fixable" when fixable corruptions are found?  Should I preclear the disk and restore from parity?  The following article (http://lime-technology.com/wiki/index.php?title=Check_Disk_Filesystems) makes it sound like it's not necessary.

 

The reason I ask is that I just performed my last reiserfsck's on Wednesday (the day before Thanksgiving) night. It found some corruptions, all fixable, so I ran it (and --check) for multiple iterations until no corruptions were found on each disk. Most of my reboots are clean (via the web interface), but lately I've been encountering scenarios where the GUI is not responding and/or some file is still open (and lsof doesn't return anything or isn't responding), and my only recourse is to force a shutdown/reboot (e.g. after trying what's outlined at http://lime-technology.com/forum/index.php?topic=8235.msg79660#msg79660).

 

thanks!

-alex

Link to comment

Alright, will do.

 

A quick question while I'm performing the reiserfsck's...  What is the suggested course of action after running "--fix-fixable" when fixable corruptions are found?  Should I preclear the disk and restore from parity?  The following article (http://lime-technology.com/wiki/index.php?title=Check_Disk_Filesystems) makes it sound like it's not necessary.

If you ran the reiserfsck on the /dev/mdX device, there is no need to do any rebuilding.  It is already fixed on the disk.

 

If you ran the reiserfsck on the /dev/sdX devices, then you have to completely rebuild parity afterwards by pressing "Check"; it will find the parity errors related to the fixes on the physical disk and correct them.
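
As a sketch (the disk number is only an example), the md-device sequence would be something like:

reiserfsck --check /dev/md1
reiserfsck --fix-fixable /dev/md1   # only if --check reported fixable corruptions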

The reason I ask is that I just performed my last reiserfsck's on Wednesday (the day before Thanksgiving) night. It found some corruptions, all fixable, so I ran it (and --check) for multiple iterations until no corruptions were found on each disk. Most of my reboots are clean (via the web interface), but lately I've been encountering scenarios where the GUI is not responding and/or some file is still open (and lsof doesn't return anything or isn't responding), and my only recourse is to force a shutdown/reboot (e.g. after trying what's outlined at http://lime-technology.com/forum/index.php?topic=8235.msg79660#msg79660).

If you've fixed your file systems and there were no more corruptions, and then a subsequent corruption is found (when you check now), then you have an entirely different issue.  It is VERY difficult to find, as it probably indicates one of your hard disks is returning different data than what was written to it, but not erroring out.  It could be the disk itself, a disk sensitive to noise on the power supply or a disk controller port, or a chipset issue on the motherboard controller.

 

You can confirm that is occurring by running repeated parity "Checks".  There should NEVER be an error in a subsequent check, even if one is detected/fixed in the first.

 

Link to comment

Hi, something I noticed in my logs:

 

Nov 29 17:37:33 kenny emhttp: shcmd (68): /usr/sbin/hdparm -y /dev/hdb >/dev/null

 

hdb is my cache drive. Every so often the system tries to spin it down. Is this a normal occurrence? I don't notice a trend, possibly more when I open the console.

 

PS: I'm still on RC2.

Link to comment

Yes, this is normal. The array disks are put in standby by the MD driver, but the cache disk isn't part of the array, so emhttp uses hdparm to spin it down.
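
If you want to verify it, you can check and change the drive's power state yourself with hdparm from the console (device name taken from your log):

hdparm -C /dev/hdb   # report current power mode (active/idle or standby)
hdparm -y /dev/hdb   # put the drive into standby, the same call emhttp issues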

Link to comment