BUG: Objects remaining on kmem_cache_close()


Posted

Just tried shutting down my unRAID box (b11), so that I could upgrade to b12, and got this error in the syslog:

 

unraid: unraid_stop() called with 1 active stripes!

=============================================================================

BUG unraid/md: Objects remaining on kmem_cache_close()

-----------------------------------------------------------------------------

 

I've got SABnzbd, Sickbeard, and Plex installed on the box, and had clicked the stop button for all of them.

However, there was still one Plex process running, with files open on the cache drive.

I ssh'd in, and kill -9'd it, which is something I've done before when the array was stuck on 'Unmounting'.

 

As soon as I'd done this, then the following messages were output on the console, and into the syslog:

 

mdcmd (22): spinup 0

mdcmd (23): spinup 1

mdcmd (24): spinup 2

mdcmd (25): spinup 3

mdcmd (26): spinup 4

mdcmd (27): stop

md1: stopping

md: recovery thread woken up ...

md: recovery thread checking parity...

md: using 1152k window, over a total of 1953514552 blocks.

md: md_do_sync: got signal, exit...

md2: stopping

md3: stopping

md4: stopping

unraid: unraid_stop() called with 1 active stripes!

=============================================================================

BUG unraid/md: Objects remaining on kmem_cache_close()

-----------------------------------------------------------------------------

 

INFO: Slab 0xf2787b00 objects=13 used=1 fp=0xf61db840 flags=0x40004080

Pid: 1290, comm: emhttp Not tainted 2.6.37.6-unRAID #4

Call Trace:

[<c1078444>] slab_err+0x51/0x58

[<c10481b7>] ? smp_call_function_single+0xc3/0xd4

[<c1078c99>] ? flush_cpu_slab+0x0/0x3b

[<c107abe7>] T.880+0x4a/0xde

[<c107ad24>] kmem_cache_destroy+0xa9/0x147

[<f84d70ad>] shrink_stripes+0x98/0xa7 [md_mod]

[<f84d70ec>] unraid_stop+0x30/0x4d [md_mod]

[<f84d4177>] do_stop+0x80/0xa7 [md_mod]

[<f84d66e5>] T.835+0x6e/0x93 [md_mod]

[<f84d6cdd>] md_proc_write+0x4a1/0x67c [md_mod]

[<c108c65f>] ? inode_init_always+0x171/0x17a

[<c108c69c>] ? alloc_inode+0x34/0x62

[<c108c6f4>] ? get_new_inode_fast+0x2a/0xeb

[<c103b5c1>] ? wake_up_bit+0x57/0x5b

[<f84d40ac>] ? md_proc_open+0xd/0x1b [md_mod]

[<c10abc92>] ? proc_reg_open+0x90/0xef

[<f84d40ba>] ? md_proc_release+0x0/0x11 [md_mod]

[<c107b332>] ? __dentry_open+0x118/0x1da

[<c108362f>] ? path_put+0x20/0x23

[<c10845a2>] ? finish_open+0x111/0x134

[<c108362f>] ? path_put+0x20/0x23

[<c108626b>] ? do_filp_open+0x3d4/0x43a

[<c10af336>] proc_file_write+0x4c/0x60

[<c10abf79>] proc_reg_write+0x5a/0x6e

[<c10af2ea>] ? proc_file_write+0x0/0x60

[<c107cb19>] vfs_write+0x8a/0xfd

[<c10abf1f>] ? proc_reg_write+0x0/0x6e

[<c107cc23>] sys_write+0x3b/0x60

[<c1320325>] syscall_call+0x7/0xb

INFO: Object 0xf61daee0 @offset=12000

SLUB unraid/md: kmem_cache_destroy called for cache that still has objects.

Pid: 1290, comm: emhttp Not tainted 2.6.37.6-unRAID #4

Call Trace:

[<c131e61b>] ? printk+0xf/0x11

[<c107ad7c>] kmem_cache_destroy+0x101/0x147

[<f84d70ad>] shrink_stripes+0x98/0xa7 [md_mod]

[<f84d70ec>] unraid_stop+0x30/0x4d [md_mod]

[<f84d4177>] do_stop+0x80/0xa7 [md_mod]

[<f84d66e5>] T.835+0x6e/0x93 [md_mod]

[<f84d6cdd>] md_proc_write+0x4a1/0x67c [md_mod]

[<c108c65f>] ? inode_init_always+0x171/0x17a

[<c108c69c>] ? alloc_inode+0x34/0x62

[<c108c6f4>] ? get_new_inode_fast+0x2a/0xeb

[<c103b5c1>] ? wake_up_bit+0x57/0x5b

[<f84d40ac>] ? md_proc_open+0xd/0x1b [md_mod]

[<c10abc92>] ? proc_reg_open+0x90/0xef

[<f84d40ba>] ? md_proc_release+0x0/0x11 [md_mod]

[<c107b332>] ? __dentry_open+0x118/0x1da

[<c108362f>] ? path_put+0x20/0x23

[<c10845a2>] ? finish_open+0x111/0x134

[<c108362f>] ? path_put+0x20/0x23

[<c108626b>] ? do_filp_open+0x3d4/0x43a

[<c10af336>] proc_file_write+0x4c/0x60

[<c10abf79>] proc_reg_write+0x5a/0x6e

[<c10af2ea>] ? proc_file_write+0x0/0x60

[<c107cb19>] vfs_write+0x8a/0xfd

[<c10abf1f>] ? proc_reg_write+0x0/0x6e

[<c107cc23>] sys_write+0x3b/0x60

[<c1320325>] syscall_call+0x7/0xb

 

Is this a known issue?

Do I have anything to worry about from a data integrity point of view?

 

Thanks,

 

Andy.

 

Posted

A kill -9 (SIGKILL) cannot be caught or ignored.  Therefore the process you terminated had no chance to terminate cleanly.

 

In any case, it might still be a bug, but use "kill" and not "kill -9" in the future.  That gives programs the chance to shut down in a sane state if they can.
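The polite-then-forceful approach can be sketched in shell. This is purely illustrative: the function name and five-second grace period are made up and are not part of any unRAID tooling.

```shell
# Illustrative helper: send SIGTERM first, and fall back to SIGKILL
# only if the process is still alive after a short grace period.
stop_gracefully() {
  pid=$1
  kill -TERM "$pid" 2>/dev/null || return 0   # no such process
  i=0
  while [ "$i" -lt 5 ]; do
    kill -0 "$pid" 2>/dev/null || return 0    # exited cleanly
    sleep 1
    i=$((i + 1))
  done
  kill -KILL "$pid" 2>/dev/null               # last resort
}
```

`kill -0` sends no signal at all; it only checks whether the PID still exists, which is what makes the polling loop safe.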

Posted

Thanks for replying, Joe.

 

I'd already tried a 'nicer' kill on it, and it failed to die. Something was up with Plex, as the main shutdown script had failed to clean up this process.

 

Not sure why killing a userland Plex process would impact the unraid_stop() mechanism.

Posted


This might be a bug in -beta11 that is now fixed in -beta12, or it could be a new bug, or some interaction between the plugin and the shutdown process.  If you had posted the entire system log, I might have been able to determine which.

Posted

Hi Tom,

 

Complete syslog is now attached.

 

Some further information: Plex was installed, and being run, via the plugin created by Stokkes in this thread:

http://lime-technology.com/forum/index.php?topic=14446.0

 

Install directory was /mnt/cache/.plex for the main binaries, and /mnt/cache/.plexlib for the user application data.

 

I had used his stop mechanism from within the unRAID GUI, under Utils -> Plex Media Server, but it did not stop all of the processes. I did not realise this when I clicked Stop in the unRAID GUI to stop the array. The array then repeatedly tried to unmount the disks, so I went looking for a stray process, found the Plex one, and attempted to kill it. It wouldn't die with a standard kill, so I had to kill -9 it.
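The stray-process hunt described above can be scripted. A minimal sketch (the function name is hypothetical, and /mnt/cache is assumed to be the cache mount point) that walks /proc to list processes holding files open under a directory:

```shell
# Illustrative: print "PID path" for every open file descriptor that
# points under the given directory, by reading /proc/*/fd symlinks.
open_files_under() {
  dir=$1
  for fd in /proc/[0-9]*/fd/*; do
    target=$(readlink "$fd" 2>/dev/null) || continue
    case "$target" in
      "$dir"/*)
        pid=${fd#/proc/}
        pid=${pid%%/*}          # /proc/<pid>/fd/<n> -> <pid>
        echo "$pid $target"
        ;;
    esac
  done
}

# Example: open_files_under /mnt/cache
```

Where the tools are installed, `lsof +D /mnt/cache` or `fuser -vm /mnt/cache` do the same job; the /proc walk only sees processes you have permission to inspect, so run it as root to catch everything.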

 

I've since discovered that running Plex from this plugin is somewhat flaky, even if I install Plex separately and just use the plugin to launch the server. I see weird behaviour, such as media directories being ignored that are detected fine if I launch the server from the command line. Not sure why this is, and I haven't had a chance to post about it on the forum thread yet.

 

If you need any more information, please let me know.

 

Regards,

 

Andy.

syslog.stripe.error.txt

Archived

This topic is now archived and is closed to further replies.
