
[SOLVED] SAS Errors after spinning down drives with hdparm



I am using unRAID 4.7, so I was unsure if this is the proper place to post, but I don't see a 4.7 support area.

My system log is attached.

 

Alright, first some background info:

I have two drives hooked up in my server that are not part of the array. I figure if I have a disk go down, or need to add space on the fly, I can just stop the array and add one of the new disks and be on my way, as opposed to having to unplug everything and take it to my workstation, etc.

Well, these drives never spin down (see this thread: http://lime-technology.com/forum/index.php?topic=6379.0). In short, they are spun up each time unMenu checks their status. So when I start up the server, I telnet in and run hdparm -Y /dev/sdb and hdparm -Y /dev/sdc to spin them down, and as long as I'm not in the unMenu interface long enough for it to query the drive status, they stay spun down and everything is kosher.
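
For reference, the full routine I run from the telnet session looks like this (sdb and sdc are just where my two spares happen to land on my system; the -C flag only issues a CHECK POWER MODE command, so it reads the power state without waking the drive):

# put both spare drives to sleep
hdparm -Y /dev/sdb
hdparm -Y /dev/sdc
# verify the power state without spinning them back up
hdparm -C /dev/sdb
hdparm -C /dev/sdc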

 

However, today after a restart I went to do my usual spin-down routine and noticed the server was returning my hdparm requests more slowly than usual. So I went to unMenu and clicked Disk Management, and that is when all hell broke loose (see attachment for the full syslog). unMenu became unresponsive, but I was still able to access telnet, and I attempted to issue a clean powerdown. The monitor showed it was attempting to power down, but it hung, and I eventually turned it off by holding down the power button.

The first two lines below show me jumping in on telnet to see what exactly was going on. The rest of the sample is just errors.

 

Apr  5 12:58:33 VOID in.telnetd[17821]: connect from 192.168.0.253 (192.168.0.253)
Apr  5 12:58:41 VOID login[17822]: ROOT LOGIN  on `pts/0' from `192.168.0.253'
Apr  5 12:59:25 VOID kernel: sas: command 0xf62d4b40, task 0xc3d1e000, timed out: BLK_EH_NOT_HANDLED
Apr  5 12:59:25 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 12:59:25 VOID kernel: sas: trying to find task 0xc3d1e000
Apr  5 12:59:25 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e000
Apr  5 12:59:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 12:59:25 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e000
Apr  5 12:59:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 12:59:25 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e000 failed to abort
Apr  5 12:59:25 VOID kernel: sas: task 0xc3d1e000 is not at LU: I_T recover
Apr  5 12:59:25 VOID kernel: sas: I_T nexus reset for dev 0500000000000000
Apr  5 12:59:25 VOID kernel: sas: I_T 0500000000000000 recovered
Apr  5 12:59:25 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 12:59:35 VOID kernel: sas: command 0xc3ea1900, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 12:59:35 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 12:59:35 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 12:59:35 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 12:59:35 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 12:59:35 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 12:59:35 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 12:59:35 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 12:59:35 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 12:59:35 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 12:59:35 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 12:59:35 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:00:10 VOID kernel: sas: command 0xf6253180, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:00:10 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:00:10 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 13:00:10 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 13:00:10 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:00:10 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 13:00:10 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:00:10 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 13:00:10 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 13:00:10 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:00:10 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:00:10 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:00:18 VOID kernel: sas: command 0xc4467c00, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:00:18 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:00:18 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 13:00:18 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 13:00:18 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:00:18 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 13:00:18 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:00:18 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 13:00:18 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 13:00:18 VOID kernel: sas: I_T nexus reset for dev 0500000000000000
Apr  5 13:00:18 VOID kernel: sas: I_T 0500000000000000 recovered
Apr  5 13:00:18 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:00:26 VOID kernel: sas: command 0xf6253600, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:00:26 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:00:26 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 13:00:26 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 13:00:26 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:00:26 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 13:00:26 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:00:26 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 13:00:26 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 13:00:26 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:00:26 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:00:26 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:04:04 VOID unraid-swapfile[18801]: Initiating unRAID swap-file.
Apr  5 13:04:18 VOID kernel: NTFS driver 2.1.29 [Flags: R/O MODULE].
Apr  5 13:04:25 VOID kernel: sas: command 0xe6873cc0, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:04:25 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:04:25 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 13:04:25 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 13:04:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:04:25 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 13:04:25 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:04:25 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 13:04:25 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 13:04:25 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:04:25 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:04:25 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:04:56 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:04:56 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:04:56 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 13:04:56 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 13:04:56 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:04:56 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 13:04:56 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:04:56 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 13:04:56 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 13:04:56 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:04:56 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:04:56 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:05:02 VOID kernel: ------------[ cut here ]------------
Apr  5 13:05:02 VOID kernel: WARNING: at drivers/ata/libata-core.c:5186 ata_qc_issue+0x10b/0x308()
Apr  5 13:05:02 VOID kernel: Hardware name: To Be Filled By O.E.M.
Apr  5 13:05:02 VOID kernel: Modules linked in: ntfs md_mod xor ide_gd_mod atiixp ahci r8169 mvsas libsas scst scsi_transport_sas
Apr  5 13:05:02 VOID kernel: Pid: 19105, comm: hdparm Not tainted 2.6.32.9-unRAID #8
Apr  5 13:05:02 VOID kernel: Call Trace:
Apr  5 13:05:02 VOID kernel:  [<c102449e>] warn_slowpath_common+0x60/0x77
Apr  5 13:05:02 VOID kernel:  [<c10244c2>] warn_slowpath_null+0xd/0x10
Apr  5 13:05:02 VOID kernel:  [<c11b624d>] ata_qc_issue+0x10b/0x308
Apr  5 13:05:02 VOID kernel:  [<c11ba260>] ata_scsi_translate+0xd1/0xff
Apr  5 13:05:02 VOID kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Apr  5 13:05:02 VOID kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Apr  5 13:05:02 VOID kernel:  [<c11baa40>] ata_sas_queuecmd+0x120/0x1d7
Apr  5 13:05:02 VOID kernel:  [<c11bc6df>] ? ata_scsi_pass_thru+0x0/0x21d
Apr  5 13:05:02 VOID kernel:  [<f843969a>] sas_queuecommand+0x65/0x20d [libsas]
Apr  5 13:05:02 VOID kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Apr  5 13:05:02 VOID kernel:  [<c11a82c0>] scsi_dispatch_cmd+0x147/0x181
Apr  5 13:05:02 VOID kernel:  [<c11ace4d>] scsi_request_fn+0x351/0x376
Apr  5 13:05:02 VOID kernel:  [<c1126798>] __blk_run_queue+0x78/0x10c
Apr  5 13:05:02 VOID kernel:  [<c1124446>] elv_insert+0x67/0x153
Apr  5 13:05:02 VOID kernel:  [<c11245b8>] __elv_add_request+0x86/0x8b
Apr  5 13:05:02 VOID kernel:  [<c1129343>] blk_execute_rq_nowait+0x4f/0x73
Apr  5 13:05:02 VOID kernel:  [<c11293dc>] blk_execute_rq+0x75/0x91
Apr  5 13:05:02 VOID kernel:  [<c11292cc>] ? blk_end_sync_rq+0x0/0x28
Apr  5 13:05:02 VOID kernel:  [<c112636f>] ? get_request+0x204/0x28d
Apr  5 13:05:02 VOID kernel:  [<c11269d6>] ? get_request_wait+0x2b/0xd9
Apr  5 13:05:02 VOID kernel:  [<c112c2bf>] sg_io+0x22d/0x30a
Apr  5 13:05:02 VOID kernel:  [<c112c5a8>] scsi_cmd_ioctl+0x20c/0x3bc
Apr  5 13:05:02 VOID kernel:  [<c11b3257>] sd_ioctl+0x6a/0x8c
Apr  5 13:05:02 VOID kernel:  [<c112a420>] __blkdev_driver_ioctl+0x50/0x62
Apr  5 13:05:02 VOID kernel:  [<c112ad1c>] blkdev_ioctl+0x8b0/0x8dc
Apr  5 13:05:02 VOID kernel:  [<c1131e2d>] ? kobject_get+0x12/0x17
Apr  5 13:05:02 VOID kernel:  [<c112b0f8>] ? get_disk+0x4a/0x61
Apr  5 13:05:02 VOID kernel:  [<c101b028>] ? kmap_atomic+0x14/0x16
Apr  5 13:05:02 VOID kernel:  [<c11334a5>] ? radix_tree_lookup_slot+0xd/0xf
Apr  5 13:05:02 VOID kernel:  [<c104a179>] ? filemap_fault+0xb8/0x305
Apr  5 13:05:02 VOID kernel:  [<c1048c43>] ? unlock_page+0x18/0x1b
Apr  5 13:05:02 VOID kernel:  [<c1057c63>] ? __do_fault+0x3a7/0x3da
Apr  5 13:05:02 VOID kernel:  [<c105985f>] ? handle_mm_fault+0x42d/0x8f1
Apr  5 13:05:02 VOID kernel:  [<c108b6c6>] block_ioctl+0x2a/0x32
Apr  5 13:05:02 VOID kernel:  [<c108b69c>] ? block_ioctl+0x0/0x32
Apr  5 13:05:02 VOID kernel:  [<c10769d5>] vfs_ioctl+0x22/0x67
Apr  5 13:05:02 VOID kernel:  [<c1076f33>] do_vfs_ioctl+0x478/0x4ac
Apr  5 13:05:02 VOID kernel:  [<c105dcdd>] ? do_mmap_pgoff+0x232/0x294
Apr  5 13:05:02 VOID kernel:  [<c1076f93>] sys_ioctl+0x2c/0x45
Apr  5 13:05:02 VOID kernel:  [<c1002935>] syscall_call+0x7/0xb
Apr  5 13:05:02 VOID kernel: ---[ end trace 67ab7e794839da63 ]---
Apr  5 13:05:09 VOID kernel: sas: command 0xc44b3300, task 0xc3d1e000, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:05:27 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:05:27 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:05:27 VOID kernel: sas: trying to find task 0xc3d1e000
Apr  5 13:05:27 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e000
Apr  5 13:05:27 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:05:27 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e000
Apr  5 13:05:27 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:05:27 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e000 failed to abort
Apr  5 13:05:27 VOID kernel: sas: task 0xc3d1e000 is not at LU: I_T recover
Apr  5 13:05:27 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:05:27 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:05:27 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:05:35 VOID kernel: sas: command 0xc4465c00, task 0xc3d1e280, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:05:58 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:05:58 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:05:58 VOID kernel: sas: trying to find task 0xc3d1e280
Apr  5 13:05:58 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e280
Apr  5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:05:58 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e280
Apr  5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:05:58 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e280 failed to abort
Apr  5 13:05:58 VOID kernel: sas: task 0xc3d1e280 is not at LU: I_T recover
Apr  5 13:05:58 VOID kernel: sas: I_T nexus reset for dev 0500000000000000
Apr  5 13:05:58 VOID kernel: sas: I_T 0500000000000000 recovered
Apr  5 13:05:58 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 13:05:58 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:05:58 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 13:05:58 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:05:58 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 13:05:58 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 13:05:58 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:05:58 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:05:58 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:06:06 VOID kernel: sas: command 0xf62e5a80, task 0xc3d1e000, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:06:29 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:06:29 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:06:29 VOID kernel: sas: trying to find task 0xc3d1e000
Apr  5 13:06:29 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e000
Apr  5 13:06:29 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:06:29 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e000
Apr  5 13:06:29 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:06:29 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e000 failed to abort
Apr  5 13:06:29 VOID kernel: sas: task 0xc3d1e000 is not at LU: I_T recover
Apr  5 13:06:29 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:06:29 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:06:29 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:07:00 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:07:00 VOID kernel: sas: Enter sas_scsi_recover_host
Apr  5 13:07:00 VOID kernel: sas: trying to find task 0xc3d1e140
Apr  5 13:07:00 VOID kernel: sas: sas_scsi_find_task: aborting task 0xc3d1e140
Apr  5 13:07:00 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1701:mvs_abort_task:rc= 5
Apr  5 13:07:00 VOID kernel: sas: sas_scsi_find_task: querying task 0xc3d1e140
Apr  5 13:07:00 VOID kernel: /usr/src/sas/trunk/mvsas_tgt/mv_sas.c 1645:mvs_query_task:rc= 5
Apr  5 13:07:00 VOID kernel: sas: sas_scsi_find_task: task 0xc3d1e140 failed to abort
Apr  5 13:07:00 VOID kernel: sas: task 0xc3d1e140 is not at LU: I_T recover
Apr  5 13:07:00 VOID kernel: sas: I_T nexus reset for dev 0400000000000000
Apr  5 13:07:00 VOID kernel: sas: I_T 0400000000000000 recovered
Apr  5 13:07:00 VOID kernel: sas: --- Exit sas_scsi_recover_host
Apr  5 13:07:31 VOID kernel: sas: command 0xc3e30780, task 0xc3d1e140, timed out: BLK_EH_NOT_HANDLED
Apr  5 13:07:31 VOID kernel: sas: Enter sas_scsi_recover_host

 

By following the same process as the first time, I have been able to reproduce this error twice now. I haven't installed or changed anything on the server recently, except for enabling the unRAID swap file add-on in the package manager today (though I am unsure whether that was before or after these problems started). I am running a parity check now, and when it finishes I will test whether the problem persists with the swap file add-on disabled.

 

I am aware it is recommended to upgrade to the latest stable release and see if the problem persists; I just have not had the time to research the process, and I wouldn't want to upgrade until I get this issue sorted out. So, bottom line, what I am asking is: are these errors something I should be concerned about as far as the integrity of my array goes? Or is this a bug of some kind that may well be fixed if I upgrade to the new version of unRAID?

 

EDIT: I forgot to mention that the stock web interface continues to work fine even when I can't connect to unMenu. Not sure if that matters; I'm just trying to provide all the potentially relevant info I can think of.

syslog.zip


Just saw your edit -- it sounds like you may have a package conflict in unMenu. Not sure whether it's related to the change you just made or not.

 

But I'd try either disabling unMenu or configuring it so the only packages it loads are those you absolutely need (e.g., UPS control and clean powerdown) and no others.

 

Then you should be able to boot, telnet in, and issue your two spin-down commands with hdparm, and everything should be fine. Just don't access unMenu, and the drives should stay spun down.

 

But I'd give strong consideration to upgrading to v5.0.5. :)


Thanks for the advice. I think for now I would rather just shut it down and unplug the two spares to avoid any further issues until I have time to upgrade to v5.0.5. I have been slacking on my backups since I lent my friend my external dock, so I would hate to have anything happen without a recent backup.

Is http://lime-technology.com/wiki/index.php/Migrating_from_unRAID_4.7_to_unRAID_5.0 still the proper procedure for upgrading to 5.0.5?


Yes, the process outlined there will work, although I think the simplest approach is to just wipe your flash drive (SAVE your key file!), redo it with v5.0.5, and then reassign all your drives. Be CERTAIN you know which ones are data drives and which one is the parity drive (and cache, if you have one).
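
If it helps, the key file can be copied off from a telnet session before you touch the flash. Something like this should work, assuming the flash is mounted at /boot as usual and the key is in its standard spot (adjust the backup path to suit your setup):

# back up the key file (and config, for good measure) to an array disk
mkdir -p /mnt/disk1/flash-backup
cp /boot/config/*.key /mnt/disk1/flash-backup/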

 


Yeah, that would probably prove the least problematic. I'm assuming the data drives have to be in the exact same order as well?


No, the order of the data drives doesn't matter. The only important thing is that you assign the correct parity and (if you have one) cache drives.

 

IMPORTANT NOTE: BEFORE you do this, run a parity check and confirm everything is good (no errors). If any sync errors are corrected, run the check again to confirm everything is clean after the corrections.
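
If you'd rather kick the check off from the console than the web GUI, I believe unRAID's mdcmd utility can do it (syntax from memory, and it may differ between 4.7 and 5.x):

# start a parity check from the command line
/root/mdcmd check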

 


Thanks. I believe I discovered what caused my crash: the swap file plugin for unMenu. The reason it created such a mess is that I had it set to create the swap file on my cache drive, and when I moved the server the other week, the cable for the cache drive must have wiggled loose enough to lose the connection (the card it's plugged into has really loose connectors). I'm surprised unRAID didn't make a big deal about a drive going missing, even though it's not part of the array; that seems like something you would want to be informed of.
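
In hindsight it makes sense that this caused so much grief: the plugin essentially does something like the following at boot (my rough sketch, not the plugin's actual script), so with the cache drive's connection gone, every step was hitting a dead device:

# roughly what a swap-file setup does on the cache drive
dd if=/dev/zero of=/mnt/cache/swapfile bs=1M count=1024
mkswap /mnt/cache/swapfile
swapon /mnt/cache/swapfile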
