Some trouble when mover started this morning. Unable to reboot or shut down.



So I checked my logs this morning, like I do every day, and saw an error I'd never seen before. Going back through the log, it started when the Mover was initiated and then just keeps repeating. And the file that needed moving never actually got moved, either.

 

Can someone have a look at a snippet from the logs (below) and tell me what's going on? I'm going to reboot the thing in the meantime.

 

May 11 03:40:01 Tower logger: mover started

May 11 03:40:01 Tower logger: skipping "Downloads"

May 11 03:40:01 Tower logger: moving "Media"

May 11 03:40:01 Tower logger: ./Media/tv/Blindspot/Season 1/Blindspot.S01E21.Of.Whose.Uneasy.Route.mkv

May 11 03:40:01 Tower logger: .d..t...... ./

May 11 03:40:01 Tower logger: .d..t...... Media/

May 11 03:40:01 Tower logger: .d..t...... Media/tv/

May 11 03:40:01 Tower logger: .d..t...... Media/tv/Blindspot/

May 11 03:40:01 Tower logger: .d..t...... Media/tv/Blindspot/Season 1/

May 11 03:40:01 Tower logger: >f+++++++++ Media/tv/Blindspot/Season 1/Blindspot.S01E21.Of.Whose.Uneasy.Route.mkv

May 11 03:40:13 Tower kernel: BUG: unable to handle kernel paging request at 000020000000001c

May 11 03:40:13 Tower kernel: IP: [<ffffffff810b1322>] find_get_entry+0x41/0x7d

May 11 03:40:13 Tower kernel: PGD 0

May 11 03:40:13 Tower kernel: Oops: 0000 [#1] PREEMPT SMP

May 11 03:40:13 Tower kernel: Modules linked in: xt_nat veth ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat md_mod r8169 ahci mii i2c_i801 libahci

May 11 03:40:13 Tower kernel: CPU: 2 PID: 2391 Comm: shfs Not tainted 4.1.18-unRAID #1

May 11 03:40:13 Tower kernel: Hardware name: MSI MS-7846/H87M-E35 (MS-7846), BIOS V17.5 07/19/2014

May 11 03:40:13 Tower kernel: task: ffff8800d4690860 ti: ffff88040bbc0000 task.ti: ffff88040bbc0000

May 11 03:40:13 Tower kernel: RIP: 0010:[<ffffffff810b1322>]  [<ffffffff810b1322>] find_get_entry+0x41/0x7d

May 11 03:40:13 Tower kernel: RSP: 0018:ffff88040bbc3c28  EFLAGS: 00010246

May 11 03:40:13 Tower kernel: RAX: ffff880002659c98 RBX: 0000200000000000 RCX: 00000000fffffffa

May 11 03:40:13 Tower kernel: RDX: ffff880002659c98 RSI: ffff880002659c98 RDI: 0000000000000000

May 11 03:40:13 Tower kernel: RBP: ffff88040bbc3c48 R08: ffff880002659b60 R09: ffff88040bbc3c10

May 11 03:40:13 Tower kernel: R10: 0000000000000026 R11: ffffffff81170ff6 R12: 0000000000036d22

May 11 03:40:13 Tower kernel: R13: ffff8801883b30b8 R14: ffff8801883b30b0 R15: 0000000000036d22

May 11 03:40:13 Tower kernel: FS:  00002b917f13c700(0000) GS:ffff88041eb00000(0000) knlGS:0000000000000000

May 11 03:40:13 Tower kernel: CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033

May 11 03:40:13 Tower kernel: CR2: 000020000000001c CR3: 000000040bb2a000 CR4: 00000000001406e0

May 11 03:40:13 Tower kernel: Stack:

May 11 03:40:13 Tower kernel: 000000000000000f ffff8801883b2f60 000000000000000f 00000000000200da

May 11 03:40:13 Tower kernel: ffff88040bbc3c98 ffffffff810b1b01 ffff88040bbc3d08 ffffffff810ba20e

May 11 03:40:13 Tower kernel: ffff88040bbc3d08 ffff8801883b2f60 0000000000001000 ffff8801883b30b0

May 11 03:40:13 Tower kernel: Call Trace:

May 11 03:40:13 Tower kernel: [<ffffffff810b1b01>] pagecache_get_page+0x28/0x153

May 11 03:40:13 Tower kernel: [<ffffffff810ba20e>] ? balance_dirty_pages_ratelimited+0x1c/0x74d

May 11 03:40:13 Tower kernel: [<ffffffff810b1dda>] grab_cache_page_write_begin+0x24/0x3b

May 11 03:40:13 Tower kernel: [<ffffffff81159cca>] reiserfs_write_begin+0x5a/0x1bc

May 11 03:40:13 Tower kernel: [<ffffffff810b1990>] generic_perform_write+0xd1/0x185

May 11 03:40:13 Tower kernel: [<ffffffff810b2a1f>] __generic_file_write_iter+0x9f/0x147

May 11 03:40:13 Tower kernel: [<ffffffff810b2bd9>] generic_file_write_iter+0x112/0x17d

May 11 03:40:13 Tower kernel: [<ffffffff810fd986>] __vfs_write+0x8f/0xb8

May 11 03:40:13 Tower kernel: [<ffffffff810fdf05>] vfs_write+0xad/0x165

May 11 03:40:13 Tower kernel: [<ffffffff810fe76a>] SyS_pwrite64+0x50/0x81

May 11 03:40:13 Tower kernel: [<ffffffff815f7b2e>] system_call_fastpath+0x12/0x71

May 11 03:40:13 Tower kernel: Code: 68 cb fc ff 4c 89 e6 4c 89 ef e8 dc 3e 2a 00 48 85 c0 48 89 c6 74 2d 48 8b 18 48 85 db 74 38 f6 c3 03 74 07 f6 c3 01 74 2e eb d9 <8b> 53 1c 85 d2 74 d2 8d 7a 01 89 d0 f0 0f b1 7b 1c 39 d0 74 08

May 11 03:40:13 Tower kernel: RIP  [<ffffffff810b1322>] find_get_entry+0x41/0x7d

May 11 03:40:13 Tower kernel: RSP <ffff88040bbc3c28>

May 11 03:40:13 Tower kernel: CR2: 000020000000001c

May 11 03:40:13 Tower kernel: ---[ end trace 90a15233e4f0355e ]---

May 11 03:41:13 Tower kernel: INFO: rcu_preempt detected stalls on CPUs/tasks: {} (detected by 0, t=60002 jiffies, g=93716631, c=93716630, q=7744)

May 11 03:41:13 Tower kernel: All QSes seen, last rcu_preempt kthread activity 0 (7461739593-7461739593), jiffies_till_next_fqs=3, root ->qsmask 0x0

May 11 03:41:13 Tower kernel: swapper/0      R  running task        0    0      0 0x00000000

May 11 03:41:13 Tower kernel: 0000000000000000 ffff88041ea03de8 ffffffff81063526 ffffffff8182a500

May 11 03:41:13 Tower kernel: ffff88041ea169c0 ffff88041ea03e68 ffffffff81081a7e 0000000000000000

May 11 03:41:13 Tower kernel: ffffffff810662bc ffffffff81883030 ffffffff81804450 ffffffff8182a558

May 11 03:41:13 Tower kernel: Call Trace:

May 11 03:41:13 Tower kernel: <IRQ>  [<ffffffff81063526>] sched_show_task+0xd2/0xd7

May 11 03:41:13 Tower kernel: [<ffffffff81081a7e>] rcu_check_callbacks+0x52a/0x679

May 11 03:41:13 Tower kernel: [<ffffffff810662bc>] ? __update_cpu_load+0xa8/0xac

May 11 03:41:13 Tower kernel: [<ffffffff8108f3dd>] ? tick_sched_handle+0x34/0x34

May 11 03:41:13 Tower kernel: [<ffffffff8108368c>] update_process_times+0x2a/0x4f

May 11 03:41:13 Tower kernel: [<ffffffff8108f3db>] tick_sched_handle+0x32/0x34

May 11 03:41:13 Tower kernel: [<ffffffff8108f413>] tick_sched_timer+0x36/0x5f

May 11 03:41:13 Tower kernel: [<ffffffff81083bf8>] __run_hrtimer.isra.29+0x57/0xb0

May 11 03:41:13 Tower kernel: [<ffffffff81084141>] hrtimer_interrupt+0xd8/0x1c6

May 11 03:41:13 Tower kernel: [<ffffffff810339d6>] local_apic_timer_interrupt+0x4e/0x52

May 11 03:41:13 Tower kernel: [<ffffffff81033ec7>] smp_apic_timer_interrupt+0x3a/0x4b

May 11 03:41:13 Tower kernel: [<ffffffff815f895e>] apic_timer_interrupt+0x6e/0x80

May 11 03:41:13 Tower kernel: <EOI>  [<ffffffff81012756>] ? native_sched_clock+0x28/0x8e

May 11 03:41:13 Tower kernel: [<ffffffff814ddef1>] ? cpuidle_enter_state+0xb9/0x114

May 11 03:41:13 Tower kernel: [<ffffffff814dde87>] ? cpuidle_enter_state+0x4f/0x114

May 11 03:41:13 Tower kernel: [<ffffffff814ddf6e>] cpuidle_enter+0x12/0x14

May 11 03:41:13 Tower kernel: [<ffffffff810728dc>] cpu_startup_entry+0x1e2/0x2b2

May 11 03:41:13 Tower kernel: [<ffffffff815e56b1>] rest_init+0x85/0x89

May 11 03:41:13 Tower kernel: [<ffffffff818a7ed4>] start_kernel+0x415/0x422

May 11 03:41:13 Tower kernel: [<ffffffff818a78b5>] ? set_init_arg+0x56/0x56

May 11 03:41:13 Tower kernel: [<ffffffff818a7120>] ? early_idt_handler_array+0x120/0x120

May 11 03:41:13 Tower kernel: [<ffffffff818a74c6>] x86_64_start_reservations+0x2a/0x2c

May 11 03:41:13 Tower kernel: [<ffffffff818a75ae>] x86_64_start_kernel+0xe6/0xf5

May 11 03:44:13 Tower kernel: INFO: rcu_preempt detected stalls on CPUs/tasks: {} (detected by 0, t=240007 jiffies, g=93716631, c=93716630, q=30592)

May 11 03:44:13 Tower kernel: All QSes seen, last rcu_preempt kthread activity 0 (7461919598-7461919598), jiffies_till_next_fqs=3, root ->qsmask 0x0

May 11 03:44:13 Tower kernel: swapper/0      R  running task        0    0      0 0x00000000

May 11 03:44:13 Tower kernel: 0000000000000000 ffff88041ea03de8 ffffffff81063526 ffffffff8182a500

May 11 03:44:13 Tower kernel: ffff88041ea169c0 ffff88041ea03e68 ffffffff81081a7e 0000000000000000

May 11 03:44:13 Tower kernel: ffffffff810662bc ffffffff81883030 ffffffff81804450 ffffffff8182a558

May 11 03:44:13 Tower kernel: Call Trace:

May 11 03:44:13 Tower kernel: <IRQ>  [<ffffffff81063526>] sched_show_task+0xd2/0xd7

May 11 03:44:13 Tower kernel: [<ffffffff81081a7e>] rcu_check_callbacks+0x52a/0x679

May 11 03:44:13 Tower kernel: [<ffffffff810662bc>] ? __update_cpu_load+0xa8/0xac

May 11 03:44:13 Tower kernel: [<ffffffff8108f3dd>] ? tick_sched_handle+0x34/0x34

May 11 03:44:13 Tower kernel: [<ffffffff8108368c>] update_process_times+0x2a/0x4f

May 11 03:44:13 Tower kernel: [<ffffffff8108f3db>] tick_sched_handle+0x32/0x34

May 11 03:44:13 Tower kernel: [<ffffffff8108f413>] tick_sched_timer+0x36/0x5f

May 11 03:44:13 Tower kernel: [<ffffffff81083bf8>] __run_hrtimer.isra.29+0x57/0xb0

May 11 03:44:13 Tower kernel: [<ffffffff81084141>] hrtimer_interrupt+0xd8/0x1c6

May 11 03:44:13 Tower kernel: [<ffffffff810339d6>] local_apic_timer_interrupt+0x4e/0x52

May 11 03:44:13 Tower kernel: [<ffffffff81033ec7>] smp_apic_timer_interrupt+0x3a/0x4b

May 11 03:44:13 Tower kernel: [<ffffffff815f895e>] apic_timer_interrupt+0x6e/0x80

May 11 03:44:13 Tower kernel: <EOI>  [<ffffffff81012756>] ? native_sched_clock+0x28/0x8e

May 11 03:44:13 Tower kernel: [<ffffffff814ddef1>] ? cpuidle_enter_state+0xb9/0x114

May 11 03:44:13 Tower kernel: [<ffffffff814dde87>] ? cpuidle_enter_state+0x4f/0x114

May 11 03:44:13 Tower kernel: [<ffffffff814ddf6e>] cpuidle_enter+0x12/0x14

May 11 03:44:13 Tower kernel: [<ffffffff810728dc>] cpu_startup_entry+0x1e2/0x2b2

May 11 03:44:13 Tower kernel: [<ffffffff815e56b1>] rest_init+0x85/0x89

May 11 03:44:13 Tower kernel: [<ffffffff818a7ed4>] start_kernel+0x415/0x422

May 11 03:44:13 Tower kernel: [<ffffffff818a78b5>] ? set_init_arg+0x56/0x56

May 11 03:44:13 Tower kernel: [<ffffffff818a7120>] ? early_idt_handler_array+0x120/0x120

May 11 03:44:13 Tower kernel: [<ffffffff818a74c6>] x86_64_start_reservations+0x2a/0x2c

May 11 03:44:13 Tower kernel: [<ffffffff818a75ae>] x86_64_start_kernel+0xe6/0xf5

May 11 03:47:13 Tower kernel: INFO: rcu_preempt detected stalls on CPUs/tasks: {} (detected by 0, t=420012 jiffies, g=93716631, c=93716630, q=53316)

May 11 03:47:13 Tower kernel: All QSes seen, last rcu_preempt kthread activity 0 (7462099603-7462099603), jiffies_till_next_fqs=3, root ->qsmask 0x0

May 11 03:47:13 Tower kernel: swapper/0      R  running task        0    0      0 0x00000000

May 11 03:47:13 Tower kernel: 0000000000000000 ffff88041ea03de8 ffffffff81063526 ffffffff8182a500

May 11 03:47:13 Tower kernel: ffff88041ea169c0 ffff88041ea03e68 ffffffff81081a7e 0000000000000000

May 11 03:47:13 Tower kernel: ffffffff819d86c0 ffffffff81883030 ffffffff81804450 ffffffff8182a558

May 11 03:47:13 Tower kernel: Call Trace:

May 11 03:47:13 Tower kernel: <IRQ>  [<ffffffff81063526>] sched_show_task+0xd2/0xd7

May 11 03:47:13 Tower kernel: [<ffffffff81081a7e>] rcu_check_callbacks+0x52a/0x679

May 11 03:47:13 Tower kernel: [<ffffffff8108f3dd>] ? tick_sched_handle+0x34/0x34

May 11 03:47:13 Tower kernel: [<ffffffff8108368c>] update_process_times+0x2a/0x4f

May 11 03:47:13 Tower kernel: [<ffffffff8108f3db>] tick_sched_handle+0x32/0x34

May 11 03:47:13 Tower kernel: [<ffffffff8108f413>] tick_sched_timer+0x36/0x5f

May 11 03:47:13 Tower kernel: [<ffffffff81083bf8>] __run_hrtimer.isra.29+0x57/0xb0

May 11 03:47:13 Tower kernel: [<ffffffff81084141>] hrtimer_interrupt+0xd8/0x1c6

May 11 03:47:13 Tower kernel: [<ffffffff810339d6>] local_apic_timer_interrupt+0x4e/0x52

May 11 03:47:13 Tower kernel: [<ffffffff81033ec7>] smp_apic_timer_interrupt+0x3a/0x4b

May 11 03:47:13 Tower kernel: [<ffffffff815f895e>] apic_timer_interrupt+0x6e/0x80

May 11 03:47:13 Tower kernel: <EOI>  [<ffffffff81012756>] ? native_sched_clock+0x28/0x8e

May 11 03:47:13 Tower kernel: [<ffffffff814ddef1>] ? cpuidle_enter_state+0xb9/0x114

May 11 03:47:13 Tower kernel: [<ffffffff814dde87>] ? cpuidle_enter_state+0x4f/0x114

May 11 03:47:13 Tower kernel: [<ffffffff814ddf6e>] cpuidle_enter+0x12/0x14

May 11 03:47:13 Tower kernel: [<ffffffff810728dc>] cpu_startup_entry+0x1e2/0x2b2

May 11 03:47:13 Tower kernel: [<ffffffff815e56b1>] rest_init+0x85/0x89

May 11 03:47:13 Tower kernel: [<ffffffff818a7ed4>] start_kernel+0x415/0x422

May 11 03:47:13 Tower kernel: [<ffffffff818a78b5>] ? set_init_arg+0x56/0x56

May 11 03:47:13 Tower kernel: [<ffffffff818a7120>] ? early_idt_handler_array+0x120/0x120

May 11 03:47:13 Tower kernel: [<ffffffff818a74c6>] x86_64_start_reservations+0x2a/0x2c

May 11 03:47:13 Tower kernel: [<ffffffff818a75ae>] x86_64_start_kernel+0xe6/0xf5

May 11 03:50:13 Tower kernel: INFO: rcu_preempt detected stalls on CPUs/tasks: {} (detected by 0, t=600017 jiffies, g=93716631, c=93716630, q=75877)

May 11 03:50:13 Tower kernel: All QSes seen, last rcu_preempt kthread activity 3 (7462279608-7462279605), jiffies_till_next_fqs=3, root ->qsmask 0x0

May 11 03:50:13 Tower kernel: swapper/0      R  running task        0    0      0 0x00000000

May 11 03:50:13 Tower kernel: 0000000000000000 ffff88041ea03de8 ffffffff81063526 ffffffff8182a500

May 11 03:50:13 Tower kernel: ffff88041ea169c0 ffff88041ea03e68 ffffffff81081a7e 0000000000000000

May 11 03:50:13 Tower kernel: ffffffff810662bc ffffffff81883030 ffffffff81804450 ffffffff8182a558

May 11 03:50:13 Tower kernel: Call Trace:

May 11 03:50:13 Tower kernel: <IRQ>  [<ffffffff81063526>] sched_show_task+0xd2/0xd7

May 11 03:50:13 Tower kernel: [<ffffffff81081a7e>] rcu_check_callbacks+0x52a/0x679

May 11 03:50:13 Tower kernel: [<ffffffff810662bc>] ? __update_cpu_load+0xa8/0xac

May 11 03:50:13 Tower kernel: [<ffffffff8108f3dd>] ? tick_sched_handle+0x34/0x34

May 11 03:50:13 Tower kernel: [<ffffffff8108368c>] update_process_times+0x2a/0x4f

May 11 03:50:13 Tower kernel: [<ffffffff8108f3db>] tick_sched_handle+0x32/0x34

May 11 03:50:13 Tower kernel: [<ffffffff8108f413>] tick_sched_timer+0x36/0x5f

May 11 03:50:13 Tower kernel: [<ffffffff81083bf8>] __run_hrtimer.isra.29+0x57/0xb0

May 11 03:50:13 Tower kernel: [<ffffffff81084141>] hrtimer_interrupt+0xd8/0x1c6

May 11 03:50:13 Tower kernel: [<ffffffff810339d6>] local_apic_timer_interrupt+0x4e/0x52

May 11 03:50:13 Tower kernel: [<ffffffff81033ec7>] smp_apic_timer_interrupt+0x3a/0x4b

May 11 03:50:13 Tower kernel: [<ffffffff815f895e>] apic_timer_interrupt+0x6e/0x80

May 11 03:50:13 Tower kernel: <EOI>  [<ffffffff81012756>] ? native_sched_clock+0x28/0x8e

May 11 03:50:13 Tower kernel: [<ffffffff814ddef1>] ? cpuidle_enter_state+0xb9/0x114

May 11 03:50:13 Tower kernel: [<ffffffff814dde87>] ? cpuidle_enter_state+0x4f/0x114

May 11 03:50:13 Tower kernel: [<ffffffff814ddf6e>] cpuidle_enter+0x12/0x14

May 11 03:50:13 Tower kernel: [<ffffffff810728dc>] cpu_startup_entry+0x1e2/0x2b2

May 11 03:50:13 Tower kernel: [<ffffffff815e56b1>] rest_init+0x85/0x89

May 11 03:50:13 Tower kernel: [<ffffffff818a7ed4>] start_kernel+0x415/0x422

May 11 03:50:13 Tower kernel: [<ffffffff818a78b5>] ? set_init_arg+0x56/0x56

May 11 03:50:13 Tower kernel: [<ffffffff818a7120>] ? early_idt_handler_array+0x120/0x120

May 11 03:50:13 Tower kernel: [<ffffffff818a74c6>] x86_64_start_reservations+0x2a/0x2c

May 11 03:50:13 Tower kernel: [<ffffffff818a75ae>] x86_64_start_kernel+0xe6/0xf5

May 11 03:53:13 Tower kernel: INFO: rcu_preempt detected stalls on CPUs/tasks: {} (detected by 0, t=780022 jiffies, g=93716631, c=93716630, q=98834)

May 11 03:53:13 Tower kernel: All QSes seen, last rcu_preempt kthread activity 0 (7462459613-7462459613), jiffies_till_next_fqs=3, root ->qsmask 0x0

May 11 03:53:13 Tower kernel: swapper/0      R  running task        0    0      0 0x00000000

May 11 03:53:13 Tower kernel: 0000000000000000 ffff88041ea03de8 ffffffff81063526 ffffffff8182a500

May 11 03:53:13 Tower kernel: ffff88041ea169c0 ffff88041ea03e68 ffffffff81081a7e 0000000000000000

May 11 03:53:13 Tower kernel: ffffffff810662bc ffffffff81883030 ffffffff81804450 ffffffff8182a558

May 11 03:53:13 Tower kernel: Call Trace:

May 11 03:53:13 Tower kernel: <IRQ>  [<ffffffff81063526>] sched_show_task+0xd2/0xd7

May 11 03:53:13 Tower kernel: [<ffffffff81081a7e>] rcu_check_callbacks+0x52a/0x679

May 11 03:53:13 Tower kernel: [<ffffffff810662bc>] ? __update_cpu_load+0xa8/0xac

May 11 03:53:13 Tower kernel: [<ffffffff8108f3dd>] ? tick_sched_handle+0x34/0x34

May 11 03:53:13 Tower kernel: [<ffffffff8108368c>] update_process_times+0x2a/0x4f

May 11 03:53:13 Tower kernel: [<ffffffff8108f3db>] tick_sched_handle+0x32/0x34

May 11 03:53:13 Tower kernel: [<ffffffff8108f413>] tick_sched_timer+0x36/0x5f

May 11 03:53:13 Tower kernel: [<ffffffff81083bf8>] __run_hrtimer.isra.29+0x57/0xb0

May 11 03:53:13 Tower kernel: [<ffffffff81084141>] hrtimer_interrupt+0xd8/0x1c6

May 11 03:53:13 Tower kernel: [<ffffffff810339d6>] local_apic_timer_interrupt+0x4e/0x52

May 11 03:53:13 Tower kernel: [<ffffffff81033ec7>] smp_apic_timer_interrupt+0x3a/0x4b

May 11 03:53:13 Tower kernel: [<ffffffff815f895e>] apic_timer_interrupt+0x6e/0x80

May 11 03:53:13 Tower kernel: <EOI>  [<ffffffff81012756>] ? native_sched_clock+0x28/0x8e

May 11 03:53:13 Tower kernel: [<ffffffff814ddef1>] ? cpuidle_enter_state+0xb9/0x114

May 11 03:53:13 Tower kernel: [<ffffffff814dde87>] ? cpuidle_enter_state+0x4f/0x114

May 11 03:53:13 Tower kernel: [<ffffffff814ddf6e>] cpuidle_enter+0x12/0x14

May 11 03:53:13 Tower kernel: [<ffffffff810728dc>] cpu_startup_entry+0x1e2/0x2b2

May 11 03:53:13 Tower kernel: [<ffffffff815e56b1>] rest_init+0x85/0x89

May 11 03:53:13 Tower kernel: [<ffffffff818a7ed4>] start_kernel+0x415/0x422

May 11 03:53:13 Tower kernel: [<ffffffff818a78b5>] ? set_init_arg+0x56/0x56

May 11 03:53:13 Tower kernel: [<ffffffff818a7120>] ? early_idt_handler_array+0x120/0x120

May 11 03:53:13 Tower kernel: [<ffffffff818a74c6>] x86_64_start_reservations+0x2a/0x2c

May 11 03:53:13 Tower kernel: [<ffffffff818a75ae>] x86_64_start_kernel+0xe6/0xf5

May 11 03:56:13 Tower kernel: INFO: rcu_preempt detected stalls on CPUs/tasks: {} (detected by 0, t=960027 jiffies, g=93716631, c=93716630, q=121531)

May 11 03:56:13 Tower kernel: All QSes seen, last rcu_preempt kthread activity 3 (7462639618-7462639615), jiffies_till_next_fqs=3, root ->qsmask 0x0

May 11 03:56:13 Tower kernel: swapper/0      R  running task        0    0      0 0x00000000

May 11 03:56:13 Tower kernel: 0000000000000000 ffff88041ea03de8 ffffffff81063526 ffffffff8182a500

May 11 03:56:13 Tower kernel: ffff88041ea169c0 ffff88041ea03e68 ffffffff81081a7e 0000000000000000

May 11 03:56:13 Tower kernel: ffffffff810662bc ffffffff81883030 ffffffff81804450 ffffffff8182a558

 

And it just keeps repeating the same thing...
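For anyone decoding the snippet: the dotted lines appear to be rsync --itemize-changes output (the mover is a script that drives rsync), and they read roughly like this:

.d..t......   a directory (d) whose timestamp (t) got updated; nothing was transferred
>f+++++++++   a file (f) arriving on the receiving side (>), with every attribute new (+), i.e. a fresh copy

The oops itself lands in the page cache (find_get_entry) underneath reiserfs_write_begin, so the crash happened while shfs was writing that file out to a ReiserFS disk.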


Well, crap. I can't stop the array because it thinks the mover is still running. Spinning down the drives isn't actually doing anything now either, even though it looks like they're all spun down already.

 

How the heck do I get this to let me reboot, to see if that helps?


OK... I managed to get the thing to reboot after killing a few processes left over from the failed mover run. When it came back up, it immediately started a parity check. I stopped that and am trying a "clean" reboot.
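(For anyone who hits the same wall, the cleanup amounts to something like the following. This is a rough sketch; the exact leftover processes will vary.)

ps aux | grep -E 'mover|rsync' | grep -v grep   # list anything left over from the failed mover run
kill <pid>                                      # ask each straggler nicely first
kill -9 <pid>                                   # last resort if it refuses to die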

 

What should I do after this thing comes back up????


Start with a Memtest from the unRAID boot menu first: a long run, multiple passes, just to rule out a memory problem.

 

Then you may want to run Check Disk Filesystems on the drive or drives involved in the move.
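If it helps, the usual console route for that is to start the array in Maintenance mode and run reiserfsck read-only against the md device for each data disk. A sketch, with disk numbers as examples:

reiserfsck --check /dev/md1   # read-only check of disk1; /dev/mdX maps to disk X
reiserfsck --check /dev/md2   # repeat for each data disk involved in the move

(The cache drive isn't an md device, so it gets checked against its partition directly, e.g. /dev/sdb1.)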

 

Alright. I ran a Memtest that lasted 3 hours (2 passes) and had no errors. When I restarted after that I got some strange error, so I unplugged the box, waited a few seconds, jumped into the BIOS to make sure nothing was hinky, and then restarted again. It booted right up.

 

I checked the filesystem on both the cache and the drive that was being written to by the mover, and neither had any errors. I'm going to check the other disks (not the parity) anyway and then start everything up again. Probably run a parity check, too.

 

If none of that comes back with errors, any other ideas?


I had similar issues where the whole box would just hang. The only way out was pushing the power button.

Usually I saw the shfs process taking 100% CPU, and I suspected it had something to do with the mover.

It all went away after I changed all my drives to XFS.

 

See this sticky for details: http://lime-technology.com/forum/index.php?topic=37490.0

 

Technically, now would be an OK time to do that if everyone thinks it's a good idea, regardless of the current issue. My unRAID box is just a backup and a server for one app currently, so I can fairly easily copy all my media back over to it after reformatting. It'll take several days, though!

 

Thoughts?


Finished checking all the drives, and none showed any issues. Is it everyone's consensus that I should take the opportunity to change over to XFS?

 

If yes, I'd need to back up my cache drive data and copy it back over after the change. Also, and I apologize for the question, but it's been too long... Do I disable parity before I start changing the drives and leave it off until all the data is copied back into the array, so it doesn't slow down the process? Or is that a bad idea?



It is really up to you. Disabling parity speeds up the process but leaves you susceptible to data loss if a disk fails (and you are probably working the system harder than normal during this process). I think most people leave parity enabled for safety, as it can be a long process from a time-elapsed perspective. Typically a copy is started and then left to run while you get on with something else.


 

I'm going to follow that up and say: if you've got good backups, or this is only a backup server (I think you said it is), then I think you are at very low risk of data loss if you disable parity while you are copying the data.



 

Alright. Cool. I hate having to destroy my media backup in order to do this, but it doesn't look like I have a choice. I haven't seen anyone suggest anything else for the problem I had. :(



 

My main media is on a RAID 5 array and this box is my backup. For now, anyway.


As the sticky post explains, you really only need enough free space to hold your biggest disk's data to get started. After that it's just shuffling the data from one disk to the next.

Could be an opportunity to buy a new biggest disk. It was my excuse, anyway.

 



 

Thanks. I'll take a closer look at that. I do have enough free space on one drive that I could use to shuffle initially. Not sure I understand how the shuffle would work, though; it would have to be at the disk level and not the share level, I think.

 

Anyway, I'll take a good read through that and pick a direction. This would certainly be faster than copying 6TB over my network again!


Alright. I read over the meat of the post on the first page, and since my situation doesn't quite match that scenario, I'd like to spell out what I "think" I need to do based on it. If this is getting way too complicated I may just go back to my first option, but this should, in theory, be cleaner and safer. I think.

 

I have three 4TB data drives plus a 4TB parity drive.

Two drives are 50% full (so 2TB used each).

The last drive, disk3, has only 1.21TB on it with 2.79TB free. This would be my target disk for the copy/swapping, since it's the only one with space.

 

Step 1 would be step 6 in the guide: create the temp directory on disk3.

 

Step 2 would be to run the rsync command to copy disk1 over to the temp directory on disk3, and then check the output file to verify nothing was corrupted (see the rsync sketch at the end of this post).

 

Step 3, provided there were no errors: I stop the array and change the filesystem on disk1 to XFS.

 

Step 4, start the array back up.

 

Step 5, format disk1 as XFS (weird that it's two steps, but maybe that's just me...).

 

Step 6 is now back to step 6 in the guide, with [dest] now being disk1 and the source being ONLY the temp directory on disk3.

 

Follow through to step 10, except I actually do need to delete the temp directory on disk3 now, to make room for disk2's data.

 

Rinse and repeat for disk2.

 

The process would be the same for disk3, except I'd be copying it to disk1 or disk2. It's going to be a really tight fit, though, and a little scary to be honest... but again, that might just be me.
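For what it's worth, the copy and verify in steps 2 and 6 would look something like this. Paths are examples, and the guide's exact rsync flags may differ:

rsync -avX /mnt/disk1/ /mnt/disk3/temp/ 2>&1 | tee /boot/rsync-disk1.log   # copy, logging everything for review
rsync -nacv /mnt/disk1/ /mnt/disk3/temp/                                   # dry-run checksum compare; should report nothing to transfer

If the second command comes back clean, the copy matches the source bit for bit and disk1 is safe to reformat.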

 

Or should I just say F it, buy another 4TB WD Red, and put it in to do this swapping?

 

Thanks all. Sorry this thread has now kind of strayed a bit. :)


I wrote up a modified version of the procedure (found here) that may or may not be of interest. It works with whole drives, so it doesn't fit what you're wanting to do (using a temporary space to copy to and from), but the rsync discussion may be useful.

 

Thanks. I guess if I buy a new drive this would be the way to do the conversion.


Curious... how long have you been on v6? Is this a recent upgrade?

 

When I had my RFS problems, the box ran v5 with zero issues, but as soon as I booted v6 and the mover ran once or twice, it would lock up. So I reverted to v5 and everything was OK again for months. Upgraded to v6: same issue. Eventually converting RFS -> XFS fixed it for me.

 

If it's not a recent upgrade, I wonder why this started all of a sudden. Perhaps you crossed some unknown threshold on the RFS disks or something... maybe they've filled up and that's what's triggering it?

 

Also, maybe give this a try? http://lime-technology.com/forum/index.php?topic=48763.msg468269#msg468269



 

I can't give you an exact time, but it wasn't recent. Once it became non-beta I upgraded. This is the first hiccup I've had.

