NeuralBlade Posted January 25, 2015
I upgraded from version 5 to the latest version 6 beta12 a few days ago. After around a day the web GUI becomes unresponsive and all the shares become unavailable. This is now the third time this has happened. It was completely stable with version 5. Any attempt to power it down from the terminal fails, even with the latest powerdown scripts. I noticed the following in the logs around the time it became unresponsive.

Jan 25 18:27:13 JFM-NAS kernel: INFO: rcu_sched self-detected stall on CPU { 1} (t=240051 jiffies g=2447745 c=2447744 q=876529)
Jan 25 18:27:13 JFM-NAS kernel: Task dump for CPU 1:
Jan 25 18:27:13 JFM-NAS kernel: shfs R running task 0 11811 1 0x00000008
Jan 25 18:27:13 JFM-NAS kernel: 0000000000000000 ffff88011fc23de8 ffffffff8105cc09 0000000000000001
Jan 25 18:27:13 JFM-NAS kernel: 0000000000000001 ffff88011fc23e00 ffffffff8105f2c4 ffffffff81822d00
Jan 25 18:27:13 JFM-NAS kernel: ffff88011fc23e30 ffffffff810766a5 ffffffff81822d00 ffff88011fc2e0c0
Jan 25 18:27:13 JFM-NAS kernel: Call Trace:
Jan 25 18:27:13 JFM-NAS kernel: <IRQ> [<ffffffff8105cc09>] sched_show_task+0xbe/0xc3
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8105f2c4>] dump_cpu_task+0x34/0x38
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff810766a5>] rcu_dump_cpu_stacks+0x6a/0x8c
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff81078ead>] rcu_check_callbacks+0x1e1/0x4ff
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff81086659>] ? tick_sched_handle+0x34/0x34
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8107ac1a>] update_process_times+0x38/0x60
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff81086657>] tick_sched_handle+0x32/0x34
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8108668e>] tick_sched_timer+0x35/0x53
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8107b149>] __run_hrtimer.isra.29+0x57/0xb0
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8107b634>] hrtimer_interrupt+0xd9/0x1c0
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8102ea78>] local_apic_timer_interrupt+0x4f/0x52
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8102ee4a>] smp_apic_timer_interrupt+0x3a/0x4b
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff815ead9d>] apic_timer_interrupt+0x6d/0x80
Jan 25 18:27:13 JFM-NAS kernel: <EOI> [<ffffffff81147a96>] ? __discard_prealloc+0x8/0xb1
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff81147ba2>] reiserfs_discard_all_prealloc+0x43/0x4c
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff81163ed6>] do_journal_end+0x4e1/0xc57
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff81164ba6>] journal_end+0xad/0xb4
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8115e7f5>] reiserfs_do_truncate+0x2e2/0x425
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff8114dbb7>] reiserfs_truncate_file+0x1b7/0x2d0
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff810b2f95>] ? truncate_pagecache+0x4d/0x54
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff811512fe>] reiserfs_setattr+0x242/0x294
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff81108ea6>] notify_change+0x1dd/0x2d1
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff810f24aa>] do_truncate+0x64/0x89
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff810f5505>] ? __sb_start_write+0x9a/0xce
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff810f27d4>] SyS_ftruncate+0x115/0x12a
Jan 25 18:27:13 JFM-NAS kernel: [<ffffffff815e9fa9>] system_call_fastpath+0x16/0x1b

I had two docker containers active at the time.

syslog.zip
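(In case it helps anyone else seeing the same stall: a rough sketch of how the log can be saved off before the GUI locks up completely, assuming the stock unRAID locations of /var/log/syslog for the live log and /boot for the flash drive; adjust if your system differs.)

# copy the current syslog to the flash drive so it survives a hard reset
cp /var/log/syslog /boot/syslog-$(date +%Y%m%d-%H%M).txt

# watch for further RCU stall messages as they appear
tail -f /var/log/syslog | grep --line-buffered "rcu_sched"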
BRiT Posted January 25, 2015
Others who have had similar issues, and who received no other guidance that worked, eventually got past them by converting from ReiserFS to XFS. Once they were no longer using RFS their problems went away.

You might want to send a PM directly to the LimeTech staff to get them officially involved.
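The usual approach is to empty one data drive onto the others (or a spare), reformat it as XFS, then repeat for the next drive. A minimal sketch of one step, assuming disk2 is an already-formatted XFS target with enough free space and disk1 is the ReiserFS drive being migrated; the disk numbers and mount points here are placeholders, not a recipe for any particular array:

# copy everything from the ReiserFS disk to the XFS disk, preserving attributes
rsync -avPX /mnt/disk1/ /mnt/disk2/

# sanity-check the item counts on source and destination before wiping the source
find /mnt/disk1 | wc -l
find /mnt/disk2 | wc -l

# then stop the array, set disk1's filesystem to XFS in the GUI, and format it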
NeuralBlade Posted January 25, 2015 Author
Ouch, that is going to be a painful process with 10 data drives.
SmallwoodDR82 Posted January 26, 2015
http://lime-technology.com/forum/index.php?topic=37311.0
dgaschk Posted January 26, 2015
See Check Disk Filesystems in my sig.
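For anyone who can't follow the sig link, the gist is to start the array in Maintenance mode and run a read-only check against each ReiserFS data disk. A minimal sketch, assuming the disk to check is data disk 1, which unRAID exposes as /dev/md1; substitute the right md device for each disk, and only run repair options if the check output tells you to:

# read-only consistency check of data disk 1 (array started in Maintenance mode)
reiserfsck --check /dev/md1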
loady Posted February 19, 2015
That's interesting reading... a lot of similarities. When I first set about moving over to v6 I read a lot of info, and it seemed to be said that the cache drive should be formatted to XFS. I did this and it was running fine, but after some further reading which suggested you no longer need your cache formatted to XFS, I re-partitioned the cache back to ReiserFS. I think this might be when the problems started, though I'm not 100% sure of it.

What I'm thinking is: could the problem first be addressed by reformatting the cache back to XFS, as this will take a lot less time?
itimpi Posted February 19, 2015
Quote: That's interesting reading... a lot of similarities. When I first set about moving over to v6 I read a lot of info, and it seemed to be said that the cache drive should be formatted to XFS. I did this and it was running fine, but after some further reading which suggested you no longer need your cache formatted to XFS, I re-partitioned the cache back to ReiserFS. I think this might be when the problems started, though I'm not 100% sure of it.

Actually the statement was that you no longer needed your cache drive formatted to BTRFS to support docker. XFS is now the recommended default file system, all other criteria being equal.

Quote: What I'm thinking is: could the problem first be addressed by reformatting the cache back to XFS, as this will take a lot less time?

I think most users who have not gone to BTRFS for features like drive pooling and/or trim support are probably now using XFS for their cache drives.
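A quick way to confirm what the cache drive is currently formatted as, assuming it is mounted at the standard /mnt/cache location (the partition name below is a placeholder):

# show the filesystem type of the mounted cache drive
df -T /mnt/cache

# or query the partition directly (replace sdX1 with your cache partition)
blkid /dev/sdX1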
trurl Posted February 19, 2015
Quote: I think most users who have not gone to BTRFS for features like drive pooling and/or trim support are probably now using XFS for their cache drives.

After the recent issues with b13 I am seriously considering doing away with my cache pool. It hasn't really given me any problems, but I am still on b12 because of this, and it's not clear there is enough advantage to justify using another slot this way when I could use it for another array drive.
Archived: This topic is now archived and is closed to further replies.