pyrater Posted September 27, 2015
So I needed to do some office cleaning and executed "powerdown"... Now the server is not coming back up. Please see the attached picture. It just sits there after every reboot, and it hangs with both SSDs installed.
pyrater Posted September 27, 2015
Narrowed it down to my two VM SSDs... It comes up now without those two. Obviously the array won't start (missing too many drives), but I can now get to the webgui.
pyrater Posted September 27, 2015
If you shut down the array and plug in the SSD, you get this... It also does this on boot if you remove the #1 SSD.
pyrater Posted September 29, 2015
Can I get any support at all? This is kind of a big deal, not being able to use ANY SSDs. The SSDs are both 4 months old: SanDisk SDSSDA-240G SSD Plus.
BRiT Posted September 29, 2015
The forums are better for user-to-user support. Support via the forums from Limetech and its staff is extremely hit-and-miss. You need to directly contact @Limetech / @JonP for support.
itimpi Posted September 29, 2015
Quote: "Can I get any support at all? This is kind of a big deal, not being able to use ANY SSDs. The SSDs are both 4 months old: SanDisk SDSSDA-240G SSD Plus."
You are the first person who has reported issues using SSDs. I think a significant proportion of the unRAID users who are active in this forum have SSDs as part of their unRAID setup. I suspect it is something that is not unRAID-specific causing your issues, although I have no idea what it might be.
pyrater Posted September 29, 2015
I agree 100%, but I have no clue where to start fixing it. All I know is that it works fine without them; the second I put one in, it crashes / will not boot.
lr5v Posted September 29, 2015
Got to say I am surprised this isn't a common occurrence, as SSDs theoretically have a limited number of writes they can take. This happened to my unRAID: all good until a reboot, when it just would not start. I had to drag the server down from the loft, remove everything, and test each piece of hardware until the culprit was found to be the SSD cache drive. The drive was formerly my media centre's C drive, so well used. I replaced it with a 2.5-inch laptop drive; not as quick, but working.
RobJ Posted September 30, 2015
Quote: "All I know is it works fine without them; the second I put one in, it crashes / will not boot."
Can you test them, one by one, in another computer? In your unRAID server, does the BIOS see them? Have you tried them in motherboard ports? Just trying to come up with different ideas to get a better handle on what's wrong with them. It seems extremely unlikely for both to fail simultaneously in the same way, and normally a failed drive doesn't affect the rest of the system; it's just not accessible. Perhaps there was a significant electrical event, like a nearby lightning strike?
pyrater Posted September 30, 2015
I tested them in another computer and pulled the data off one of them; the other failed to read. The BIOS sees the drives, and I tried them in different spots on my 24-bay server. The server is protected by a 1500W UPS. All I know is the server DOES NOT like these two drives, either one of them. Glad to know someone else had the same problem. lr5v, did it work with another SSD? I am wondering if replacing the SSDs will work or if I should try to RMA them; they are 4 months old, for crying out loud.
pyrater Posted October 3, 2015
OK, so I am at a loss: I replaced ALL MY SSDs with brand-new SSDs and it's still crashing! Even when booting in safe mode it gets to login:, then crashes with the text seen below. I can log in via PuTTY but not the webgui. I got the log; please see below.

Oct 2 18:29:04 Icarus kernel: sdh: sdh1
Oct 2 18:29:04 Icarus kernel: sd 21:0:0:0: [sdh] Attached SCSI disk
Oct 2 18:29:05 Icarus kernel: ata26: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
Oct 2 18:29:05 Icarus kernel: ata26.00: ATA-9: SanDisk SDSSDA240G, 153611406099, U21010RL, max UDMA/133
Oct 2 18:29:05 Icarus kernel: ata26.00: 468862128 sectors, multi 1: LBA48 NCQ (depth 31/32)
Oct 2 18:29:05 Icarus kernel: ------------[ cut here ]------------
Oct 2 18:29:05 Icarus kernel: kernel BUG at drivers/ata/sata_mv.c:2120!
Oct 2 18:29:05 Icarus kernel: invalid opcode: 0000 [#1] PREEMPT SMP
Oct 2 18:29:05 Icarus kernel: Modules linked in: md_mod(-) w83627hf hwmon_vid k10temp sata_mv forcedeth sata_nv pata_amd acpi_cpufreq
Oct 2 18:29:05 Icarus kernel: CPU: 1 PID: 1149 Comm: scsi_eh_25 Not tainted 4.1.7-unRAID #3
Oct 2 18:29:05 Icarus kernel: Hardware name: Supermicro H8DM8-2/H8DM8-2, BIOS 080014 10/22/2009
Oct 2 18:29:05 Icarus kernel: task: ffff88011af41920 ti: ffff88021a7cc000 task.ti: ffff88021a7cc000
Oct 2 18:29:05 Icarus kernel: RIP: 0010:[<ffffffffa008c634>] [<ffffffffa008c634>] mv_qc_prep+0x147/0x1f2 [sata_mv]
Oct 2 18:29:05 Icarus kernel: RSP: 0018:ffff88021a7cf868 EFLAGS: 00010006
Oct 2 18:29:05 Icarus kernel: RAX: ffff8800d9def40a RBX: ffff8800d9e41d58 RCX: ffff8800d9def400
Oct 2 18:29:05 Icarus kernel: RDX: ffff8800d9def447 RSI: 0000000000000001 RDI: ffff8800d9e41d58
Oct 2 18:29:05 Icarus kernel: RBP: ffff88021a7cf888 R08: ffff8800db90b060 R09: 0000000000000046
Oct 2 18:29:05 Icarus kernel: R10: 00000000d9e4377f R11: ffff8800db90b098 R12: ffff8800d9e15818
Oct 2 18:29:05 Icarus kernel: R13: 0000000000000001 R14: ffffffff81810a80 R15: ffff8800db90b098
Oct 2 18:29:05 Icarus kernel: FS: 00002af014b03600(0000) GS:ffff88011bc40000(0000) knlGS:0000000000000000
Oct 2 18:29:05 Icarus kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 000000008005003b
Oct 2 18:29:05 Icarus kernel: CR2: 00000000006e4c88 CR3: 00000000017f9000 CR4: 00000000000006e0
Oct 2 18:29:05 Icarus kernel: Stack:
Oct 2 18:29:05 Icarus kernel: ffff8800d9e41e58 ffff8800d9e41d58 ffff8800d9e41e58 ffff8800d9e40000
Oct 2 18:29:05 Icarus kernel: ffff88021a7cf8e8 ffffffff8145d238 ffff88021a7cf8b8 ffff88021a7cf9b8
Oct 2 18:29:05 Icarus kernel: 0000000100000000 0000000100000002 ffff88021a7cf8c8 ffff8800d9e42268
Oct 2 18:29:05 Icarus kernel: Call Trace:
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d238>] ata_qc_issue+0x268/0x2c7
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d515>] ata_exec_internal_sg+0x27e/0x478
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d78d>] ata_exec_internal+0x7e/0x8b
Oct 2 18:29:05 Icarus kernel: [<ffffffff810783f1>] ? vprintk_default+0x18/0x1a
Oct 2 18:29:05 Icarus kernel: [<ffffffff81465f33>] ata_read_log_page+0xdf/0x11d
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145eb30>] ata_dev_configure+0x9ed/0xee3
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d7bf>] ? ata_do_dev_read_id+0x25/0x27
Oct 2 18:29:05 Icarus kernel: [<ffffffff814684b6>] ata_eh_recover+0x77f/0xf86
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146b07d>] ? ata_sff_softreset+0x166/0x166
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008d627>] ? mv5_reset_hc+0x10d/0x10d [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008b731>] ? mv_pmp_hardreset+0x58/0x58 [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146cb5c>] ? ata_bmdma_interrupt+0x184/0x184
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146d65a>] sata_pmp_error_handler+0xfb/0x82d
Oct 2 18:29:05 Icarus kernel: [<ffffffff81057498>] ? try_to_grab_pending+0x45/0x146
Oct 2 18:29:05 Icarus kernel: [<ffffffff81082a85>] ? lock_timer_base.isra.25+0x26/0x4a
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008ac88>] mv_pmp_error_handler+0x7b/0x84 [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffff81468f7e>] ata_scsi_port_error_handler+0x21b/0x55f
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146934e>] ata_scsi_error+0x8c/0xb7
Oct 2 18:29:05 Icarus kernel: [<ffffffff81448b30>] scsi_error_handler+0xac/0x4a0
Oct 2 18:29:05 Icarus kernel: [<ffffffff81448a84>] ? scsi_eh_get_sense+0xd4/0xd4
Oct 2 18:29:05 Icarus kernel: [<ffffffff81448a84>] ? scsi_eh_get_sense+0xd4/0xd4
Oct 2 18:29:05 Icarus kernel: [<ffffffff8105c792>] kthread+0xd6/0xde
Oct 2 18:29:05 Icarus kernel: [<ffffffff8105c6bc>] ? kthread_create_on_node+0x172/0x172
Oct 2 18:29:05 Icarus kernel: [<ffffffff815f5a12>] ret_from_fork+0x42/0x70
Oct 2 18:29:05 Icarus kernel: [<ffffffff8105c6bc>] ? kthread_create_on_node+0x172/0x172
Oct 2 18:29:05 Icarus kernel: Code: 12 66 89 51 0a eb 26 0f b6 43 2a 80 cc 11 66 89 41 0a 48 8d 41 0e 0f b6 53 2f 80 ce 11 66 89 51 0c eb 0a 48 8d 41 0a 84 d2 74 02 <0f> 0b 0f b6 53 30 80 ce 12 66 89 10 0f b6 53 2c 80 ce 13 66 89
Oct 2 18:29:05 Icarus kernel: RIP [<ffffffffa008c634>] mv_qc_prep+0x147/0x1f2 [sata_mv]
Oct 2 18:29:05 Icarus kernel: RSP <ffff88021a7cf868>
Oct 2 18:29:05 Icarus kernel: ---[ end trace bf441de84c1a7ded ]---
Oct 2 18:29:05 Icarus kernel: note: scsi_eh_25[1149] exited with preempt_count 1
Oct 2 18:29:05 Icarus kernel: ------------[ cut here ]------------
Oct 2 18:29:05 Icarus kernel: WARNING: CPU: 1 PID: 1149 at kernel/smp.c:292 smp_call_function_single+0x81/0xef()
Oct 2 18:29:05 Icarus kernel: Modules linked in: md_mod(-) w83627hf hwmon_vid k10temp sata_mv forcedeth sata_nv pata_amd acpi_cpufreq
Oct 2 18:29:05 Icarus kernel: CPU: 1 PID: 1149 Comm: scsi_eh_25 Tainted: G D 4.1.7-unRAID #3
Oct 2 18:29:05 Icarus kernel: Hardware name: Supermicro H8DM8-2/H8DM8-2, BIOS 080014 10/22/2009
Oct 2 18:29:05 Icarus kernel: 0000000000000009 ffff88021a7cf478 ffffffff815eff9a 0000000000000000
Oct 2 18:29:05 Icarus kernel: 0000000000000000 ffff88021a7cf4b8 ffffffff810477cb ffff88011bc55ec0
Oct 2 18:29:05 Icarus kernel: ffffffff81092c4d ffff88011af41920 0000000000000001 0000000000000001
Oct 2 18:29:05 Icarus kernel: Call Trace:
Oct 2 18:29:05 Icarus kernel: [<ffffffff815eff9a>] dump_stack+0x4c/0x6e
Oct 2 18:29:05 Icarus kernel: [<ffffffff810477cb>] warn_slowpath_common+0x97/0xb1
Oct 2 18:29:05 Icarus kernel: [<ffffffff81092c4d>] ? smp_call_function_single+0x81/0xef
Oct 2 18:29:05 Icarus kernel: [<ffffffff810a73c2>] ? perf_event_refresh+0x39/0x39
Oct 2 18:29:05 Icarus kernel: [<ffffffff81047879>] warn_slowpath_null+0x15/0x17
Oct 2 18:29:05 Icarus kernel: [<ffffffff81092c4d>] smp_call_function_single+0x81/0xef
Oct 2 18:29:05 Icarus kernel: [<ffffffff810a5e9b>] task_function_call+0x44/0x4e
Oct 2 18:29:05 Icarus kernel: [<ffffffff810ab285>] ? __perf_event_task_sched_out+0x35b/0x35b
Oct 2 18:29:05 Icarus kernel: [<ffffffff810a74d8>] perf_cgroup_exit+0x19/0x1b
Oct 2 18:29:05 Icarus kernel: [<ffffffff8109cd09>] cgroup_exit+0xa9/0xc8
Oct 2 18:29:05 Icarus kernel: [<ffffffff810497fc>] do_exit+0x3e0/0x8db
Oct 2 18:29:05 Icarus kernel: [<ffffffff810789cc>] ? kmsg_dump+0xa7/0xb0
Oct 2 18:29:05 Icarus kernel: [<ffffffff8100e19b>] oops_end+0xb5/0xba
Oct 2 18:29:05 Icarus kernel: [<ffffffff8100e2d4>] die+0x55/0x61
Oct 2 18:29:05 Icarus kernel: [<ffffffff8100b55e>] do_trap+0x66/0x11e
Oct 2 18:29:05 Icarus kernel: [<ffffffff8100b6e0>] do_error_trap+0xca/0xdc
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008c634>] ? mv_qc_prep+0x147/0x1f2 [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffff81356e41>] ? put_dec+0x53/0x58
Oct 2 18:29:05 Icarus kernel: [<ffffffff8135819c>] ? number.isra.13+0x11e/0x216
Oct 2 18:29:05 Icarus kernel: [<ffffffff8106d55f>] ? pick_next_task_fair+0x106/0x41a
Oct 2 18:29:05 Icarus kernel: [<ffffffff81357a38>] ? string.isra.3+0x3d/0xa4
Oct 2 18:29:05 Icarus kernel: [<ffffffff8100a49b>] ? __switch_to+0x43c/0x4c8
Oct 2 18:29:05 Icarus kernel: [<ffffffff8100bafe>] do_invalid_op+0x1b/0x1d
Oct 2 18:29:05 Icarus kernel: [<ffffffff815f6a1e>] invalid_op+0x1e/0x30
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008c634>] ? mv_qc_prep+0x147/0x1f2 [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d238>] ata_qc_issue+0x268/0x2c7
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d515>] ata_exec_internal_sg+0x27e/0x478
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d78d>] ata_exec_internal+0x7e/0x8b
Oct 2 18:29:05 Icarus kernel: [<ffffffff810783f1>] ? vprintk_default+0x18/0x1a
Oct 2 18:29:05 Icarus kernel: [<ffffffff81465f33>] ata_read_log_page+0xdf/0x11d
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145eb30>] ata_dev_configure+0x9ed/0xee3
Oct 2 18:29:05 Icarus kernel: [<ffffffff8145d7bf>] ? ata_do_dev_read_id+0x25/0x27
Oct 2 18:29:05 Icarus kernel: [<ffffffff814684b6>] ata_eh_recover+0x77f/0xf86
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146b07d>] ? ata_sff_softreset+0x166/0x166
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008d627>] ? mv5_reset_hc+0x10d/0x10d [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008b731>] ? mv_pmp_hardreset+0x58/0x58 [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146cb5c>] ? ata_bmdma_interrupt+0x184/0x184
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146d65a>] sata_pmp_error_handler+0xfb/0x82d
Oct 2 18:29:05 Icarus kernel: [<ffffffff81057498>] ? try_to_grab_pending+0x45/0x146
Oct 2 18:29:05 Icarus kernel: [<ffffffff81082a85>] ? lock_timer_base.isra.25+0x26/0x4a
Oct 2 18:29:05 Icarus kernel: [<ffffffffa008ac88>] mv_pmp_error_handler+0x7b/0x84 [sata_mv]
Oct 2 18:29:05 Icarus kernel: [<ffffffff81468f7e>] ata_scsi_port_error_handler+0x21b/0x55f
Oct 2 18:29:05 Icarus kernel: [<ffffffff8146934e>] ata_scsi_error+0x8c/0xb7
Oct 2 18:29:05 Icarus kernel: [<ffffffff81448b30>] scsi_error_handler+0xac/0x4a0
Oct 2 18:29:05 Icarus kernel: [<ffffffff81448a84>] ? scsi_eh_get_sense+0xd4/0xd4
Oct 2 18:29:05 Icarus kernel: [<ffffffff81448a84>] ? scsi_eh_get_sense+0xd4/0xd4
Oct 2 18:29:05 Icarus kernel: [<ffffffff8105c792>] kthread+0xd6/0xde
Oct 2 18:29:05 Icarus kernel: [<ffffffff8105c6bc>] ? kthread_create_on_node+0x172/0x172
Oct 2 18:29:05 Icarus kernel: [<ffffffff815f5a12>] ret_from_fork+0x42/0x70
Oct 2 18:29:05 Icarus kernel: [<ffffffff8105c6bc>] ? kthread_create_on_node+0x172/0x172
Oct 2 18:29:05 Icarus kernel: ---[ end trace bf441de84c1a7dee ]---
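[Editor's note] For anyone debugging a similar crash: the key line in a trace like this is the `kernel BUG at ...` one, which names the driver source file at fault (here `sata_mv.c`). A small sketch for pulling that line, plus a count of frames in the suspect module, out of a saved syslog; the function name `oops_summary` and the hard-coded `sata_mv` module are illustrative, not anything unRAID ships:

```shell
# Hypothetical helper: summarize a kernel oops from a saved syslog file.
# Prints the "kernel BUG at ..." location and how many stack frames
# mention the sata_mv module, so the failing driver is obvious at a glance.
oops_summary() {
    # First BUG line, without the trailing "!"
    grep -o 'kernel BUG at [^!]*' "$1" | head -n 1
    # Count of trace lines attributed to the sata_mv module
    grep -c '\[sata_mv\]' "$1" | sed 's/^/sata_mv frames: /'
}
```

Usage would be something like `oops_summary /boot/logs/syslog.txt` after copying the log off the flash drive.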
pyrater Posted October 3, 2015
Full syslog here: http://pastebin.com/W8KQSnw4
If I roll ALL THE WAY back to unRAIDServer-6.0.1-x86_64.zip, it works. It is indeed not a problem with my hardware, which is annoying to say the least. Does anyone have the download links to 6.1-RC1 through RC6 for testing? This bug was introduced by Limetech at some point. Can I get an official response from JonP or Limetech? Is there hope for a fix down the road?
6.1.3 hangs on boot
6.1.2 hangs on boot as well
RobJ Posted October 3, 2015
That's a very interesting board: a SuperMicro based on the NVidia chipsets, with the quirks of the nForce boards. The onboard ports are using the sata_nv module (the NVidia SATA support), and you have another SATA controller using the sata_mv module (the early Marvell SATA support). This is the ONLY time I have ever seen this combination, and to be honest, I'm not sure how much I trust it. It looks like SuperMicro was testing the nForce-based board but must not have liked it, as I don't think they ever used it again. This is also the first time I have seen the sata_mv module in 64-bit v6, which may be fine, but it just means I don't have any success stories for it. I'm sorry, but I would have to consider replacing the board. A long shot, but one thing you can try: that is a very old BIOS, from 2009, which can't know anything about SSDs. Try checking for an update to it.
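[Editor's note] A quick way to verify which kernel driver each SATA controller is bound to, as described above, is to parse the output of `lspci -k` (from pciutils). A minimal sketch; the function name `sata_driver_map` is illustrative, and it reads from stdin so it works on either live output or a saved capture:

```shell
# Print "controller -> kernel driver" pairs from `lspci -k`-style output.
# Remembers the most recent SATA/IDE controller line, then pairs it with
# the following "Kernel driver in use:" line.
sata_driver_map() {
    awk '/SATA controller|IDE interface/ { ctrl = $0 }
         /Kernel driver in use:/ && ctrl != "" {
             sub(/.*Kernel driver in use: /, "")
             print ctrl " -> " $0
             ctrl = ""
         }'
}
# Usage on a live system: lspci -k | sata_driver_map
```

On the board discussed here, this should show the onboard ports bound to sata_nv and the AOC-SAT2-MV8 cards bound to sata_mv.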
pyrater Posted October 3, 2015
Rob, thank you for the reply; this setup has worked for years until something changed in unRAID. Like I said, it works fine on 6.0.1.
BRiT Posted October 3, 2015
Quote: "Rob, thank you for the reply; this setup has worked for years until something changed in unRAID. Like I said, it works fine on 6.0.1."
The Linux kernel has been updated to keep up to date with bug fixes and security updates.
pyrater Posted October 3, 2015
So I'm stuck with 6.0.1, or I have to buy new hardware? That doesn't seem right... and it's expensive.
BRiT Posted October 3, 2015
The nForce hardware has never been what I consider stable. There have been multiple times in the past where there were corruptions tracked back to nForce hardware.
RXWatcher Posted October 3, 2015
I'm really sorry you feel that way. I personally want unRAID to move forward. That NVidia chipset is known to be bad; it was a poor decision to purchase that board. We must move forward because modern boards need the more modern kernels too. It's not just security patches; you get built-in kernel drivers for modern chipsets. I have to regularly move up Linux versions when I install new servers because only version X supports my chipsets. I can't install RHEL 6.2 on a brand-new server; it won't boot.
WeeboTech Posted October 3, 2015
Quote: "So I'm stuck with 6.0.1, or I have to buy new hardware? That doesn't seem right... and it's expensive."
Did you try rearranging the drive layout and putting the suspect drives on a different controller? Perhaps use an add-in controller known to be reliable.
pyrater Posted October 3, 2015
They are all using add-on controllers; none are on the onboard ports. They are all connected to 3 x AOC-SAT2-MV8 cards, which is why I am confused as to why the motherboard would be the issue. I did try using different ports, but I haven't tried connecting the SSDs to the mobo directly. It seemed like the better option was to report the bug and see if they could do a quick patch or fix it in the next release.
http://www.supermicro.com/products/accessories/addon/AOC-SAT2-MV8.cfm
WeeboTech Posted October 3, 2015
Quote: "They are all using add-on controllers; none are on the onboard ports..."
My mistake, I must have missed that. Perhaps try to isolate the issue and put them on the motherboard (just to see).
jonp Posted October 9, 2015
I don't think we have enough info to ascertain A) whether this is really a kernel bug, and B) what is causing it. However, given that you are the only person so far who seems to have experienced it (and I have multiple systems with various SSD brands in them), chances are it's hardware-related. I realize a previous version may not have this issue, but that doesn't necessarily mean it is an issue in software. Have you found anyone else with the same issue?
pyrater Posted October 10, 2015
It seems like a software issue; it runs 100% smoothly on 6.0.1 with everything on the add-in cards. It only crashes on the latest version of unRAID. I now have the SSDs installed directly to the mobo; I will try the latest version of unRAID now... :crossed fingers:
pyrater Posted October 10, 2015
Happy to report that with the SSDs connected directly to the mobo and not the add-in cards, 6.1.3 works. The problem is related to those cards and the latest update.
AOC-SAT2-MV8 cards + unRAID 6.1.3 = NOT WORKING FOR SSDs
This topic is now archived and is closed to further replies.