TheWombat

[SOLVED] Disabling RAID from AOC-SASLP-MV8 firmware .21


I have been having intermittent issues booting my UnRaid server where the AOC-SASLP-MV8 cards on firmware .21 get stuck initializing the RAID functionality. This means that reboots can take a while, and since the RAID functionality of the card is not needed for UnRaid, I wanted to disable it.

 

The RAID functionality seems to have been introduced in the .21 firmware, as it was not available by default in the earlier .15 firmware.

 

Looking at the "Firmware_3.1.0.15n" package that is available, I noticed a tool called "mdu.exe", which is the Marvell DOS Setup Utility. The package also includes a PDF user guide that explains the available configuration settings. Neither "mdu.exe" nor the text configuration files were included in "Firmware_3.1.0.21".

 

I created a DOS-bootable USB key as if I were going to downgrade from .21 to .15n. Then, at the DOS prompt, rather than flashing the BIOS I instead used "mdu.exe".

 

I then typed "backup_cfg" to create a backup of the current HBA configuration and RAID configuration.

I then edited the following two lines in the "hba0_cfg.txt" that was created:

[RAID FEATURE]=DISABLE

[RAID5]=DISABLE

 

I then typed "restore_cfg -f hba0_cfg.txt", at which point I received errors stating "SPINUP_TIME should be 0~10". This error made no sense, as there is no "SPINUP_TIME" entry. There is a "[SPINUP TIME]" entry (without the underscore), but it was already set to "0".

 

After an hour of messing around I realized that, for some reason, I couldn't back up, edit and then restore the file.

 

I instead looked at the file "6480.txt" that came with the "Firmware_3.1.0.15n" zip file. I couldn't see any material differences in this file compared to my "hba0_cfg.txt" that had been backed up from the card itself.

 

I therefore edited the following lines in "6480.txt":

[RAID FEATURE]=DISABLE

[RAID5]=DISABLE

 

I then ran "restore_cfg" and restored "6480.txt" to the SAS card. This worked without any errors!
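For anyone wanting to repeat this, the whole sequence at the DOS prompt looks roughly like the sketch below. Treat it as a guide rather than gospel: "mdu.exe" comes from the .15n package, some posts invoke the commands as "mdu backup_cfg" while others use the bare batch-file names, and the exact option syntax may vary by version.

```
REM Back up the current HBA config (writes HBA0_CFG.TXT to the USB key)
mdu backup_cfg

REM Edit the config file (the backed-up HBA0_CFG.TXT, or the bundled
REM 6480.txt from the .15n package) and set:
REM   [RAID FEATURE]=DISABLE
REM   [RAID5]=DISABLE

REM Write the edited config back to the SAS card
mdu restore_cfg -f 6480.txt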

 

I then rebooted and was very pleased to see that I had firmware .21 (still) and now no RAID in the bootup sequence or the BIOS as shown below (please excuse the poor quality of the photos!):

 

Note: there is no "Initializing RAID" message in the photo below after the last "Detecting Port" line of the first controller.

 

[photo: DSC04565, 1280x960]

 

Note: there is no RAID option now in the BIOS, which shows the firmware as 3.1.0.21.

 

[photo: DSC04559, 1280x960]

 

I repeated the same steps for the second AOC-SASLP-MV8. On the "restore_cfg" I received an error, so I ran the command a second time and it worked fine.

 

Here you can see the two AOC-SASLP-MV8 cards re-installed and ready for boot up.

 

[photo: DSC04561, 960x1280]

 

I have now rebooted, UnRaid seems to be working fine, there are no obvious errors in the syslog, and I am currently part way through a parity check.

 

So far a great result.

 

Pros:

Boot Up sequence is quicker as no need to initialize RAID function on the AOC-SASLP-MV8 card.

Boot Up sequence is quicker as the RAID function initialization no longer intermittently hangs and times out.

Still able to use firmware .21

 

Cons:

None so far, other than the 5-10 minutes to update the configuration file.

 

UPDATE:

 

Now that the server is up and running I've investigated some more on the file "hba0_cfg.txt" that was created by the backup_cfg command and compared it to the "6480.txt" file that came with the .15n firmware.

 

The "hba0_cfg.txt" has the line "[DELAY TIME(s)=16]" compared to "6480.txt", which has "[DELAY TIME(s)=5]". While I have not validated this yet, I believe it was this line that was causing the error "SPINUP_TIME should be 0~10". I am not in a position at present to re-update the SAS card(s) to confirm this, due to having preclears running on 4TB HDDs, however this seems to be the only variation I can see that fits the message.
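If that theory holds, the minimal edit to a backed-up "hba0_cfg.txt" before restoring it would be the three lines below. The [DELAY TIME(s)] value of 5 is simply copied from the default 6480.txt; I have not yet confirmed that this is what clears the error:

```
[RAID FEATURE]=DISABLE
[RAID5]=DISABLE
[DELAY TIME(s)=5]
```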

 

If this is indeed the issue, then it makes the process 'safer', as you can run backup_cfg, edit the actual config file from your actual SAS card to disable RAID, and then restore that same file. The reality, though, is that the file does not differ materially from the 6480.txt file.

 

TheWombat


As an update, I am running a pre-clear on a 4TB HGST 5400rpm drive and a parity check across 14 drives, getting 112 MB/s and 60-70 MB/s respectively, which on 5.0RC12a I find reasonable.

 

I will be moving 10+ TB of data between the HDDs on the array over the next few days as I am re-organizing the data storage, so this will give the SASLP cards a good test. I will aim to post an update next weekend. Early indications continue to be positive, and the update is worth it for no other reason than to improve the reboot speed.

 

Update

 

The parity check completed successfully in 30982 seconds (about 8.6 hours), which is about normal for my setup. Things are looking positive!

 

TheWombat

 


Last update unless I encounter any issues in the future related to this.

 

The changes have been 100% fine on my configuration. I have rebooted at least 20 times since making the change and there have been no issues as a result of the modification. The boot-up sequence is now significantly quicker as there is no initialization of the RAID, and I have had no hangs or timeouts at all on boot-up, whereas before the RAID initialization step timed out about 35% of the time.

 

I would suggest that this 5-10 minute change is a "no brainer" for anyone using the AOC-SASLP-MV8 cards with firmware .21, and it also removes one of the reasons for not previously upgrading from .15n to .21.

 

Note: the update doesn't make the .21 firmware smaller, so there may still be boot-up issues for some motherboards due to BIOS size issues.

 

TheWombat


Is every card the same? Or do we have to export the cfg for each card, modify it and load it back? I have two cards in my system, so I'm just curious about the process. Also, you mentioned '[DELAY TIME(s)=16]'; did you end up changing this?


I ended up using the same default file for both my cards. If I were to do it again I would re-try the backup, edit, restore approach, as it only adds 60 seconds or so to the procedure. If you have concerns, I am planning to try it again this weekend with the backup, edit, restore approach, just to confirm whether I was correct in my original supposition as to what caused the error.

 

The default config file I used on the restore had the 16s setting set to 5s, which is why I believe this was the cause of the error.

 

The only config attribute that would vary by card is the serial number; however, you can leave it the same for both cards without any issues.

 

HTH

 

TheWombat

 

 



Before any changes, the output of "mdu info -o hba > info.txt":

Initializeing...
Adapter id:                  0
Product:                     11ab-6485
Sub Vendor ID:               11ab
Sub Device ID:               6480
Chip Revision:               1
BIOS Version:                3.1.0.21
# of ports:                  8
Alarm:                       Support
Supported port type:         SATA SAS 
Supported Raid Mode:         
maximum disk in one VD:      0
PM:                          Support
Expander:                    Support
Migrate:                     Not support
Media patrol:                Not support
maximum support VD:          0
MaxHD:                       128
MaxHD_Ext:                   128

 

Ran the command "mdu backup_cfg"; it created HBA0_CFG.TXT (attached to this post).

I have two cards; I wonder why it only exported data for the first one.

 

So I looked at my backup config; it appears that I already have the RAID feature disabled (pretty sure I did this from the SAS BIOS when I first got the cards):

[RAID FEATURE]=DISABLE
[RAID5]=DISABLE

 

Booted up the box and went into the SAS BIOS (Ctrl+M); the RAID mode is set to JBOD and the HDD detect time is set to 16. The range there is 10 to 32.

(attachment: HBA0_CFG.TXT)


While I have 2 SASLP cards, I did the backup-restore on my desktop PC, so I only had 1 card in at a time. While I would have expected mdu backup_cfg to create a HBA0_CFG.TXT and a HBA1_CFG.TXT, it may be a 'feature' that it doesn't :-)

 

I've compared your CFG.TXT file to the one from my card and also to the default 6480.txt config file that I restored and the only differences I can see are:

[SERIAL_NUMBER] - which I'd expect to differ

[DELAY TIME(s)] - where I changed mine from 16 to 5

[SASADDRESS] for each port - which I'd expect to differ

 

Good to see that you've been successful, though it is odd that your second card already had the RAID feature disabled, as I'm not aware of any way to do this within the actual SAS BIOS when you press Ctrl+M.

 

TheWombat

 


It's been about 2 months now since I did it, but I'm pretty sure I set the mode to JBOD and disabled Int13. I did not change anything with mdu today; I was going to, but stopped once I saw that RAID was already disabled. Looks like the lowest time I'm going to be able to set is 10 sec with the SAS BIOS. A few extra seconds on startup isn't a big deal, since I just leave it running 24/7.


I decided to upgrade both of my AOC-SASLP-MV8 controllers from firmware version .15 to .21 over the weekend.  After successfully flashing both cards, one of them displayed the RAID initialization message and the other did not.  I displayed the BIOS for each controller; one showed the RAID options and the other just showed JBOD with no RAID tab at the top of the display.  I created a USB boot drive with the firmware .15 files and modified the 6480.txt file to disable the RAID functions.  I restored the configuration using the 6480.txt file and the server now boots without the RAID initialization step.  Checking the BIOS indicates that this feature no longer exists.

 

I had tried to create a backup configuration file but nothing was created.  I'm doing this on my server with only one controller installed at a time and I'm using a USB drive to boot from. 

 

After the server boots with all hardware installed, it displays both controllers with all connected drives listed.  At the bottom of the screen it displays the options to enter BIOS setup by pressing Ctrl + M or press Space to continue.  When I try to do either, it appears to be unresponsive.  I decided to let it sit for a few minutes while I left the room.  When I returned it had booted into unRAID.

 

The server is taking far longer to boot with the .21 firmware than it ever did with the .15 version.  Is there something I'm missing here?  Do I need to have a hard drive installed to write the configuration file to?


The configuration file, when you back it up, should be written to the USB key you were running mdu from. In my case this is a separate USB key from my unRAID USB key, since it needs to boot into DOS. When I ran the mdu backup_cfg command it created a file HBA0_CFG.TXT.

 

My boot-up doesn't seem noticeably slower than .15, although I didn't time either. It pauses for maybe 2 seconds on the 'Press CTRL+M' message. Is it just on this message that you find it is taking longer, or are there other steps between when you switch on and the unRAID boot option appearing?

 

There are settings in the BIOS for disk spinups and delays, please post your config and I can compare to mine.

 

TheWombat


I used a separate USB flash drive configured as a boot drive with the aforementioned files installed.  When I tried the backup_cfg command I got an error indicating there was no virtual drive, or something to that effect.  I just restored the 6480.txt file that had the RAID functions disabled.  I only restored it to the controller that displayed the RAID initialization message during bootup.  I did adjust the delay from 0 to 1, but that's probably inconsequential.

 

The delay is when it displays the overall configuration with both controllers listed along with the drives connected to each.  The Ctrl + M/Space bar message is displayed at the bottom of the screen, but does not appear to respond to any keyboard inputs.  I didn't time it to see how long it sits there, but I suspect it's on the order of several minutes.  I won't get a chance to check it until tomorrow evening, as I have to be somewhere after work this evening and won't get home until late.  I'll take another shot at backing up the configuration and see what happens.


I installed a new Seagate 7200.14 3TB drive in my unRAID server yesterday and connected it to one of the upgraded controllers.  Since all of my other drives are 2TB or less, I was installing it as the new parity drive.  The drive showed up when each of the controllers was scanned for attached drives, but when I checked the unRAID web GUI the drive was not there.  The parity drive indicated it was not installed, which is what I expected to see.  When I stopped the array and looked at the drop-down menu to display the available drives to use as the parity drive there were no drives listed.  It just said "Not installed."

 

I ran the SeaTools diagnostic on the drive on another PC just to make sure there wasn't a problem with it and it passed both the long and short tests.  I reinstalled it in the unRAID array, but this time I connected it directly to one of the SATA ports on the motherboard.  When it booted up and I stopped the array, the new Seagate drive was listed.  I have it running a parity check while I'm at work so I'm hoping it will be completed later this evening.

 

The fact that the drive did not show up when connected to the controller was extremely disappointing.  I may move it to the other controller just to see if it shows up.

 

BTW, I timed the boot sequence beginning from the time the screen is displayed with both controllers and their associated drives with the Ctrl + M or Space options listed at the bottom to the time it continues with the boot sequence and loads unRAID.  This time interval was just over four minutes.  This same sequence with firmware .15 only took a second or two to complete.  I may try reflashing the firmware again to see if it clears things up.


Something seems very odd with your setup compared to mine. I have had no issues with 1, 2 and 4TB drives, and my entire boot-up sequence takes perhaps 1-2 minutes (I'll time it when I remove some HDDs this weekend). I'm assuming you've done the usual things like disabling INT 13 on both cards?

 

Try reflashing and see what happens. I have downgraded and upgraded the firmware on my cards a few times and not had any issues like the ones you are reporting.

 

The change that the config file makes should not have any impact like you are describing; all it does is disable the RAID functionality - the actual firmware remains the same.

 

TheWombat


I vaguely remember something about INT 13 with regards to my unRAID setup, but it's been so long ago I don't recall what it was all about.  I'll search on the topic and refresh my memory.

 

The controllers seem to be working fine after the system boots up, other than not detecting the 3TB drive in unRAID.

 

I just refreshed my memory regarding the INT 13 setting.  I'll have to check the BIOS on each controller and see how it's set.  I'm pretty sure I had it disabled with firmware .15, but I assume I'll have to reset it with the new firmware.


I disabled Int13h on both controllers when I went home for lunch.  I see no difference in the long boot times with it disabled.  I plan on exploring the BIOS settings in more detail when I get home this evening to see if there's something else I might need to change.  Otherwise, I'm stumped.  I never had this issue with firmware version .15.

 

FYI - I forgot to mention that I'm using unRAID 5.0-rc12a.  Motherboard is an Asus micro-ATX FM1 model with an AMD A4-3400 CPU.  I forget whether I have 4 or 8GB of RAM at the moment.  Case is a Supermicro SC846TQ-R900B 24-bay server rack.  SATA controllers:  six SATA ports on motherboard, two AOC-SASLP-MV8's, and one Promise SATA300 TX-4 4-port controller (only using two ports since it's in a PCI slot).  PCI-e x1 slot is occupied by an Intel NIC.  Currently only 20 of the 24 available drive bays are occupied.


It's pretty straightforward to downgrade to .15 and see if that resolves the issue, or if something else is causing this. I've not heard of .21 causing the issues you've experienced, although I've not searched the forum in detail.

 

TheWombat

 


Downgrading to .15 isn't really an option.  The main reason I upgraded to .21 was so I could use 3 or 4TB drives.  The fact that it's only an issue when I boot up is mostly an annoyance at this point since I don't constantly reboot my system.  It makes any sort of troubleshooting a longer process than it needs to be.  It works fine after unRAID is loaded.  The thing that bothers me the most right now is the fact that my new 3TB parity drive wasn't even seen by unRAID.  I would have expected it to at least show up, even if it was somehow limited in capacity.  I'll play with it some more over the weekend when I have more time.


Good news on the 3TB drive and .21.  I returned the 3TB drive to its original slot connected to one of the controllers after it rebuilt parity from scratch.  The drive is now recognized by unRAID as a 3TB drive when connected to the controller.  I'm still not sure why it didn't see it previously.  I can only assume it wasn't seated properly on the backplane.

 

I looked at the controller BIOS settings and didn't see anything that would likely be the cause of the long startup delay.  I may have to chalk this up as one of those little mysteries that can't be solved.  As long as unRAID works with 3TB drives or larger I can learn to live with it.


Some further thoughts: I don't even know if this is relevant, but when I checked the BIOS settings after upgrading to firmware version .21, the Int13h setting was enabled.  I disabled it on both controllers, but the long boot time still occurs.  It was after I changed this setting that I swapped the 3TB drive back to one of the controller ports.  I'm wondering if this setting had anything to do with the 3TB drive not being recognized initially.  I don't know why it would, but it's the only thing that changed after the initial drive installation, other than setting it up as the new parity drive and running a parity check, which was performed prior to disabling Int13h on the controllers.  The parity check was performed with the 3TB drive connected directly to one of the motherboard SATA ports.


Good day Crew,

 

I am in the process of setting up a new server using mostly recycled equipment - and 4 TB drives.

 

I have upgraded my spare AOC-SASLP-MV8 to the latest Firmware_3.1.0.21.

I do not have Firmware_3.1.0.15n, and after spending over 8 hours searching I have been unable to find "mdu.exe", the Marvell DOS Setup Utility, from any source.

 

Can anyone provide that? The boot takes a VERY LONG TIME with 4TB drives attached.  I can provide a secure upload link to my work's website if you like.

 

Thank you all in advance!

Chris


Take a look at the sticky post on Controller Cards by Rajahal in this subforum. There is a link to .15n in step 8.

 

TheWombat

 

 



Sorry for resurrecting this for my own gains, but a quick question: I'm upgrading my unRAID case to one with SAS backplanes.  Would I be right in thinking that I can get an AOC-SASLP-MV8, disable RAID and plug it into two of the backplanes?  Or would I need an HBA card (or SAS support on a motherboard)?

 

Apologies again - SAS is new to me.


Wombat, is it necessary to flash .21 again if it's already on .21? I bought a few cards and it seems they're already on .21 with JBOD mode enabled, but I'm having issues with unRAID detecting the drives. The cards are detecting the HDDs during boot, and I have INT 13 disabled.

