Failed to Allocate memory for Kernel command line, bailing out booting kernel failed: bad file number


h3xcmd


 

Failed to Allocate memory for Kernel command line, bailing out
booting kernel failed: bad file number

Unraid trial: 18 days remaining at the time of writing.

  1. Created a 6.3.5 USB using the USB Creator tool for Windows and started the two-week trial with MBR (legacy BIOS) boot.
  2. Saw that the new stable 6.4.0 was out; used the WebGUI to update.
  3. Rebooted as required.
  4. Once the WebGUI loaded, enabled UEFI on the flash device from the Main page (so far so good) and rebooted.
  5. Reset the server BIOS settings to defaults, then changed the boot mode to UEFI.
  6. Deleted the previous UEFI boot menu entries (Windows, ESXi, etc.).
  7. Created a UEFI boot entry pointing to /EFI/Boot/Boot64.efi (on my first attempt I did not create a UEFI boot entry; same result).
  8. Rebooted the server.
  9. Unraid boots up in UEFI mode.
  10. Auto-boot countdown runs.
  11. "loading bzroot...ok"
  12. "Failed to Allocate memory for Kernel command line, bailing out
      booting kernel failed: bad file number"
     

I had this same issue prior to starting the trial, when I created a 6.4.0-rc19 boot USB, but I knew there wasn't going to be any support for a beta. Now that 6.4.0 is stable: any ideas or possible solutions?

I should also note that all of the boot options, including memtest, fail with the same message as above.

I can only press Tab to edit: >/bzimage initrd=/bzroot_

I really don't want to be stuck using legacy boot mode. Thanks in advance for any suggestions or help.

[EDIT] 01/16/2018 @ 9:55p
After a little research, this apparently is an old issue from 05/2015 with Syslinux on Dell PowerEdge R410, T420, and R320 servers, with no resolution. Though why would ESXi work? Isn't it also Syslinux UEFI bootable? I will find an answer; combining our CPUs, we'll get there faster. Thanks again.


> the message comes from main.c when allocate_pages() fails.
> allocate_pages() is just a wrapper that calls BS->AllocatePages.

http://www.syslinux.org/archives/2015-April/023398.html
http://www.syslinux.org/archives/2015-May/023466.html
https://github.com/geneC/syslinux/blob/master/efi/main.c


 

 
-----------------------------
Dell PowerEdge T420 | 24GB ECC RAM | 2x Xeon E5-2420 (6C/12T each) | 5 drives (1TB parity | 3TB | 500GB cache). It's a powerhouse for OpenFLIXR

 

 


Someone more knowledgeable than I will probably jump in, but I don't see the problem with using the legacy boot option. LimeTech has previously stated that they only added the UEFI boot option because some new motherboards no longer offer a legacy boot option.

 

You might make sure that you have the latest BIOS for your MB.

 

17 hours ago, Frank1940 said:

You might make sure that you have the latest BIOS for your MB


The BIOS was recently updated to the latest revision [PowerEdge T420 BIOS version 2.4.2 (07 Apr 2015)] while troubleshooting UEFI support in the 6.4.0-rc19 beta. That was my first instinct.

 

The server was released in 2013 and I am the second owner. I totally understand that legacy is the "workaround" solution. However, if I am going to buy an Unraid license on their latest stable release that supports UEFI, I would expect UEFI to work on older hardware that supports it, not just "newer" hardware. The choice of Syslinux is for the same reason VMware ESXi has its trial and license-key setup: it's just secure from a loss-prevention standpoint.

I do have an additional 2x4GB of RAM installed on each bank, pulled from a T410 I had. I'll attempt to remove it and leave the stock 16GB of ECC (error-correcting code) RAM to see if that makes a difference.


So I removed the Mushkin higher-speed non-ECC RAM; same issue. I did check the USB and found FSCK0000-FSCK0010 .REC files... I backed them up. I even backed up the whole USB drive, created a new 6.4.0 USB with EFI enabled, and overwrote the config folder from the backup. Same result.

 

Bank 1:
SAMSUNG 4GB PC3L-10600E DDR3-1333 unbuffered ECC memory module M391B5273DH0-YH9
(black slot blank)
SAMSUNG 4GB PC3L-10600E DDR3-1333 unbuffered ECC memory module M391B5273DH0-YH9
(black slot blank)
Mushkin Stealth 996988S, 8GB (2x4GB) DDR3 UDIMM PC3L-12800 9-9-9-24, non-ECC
(black slot blank)

Bank 2:
SAMSUNG 4GB PC3L-10600E DDR3-1333 unbuffered ECC memory module M391B5273DH0-YH9
(black slot blank)
SAMSUNG 4GB PC3L-10600E DDR3-1333 unbuffered ECC memory module M391B5273DH0-YH9
(black slot blank)
Mushkin Stealth 996988S, 8GB (2x4GB) DDR3 UDIMM PC3L-12800 9-9-9-24, non-ECC
(black slot blank)

[UPDATE] Day 2, 11:25p EST
After resetting the server BIOS to defaults, turning off the Lifecycle Controller, disabling the PERC H310 RAID controller, changing memory type settings, removing the memory and adding it back, rebuilding the bootable USB, diskpart-cleaning the USB, and other frustrating settings, I lost BIOS boot: I no longer see my USB in the BIOS boot order, and I keep getting "not a bootable device, try again." This is part of why I really want UEFI support. A server BIOS is not the same as a traditional desktop BIOS; UEFI just boots right up, no issues or questions asked, but if you forget to enable the tiniest legacy BIOS setting, it throws everything offline.

Before I turn off the lights, throw in the towel, and call it a night: I discovered that the Lifecycle Controller needed to be updated along with six other components (driver pack, RAID, NIC, iDRAC, UEFI diagnostics, etc.), not just the BIOS. So I am running that now. Still open to any suggestions for a resolution.
https://www.dell.com/support/article/us/en/04/sln292343/how-to-update-dell-server-using-the-integrated-lifecycle-controller-update-platform-lcc-update?lang=en

Giving that a try next... not getting my hopes up, though.

     


[Day 3]
The update timed out after an unknown length of time. I woke up in the middle of the night to a screaming server (all fans at full blast) ready to take off, stuck at "iDRAC Initializing...". So the firmware update basically failed, and the server is down for the count. I am now attempting to patch and update the firmware via Dell's UEFI bootable media, built specifically for this server.

I've read that this update method can take more than an hour... Meanwhile, the server fans at full speed are kinda soothing and relaxing...


[Update]

After three attempts, I'm getting "apply_bundles.sh invalid".

Created a WinPE bootable USB
http://windowsmatters.com/2017/10/02/gandalfs-win10pe-x64-redstone-2-build-15063-version-10-01-2017/


I tried to download and install the iDRAC firmware .exe and was blessed with another error about a missing tbs.dll, at which point I noticed I need a Windows Server OS.
Pulled all the drives in the RAID array and inserted a mirror copy of Windows Server 2012. But... I do not remember the admin login creds.

Circled back to
https://www.dell.com/support/article/us/en/04/sln296511/updating-dell-poweredge-servers-via-bootable-media-iso?lang=en

and downloaded the older ISO (a month's difference). Used Rufus: failed at 90%. Ran diskpart on the USB, clean, full format. Encountered errors at 98%;
lost a USB drive.

Square one.
Downloading Windows Server 2012... new USB... to be continued...
 
 


So the new (previously owned) motherboard came in yesterday, and all is working.
Took me a bit to set up BIOS legacy boot...

Had to diskpart the USB and copy the backup over to the same USB drive:
 

diskpart
list disk
select disk 1
clean
create partition primary
select partition 1
active
format fs=fat32 quick label="UNRAID"
exit

Then I had to run the makebootable.bat file.

 

The server is back up and running, and I am definitely afraid to do any updates to the system board now... I don't think I'll find another T420 mobo for $50 again. I did test UEFI: it boots to the EFI just fine, but the Unraid menu still fails loading bzroot, and I have no idea how to fix that. So legacy it is... and I am not moving away from Unraid.
 

  • 9 months later...

Interesting. After my motherboard replacement, I have not bothered to update the BIOS from its original v1, nor the Lifecycle Controller or iDRAC. I've resigned myself to just leaving things the way they are, using legacy BIOS boot for Unraid. It's unfortunate that at setup time I was not able to boot UEFI and struggled with my setup, but everything has been up and running 24/7, and at the end of the day that's all that matters, even though this was not my initial plan. When the time comes to make a change, I'll take a look at those settings once more. I've had some serious PTSD over this, and I'm good now.

  • 10 months later...
On 11/5/2018 at 2:03 PM, WizP said:

I had this problem myself, but tracked it down to my pre-allocating video memory; I had it set to 1GB. I set the value to Auto and Unraid booted fine after that.

 

I know this is a bit of a necro post, but could you explain how you set that value? I am experiencing this issue and need to get Unraid to boot in UEFI. Thanks in advance.

  • 4 months later...

R720 here, same issue.

 

Also, it fails to boot reliably from the internal USB in legacy mode; I have to go through the F11 menu at boot and manually select it, because just setting the boot order doesn't stick. When trying to boot without intervention, it doesn't detect the USB and just pulls up the RAID controller.

 

Damn you, Dell.


Legacy auto-boots on the Dell T420, and I've given up on UEFI boot mode. I'm sure there's missing IPMI stuff that Unraid could work on; it would be nice to control the fan speeds and other areas on PowerEdge servers.

 

I'm not sure where you have the Unraid USB plugged in, but mine is inserted internally. You could use the front USB ports, as they are on the same USB hub controller; either one is fine. The rear USB ports are on a different USB hub controller that doesn't tie into the bootable USB path, and I can see how you might have to select it manually every time.

 

Another possibility is that you do not have the Lifecycle Controller configured correctly (maybe you do), or maybe you forgot to remove the UEFI flag on the USB.

 

https://www.dell.com/support/article/us/en/04/sln292433/dell-poweredge-no-boot-device-available-is-displayed-during-startup?lang=en

 

 


 


I can boot seamlessly without intervention from both the front and back USB ports on the R720.

That's why it's weird af.

The internal USB port is near the PSU and one of the PCI slot "chambers".

 

The Lifecycle Controller completes its inventory check on boot without a single error, and iDRAC is clean, configured, and shows no quirks.
I literally, painfully, went through every menu in iDRAC, the Lifecycle Controller, and the BIOS to check if I was missing something. The sole thing I found was a BIOS option asking whether to enable the internal USB, which I did; I even tried the old "switch it off and back on" trick just to rule out a BIOS derp.


And it's not like the port is physically broken: when hitting F11 to go to the boot menu before it actually tries to boot, the device shows up.

 

It's just that when it does its thing automatically, following the boot order (USB key first, RAID controller second), it can't find the USB anymore, just the HBA/RAID controller. That's just plain weird.

 

 

As for UEFI, yes, it's enabled: as a sanity check, I booted that same Unraid flash drive in UEFI mode on another computer, and it went seamlessly through the whole boot process.

If UEFI weren't enabled on it, it wouldn't even reach Unraid's boot menu in UEFI mode.

The "Failed to Allocate memory for Kernel command line, bailing out / booting kernel failed: bad file number" error is very much not an "I'm too dumb to enable UEFI" problem; it's more "Syslinux craps its pants" or "some systems' firmware derps with Syslinux". It's a known issue: some managed to fix it, not all said how, and the few fixes that were described I tried to no avail.

 

 

So I'm reaching out to Unraid's community hoping people will shoot over ideas, or that someone who actually had this happen on an R720 and fixed it will chime in: both for the Syslinux/firmware issue, which seems to plague several systems, and, with any luck, the vanishing internal USB issue.

  • 2 months later...

Hi Guys, 

 

Any update on UEFI boot solution on Dell PowerEdge T420?

 

I have a similar issue: UEFI boot with Unraid gives me "booting kernel failed: bad file number".

I am actually trying to test Unraid on my old Dell server running ESXi. I was not able to boot the Unraid USB automatically via UEFI mode.

When I set the boot mode to BIOS boot (instead of UEFI) and manually select the Unraid USB drive, it boots up nicely.

However, when I try to boot automatically, the server boots into hard drive C (the ESXi installation) instead of the Unraid USB drive, even with a higher drive sequence order. I will double-check that the drive sequence order was set correctly; I was not too sure.


 

I notice there is an option to disable hard drive C. I am not sure if it should be disabled. What I want is to boot from the USB first and, if that fails, fall back to hard drive C (ESXi).
That way I can switch between Unraid and ESXi as needed without meddling with the BIOS settings again. In UEFI mode this is possible.


  • 2 weeks later...

I have an NVS315 incoming in the mail; I'll try UEFI boot with it on Unraid with the embedded video controller disabled, and will let you know by editing this message. It's a single-slot, low-profile card. Knowing I will only ever use 2 of the 3 available low-profile slots (one for a dual-port SFP+ 10Gbps card, one for an HBA), I very much don't care about a card doing nothing taking up the 3rd slot.

For info, before you ask about the HBA:
- Yes, I do have a mini-mono PERC H310 already, driving the front backplane in JBOD for the hard drives. But I didn't want to put my 2.5" SSDs in the front 3.5" trays, nor have them run off the same controller (8x on PCIe x2), so hitting both the array and the cache could lead to a sad bottleneck.
- Yes, I added a SAS HBA to put my SSDs in the back of the chassis, internally, on their own dedicated controller, in the corner on top of the PSUs, stuck down with double-sided tape, so I can still have a full-height PCIe card in Riser 3.
- Yes, I found where to sip 5 volts from the motherboard to power them.

  • 2 weeks later...

NVS315 arrived, slapped it into the server.

 

Outputs console's video no issue.

 

While disabling the onboard video completely cuts off the iDRAC console (as expected), for some reason it went as far as preventing me from accessing the BIOS via the NVS315's output to a physical screen.

That obviously means trying to boot UEFI with it is a no-go.

 

Had to hard-reset the BIOS to get access again, and now I'm having a hard time remembering the exact magic combination of parameters I dialed in to silence that monster.

 

 

 

Seems like that route is a no-go as a workaround for the Syslinux kernel derp.

  • 5 months later...
  • 1 month later...
  • 2 months later...

@grphx @CrimsonBrew @Widget Nope.
Short of some weird techniques I have seen flung around, UEFI seems to be a no-go (at least on the R720) because of the onboard video chip: you can disable it in favor of an add-in video card, but then you lose iDRAC's remote screen and access to the BIOS at boot time.

That said, legacy vs. UEFI boot isn't that big of a deal for virtualization purposes, since the host's boot method has minimal impact on the VMs' boot method, and legacy boot sometimes lets peculiar hardware work in passthrough when it derps in UEFI.

Honestly, I don't think we're missing much.
UEFI boot on Unraid only seems useful in the cases I've heard about where a server could no longer boot properly in legacy mode after some update, and switching to UEFI did the trick (on much more recent hardware than our beaters here, though).
