
UEFI Boot and GUI Mode Problems


Zoroeyes


Hi

 

I’ve just put together a new, fairly high-end system with the intention of having a computer that can provide a good workstation experience when required and act as a good media server when the workstation is not in use. So I bought into the unRAID ideal of running the unRAID OS and hosting a Windows VM on top of it for when it’s needed. However, I’ve hit my first problem.

 

My hardware is as follows:

 

CPU: AMD Threadripper 2950X (16-core)

RAM: 32GB Team Group Pro Dark 3200MHz

Mobo: MSI MEG Creation X399

GPU: Gigabyte GTX 960

HBA: LSI 9305-24i (direct support for 24 HDDs)

PCIe Storage: MSI XPander 2 PCIe card with 4 x ADATA XPG SX8200 Pro 1TB NVMe drives

NIC: onboard dual Gigabit LAN + PCIe HP dual 10GbE (SFP+) card

Case: 4U rack server case with 24 hot-swap bays and a 6Gb/s backplane for my 24 WD Red (4TB) spinners (main media storage)

PSU: Corsair HX1200i

 

My intention is to use the 4 x 1TB NVMe drives in a striped RAID (using AMD’s built-in RAID capabilities) as an unRAID write cache and also as the drive that hosts my VM.

 

Now to the fun part. I’ve no idea how unRAID will see the RAID volume that you create at BIOS level, since as I understand it, additional AMD software is then needed to put it all together in Windows. But that’s a problem for later.

 

My problem right now is that, to access the RAID capabilities Threadripper provides, you must use a UEFI-enabled BIOS. However, when I have UEFI enabled, unRAID will not boot into the GUI on the host machine. Instead it starts the server successfully but sits with a blinking cursor on the screen rather than the usual login prompt. I know the server is running because I can access the web GUI from another machine, just not on the host. I’ve seen other people with this issue, but they resolved it by disabling UEFI, which I don’t see as an option because it is required for the NVMe RAID functions (I’m happy to be told otherwise if that’s not the case).

 

To test this I’m using the latest stable build on an evaluation license (I have 4 unRAID licenses on other servers but am reluctant to use one of these until I’m further towards a working solution).

 

Any help would be much appreciated as I’m really keen to see unRAID in action on this new build.

 

Thanks in advance

 

5 hours ago, Zoroeyes said:

I’ve no idea how unRAID will see the RAID volume that you create at BIOS level, since as I understand it, additional AMD software is then needed to put it all together in Windows. But that’s a problem for later.

Unless your motherboard has a full hardware RAID controller, AND unRAID has the drivers built in for said RAID controller, the volume will not be visible or usable at all. The chances of being able to use the BIOS RAID with unRAID are pretty much nil.

 

Your best option, assuming the MSI Xpander card is supported in unRAID, is to set up a BTRFS RAID volume on the devices. I believe with 4 identical devices the best option would be RAID10, but I could be wrong.
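Purely for illustration, this is roughly what a four-device BTRFS RAID10 volume looks like from the command line; in unRAID you would normally just assign all four devices to the cache pool in the GUI rather than running this by hand (device names here are hypothetical):

# Stripe and mirror data and metadata across all four NVMe drives
mkfs.btrfs -d raid10 -m raid10 /dev/nvme0n1 /dev/nvme1n1 /dev/nvme2n1 /dev/nvme3n1
# Confirm all four devices are part of the volume
btrfs filesystem show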

 

First you need to verify that the 4 devices are visible to unRAID at all.
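If you prefer the command line to the Main page, something like this should confirm the kernel sees them (device names are just what I’d expect for four NVMe drives):

# List block devices; the four drives should show up as nvme0n1 .. nvme3n1
lsblk -o NAME,SIZE,MODEL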

 

5 hours ago, Zoroeyes said:

I know the server is running because I can access the web GUI from another machine

In the web GUI, go to Tools, download the diagnostics zip file, and attach the whole file to your next post.


Thanks for coming back to me, jonathanm. I have to admit I thought that might be the case, given that even Windows requires additional drivers to make use of the RAID despite the initial setup in the BIOS.

 

I can see all 4 NVMe devices in the unRAID GUI, so I think the BTRFS option is the way to go. My questions with that approach would be:

 

1) If I create a RAID0 array and use it to host a VM, will the striped array be presented to the VM as a single volume, or as 4 devices that then need RAIDing in Windows?

2) What is the performance of BTRFS RAID like in comparison to traditional hardware RAID solutions?

 

I’m all about speed here, not resilience, as I’ll only delete stuff being copied to the unRAID box once the copy is complete and moved, and I also plan to back up the VM to the main array.

 

I’ll try to get a diagnostics file when I’m next in front of the server, but I will say that if I disable UEFI in the BIOS, the GUI loads just fine. My exact problem is mentioned in another thread, but the solution there was simply to turn off UEFI. If I can achieve what I want in terms of RAID on unRAID and the planned Windows VM via the BTRFS approach, then this may actually be an option, but it does seem odd that the GUI doesn’t load with UEFI enabled.

Edited by Zoroeyes
3 hours ago, Zoroeyes said:

1) If I create a RAID0 array and use it to host a VM, will the striped array be presented to the VM as a single volume, or as 4 devices that then need RAIDing in Windows?

It won't be presented to the VM at all.

 

If you set up a VM the normal way, you will be given the option to create a vdisk image file presented to the VM as a hard drive, which by default will be created on the cache pool formed by the 4 devices in whichever BTRFS RAID level you choose. The default RAID level is RAID1; you can manually change that to other levels as desired.
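For reference, the usual way to switch an existing pool to a different profile is a balance with convert filters, something like this (a sketch, assuming the pool is mounted at unRAID’s default /mnt/cache):

# Convert data to RAID0 (striped) while keeping metadata mirrored as RAID1;
# the pool stays online while the balance runs
btrfs balance start -dconvert=raid0 -mconvert=raid1 /mnt/cache
# Verify the new profiles took effect
btrfs filesystem df /mnt/cache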

 

If you don't set up the devices as a cache pool, they will be available to manually pass through to the VM, leaving the VM in charge of how they are used: RAID, single volumes, or whatever.
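If you do go the passthrough route, the first step is usually finding the PCI addresses and vendor:device IDs of the NVMe controllers so they can be bound to vfio-pci; a sketch (the ID shown is hypothetical):

# Each NVMe drive has its own PCI controller, so each can be passed through individually
lspci -nn | grep -i 'non-volatile'
# The [vendor:device] IDs from the output can then be handed to vfio-pci,
# e.g. by adding vfio-pci.ids=1cc1:8201 to the append line in syslinux.cfg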

 

I recommend watching some of SpaceInvader One's YouTube videos for more ideas on how things can or should be done.


Thanks, jonathanm

 

What I was trying to say is: given that the cache pool will be RAID0 and will appear as a single volume with (hopefully) some performance benefit through striping, if I provide a portion of that volume to the VM, will the VM also be using (a portion of) all 4 drives and benefitting from the performance of the striping too?

 

So, I think, by creating the vdisk image file on the RAID0 cache pool, I’ll have access to potentially 4TB of space, and any interaction with the vdisk will benefit from the slightly faster speeds provided by the underlying RAID0.

 

Thanks

Edited by Zoroeyes

As requested, my diagnostics file.

 

Hopefully this will give some clues as to why I can't get into the GUI under UEFI on the server itself (I can obviously get to the GUI from another machine, hence the diagnostics file).

 

Finally, from some reading that I've done, it sounds like BTRFS RAID is quite slow (even RAID0); some examples were showing 4 x NVMe drives in a BTRFS RAID0 performing considerably slower than the single write speed of just one of the drives under Windows. This has me worried that using that approach for my cache/VM drives will waste the potential of my NVMe drives. Has anyone had any experience with BTRFS RAID0 on unRAID, either positive or negative? (Sorry, this should probably be the basis of another post; this one should stay focused on the GUI/UEFI issue.)
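One way I could settle this for my own hardware is to benchmark the pool directly once it's built, e.g. with fio if I can get it installed (a sketch, assuming the pool mounts at the default /mnt/cache):

# Sequential write test, 4GiB with direct I/O to bypass the page cache
fio --name=seqwrite --filename=/mnt/cache/fio.test --rw=write --bs=1M \
    --size=4G --direct=1 --ioengine=libaio --iodepth=16
# Remove the test file afterwards
rm /mnt/cache/fio.test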

 

unraidserver-diagnostics-20191118-2016.zip

6 hours ago, Zoroeyes said:

I can't get into the GUI under UEFI on the server itself

This seems to be a common problem. You could try enabling CSM in the BIOS but forcing UEFI-only boot. (But I think most people won't be satisfied with the local GUI anyway.)

 

As for AMD NVMe RAID, it seems AMD doesn't officially support it in Linux, although a third-party driver makes it possible. I've never tried it myself, though, or found people showing how well or badly it runs in Linux:

https://community.amd.com/thread/222449

 

 

6 hours ago, Zoroeyes said:

Has anyone had any experience with BTRFS RAID0 on unRAID

I've tried it (@johnnie.black has a good, detailed guide) with mechanical disks and got nice speeds, i.e. with 4 to 10 disks in RAID0 (in practice, 4 disks).

But AMD NVMe RAID, or BTRFS RAID with NVMe, will be a different story I think; it would be quite interesting if you get this working and show how fast it goes.

 


Hi

 

I believe I have all the legacy options enabled in the BIOS (UEFI, in this case) and it's not helping. I can go back to a non-UEFI setup if I absolutely have to, but I'd really prefer to solve the problem so that I can use the hardware as the manufacturer intended. Some people are successfully seeing the GUI under UEFI boot, so I'm sure it's something specific to my mobo, which I'm hoping the previously attached diagnostics file might help with. However, if no one on the forum can help then I may need to pass it to support so they can give it some consideration.

3 months later...

Hi,

Was support able to solve the issue? I'm facing the same problem: the system is up and working, but I'm not able to boot into GUI mode. The cursor just blinks at the top right.

I'm using a Fujitsu D3644-B mobo, which has no legacy or CSM mode. Any help is welcome.

Edit: I was also interested in passing through my iGPU (i3-9350K, Intel 9th-gen Coffee Lake), so I followed this guide. Inserting these lines into the /boot/config/go file solved the issue of booting into GUI mode. As modprobe i915 is Intel-specific it won't solve your issue, but maybe something similar exists for your setup:

modprobe i915          # load the Intel integrated graphics driver
chmod -R 777 /dev/dri  # open up permissions on the render devices

Probably only the first line is needed.
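For context, the stock go file is tiny; after the edit, mine looks roughly like this (the emhttp line is the unRAID default, and the two added lines come from the guide):

#!/bin/bash
# Start the Management Utility (stock unRAID line)
/usr/local/sbin/emhttp &
# Load the Intel integrated graphics driver (Intel-specific)
modprobe i915
# Open up permissions on the render devices
chmod -R 777 /dev/dri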

Best Regards

Edited by 14yannick