[Solved] Can't start array after adding 2 NVMe drives to the config



After adding two 1 TB Western Digital NVMe drives to my motherboard, every operation seems to freeze the OS.  I added these drives to pass through to a couple of Windows 10 VMs, but everything I do after adding them causes my rig to freeze.  Any assistance would be appreciated.  I can start it in maintenance mode, but any formatting of the drives or starting the array freezes the system and requires a hard reboot.  My BIOS and Unraid both recognize the drives, but I can't get past initializing the array without a full freeze requiring a reboot.  I can't even collect log files when it freezes.


No shell commands work and the GUI becomes unresponsive.  I haven't tried tailing the syslog yet, but I have removed one drive and got the same result.  If I remove both, I am back to normal and the array starts.  I should also mention that when I first installed them, it made me reassign the cache drives, but the array still failed to start.
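When I do get around to tailing the syslog, the plan (just a sketch, assuming SSH is enabled on the server; "tower" is a placeholder for its hostname or IP) is to watch it from a second machine so the last messages are still on screen when the box locks up:

ssh root@tower
tail -f /var/log/syslog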


So if I can't run diagnostics on the array without it freezing, what is my next step?  If I remove both drives, I am back to normal: the array starts and works like normal.  Is there some way I can have it run diagnostics on startup and save the file onto the USB drive?  Has anyone else had similar issues after adding an NVMe drive?  Anyone?
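One idea I may try in the meantime (treat this as a sketch; it may not capture anything if the whole box locks up instantly): boot into maintenance mode, open a console, and copy the syslog onto the flash drive right before and, if possible, right after hitting Start on the array, since anything written under /boot survives the hard reboot:

cp /var/log/syslog /boot/syslog-before-start.txt
# start the array from the webGUI, then, if the shell still responds:
cp /var/log/syslog /boot/syslog-after-start.txt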


Well, apparently I managed to get an NVMe drive that has issues with Linux. I saw this in an Amazon review:

"I've had issues trying to access this drive in Ubuntu 18.04. Freezes the system. Not sure if its a driver or bios or something else. Western Digital forums and Ubuntu forums show other people have had similar issues with this drive. Still works great in Windows 10 though."

 

So I guess I need to find out what needs to be added to use the drive properly.  Once I find what is needed, I hope LT will be willing to add it to the distro...


Here are some of the reports of issues between Linux distros and the WD Black NVMe:

https://community.wd.com/t/linux-support-for-wd-black-nvme-2018/225446/5

 

Other users of X399 boards claim I need to add this to the boot config:

nvme_core.default_ps_max_latency_us=5500

Apparently some other people have a BIOS setting that can be changed instead; I'm not seeing that option on my X399 Taichi.
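From what I gather in those threads (I haven't verified this myself), the parameter caps the exit latency the kernel will accept for NVMe Autonomous Power State Transitions (APST), which keeps the drive out of its deepest power-saving states; the WD Black apparently hangs when it's allowed to drop into those. The value currently in effect should be readable from sysfs, so it's easy to confirm whether a change took:

cat /sys/module/nvme_core/parameters/default_ps_max_latency_us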

 

Usually I would just add this to GRUB; what do I need to do on Unraid to get this enabled?

5 hours ago, saarg said:

You could try adding it in the syslinux.cfg file, on the append line of the boot entry you use.

On the Main page, click on the flash drive and scroll down to the syslinux.cfg section; you can edit it in the webGUI.

Thanks saarg, will give it a shot tonight.  I was going to try last night but ended up hosting a LAN party on it with Monster Hunter World as the game of choice. So much 4-player fun, especially since they finally patched it to enable 21:9!
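For anyone following along, the edit looks straightforward. Assuming the stock boot entry on the flash drive (this is a sketch based on the default layout, not something I've applied yet), the parameter just gets tacked onto the end of the existing append line:

label Unraid OS
  menu default
  kernel /bzimage
  append initrd=/bzroot nvme_core.default_ps_max_latency_us=5500

After a reboot, the sysfs value mentioned above should read 5500 if the change took effect.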


Looks like it's a BIOS update again that makes things right.  The 3.50 BIOS notes say it updates M.2 RAID enhancements, but it also added what was required for the NVMe drives to function properly.  Unraid, once again, is not the culprit.  Threadripper still has some issues that are being worked out, but at least they are fixing these issues quickly... still maintaining that this is probably the most value-conscious build I have ever pulled off.

