unRAID Server Release 6.0-beta5a-x86_64 Available




I am assuming this is still not safe to run on my main server?

 

Sent from my GT-I9505 using Tapatalk

I don't have any test servers; all of mine are doing the work of production servers.  I haven't had any problems with the unRAID side of it on 6.0-beta4, and I have yet to install 6.0-beta5a.  I have had a few problems with the setup of my WHS2011 VM, but seem to have solved all but one annoyance in Windows Device Manager and a phantom drive, possibly as a result of cfg file settings and the PVHVM(?) drivers I installed.
Link to comment

Hi Tom, LSI is still broken under Xen; none of the disks are recognized even though it looks like the LSI modules have loaded. If I boot into unRAID without Xen, then everything works as expected.

 

We are using Supermicro MBD-X10SL7-F motherboards in the next batch of AVS-10/4 servers, which include an on-board LSI 2308 that works great with or without Xen.  The issue you are seeing will probably be solved with a kernel update (because a kernel update implies driver updates).  I was hoping -beta5 would include this, but all the bad PR over 'heartbleed' has made it necessary for us to push that fix out now.

so where is the 5.x with the heartbleed fix?

 

Link to comment

so where is the 5.x with the heartbleed fix?

 

5.x has no OpenSSL installed by default, so there is no need to fix it. Your add-ons are most certainly vulnerable, though; you will have to talk to the authors. You can see what packages are installed in /boot/packages/...
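
If you want to check from the console, a quick look at the flash drive shows what package files your plugins have dropped in place. A minimal sketch (the grep is just one way to pick out anything OpenSSL-related; exact filenames will vary per install):

ls /boot/packages/
ls /boot/packages/ | grep -i ssl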

Nevertheless, I am hoping for a 5.0.6 soon  :-[

Link to comment

I upgraded earlier today and have noticed that the GUI doesn't display spun-down drives anymore. Looking at the log, the drives are spinning down, but in the GUI (either the unRAID GUI or the unMENU GUI) it appears all drives are spun up except parity.

 

Anyone else seeing this? I had no issues with any of the previous betas and have no plugins installed. Just the archVM for all my apps.

Nothing changed in this area, and indeed, it seems to work correctly on our test servers with both blinking and dimmed indicators.  I did notice that SSDs could be "spun down" - obviously we need to fix that one.

 

Strange. I did nothing but copy the 4 files and reboot. I will reboot again later and see what happens. I am just happy that, from the logs, the drives are actually spinning down at least.

 

When I woke up this morning all drives in the GUI showed blinking lights (even though I had not rebooted). Not sure what happened, but it seems okay now.

Link to comment

I have an M1015 passed through to a WHS2011 VM running on unRAID 6.0-beta4 Xen on a Supermicro X7SBE motherboard.  No problems with that arrangement.  Does that help any? I did have to make the M1015 NOT provide boot devices so WHS2011 would boot off the virtual drive I set up.

 

If it is passed through to WHS2011 then it's hard to say. What would happen if you did not pass it through? Can unRAID see any of the drives connected to it when it is not passed through?

Yes, unRAID saw them all, because when I first set up the drives it was without the pciback.hide entry in syslinux.cfg.  I even thought about just passing through the drives individually, but didn't like what I saw in Windows for device names with the three I temporarily passed through off the motherboard ports.  Plus I can go past 24 drives this way, since Xen is hiding the 8 off the M1015.  To pass through the controller, at first I used two other commands I saw on the forum when I set up my WHS2011 VM (sorry, I don't remember the syntax while I'm at work).  The problem was that I had to redo those commands every time I wanted to boot the VM.  By adding the pciback.hide option to syslinux.cfg I can just start the VM.  It is now ready to be auto-started; I just haven't done that yet.  I only seem to remember I need to do that when I'm here at work, not at home with the server. No, I don't have external access set up to my house network; I'm too scared to do that.
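
For anyone following along, the pciback.hide approach described above amounts to adding a hide parameter to the Dom0 kernel portion of the Xen boot entry in syslinux.cfg. A minimal sketch, assuming a hypothetical controller address of 01:00.0 (find the real bus address with lspci); depending on the kernel build, the parameter may be spelled pciback.hide or xen-pciback.hide:

label Xen/unRAID OS
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage xen-pciback.hide=(01:00.0) --- /bzroot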

 

Try passing through the device ID of the PCIe slot the card is sitting in instead of just the card. This is what I had to do for a SATA card in a previous build. I don't know why, but this worked while passing through the card didn't.

 

Sent from my Nexus 4 using Tapatalk

 

 

Link to comment


I see; well, that tells me what I need to know, provided that you booted Xen when you first set up the drives. It seems to be more of a problem with my 9201; hopefully a new kernel fixes it.

 

@Bonzi (and for others' awareness),

I'm not really sure if this will provide you with any more clues to your problem, as I haven't gone back to search for exactly what your problem was with LSI and this version of unRAID, but I thought I may as well provide my recent LSI experience for the record.

 

In summary, I recently added an LSI 9201-16i HBA card to my X10-SL7 motherboard and, quite surprisingly, did not encounter a single problem after transferring my existing hard drives. I was expecting the worst, but had zero issues installing this card in the X10-SL7 and booted successfully back into unRAID as if nothing had changed.

 

I am still on beta 4. I had already filled the X10-SL7's LSI ports with WD hard drives and was ready to shift them to the LSI 9201-16i card. All I had to do was make sure the mover script had completed its process before stopping the array and shutting down the server. I inserted the HBA card, disconnected the onboard SATA cables from the HDDs, and reattached the drives to the HBA card with new breakout cables. I powered up and went into the BIOS to check the card was being recognised, but didn't have to change any settings. Upon booting into unRAID/Xen, all of my hard drives had been allocated to their correct disk positions, including parity. I now don't have any hard drives connected to the motherboard's LSI controller.

 

I presently have two drives attached to the HBA card being precleared using Joe's 1.14 version of the script, as I had done with all of my other hard drives. I haven't experienced any problems with his 1.14 version either, except that if I reboot unRAID, the preclear status is lost for that drive. I don't think that's been a problem when eventually adding the drive to the array, because formatting through the webGUI was pretty straightforward. I have not yet tried his 1.15 version of the script, but will do so once the current preclears have finished.
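
For reference, the preclear script is typically run from a screen session at the console; a minimal sketch, assuming the script has been copied to the flash drive (/boot) and using /dev/sdX as a placeholder for the drive being cleared (it must not be assigned to the array):

screen -S preclear      # detachable session so the clear survives a dropped connection
cd /boot
./preclear_disk.sh /dev/sdX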

 

Anyway, like I said, I'm not sure if this will help you or confuse the matter more. Apologies in advance.

 

cheers,

 

gwl

Link to comment


Thanks, that is surprising. I wonder why it's not working for me. I'll continue to test and try to figure out what the problem is.

 

EDIT: If I do rmmod mpt2sas and then reload it using modprobe mpt2sas then all of my disks are recognized.
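
In other words, a workaround along these lines from the console (a sketch; it assumes nothing is actively using the controller when the module is unloaded):

rmmod mpt2sas        # unload the LSI SAS2 driver
modprobe mpt2sas     # reload it so the controller rescans
dmesg | tail -n 30   # confirm the disks are enumerated again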

Link to comment

One more question - I have searched all over and feel really dumb, but I cannot figure out how to boot into "Dom0" mode so Xen is working.  I run headless and don't even have a monitor anymore where I could see the boot options - is that what I am missing?  Is there any way to set it to boot with Xen support via the .cfg or go files?  I really have looked all over, but most Xen posts seem to assume you have already booted properly into Xen mode.

 

OK - after more digging I found you can edit syslinux.cfg from the unRAID GUI.  Default settings below:

 

default /syslinux/menu.c32
menu title Lime Technology
prompt 0
timeout 50
label unRAID OS
  menu default
  kernel /bzimage
  append initrd=/bzroot
label unRAID OS Safe Mode (no plugins)
  kernel /bzimage
  append initrd=/bzroot unraidsafemode
label Memtest86+
  kernel /memtest
label Xen/unRAID OS
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage --- /bzroot
label Xen/unRAID OS Safe Mode (no plugins)
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage --- /bzroot unraidsafemode

 

What do I edit to use Xen?

Link to comment

Edit /boot/syslinux/syslinux.cfg and move the "menu default" line to the section you want to boot by default, i.e., under the line "label Xen/unRAID OS"
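
In other words, after the edit (with "menu default" removed from under "label unRAID OS"), the Xen entry in the file posted above ends up looking like this:

label Xen/unRAID OS
  menu default
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage --- /bzroot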

 

Link to comment
Thanks, that is surprising. I wonder why it's not working for me. I'll continue to test and try to figure out what the problem is.

 

EDIT: If I do rmmod mpt2sas and then reload it using modprobe mpt2sas then all of my disks are recognized.

 

Ah, that answers the question I had in my mind - I wondered whether yours was one of the new cards, using the mpt3sas driver.

 

So I'll chip in to report that my LSI-based Supermicro card works fine with the mpt2sas driver in beta4 and beta5a.

 

 

I wonder what is the difference in your case?  Is it a simple timing/race hazard?

 

Link to comment
... the mpt2sas driver in beta4 and beta5a...

 

Do you (or anyone) know which version of the driver it's running?

 

I was looking at the downloads to flash my SAS2008 firmware to the latest, and it looks like the Linux driver was updated to v19 a little over a month ago.

Link to comment

 


I wonder what is the difference in your case?  Is it a simple timing/race hazard?

 

Yes, I think it must be a timing problem since it is working fine now.

Link to comment

You specify the path for the VM in the cfg file. As long as your SSD is mounted on unRAID, just point to that location in the cfg. That is what I did for my Ubuntu VM. It is on an SSD that is mounted outside of the array, but is not a cache drive.
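
A minimal sketch of what that might look like in a Xen domU cfg file, assuming a hypothetical image sitting on an SSD mounted at /mnt/ssd (the name and path here are illustrative, not taken from the post above):

name   = "ubuntu"
memory = 2048
vcpus  = 2
disk   = ['file:/mnt/ssd/domains/ubuntu/ubuntu.img,xvda,w']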

Link to comment

I'm unable to save network settings with this version.

 

I am trying to test some settings to get wget to work in PuTTY, so I'm trying to change the Gateway in the GUI.  I changed it from the IP of my router to my modem, and hit Apply.  The dialog ran across the bottom...

 

Stop AVAHI...Stop NFS...Stop SMB...

 

Then that message goes away, but the screen itself just stays grey.  It never comes back to having black/selectable dialog boxes, and just sits there.  I can actually select the tabs along the top, but cannot select any boxes in the GUI.

 

If I navigate away, then come back to this screen, the Gateway box is blank.  I've tried several times, and also restarted the server, but it remains unchangeable.

 

I manually edited the network.cfg file, then rebooted the server, and the Gateway box is still blank :(

 

I'm running the New Permissions script now, because I'm out of ideas on what else might work.

 

***EDIT -  That didn't help.  I've been doing more testing, and the thread discussing that is here...

 

http://lime-technology.com/forum/index.php?topic=33131.msg305914#msg305914

 

I still can't get a gateway of 192.168.1.254 to stick, no matter what I try, but I have discovered why the settings page never comes back: it's the "Enable STP:" setting.  When it is set to "yes", the network config GUI is jacked up and doesn't become available again.  I changed it to "no" and now it comes back to a usable page after hitting Apply.  However, it will NOT save the gateway address above, and I can't imagine why.
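
For what it's worth, the manual edit amounts to setting the gateway key in network.cfg on the flash drive; a sketch, assuming static addressing and that these key names match what is already in /boot/config/network.cfg on this beta (the IPADDR value is a placeholder - check the existing file before editing):

USE_DHCP="no"
IPADDR="192.168.1.100"
NETMASK="255.255.255.0"
GATEWAY="192.168.1.254"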

Link to comment

Have a weird one here.  An ASRock motherboard (890FX Deluxe 4) that has IOMMU, with the AMD 1090T processor, that I previously ran ESXi on.  I am making the switch to unRAID with native Xen.

 

If I boot unRAID, everything works fine (syslog attached as "unRaid-Boot.txt").  If I boot Xen, I get a whole bunch of failures related to the hard drives, and when it finally makes it to the unRAID webGUI most of my drives are missing (syslog attached as "Xen-Boot.txt").

 

Any ideas?  Thanks in advance!

unRaid-Boot.txt

Xen-Boot.txt

Link to comment

Strange - on 6.0-beta5a, disks outside of the array (and NOT being used/mounted/etc.) no longer spin down.  If I manually spin them down, they kick back up after just a few minutes.

 

I've never seen drives outside of the array spin down, though I haven't experienced the spinup issue.
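
For anyone who wants to check this from the console, spinning a drive down by hand and then polling its power state goes something like this (a sketch; /dev/sdX is a placeholder for the non-array drive):

hdparm -y /dev/sdX   # put the drive into standby (spin it down)
hdparm -C /dev/sdX   # report the current power state (active/idle or standby)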

Link to comment
