unRAID Server Release 6.0-beta5a-x86_64 Available



When there are multiple enabled NICs, there are race conditions in Linux affecting how they get enumerated.  The network setup scripts always use 'eth0'.  There are a few ways to solve this, but the easiest is to just turn on bonding.  You don't have to have cables connected to all the ports; in fact you only need one cable, and it won't matter which port you plug it into.

 

Makes sense, thanks.

 

Would it be 'better' to connect both Ethernet cables, and is there a preferred setting for the bonding mode?
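For context, a minimal sketch of what turning bonding on looks like in the network config on the flash drive, assuming the v6 /boot/config/network.cfg key names (BONDING and BONDING_MODE are assumptions here; verify against your own file). Mode 1 (active-backup) is the safe default since it works with any switch, while mode 4 (802.3ad/LACP) aggregates links but requires a switch configured for it:

 

# /boot/config/network.cfg - sketch only; key names assumed, check your file
USE_DHCP="yes"
BONDING="yes"        # bond the NICs into bond0, so enumeration order stops mattering
BONDING_MODE="1"     # 1 = active-backup: any switch, one link active at a time
#BONDING_MODE="4"    # 4 = 802.3ad (LACP): aggregates links, switch must support it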

Link to comment

I have an ATI Radeon X600 SE video card installed in my machine, but it seems the drivers for this card aren't installed in unRAID.

 

I found the drivers page here...

 

http://support.amd.com/en-us/download/desktop/legacy?product=Legacy1&os=Linux%20x86_64

 

But I'm not sure how I should try to install them (or, honestly, if I need to).

 

This machine has an integrated GPU also, which I have hooked up via HDMI to my monitor, and it does display the unRAID bootup stuff, so I know unRAID recognizes that, but lspci does not list the Radeon card, nor does

 

lspci | grep VGA

00:02.0 VGA compatible controller: Intel Corporation Haswell Integrated Graphics Controller (rev 06)

 

How can I get unRAID to recognize the Radeon card, so I can pass it thru to my Windows VM (until we eventually get iGPU passthru working)?

 

Thanks.

Link to comment

How can I get unRAID to recognize the Radeon card, so I can pass it thru to my Windows VM (until we eventually get iGPU passthru working)?

 

You don't have to ;)  In fact, you should hide it from unRAID ;) if you want to map it to a VM.

 

I don't understand your answer, but please re-read my full post.  The card isn't even showing up in unRAID, so I can't hide it from unRAID, as it's not available in the first place.

Link to comment

I have an ATI Radeon X600 SE video card installed in my machine, but it seems the drivers for this card aren't installed in unRAID.

 

I found the drivers page here...

 

http://support.amd.com/en-us/download/desktop/legacy?product=Legacy1&os=Linux%20x86_64

 

But I'm not sure how I should try to install them (or, honestly, if I need to).

 

This machine has an integrated GPU also, which I have hooked up via HDMI to my monitor, and it does display the unRAID bootup stuff, so I know unRAID recognizes that, but lspci does not list the Radeon card, nor does

 

lspci | grep VGA

00:02.0 VGA compatible controller: Intel Corporation Haswell Integrated Graphics Controller (rev 06)

 

How can I get unRAID to recognize the Radeon card, so I can pass it thru to my Windows VM (until we eventually get iGPU passthru working)?

 

Thanks.

 

Justin.  Can you post your full lspci printout here?

 

Also please try moving the card to a different PCI slot if you can.

 

I can confirm that you never need to install drivers in dom0 (unRAID) for a device to show up in lspci.

 


 

Link to comment

My father-in-law is running 5.0.5 and it does the same thing.  He has an ASUS motherboard with 2 onboard NICs, and most times when he reboots the server, it switches to the other one, and he has to switch the cable over to the other NIC to get the network back...

 

Couldn't you just disable one of the NICs at BIOS level ?

Link to comment

Justin.  Can you post your full lspci printout here?

 

Also please try moving the card to a different PCI slot if you can.

 

I can confirm that you never need to install drivers in dom0 (unRAID) for a device to show up in lspci.

 

Sure, here you go...

 

root@media:~# lspci
00:00.0 Host bridge: Intel Corporation Haswell DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Haswell PCI Express x16 Controller (rev 06)
00:01.1 PCI bridge: Intel Corporation Haswell PCI Express x8 Controller (rev 06)
00:01.2 PCI bridge: Intel Corporation Haswell PCI Express x4 Controller (rev 06)
00:02.0 VGA compatible controller: Intel Corporation Haswell Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Haswell HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation Lynx Point USB xHCI Host Controller (rev 04)
00:16.0 Communication controller: Intel Corporation Lynx Point MEI Controller #1 (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04)
00:1a.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #2 (rev 04)
00:1b.0 Audio device: Intel Corporation Lynx Point High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #1 (rev d4)
00:1c.1 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #2 (rev d4)
00:1c.2 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #3 (rev d4)
00:1c.3 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #4 (rev d4)
00:1c.4 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #5 (rev d4)
00:1c.5 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #6 (rev d4)
00:1d.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Lynx Point LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation Lynx Point 6-port SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation Lynx Point SMBus Controller (rev 04)
02:00.0 Ethernet controller: Intel Corporation 82574L Gigabit Network Connection
03:00.0 SCSI storage controller: Marvell Technology Group Ltd. 88SX7042 PCI-e 4-port SATA-II (rev 02)
06:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
07:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
08:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
09:00.0 Multimedia controller: Philips Semiconductors SAA7160 (rev 02)

 

It's an old card, so I just figured unRAID didn't recognize it.

 

Family is watching videos from the server right now, so I'll try moving it later and report back.

Link to comment

I'm not 100% sure that this is related to this beta, but my mover script stopped deleting files from the cache drive after doing the rsync. The root cause seems to be an error copying over the extended attributes:

 

rsync: rsync_xal_set: lsetxattr("/mnt/user0/Media","user.org.netatalk.supports-eas.FJFhLJ") failed: Operation not supported (95)

 

the above is on the screen when I run mover manually. This is also in the syslog:

 

May 20 03:40:02 storage shfs/user0: shfs_setxattr: setxattr: user.org.netatalk.supports-eas.LR79PZ /mnt/disk1/Media (95) Operation not supported

 

To verify, I edited the mover script to remove the -X option, and it completed normally.
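For illustration, a hedged sketch of the failing pattern and the workaround; the actual invocation inside unRAID's mover script may differ from this:

 

# Sketch only; the real flags in the mover script may differ.
# With -X, rsync replays extended attributes onto the user share, and
# shfs rejects the setxattr call with errno 95 (Operation not supported):
rsync -avX /mnt/cache/Media/ /mnt/user0/Media/

# Without -X the xattr pass is skipped, so the copy (and the
# delete-from-cache step that follows) completes normally:
rsync -av /mnt/cache/Media/ /mnt/user0/Media/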

Link to comment

My father-in-law is running 5.0.5 and it does the same thing.  He has an ASUS motherboard with 2 onboard NICs, and most times when he reboots the server, it switches to the other one, and he has to switch the cable over to the other NIC to get the network back...

 

Couldn't you just disable one of the NICs at BIOS level ?

Yeah.  The main reason is that he was going to set up bonding once he put in a better switch, and then he wouldn't have to reboot again to set it up.  He only reboots on rare occasions.  You know how those lists of to-dos always pile up... :)

Link to comment

I have an ATI Radeon X600 SE video card installed in my machine, but it seems the drivers for this card aren't installed in unRAID.

 

Justin.  Can you post your full lspci printout here?

 

Also please try moving the card to a different PCI slot if you can.

 

I can confirm that you never need to install drivers in dom0 (unRAID) for a device to show up in lspci.

 

Okay, I finally got a chance to move the card to a different slot (replacing my NIC).  I also found another video card (an HD5550) in another machine and added it to the unRAID box, installing it in the slot that used to house the X600 card.

 

I rebooted unRAID, and it failed to start (problems with the pciback addresses, due to the missing NIC?).

 

I went back to the stock syslinux.cfg file and it booted fine.
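For reference, a hedged sketch of what the pciback hiding looks like in syslinux.cfg on the Xen betas (label and paths may differ on your flash drive; the bus addresses are just examples, taken from the lspci listing below):

 

label unRAID OS (Xen)
  kernel /syslinux/mboot.c32
  append /xen --- /bzimage xen-pciback.hide=(01:00.0)(01:00.1) --- /bzroot

 

If a hidden address refers to a slot that no longer holds that device after cards are moved around, boot can fail until the stale entry is removed, which would fit the behaviour described above.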

 

Here are the new lspci results (it found the HD5550 card, but still does not find the X600)

 

root@media:~# lspci
00:00.0 Host bridge: Intel Corporation Haswell DRAM Controller (rev 06)
00:01.0 PCI bridge: Intel Corporation Haswell PCI Express x16 Controller (rev 06)
00:01.1 PCI bridge: Intel Corporation Haswell PCI Express x8 Controller (rev 06)
00:01.2 PCI bridge: Intel Corporation Haswell PCI Express x4 Controller (rev 06)
00:02.0 Display controller: Intel Corporation Haswell Integrated Graphics Controller (rev 06)
00:03.0 Audio device: Intel Corporation Haswell HD Audio Controller (rev 06)
00:14.0 USB controller: Intel Corporation Lynx Point USB xHCI Host Controller (rev 04)
00:19.0 Ethernet controller: Intel Corporation Ethernet Connection I217-V (rev 04)
00:1a.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #2 (rev 04)
00:1b.0 Audio device: Intel Corporation Lynx Point High Definition Audio Controller (rev 04)
00:1c.0 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #1 (rev d4)
00:1c.1 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #2 (rev d4)
00:1c.2 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #3 (rev d4)
00:1c.3 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #4 (rev d4)
00:1c.4 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #5 (rev d4)
00:1c.5 PCI bridge: Intel Corporation Lynx Point PCI Express Root Port #6 (rev d4)
00:1d.0 USB controller: Intel Corporation Lynx Point USB Enhanced Host Controller #1 (rev 04)
00:1f.0 ISA bridge: Intel Corporation Lynx Point LPC Controller (rev 04)
00:1f.2 SATA controller: Intel Corporation Lynx Point 6-port SATA Controller 1 [AHCI mode] (rev 04)
00:1f.3 SMBus: Intel Corporation Lynx Point SMBus Controller (rev 04)
01:00.0 VGA compatible controller: AMD/ATI [Advanced Micro Devices, Inc.] Redwood LE [Radeon HD 5550]
01:00.1 Audio device: AMD/ATI [Advanced Micro Devices, Inc.] Redwood HDMI Audio [Radeon HD 5000 Series]
03:00.0 SCSI storage controller: Marvell Technology Group Ltd. 88SX7042 PCI-e 4-port SATA-II (rev 02)
06:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
07:00.0 Ethernet controller: Intel Corporation I211 Gigabit Network Connection (rev 03)
08:00.0 SATA controller: ASMedia Technology Inc. ASM1062 Serial ATA Controller (rev 01)
09:00.0 Multimedia controller: Philips Semiconductors SAA7160 (rev 02)

 

In looking into installing drivers for the X600, it seems that support for this card was removed in newer kernels, so maybe it does need to be added manually after all.  I've tried, but ran into problems (see my other thread).

 

I'm pretty sure this did work in unRAID 5.  I didn't have an onboard GPU with the hardware I used for v5 (I upgraded the motherboard and processor with v6), so that card was the only video card installed while I was running v5, and I know I was able to see the console with v5.  In fact, I only bought that card because I could not get v5 to run without having some kind of video card installed.

 

Please let me know what more I can do to help diagnose/resolve this issue.

 

Thanks.

 

Link to comment

I'm not sure whether this is a problem with the current beta or not, but I've never seen it before:

 

I looked at my server this morning and noticed that the drive activity LED was lit for my disk5.  I looked at the web interface and all appeared normal (disk 5 was spun up).  I looked at the last entries in the log and all appeared normal.  I then shut down both my VMs from the web interface.  Next, I stopped the array from the web interface - it spent a long time unmounting user shares but, eventually, this completed and the drive activity light went off.

 

On studying the log, I noticed that mover had started running at 3am and didn't complete until after I had issued the array stop command at 9:27.  There was only one file to copy - 10GB.

 

Why did the mover process hang for so long?

If it had been a hardware issue, could stopping the array have caused mover to complete?

Could some other process have caused mover to hang with the destination drive held active?

 

I attach the syslog ...

syslog-20140523-095538.txt.zip

Link to comment

I'm not sure whether this is a problem with the current beta or not, but I've never seen it before:

 

I looked at my server this morning and noticed that the drive activity LED was lit for my disk5.  I looked at the web interface and all appeared normal (disk 5 was spun up).  I looked at the last entries in the log and all appeared normal.  I then shut down both my VMs from the web interface.  Next, I stopped the array from the web interface - it spent a long time unmounting user shares but, eventually, this completed and the drive activity light went off.

 

On studying the log, I noticed that mover had started running at 3am and didn't complete until after I had issued the array stop command at 9:27.  There was only one file to copy - 10GB.

 

Why did the mover process hang for so long?

If it had been a hardware issue, could stopping the array have caused mover to complete?

Could some other process have caused mover to hang with the destination drive held active?

 

I attach the syslog ...

 

Did you have a VM running while the mover was trying to move the .img at 3:40 AM?  Also, what kind of VMs?  Windows or Linux?

Link to comment

I'm not sure whether this is a problem with the current beta or not, but I've never seen it before:

 

I looked at my server this morning and noticed that the drive activity LED was lit for my disk5.  I looked at the web interface and all appeared normal (disk 5 was spun up).  I looked at the last entries in the log and all appeared normal.  I then shut down both my VMs from the web interface.  Next, I stopped the array from the web interface - it spent a long time unmounting user shares but, eventually, this completed and the drive activity light went off.

 

On studying the log, I noticed that mover had started running at 3am and didn't complete until after I had issued the array stop command at 9:27.  There was only one file to copy - 10GB.

 

Why did the mover process hang for so long?

If it had been a hardware issue, could stopping the array have caused mover to complete?

Could some other process have caused mover to hang with the destination drive held active?

 

I attach the syslog ...

 

Did you have a VM running while the mover was trying to move the .img at 3:40 AM?  Also, what kind of VMs?  Windows or Linux?

 

I have two ArchLinux VMs running.  Their config and image files are in a cache-only share.

 

Last night was a similar situation - one .mkv file to move from cache to Movies, and it was all done and dusted in less than seven minutes.

Link to comment
  • 2 weeks later...

I got a nasty crash today.  I was running stock 5a and ArchVM 4.0, with some disk-intensive addons running on the cache drive only (the array disks are configured but not being accessed).  The cache drive was accessed via an NFS share (/net/Tower/mnt/user...).  It had been running stable for several days.

 

This morning I woke up to an unresponsive server.  Thanks to IPMI, I was able to capture the dying console and reboot the machine from 500 miles away! (Very cool!)  Maybe not many hints here, but I thought I would post what I had.  By the way, even from the console I was not able to get to a command prompt.  I executed a reset and power-on via IPMI, and it came up fine and has been running stable all day under high I/O usage, though still getting that funky error message on the VM's console (xennet: skb rides the rocket).

 

Although I can understand that the VM's OS might crash running the addons, a crashing VM should not have brought down unRAID and the entire box.

Crash.JPG (attached)

Link to comment

I got a nasty crash today. ...

 

This has already been experienced and reported by several people.  It is believed that the new kernel, to be included with b6, will fix this.

 

In the meantime, several of us have pinned one or more CPUs to dom0, and have not experienced the problem since.

Link to comment

I got a nasty crash today. ...

 

This has already been experienced and reported by several people.  It is believed that the new kernel, to be included with b6, will fix this.

 

In the meantime, several of us have pinned one or more CPUs to dom0, and have not experienced the problem since.

 

Hi bjp999, just to echo Peter's comments: a number of people have been seeing the exact same crash, Peter and myself included.  The workaround is pinning a number of cores to dom0 and excluding those cores from your domUs.  This has completely stabilised my system; I was seeing crashes at around the 4-5 day mark, and now with the pinning in place I have an uptime exceeding 20 days, so I'm fairly confident this gets around this particular issue.

 

My theory (which could be completely wrong!) as to why it's happening is CPU starvation of dom0 due to heavy CPU usage in the domUs, causing the Xen processes to not have enough cycles to service requests and making the whole thing collapse, bringing down the domUs and dom0.  Pinning alleviates this by guaranteeing dom0 enough CPU time to process Xen commands.  Plausible?

 

There is a post linked below which seems to point at a later kernel potentially fixing/debugging the issue.  I think it's definitely worth looking at this again once 6.0-beta6 is released, as both the kernel and Xen will be newer versions.  Link to the xen-devel post:

 

http://lists.xen.org/archives/html/xen-devel/2013-10/msg00585.html

Link to comment

The only time I have had that exact error is when I pinned CPUs to dom0 in syslinux.cfg.  I pinned 2 CPUs and allocated 4-7 in the VM.

 

Thanks, and sorry I was not monitoring the 6.0 forums more closely.  My unRAID (dom0) is pretty quiet now, as I have just a few low-capacity disks in the array.  But my cache is a large disk that I eventually plan to add to the production array.

 

When my issue was happening, the disk had been driven hard for days.

 

Can you point me to the configuration you did to avoid this issue?

Link to comment

The only time I have had that exact error is when I pinned CPUs to dom0 in syslinux.cfg.  I pinned 2 CPUs and allocated 4-7 in the VM.

 

Thanks, and sorry I was not monitoring the 6.0 forums more closely.  My unRAID (dom0) is pretty quiet now, as I have just a few low-capacity disks in the array.  But my cache is a large disk that I eventually plan to add to the production array.

 

When my issue was happening, the disk had been driven hard for days.

 

Can you point me to the configuration you did to avoid this issue?

 

Take a look here for how to pin cores for dom0 and domU: http://lime-technology.com/forum/index.php?topic=33459.msg309486#msg309486
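In outline, the pinning looks something like the sketch below; dom0_max_vcpus and dom0_vcpus_pin are standard Xen boot options and vcpus/cpus are standard domain-config keys, but treat the linked post as authoritative for the exact unRAID syntax:

 

# syslinux.cfg - reserve two cores for dom0 (sketch; labels/paths may differ):
append /xen dom0_max_vcpus=2 dom0_vcpus_pin --- /bzimage --- /bzroot

# domU .cfg - keep the VM off cores 0-1 so dom0 is never starved:
vcpus = 4
cpus = "2-7"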

Link to comment

The only time I have had that exact error is when I pinned CPUs to dom0 in syslinux.cfg.  I pinned 2 CPUs and allocated 4-7 in the VM.

 

Thanks, and sorry I was not monitoring the 6.0 forums more closely.  My unRAID (dom0) is pretty quiet now, as I have just a few low-capacity disks in the array.  But my cache is a large disk that I eventually plan to add to the production array.

 

When my issue was happening, the disk had been driven hard for days.

 

Can you point me to the configuration you did to avoid this issue?

 

Take a look here for how to pin cores for dom0 and domU: http://lime-technology.com/forum/index.php?topic=33459.msg309486#msg309486

I was just trying to pin cores for performance reasons, but pinning cores crashes my server, while it helps others who crash with the defaults.  I can run stable with the default syslinux.cfg.  Right now I just have 4GB allocated to dom0.

Link to comment

I was just trying to pin cores for performance reasons, but pinning cores crashes my server, while it helps others who crash with the defaults.  I can run stable with the default syslinux.cfg.  Right now I just have 4GB allocated to dom0.

 

Running pinned caused you issues because you hadn't excluded the pinned cores from your domUs.  This in turn makes the CPU starvation issue more prevalent, as you're then restricting dom0 to only 1-2 cores while still using those same cores in your VMs.

Link to comment
