unRAID Server Release 6.2.0-beta21 Available



If you're referring to running unRAID as a VM guest, there is currently a passthrough bug: any hardware passed through to the unRAID VM will no longer be seen in 6.2. You might be okay if you RDM your drives, but my servers have their controllers passed through, so I can't confirm.

 

I know we're in the minority and unsupported by running unRAID inside ESXi, but for me this would be a deal breaker.  Hopefully it's something that can be fixed before 6.2 final.  I'd really hate to lose unRAID because I need to run it inside ESXi.

If you tell Limetech what needs to be changed to make it work, and that change doesn't interfere with running unRAID on bare metal, they have been very helpful in the past in making requested changes. If you don't know or can't find out how to fix it, then the chances are slim.
Link to comment

If you're referring to running unRAID as a VM guest, there is currently a passthrough bug: any hardware passed through to the unRAID VM will no longer be seen in 6.2. You might be okay if you RDM your drives, but my servers have their controllers passed through, so I can't confirm.

 

I know we're in the minority and unsupported by running unRAID inside ESXi, but for me this would be a deal breaker.  Hopefully it's something that can be fixed before 6.2 final.  I'd really hate to lose unRAID because I need to run it inside ESXi.

 

Indeed, there is no chance I can upgrade to 6.2 if passthrough isn't working.  I'm already in the process of moving my Dockers off unRAID to a Linux VM, though, so I guess it would be easier to move to SnapRAID if I'm only using unRAID for storage.  I'd prefer not to, as I'm comfortable with unRAID.

Link to comment

I know we're in the minority and unsupported by running unRAID inside ESXi, but for me this would be a deal breaker.  Hopefully it's something that can be fixed before 6.2 final.  I'd really hate to lose unRAID because I need to run it inside ESXi.

If you tell Limetech what needs to be changed to make it work, and that change doesn't interfere with running unRAID on bare metal, they have been very helpful in the past in making requested changes. If you don't know or can't find out how to fix it, then the chances are slim.

 

For testing, I compiled a 4.4.6 kernel on Slackware 14.1 x64 using the standard kernel as a base.  I tried passing through my SAS card and the kernel froze on boot as soon as it finished initialising.  It looks like it might be a kernel problem.  Not sure where to go from here; happy to test/try out anything.  I'll have a go at trying different kernel options, but it's a bit of a shot in the dark.
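For anyone wanting to poke at the same thing, kernel options can be appended to the boot line in /boot/syslinux/syslinux.cfg. This is only a sketch of the mechanics; the iommu=pt parameter here is just an example of something to try, not a known fix for this freeze:

  label unRAID OS passthrough test
    kernel /bzimage
    append initrd=/bzroot iommu=pt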

Link to comment

*** SOLVED ***

Had to restart all connecting Windows machines after the downgrade from beta to stable. The error code said "Missing protocol"; after the reboot everything was fine.

 

 

I just went back from Beta 21 to 6.1.9 stable per these instructions:

 

Download the 6.1.9 release and extract the bzroot and bzimage files.  Copy these over to your flash drive and delete the bzroot-gui file that is present on your flash drive.
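Roughly, with the flash drive mounted at /boot (a sketch only, assuming the 6.1.9 files have been extracted into the current directory):

  # copy the 6.1.9 kernel and root filesystem onto the flash drive
  cp bzimage bzroot /boot/
  # remove the 6.2 GUI image that 6.1.9 doesn't use
  rm /boot/bzroot-gui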

 

After a reboot my servers came back online and I can SSH in, but I can't connect via SMB. None of my shares works. I always get "Permission denied" when I access them via e.g. "\\Tower\disk1\..." or "\\Tower\sharename".

 

What do I need to do in addition?

 

Thanks in advance.

 

Link to comment

I've been playing around with unRAID beta 21 for the last week or so and have had some issues with the following:

 

Whenever I go to edit a VM, the "Primary vDisk Location" always resets to none.  If I make another adjustment and hit save, my original vDisk location is cleared from the configuration and I have to go back and set the vDisk location again. 

 

Other than that, things have been working pretty smoothly, with the exception of not being able to get my GTX 760 working in a VM (the code 43 in Windows issue).  I thought I read that a fix for that issue was supposed to be included in 6.2 (not sure if it's in the beta at this time, though).
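For reference, the workaround usually discussed for the code 43 problem is hiding the KVM signature from the guest so the NVIDIA driver doesn't detect the VM. A minimal sketch of the libvirt XML involved (added via virsh edit on the VM; whether the 6.2 beta GUI sets this automatically I can't confirm):

  <features>
    <!-- hide the KVM signature; the GeForce driver reports code 43 when it sees a hypervisor -->
    <kvm>
      <hidden state='on'/>
    </kvm>
  </features>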

 

As an FYI, I moved to the beta from unRAID 6.1.8.

Link to comment

I don't think this is a "bug", but my CPU speeds seem to be different. At idle / low usage they used to sit at 800 MHz and stay that way; now they're constantly jumping up and down to uneven numbers (possibly it's showing exact numbers now instead of rounding), but still jumping from the lowest speed to a high speed constantly. CPU utilisation is 1-2% when all this speed jumping happens.

 

The only other change I have made is a cache pool with NVMe drives; not sure if that has something to do with it.

 

Edit: I'm not so sure this is even an issue. It seems after a few days my CPU usage has settled down.
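For anyone still seeing it, a quick way to check what's driving the clocks (standard sysfs/cpufreq paths, nothing unRAID-specific):

  # show which frequency governor is active (e.g. ondemand vs performance)
  cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor
  # watch the live per-core clocks update once a second
  watch -n1 "grep 'cpu MHz' /proc/cpuinfo"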

Link to comment

I realize that UnRAID is not officially supported on VMware but I was wondering if there are any users who have successfully upgraded their virtualized 6.1.9 setups to 6.2.0-beta21 and what your experience has been.

 

Sad that this seems to be the current point of view -- I specifically chose unRAID a couple of years ago because it was well-behaved as a VMware ESXi guest.  I do hope that the apparent kernel issue (based on the conjecture in the other linked thread) can be resolved before 6.2 is released.  On the other hand, 6.1.9 is working well for me on VMware and will likely cover my simple needs for at least another year or so, in case I do need to look for an alternative.

Link to comment

So I posted a while back about my VM being shut off because unRAID said it was out of memory. Today this happened again, so I dropped my main VM's memory allocation from 14GB down to 12GB.

 

I have the Dynamix stats plugin installed, and after this 2GB change the stats showed almost an extra 4GB of memory free to the system! This seemed very odd, so I decided to SSH into my server, run htop and look at my VMs.

 

VM 1 - OVMF - allocated 12GB - htop VIRT reports 14.7G - htop RES reports 14.5G

VM 2 - SeaBios - allocated 8GB - htop VIRT reports 9707M - htop RES reports 9452M

 

Does anyone know why so much extra memory is being allocated? If I set VM 1 to 14GB, its allocated memory jumps up to almost 18GB! No wonder I am running out of memory if two VMs are using 28GB of my 32GB when only 22GB is allocated.

I have no idea if this is due to the new VM changes in the betas, but it is unusual, and I have only started to see these "out of memory" VM shutdowns recently in beta 21.

 

I know it is a beta, so this is not a complaint - just something interesting to look into.
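If it helps anyone dig into this, here's roughly what I'd compare next (a sketch; <vmname> is whatever the VM is called in the unRAID GUI):

  # libvirt's view of the guest's balloon size and actual usage
  virsh dommemstat <vmname>
  # resident and virtual size of each qemu process on the host, in KB
  ps -o rss,vsz,cmd -C qemu-system-x86_64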

Link to comment

I've been using it for a week, but I have an odd problem:

 

All of my Dockers say they have an update available, but when I go to update them they do not connect and ultimately fail. The Dockers themselves have full network connectivity, and when I SSH to the unRAID host I can ping github etc. just fine. Any thoughts? I'm using mostly linuxserver.io Docker images.

 

EDIT: It appears that the Docker engine does not like jumbo frames. I have 2 NICs bonded, and the NIC interfaces and the bond0 interface all have MTU 9000 set. This appears to be the bug.
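To confirm the MTU angle on another box, something like this should do it (a sketch; interface names assumed to match yours):

  # check the current MTU on the bond
  ip link show bond0 | grep mtu
  # temporarily drop back to the standard 1500 and retry a Docker update
  ip link set bond0 mtu 1500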


Link to comment

I set up a backup server with the beta. After adding three drives to preclear, I noticed that SMART was disabled on all of them. Rebooting didn't take care of that either; I had to turn SMART on for each drive through the command line.

 

Isn't unRAID supposed to turn SMART on for each drive? Shouldn't it? I never had that issue before.

Link to comment

I set up a backup server with the beta. After adding three drives to preclear, I noticed that SMART was disabled on all of them. Rebooting didn't take care of that either; I had to turn SMART on for each drive through the command line.

 

Isn't unRAID supposed to turn SMART on for each drive? Shouldn't it? I never had that issue before.

This happened to me once or twice, but adding the disk to the array turned SMART on.

Link to comment

I set up a backup server with the beta. After adding three drives to preclear, I noticed that SMART was disabled on all of them. Rebooting didn't take care of that either; I had to turn SMART on for each drive through the command line.

 

Isn't unRAID supposed to turn SMART on for each drive? Shouldn't it? I never had that issue before.

I assume you are using the Preclear beta?  Try posting this in the Preclear beta thread.  I'm guessing it isn't sending the 'enable SMART' command at start.  Some brand new drives don't have it turned on.

Link to comment

I set up a backup server with the beta. After adding three drives to preclear, I noticed that SMART was disabled on all of them. Rebooting didn't take care of that either; I had to turn SMART on for each drive through the command line.

 

Isn't unRAID supposed to turn SMART on for each drive? Shouldn't it? I never had that issue before.

I assume you are using the Preclear beta?  Try posting this in the Preclear beta thread.  I'm guessing it isn't sending the 'enable SMART' command at start.  Some brand new drives don't have it turned on.

Yes, I'm using the beta. I was waiting for confirmation here before I posted over there.

 

Two of the drives are new (new to unRAID; they were previously in Windows computers). The third one was already precleared before (with SMART turned on). Somehow, when I disconnected and reconnected that drive to unRAID, SMART was turned off again.

 

By the way, is SMART turned off and on in the drive or in the OS?

 

Link to comment

After upgrading to this beta from 6.1 stable (because of Windows 10 performance, or lack thereof), I can't seem to get any video signal anymore.

 

Not from UnRAID's GPU, not from any VM's assigned GPU.

 

The VM log seems rather useless to me:

warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 1]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 2]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 3]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 4]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 5]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 6]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 7]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 8]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 9]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 12]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 13]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 14]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 15]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 16]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 17]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 0]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 1]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 2]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 3]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 4]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 5]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 6]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 7]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 8]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 9]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 12]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 13]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 14]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 15]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 16]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 17]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 23]
warning: host doesn't support requested feature: CPUID.80000001H:EDX [bit 24]
libusb: error [op_set_configuration] failed, error -1 errno 28
2016-05-08T21:32:23.746163Z qemu-system-x86_64: libusb_set_configuration: -99 [OTHER]

 

Here is the UnRAID log:

May 8 23:32:15 Unimatrix_Zero kernel: vgaarb: device changed decodes: PCI:0000:07:00.0,olddecodes=io+mem,decodes=io+mem:owns=none
May 8 23:32:15 Unimatrix_Zero kernel: device vnet0 entered promiscuous mode
May 8 23:32:15 Unimatrix_Zero kernel: docker0: port 5(vnet0) entered forwarding state
May 8 23:32:15 Unimatrix_Zero kernel: docker0: port 5(vnet0) entered forwarding state
May 8 23:32:16 Unimatrix_Zero kernel: vfio_ecap_init: 0000:07:00.0 hiding ecap 0x19@0x900
May 8 23:32:18 Unimatrix_Zero acpid: input device has been disconnected, fd 6
May 8 23:32:20 Unimatrix_Zero kernel: usb 10-2.4.1: reset low-speed USB device number 7 using xhci_hcd
May 8 23:32:20 Unimatrix_Zero kernel: usb 8-1.4: USB disconnect, device number 4
May 8 23:32:20 Unimatrix_Zero kernel: usb 10-2.4.1: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
May 8 23:32:21 Unimatrix_Zero kernel: usb 8-1.4: new full-speed USB device number 5 using xhci_hcd
May 8 23:32:21 Unimatrix_Zero kernel: usb 10-2.3: reset full-speed USB device number 5 using xhci_hcd
May 8 23:32:21 Unimatrix_Zero kernel: usb 8-1.4: ep 0x82 - rounding interval to 64 microframes, ep desc says 80 microframes
May 8 23:32:21 Unimatrix_Zero kernel: input: Metadot - Das Keyboard Das Keyboard as /devices/pci0000:00/0000:00:09.0/0000:04:00.0/usb8/8-1/8-1.4/8-1.4:1.0/0003:24F0:0140.000D/input/input13
May 8 23:32:21 Unimatrix_Zero kernel: hid-generic 0003:24F0:0140.000D: input,hidraw2: USB HID v1.10 Keyboard [Metadot - Das Keyboard Das Keyboard] on usb-0000:04:00.0-1.4/input0
May 8 23:32:21 Unimatrix_Zero kernel: input: Metadot - Das Keyboard Das Keyboard as /devices/pci0000:00/0000:00:09.0/0000:04:00.0/usb8/8-1/8-1.4/8-1.4:1.1/0003:24F0:0140.000E/input/input14
May 8 23:32:21 Unimatrix_Zero kernel: hid-generic 0003:24F0:0140.000E: input,hidraw3: USB HID v1.10 Device [Metadot - Das Keyboard Das Keyboard] on usb-0000:04:00.0-1.4/input1
May 8 23:32:21 Unimatrix_Zero kernel: usb 10-2.2: reset full-speed USB device number 4 using xhci_hcd
May 8 23:32:21 Unimatrix_Zero kernel: usb 10-2.4.1: reset low-speed USB device number 7 using xhci_hcd
May 8 23:32:22 Unimatrix_Zero kernel: usb 10-2.4.1: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
May 8 23:32:22 Unimatrix_Zero kernel: usb 10-2.3: reset full-speed USB device number 5 using xhci_hcd
May 8 23:32:23 Unimatrix_Zero kernel: usb 10-2.2: reset full-speed USB device number 4 using xhci_hcd
May 8 23:32:23 Unimatrix_Zero kernel: usb 10-2.2: reset full-speed USB device number 4 using xhci_hcd
May 8 23:32:23 Unimatrix_Zero kernel: usb 10-2.2: Not enough bandwidth for new device state.
May 8 23:32:24 Unimatrix_Zero kernel: usb 10-2.4.1: reset low-speed USB device number 7 using xhci_hcd
May 8 23:32:24 Unimatrix_Zero kernel: usb 10-2.4.1: ep 0x81 - rounding interval to 64 microframes, ep desc says 80 microframes
May 8 23:32:24 Unimatrix_Zero kernel: usb 10-2.3: reset full-speed USB device number 5 using xhci_hcd
May 8 23:32:26 Unimatrix_Zero kernel: kvm: zapping shadow pages for mmio generation wraparound
May 8 23:32:26 Unimatrix_Zero kernel: kvm: zapping shadow pages for mmio generation wraparound
May 8 23:32:30 Unimatrix_Zero kernel: docker0: port 5(vnet0) entered forwarding state

 

I can't make heads or tails of it; it worked fine, now it doesn't, and the logs seem useless to me.

 

Thanks in advance for any help!

 

P.S.

Is it possible to do a clean install on my USB stick (with 6.2 beta 21) and keep my current array in one piece?

Link to comment

By the way, is SMART turned off and on in the drive or in the OS?

It's a software toggle in the drive itself, saved across power off.  You can turn it on yourself with -

  smartctl -s on /dev/sdX

- or if necessary -

  smartctl -d ata -s on /dev/sdX
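To check whether it is currently enabled on a given drive:

  # the identify output reports whether SMART is available and enabled
  smartctl -i /dev/sdX | grep -i 'smart support'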

 

Link to comment

By the way, is SMART turned off and on in the drive or in the OS?

It's a software toggle in the drive itself, saved across power off.  You can turn it on yourself with -

  smartctl -s on /dev/sdX

- or if necessary -

  smartctl -d ata -s on /dev/sdX

Well, that means SMART was somehow turned off on that one drive, because it was precleared a few weeks ago and I have the preclear report showing SMART being on then.

 

Does that constitute a bug in the unRAID beta? I know I didn't turn it off.

Link to comment

By the way, is SMART turned off and on in the drive or in the OS?

It's a software toggle in the drive itself, saved across power off.  You can turn it on yourself with -

  smartctl -s on /dev/sdX

- or if necessary -

  smartctl -d ata -s on /dev/sdX

Well, that means SMART was somehow turned off on that one drive, because it was precleared a few weeks ago and I have the preclear report showing SMART being on then.

 

Does that constitute a bug in the unRAID beta? I know I didn't turn it off.

I of course cannot answer definitively.  It is my own opinion that neither unRAID nor any unRAID addon ever turns SMART off.  I say that because I can't imagine any reason why anything associated with unRAID would ever do so.

 

I did a little more research, and in the smartmontools docs found the line "In principle the SMART feature settings are preserved over power-cycling".  That phrase "in principle" leaves room for the possibility that not all drives do that, which is consistent with general SMART inconsistency!  So I think it's possible that that drive needs to have SMART enabled after each power off, something unRAID and the older Preclear scripts probably do at start.
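If a particular drive really does forget the setting across power cycles, one workaround would be re-enabling SMART at boot from the go script. A rough sketch only; as far as I know unRAID doesn't ship anything like this:

  # /boot/config/go: turn SMART on for every sd* device at startup
  for d in /dev/sd[a-z]; do
    smartctl -s on "$d" >/dev/null 2>&1
  done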

Link to comment

So I posted a while back about my VM being shut off because unRAID said it was out of memory. Today this happened again, so I dropped my main VM's memory allocation from 14GB down to 12GB.

 

I have the Dynamix stats plugin installed, and after this 2GB change the stats showed almost an extra 4GB of memory free to the system! This seemed very odd, so I decided to SSH into my server, run htop and look at my VMs.

 

VM 1 - OVMF - allocated 12GB - htop VIRT reports 14.7G - htop RES reports 14.5G

VM 2 - SeaBios - allocated 8GB - htop VIRT reports 9707M - htop RES reports 9452M

 

Does anyone know why so much extra memory is being allocated? If I set VM 1 to 14GB, its allocated memory jumps up to almost 18GB! No wonder I am running out of memory if two VMs are using 28GB of my 32GB when only 22GB is allocated.

I have no idea if this is due to the new VM changes in the betas, but it is unusual, and I have only started to see these "out of memory" VM shutdowns recently in beta 21.

 

I know it is a beta, so this is not a complaint - just something interesting to look into.

 

Almost the exact same thing here.

 

I have very few plugins, none of them resource intensive at all (all of them are Dynamix extensions), not a single Docker, and 3 VMs running W10 with 4GB+4GB+3GB of RAM, out of 16GB of available RAM.

If I set them up with 4GB+4GB+4GB, all of them immediately shut down once they're all up.

 

I've done a full memtest and everything is correct: 0 errors. It seems that the RAM overhead needed for each VM is roughly ~25% of its allocated memory. unRAID reports 200MB of used RAM and 1.4GB of cached RAM when all VMs are off, which should leave plenty of room to allocate 12GB to VMs. (I can provide any logs required.)

 

I think there needs to be at least a tool, or a configuration option, that lets unRAID users reserve some amount of RAM for KVM as a whole or for each VM in particular, so that unRAID at least logs some kind of error or halts the startup of a VM instead of shutting everything down.
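In the meantime, libvirt itself can put a hard ceiling on how much host memory a single VM's qemu process may consume, via a memtune block in the VM's XML (virsh edit). A minimal sketch only; the 8.5 GiB figure is just an example for an 8GB guest plus some overhead, and setting it too low will get that VM killed instead:

  <memtune>
    <!-- cap this guest's qemu process at roughly 8.5 GiB of host RAM -->
    <hard_limit unit='KiB'>8912896</hard_limit>
  </memtune>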

 

Edit: here's a snapshot of RAM usage with all 3 VMs up (4GB+4GB+3GB)


 

RAM usage with all VMs off:


Link to comment

I started a thread in the V6 support forum, but I wonder if it's getting passed over since I'm running B21...  http://lime-technology.com/forum/index.php?topic=48979.0

 

The GUI is unresponsive... it was working previously (I updated to B21 when it was released).  I was finally able to get the GUI to load by disabling all plugins and setting the array and Docker not to start up.  The GUI locked up as soon as I tried to start the array.  I did a filesystem repair on one drive, started the array, and then it hung again within a few seconds.

 

Hoping to get more eyes on this to see if there's something I'm missing.

 

Attached is the diagnostics zip from this morning.

 

Please help

-Sw2

 

media-diagnostics-20160509-0703.zip

Link to comment

I started a thread in the V6 support forum, but I wonder if it's getting passed over since I'm running B21...  http://lime-technology.com/forum/index.php?topic=48979.0

 

The GUI is unresponsive... it was working previously (I updated to B21 when it was released).  I was finally able to get the GUI to load by disabling all plugins and setting the array and Docker not to start up.  The GUI locked up as soon as I tried to start the array.  I did a filesystem repair on one drive, started the array, and then it hung again within a few seconds.

 

Hoping to get more eyes on this to see if there's something I'm missing.

 

Attached is the diagnostics zip from this morning.

 

Please help

-Sw2

 

Connect a monitor and keyboard and boot in GUI mode and see if that locks up.  If not, it is not a GUI issue and is probably a networking issue.

 

EDIT: From the diagnostics, it looks like disk6 has a problem.  You are getting kernel panics when that disk is mounting.  It appears the file system has a problem.

 

May  8 21:46:16 media emhttp: shcmd (723): mkdir -p /mnt/disk6
May  8 21:46:16 media emhttp: shcmd (724): set -o pipefail ; mount -t auto -o noatime,nodiratime /dev/md6 /mnt/disk6 |& logger
May  8 21:46:16 media kernel: XFS (md6): Mounting V5 Filesystem
May  8 21:46:16 media kernel: XFS (md6): Starting recovery (logdev: internal)
May  8 21:46:16 media kernel: XFS (md6): _xfs_buf_find: Block out of range: block 0x8e8e05f28, EOFS 0x1d1c0be48 
May  8 21:46:16 media kernel: ------------[ cut here ]------------
May  8 21:46:16 media kernel: WARNING: CPU: 1 PID: 7528 at fs/xfs/xfs_buf.c:472 _xfs_buf_find+0x7f/0x28c()
May  8 21:46:16 media kernel: Modules linked in: md_mod x86_pkg_temp_thermal igb coretemp i2c_i801 kvm_intel kvm e1000e mvsas ptp ahci libsas libahci i2c_algo_bit pps_core scsi_transport_sas [last unloaded: md_mod]
May  8 21:46:16 media kernel: CPU: 1 PID: 7528 Comm: mount Not tainted 4.4.6-unRAID #1
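The usual next step for that would be to check and repair the filesystem with the array started in maintenance mode (a sketch; on unRAID, disk6 corresponds to /dev/md6, so running against the md device keeps parity in sync):

  # dry run first: report problems without changing anything
  xfs_repair -n /dev/md6
  # actual repair; only add -L (zero the log) if it refuses to run without it
  xfs_repair /dev/md6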

Link to comment

Connect a monitor and keyboard and boot in GUI mode and see if that locks up.  If not, it is not a GUI issue and is probably a networking issue.

 

EDIT: From the diagnostics, it looks like disk6 has a problem.  You are getting kernel panics when that disk is mounting.  It appears the file system has a problem.

 

May  8 21:46:16 media emhttp: shcmd (723): mkdir -p /mnt/disk6
May  8 21:46:16 media emhttp: shcmd (724): set -o pipefail ; mount -t auto -o noatime,nodiratime /dev/md6 /mnt/disk6 |& logger
May  8 21:46:16 media kernel: XFS (md6): Mounting V5 Filesystem
May  8 21:46:16 media kernel: XFS (md6): Starting recovery (logdev: internal)
May  8 21:46:16 media kernel: XFS (md6): _xfs_buf_find: Block out of range: block 0x8e8e05f28, EOFS 0x1d1c0be48 
May  8 21:46:16 media kernel: ------------[ cut here ]------------
May  8 21:46:16 media kernel: WARNING: CPU: 1 PID: 7528 at fs/xfs/xfs_buf.c:472 _xfs_buf_find+0x7f/0x28c()
May  8 21:46:16 media kernel: Modules linked in: md_mod x86_pkg_temp_thermal igb coretemp i2c_i801 kvm_intel kvm e1000e mvsas ptp ahci libsas libahci i2c_algo_bit pps_core scsi_transport_sas [last unloaded: md_mod]
May  8 21:46:16 media kernel: CPU: 1 PID: 7528 Comm: mount Not tainted 4.4.6-unRAID #1

 

Yes, I can get to the GUI now remotely (once I disabled plugins and set the array start and Dockers to not autostart)... but I can't seem to get the full "GUI Mode" to work at all from my physical server with a monitor/KB/mouse connected... It's not even an option when I boot from my flash. This is in my flash device's Syslinux Configuration tab, but there is no option for it on startup (on the monitor connected to the server).

label unRAID OS GUI Mode
  menu default
  kernel /bzimage
  append initrd=/bzroot,/bzroot-gui

 

*EDIT* I was able to hit Tab and append ",/bzroot-gui" to the main unRAID entry and the GUI started... odd that it didn't show up in the list as it should.

 

I also ran an xfs_repair -L on disk 6 after this log was created this morning.  Attached is a current diagnostics zip from the remote GUI, with the array started in maintenance mode only... I assume if I try to mount the array normally it will hang again.

 

The issue now is that when I start the array, everything grinds to a halt again and the GUI becomes unresponsive... this was the last bit of the latest diagnostics zip, and it seems to be happening dozens of times per second...

 

May  9 10:45:24 media kernel: swapper/0: page allocation failure: order:0, mode:0x2080020
May  9 10:45:24 media kernel: CPU: 0 PID: 0 Comm: swapper/0 Not tainted 4.4.6-unRAID #1
May  9 10:45:24 media kernel: Hardware name: Supermicro X10SAE/X10SAE, BIOS 3.0 05/20/2015
May  9 10:45:24 media kernel: 0000000000000000 ffff88041dc03c28 ffffffff813688da 0000000000000000
May  9 10:45:24 media kernel: 0000000000000000 ffff88041dc03cc0 ffffffff810bc9b0 ffffffff818b0e38
May  9 10:45:24 media kernel: ffff88041dff9b00 ffffffffffffffff ffffffff008b0680 0000000000000000
May  9 10:45:24 media kernel: Call Trace:
May  9 10:45:24 media kernel: <IRQ>  [<ffffffff813688da>] dump_stack+0x61/0x7e
May  9 10:45:24 media kernel: [<ffffffff810bc9b0>] warn_alloc_failed+0x10f/0x127
May  9 10:45:24 media kernel: [<ffffffff810bf9c7>] __alloc_pages_nodemask+0x870/0x8ca
May  9 10:45:24 media kernel: [<ffffffff814333a9>] ? device_has_rmrr+0x5a/0x63
May  9 10:45:24 media kernel: [<ffffffff810bfabd>] __alloc_page_frag+0x9c/0x15f
May  9 10:45:24 media kernel: [<ffffffff8152e310>] __napi_alloc_skb+0x61/0xc1
May  9 10:45:24 media kernel: [<ffffffffa053e92a>] igb_poll+0x441/0xc06 [igb]
May  9 10:45:24 media kernel: [<ffffffff815390ac>] net_rx_action+0xd8/0x226
May  9 10:45:24 media kernel: [<ffffffff8104d4c0>] __do_softirq+0xc3/0x1b6
May  9 10:45:24 media kernel: [<ffffffff8104d73d>] irq_exit+0x3d/0x82
May  9 10:45:24 media kernel: [<ffffffff8100db9a>] do_IRQ+0xaa/0xc2
May  9 10:45:24 media kernel: [<ffffffff8161ab42>] common_interrupt+0x82/0x82
May  9 10:45:24 media kernel: <EOI>  [<ffffffff815041b7>] ? cpuidle_enter_state+0xf0/0x148
May  9 10:45:24 media kernel: [<ffffffff81504170>] ? cpuidle_enter_state+0xa9/0x148
May  9 10:45:24 media kernel: [<ffffffff81504231>] cpuidle_enter+0x12/0x14
May  9 10:45:24 media kernel: [<ffffffff81076247>] call_cpuidle+0x4e/0x50
May  9 10:45:24 media kernel: [<ffffffff810763cf>] cpu_startup_entry+0x186/0x1fd
May  9 10:45:24 media kernel: [<ffffffff8160fbdd>] rest_init+0x84/0x87
May  9 10:45:24 media kernel: [<ffffffff818eaec0>] start_kernel+0x3f7/0x404
May  9 10:45:24 media kernel: [<ffffffff818ea120>] ? early_idt_handler_array+0x120/0x120
May  9 10:45:24 media kernel: [<ffffffff818ea339>] x86_64_start_reservations+0x2a/0x2c
May  9 10:45:24 media kernel: [<ffffffff818ea421>] x86_64_start_kernel+0xe6/0xf3
May  9 10:45:24 media kernel: Mem-Info:
May  9 10:45:24 media kernel: active_anon:468687 inactive_anon:4711 isolated_anon:0
May  9 10:45:24 media kernel: active_file:443016 inactive_file:3009187 isolated_file:32
May  9 10:45:24 media kernel: unevictable:0 dirty:64349 writeback:152019 unstable:0
May  9 10:45:24 media kernel: slab_reclaimable:51705 slab_unreclaimable:30682
May  9 10:45:24 media kernel: mapped:51722 shmem:85744 pagetables:5236 bounce:0
May  9 10:45:24 media kernel: free:17874 free_pcp:104 free_cma:0
May  9 10:45:24 media kernel: Node 0 DMA free:15580kB min:12kB low:12kB high:16kB active_anon:304kB inactive_anon:16kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15984kB managed:15900kB mlocked:0kB dirty:0kB writeback:0kB mapped:32kB shmem:320kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
May  9 10:45:24 media kernel: lowmem_reserve[]: 0 3512 16022 16022
May  9 10:45:24 media kernel: Node 0 DMA32 free:51276kB min:3524kB low:4404kB high:5284kB active_anon:572120kB inactive_anon:3316kB active_file:391584kB inactive_file:2440188kB unevictable:0kB isolated(anon):0kB isolated(file):128kB present:3607096kB managed:3597428kB mlocked:0kB dirty:61208kB writeback:129236kB mapped:48616kB shmem:74916kB slab_reclaimable:44168kB slab_unreclaimable:26384kB kernel_stack:3376kB pagetables:5800kB unstable:0kB bounce:0kB free_pcp:144kB local_pcp:120kB free_cma:0kB writeback_tmp:0kB pages_scanned:44 all_unreclaimable? no
May  9 10:45:24 media kernel: lowmem_reserve[]: 0 0 12510 12510
May  9 10:45:24 media kernel: Node 0 Normal free:4640kB min:12564kB low:15704kB high:18844kB active_anon:1302324kB inactive_anon:15512kB active_file:1380480kB inactive_file:9596560kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:13074432kB managed:12810880kB mlocked:0kB dirty:196188kB writeback:478840kB mapped:158240kB shmem:267740kB slab_reclaimable:162652kB slab_unreclaimable:96344kB kernel_stack:11968kB pagetables:15144kB unstable:0kB bounce:0kB free_pcp:272kB local_pcp:140kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
May  9 10:45:24 media kernel: lowmem_reserve[]: 0 0 0 0
May  9 10:45:24 media kernel: Node 0 DMA: 1*4kB (U) 1*8kB (U) 1*16kB (U) 0*32kB 3*64kB (UM) 2*128kB (UM) 1*256kB (U) 1*512kB (M) 2*1024kB (UM) 2*2048kB (UM) 2*4096kB (M) = 15580kB
May  9 10:45:24 media kernel: Node 0 DMA32: 499*4kB (ME) 306*8kB (UME) 807*16kB (UME) 358*32kB (UME) 93*64kB (UME) 23*128kB (UME) 7*256kB (ME) 1*512kB (E) 7*1024kB (M) 2*2048kB (M) 0*4096kB = 51276kB
May  9 10:45:24 media kernel: Node 0 Normal: 324*4kB (M) 140*8kB (UME) 89*16kB (UME) 35*32kB (M) 3*64kB (M) 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 5152kB
May  9 10:45:24 media kernel: 3537970 total pagecache pages
May  9 10:45:24 media kernel: 0 pages in swap cache
May  9 10:45:24 media kernel: Swap cache stats: add 0, delete 0, find 0/0
May  9 10:45:24 media kernel: Free swap  = 0kB
May  9 10:45:24 media kernel: Total swap = 0kB
May  9 10:45:24 media kernel: 4174378 pages RAM
May  9 10:45:24 media kernel: 0 pages HighMem/MovableOnly
May  9 10:45:24 media kernel: 68326 pages reserved

media-diagnostics-20160509-1421.zip

Link to comment
This topic is now closed to further replies.