unRAID Server Release 6.2.0-beta21 Available



Is there a way to bypass the internet connectivity check within the beta, and will this be implemented in the final 6.2 code?

 

I'm asking because in my setup I have a pfSense VM built into unRAID, which starts when the array starts up and provides LAN and WAN connectivity to all the devices around my home. Since the array cannot start without first performing its internet connectivity check, I cannot start my pfSense VM. The end result is zero LAN and WAN connectivity for anything, and the unRAID array does not start.

 

I understand the reasoning behind it all, but I wondered if there is a way to disable it (optionally) so it at least allows my array to start and give me LAN and WAN connectivity.

 

unRAID 6.1.9 works perfectly, due to the lack of internet connectivity checking.

 

Thanks

Link to comment


Your best bet is to contact Limetech via email, because if there is such a feature they're not likely to state it publicly. But be ready for it to be a nope.

Link to comment


This has already been answered by LT; see http://lime-technology.com/forum/index.php?topic=47408.msg454240#msg454240

 

Link to comment

OK, so I am having another weird issue. I am trying to transfer files from an unRAID share to a USB drive connected to one of my VMs; the drive is attached to a PCIe USB controller that is passed through directly to the VM.

 

After transferring around 200 GB, Samba locks up entirely and becomes inaccessible. I lose access to the unRAID GUI as well, but everything else keeps working: VMs, SSH, etc.

I managed to get powerdown to run, which fetched the log, but it did not actually manage to shut down the system; it just sat there.

 

I have checked my logs and can see nothing wrong or reported incorrectly. I have tried this transfer using both of my VMs on the system, but it seems to fail on both after 200 GB.

Can anyone see any issues in my logs? At the moment everything seems to be working solidly except for the actual NAS part (I can write as much as I like but get issues with reads).

 

I am unsure whether this is also happening because my VMs seem to be leaking memory heavily: on a fresh boot my system will have 9 GB of memory left, but after 7 days a 12 GB VM jumps to 19 GB of memory usage and the system has less than 1 GB free.

The issue I am having at present can happen after an hour, however.

 

Regards,

Jamie

 

Edit: This issue does seem to be beta-related, as I was able to transfer over 4 TB of data to backup USB drives on 6.1.9.

 

I've been having this issue as well. Any time I try to transfer files from a share to a flash drive or another computer, it will lock up Samba, the WebGUI, Dockers, pretty much everything. I can still SSH in, but powerdown does nothing. I've also observed this same behavior when Sabnzbd is processing very large files and moving them into the array. It seems it's definitely because of heavy IO load. I'll try the suggested fix of changing md_num_stripes to 8192 and report back if that works around the issue.
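For anyone who wants to try the same workaround from the console, here is a minimal sketch. It assumes unRAID's mdcmd helper (which writes to /proc/mdcmd) is available and that the flash drive's go file lives at /boot/config/go; treat both paths as assumptions, and prefer the Tunable (md_num_stripes) field under Settings -> Disk Settings in the webGUI if your build exposes it there.

# Raise md_num_stripes for the current boot only (assumed mdcmd syntax;
# the change reverts at the next reboot).
mdcmd set md_num_stripes 8192

# Optionally re-apply it at every boot from the flash drive's go file
# (path assumed).
echo 'mdcmd set md_num_stripes 8192' >> /boot/config/go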

 

Link to comment


I too have had several issues when transferring files (between drives in the array, between drives and the cache, between a local USB drive and drives in the array), with everything locking up, losing the GUI, etc., but I can still SSH in, and my powerdown script also does nothing. Hopefully the LT guys are reading this and notice a pattern of some kind here.

Link to comment


I'm transferring over 50 GB right now after trying out the workaround of adjusting md_num_stripes. So far I haven't run into an issue. Usually for me it would lock up almost immediately.

Link to comment


I made the md_num_stripes change on mine, but haven't attempted any large file transfers since (I was tired of hard power-cycling my server several times a day during my previous troubleshooting).

Link to comment


I have been reporting this same problem since beta 16; I don't think they know about it or can reproduce it. Worst of all, I can't do a clean power down.

 

What does md_num_stripes do, and what should I set it to in order to fix this problem?

Link to comment


I haven't read up on what it does. Maybe someone else here can answer that. I did set the value to 8192 per the recommendation a couple pages back. So far things have been good. I'll report back if it ends up locking everything up again.

 

Sent from my XT1575 using Tapatalk

 

 

Link to comment


Good to know that the temp fix works.

 

md_num_stripes is explained here

 

The summary is that the higher the value, the more active pieces of 4k IO can be in flight simultaneously. The fact that increasing it helps suggests there is something in unRAID's processing that causes a lot of stripes to become stuck in the active state. Considering this wasn't reported with 6.1.9, it has got to be the new code.

 

I would connect the 4k IO dots to the new NVMe support, but that's one man's 2p.

Link to comment

I'm seeing a lot of this recently in my syslog:

May 18 20:01:39 Server kernel: qemu-system-x86: page allocation failure: order:4, mode:0x260c0c0
May 18 20:01:39 Server kernel: CPU: 1 PID: 27063 Comm: qemu-system-x86 Not tainted 4.4.6-unRAID #1
May 18 20:01:39 Server kernel: Hardware name: Gigabyte Technology Co., Ltd. To be filled by O.E.M./X99-SLI-CF, BIOS F21a 01/12/2016
May 18 20:01:39 Server kernel: 0000000000000000 ffff88089a03f8d0 ffffffff813688da 0000000000000001
May 18 20:01:39 Server kernel: 0000000000000004 ffff88089a03f968 ffffffff810bc9b0 0260c0c000000010
May 18 20:01:39 Server kernel: ffff880800000040 0000000400000040 0000000000000004 0000000000000004
May 18 20:01:39 Server kernel: Call Trace:
May 18 20:01:39 Server kernel: [<ffffffff813688da>] dump_stack+0x61/0x7e
May 18 20:01:39 Server kernel: [<ffffffff810bc9b0>] warn_alloc_failed+0x10f/0x127
May 18 20:01:39 Server kernel: [<ffffffff810bf9c7>] __alloc_pages_nodemask+0x870/0x8ca
May 18 20:01:39 Server kernel: [<ffffffff810bfbcb>] alloc_kmem_pages_node+0x4b/0xb3
May 18 20:01:39 Server kernel: [<ffffffff810f40c2>] kmalloc_large_node+0x24/0x52
May 18 20:01:39 Server kernel: [<ffffffff810f6861>] __kmalloc_node+0x22/0x153
May 18 20:01:39 Server kernel: [<ffffffff81020994>] reserve_ds_buffers+0x18a/0x33f
May 18 20:01:39 Server kernel: [<ffffffff8101b3e0>] x86_reserve_hardware+0x135/0x147
May 18 20:01:39 Server kernel: [<ffffffff8101b442>] x86_pmu_event_init+0x50/0x1c9
May 18 20:01:39 Server kernel: [<ffffffff810ade55>] perf_try_init_event+0x41/0x72
May 18 20:01:39 Server kernel: [<ffffffff810ae2a6>] perf_event_alloc+0x420/0x5b0
May 18 20:01:39 Server kernel: [<ffffffffa0098579>] ? kvm_perf_overflow+0x35/0x35 [kvm]
May 18 20:01:39 Server kernel: [<ffffffff810b0210>] perf_event_create_kernel_counter+0x22/0x124
May 18 20:01:39 Server kernel: [<ffffffffa009868f>] pmc_reprogram_counter+0xbf/0x104 [kvm]
May 18 20:01:39 Server kernel: [<ffffffffa00988e1>] reprogram_fixed_counter+0xc7/0xd8 [kvm]
May 18 20:01:39 Server kernel: [<ffffffffa0098941>] reprogram_counter+0x4f/0x54 [kvm]
May 18 20:01:39 Server kernel: [<ffffffffa00989b5>] kvm_pmu_handle_event+0x6f/0x8d [kvm]
May 18 20:01:39 Server kernel: [<ffffffffa00816e1>] kvm_arch_vcpu_ioctl_run+0xad9/0x104e [kvm]
May 18 20:01:39 Server kernel: [<ffffffffa007ba13>] ? kvm_arch_vcpu_load+0x133/0x16c [kvm]
May 18 20:01:39 Server kernel: [<ffffffffa00730e9>] kvm_vcpu_ioctl+0x178/0x499 [kvm]
May 18 20:01:39 Server kernel: [<ffffffff8149e416>] ? vfio_pci_read+0x11/0x16
May 18 20:01:39 Server kernel: [<ffffffff8149a03a>] ? vfio_device_fops_read+0x1f/0x29
May 18 20:01:39 Server kernel: [<ffffffff81109460>] ? __vfs_read+0x21/0xb1
May 18 20:01:39 Server kernel: [<ffffffff811175fd>] do_vfs_ioctl+0x3a3/0x416
May 18 20:01:39 Server kernel: [<ffffffff8111f5e7>] ? __fget+0x72/0x7e
May 18 20:01:39 Server kernel: [<ffffffff811176ae>] SyS_ioctl+0x3e/0x5c
May 18 20:01:39 Server kernel: [<ffffffff8161a0ae>] entry_SYSCALL_64_fastpath+0x12/0x6d
May 18 20:01:39 Server kernel: Mem-Info:
May 18 20:01:39 Server kernel: active_anon:1769977 inactive_anon:7535 isolated_anon:0
May 18 20:01:39 Server kernel: active_file:571686 inactive_file:390922 isolated_file:0
May 18 20:01:39 Server kernel: unevictable:4944875 dirty:1110 writeback:0 unstable:0
May 18 20:01:39 Server kernel: slab_reclaimable:266286 slab_unreclaimable:38176
May 18 20:01:39 Server kernel: mapped:27724 shmem:101866 pagetables:17989 bounce:0
May 18 20:01:39 Server kernel: free:44119 free_pcp:31 free_cma:0
May 18 20:01:39 Server kernel: Node 0 DMA free:15736kB min:8kB low:8kB high:12kB active_anon:128kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:32kB isolated(anon):0kB isolated(file):0kB present:15980kB managed:15896kB mlocked:32kB dirty:0kB writeback:0kB mapped:48kB shmem:160kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
May 18 20:01:39 Server kernel: lowmem_reserve[]: 0 821 32053 32053
May 18 20:01:39 Server kernel: Node 0 DMA32 free:125548kB min:584kB low:728kB high:876kB active_anon:95684kB inactive_anon:828kB active_file:1300kB inactive_file:1556kB unevictable:529080kB isolated(anon):0kB isolated(file):0kB present:851636kB managed:841968kB mlocked:529080kB dirty:4kB writeback:0kB mapped:2152kB shmem:8932kB slab_reclaimable:54024kB slab_unreclaimable:8800kB kernel_stack:1056kB pagetables:2964kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
May 18 20:01:39 Server kernel: lowmem_reserve[]: 0 0 31232 31232
May 18 20:01:39 Server kernel: Node 0 Normal free:35192kB min:22248kB low:27808kB high:33372kB active_anon:6984096kB inactive_anon:29312kB active_file:2285444kB inactive_file:1562132kB unevictable:19250388kB isolated(anon):0kB isolated(file):0kB present:32505856kB managed:31981696kB mlocked:19250388kB dirty:4436kB writeback:0kB mapped:108696kB shmem:398372kB slab_reclaimable:1011120kB slab_unreclaimable:143904kB kernel_stack:13152kB pagetables:68992kB unstable:0kB bounce:0kB free_pcp:120kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
May 18 20:01:39 Server kernel: lowmem_reserve[]: 0 0 0 0
May 18 20:01:39 Server kernel: Node 0 DMA: 0*4kB 1*8kB (U) 1*16kB (U) 1*32kB (M) 3*64kB (UM) 1*128kB (U) 2*256kB (UM) 1*512kB (M) 2*1024kB (UM) 2*2048kB (UM) 2*4096kB (M) = 15736kB
May 18 20:01:39 Server kernel: Node 0 DMA32: 369*4kB (UME) 573*8kB (UME) 410*16kB (UME) 284*32kB (UME) 198*64kB (UME) 130*128kB (UM) 81*256kB (UME) 53*512kB (UME) 18*1024kB (UME) 4*2048kB (M) 0*4096kB = 125516kB
May 18 20:01:39 Server kernel: Node 0 Normal: 5707*4kB (UME) 1663*8kB (UM) 15*16kB (UM) 0*32kB 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 36372kB
May 18 20:01:39 Server kernel: 1064490 total pagecache pages
May 18 20:01:39 Server kernel: 0 pages in swap cache
May 18 20:01:39 Server kernel: Swap cache stats: add 0, delete 0, find 0/0
May 18 20:01:39 Server kernel: Free swap  = 0kB
May 18 20:01:39 Server kernel: Total swap = 0kB
May 18 20:01:39 Server kernel: 8343368 pages RAM
May 18 20:01:39 Server kernel: 0 pages HighMem/MovableOnly
May 18 20:01:39 Server kernel: 133478 pages reserved

 

Time and time again (and again).

 

Given the timing of when this started: I had just started playing with the Emby Beta docker and noticed it was using up to ~4 GB of memory; however, the dashboard memory usage was never above 85% total. Maybe it started using previously cached RAM, and this caused the error to start?

 

I did some searching and others had a similar issue (qemu-system-x86: page allocation failure: order:4), but it seemed to have been fixed in more recent Linux kernels (so that shouldn't be the issue).

IDK, diagnostics attached.

server-diagnostics-20160518-2049.zip
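For context, an order:4 failure means the kernel could not find a contiguous block of 2^4 pages (64 KiB), which usually points to memory fragmentation rather than the box being outright out of RAM. A quick way to gauge fragmentation uses only standard Linux /proc interfaces, nothing unRAID-specific; the sketch below is diagnostic only, not a fix.

# Free blocks per allocation order (columns run from order 0 up to 10);
# empty or tiny numbers in the higher-order columns mean fragmented memory.
cat /proc/buddyinfo

# Overall memory picture, including how much is just page cache that the
# kernel could reclaim.
free -m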

Link to comment


I just noticed my logs getting dumped full of these same messages, but it stopped when I shut down my Win10 VM.

 

The one thing I noticed yesterday evening is that the VM is not displaying video over the passed-through GTX 960 when it did the day prior; the VM could still see my receiver and did notice when I unplugged the HDMI cable.

 

My diagnostic file is too large to attach.

Link to comment

Disregard my comments for now.

 

My RMA'd motherboard is a bit more finicky with my RAM (exact same model as before).

I can pass Memtest single-threaded just fine (as I have tested), but when I attempt multi-threaded (F2) it never starts and just sits there, requiring a reset.

I was able to change the memory enhancement setting from "standard" to "stability" and the multi-threaded test ran last night; however, I woke up to unRAID having been rebooted (meaning it crashed and caused a reboot). I plan to bump up the voltage a tad, or relax the timings, and hopefully all will pass as intended.

My experiences/issues are likely related to this at least somewhat.

 

Sorry to crap up the thread (as in likely not related to this release).

Link to comment


Yeah... Memtest 5.01 is pretty old and buggy tbh :P

We use it at work; leaving it single-threaded works 100% and always catches errors (we leave it running for 72 hours per server).

 

Multithreaded/SMP, however... On our lowest-end servers it works all right, but move up to hex-core Xeons with DDR4 etc. and Memtest doesn't get along with that hardware, haha.

 

Single threaded mode should be more than ample at catching memory errors though :)

Link to comment


Hmm, so it's not just me.

Well, I run 3 concurrent Windows 10 VMs and hadn't seen this issue until recently.

Since I have been going through hell with an RMA'd MB and video card (they seriously sent me 2 bad replacements; repeat RMA process), I cannot rule out that it is tied to changes with me swapping the board/video card to identical models.

I believe the dropping of my USB ports happens after this error repeats X amount of times (over and over), but I'll have to look further into diagnosing that.

 

 


Interesting information, thanks!

Since you still test with 5.01 at work, is there a specific reason not to use the newest stable version, 6.3.0?

I was never concerned with running it multi-threaded; however, I had read recommendations to use it that way to "really stress it".

I have no issues with it passing in the default/single-threaded mode, so then I may really have an issue... =(

 

There are definitely reports of some examples of my motherboard being more picky about RAM than others (meaning one board versus another of the identical model).

My original one ran with the XMP profile set without any issues; this one (and the "borrowed" one in between) will not reliably operate with XMP enabled, so I have considered it an option to leave off.

I'll do some more testing; however, I would certainly appreciate any insight/thoughts from the LT crew.

Totally get this is beta, but are others experiencing similar issues, is it an actual issue (or just me), etc...

 

As to my previous USB devices dropping, it happened again last night just prior to me saying "F*** it" and running Memtest.

The devices that drop are on the 4 ports that come from a secondary controller (built into the MB); all other chipset USB ports continue to work as expected.

I may just try exclusively using the chipset ones, and see if this is resolved.

Link to comment

Have there been any official comments from LT after this post? Just curious if I missed anything as to the general status and progression until the next beta hits. More curious than anything else.

 

 

To daigo and anyone else that has had system stability issues relating to VMs, array vdisks, etc.

 

I just wanted to touch base on this issue because we have been trying to recreate this on our systems here and have been unsuccessful. There are multiple people reporting issues like this, but it's definitely not affecting everyone (nor would I say even the majority of users). I've tried copying large files to and from both array and cache-based vdisks. I've tried bulk copies to and from SMB shares. I've tried bulk copies from mounting ISOs inside Linux VMs and copying data from them to the vdisk of the Linux VM. No matter what, the systems here remain solid and stable, with no crashes or log events of any kind.

 

What this means is that we are still investigating and we are continuing to patch QEMU and the kernel so we can see if this issue is better addressed in a future beta release.

 

I wish I had more to say on this issue but for now, until we can recreate it, it's going to be an ongoing research problem.

Link to comment


 

The logs have been clean ever since I removed the video card.

Link to comment

Instructions to roll back to 6.1.9?

 

I confirmed VMware passthrough doesn't work on 6.2, so I need to revert.

If you upgraded from 6.1.9 directly to 6.2-beta21, copy the files in the "previous" folder on the flash drive to the root of the flash and then reboot.

 

If you've gone through more than one version of the betas, download 6.1.9 from LT's website, extract bzimage and bzroot from it, overwrite those two files on the flash drive, and then reboot.

 

All this works if you've only got a single parity drive set up.  Not sure what happens if you have 2 set up.
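A minimal console sketch of that second method, assuming the flash drive is mounted at /boot and the 6.1.9 release zip has already been copied there; the archive name is illustrative, so substitute whatever the LT download is actually called.

cd /boot

# Keep the beta kernel/ramdisk around in case you want to go back to it.
mkdir -p beta21-backup
cp bzimage bzroot beta21-backup/

# Pull just bzimage and bzroot out of the 6.1.9 archive (name assumed),
# overwriting the beta files, then reboot into 6.1.9.
unzip -o unRAIDServer-6.1.9-x86_64.zip bzimage bzroot -d /boot
reboot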

 

Link to comment


 

Thanks.  All set.

Link to comment

 

 


 

Yeah, wasn't DDR4 support only added in Memtest 6.0? @limetech, why don't you upgrade the Memtest included in unRAID?

Link to comment

I have been using the 6.2 beta since the first release, and today I ran into a problem with the cache disk mover. (I have no VMs, just pure storage with several plugins.)

 

I can reproduce the same problem again:

 

1. A remote end mounts a disk (via rsync: rsync -a xxxxx xxxxx; rights were READ-ONLY).

2. Start the mover to move files from cache to array (the directory is not on the remotely mounted disk, so I expected no problem).

 

Symptoms:

- The mover (rsync) process just keeps running, but there is no transfer, or it is very slow.

- emhttp appears normal but CPU usage is at 100%; if I telnet in and use "top", it shows only a little usage.

- I can't stop the mover or the array; if I kill the rsync processes generated by the mover, I can execute "stop array", but it does not succeed and it hangs emhttp.

 

Anyone have any ideas? Thanks.
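Not a fix, but a generic way to check whether the mover's rsync processes are stuck in uninterruptible I/O sleep (state D), which would explain why killing them does not work cleanly and why the array refuses to stop; this uses plain Linux tools only, nothing unRAID-specific.

# Show mover/rsync processes with their state and the kernel function they
# are waiting in; a STAT of 'D' means uninterruptible sleep (stuck on I/O).
ps -eo pid,stat,wchan:30,args | grep -E 'rsync|mover' | grep -v grep

# Dump the kernel stack of every D-state process (needs root, which the
# unRAID console is).
for pid in $(ps -eo pid,stat | awk '$2 ~ /^D/ {print $1}'); do
    echo "== PID $pid =="
    cat /proc/$pid/stack 2>/dev/null
done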

 

Link to comment

 

 


 

Memtest86 versions have almost always been confusing, and the current state isn't any better. I won't go into the history (you can look it up), but currently there are two sources, both based on the original source code. One is open source and fully redistributable, and is the one included with unRAID, but it has unfortunately fallen behind: it has only a few devs (perhaps only one), and its last version is 5.01, released in 2013, the exact version we have. The other was taken commercial by PassMark and has been greatly updated; it is currently on 6.3.0, with comprehensive support for recent technologies. They do provide a free version with no restrictions on usage, but only as part of a bootable image that doesn't look like it could be included with other software. Perhaps there is a way, but I'd be wary of PassMark lawyers breathing down your neck.

 

Given the current state of things, the version included is a good first step, but if you have more recent tech, such as DDR4 and modern motherboards and CPUs, you should probably download the latest PassMark Memtest86, create a bootable flash drive with it, and run that instead.
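For reference, a hedged sketch of writing a downloaded Memtest86 USB image to a spare flash drive from a Linux machine; the image filename is illustrative (PassMark also provides a Windows imaging tool), and /dev/sdX must be replaced with the correct device or you will wipe the wrong disk.

# Identify the target flash drive first; dd to the wrong device destroys it.
lsblk -o NAME,SIZE,MODEL

# Write the raw image (filename assumed) to the whole device, not a
# partition, and flush buffers before unplugging.
dd if=memtest86-usb.img of=/dev/sdX bs=4M conv=fsync status=progress
sync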

Link to comment
This topic is now closed to further replies.