Everything posted by Rich

  1. Well, I waited a week and performed another check and sadly got errors :'( so it's not the port. I'm now in the process of swapping out components. I've swapped out the SAS cable and have started another check. I'm a few hours in and this has come up in the syslog again, however this time there were no corrected sectors afterwards (see reply #), so no errors (yet). Could someone explain to me what the error indicates, please?
Dec 21 18:21:53 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
Dec 21 18:21:53 unRAID kernel: sas: trying to find task 0xffff8800187da000
Dec 21 18:21:53 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8800187da000
Dec 21 18:21:53 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8800187da000 is aborted
Dec 21 18:21:53 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8800187da000 is aborted
Dec 21 18:21:53 unRAID kernel: sas: ata16: end_device-8:1: cmd error handler
Dec 21 18:21:53 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 21 18:21:53 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 21 18:21:53 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 21 18:21:53 unRAID kernel: ata16.00: exception Emask 0x0 SAct 0x40 SErr 0x0 action 0x6 frozen
Dec 21 18:21:53 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 21 18:21:53 unRAID kernel: ata16.00: failed command: READ FPDMA QUEUED
Dec 21 18:21:53 unRAID kernel: ata16.00: cmd 60/00:00:10:d6:37/04:00:2b:00:00/40 tag 6 ncq 524288 in
Dec 21 18:21:53 unRAID kernel: res 40/00:1c:00:d8:1f/00:00:1d:00:00/40 Emask 0x4 (timeout)
Dec 21 18:21:53 unRAID kernel: ata16.00: status: { DRDY }
Dec 21 18:21:53 unRAID kernel: ata16: hard resetting link
Dec 21 18:21:53 unRAID kernel: sas: sas_form_port: phy1 belongs to port1 already(1)!
Dec 21 18:21:55 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[1]:rc= 0
Dec 21 18:21:56 unRAID kernel: ata16.00: configured for UDMA/133
Dec 21 18:21:56 unRAID kernel: ata16: EH complete
Dec 21 18:21:56 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
Dec 21 18:24:45 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
Dec 21 18:24:45 unRAID kernel: sas: trying to find task 0xffff88010761e500
Dec 21 18:24:45 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff88010761e500
Dec 21 18:24:45 unRAID kernel: sas: sas_scsi_find_task: task 0xffff88010761e500 is aborted
Dec 21 18:24:45 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff88010761e500 is aborted
Dec 21 18:24:45 unRAID kernel: sas: ata17: end_device-8:2: cmd error handler
Dec 21 18:24:45 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 21 18:24:45 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 21 18:24:45 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 21 18:24:45 unRAID kernel: ata17.00: exception Emask 0x0 SAct 0x40000000 SErr 0x0 action 0x6 frozen
Dec 21 18:24:45 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 21 18:24:45 unRAID kernel: ata17.00: failed command: READ FPDMA QUEUED
Dec 21 18:24:45 unRAID kernel: ata17.00: cmd 60/00:00:90:ac:c3/04:00:2c:00:00/40 tag 30 ncq 524288 in
Dec 21 18:24:45 unRAID kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 21 18:24:45 unRAID kernel: ata17.00: status: { DRDY }
Dec 21 18:24:45 unRAID kernel: ata17: hard resetting link
Dec 21 18:24:45 unRAID kernel: sas: sas_form_port: phy2 belongs to port2 already(1)!
Dec 21 18:24:47 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[2]:rc= 0
Dec 21 18:24:48 unRAID kernel: ata17.00: configured for UDMA/133
Dec 21 18:24:48 unRAID kernel: ata17: EH complete
Dec 21 18:24:48 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
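In case it helps anyone searching later: a quick way to see how often these link resets recur is to count the "hard resetting link" entries in the syslog. This is only a sketch that filters a log excerpt fed in via a heredoc; on the server itself you would point grep at /var/log/syslog instead:

```shell
# Count SAS/ATA link resets in a syslog excerpt (fed in via a heredoc here;
# on the server: grep -c 'hard resetting link' /var/log/syslog).
grep -c 'hard resetting link' <<'EOF'
Dec 21 18:21:53 unRAID kernel: ata16: hard resetting link
Dec 21 18:24:45 unRAID kernel: ata17: hard resetting link
EOF
```

With the two episodes above this prints 2; a count that keeps growing after a cable or port swap points at whichever component stayed in place.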
  2. Good shout. I'll give the 'good port' another week or so and continue running parity checks every few days, and if I don't get any more problems I'll get the controller RMA'd.
  3. Cool, it's only two months old, so I'll get it swapped for a replacement and hopefully that will solve the problem. Thank you very much for all your help, johnnie.black, much appreciated.
  4. I spoke too soon! Just had over 1000 sync errors appear. The SAS cable I'm using doesn't create any errors when used with the second port on the same controller, which to me suggests the first SAS port is the problem. Does that sound like a reasonable cause for all the above faults?
Dec 10 12:04:00 unRAID afpd[3920]: child[11098]: asev_del_fd: 4
Dec 10 12:23:05 unRAID afpd[3920]: child[16112]: asev_del_fd: 5
Dec 10 13:26:43 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 1 failed: 1
Dec 10 13:26:43 unRAID kernel: sas: trying to find task 0xffff8803d312f000
Dec 10 13:26:43 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8803d312f000
Dec 10 13:26:43 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8803d312f000 is aborted
Dec 10 13:26:43 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8803d312f000 is aborted
Dec 10 13:26:43 unRAID kernel: sas: ata15: end_device-8:0: cmd error handler
Dec 10 13:26:43 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 10 13:26:43 unRAID kernel: ata15.00: exception Emask 0x0 SAct 0x0 SErr 0x0 action 0x6 frozen
Dec 10 13:26:43 unRAID kernel: ata15.00: failed command: READ DMA EXT
Dec 10 13:26:43 unRAID kernel: ata15.00: cmd 25/00:00:b0:64:8f/00:04:32:00:00/e0 tag 18 dma 524288 in
Dec 10 13:26:43 unRAID kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 10 13:26:43 unRAID kernel: ata15.00: status: { DRDY }
Dec 10 13:26:43 unRAID kernel: ata15: hard resetting link
Dec 10 13:26:43 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 10 13:26:43 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 10 13:26:43 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 10 13:26:43 unRAID kernel: sas: sas_form_port: phy0 belongs to port0 already(1)!
Dec 10 13:26:45 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[0]:rc= 0
Dec 10 13:26:46 unRAID kernel: ata15.00: configured for UDMA/133
Dec 10 13:26:46 unRAID kernel: ata15: EH complete
Dec 10 13:26:46 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 1 tries: 1
Dec 10 13:26:46 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 000003DF, slot [5].
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259200
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259208
[... identical "PQ corrected" entries continue for every 8th sector up to sector=848259984 ...]
Dec 10 13:26:46 unRAID kernel: md: recovery thread: PQ corrected, sector=848259992
Dec 10 13:26:46 unRAID kernel: md: recovery thread: stopped logging
Dec 10 13:26:54 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000300, slot [7].
Dec 10 13:27:02 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000230, slot [8].
Dec 10 13:27:10 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000210, slot [5].
Dec 10 13:27:18 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000250, slot [5].
Dec 10 13:27:26 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 00000210, slot [6].
Dec 10 13:27:26 unRAID kernel: sas: Enter sas_scsi_recover_host busy: 2 failed: 2
Dec 10 13:27:26 unRAID kernel: sas: trying to find task 0xffff8801d31d3a00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8801d31d3a00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8801d31d3a00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8801d31d3a00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: trying to find task 0xffff8800090a7d00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: aborting task 0xffff8800090a7d00
Dec 10 13:27:26 unRAID kernel: sas: sas_scsi_find_task: task 0xffff8800090a7d00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: sas_eh_handle_sas_errors: task 0xffff8800090a7d00 is aborted
Dec 10 13:27:26 unRAID kernel: sas: ata16: end_device-8:1: cmd error handler
Dec 10 13:27:26 unRAID kernel: sas: ata15: end_device-8:0: dev error handler
Dec 10 13:27:26 unRAID kernel: sas: ata16: end_device-8:1: dev error handler
Dec 10 13:27:26 unRAID kernel: ata16.00: exception Emask 0x0 SAct 0x60 SErr 0x0 action 0x6 frozen
Dec 10 13:27:26 unRAID kernel: sas: ata17: end_device-8:2: dev error handler
Dec 10 13:27:26 unRAID kernel: sas: ata18: end_device-8:3: dev error handler
Dec 10 13:27:26 unRAID kernel: ata16.00: failed command: READ FPDMA QUEUED
Dec 10 13:27:26 unRAID kernel: ata16.00: cmd 60/10:00:a0:74:8f/00:00:32:00:00/40 tag 5 ncq 8192 in
Dec 10 13:27:26 unRAID kernel: res 40/00:08:48:f5:36/00:00:2a:00:00/40 Emask 0x4 (timeout)
Dec 10 13:27:26 unRAID kernel: ata16.00: status: { DRDY }
Dec 10 13:27:26 unRAID kernel: ata16.00: failed command: READ FPDMA QUEUED
Dec 10 13:27:26 unRAID kernel: ata16.00: cmd 60/00:00:b0:74:8f/04:00:32:00:00/40 tag 6 ncq 524288 in
Dec 10 13:27:26 unRAID kernel: res 40/00:ff:00:00:00/00:00:00:00:00/00 Emask 0x4 (timeout)
Dec 10 13:27:26 unRAID kernel: ata16.00: status: { DRDY }
Dec 10 13:27:26 unRAID kernel: ata16: hard resetting link
Dec 10 13:27:26 unRAID kernel: sas: sas_form_port: phy1 belongs to port1 already(1)!
Dec 10 13:27:28 unRAID kernel: drivers/scsi/mvsas/mv_sas.c 1430:mvs_I_T_nexus_reset for device[1]:rc= 0
Dec 10 13:27:29 unRAID kernel: ata16.00: configured for UDMA/133
Dec 10 13:27:29 unRAID kernel: ata16: EH complete
Dec 10 13:27:29 unRAID kernel: sas: --- Exit sas_scsi_recover_host: busy: 0 failed: 2 tries: 1
Dec 10 13:27:29 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 0000022F, slot [4].
Dec 10 13:27:29 unRAID kernel: drivers/scsi/mvsas/mv_94xx.c 625:command active 000005FF, slot [9].
  5. All four disks (one of them is disk8) run off the same SAS port on the controller. I ran xfs_repair and it never highlighted anything, so I assume that fixed the problem? I then swapped the SAS cable to the spare port on the controller and ran three parity checks (running mover in between), all coming back with zero errors. I have now swapped the cable back to its original port and am running another parity check, which is looking good so far. Is it possible that the above file system problem could have caused the issues with the other four disks, which in turn caused the parity errors?
  6. I did that earlier this week, for 24 hours, as I was trying to identify the cause of my sync errors; it came back with zero errors. Is there anything else I can check? Also, would the kernel crashing have had any impact? Everything seemed to be running OK: all the dockers and the VM, plus the parity check, were and still are running fine. Thank you
  7. I'm in the process of testing the components connected to the four drives; results so far indicate that it might be one of the ports on the controller. When running further parity checks, however, I am seeing the below error in the syslog, and it's repeated 157 times around the middle of the parity check. Is this related, and can anyone tell me what it actually means? Thank you
Dec 8 04:03:53 unRAID kernel: 68348 pages reserved
Dec 8 04:03:53 unRAID kernel: qemu-system-x86: page allocation failure: order:4, mode:0x260c0c0
Dec 8 04:03:53 unRAID kernel: CPU: 0 PID: 9253 Comm: qemu-system-x86 Tainted: G W 4.4.30-unRAID #2
Dec 8 04:03:53 unRAID kernel: Hardware name: ASUS All Series/Z87-K, BIOS 1402 11/05/2014
Dec 8 04:03:53 unRAID kernel: 0000000000000000 ffff88034a117798 ffffffff8136f79f 0000000000000001
Dec 8 04:03:53 unRAID kernel: 0000000000000004 ffff88034a117830 ffffffff810bd527 000000010260c0c0
Dec 8 04:03:53 unRAID kernel: ffff88034ec0e000 0000000400000040 0000000000000010 0000000000000004
Dec 8 04:03:53 unRAID kernel: Call Trace:
Dec 8 04:03:53 unRAID kernel: [<ffffffff8136f79f>] dump_stack+0x61/0x7e
Dec 8 04:03:53 unRAID kernel: [<ffffffff810bd527>] warn_alloc_failed+0x10f/0x127
Dec 8 04:03:53 unRAID kernel: [<ffffffff810c0548>] __alloc_pages_nodemask+0x870/0x8ca
Dec 8 04:03:53 unRAID kernel: [<ffffffff810c074c>] alloc_kmem_pages_node+0x4b/0xb3
Dec 8 04:03:53 unRAID kernel: [<ffffffff810f4d58>] kmalloc_large_node+0x24/0x52
Dec 8 04:03:53 unRAID kernel: [<ffffffff810f7501>] __kmalloc_node+0x22/0x153
Dec 8 04:03:53 unRAID kernel: [<ffffffff810209b0>] reserve_ds_buffers+0x18c/0x33d
Dec 8 04:03:53 unRAID kernel: [<ffffffff8101b3fc>] x86_reserve_hardware+0x135/0x147
Dec 8 04:03:53 unRAID kernel: [<ffffffff8101b45e>] x86_pmu_event_init+0x50/0x1c9
Dec 8 04:03:53 unRAID kernel: [<ffffffff810ae7bd>] perf_try_init_event+0x41/0x72
Dec 8 04:03:53 unRAID kernel: [<ffffffff810aec0e>] perf_event_alloc+0x420/0x66e
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00f958e>] ? kvm_dev_ioctl_get_cpuid+0x1c0/0x1c0 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffff810b0bbb>] perf_event_create_kernel_counter+0x22/0x112
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00f96d9>] pmc_reprogram_counter+0xbf/0x104 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00f992b>] reprogram_fixed_counter+0xc7/0xd8 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa03d0987>] intel_pmu_set_msr+0xe0/0x2ca [kvm_intel]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00f9b2c>] kvm_pmu_set_msr+0x15/0x17 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00dba57>] kvm_set_msr_common+0x921/0x983 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa03d0400>] vmx_set_msr+0x2ec/0x2fe [kvm_intel]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00d8424>] kvm_set_msr+0x61/0x63 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa03c99c4>] handle_wrmsr+0x3b/0x62 [kvm_intel]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa03ce63f>] vmx_handle_exit+0xfbb/0x1053 [kvm_intel]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa03d0105>] ? vmx_vcpu_run+0x30e/0x31d [kvm_intel]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00e1f92>] kvm_arch_vcpu_ioctl_run+0x38a/0x1080 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00dc938>] ? kvm_arch_vcpu_load+0x6b/0x16c [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00dc9b5>] ? kvm_arch_vcpu_load+0xe8/0x16c [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00d2cff>] kvm_vcpu_ioctl+0x178/0x499 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffffa00d5152>] ? kvm_vm_ioctl+0x3e8/0x5d8 [kvm]
Dec 8 04:03:53 unRAID kernel: [<ffffffff8111869e>] do_vfs_ioctl+0x3a3/0x416
Dec 8 04:03:53 unRAID kernel: [<ffffffff8112070e>] ? __fget+0x72/0x7e
Dec 8 04:03:53 unRAID kernel: [<ffffffff8111874f>] SyS_ioctl+0x3e/0x5c
Dec 8 04:03:53 unRAID kernel: [<ffffffff81629c2e>] entry_SYSCALL_64_fastpath+0x12/0x6d
Dec 8 04:03:53 unRAID kernel: Mem-Info:
Dec 8 04:03:53 unRAID kernel: active_anon:536036 inactive_anon:6133 isolated_anon:0
Dec 8 04:03:53 unRAID kernel: active_file:520606 inactive_file:1053929 isolated_file:0
Dec 8 04:03:53 unRAID kernel: unevictable:1754170 dirty:135 writeback:0 unstable:0
Dec 8 04:03:53 unRAID kernel: slab_reclaimable:64958 slab_unreclaimable:17450
Dec 8 04:03:53 unRAID kernel: mapped:37272 shmem:89672 pagetables:9137 bounce:0
Dec 8 04:03:53 unRAID kernel: free:69997 free_pcp:0 free_cma:0
Dec 8 04:03:53 unRAID kernel: Node 0 DMA free:15872kB min:132kB low:164kB high:196kB active_anon:0kB inactive_anon:0kB active_file:0kB inactive_file:0kB unevictable:0kB isolated(anon):0kB isolated(file):0kB present:15956kB managed:15872kB mlocked:0kB dirty:0kB writeback:0kB mapped:0kB shmem:0kB slab_reclaimable:0kB slab_unreclaimable:0kB kernel_stack:0kB pagetables:0kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? yes
Dec 8 04:03:53 unRAID kernel: lowmem_reserve[]: 0 3299 15839 15839
Dec 8 04:03:53 unRAID kernel: Node 0 DMA32 free:97140kB min:28124kB low:35152kB high:42184kB active_anon:399664kB inactive_anon:4252kB active_file:452216kB inactive_file:884428kB unevictable:1560752kB isolated(anon):0kB isolated(file):0kB present:3525636kB managed:3515880kB mlocked:1560752kB dirty:8kB writeback:0kB mapped:41496kB shmem:73524kB slab_reclaimable:54680kB slab_unreclaimable:14348kB kernel_stack:2128kB pagetables:7436kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Dec 8 04:03:53 unRAID kernel: lowmem_reserve[]: 0 0 12540 12540
Dec 8 04:03:53 unRAID kernel: Node 0 Normal free:166976kB min:106908kB low:133632kB high:160360kB active_anon:1744480kB inactive_anon:20280kB active_file:1630208kB inactive_file:3331288kB unevictable:5455928kB isolated(anon):0kB isolated(file):0kB present:13105152kB managed:12841600kB mlocked:5455928kB dirty:532kB writeback:0kB mapped:107592kB shmem:285164kB slab_reclaimable:205152kB slab_unreclaimable:55452kB kernel_stack:14000kB pagetables:29112kB unstable:0kB bounce:0kB free_pcp:0kB local_pcp:0kB free_cma:0kB writeback_tmp:0kB pages_scanned:0 all_unreclaimable? no
Dec 8 04:03:53 unRAID kernel: lowmem_reserve[]: 0 0 0 0
Dec 8 04:03:53 unRAID kernel: Node 0 DMA: 0*4kB 0*8kB 0*16kB 0*32kB 2*64kB (U) 1*128kB (U) 1*256kB (U) 0*512kB 1*1024kB (U) 1*2048kB (U) 3*4096kB (M) = 15872kB
Dec 8 04:03:53 unRAID kernel: Node 0 DMA32: 5981*4kB (UME) 4632*8kB (UME) 1769*16kB (UME) 256*32kB (UME) 0*64kB 0*128kB 0*256kB 0*512kB 0*1024kB 0*2048kB 0*4096kB = 97476kB
Dec 8 04:03:53 unRAID kernel: Node 0 Normal: 37123*4kB (UMEH) 1915*8kB (UMEH) 50*16kB (UMEH) 21*32kB (UH) 12*64kB (H) 8*128kB (H) 3*256kB (H) 0*512kB 0*1024kB 0*2048kB 0*4096kB = 167844kB
Dec 8 04:03:53 unRAID kernel: 1664245 total pagecache pages
Dec 8 04:03:53 unRAID kernel: 0 pages in swap cache
Dec 8 04:03:53 unRAID kernel: Swap cache stats: add 0, delete 0, find 0/0
Dec 8 04:03:53 unRAID kernel: Free swap = 0kB
Dec 8 04:03:53 unRAID kernel: Total swap = 0kB
Dec 8 04:03:53 unRAID kernel: 4161686 pages RAM
Dec 8 04:03:53 unRAID kernel: 0 pages HighMem/MovableOnly
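For context on the error above: an order:4 page allocation failure means the kernel wanted 2^4 = 16 contiguous pages (64 KB) and couldn't find them, which is a memory fragmentation problem rather than simply running out of RAM (the Mem-Info dump shows plenty free). On any Linux box you can eyeball fragmentation via /proc/buddyinfo, where each successive column counts free blocks of order 0, 1, 2, and so on; a sketch:

```shell
# Each row is a memory zone; the Nth numeric column counts free blocks of
# 2^(N-1) pages. Mostly-zero high-order columns mean large contiguous
# allocations (like the order:4 one in the trace above) are liable to fail
# even when plenty of memory is free overall.
cat /proc/buddyinfo
```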
  8. Thank you so much for the quick reply, I'll check and post back
  9. Hey all, I'd appreciate some help diagnosing repeated parity sync errors I am getting after upgrading my server :'( With the release of unRAID 6.2 I decided to upgrade to dual parity drives; I already had a 6TB parity drive, so I just added another 6TB drive. When adding the extra drive I also changed the case, upgraded the PSU to a Corsair modular RM750i 750W, and added another Supermicro SAS2LP-MV8 controller (please see my sig for all the hardware I'm using). Before the upgrade I never had any parity sync errors at all, but now I get errors with each sync that has seen data written to the array since the last check. If no data has been written to the array, there don't seem to be any errors. I have performed SMART checks on all drives and they're all OK, and I have done a 24-hour memtest which came back with no errors as well. I haven't swapped out any cabling, but I have checked all connections and everything seems OK. I'm currently halfway through another parity check which has thrown up errors again, so I am going to attach the syslog so far. I can see the errors in there, but I am not sure what they mean. Any help would be really appreciated, as I'm out of my depth here and unsure how to go forward with diagnosing a cause. Thank you syslog.txt
  10. Cool, that's what I was hoping and expecting to hear, thank you for confirming
  11. Hey all, Firstly, this is an awesome container, thank you so much. I've got everything set up how I want, and after testing each address and login credentials etc., nginx is seemingly working as it should, but I would appreciate feedback on my config. My question is regarding the openvpn-as docker: I'm struggling to find any decent documentation on setting up nginx to allow an OpenVPN connection through. Is this actually possible, and is it worth it? I assumed having only one port open on my router was the thing to aim for, but if I end up with two ports open and they're both secure, is the recommendation still to pass OpenVPN through nginx, if it's even possible (which it looks like it isn't)? I'd appreciate any advice and also any config examples from others that have done something similar. Thank you
server {
    listen 80;
    server_name xxxx.dyndns.biz;
    return 301 https://$server_name$request_uri;
}

server {
    listen 443 ssl default_server;
    root /config/www;
    index index.html index.htm index.php;
    server_name xxxx.dyndns.biz 192.168.1.100;
    ssl_certificate /config/keys/fullchain.pem;
    ssl_certificate_key /config/keys/privkey.pem;
    ssl_dhparam /config/nginx/dhparams.pem;
    ssl_ciphers 'ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-AES256-GCM-SHA384:DHE-RSA-AES128-GCM-SHA256:DHE-DSS-AES128-GCM-SHA256:kEDH+AESGCM:ECDHE-RSA-AES128-SHA256:ECDHE-ECDSA-AES128-SHA256:ECDHE-RSA-AES128-SHA:ECDHE-ECDSA-AES128-SHA:ECDHE-RSA-AES256-SHA384:ECDHE-ECDSA-AES256-SHA384:ECDHE-RSA-AES256-SHA:ECDHE-ECDSA-AES256-SHA:DHE-RSA-AES128-SHA256:DHE-RSA-AES128-SHA:DHE-DSS-AES128-SHA256:DHE-RSA-AES256-SHA256:DHE-DSS-AES256-SHA:DHE-RSA-AES256-SHA:AES128-GCM-SHA256:AES256-GCM-SHA384:AES128-SHA256:AES256-SHA256:AES128-SHA:AES256-SHA:AES:CAMELLIA:DES-CBC3-SHA:!aNULL:!eNULL:!EXPORT:!DES:!RC4:!MD5:!PSK:!aECDH:!EDH-DSS-DES-CBC3-SHA:!EDH-RSA-DES-CBC3-SHA:!KRB5-DES-CBC3-SHA';
    ssl_prefer_server_ciphers on;
    client_max_body_size 0;

    location / {
        try_files $uri $uri/ /index.html /index.php?$args =404;
    }

    location ~ \.php$ {
        fastcgi_split_path_info ^(.+\.php)(/.+)$;
        fastcgi_pass unix:/var/run/php5-fpm.sock;
        fastcgi_index index.php;
        fastcgi_param SCRIPT_FILENAME $document_root$fastcgi_script_name;
        include fastcgi_params;
    }

    location /plexpy {
        proxy_pass http://192.168.1.100:8191;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd_admin;
    }

    location /couchpotato {
        proxy_pass http://192.168.1.100:8083;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd_admin;
    }

    location /sickrage {
        proxy_pass http://192.168.1.100:8082;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd_admin;
    }

    location /plexrequests {
        proxy_pass http://192.168.1.100:3000;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd_shared;
    }

    location /nzbhydra {
        proxy_pass http://192.168.1.100:5075;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd_admin;
    }

    location /mylar {
        proxy_pass http://192.168.1.100:8090;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd_admin;
    }

    location /sabnzbd {
        proxy_pass http://192.168.1.100:8081;
        proxy_set_header Host $host;
        proxy_set_header X-Real-IP $remote_addr;
        proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
        auth_basic "Restricted";
        auth_basic_user_file /config/nginx/.htpasswd_admin;
    }
}
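On the OpenVPN part of the question, from what I've read since: the http-level location blocks above can't carry OpenVPN, because it isn't HTTP traffic, but nginx builds that include the stream module can pass raw TCP through, so an OpenVPN server running in TCP mode could in principle sit behind the same nginx. This is only a sketch under those assumptions; the port and backend address are made up, and I haven't verified that the linuxserver image ships the stream module:

```nginx
# Goes at the top level of nginx.conf, alongside (not inside) the http {} block.
# Forwards raw TCP to an OpenVPN server configured with "proto tcp".
stream {
    server {
        listen 1194;
        proxy_pass 192.168.1.100:1194;
    }
}
```

Whether it's worth it is another matter: OpenVPN traffic is already encrypted end to end, so a plain forwarded port to the VPN server gains little from sitting behind the proxy.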
  12. Awesome, thank you very much. I'll start looking into authentication next then. My first mission was just to get access working
  13. It's easier to use than a VPN, and it allows my family to access things like photos without me having to set up a VPN to my whole network for them; I just create a new user and password and send them the details. Thanks for the reply. So am I safe in thinking, then, that the primary uses for a reverse proxy in unRAID's case are secure access, ease of access, and also specific access (unlike a VPN giving access to the entire network)? One last question: for those of you that have set up things like CouchPotato, SickBeard/Rage and Sabnzbd with Apache, is there some kind of authentication you add on the proxy, or are you relying on the applications' built-in sign-in pages?
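For anyone wondering how to add proxy-level authentication like this: basic-auth user files can be generated without Apache's htpasswd tool by using openssl's passwd helper. A sketch only; the user name, password, and file name here are placeholders:

```shell
# Append a basic-auth user to an Apache-style htpasswd file using the
# apr1 (Apache MD5) scheme, which both Apache and nginx understand.
printf '%s:%s\n' 'rich' "$(openssl passwd -apr1 'example-password')" >> .htpasswd_admin
```

The resulting line looks like `rich:$apr1$<salt>$<hash>`; point the proxy's auth directive at the file and it will prompt for those credentials before passing requests upstream.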
  14. Can I ask what people's use cases are for Apache as a reverse proxy? I get that it allows for a more secure way to access sites externally, and if you're using https it allows for a certificate for the proxy only rather than per exposed address; but in quite a few forums I've read, people are also running VPNs, which to my noobie brain allow for secure access as well. So why not just use a VPN? I'm enjoying the research and have so far got Apache up and running over http, with plexpy and Plex Requests. I just want to make sure I'm not missing something about why or how people are using this feature. Thank you, Rich
  15. Hey, just starting out with this docker and currently know zero about Apache and setting it up, so I'm reading as much as I can. Has the content from these links been completely removed or just relocated, as both links appear to be dead? It seems that others have found them useful. Thanks, Rich
  16. +2 I would like an app drive too, purely for VMs, docker containers and images.
  17. You were right, it's fully explained in the announcement. For some reason Google didn't pick that up when I was searching earlier. Thanks for pointing me in the right direction. Rich
  18. Thanks for the quick reply. That's annoying, especially as it was working perfectly before 6.2. I've just tried a clean install of my flash drive and have exactly the same problem, so obviously it's not due to anything lingering after upgrading. I did have a quick look through the FAQ, but I'll check it out again and look at the 6.2 announcement too. Thank you
  19. Hi All, I'm having a problem after upgrading to 6.2. The locations I have been using are /mnt/disks/VMs/Docker/docker.img/ and /mnt/disks/VMs/Docker/appdata/. I upgraded via the GUI straight from 6.1.9, and run my docker image and appdata directory on a btrfs SSD kept out of the array via Unassigned Devices. After restarting and booting into 6.2, I went to the docker menu and deleted my old img as prompted, but unRAID would not create a new img and docker remained 'not running'. After some playing around I discovered that if I changed the location to the cache drive, a new img was created with no problem. I then tried to copy the image back to my preferred location on the SSD, but got the following message and docker would again not start; if I copy the img back to the cache drive, however, docker starts with no issue. Also, the appdata directory has been on the SSD the whole time and worked as expected when I reinstalled my containers. It's almost like there's a record somewhere preventing a new img being created in the location of the old one? I'd be grateful if anyone can help me with a solution, as I'd like the img back on my SSD instead of my cache drive and would prefer not to have to wipe my flash drive and start from fresh. Rich
  20. I can confirm that dropping to one core worked for me as well
  21. Hi All, I have just swapped over to UD from using SNAP, with an SSD that holds my Windows VM image and all my docker containers. I read through the help and saw this; however, my SSD is formatted with btrfs, which I have been led to believe is better suited to my use case (please correct me if I am wrong). My question is: will trim work with btrfs, or will I need to reformat the drive to xfs or ext4 if I want trim to run? Thank you, Rich
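Following up for anyone with the same question: btrfs does support trim through the generic fstrim interface, so in principle the drive does not need reformatting to xfs or ext4; the usual approach is a periodic fstrim run rather than the discard mount option. A sketch of a cron entry, assuming the SSD is mounted at /mnt/disks/VMs (adjust to wherever Unassigned Devices mounts yours):

```shell
# crontab entry: trim the btrfs-formatted SSD every Sunday at 03:00.
# fstrim -v reports how many bytes were discarded on each run.
0 3 * * 0 /sbin/fstrim -v /mnt/disks/VMs
```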
  22. Hi saarg, Have you made any progress on the keyboard side of things, or are you still not getting anywhere? Thank you
  23. Well, I just forced a new update for the container and that's sorted the problem. So thank you to whichever devs were responsible for the fix!