
laterdaze


Posts posted by laterdaze

  1. For what it's worth, I accomplished the same thing by employing OPNsense routers running WireGuard VPN software on both sites.  Using rclone cron jobs I can copy/sync/move folders between my unRAID servers.  Probably better this way: no additional setup in the router other than WireGuard.  Works great.
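
    In case it helps anyone copying the setup: the scheduled part is just a crontab entry calling rclone against a remote defined with rclone config (an SFTP remote pointing at the other server's tunnel address works fine).  The paths and remote name below are placeholders, not my actual layout:

    # crontab entry: sync the backups share to the other box at 02:00 nightly
    # "remote-unraid" is a hypothetical rclone remote (e.g. sftp) reached over the WireGuard tunnel
    0 2 * * * rclone sync /mnt/user/backups remote-unraid:/mnt/user/backups --log-file=/var/log/rclone-sync.log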

  2. 21 hours ago, tr0910 said:

    Did you try and create a wireguard VM? Doesn't have to be slack. That should be trivial. Unraid KVM makes it easy.

    Adding to base os it's not trivial

    Sent from my chisel, carved into granite
     

    I have only tried what I described so far.  Since it seems WireGuard will be in the Linux kernel soon, I thought it would be a natural fit for unRAID.  I can see remotely separated unRAID systems bidirectionally syncing data over a WireGuard VPN, with no need for a particularly powerful router since only port forwarding would be required.  I do something similar today with pfSense/OpenVPN and rclone: a private personal cloud, by invitation only.
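
    To sketch what I mean: each site's router only needs a single UDP port forwarded to the WireGuard endpoint, and the tunnel itself is a handful of config lines.  The keys, addresses and endpoint below are placeholders:

    [Interface]
    # this site's tunnel address; forward UDP 51820 on the router to this host
    PrivateKey = <this-site-private-key>
    Address = 10.10.0.1/24
    ListenPort = 51820

    [Peer]
    # the other unRAID site
    PublicKey = <other-site-public-key>
    Endpoint = other-site.example.com:51820
    AllowedIPs = 10.10.0.2/32
    PersistentKeepalive = 25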

  3. I decided to use your instructions to build ZFS for unRAID 6.3.2, and all went well until:

     

    Hunk #4 succeeded at 2515 (offset 2 lines).
    Hunk #5 succeeded at 2610 (offset 2 lines).
    patching file drivers/pci/quirks.c
      HOSTCC  scripts/basic/fixdep
    scripts/basic/fixdep.c:105:23: fatal error: sys/types.h: No such file or directory
    #include <sys/types.h>
                          ^
    compilation terminated.
    make[1]: *** [scripts/basic/fixdep] Error 1
    make: *** [scripts_basic] Error 2

     

    Do you want to run Menu Config ? [y/N] y

     

    Are there other dependencies I should have installed first?

     

    Thanks for your work.  I'm a long-time ZFS user and would really like Lime-Tech to integrate ZFS support.  The snapshot/rollback capabilities are unparalleled.
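
    My guess is that fixdep error just means the build host has no C library headers (sys/types.h comes from glibc), i.e. a missing toolchain on my end rather than anything in your instructions.  On a Slackware-based box I assume something like this would cover it, with the package filenames left as wildcards since the right versions depend on the release:

    # install the development toolchain from a matching Slackware tree
    # (filenames are placeholders -- use the versions shipped for your unRAID/Slackware release)
    installpkg glibc-*.txz kernel-headers-*.txz gcc-*.txz binutils-*.txz make-*.txz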

     

  4. Thanks for that.  Meanwhile I perused the kernel sources, and it seems that while trying to coalesce some socket buffers the math didn't add up, so it pulled the chain.  Better safe than sorry.  That stack of kernel code seems like a well-worn path, so it will take someone with "enlightened foreprudence" to figure that out...

  5. This hadn't happened before the 6.3.1 upgrade, not that that means anything.  I've been running the Mellanox driver with a ConnectX-2 for quite a while.  The BIOS has an update available; I'll do that when I get a chance.

     

    Feb 14 11:35:21 unRAID kernel: ------------[ cut here ]------------
    Feb 14 11:35:21 unRAID kernel: WARNING: CPU: 2 PID: 14091 at net/core/skbuff.c:4313 skb_try_coalesce+0x22f/0x31d
    Feb 14 11:35:21 unRAID kernel: Modules linked in: xt_nat veth vhost_net tun vhost macvtap macvlan kvm_intel kvm md_mod xt_CHECKSUM iptable_mangle ipt_REJECT nf_reject_ipv4 ebtable_filter ebtables ipt_MASQUERADE nf_nat_masquerade_ipv4 iptable_nat nf_conntrack_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat mlx4_en mlx4_core ptp pps_core r8169 mii x86_pkg_temp_thermal coretemp ahci i2c_i801 i2c_smbus i2c_core libahci wmi video backlight [last unloaded: md_mod]
    Feb 14 11:35:21 unRAID kernel: CPU: 2 PID: 14091 Comm: Threadpool work Not tainted 4.9.8-unRAID #1
    Feb 14 11:35:21 unRAID kernel: Hardware name: System manufacturer System Product Name/P8H77-I, BIOS 0904 10/15/2012
    Feb 14 11:35:21 unRAID kernel: ffff88021fb03af8 ffffffff813a34fa 0000000000000000 ffffffff819a7fee
    Feb 14 11:35:21 unRAID kernel: ffff88021fb03b38 ffffffff8104d04c 000010d9132d9000 ffff8801e5d34d00
    Feb 14 11:35:21 unRAID kernel: ffff8801e5d34f00 00000000000004c0 ffff88021fb03b94 0000000000000575
    Feb 14 11:35:21 unRAID kernel: Call Trace:
    Feb 14 11:35:21 unRAID kernel: <IRQ>
    Feb 14 11:35:21 unRAID kernel: [<ffffffff813a34fa>] dump_stack+0x61/0x7e
    Feb 14 11:35:21 unRAID kernel: [<ffffffff8104d04c>] __warn+0xb8/0xd3
    Feb 14 11:35:21 unRAID kernel: [<ffffffff8104d114>] warn_slowpath_null+0x18/0x1a
    Feb 14 11:35:21 unRAID kernel: [<ffffffff8157a3be>] skb_try_coalesce+0x22f/0x31d
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815e58b8>] tcp_try_coalesce+0x38/0x97
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815e5db4>] tcp_queue_rcv+0x5c/0x101
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815ea7bb>] tcp_rcv_established+0x2b2/0x5ac
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815f20ae>] tcp_v4_do_rcv+0x98/0x1c8
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815f47e7>] tcp_v4_rcv+0x8aa/0xaec
    Feb 14 11:35:21 unRAID kernel: [<ffffffffa0025215>] ? ipv4_confirm+0x7a/0xd0 [nf_conntrack_ipv4]
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815d4faa>] ip_local_deliver_finish+0xf4/0x1c3
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815d5540>] ip_local_deliver+0xcc/0xe1
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815d4eb6>] ? inet_del_offload+0x40/0x40
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815d536d>] ip_rcv_finish+0x2f4/0x2ff
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815d5896>] ip_rcv+0x341/0x358
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815d5079>] ? ip_local_deliver_finish+0x1c3/0x1c3
    Feb 14 11:35:21 unRAID kernel: [<ffffffff81586b4c>] __netif_receive_skb_core+0x5e9/0x69f
    Feb 14 11:35:21 unRAID kernel: [<ffffffffa0377cd5>] ? mlx4_en_process_rx_cq+0x83e/0xa43 [mlx4_en]
    Feb 14 11:35:21 unRAID kernel: [<ffffffff815871c6>] __netif_receive_skb+0x13/0x55
    Feb 14 11:35:21 unRAID kernel: [<ffffffff81588124>] process_backlog+0xa1/0x13f
    Feb 14 11:35:21 unRAID kernel: [<ffffffff81587f1f>] net_rx_action+0xe2/0x246
    Feb 14 11:35:21 unRAID kernel: [<ffffffff81050eca>] __do_softirq+0xbb/0x1af
    Feb 14 11:35:21 unRAID kernel: [<ffffffff8105116e>] irq_exit+0x53/0x94
    Feb 14 11:35:21 unRAID kernel: [<ffffffff8102009e>] do_IRQ+0xaa/0xc2
    Feb 14 11:35:21 unRAID kernel: [<ffffffff8167db42>] common_interrupt+0x82/0x82
    Feb 14 11:35:21 unRAID kernel: <EOI>
    Feb 14 11:35:21 unRAID kernel: ---[ end trace 790ce744c3e754ca ]---

    unraid-diagnostics-20170215-0833.zip

  6. Thanks for that.  I've attached the drives' SMART info and the syslog.  The SMART info is unremarkable.  Some of the drives have been powered on for almost 4 years, which is about how long I've been running this array.  The syslog shows a drive/controller interface fatal error around 12:36:37.  Rut-roh...  I have another drive being precleared, and if the same drive faults again I'll replace it.  I'm sure I don't need to say it, but this probably has nothing to do with rc6.  Btw, the "diagnostics" command from the cmdline never completed and I couldn't break out of it; I had to kill the session.
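
    If anyone wants to pull the same SMART data for their own drives, smartctl dumps everything that's in the attachment; sdX below stands in for whichever disk you're checking:

    # show identity, SMART attributes and the error log for one drive
    smartctl -a /dev/sdX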

    unraid-txt.zip

  7. My machine has done this a couple of times since I upgraded to rc6 (not that that's the cause).  The GUI and all shares are unresponsive.  It's hung now with atop running if someone wants me to look at anything.  Previously I had to use shutdown, which caused a parity check; it seemed ok after that.

    atop-shfs-100.JPG (screenshot attachment)
