Report Comments posted by Frank76

  1. 4 hours ago, Can0nfan said:

    @Frank76 downgrading is pretty easy. If you need a copy of the 6.5.3 zip file, it should still be available under Downloads, but if not, I have a copy for this very reason. DM me if you need it:

    https://s3.amazonaws.com/dnld.lime-technology.com/stable/unRAIDServer-6.5.3-x86_64.zip

    Thanks for the link. I thought I was on to something because once I disabled the block-level backups, it looked like it was working. It actually finished a few jobs, but sadly it just died on me again after less than 24 hours. I will downgrade as soon as I can, but I'm not really clear on the procedure. Do I extract the zip file and overwrite what is in the /boot/previous directory, then go into the web UI, Tools, Update OS and select the previous version? My rough understanding of the manual route is sketched below.

     

    Thanks!
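
    For reference, here is that sketch. It's only my guess at the manual procedure, assuming the stock Unraid flash layout and the bz* file naming in the release zip; none of it is confirmed:

    # Back up the flash drive first, just in case.
    cp -r /boot /tmp/flash-backup

    # Unpack the 6.5.3 release zip somewhere temporary.
    mkdir -p /tmp/unraid-6.5.3
    unzip unRAIDServer-6.5.3-x86_64.zip -d /tmp/unraid-6.5.3

    # Copy the kernel/rootfs images over the ones on the flash drive,
    # then reboot into the older release.
    cp /tmp/unraid-6.5.3/bz* /boot/
    reboot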

  2. 15 hours ago, ajeffco said:

    Hello,

     

    Another night of no crash.

    
    root@tower:~# uptime
     11:28:15 up 1 day, 12:58,  1 user,  load average: 0.01, 0.00, 0.00

     

    @Frank76 Sorry to hear you're still having problems. I've run Synology backups manually and on a schedule, and haven't had trouble since the two changes. I've also run two MacBook Time Machine backups at the same time as a manual Synology backup scan each day since the changes, and it hasn't crashed. I want to say that, for certain, one of my crashes occurred when there was no I/O going to the unRAID rig.

    I just did another test, and it took a whopping 17 minutes before the storage locked up again. I'm using CloudBerry from a VM, with the data source being an NFS-mounted unRAID share and the destination an automounted sshfs filesystem going to a disk at a friend's house; a rough sketch of the setup follows below. It had been working well for a few months until recently. I'm also wondering if CloudBerry's new block-level backup feature could be compounding the issue. I'll disable block-level backup and see if I can get a successful backup.
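
    The sketch (hostnames and paths here are made up for illustration, not my real ones):

    # Data source: the unRAID user share, NFS-mounted inside the VM.
    mount -t nfs tower:/mnt/user/backups /mnt/unraid-backups

    # Destination: a disk at a friend's house, automounted via sshfs.
    sshfs friend@remote-host:/mnt/offsite-disk /mnt/offsite -o reconnect

    # CloudBerry then reads from /mnt/unraid-backups and writes to /mnt/offsite.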

  3. On 9/28/2018 at 8:47 AM, ajeffco said:

    Good Morning,

     

    My unraid rig has survived the night without crashing!  I'll be watching it closely and will report back if anything happens.

     

    @Frank76 I also turned off the Tunable (enable Direct IO), which had been enabled.

    Last night I disabled Direct IO and rebooted, and woke up this morning to the same issue. It seems like my backups are too I/O intensive for NFS to handle. I'm getting very close to downgrading, but I have lost the ability to go back to 6.5.x (at least easily). In the meantime I will disable my backups. Hopefully that will prevent my server from crashing again. (My loose understanding of what that tunable does is sketched below.)
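
    As far as I can tell (unconfirmed), "Tunable (enable Direct IO)" toggles FUSE's direct_io behaviour on the /mnt/user shfs mount, bypassing the page cache. Purely as an illustration of the option itself, on a generic FUSE filesystem rather than Unraid's actual shfs invocation:

    # Hypothetical example: direct_io on an ordinary FUSE mount (sshfs here),
    # making reads and writes skip the kernel page cache.
    sshfs host:/export /mnt/example -o direct_io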

  4. On 9/26/2018 at 1:51 PM, limetech said:

    Please try this test. Go to Settings/NFS and set the "Tunable (fuse_remember)" setting to 0. This might cause your clients to crash and burn due to "stale file handle", but maybe not. Would be an interesting test.

    I came here because I've been having problems getting my backup completed all day. This is the first time I've had to reboot my server aside from upgrades (and the extended power outage last weekend). I have set fuse_remember to 0, and I will report back on whether I'm actually able to complete my backup. (A note on what I understand fuse_remember to control is below.)
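
    For anyone else following along: as I understand it (an assumption, not confirmed), fuse_remember maps to libfuse's remember=N mount option on the shfs /mnt/user mount. With N > 0, looked-up inodes stay pinned for N seconds so knfsd file handles remain resolvable; with 0 they can be forgotten immediately, which is where the "stale file handle" risk comes from. Shown on a generic FUSE filesystem purely to illustrate the semantics:

    # remember=N is a standard high-level libfuse mount option; sshfs is used
    # here only as a stand-in for Unraid's shfs.
    sshfs host:/export /mnt/example -o remember=330   # pin inodes for ~330s
    sshfs host:/export /mnt/example -o remember=0     # forget immediately; NFS
                                                      # clients risk ESTALE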

     

    Here's the trace from my last crash:

    [ 2714.619041] ------------[ cut here ]------------
    [ 2714.619043] nfsd: non-standard errno: -103
    [ 2714.619077] WARNING: CPU: 1 PID: 10676 at fs/nfsd/nfsproc.c:817 nfserrno+0x44/0x4a [nfsd]
    [ 2714.619078] Modules linked in: xt_nat xt_CHECKSUM iptable_mangle ipt_REJECT ebtable_filter ebtables ip6table_filter ip6_tables vhost_net tun vhost tap veth ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat xfs nfsd lockd grace sunrpc md_mod i915 i2c_algo_bit iosf_mbi drm_kms_helper drm intel_gtt agpgart syscopyarea sysfillrect sysimgblt fb_sys_fops it87 hwmon_vid bonding e1000e r8169 mii x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel pcbc aesni_intel aes_x86_64 crypto_simd cryptd ahci libahci glue_helper intel_cstate intel_uncore i2c_i801 video intel_rapl_perf i2c_core backlight thermal button acpi_pad fan pcc_cpufreq [last unloaded: e1000e]
    [ 2714.619113] CPU: 1 PID: 10676 Comm: nfsd Not tainted 4.18.8-unRAID #1
    [ 2714.619114] Hardware name: Gigabyte Technology Co., Ltd. Z97-HD3P/Z97-HD3P, BIOS F2 09/17/2014
    [ 2714.619118] RIP: 0010:nfserrno+0x44/0x4a [nfsd]
    [ 2714.619118] Code: c0 48 83 f8 22 75 e2 80 3d b3 06 01 00 00 bb 00 00 00 05 75 17 89 fe 48 c7 c7 3b 6a 42 a0 c6 05 9c 06 01 00 01 e8 8a 1c c3 e0 <0f> 0b 89 d8 5b c3 48 83 ec 18 31 c9 ba ff 07 00 00 65 48 8b 04 25
    [ 2714.619140] RSP: 0018:ffffc90001d53dc0 EFLAGS: 00010282
    [ 2714.619142] RAX: 0000000000000000 RBX: 0000000005000000 RCX: 0000000000000007
    [ 2714.619143] RDX: 0000000000000000 RSI: ffff88041fa56470 RDI: ffff88041fa56470
    [ 2714.619144] RBP: ffffc90001d53e10 R08: 0000000000000003 R09: ffffffff8220a800
    [ 2714.619144] R10: 00000000000003d4 R11: 0000000000012ddc R12: ffff88040908a008
    [ 2714.619145] R13: 000000008de30000 R14: ffff88040908a168 R15: 0000000000000002
    [ 2714.619146] FS:  0000000000000000(0000) GS:ffff88041fa40000(0000) knlGS:0000000000000000
    [ 2714.619147] CS:  0010 DS: 0000 ES: 0000 CR0: 0000000080050033
    [ 2714.619148] CR2: 000000000094e690 CR3: 0000000001e0a006 CR4: 00000000001626e0
    [ 2714.619148] Call Trace:
    [ 2714.619153]  nfsd_open+0x15e/0x17c [nfsd]
    [ 2714.619157]  nfsd_read+0x45/0xec [nfsd]
    [ 2714.619161]  nfsd3_proc_read+0x95/0xda [nfsd]
    [ 2714.619164]  nfsd_dispatch+0xb4/0x169 [nfsd]
    [ 2714.619170]  svc_process+0x4b5/0x666 [sunrpc]
    [ 2714.619173]  ? nfsd_destroy+0x48/0x48 [nfsd]
    [ 2714.619175]  nfsd+0xeb/0x142 [nfsd]
    [ 2714.619179]  kthread+0x10b/0x113
    [ 2714.619181]  ? kthread_flush_work_fn+0x9/0x9
    [ 2714.619183]  ret_from_fork+0x35/0x40
    [ 2714.619185] ---[ end trace 94c2c1298e7ff70a ]---
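
    (For anyone decoding the warning: errno 103 is ECONNABORTED, "Software caused connection abort", so nfsd is complaining about a non-NFS errno bubbling up from the filesystem underneath it, presumably the shfs/FUSE layer here. It can be looked up in the kernel headers:)

    # ECONNABORTED is defined as 103 in the generic errno header:
    grep -n 'ECONNABORTED' /usr/include/asm-generic/errno.h
    # => #define ECONNABORTED 103 /* Software caused connection abort */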

     

    Update: Nope, setting fuse_remember to 0 didn't help at all.

     

     
