jphipps

Members
  • Posts: 334
  • Joined

  • Last visited

  • Gender: Undisclosed

jphipps's Achievements

Contributor (5/14)

Reputation: 3

  1. Update: It must be an issue; all my NFS mounts to unRAID are hung... I was just checking my error log after upgrading to 6.6.0-rc2 and saw the following error message, not sure if it is an issue or not:
     Sep 8 10:00:00 Tower kernel: ------------[ cut here ]------------
     Sep 8 10:00:00 Tower kernel: nfsd: non-standard errno: -103
     Sep 8 10:00:00 Tower kernel: WARNING: CPU: 2 PID: 14895 at fs/nfsd/nfsproc.c:817 nfserrno+0x44/0x4a [nfsd]
     Sep 8 10:00:00 Tower kernel: Modules linked in: xt_nat veth ipt_MASQUERADE iptable_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_nat_ipv4 iptable_filter ip_tables nf_nat xfs nfsd lockd grace sunrpc md_mod ipmi_devintf bonding e1000e x86_pkg_temp_thermal intel_powerclamp coretemp kvm_intel kvm crct10dif_pclmul crc32_pclmul crc32c_intel ghash_clmulni_intel cryptd mpt3sas mvsas ipmi_ssif libsas intel_cstate intel_uncore i2c_i801 intel_rapl_perf i2c_core ahci raid_class libahci scsi_transport_sas video thermal button fan backlight ipmi_si pcc_cpufreq [last unloaded: e1000e]
     Sep 8 10:00:00 Tower kernel: CPU: 2 PID: 14895 Comm: nfsd Not tainted 4.18.6-unRAID #1
     Sep 8 10:00:00 Tower kernel: Hardware name: Supermicro X9SCL/X9SCM/X9SCL/X9SCM, BIOS 2.0c 10/17/2013
     Sep 8 10:00:00 Tower kernel: RIP: 0010:nfserrno+0x44/0x4a [nfsd]
     Sep 8 10:00:00 Tower kernel: Code: c0 48 83 f8 22 75 e2 80 3d b3 06 01 00 00 bb 00 00 00 05 75 17 89 fe 48 c7 c7 3b 0a 26 a0 c6 05 9c 06 01 00 01 e8 5c 7c df e0 <0f> 0b 89 d8 5b c3 48 83 ec 18 31 c9 ba ff 07 00 00 65 48 8b 04 25
     Sep 8 10:00:00 Tower kernel: RSP: 0018:ffffc9000358fdc0 EFLAGS: 00010282
     Sep 8 10:00:00 Tower kernel: RAX: 0000000000000000 RBX: 0000000005000000 RCX: 0000000000000007
     Sep 8 10:00:00 Tower kernel: RDX: 0000000000000000 RSI: ffff88062fd16470 RDI: ffff88062fd16470
     Sep 8 10:00:00 Tower kernel: RBP: ffffc9000358fe10 R08: 0000000000000003 R09: ffffffff82215800
     Sep 8 10:00:00 Tower kernel: R10: 00000000000005d7 R11: 000000000001de74 R12: ffff8805d4a6e408
     Sep 8 10:00:00 Tower kernel: R13: 0000000019070000 R14: ffff8805d4a6e568 R15: 0000000000000100
     Sep 8 10:00:00 Tower kernel: FS: 0000000000000000(0000) GS:ffff88062fd00000(0000) knlGS:0000000000000000
     Sep 8 10:00:00 Tower kernel: CS: 0010 DS: 0000 ES: 0000 CR0: 0000000080050033
     Sep 8 10:00:00 Tower kernel: CR2: 000014b58251e850 CR3: 0000000001e0a002 CR4: 00000000000606e0
     Sep 8 10:00:00 Tower kernel: Call Trace:
     Sep 8 10:00:00 Tower kernel: nfsd_open+0x15e/0x17c [nfsd]
     Sep 8 10:00:00 Tower kernel: nfsd_read+0x45/0xec [nfsd]
     Sep 8 10:00:00 Tower kernel: nfsd3_proc_read+0x95/0xda [nfsd]
     Sep 8 10:00:00 Tower kernel: nfsd_dispatch+0xb4/0x169 [nfsd]
     Sep 8 10:00:00 Tower kernel: svc_process+0x4b5/0x666 [sunrpc]
     Sep 8 10:00:00 Tower kernel: ? nfsd_destroy+0x48/0x48 [nfsd]
     Sep 8 10:00:00 Tower kernel: nfsd+0xeb/0x142 [nfsd]
     Sep 8 10:00:00 Tower kernel: kthread+0x10b/0x113
     Sep 8 10:00:00 Tower kernel: ? kthread_flush_work_fn+0x9/0x9
     Sep 8 10:00:00 Tower kernel: ret_from_fork+0x35/0x40
     Sep 8 10:00:00 Tower kernel: ---[ end trace f9ed8c5ab3595bf7 ]---
     tower-diagnostics-20180908-1151.zip
  2. From Chrome, you can right-click on the page and choose Inspect, then on the Network tab reload the page; click on a request to see its request and response headers. (A command-line equivalent using curl is sketched after this list.)
  3. Well, it's labeled a 'Check'... If it is running a non-correcting parity check, it wouldn't make any difference, and it should actually find a failing (or failed) drive. (I admit that I don't know exactly what the results might be if a correcting check were being done, but since the default is correcting, I would assume that nothing bad would occur.) And it is always better to find a problem before you are actually using parity to rebuild a drive! I would also assume that most of the time you wouldn't know a drive was in a failing state when the check is automatically started, so is your question really more along the lines of "Should I allow an automatic parity check to start if I know I have a failed drive"? My question would be "why would you"? It takes a long time for the check to run, and you could be rebuilding the bad drive during that time. If I had a failed drive and did not have a replacement on hand, I would (and have) shut the server down until I had received a new drive to replace it. I would not want to take a chance on a second drive failing while waiting for the replacement!
     I thought in the past it wouldn't allow a parity check to run since the array would be in an unprotected state, and I was wondering what the expected behavior should be. It still shows as valid parity since dual parity is in place, so I am assuming that is why it was allowed to run. I figured I would let it complete, since in theory I still have dual parity and should be able to withstand another drive failure without losing data. The failed drive was relatively new, so there wasn't a lot of data on it.
  4. I am currently running with dual parity, and yesterday afternoon I had a drive failure. I pulled the bad drive out because it was making a clicking noise. As good timing would have it, my monthly parity check kicked off a few hours later. It seems to be running OK with no errors so far and about 3 hours left. Should the parity check run if you have a failed disk?
  5. I am also all Macs, and I have found that NFS works best for me. I have started testing SMB with the new 6.2 beta, but I use NFS for most mounts...
  6. Possibly. Delete /boot/config/dynamix.plg
     Good call, that fixed it. Thanks.
  7. I had reverted back to 6.1.9 because of issues with my Realtek ethernet card. I decided to install a different Ethernet card to get around the issue, but when I re-updated to 6.2 B20, the Array Operations screen no longer has the section for doing a parity check. I am wondering if it was due to the dynamix update for 6.1.9 that I had applied before re-upgrading. Any ideas? Screen shot: https://www.dropbox.com/s/wa2findx9v3om40/Screen%20Shot%202016-04-05%20at%208.08.51%20PM.png?dl=0
  8. System hung again; the console was black and I wasn't tailing the syslog at the time, but I'm assuming it is the Realtek driver again. Rolling back to 6.1.9 to see if the problem goes away...
  9. I assume you have already tested the Realtek with 6.1? It would be useful to know if replacing the Realtek under 6.2 clears up the networking issues.
     I must have used my Intel card in another machine. I guess if it happens again, I'll have to try reverting back to 6.1...
  10. Yeah, it is the one on the motherboard, and I have been using it with unRAID since 5.x without any issues. I'll try the Intel card tonight and see how it works out. Thanks for your help.
  11. The system boots fine and starts the array without issue, then is quiet for about 90 minutes; then suddenly at Mar 29 21:48:02 something goes wrong with the Realtek NIC, and a Call Trace is reported. There's no previous link down message, but there is almost nothing but link up messages for the rest of the syslog until the attempted shutdown. You'll notice that the link up messages are all at intervals of multiples of 6 seconds. They start at somewhat random 6-second intervals, but quickly settle into a series of 42 seconds, then 48 seconds, then they stay almost completely at 60, 66, and 72 second intervals until the end. It's too soon to conclude that the Realtek or its driver is defective, but I suspect that if you replaced it with an Intel NIC, you would not see these issues.
     I think I have an Intel NIC lying around. What would you suggest as the next course of action: revert back to 6.1 to test out the Realtek, or test out the Intel under 6.2? Thanks, Jeff
  12. I have been having an issue with the past 2 betas of losing connectivity to the server, with the console becoming mostly unresponsive. I did manage to get the syslog copied onto the flash drive before rebooting, but I couldn't get the diagnostics to run because the UI was not responding. syslog.zip
  13. I noticed that in my log too; it is almost like emhttp is starting NFS twice, but it doesn't seem to hurt anything, and NFS is working for me. dumpster-diagnostics-20160318-1929.zip
  14. Not sure if these messages mean anything or not. I noticed them in my syslog:
     Mar 13 08:40:03 Dumpster emhttp: mdcmd: write: No such device or address
     Mar 13 08:40:03 Dumpster kernel: mdcmd (43): spindown 0
     Mar 13 08:40:03 Dumpster kernel: mdcmd (44): spindown 1
     Mar 13 08:40:03 Dumpster emhttp: mdcmd: write: No such device or address
     Mar 13 08:40:03 Dumpster emhttp: mdcmd: write: No such device or address
     Mar 13 08:40:03 Dumpster emhttp: mdcmd: write: No such device or address
     Mar 13 08:40:03 Dumpster emhttp: mdcmd: write: No such device or address
     Mar 13 08:40:03 Dumpster kernel: mdcmd (45): spindown 2
     Mar 13 08:40:03 Dumpster kernel: mdcmd (46): spindown 3
     Mar 13 08:40:03 Dumpster kernel: mdcmd (47): spindown 4
     Mar 13 08:40:03 Dumpster kernel: mdcmd (48): spindown 5
     Mar 13 08:40:03 Dumpster emhttp: mdcmd: write: No such device or address
     Mar 13 08:40:03 Dumpster emhttp: mdcmd: write: No such device or address
     Mar 13 08:40:03 Dumpster kernel: mdcmd (49): spindown 6
  15. Finally got NFS to work... It was a bit of a long process. I basically had to convert from portmap over to rpcbind and upgrade/install a few packages. I installed the following packages:
     libtirpc-1.0.1-x86_64-2.txz
     nfs-utils-1.3.3-x86_64-1.txz
     rpcbind-0.2.3-x86_64-1.txz
     I then switched rc.rpc to start rpcbind instead of rpc.portmap and also commented out the 2 IPv6 lines in /etc/netconfig. Now I can mount over NFS... (A rough sketch of these steps is included after this list.)
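For reference, here is a minimal sketch of the portmap-to-rpcbind conversion described in item 15. It assumes unRAID's Slackware base; the installpkg tool, the /etc/rc.d/rc.rpc path, and the exact IPv6 entry names in /etc/netconfig are assumptions rather than details confirmed in the original post.

    # Install the updated RPC/NFS packages (versions as quoted in the post, not verified)
    installpkg libtirpc-1.0.1-x86_64-2.txz
    installpkg nfs-utils-1.3.3-x86_64-1.txz
    installpkg rpcbind-0.2.3-x86_64-1.txz

    # In /etc/rc.d/rc.rpc (path assumed), start rpcbind instead of rpc.portmap,
    # e.g. by replacing the rpc.portmap invocation with a line like:
    #   /sbin/rpcbind
    # Then comment out the two IPv6 entries in /etc/netconfig (assumed to be the
    # udp6 and tcp6 lines):
    sed -i 's/^udp6/#udp6/; s/^tcp6/#tcp6/' /etc/netconfig

After these changes, restarting the RPC and NFS services (or rebooting) should allow NFS mounts to work, as described above.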
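Similarly, for the header check described in item 2, curl's verbose mode is a command-line alternative to Chrome's developer tools (the URL below is only a placeholder):

    # -v prints the request headers sent (lines starting with '>') and the
    # response headers received (lines starting with '<'); the body is discarded.
    curl -v -o /dev/null https://example.com/

This shows roughly the same request/response header information as the Network tab.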