Report Comments posted by limetech

  1. 2 minutes ago, macor64 said:

    my configuration is much simpler than ajeffco's in that I only have 2 nfs shares which are named, ironically: 

    Yes, diagnostics would be nice, as well as seeing your /etc/exports file, which we omit from diagnostics.zip for some reason.  (That file only defines NFS exports.)

     

    Also try the above test of setting 'fuse_remember' to 0.

  2. What is the nature of the transfer taking place when the crash happens?

    That is, reading from server or writing to server?

    Large files or small files?

    What program is running on the client?

     

    Can you post your /etc/exports file?

     

    I'm having a hard time reproducing this bug.
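
    For reference, a typical /etc/exports entry uses standard exports(5) syntax; the share names and client subnet below are just hypothetical examples, and FUSE-backed exports such as /mnt/user generally need an explicit fsid= because there is no stable device ID behind them:

    # Hypothetical /etc/exports entries (standard exports(5) syntax).
    "/mnt/user/media"    192.168.1.0/24(rw,async,no_subtree_check,fsid=100)
    "/mnt/user/backups"  192.168.1.0/24(ro,async,no_subtree_check,fsid=101)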

  3. 5 hours ago, edgedog said:

    Is this issue on your radar and being worked on? Is there anything we can provide to help you do so? Thanks!

    Yes, and you should already know that we need diagnostics.zip, not just a syslog snippet, which barely helps to troubleshoot anything.

     

    Also, it appears you are running Unraid in an ESXi virtual machine.  We cannot reproduce this exact issue because we cannot duplicate your exact config.  That said, it's possible you are running out of memory.

    This is because NFS uses an archaic concept called a "file handle", which is a numeric value that maps to a file instead of a path.  In a lot of file systems this maps to the inode number, but in 'shfs' there are no fixed inodes that correspond to files; instead, inodes are generated and kept in memory by FUSE.  The "remember=330" mount option tells FUSE to keep these inodes in memory for 5 1/2 minutes.  That value was chosen because the typical modern NFS client caches file handles for 5 minutes: if the client asks for I/O on a handle within 5 minutes and the handle is no longer valid, you get "stale file handle" messages, while after 5 minutes the client typically uses the path to re-read the file handle.  However, you can open a lot of files in 5 minutes, and this is made worse if you have something like the 'cache_dirs' plugin running against shfs mount points.  Maybe try increasing the memory allotted to the VM and/or reducing that 'remember' value (a small illustration of the option follows this comment).

     

    On the other hand, it could be an entirely different issue; we don't have enough info to determine this.
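
    To make the 'remember' discussion a bit more concrete, here is a minimal sketch that uses the third-party fusepy package.  It is not shfs, just a toy read-only filesystem; the mountpoint, file name, and everything except the 330-second value are hypothetical.

    #!/usr/bin/env python3
    # Minimal sketch, assuming the third-party 'fusepy' package (pip install fusepy).
    # This is NOT shfs -- just a toy read-only filesystem that shows where the
    # high-level FUSE 'remember=T' mount option discussed above gets passed in.
    import errno
    import stat
    import sys

    from fuse import FUSE, FuseOSError, Operations

    HELLO = b"hello from a FUSE sketch\n"

    class OneFileFS(Operations):
        """Read-only filesystem exposing a single file, /hello.txt."""

        def getattr(self, path, fh=None):
            if path == '/':
                return dict(st_mode=(stat.S_IFDIR | 0o755), st_nlink=2)
            if path == '/hello.txt':
                return dict(st_mode=(stat.S_IFREG | 0o444),
                            st_nlink=1, st_size=len(HELLO))
            raise FuseOSError(errno.ENOENT)

        def readdir(self, path, fh):
            return ['.', '..', 'hello.txt']

        def read(self, path, size, offset, fh):
            return HELLO[offset:offset + size]

    if __name__ == '__main__':
        # remember=330 mirrors the 5 1/2 minute value shfs uses: FUSE keeps the
        # generated inodes in memory that long so the NFS file handles handed
        # out to clients stay resolvable.  Lowering it frees memory sooner but
        # raises the odds of "stale file handle" errors from clients that
        # cache handles.
        FUSE(OneFileFS(), sys.argv[1], foreground=True, ro=True, remember=330)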

  4. 25 minutes ago, jonp said:

    Luckily we have a Threadripper and a Threadripper 2 as well, and we too see some issues with stuttering (but not the boot-time issues documented here).  I know we're looking into it.

     

    This provides a nice insight into our release process.  In the past we would hold back stable releases until everything that seemed to work in a prior stable release, but for some reason quit working correctly in current development, was fixed.

     

    Clearly, if this were something we did, meaning a bug in software we write, then we would probably hold back the release.  But in this case it's not so clear where the issue lies.  Therefore, we will probably release 6.6.0 stable even if this issue persists.  This is because 6.6.0 includes lots of updates and security fixes which need to be published, and this particular issue, though extremely annoying, does not affect a huge percentage of the user base.

     

  5. 1 minute ago, bonienl said:

    This behavior has existed since version 6.0 was introduced.

    By default bonding is enabled so that people can connect to any available port on their system, which avoids complaints about the connection not working because they didn't use eth0 (something that happened frequently in the past).

     

     

    And with motherboards that have multiple RJ45 connectors, it can be unclear which one is "eth0".
