
kernel: CIFS VFS: Close unmatched open


Recommended Posts

I am getting "kernel: CIFS VFS: Close unmatched open" errors in the log on one of my servers when that server maps an SMB share from another Unraid server. Does anyone know what might be causing this, or whether it is a problem?

 

I am thinking there might be something wrong with the other server; it has now locked up a couple of times. I am not sure whether the errors above are causing the lock-ups or are just a symptom of a problem I have on the source server, so any insight is appreciated.
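For reference, a rough way to check what options the mount actually negotiated and how often the error is being logged (the mount point below is just an example, not my real share name):

# Show the options the kernel negotiated for each CIFS mount
# (vers=, cache=, etc.).
grep cifs /proc/mounts

# Count how many times the error has been logged since boot.
dmesg | grep -c 'Close unmatched open'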

Link to comment

You can go directly to the correct support thread for any of your plugins by going to the Plugins page in the Unraid webUI and selecting its Support Thread link.

 

Similarly, you can go directly to the correct support thread for any of your dockers by clicking on its icon on the Dashboard or Dockers page and selecting Support.

 

Finally, there is also a (?) link to the support thread for any plugin or docker on its listing on the Apps page.

Link to comment

I posted here as I am not 100% sure the plugin is the issue; it seems more likely to be something on the other server, since I have other mappings to other locations from the same source server, some of them on that same server, and they are not having issues.

 

However, I will raise it in the plug-in forum.

Link to comment

Yes, the dockers are using slave mode, but this error is not docker related: as long as the mapping exists between the two servers, I still get the errors even when the dockers are not running. The errors appear every 1-3 minutes.
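For context, slave mode here means the container path is bind-mounted with slave propagation, roughly like the sketch below (image name and paths are made up for illustration):

# Hypothetical slave-propagated bind mount: if the SMB share is
# remounted on the host, the change propagates into the container
# instead of leaving it holding a dead mount.
docker run -d \
  -v /mnt/remotes/TOWER_share:/data:rw,slave \
  example/image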

Link to comment
  • 1 year later...

Hi Everyone,

 

I ran into the same issue, though my setup is different. My Unraid server has two SMB shares: pve-iso and pve-backup. Both shares are mounted on my Proxmox host, pve.

 

On a regular basis, my pve-backup mount point becomes stale, resulting in failing backups. Searching the Internet suggests that various kernel bugs could cause this; the most common suggestion is to lower the SMB version to 3.0 or 2.0. In my case this didn't help, and I didn't expect it to, because only the pve-backup share was going stale. The only difference between the pve-iso and pve-backup shares is the 'cache' option: because pve-iso is a mostly-read share and I don't care about its performance, its cache is set to 'no', while pve-backup's cache is set to 'yes'. Based on that, I managed to reliably reproduce the problem:

1. Make sure the share is not stale.

2. Run the backup scripts. The share stays healthy.

3. Manually run Unraid's Mover. The share becomes stale on my PVE server (a quick check for this is sketched below).
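A quick way to confirm the staleness (assuming Proxmox mounts the storage under /mnt/pve/; the path is just an example):

# A stale CIFS handle makes stat fail with "Stale file handle"
# instead of returning metadata.
stat /mnt/pve/pve-backup >/dev/null 2>&1 || echo "pve-backup mount is stale"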

 

Next, I changed pve-backup's cache to 'no', and now I can't reproduce the issue. I will keep an eye on it for some time and see whether this really was the problem.
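In CIFS mount-option terms, that change corresponds to something like the following sketch (server name, credentials file, and SMB version are assumptions; cache= is a standard cifs mount option, and cache=strict is the default):

# Before: client-side caching enabled, the configuration that
# reliably went stale after Mover ran.
mount -t cifs //unraid/pve-backup /mnt/pve/pve-backup \
  -o credentials=/root/.cifscreds,vers=3.0,cache=strict

# After: cache=none disables the CIFS client-side page cache,
# matching the 'no' cache setting that has kept the share healthy.
mount -t cifs //unraid/pve-backup /mnt/pve/pve-backup \
  -o credentials=/root/.cifscreds,vers=3.0,cache=none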

 

It looks like moving a file from the cache pool to the disk array results in a broken share.

 

Stay tuned...

 

UPDATE: It has been a few weeks since I disabled the cache on the 'pve-backup' share, and it has stayed connected since.

Edited by SAL-e
Link to comment
