Unthar Posted June 8, 2020
I am getting `kernel: CIFS VFS: Close unmatched open` errors in the log on one of my servers when it mounts an SMB share from another Unraid server. Does anyone know what might be causing this, or whether it is actually a problem? I suspect something is wrong with the other server, and this one has now locked up a couple of times, but I am not sure whether the errors coming from the other server are causing the lockups or are just a symptom of a problem on the source server. Any insight appreciated.
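One quick way to see how often the error is actually firing is to count the kernel messages in the syslog. A minimal sketch (the message text matches the error quoted above; `/var/log/syslog` is Unraid's default log path, adjust for your setup):

```shell
# count_cifs_errors: count "Close unmatched open" kernel messages in a
# syslog-style file passed as the first argument.
count_cifs_errors() {
  grep -c 'CIFS VFS: Close unmatched open' "$1"
}

# Example: count_cifs_errors /var/log/syslog
```

Comparing counts taken a few minutes apart shows whether the errors arrive at a steady cadence or in bursts.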
trurl Posted June 8, 2020
6 minutes ago, Unthar said: "mapping an SMB share" — Are you using the Unassigned Devices plugin for this?
Unthar (Author) Posted June 8, 2020
Yes, I am using the Unassigned Devices plugin; I thought that was the only way to share drives between two Unraid servers. The mapping is there so Plex can access media stored on the second server.
trurl Posted June 8, 2020
You can go directly to the correct support thread for any of your plugins by going to the Plugins page in the Unraid webUI and selecting its Support Thread link. Similarly, you can go directly to the correct support thread for any of your dockers by clicking on its icon on the Dashboard or Docker page and selecting Support. Finally, there is also a (?) link to the support thread for any plugin or docker on its listing on the Apps page.
Unthar (Author) Posted June 8, 2020
I posted here because I am not 100% sure the plugin is the issue; it may be something on the other server, since I have other mappings to other locations from the same server, some of them to that same remote server, and they are not having issues. However, I will raise it in the plugin's support thread.
trurl Posted June 8, 2020
By mappings, I assume you mean dockers? Are you mapping them with the slave access mode?
Unthar (Author) Posted June 8, 2020
Yes, the dockers are using slave mode, but this error is not docker related: as long as the mapping exists between the two servers, I still get the errors even when the dockers are not running. They appear every 1-3 minutes.
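For context, a slave-mode volume mapping like the one discussed here is the `slave` bind-propagation flag on the container's volume. A sketch of what that looks like as a `docker run` command (the host path, container name, and image are hypothetical examples, not taken from this thread):

```shell
# Bind-mount a remote SMB share (mounted on the host by Unassigned
# Devices) into a container with slave propagation, so the container
# sees the mount appear/disappear as the host remounts it.
docker run -d --name plex \
  -v /mnt/remotes/TOWER2_media:/media:ro,slave \
  plexinc/pms-docker
```

Slave propagation matters for Unassigned Devices mounts because a mount created on the host after the container starts would otherwise be invisible inside the container.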
SAL-e Posted September 3, 2021 (edited)
Hi everyone, I ran into the same issue, though my setup is different. My Unraid server has two SMB shares, pve-iso and pve-backup, and both are mounted on my Proxmox host, pve. On a regular basis the pve-backup mount point becomes stale, causing backups to fail.

Searching the Internet suggests that various kernel bugs can produce this. The most common suggestion is to lower the SMB version to 3.0 or 2.0. In my case that didn't help, and I didn't expect it to, because only the pve-backup share was going stale. The only difference between pve-iso and pve-backup is the cache option: pve-iso is a mostly-read share where I don't care about performance, so its cache is set to 'no', while pve-backup's cache is set to 'yes'. Based on that, I managed to reproduce the problem reliably:
1. Make sure the share is not stale.
2. Run the backup scripts. The share stays healthy.
3. Manually run Unraid's Mover. The share becomes stale on my PVE server.

Next, I changed pve-backup's cache to 'no', and I can no longer reproduce the issue. I will keep an eye on it for some time to see if this really is the cause. It looks like moving files from the cache pool to the disk array breaks the share. Stay tuned...

UPDATE: It has been a few weeks since I disabled the cache on the pve-backup share, and it has stayed connected since.
Edited September 23, 2021 by SAL-e
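While tracking down staleness like this, a pre-backup guard can catch the dead mount before the job runs. A minimal sketch; to userspace, a stale CIFS mount is a path that can no longer be stat'ed (note a nonexistent path also matches, so only point this at a known mount point):

```shell
# is_stale: return success if the given path cannot be stat'ed,
# which is how a stale CIFS/NFS mount appears from userspace.
is_stale() {
  ! stat -t "$1" >/dev/null 2>&1
}

# Hypothetical usage before a backup job, e.g. on the Proxmox host:
#   if is_stale /mnt/pve/pve-backup; then
#     umount -l /mnt/pve/pve-backup && mount /mnt/pve/pve-backup
#   fi
```

This doesn't fix the underlying cache/Mover interaction, but it turns a silently failing backup into an automatic remount or at least an early error.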