fysmd multiple issues (from: Unassigned Devices thread)



Sorry in advance if this was answered in this (LONG) thread, but I've searched and can't find the same issue:

 

Two Unraid servers running 6.8.2 [server] and 6.8.3 [client], both on physical boxes. Diagnostics attached for the client machine.

I run UD on both, but I want to mount my media shares over CIFS with UD on the client server.

 

Many work well and are stable, but my movies share is not.

Sometimes it mounts and works for a while, but once it fails it still shows as mounted in the GUI and the CLI; navigating into the share in the GUI takes a long time and shows it as empty. In the CLI:

root@VH1:~# ls /mnt/disks/SERVER_movies
/bin/ls: cannot access '/mnt/disks/SERVER_movies': Stale file handle

 

Syslog reports a successful mount despite it not actually working, and is full of:

root@VH1:~# ls /mnt/disks/SERVER_movies
/bin/ls: cannot access '/mnt/disks/SERVER_movies': Stale file handle
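
As far as I can tell, once the handle goes stale the share has to be unmounted and remounted before the path works again; roughly this from the CLI, then remount from the UD page (UD's own buttons should do the equivalent):

# lazy unmount of the stale mount point, then remount from the UD page
umount -l /mnt/disks/SERVER_movies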

 

I notice that the time shown in syslog must be in a different timezone; the machine reports the time correctly but the logs are -7h.

 

(I know my cache disk is full at the mo BTW, and a couple of HDDs are reporting SMART errors. I'm on an eval license at the mo while I try to get this puppy working how I want.)

 

vh1-diagnostics-20200430-1059.zip

Link to comment
2 hours ago, fysmd said:

I notice that the time shown in syslog must be in a different timezone; the machine reports the time correctly but the logs are -7h.

Where exactly are you seeing the "correct time" on your server? Something is wrong somewhere if your logs are not showing the correct time.

Link to comment
3 hours ago, fysmd said:

Sorry in advance if this was answered in this (LONG) thread, but I've searched and can't find the same issue:

 

Two Unraid servers running 6.8.2 [server] and 6.8.3 [client], both on physical boxes. Diagnostics attached for the client machine.

I run UD on both, but I want to mount my media shares over CIFS with UD on the client server.

 

Many work well and are stable, but my movies share is not.

Sometimes it mounts and works for a while, but once it fails it still shows as mounted in the GUI and the CLI; navigating into the share in the GUI takes a long time and shows it as empty. In the CLI:

root@VH1:~# ls /mnt/disks/SERVER_movies
/bin/ls: cannot access '/mnt/disks/SERVER_movies': Stale file handle

 

Syslog reports a successful mount despite it not actually working, and is full of:

root@VH1:~# ls /mnt/disks/SERVER_movies
/bin/ls: cannot access '/mnt/disks/SERVER_movies': Stale file handle

 

I notice that the time shown in syslog must be in a different timezone; the machine reports the time correctly but the logs are -7h.

 

(I know my cache disk is full at the mo BTW, and a couple of HDDs are reporting SMART errors. I'm on an eval license at the mo while I try to get this puppy working how I want.)

 

vh1-diagnostics-20200430-1059.zip 219.92 kB · 2 downloads

You really need to solve some of these problems first:

Apr 30 02:01:30 VH1 shfs: cache disk full
### [PREVIOUS LINE REPEATED 10 TIMES] ###
Apr 30 02:03:18 VH1 emhttpd: cmd: /usr/local/emhttp/plugins/dynamix/scripts/tail_log syslog
Apr 30 02:04:04 VH1 emhttpd: shcmd (15922): /usr/local/sbin/mover &> /dev/null &
Apr 30 02:04:23 VH1 kernel: CIFS VFS: Close unmatched open
Apr 30 02:06:36 VH1 shfs: cache disk full
### [PREVIOUS LINE REPEATED 766 TIMES] ###
Apr 30 02:07:34 VH1 move: error: move, 397: No such file or directory (2): lstat: /mnt/disk2/downloads-inprogress/sabincomplete/Miles.Davis.Birth.Of.The.Cool.2019.1080p.BluRay.x264-GETiT/__ADMIN__/SABnzbd_article_e03H9A
Apr 30 02:07:34 VH1 move: error: move, 397: No such file or directory (2): lstat: /mnt/disk2/downloads-inprogress/sabincomplete/Miles.Davis.Birth.Of.The.Cool.2019.1080p.BluRay.x264-GETiT/__ADMIN__/SABnzbd_article_xRsFx4

 

Link to comment
4 hours ago, fysmd said:

(I know my cache disk is full at the mo BTW, and a couple of HDDs are reporting SMART errors. I'm on an eval license at the mo while I try to get this puppy working how I want.)

1 hour ago, dlandon said:

You really need to solve some of these problems first:

In fact, you should forget about doing anything else until you fix these more serious problems. Disable dockers and VMs in Settings, unmount all Unassigned Devices, fix your time, fix your full cache, and deal with any important SMART warnings.
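
If it helps, you can sanity-check most of those from the command line; something like this (the disk device below is only an example, substitute your own):

date                    # should match your local time and the syslog timestamps
df -h /mnt/cache        # cache should no longer be at 100%
smartctl -H /dev/sdb    # overall health verdict; repeat for each disk with SMART warnings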

 

I have moved your posts to their own General Support thread so we can help you work on all these.

 

Link to comment

So a reboot fixed the time discrepancy (the web UI and the CLI 'date' command both reported the correct local time, but syslog did not).

and the cache disk is no longer full (prefer cache on the share doesn't seem to do what I expected!).

 

And so far the CIFS mount is stable (and my cache disk is no longer full).

 

Syslog is full of:

Apr 30 15:11:30 VH1 kernel: CIFS VFS: Close unmatched open
Apr 30 15:13:00 VH1 kernel: CIFS VFS: Close unmatched open
Apr 30 15:17:01 VH1 kernel: CIFS VFS: Close unmatched open
Apr 30 15:17:01 VH1 kernel: CIFS VFS: Close unmatched open
Apr 30 15:19:01 VH1 kernel: CIFS VFS: Close unmatched open
Apr 30 15:20:02 VH1 kernel: CIFS VFS: Close unmatched open
Apr 30 15:20:33 VH1 kernel: CIFS VFS: Close unmatched open

 

Are they relevant?

 

Link to comment
3 minutes ago, fysmd said:

(prefer cache on the share doesn't seem to do what I expected!)

That setting writes to cache if it has room, overflows to the array if it doesn't, then tries to move any array files back to cache when space is available. The setting means you prefer the share to stay on cache. What did you think it did?
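
If you want to see where a share's files actually are at any point, comparing the cache and array paths is a quick check; for example (the share name here is just an example):

du -sh /mnt/cache/Movies /mnt/disk*/Movies 2>/dev/null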

 

4 hours ago, fysmd said:

I'm on an eval license at the mo

Is this another system different from the one you posted about a year ago? I see you have been a forum member for a very long time.

Link to comment

LOL! I thought exactly what you say, and that's the same as in the help. In reality though it seems to fill the cache and then complain/fail :( Wondering if I need to combine it with a different minimum free value or something... or maybe the machine was in a funk at the time and just needed restarting.

 

Yes, I'm a very long-time user. The "server" is my (now) very stable box, but I'm testing out another one to host some containers and virtual machines. It's on an eval license and using old disks which, while working, were removed from my live array due to the SMART warnings.

 

Link to comment
Just now, fysmd said:

different min free value

Unraid has no way to know in advance how large a file will become when it chooses a disk to write to. If a disk has less than the minimum free space, it will choose another disk. If a disk has more than the minimum it can be chosen, and if it then turns out not to have enough room, the write will fail due to running out of space.

 

The cache Minimum Free setting is in Global Share Settings. If the cache has less than the minimum, an array disk will be chosen (overflow) for cache-yes or cache-prefer shares; cache-only shares will not overflow.

 

Each user share has its own Minimum Free setting and it works in a similar manner. If a disk has less than the minimum, Unraid will choose another disk (except that Split Level takes precedence).

 

The usual recommendation is to set Minimum Free larger than the largest file you expect to write to the share (or to the cache, in the case of the cache minimum).
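
As a rough illustration of the rule (this is not Unraid's actual allocation code, just the idea): a disk is only a candidate if its free space is above the configured minimum, which is why the minimum should be bigger than your largest file:

#!/bin/bash
# Illustration only: skip any disk whose free space is below the minimum.
MIN_FREE_KB=$((50 * 1024 * 1024))   # e.g. a 50GB minimum, in 1K blocks

for disk in /mnt/disk*; do
    free_kb=$(df -P "$disk" | awk 'NR==2 {print $4}')
    if [ "$free_kb" -gt "$MIN_FREE_KB" ]; then
        echo "$disk: candidate (${free_kb}K free)"
    else
        echo "$disk: skipped, below minimum free"
    fi
done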

Link to comment

So the machine ran overnight and this morning I have a different CIFS share in the broken state.

Could it be that they fail due to inactivity?

I'm going to script an ls of a large directory every five minutes to see if that prevents it.
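
Probably just a cron entry along these lines (the path is the movies share from above; the five-minute interval is arbitrary):

*/5 * * * * ls /mnt/disks/SERVER_movies > /dev/null 2>&1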

 

For info, I'm doing this because my cherished NUC has passed away. I was running Ubuntu 14 LTS on there and had zero CIFS mount issues; they were rock solid.

vh1-diagnostics-20200430-1059.zip

Link to comment

All put back to 1500 last night, as reported.

One CIFS share down again this morning :(

in "Main" on web UI, UD shows "music" as mounted but with zero size,used and free.

 

Linux 4.19.107-Unraid.
docker@VH1:~$ df -h
Filesystem          Size  Used Avail Use% Mounted on
  <snip>
//SERVER/TV          66T   45T   21T  69% /mnt/disks/SERVER_TV
//SERVER/music       66T   45T   21T  69% /mnt/disks/SERVER_music
//SERVER/movies      66T   45T   21T  69% /mnt/disks/SERVER_movies
docker@VH1:~$ ls /mnt/disks/SERVER_music
/bin/ls: cannot access '/mnt/disks/SERVER_music': Stale file handle
docker@VH1:~$

 

and still lots of "unmatched open" in syslog.

 

vh1-diagnostics-20200502-0915.zip

Link to comment
3 hours ago, fysmd said:

All put back to 1500 last night, as reported.

One CIFS share down again this morning :(

in "Main" on web UI, UD shows "music" as mounted but with zero size,used and free.

 


Linux 4.19.107-Unraid.
docker@VH1:~$ df -h
Filesystem          Size  Used Avail Use% Mounted on
  <snip>
//SERVER/TV          66T   45T   21T  69% /mnt/disks/SERVER_TV
//SERVER/music       66T   45T   21T  69% /mnt/disks/SERVER_music
//SERVER/movies      66T   45T   21T  69% /mnt/disks/SERVER_movies
docker@VH1:~$ ls /mnt/disks/SERVER_music
/bin/ls: cannot access '/mnt/disks/SERVER_music': Stale file handle
docker@VH1:~$

 

and still lots of "unmatched open" in syslog.

 

vh1-diagnostics-20200502-0915.zip 103.74 kB · 0 downloads

Spend a minute and get all your dockers and plugins up to date.

May  2 04:40:01 VH1 root: Fix Common Problems Version 2020.04.19
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application jackett has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application lidarr has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application netdata has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application nzbhydra2 has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application ombi has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application PlexMediaServer has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application radarr has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application sabnzbd has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application sonarr has an update available for it
May  2 04:40:02 VH1 root: Fix Common Problems: Warning: Docker Application unifi-controller has an update available for it

UD is currently at 2020.04.27; your system is at 2020.04.03a.

 

The first thing you should do before asking for help is to be sure all plugins and dockers are up to date.

 

Also verify that Jumbo Frames are disabled on both servers and the switch, and that the MTU is set to the default on all devices (usually 1500).
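
A quick way to confirm the MTU from the console on each server is something like this (the interface name is just an example):

ip link show eth0 | grep -o 'mtu [0-9]*'
# put it back to 1500 for a test; the permanent setting belongs in the network settings page
ip link set eth0 mtu 1500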

Link to comment

They auto-update, but I did it manually earlier.

For info, how would containers impact (host) system mounts?

The containers do map to these CIFS mounts (with the slave option).
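
For reference, stripped down to just the mapping, each container binds the share roughly like this (the container name and image here are only an example):

# illustration only: bind the UD mount into a throwaway container with slave propagation
docker run -d --name ud-slave-test \
  -v /mnt/disks/SERVER_movies:/media:rw,slave \
  alpine tail -f /dev/null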

 

Syslog:

May  2 12:51:41 VH1 kernel: CIFS VFS: Close unmatched open
May  2 12:53:11 VH1 kernel: CIFS VFS: Close unmatched open
May  2 12:58:14 VH1 kernel: CIFS VFS: Close unmatched open
May  2 12:59:09 VH1 kernel: CIFS VFS: Autodisabling the use of server inode numbers on \\SERVER\downloads. This server doesn't seem to support them properly. Hardlinks will not be recognized on this mount. Consider mounting with the "noserverino" option to silence this message.
May  2 12:59:10 VH1 kernel: CIFS VFS: Autodisabling the use of server inode numbers on \\SERVER\movies. This server doesn't seem to support them properly. Hardlinks will not be recognized on this mount. Consider mounting with the "noserverino" option to silence this message.
May  2 12:59:14 VH1 kernel: CIFS VFS: Close unmatched open
May  2 12:59:45 VH1 kernel: CIFS VFS: Close unmatched open
May  2 13:01:46 VH1 kernel: CIFS VFS: Close unmatched open
May  2 13:02:46 VH1 kernel: CIFS VFS: Close unmatched open
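
Side note: going by the autodisabling messages, I could try a manual test mount with noserverino to see if it quiets those lines; something like this (credentials and vers are guesses, not necessarily what UD passes):

# manual test mount outside UD (USER/PASS are placeholders)
mkdir -p /mnt/test_movies
mount -t cifs -o noserverino,username=USER,password=PASS,vers=3.0 //SERVER/movies /mnt/test_movies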

 

Link to comment

OK, so Sunday update:

Music share has gone offline again this morning.

vh1-diagnostics-20200503-1035.zip

 

I don't see anything obvious in syslog. What do you think?

 

Server machine:

eth0: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether bc:5f:f4:d2:db:d4  txqueuelen 1000  (Ethernet)
        RX packets 2477677922  bytes 3071357019049 (2.7 TiB)
        RX errors 5  dropped 533  overruns 0  frame 5
        TX packets 3155609402  bytes 4322561158105 (3.9 TiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 20  memory 0xf0700000-f0720000

eth1: flags=6211<UP,BROADCAST,RUNNING,SLAVE,MULTICAST>  mtu 1500
        ether bc:5f:f4:d2:db:d4  txqueuelen 1000  (Ethernet)
        RX packets 130740224  bytes 17696208835 (16.4 GiB)
        RX errors 41312  dropped 100132  overruns 0  frame 41312
        TX packets 333590388  bytes 442915979439 (412.4 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0

 

 

Client machine:

docker@VH1:~$ ifconfig eth0
eth0: flags=4419<UP,BROADCAST,RUNNING,PROMISC,MULTICAST>  mtu 1500
        ether d8:cb:8a:14:10:d9  txqueuelen 1000  (Ethernet)
        RX packets 315521459  bytes 408413770038 (380.3 GiB)
        RX errors 0  dropped 2992  overruns 0  frame 0
        TX packets 119869768  bytes 23267646022 (21.6 GiB)
        TX errors 0  dropped 0 overruns 0  carrier 0  collisions 0
        device interrupt 19

docker@VH1:~$ ethtool -S eth0
NIC statistics:
     rx_packets: 315619207
     rx_bcast_packets: 396272
     rx_mcast_packets: 588699
     rx_pause_packets: 0
     rx_ctrl_packets: 2995
     rx_fcs_errors: 0
     rx_length_errors: 0
     rx_bytes: 408481320459
     rx_runt_packets: 0
     rx_fragments: 0
     rx_64B_or_less_packets: 1757734
     rx_65B_to_127B_packets: 1000264
     rx_128B_to_255B_packets: 39357056
     rx_256B_to_511B_packets: 6591231
     rx_512B_to_1023B_packets: 4971266
     rx_1024B_to_1518B_packets: 261944652
     rx_1519B_to_mtu_packets: 0
     rx_oversize_packets: 0
     rx_rxf_ov_drop_packets: 0
     rx_rrd_ov_drop_packets: 0
     rx_align_errors: 0
     rx_bcast_bytes: 45617284
     rx_mcast_bytes: 184470451
     rx_address_errors: 0
     tx_packets: 119944390
     tx_bcast_packets: 1902
     tx_mcast_packets: 9153
     tx_pause_packets: 0
     tx_exc_defer_packets: 0
     tx_ctrl_packets: 0
     tx_defer_packets: 0
     tx_bytes: 23281052097
     tx_64B_or_less_packets: 67518
     tx_65B_to_127B_packets: 60869104
     tx_128B_to_255B_packets: 49266597
     tx_256B_to_511B_packets: 3280260
     tx_512B_to_1023B_packets: 154673
     tx_1024B_to_1518B_packets: 6306238
     tx_1519B_to_mtu_packets: 0
     tx_single_collision: 0
     tx_multiple_collisions: 0
     tx_late_collision: 0
     tx_abort_collision: 0
     tx_underrun: 0
     tx_trd_eop: 0
     tx_length_errors: 0
     tx_trunc_packets: 0
     tx_bcast_bytes: 161174
     tx_mcast_bytes: 1320031
     tx_update: 0
docker@VH1:~$

 

Drops look to be control packets on the client machine, but there are many more (numerically) on eth1 on the server.

I'm going to disable the eth1 port on the server machine... [guess]
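
Unless there's a cleaner way via the network settings, probably just this on the server (interface name from the output above; not persistent across a reboot):

# check the error counters first, then take the port down
ethtool -S eth1 | grep -iE 'err|drop'
ifconfig eth1 down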

 

Also, as I rebooted the server, I upgraded it to 6.8.3 (same as the client machine).

 

Edited by fysmd
Link to comment
