
[SOLVED] Unraid Cache drive issue



Hey guys,

 

The issue I am having has only popped up in the last two days.

 

The cache drive is reading and writing on its own without me doing anything. There are no Dockers running in the background except for Plex and Pi-hole, and I have also tried disabling those without any luck.

 

Something is constantly writing between the cache drive and Disk 4 at 60 MB/s. The cache drive then fills up, empties itself after some time, and the process starts again (see screenshot).

 

This seems to be happening all day and night when the server is on.

 

LOG:

May 19 07:13:04 SkynetHD emhttpd: Starting services...
May 19 07:13:04 SkynetHD emhttpd: shcmd (124): /etc/rc.d/rc.samba restart
May 19 07:13:06 SkynetHD root: Starting Samba: /usr/sbin/smbd -D
May 19 07:13:06 SkynetHD root: /usr/sbin/nmbd -D
May 19 07:13:06 SkynetHD root: /usr/sbin/wsdd
May 19 07:13:06 SkynetHD root: /usr/sbin/winbindd -D
May 19 07:13:06 SkynetHD emhttpd: shcmd (138): /usr/local/sbin/mount_image '/mnt/user/system/docker/docker.img' /var/lib/docker 20
May 19 07:13:07 SkynetHD kernel: BTRFS: device fsid eba9223d-0fa5-405b-b5e5-b40010c09515 devid 1 transid 559782 /dev/loop2
May 19 07:13:07 SkynetHD kernel: BTRFS info (device loop2): disk space caching is enabled
May 19 07:13:07 SkynetHD kernel: BTRFS info (device loop2): has skinny extents
May 19 07:13:07 SkynetHD root: Resize '/var/lib/docker' of 'max'
May 19 07:13:07 SkynetHD kernel: BTRFS info (device loop2): new size for /dev/loop2 is 21474836480
May 19 07:13:07 SkynetHD emhttpd: shcmd (140): /etc/rc.d/rc.docker start
May 19 07:13:07 SkynetHD root: starting dockerd ...
May 19 07:13:11 SkynetHD avahi-daemon[9295]: Joining mDNS multicast group on interface docker0.IPv4 with address 172.17.0.1.
May 19 07:13:11 SkynetHD avahi-daemon[9295]: New relevant interface docker0.IPv4 for mDNS.
May 19 07:13:11 SkynetHD avahi-daemon[9295]: Registering new address record for 172.17.0.1 on docker0.IPv4.
May 19 07:13:11 SkynetHD kernel: IPv6: ADDRCONF(NETDEV_UP): docker0: link is not ready
May 19 07:13:15 SkynetHD rc.docker: bd773a1101284e7ab4a014ff6f0b36640ef4fc2deaeca093a973b27273050d92
May 19 07:13:16 SkynetHD emhttpd: nothing to sync
May 19 07:13:16 SkynetHD unassigned.devices: Mounting 'Auto Mount' Remote Shares...
May 19 07:13:19 SkynetHD rc.docker: binhex-plex: started succesfully!
May 19 07:13:23 SkynetHD kernel: igb 0000:01:00.0 eth0: mixed HW and IP checksum settings.
May 19 07:13:23 SkynetHD kernel: eth0: renamed from vethf8fced5
May 19 07:13:23 SkynetHD kernel: device br0 entered promiscuous mode
May 19 07:13:25 SkynetHD rc.docker: pihole: started succesfully!
May 19 07:17:35 SkynetHD ntpd[1975]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
May 19 07:23:00 SkynetHD root: Fix Common Problems Version 2020.05.05
May 19 08:45:50 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
May 19 09:13:44 SkynetHD shfs: cache disk full
May 19 09:13:45 SkynetHD shfs: cache disk full
May 19 09:45:51 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
May 19 10:00:01 SkynetHD move: error: move, 397: No such file or directory (2): lstat: /mnt/disk3/appdata/pihole/pihole
May 19 10:45:33 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
May 19 11:45:36 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
May 19 12:45:26 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
May 19 13:45:33 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
May 19 14:45:38 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null
May 19 15:45:56 SkynetHD crond[1996]: exit status 1 from user root /usr/local/sbin/mover &> /dev/null

[Attached image: unraid screenshot.png]

Link to comment

You must be overlooking something or have left something out of your description. Something is writing new files to the server, or possibly something (not mover) is moving files from one user share to another, or from one disk to another. Mover can't be the only thing going on here, because once it has successfully moved files they are in their final locations according to the user share settings. And mover is the only built-in that moves files, so some addon (docker or plugin) would have to be involved somehow, or some user is writing or moving files on the server.
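
If you want to catch whatever it is in the act, one rough approach is to watch the cache for file activity while the writes are happening. A sketch, assuming inotify-tools is available (it is not part of stock Unraid; the NerdPack plugin can install it):

# print every file created, modified, or moved onto the cache, as it happens
inotifywait -m -r -e create,modify,moved_to --format '%w%f %e' /mnt/cache

The paths that show up should tell you which share (and therefore which app or user) is generating the writes.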

Link to comment

That's exactly the issue I'm having; I don't know what's causing it. I disabled Plex because I thought it might be causing the issue, but it still happens. The only other Docker running is Pi-hole.

 

There are no other users connected, as I only have the one account.

Link to comment

I've done that now and will wait and see. So far, no activity.

 

I've just noticed from the screenshot I posted that it's happening once an hour, on the minute.

 

I've had a look at the mover settings and it's set to run every hour, but that was never an issue before?

Link to comment
52 minutes ago, kenzo said:

I've done that now and will wait and see. So far, no activity.

 

I've just noticed from the screenshot I posted that it's happening once an hour, on the minute.

 

I've had a look at the mover settings and it's set to run every hour, but that was never an issue before?

It might be worth enabling mover logging? If mover is getting involved, the filenames are likely to give you a clue as to what is causing this.
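
With mover logging enabled, mover writes an entry to the syslog for each file it handles (the "move: error" line in your log above shows the prefix it uses), so you could watch the next run live with something like:

# follow the syslog, showing only mover entries
tail -f /var/log/syslog | grep 'move:'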

 

Most people only run mover once a day, overnight in a quiet period (which is the default setting). I assume you had a reason to change the frequency?
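
For reference, the schedule setting just controls the cron entry that invokes mover. Roughly what the two schedules look like (the exact line is generated by the Scheduler page; the :45 runs in your syslog match the hourly form):

# hourly, at 45 minutes past the hour
45 * * * * /usr/local/sbin/mover &> /dev/null
# the once-a-day default, overnight
40 3 * * * /usr/local/sbin/mover &> /dev/null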

Link to comment
1 hour ago, kenzo said:

it's happening once an hour, on the minute

That is how you have mover scheduled. Mover moves cache-yes shares from cache to array, and moves cache-prefer shares from array to cache. But unless something else is putting files in a cache-yes or cache-prefer share, once the files have been successfully moved there is nothing for mover to do the next time it runs.
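
A quick way to check how each share is currently set is to read the share config files off the flash drive. A sketch, assuming the stock /boot/config/shares layout:

# show the "Use cache" setting for every user share
# "yes"    = new writes land on cache, mover moves cache -> array
# "prefer" = files belong on cache, mover moves array -> cache
grep -H 'shareUseCache' /boot/config/shares/*.cfg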

Link to comment

OK, the solution to my problem was changing the share settings on the "appdata", "domains", and "system" folders from "Prefer" to "Yes" for Use cache drive.

 

Changing this simple setting solved my problem.

 

Thanks to everyone who helped out.

 

-Ken

Link to comment
  • JorgeB changed the title to [SOLVED] Unraid Cache drive issue
3 hours ago, kenzo said:

OK, the solution to my problem was changing the share settings on the "appdata", "domains", and "system" folders from "Prefer" to "Yes" for Use cache drive.

 

Changing this simple setting solved my problem.

 

Thanks to everyone who helped out.

 

-Ken

That is not the recommended solution, and we still don't know what was going on. Prefer is the correct setting for those shares, because you want them to stay on cache for better Docker/VM performance and so they won't keep array disks spinning. Prefer only moves from array to cache, and after it has finished moving everything it can, it won't do anything else. The next time mover runs, it won't move those again since they are already where they belong.

 

Setting these to cache-yes doesn't actually fix anything. It just puts them on the array, where your Docker/VM performance will be limited by the slower parity array, and they will keep array disks spinning. It is true that after mover has finished moving it won't move those again, but that is no different from the behavior with Prefer; it is just a different (and not ideal) final destination for the shares.

 

Possibly you were using those user shares for a purpose other than intended, such as using appdata for downloads or something. It would be better to figure out the real problem (and the real solution), so you understand how to avoid causing these problems in the future.
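
One concrete check is to see which share is actually eating the cache between mover runs, for example:

# per-share usage on the cache drive; whichever share keeps growing
# between mover runs is the one being written to
du -sh /mnt/cache/*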

Link to comment

So what can we do to work out what it was? Docker and VMs were disabled and it was still doing it. I honestly cannot think of anything else that would make it behave that way, nor could I work out what was being transferred between those drives... over and over again.

Link to comment
4 hours ago, kenzo said:

So what can we do to work out what it was? Docker and VMs were disabled and it was still doing it. I honestly cannot think of anything else that would make it behave that way, nor could I work out what was being transferred between those drives... over and over again.

Did you ever try to enable mover logging to see what files were being moved?
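
Even after the fact, you can see what keeps landing on the cache between runs, for example:

# files on the cache modified within the last hour,
# i.e. since the previous hourly mover run
find /mnt/cache -type f -mmin -60 -ls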

Link to comment
