
Cache is full?


zmanfarlee


Hey all.  My parity check is running (if that matters). 

 

Having an issue where my downloads in SABnzbd are pausing because of a cache "Disk Full" error.  My cache disk is only half full.  I am really new here and have been tackling things, but this is a tricky one for me.  Any ideas?  Hopefully the below helps:

 

https://imgur.com/a/xN9IiJX

https://imgur.com/D2eV6WZ

https://imgur.com/Hx6nHe6

https://imgur.com/abWAH1G

2 hours ago, Squid said:

Diagnostics would be better.  Along with this from SabNZBD https://forums.unraid.net/topic/57181-real-docker-faq/#comment-564345

 

Thank you so much in advance

What do you mean by Diagnostics? 

Looks like it is affecting all Dockers now, as only SABnzbd will load its UI.

 

Is the below what you were looking for?

 

This was a warning from Fix Common Problems:

Cache Disk free space is less than the cache floor setting. All writes to your cache-enabled shares are being redirected to your array. If this is a transient situation, you can ignore this; otherwise adjust your cache floor settings, adjust the frequency of the mover running, or purchase a larger cache drive.
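
For reference, a minimal sketch in Python of the check that warning describes (the mount point and the floor value here are just example assumptions, not taken from any particular config):

# Minimal sketch (not Unraid's actual check): compare free space on the
# cache mount against a configured floor, as the warning describes.
# CACHE_MOUNT and the 70GB floor are example assumptions.
import shutil

CACHE_MOUNT = "/mnt/cache"
CACHE_FLOOR_BYTES = 70 * 1000**3  # e.g. a "70GB" cache floor setting

usage = shutil.disk_usage(CACHE_MOUNT)
if usage.free < CACHE_FLOOR_BYTES:
    # Below the floor, Unraid redirects writes on cache-enabled shares
    # to the array, and cache-only shares simply run out of room.
    print(f"Cache below floor: {usage.free / 1e9:.1f} GB free < "
          f"{CACHE_FLOOR_BYTES / 1e9:.0f} GB floor")
else:
    print("Cache has headroom above the floor")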

 

If I manually run the mover, nothing happens.

 

This is from the SABnzbd log:

2018-10-07 17:18:02,743::INFO::[assembler:108] Traceback:
Traceback (most recent call last):
File "/usr/share/sabnzbdplus/sabnzbd/assembler.py", line 98, in run
filepath = self.assemble(nzf, filepath)
File "/usr/share/sabnzbdplus/sabnzbd/assembler.py", line 167, in assemble
fout = open(path, 'ab')
IOError: [Errno 28] No space left on device: u"/config/Downloads/incomplete/Ronja the Robber's Daughter S01 1080p Amazon WEB-DL DD+ 5.1 H.264-TrollHD/Ronja the Robber's Daughter S01 1080p Amazon WEB-DL DD+ 5.1 H.264-TrollHD.part85.rar"
2018-10-07 17:18:02,744::INFO::[downloader:279] Pausing
2018-10-07 17:18:02,744::INFO::[downloader:783] Thread [email protected]: forcing disconnect
2018-10-07 17:18:02,845::INFO::[downloader:783] Thread [email protected]: forcing disconnect
2018-10-07 17:18:02,947::INFO::[downloader:783] Thread [email protected]: forcing disconnect


From your global share settings:

shareCacheFloor="70GB"

From your appdata share setting:

shareUseCache="only"

Your cache drive is 50% full

/dev/sdf1       120G   59G   61G  50% /mnt/cache

And your cache drive is 128GB before formatting.

User Capacity:    128,035,676,160 bytes [128 GB]

 

Net result is that your cache drive is full according to your settings.  Any write to /mnt/user/appdata (which is cache-only) will fail because of the global share setting.  Lower the cache floor setting to something more reasonable (20G?) for your use case.
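
To make the arithmetic explicit, a rough worked example using the numbers quoted above (df reports GiB while the floor is set in decimal GB, so treat it as approximate):

# Rough numbers from the outputs above; "61G" from df is GiB, the floor
# is decimal GB, so this comparison is approximate.
cache_size_bytes = 128_035_676_160   # smartctl: 128 GB drive
free_bytes = 61 * 1024**3            # df: 61G available on /mnt/cache
floor_bytes = 70 * 1000**3           # shareCacheFloor="70GB"

print(free_bytes < floor_bytes)      # True -> the cache counts as "full"
# ~65.5 GB free is under the 70 GB floor even though the drive is only
# about half used, so writes to the cache-only appdata share fail (ENOSPC).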

14 minutes ago, Squid said:

Net result is that your cache drive is full according to your settings.  Any write to /mnt/user/appdata (which is cache-only) will fail because of the global share setting.  Lower the cache floor setting to something more reasonable (20G?) for your use case.

And that is why I just bought you a beer :).  Thanks man.  So, quick Q: it looks like this may have happened because I had two 25GB files downloading at the same time in SABnzbd.  It was an accidental duplicate.  This being the case, does that mean this normally won't happen because files will move over, but this couldn't because it was two incomplete files?  Appreciate your help both now and a month and a half ago.  I will try to donate in the future as well.

3 minutes ago, zmanfarlee said:

This being the case, does that mean this normally won't happen because files will move over, but this couldn't because it was two incomplete files?

Set the appdata share to Use Cache: Prefer.  That way, if this situation happens again, any writes will go to the array instead of failing.  Once space gets freed up by the mover, it will also move the files back to the cache drive.

 

And, thank you :)
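
For illustration, a simplified model in Python of why Prefer avoids the failure that Only hits once the cache dips below the floor (this is an assumed sketch, not Unraid's actual allocation code):

# Simplified model (an assumption, not Unraid's real allocation code) of
# how "only" vs "prefer" behave once the cache is below the floor.
def write_target(use_cache: str, cache_below_floor: bool) -> str:
    if use_cache == "only":
        # Cache-only shares never overflow to the array, so a "full"
        # cache means the write fails -- the ENOSPC SABnzbd logged.
        return "fails: no space left on device" if cache_below_floor else "cache"
    if use_cache == "prefer":
        # Prefer writes to the cache while there is room, overflows to
        # the array otherwise, and mover migrates files back later.
        return "array (mover moves it back to cache later)" if cache_below_floor else "cache"
    return "array"

print(write_target("only", cache_below_floor=True))    # the failure above
print(write_target("prefer", cache_below_floor=True))  # the suggested fix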

 

7 minutes ago, Squid said:

Set the appdata share to Use Cache: Prefer.  That way, if this situation happens again, any writes will go to the array instead of failing.

 

Awesome, thanks!!  I am way happier to buy you helpers beers and get educated than to have someone try to make me rely on them.  Thanks again!


Archived

This topic is now archived and is closed to further replies.
