No space left on device, can't login



I'm getting the same thing. All I did was set the appdata share to use the cache on a 256GB drive and then install a few containers. Should I just upgrade the cache drive?

 

Warning: session_write_close(): write failed: No space left on device (28) in /usr/local/emhttp/login.php on line 96

Warning: session_write_close(): Failed to write session data (files). Please verify that the current setting of session.save_path is correct (/var/lib/php) in /usr/local/emhttp/login.php on line 96

Warning: Cannot modify header information - headers already sent by (output started at /usr/local/emhttp/login.php:96) in /usr/local/emhttp/login.php on line 98

Starting diagnostics collection...
Warning: file_put_contents(): Only -1 of 5 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 109

Warning: file_put_contents(): Only -1 of 15 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 12 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 16 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 13 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 1018 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 11 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 15 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 14 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 11 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 114

Warning: file_put_contents(): Only -1 of 83 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 130

Warning: file_put_contents(): Only -1 of 2 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 149

Warning: file_put_contents(): Only -1 of 34 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 151

Warning: file_put_contents(): Only -1 of 2 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 149

Warning: file_put_contents(): Only -1 of 34 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 151
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device
echo: write error: No space left on device

Warning: file_put_contents(): Only -1 of 33 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 25 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 53 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 25 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 41 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 25 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 29 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 25 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 25 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 190

Warning: file_put_contents(): Only -1 of 283 bytes written, possibly out of free disk space in /usr/local/emhttp/plugins/dynamix/scripts/diagnostics on line 259
done.
ZIP file '/boot/logs/tower-diagnostics-20220117-2331.zip' created.

 


I restarted the server and waited for the error to happen again. When it came back up, my Plex container was stopped. I turned it on, started watching some media, and now the error is back. I think it has to do with Plex, but I'm not sure. Anyway, here's what I get from

df -h
Filesystem      Size  Used Avail Use% Mounted on
rootfs           12G   12G     0 100% /
devtmpfs         12G     0   12G   0% /dev
tmpfs            12G     0   12G   0% /dev/shm
cgroup_root     8.0M     0  8.0M   0% /sys/fs/cgroup
tmpfs           128M  1.5M  127M   2% /var/log
/dev/sde1        30G  255M   30G   1% /boot
overlay          12G   12G     0 100% /lib/modules
overlay          12G   12G     0 100% /lib/firmware
tmpfs           1.0M     0  1.0M   0% /mnt/disks
tmpfs           1.0M     0  1.0M   0% /mnt/remotes
/dev/md1        1.9T  513G  1.4T  28% /mnt/disk1
/dev/md2        1.9T   91G  1.8T   5% /mnt/disk2
/dev/md3         13T  2.0T   11T  15% /mnt/disk3
/dev/md5         13T  2.7T   11T  21% /mnt/disk5
/dev/md6        3.7T  1.8T  1.9T  50% /mnt/disk6
/dev/md7        3.7T  1.7T  2.0T  46% /mnt/disk7
/dev/sdl1       239G   12G  227G   5% /mnt/cache_appdata
/dev/sdm1       932G  142G  791G  16% /mnt/downloads
shfs             37T  8.6T   28T  24% /mnt/user0
shfs             37T  8.6T   28T  24% /mnt/user
/dev/loop2      256G  9.9G  245G   4% /var/lib/docker
/dev/loop3      1.0G  3.8M  905M   1% /etc/libvirt

How can I figure out what is filling up my RAM?

51 minutes ago, itimpi said:

You seem to be using an Unassigned Devices drive for appdata? Are you sure it is mounted before Docker starts? Otherwise you will be writing to RAM.

 

Is there any reason you are not using an Unraid pool for this purpose, or is that UD device meant to be a pool device?
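A quick way to verify is to ask whether the path is a real mount point rather than a plain folder sitting on rootfs (standard util-linux command; the path below is just an example):

mountpoint /mnt/cache_appdata

It prints "is a mountpoint" when a device is actually mounted there; if it is not, anything written to that folder ends up in RAM.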

 

I thought my appdata was on my SSD cache drive, which is in the unRAID cache pool. Where can I go to check and make sure this is set correctly?

1 minute ago, tommykmusic said:

I thought my appdata was on my SSD cache drive, which is in the unRAID cache pool. Where can I go to check and make sure this is set correctly?

 

 

I think I misread the post and it is a pool device - sorry about that :( 

 

It might be worth checking that there are no files directly under / that should not be there. You cannot tell from the 'df' output whether this is the case, as it only shows mount points. You might also want to check whether any folders that are not mount points and are located in RAM appear unexpectedly large (e.g. du -sh /tmp).
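For example, a generic one-liner (nothing Unraid-specific) that lists the largest top-level folders while staying on the root filesystem; -x keeps du from descending into the real mounts under /mnt:

du -shx /* 2>/dev/null | sort -h | tail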

 

12 hours ago, tommykmusic said:

Anyway, here's what I get from

df -h

That indicates that you have filled rootfs, which is the RAM space reserved for the OS files. All sorts of problems can be expected when the OS doesn't have any space left to work with its own files.

 

A common reason for filling rootfs is specifying a path that isn't actual storage, often in a container host mapping. Most of the time these should specify a subfolder of /mnt: /mnt/user is the user shares, /mnt/disk1 is disk1, etc.

 

Sometimes we will see this when a user specifies /mnt/cache when they don't really have a cache disk.

 

In your case, it looks like you have a pool named cache_appdata. Possibly you renamed your cache pool. I bet you have some container still specifying /mnt/cache for its appdata instead of /mnt/cache_appdata.
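One way to check is to search the Docker templates on the flash drive for stale references (this is the usual Unraid template location; adjust if yours differs):

grep -l '/mnt/cache/' /boot/config/plugins/dockerMan/templates-user/*.xml

Any file it lists is a container template still pointing at /mnt/cache.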

 

12 hours ago, tommykmusic said:

I think it has to do with Plex but I'm not sure.

How do you have plex transcoding configured? Do you have a host mapping specifically for transcode? Does your plex use DVR?

1 hour ago, Squid said:

docker image filling up (you've got it set to be 256Gig now)

Didn't look there, but renaming the cache pool without updating all references to it may explain filling rootfs.

 

@tommykmusic

 

You also need to fix that ridiculous docker.img size. 20G is often more than enough, maybe a little more depending on how many and which dockers. 256G suggests you have indeed been filling it, and making it larger will only make it take longer to fill, as well as making anything you are filling it with inaccessible. The usual cause of filling docker.img is an application specifying a path that isn't mapped.

 

So, it seems you have some containers still mapping /mnt/cache, thus filling rootfs, and some container applications writing to a path that isn't mapped to the host, thus filling docker.img.
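A couple of standard commands will show where the space is actually going (run from the console; output formats vary by Docker version):

df -h /var/lib/docker    # how full the docker.img loop mount is
docker system df         # what inside Docker is using that space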


My appdata share is set to cache YES to use the "Chache_appdata", so all the appdata gets written to the "Chache_appdata" drive. Nothing else, to my knowledge, is being mapped to "Chache_appdata" other than the appdata share. Originally the "Chache_appdata" drive was named "Plex_appdata", as I was planning on only using it for Plex; then I changed my mind and decided to use it for all the appdata, thus the rename.

 

Quote

I'd suspect that in the past you've had problems with the docker image filling up (you've got it set to be 256Gig now), and whatever you've done to try and mitigate that has resulted in you storing stuff within RAM itself.

I have it set to 256GB because I want the appdata share to have access to all that storage in case it is needed.

 

Quote

How do you have plex transcoding configured? Do you have a host mapping specifically for transcode? Does your plex use DVR?

This is how I have Plex configured:

 

/config → /mnt/plex_appdata/
/transcode → /mnt/user/appdata/Plex-Media-Server/transcode
/data → /mnt/user/data/

 

If any screenshots would help, please let me know what you'd like to see and I can supply them.

9 minutes ago, tommykmusic said:

Originally the "Chache_appdata" drive was named "Plex_appdata", as I was planning on only using it for Plex; then I changed my mind and decided to use it for all the appdata, thus the rename.

 

9 minutes ago, tommykmusic said:

This is how I have Plex configured

 

/config → /mnt/plex_appdata/

 

There's your problem.
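After the rename, /mnt/plex_appdata no longer exists as a mount point, so everything Plex writes to /config lands in RAM on rootfs. The mapping needs to point at the renamed pool instead, along these lines (the exact subfolder is just an example):

/config → /mnt/cache_appdata/appdata/Plex-Media-Server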

14 minutes ago, tommykmusic said:

My appdata share is set to cache YES to use the "Chache_appdata", so all the appdata gets written to the "Chache_appdata" drive

The pool is named cache_appdata, not Chache_appdata. I can't tell whether you have repeated that typo in your configuration, since your diagnostics are incomplete. And note that Linux is case-sensitive, so even if you don't make a typo, Cache_appdata is also wrong.
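A quick way to see the exact pool name as mounted, capitalization included:

ls /mnt/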

 

18 minutes ago, tommykmusic said:

I have it set to 256GB because I want the appdata share to have access to all that storage in case it is needed.

Wrong, docker.img is not the same as the appdata share, and docker.img doesn't need to be large for appdata to have access to lots of storage, since user shares can span disks (but see next paragraph). As I said before, 20G is probably plenty for docker.img size.

 

You also don't want the appdata share set to cache-yes, because that means move it to the array. You want appdata, domains, system shares to be cache-prefer so they will stay on fast storage and not on the array. If these shares are on the array, docker/VM performance will be impacted by slower array, and array disks can't spin down because these files are always open.

 

 

Quote

You also don't want the appdata share set to cache-yes, because that means move it to the array. You want appdata, domains, system shares to be cache-prefer so they will stay on fast storage and not on the array. If these shares are on the array, docker/VM performance will be impacted by slower array, and array disks can't spin down because these files are always open.

It is set to "prefer", sorry.

 

Quote

Wrong, docker.img is not the same as the appdata share, and docker.img doesn't need to be large for appdata to have access to lots of storage, since user shares can span disks (but see next paragraph). As I said before, 20G is probably plenty for docker.img size.

Correct, what I meant was that I want my containers to have access to the full 256GB.

 

Quote

The pool is named cache_appdata, not Chache_appdata. I can't tell whether you have repeated that typo in your configuration, since your diagnostics are incomplete. And note that Linux is case-sensitive, so even if you don't make a typo, Cache_appdata is also wrong.

Also all spelling mistakes are fixed.

14 minutes ago, tommykmusic said:

I want my containers to have access to the full 256GB

Containers can access any storage you map to them. That has nothing to do with the size of docker.img, which only contains the executables of your containers.
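For illustration (the image name and paths here are examples, not taken from this thread), a container sees exactly the host paths you map, regardless of docker.img size:

docker run -d --name plex \
  -v /mnt/cache_appdata/appdata/plex:/config \
  -v /mnt/user/data:/data \
  plexinc/pms-docker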

50 minutes ago, trurl said:

20G is probably plenty for docker.img size

 

 

