Fix Common Problems - Rootfs file is getting full (currently 100 % used)



I recently moved Unraid from one machine to another. This involved moving the flash drive and array drives (parity and two disks), but I did not move the NVMe drive I was using as the exclusive cache pool disk as the new machine has an SSD I wanted to use instead. This was the basic procedure I followed:

 

  1. Moved everything I could from the cache pool to the array using the mover.
  2. This did not move everything (despite having shares configured to move from cache to array), so I used UnBalance to move everything else.
  3. Moved the flash drive and array disks to the new machine.
  4. Set shares to move from array to cache and activated mover.
  5. Again, this did not move everything, so I used UnBalance to move the rest.

 

I am pretty amazed at how smoothly the transition went, but I am now getting the "Rootfs file is getting full (currently 100 % used)" warning from Fix Common Problems. Following the recommendations in this post, I am posting in General Support along with my diagnostics and the results of the memory storage script.

 

Notably, the machine transfer was a downgrade for Unraid. I moved Unraid from a machine with 64 GB of DDR4 RAM and a 2 TB NVMe cache drive to one with 16 GB of DDR3 RAM and a 256 GB SSD cache drive. The new machine is reporting 74% RAM usage and 104 GB of cache drive usage.

 

I am looking through the logs to see if anything jumps out. In the meantime, any help with this is greatly appreciated.

sanctum-diagnostics-20230213-1420.zip mem.txt

Edited by magicmagpie
Added link to diagnostic post
Link to comment

The usual reason for filling rootfs is specifying a path, such as a Docker host path, that isn't storage.
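For example (hypothetical container name and path), a container template that still points at a pool that no longer exists will not fail; Docker simply creates the missing host directory, which then lives in RAM:

docker run -v /mnt/oldpool/appdata/myapp:/config myapp

Everything that container writes to /config ends up in rootfs instead of on a disk.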

 

All storage is in a subfolder of /mnt, but not all subfolders of /mnt are necessarily storage.

 

User shares (subfolders of /mnt/user), array disks (/mnt/disk1, etc.), pools (/mnt/cache, etc.), unassigned disks (/mnt/disks), and remote shares (/mnt/remotes) are storage.

 

/boot is the flash drive.

 

All other paths are in rootfs along with the OS. If you fill rootfs, the OS has no room to work in.

 

You will have to reboot and try to figure out where you have specified a path that isn't storage. The only pool you currently have is cache. Did you perhaps specify a path to a pool you no longer have?
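One quick way to audit this (a sketch, not the only way) is to list every host path your containers mount and check each one against the storage locations above:

docker ps -aq | xargs docker inspect --format '{{.Name}}: {{range .Mounts}}{{.Source}} {{end}}'

Any source path that isn't under /mnt/user, an array disk, a pool, /mnt/disks, /mnt/remotes, or /boot is being written to rootfs.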

Link to comment

Thank you @trurl, that was very informative. I spent some time looking into this, and after a few restarts the error actually went away. I am not sure what changed to fix it; I suspect the warning might have been erroneous, since I was unable to find any data that was not saved to storage. That would not be surprising, given that I changed systems.

 

I did order some additional RAM and reexamined some of my Docker settings (for example, reducing the Docker vdisk size from 60 GB to 20 GB), so hopefully the error will not return.

Link to comment

Of course it will go away after a reboot. rootfs is in RAM, and whatever you filled it with will be gone when you reboot. Until you fill it again.

 

If you haven't identified the problem, don't be surprised if it comes back. It was definitely NOT an erroneous error. I could see in your diagnostics that you had indeed filled rootfs.

 

You can see how much of rootfs is used with this:

df -h /
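If the warning comes back, you can narrow down the culprit before rebooting with something like this (a sketch assuming the GNU du that Unraid ships; adjust the depth to taste):

du -xh --max-depth=2 / 2>/dev/null | sort -h | tail -20

The -x flag keeps du on the root filesystem itself, so anything mounted under /mnt or /boot is skipped and only data actually living in rootfs is counted.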

 

Link to comment
