No space left on device - But there is


bnevets27

Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:38:32 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:42:55 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:43:16 Excelsior shfs/user: err: shfs_create: open: /mnt/cache/system/docker/appdata/plexpy/plexpy.db-shm (28) No space left on device
Jun 12 15:43:16 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:43:16 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Jun 12 15:43:45 Excelsior shfs/user: err: shfs_write: write: (28) No space left on device
Filesystem       1K-blocks        Used   Available Use% Mounted on
rootfs            20562224      817328    19744896   4% /
tmpfs             20637576         588    20636988   1% /run
devtmpfs          20562240           0    20562240   0% /dev
cgroup_root       20637576           0    20637576   0% /sys/fs/cgroup
tmpfs               131072       24904      106168  20% /var/log
/dev/sdb1          4013568      620864     3392704  16% /boot
/dev/md3        3905078064  3280289896   624788168  85% /mnt/disk3
/dev/md4        3905078064  3606664900   298413164  93% /mnt/disk4
/dev/md9        3905078064  3673819200   231258864  95% /mnt/disk9
/dev/md10       3905110812  2882537336  1022573476  74% /mnt/disk10
/dev/md11       3905110812  1981380952  1923729860  51% /mnt/disk11
/dev/md12       3905110812  3108918476   796192336  80% /mnt/disk12
/dev/md13       3905110812   137634948  3767475864   4% /mnt/disk13
/dev/md14       3905110812  1803370240  2101740572  47% /mnt/disk14
/dev/md15       3905078064     3924448  3901153616   1% /mnt/disk15
/dev/md16       3905078064  3471582932   433495132  89% /mnt/disk16
/dev/md22       2928835740  2849459684    79376056  98% /mnt/disk22
/dev/md23       2928835740  2460082128   468753612  84% /mnt/disk23
/dev/md24       2928835740   259663840  2669171900   9% /mnt/disk24
/dev/sdk1        488386552   281144908   207175988  58% /mnt/cache
shfs           47837451600 29519328980 18318122620  62% /mnt/user0
shfs           48325838152 29800473888 18525298608  62% /mnt/user
/dev/loop0        41943040    17939664    22928304  44% /var/lib/docker
/dev/loop1         1048576       17296      925776   2% /etc/libvirt

 

It's complaining about no space left on the cache, but it's clearly only 58% used. What's going on?
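For what it's worth, on btrfs `df` can report free space while writes still fail with ENOSPC, because btrfs first allocates raw device space to fixed-size chunks and only then writes data into them; once every byte of the device is allocated to chunks, new allocations fail even though the chunks are partly empty. A minimal sketch of the check — the `btrfs filesystem show` line below is hypothetical sample text, not output from this system:

```shell
#!/bin/sh
# Hypothetical 'btrfs filesystem show' line for a cache device where all
# raw space has been allocated to chunks (size == used). On a live system
# you would run: btrfs filesystem show /mnt/cache
sample='devid 1 size 465.76GiB used 465.76GiB path /dev/sdk1'

size=$(echo "$sample" | awk '{print $4}')
alloc=$(echo "$sample" | awk '{print $6}')

if [ "$size" = "$alloc" ]; then
    # Fully allocated: new chunk allocation fails with ENOSPC, even
    # though df may still report free space inside the data chunks.
    echo "fully allocated"
fi
```

On the real system, `btrfs filesystem df /mnt/cache` and `btrfs filesystem usage /mnt/cache` break down allocated vs. actually-used space.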

 
It's probably this corruption on your cache:

Jun 12 00:57:25 Excelsior emhttp: shcmd (210): mkdir -p /mnt/cache
Jun 12 00:57:25 Excelsior emhttp: shcmd (211): set -o pipefail ; mount -t btrfs -o noatime,nodiratime -U 86791a7c-d6e5-4a33-a9c8-e3669a1c89d1 /mnt/cache |& logger
Jun 12 00:57:25 Excelsior kernel: BTRFS info (device sdl1): disk space caching is enabled
Jun 12 00:57:25 Excelsior kernel: BTRFS info (device sdl1): has skinny extents
Jun 12 00:57:25 Excelsior kernel: BTRFS info (device sdl1): bdev /dev/sdk1 errs: wr 29136313, rd 29882604, flush 1579276, corrupt 1616562, gen 24415
Jun 12 00:57:25 Excelsior kernel: BTRFS info (device sdl1): detected SSD devices, enabling SSD mode
Jun 12 00:57:25 Excelsior kernel: BTRFS info (device sdl1): checking UUID tree

that's causing this problem (and as a result, the docker.img file is also complaining)

Jun 12 00:57:36 Excelsior kernel: BTRFS: device fsid 6b55a11a-d534-4432-aeb9-5589cd54b47a devid 1 transid 1046013 /dev/loop0
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): disk space caching is enabled
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): has skinny extents
Jun 12 00:57:36 Excelsior kernel: BTRFS warning (device loop0): loop0 checksum verify failed on 945078272 wanted AFFEE1BA found BAB79187 level 0
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): read error corrected: ino 1 off 945078272 (dev /dev/loop0 sector 1862240)
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): read error corrected: ino 1 off 945082368 (dev /dev/loop0 sector 1862248)
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): read error corrected: ino 1 off 945086464 (dev /dev/loop0 sector 1862256)
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): read error corrected: ino 1 off 945090560 (dev /dev/loop0 sector 1862264)
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): bdev /dev/loop0 errs: wr 182, rd 0, flush 0, corrupt 0, gen 0
Jun 12 00:57:36 Excelsior kernel: BTRFS info (device loop0): The free space cache file (29360128) is invalid. skip it

 

@johnnie.black, however, is the resident expert on btrfs and on recovering from an issue such as this.

I thought the cache drive getting full the other day (which is what I guess happened) is what caused the corruption, though I have no idea how it got full.

 

I did run a scrub after the initial hard shutdown / "full cache" event.

 

Isn't the log complaining that my appdata folder is full? I didn't see the log complaining about the docker.img itself.

 

If this is just a straight BTRFS issue, then this will be the final straw to get rid of it. I thought a RAID 1 cache pool would protect me from headaches, not cause them. On the four machines I've set up for various people, all the BTRFS-formatted cache drives have had multiple issues. Those are all single drives, so converting to XFS is an easy solution for them.

 

Sent from my SM-N900W8 using Tapatalk

3 hours ago, bnevets27 said:

If this is just a straight BTRFS issue then this will be the final straw to get rid of it. I thought a raid 1 cache pool would protect me from headache not cause them.

 

No, all these read/write errors are caused by a hardware issue; check the cables on this SSD:

Jun 12 00:57:25 Excelsior kernel: BTRFS info (device sdl1): bdev /dev/sdk1 errs: wr 29136313, rd 29882604, flush 1579276, corrupt 1616562, gen 24415
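Those wr/rd/flush/corrupt/gen numbers are btrfs's cumulative per-device error counters, and they persist across mounts until explicitly reset. A sketch of checking them, parsing sample `btrfs device stats` output built from the counts in the log above — on the live pool the command would be `btrfs device stats /mnt/cache`, and `btrfs device stats -z /mnt/cache` zeroes the counters once the cabling is fixed:

```shell
#!/bin/sh
# Sample 'btrfs device stats' output, using the counters from the log above.
stats='[/dev/sdk1].write_io_errs 29136313
[/dev/sdk1].read_io_errs 29882604
[/dev/sdk1].flush_io_errs 1579276
[/dev/sdk1].corruption_errs 1616562
[/dev/sdk1].generation_errs 24415'

# Flag the device if any counter is nonzero; after a reset with
# 'btrfs device stats -z', every counter should read 0 again.
echo "$stats" | awk '$2 > 0 {bad=1} END {print (bad ? "errors present" : "clean")}'
```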

 

As for the "not enough space" errors, see this thread; it's the same problem:

Thanks johnnie.black.

 

Not exactly sure what all happened; I don't think I had size = used, but I can't remember. I ran a balance and a scrub and everything looks to be normal. I couldn't look at the cable, but the log looks clean for now. The next issue is that my docker.img went nuts recently. I've made some adjustments to what I think was the cause.

 

But what's going on with my docker.img? Is it at 90% usage or 23%?

 

Label: none  uuid: 6b55a11a-d534-4432-aeb9-5589cd54b47a
	Total devices 1 FS bytes used 9.78GiB
	devid    1 size 50.00GiB used 41.41GiB path /dev/loop0



Filesystem     1K-blocks     Used Available Use% Mounted on
/dev/loop0      52428800 11628636  39225460  23% /var/lib/docker
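The two readouts measure different things, which is likely the source of the confusion: `btrfs filesystem show`'s "used" is space *allocated* to chunks, while `df`'s "Used" is data actually written inside those chunks. A sketch of the arithmetic using the figures above:

```shell
#!/bin/sh
# Figures from the output above: a 50GiB docker.img where btrfs has
# allocated 41.41GiB of chunks, but only ~11GiB of data lives in them.
size_gib=50.00        # device size  ('size' in btrfs filesystem show)
alloc_gib=41.41       # allocated chunks ('used' in btrfs filesystem show)
data_kib=11628636     # real data   ('Used' in df)
total_kib=52428800    # total 1K blocks in df

awk -v a="$alloc_gib" -v s="$size_gib" \
    'BEGIN{printf "allocated: %.0f%%\n", 100*a/s}'   # chunk allocation
awk -v d="$data_kib" -v t="$total_kib" \
    'BEGIN{printf "data: %.0f%%\n", 100*d/t}'        # actual data
```

So the image is roughly 83% allocated but only about 22% full of actual data; both numbers are "right", they just answer different questions.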

 

Yeah, I changed some paths and likely messed one up the other day. Yesterday I went from 30% usage to 99% in about two hours. As far as I know I corrected that, and did a bit of cleaning, which I guess is how I got it back to 23%.

Without nuking the docker.img, I assume the allocated space won't ever come down.

I assume having the allocated space at 90% of the space assigned to docker isn't a problem?

Just for further understanding: I assume this is why you can't shrink the docker.img. So docker will start to allocate more space to itself when it needs it, kind of like expanding a partition? Later, if data is removed, the allocated space never comes back down, but you do end up freeing space inside the allocated space (the partition).

So the size that's set in the docker settings is just a limit on how much docker can expand its allocated space?

Interesting. Since there is no GUI button in docker for that, I assume it has to be run from the command line.
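Right, there's no button for it; on the command line a filtered balance is the usual way to hand allocated-but-underused chunks back. The sketch below only simulates which chunks a `-dusage=50` filter would touch (the fill levels are made up); on the real system the command would be `btrfs balance start -dusage=50 /var/lib/docker`:

```shell
#!/bin/sh
# Simulate the chunk selection of 'btrfs balance start -dusage=50':
# data chunks whose fill level is below 50% get rewritten and compacted,
# while fuller chunks are left alone. Fill levels here are hypothetical.
for fill in 10 35 60 95; do
    if [ "$fill" -lt 50 ]; then
        echo "chunk at ${fill}% full: rebalanced (space reclaimed)"
    else
        echo "chunk at ${fill}% full: left in place"
    fi
done
```

Chunks under the usage threshold get their data packed into fewer chunks, and the space they occupied returns to unallocated, which is what brings `btrfs filesystem show`'s "used" figure back down.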

I wonder why there isn't a GUI button for balance in the docker config, but there is for the cache drive.

Also, would it make any sense to run balance as a cron job? In the case of docker.img, it shouldn't really be changing much, so I guess it can be run manually once the user has cleaned up the image.
In the case of the cache drive, with files constantly being added and removed, it seems to make sense to run a balance on it periodically, and possibly a scrub too?

2 hours ago, bnevets27 said:

I wonder why there isn't a gui button for balance in the docker config but there is for the cache drive.

 

Because it's not really needed.

 

2 hours ago, bnevets27 said:

Also would it make any sense to run balance on a cron job?

 

If the cache gets completely filled and emptied on a regular basis, it may make sense as a way to avoid your initial problem, although later kernels are less prone to this.
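If someone did want to schedule it, a plain cron entry would be enough — the schedule and the 75% usage threshold below are illustrative assumptions, not a recommendation from this thread:

```shell
# Hypothetical root crontab entry: every Sunday at 03:00, compact data
# chunks on the cache pool that are less than 75% full.
0 3 * * 0  /sbin/btrfs balance start -dusage=75 /mnt/cache >/dev/null 2>&1
```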


Archived

This topic is now archived and is closed to further replies.
