loop2 (docker.img) writing a LOT to the cache


S1dney


Hey Guys,

 

So recently I noticed a large number of writes to my cache, while I really don't do a lot on there except for running a few docker containers.

I have found some other threads that did not really get a conclusive answer, but I'm hoping posting here will point me in the right (or at least a) direction.

The process loop2 seems to be the one responsible for the writes; the advanced docker settings point out this maps to the docker image.
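
For anyone wanting to verify that mapping themselves, a quick sketch using standard Linux tools (assuming the usual Unraid setup where docker.img sits on the cache):

losetup -a | grep loop2                  # list loop devices and their backing files
cat /sys/block/loop2/loop/backing_file   # should print the path to docker.img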

 

The screenshot below shows 2GB written in just a couple of minutes:

[screenshot: cache device stats showing ~2GB written within a few minutes]

 

Now, lsof on that PID doesn't really show a lot of open files.

[screenshot: lsof output for the loop2 PID]

 

I used the file activity plugin, enabling monitoring of the cache directory by modifying its config file, but this also shows only minor file activity on the cache.
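
As a cross-check that doesn't depend on the plugin, watching the cache mount directly is also an option — a sketch, assuming inotify-tools is available and the cache is mounted at /mnt/cache as usual:

inotifywait -m -r -e modify,create,delete /mnt/cache

If something were hammering ordinary files on the cache, events should scroll past here.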

On another thread someone suggested that mysql databases (and more specifically the mode they're in) are to blame, but when only Bitwarden is active I also see the loop2 process go bananas (Bitwarden uses mssql instead of mysql).

 

I know SSDs nowadays have increased warranties covering a decent amount of TBW, but it still feels like a big waste (and maybe even a bug).

 

Anyone found out what's actually causing this load (and how to stop it)? Or does anyone have an idea how to find out what the hell loop2 is writing to, because it really seems like it's just writing into thin air... 😵

 

Any help is greatly appreciated. 

Cheers!

Link to comment

This has to be related to one or more of your dockers writing something internal to the docker image.  I would suggest that the way forward is to start by disabling all the docker containers and then try enabling them one at a time until you can find the culprit.

 

Someone else mentioned that the write load seemed to be MUCH higher if the docker image was located on an encrypted drive.   Is this the case for you?    I do not use encrypted drives myself so I have no direct evidence if encryption does have this effect.

Link to comment
10 minutes ago, itimpi said:

This has to be related to one or more of your dockers writing something internal to the docker image.  I would suggest that the way forward is to start by disabling all the docker containers and then try enabling them one at a time until you can find the culprit.

 

Someone else mentioned that the write load seemed to be MUCH higher if the docker image was located on an encrypted drive.   Is this the case for you?    I do not use encrypted drives myself so I have no direct evidence if encryption does have this effect.

Ah, you're right, forgot to include all the details.

I am running encryption on the cache and also have a cache pool with two SSDs.

Initially I thought this was only happening with mysql running, but it seems to come randomly; at some moments the writes go up a lot faster than at others.

 

If something is writing to the docker image, I must be able to trace it, right?

3 TB written in one month for just Bitwarden and three other small containers seems like a bug?

 

Unfortunately I'm missing the kind of low-level expertise this issue might require.

PS: I might try disabling encryption later on, but since it's my vault data on there I'd rather not :/

Thanks again!

Link to comment
3 minutes ago, itimpi said:

Could well be, but the bug is likely to be within a container rather than at a higher level.

I understand.

But how would I track it down if neither the file activity plugin nor lsof seems to point out a candidate?

 

I appreciate the efforts!! 🍻

Link to comment

Others have found that using encryption with BTRFS causes massive amounts of writes to their cache drive(s), and the moment they changed to unencrypted, the number of writes dropped drastically. Others have had good luck with a single-drive cache setup on XFS and encryption.

 

BTRFS and Encryption = Massive Writes.

Link to comment

Thanks again for the replies.

I actually tried putting two SATA drives in RAID on the motherboard (to have RAID without unRAID knowing it), but the logical drive could not be seen from the OS.

Using RAID on the mobo is a crappy idea to start with, from what I'm reading online, but it was worth a try.

 

Now I've reformatted the drives to unencrypted btrfs, but with some dockers running I'm seeing similar results.

I'm now encrypting them again, so I can't say anything about the longer term (the containers had been running for a while when I checked, though).

One thing I did notice from the iotop -ao output is that every time the loop2 process climbs fast in writes, a line like the one below is shown:

-> dockerd -p /var/run/dockerd.pid --log-opt max-size=50m --log-opt max-file=1 --storage-driver=btrfs --log-level=error

 

I might need to learn more about how docker works under the hood to make sense of this, but I have the feeling it might indeed indicate excessive logging.
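
One way to sanity-check the logging theory — just a sketch, assuming docker.img is loop-mounted at /var/lib/docker as usual on Unraid and the containers use the default json-file log driver — is to look at how big each container's log directory and writable layer are:

du -sh /var/lib/docker/containers/*/ | sort -h   # per-container dirs hold the json logs
docker ps -s                                     # shows each container's writable-layer size

If one container's log or writable layer keeps growing, that points at the culprit; the --log-opt max-size=50m --log-opt max-file=1 flags above should already cap the logs, though.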

I will dig further into this, maybe I'll find out the cause.

 

Cheers!

Link to comment
On 11/5/2019 at 9:12 AM, BRiT said:

Others have found that using encryption with BTRFS causes massive amounts of writes to their cache drive(s), and the moment they changed to unencrypted, the number of writes dropped drastically. Others have had good luck with a single-drive cache setup on XFS and encryption.

 

BTRFS and Encryption = Massive Writes.

Bummer. You say that others have found the above. Do we know if it's an issue, a bug, or just the way BTRFS works?

Link to comment
20 minutes ago, Derek_ said:

Bummer. You say that others have found the above. Do we know if it's an issue, a bug, or just the way BTRFS works?

Exactly, something I'm keen on knowing as well.

It just seems strange that tooling (like the file activity plugin monitoring the cache) isn't showing these writes, and/or the writes appear to vanish into thin air. If something's really hammering the cache, I would expect it to at least show up somewhere? That leaves writes that BTRFS does internally...

Link to comment
  • 2 weeks later...

For those coming here with the same issue, I've done quite a bit of testing lately and was able to pinpoint it to the loop device / the use of an image file on top of the cache.

If interested, all the details are in the report.

 

I hope this will get some attention from the dev team.

Thanks for replying here though.

Link to comment
6 minutes ago, itimpi said:

Sounds like it might be an issue with specific docker containers?

That was what I initially thought too, but the exact same containers running outside of the docker.img, directly on the cache, produce only 1% of the writes compared to what they do when running inside the docker.img.

 

I wasn't able to track down any files that were heavily modified either (details in the report :-) ), so that supports this statement as well.

 

Cheers!

Link to comment
  • 5 months later...

For the record, I have this same problem and have been googling around these forums, reddit... and the plex forums.

In my case, the official plex docker is the clear culprit; stopping it practically starves loop2. Installing an alternative (linuxserver) radically lowers the amount of writes.

https://www.reddit.com/r/unRAID/comments/ea85gc/high_disk_writes_with_official_plex_docker_in/

https://forums.plex.tv/t/pms-docker-unraid-is-constantly-writing-to-its-docker-home-library/419895

 

Now the numbers (from iotop -oa -d 60, collecting for 1 minute); I start the respective docker a couple of minutes beforehand to discount startup reads/writes:

  • No plex docker active: 58M written by loop2 => 81 GB/day
  • Official plex running: 387M written by loop2 => 0.5 TB/day
  • linuxserver/plex running: 36M written by loop2 => 50 GB/day

(Clearly there are other things writing in the meantime, but there is a comfortable factor of ~10 difference from not using the official plex docker.)
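
To reproduce this kind of measurement, a sketch (the one-minute window is arbitrary, and the per-day figure is only a rough extrapolation):

iotop -oa -d 60     # accumulated I/O per process, refreshed every 60 seconds
# rough conversion: MB written in 1 minute x 1440 minutes/day / 1024 ~ GB/day
# e.g. 58 MB/min x 1440 / 1024 ~ 82 GB/day

Read the loop2 row after one refresh and extrapolate from there.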

 

 

So, I advise everybody to check their cache usage if using plex, and possibly gain years of life on your SSDs. You can simply point the new container to the old appdata folder in the advanced docker configuration to save reconfiguring and keep your watch progress. I imagine it is a very good idea NOT to have both dockers running at the same time (I have not tried and will not try).
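
In plain docker terms — a sketch only, with example paths; on Unraid you would set these mappings through the container template rather than on the command line — switching images while keeping the existing library mostly comes down to pointing /config at the same appdata folder:

docker run -d --name=plex \
  --net=host \
  -v /mnt/user/appdata/plex:/config \
  -v /mnt/user/media:/media \
  linuxserver/plex
  # plus whatever PUID/PGID and extra media mappings the existing template already has

The library, metadata and watch state live under /config, so as long as that mapping stays the same the new container should pick them up.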

 

(I would also advise the unraid devs to place some kind of warning regarding these kinds of SSD-killer issues?)

Link to comment
  • 2 weeks later...

Just want to add my experience to this. I had the same problem with a lot of writes to the cache via loop2, and I can confirm it was the official plex docker that caused issues on my system. I changed to linuxserver/plex and the issues went away...

Link to comment
On 5/3/2020 at 12:45 PM, albertogomcas said:

For the record, I have this same problem... In my case, the official plex docker is the clear culprit; stopping it practically starves loop2. Installing an alternative (linuxserver) radically lowers the amount of writes. [...] You can simply point the new container to the old appdata folder in the advanced docker configuration to save reconfiguring and keep your watch progress.

Question - did you rebuild your config, or use the same plex appdata?

Link to comment

I ran into this same issue too despite not having an encrypted btrfs cache drive, and switching to the linuxserver container seems to have resolved it. I'll report back in a couple of days if the issue persists.

 

On 5/19/2020 at 9:14 PM, boomam said:

Question - did you rebuild your config, or use the same plex appdata?

I'm not the person you asked, but plex is very flexible in this regard. I just stopped the official docker, and pointed the linuxserver one to my existing appdata folder and hit start. 

Link to comment

Only just realised yesterday that this was happening to me. My 21-day-old 500GB SATA SSD has 26TB written already (yikes!). I can only account for about 6TB of that, from installing and not turning off the cache for a share during an initial copy of data.
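
For anyone wanting to check their own totals, a sketch (the device name is an example, and the exact attribute name differs between SATA and NVMe drives):

smartctl -a /dev/sdX | grep -iE 'total.*written|data units written'
# SATA SSDs usually report Total_LBAs_Written: multiply by the sector size
# (typically 512 bytes) to get bytes; NVMe drives report Data Units Written
# in units of 512,000 bytes.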

 

Last night I did some trial and error with different running containers and definitely saw the official Plex one using A LOT of data. Pihole seems to be using a fair bit for me as well, although nowhere near as much.

 

So having switched now to binhex-plex but still running PiHole, I've got 59.6GB written in the last 8 hours... still seems a little high.

 

I'm wondering if the linuxserver container will be better than the binhex one?

Link to comment
16 hours ago, Moz80 said:

[...] So having switched now to binhex-plex but still running PiHole, I've got 59.6GB written in the last 8 hours... still seems a little high.

So now, with 24 hours of iotop data, I have 120GB on loop2 (much better than the TBs I would have had on the official plex container).

 

Interestingly, I can see a few Plex Media Server entries (which seem consistent with some watch activity that occurred overnight).

44GB of transmission activity seems a bit high; there was only a single 8GB torrent overnight.

 

But if all this activity is separate from the loop2 entry, what exactly is going on with loop2?

 

Actual DISK READ:       0.00 B/s | Actual DISK WRITE:     350.07 K/s
  TID  PRIO  USER     DISK READ  DISK WRITE  SWAPIN     IO>    COMMAND                                                                                                   
 5041 be/0 root       1936.26 M    119.93 G  0.00 %  0.31 % [loop2]
 5062 be/4 root       1968.00 K      3.21 G  0.00 %  0.08 % [btrfs-transacti]
 5110 be/4 root          0.00 B     65.55 M  0.00 %  0.04 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5090 be/4 root          4.00 K     75.10 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5112 be/4 root          4.00 K     72.13 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5115 be/4 root         16.00 K     79.57 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5114 be/4 root         24.00 K     67.71 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5109 be/4 root        212.00 K     80.83 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5086 be/4 root          0.00 B     67.98 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 9424 be/4 root        484.43 M     54.46 M  0.00 %  0.03 % python Tautulli.py --datadir /config
 4706 be/4 root          0.00 B      0.00 B  0.00 %  0.03 % [unraidd4]
 5075 be/4 root         48.00 K     58.54 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5120 be/4 root         20.00 K     81.87 M  0.00 %  0.03 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5123 be/4 root         52.00 K     64.18 M  0.00 %  0.02 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5119 be/4 root        224.00 K     51.89 M  0.00 %  0.02 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5107 be/4 root         12.00 K     60.95 M  0.00 %  0.02 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 5087 be/4 root         48.00 K     55.74 M  0.00 %  0.02 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 7538 be/4 nobody       44.87 G   1104.00 K  0.00 %  0.02 % transmission-daemon -g /config -c /watch -f
 8870 be/4 nobody      488.26 M      2.27 M  0.00 %  0.02 % mono --debug /usr/lib/sonarr/NzbDrone.exe -nobrowser -data=/config
 5113 be/4 root          0.00 B     26.85 M  0.00 %  0.01 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 8267 be/4 nobody        3.07 G     20.00 K  0.00 %  0.00 % Plex Media Server
 8268 be/4 nobody        3.02 G     12.00 K  0.00 %  0.00 % Plex Media Server
 8869 be/4 nobody        3.51 G     27.51 M  0.00 %  0.00 % mono --debug /usr/lib/sonarr/NzbDrone.exe -nobrowser -data=/config
 5108 be/4 root          0.00 B      4.16 M  0.00 %  0.00 % dockerd -p /var/run/dockerd.pid --storage-driver=btrfs --log-level=error
 4865 be/4 root        252.00 K     67.11 M  0.00 %  0.00 % [btrfs-transacti]
 8952 be/4 nobody       86.22 M      7.18 M  0.00 %  0.00 % mono --debug /usr/lib/radarr/Radarr.exe -nobrowser -data=/config
28612 be/4 root         90.55 M      0.00 B  0.00 %  0.00 % homebridge
31972 be/4 root         15.64 M   1756.00 K  0.00 %  0.06 % shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
32221 be/4 root         11.69 M   1528.00 K  0.00 %  0.06 % shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
 8868 be/4 nobody       78.38 M   2040.00 K  0.00 %  0.00 % mono --debug /usr/lib/sonarr/NzbDrone.exe -nobrowser -data=/config
 8953 be/4 nobody       79.08 M   1432.00 K  0.00 %  0.00 % mono --debug /usr/lib/radarr/Radarr.exe -nobrowser -data=/config
 3728 be/4 root          4.11 M    352.00 K  0.00 %  0.14 % shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
 5076 be/4 root         76.00 K    252.00 K  0.00 %  0.37 % shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
 8160 be/4 nobody      298.57 M    624.00 K  0.00 %  0.00 % Plex Media Server
 9133 be/4 nobody      781.49 M   1476.00 K  0.00 %  0.00 % mono --debug /usr/lib/lidarr/Lidarr.exe -nobrowser -data=/config
 9134 be/4 nobody      317.68 M   1300.00 K  0.00 %  0.00 % mono --debug /usr/lib/lidarr/Lidarr.exe -nobrowser -data=/config
 9135 be/4 nobody      212.10 M   1568.00 K  0.00 %  0.00 % mono --debug /usr/lib/lidarr/Lidarr.exe -nobrowser -data=/config
 5077 be/4 root        236.00 K    140.00 K  0.00 %  0.28 % shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
28633 be/4 root         16.68 M      0.00 B  0.00 %  0.00 % homebridge-config-ui-x
 8954 be/4 nobody       77.48 M   1108.00 K  0.00 %  0.00 % mono --debug /usr/lib/radarr/Radarr.exe -nobrowser -data=/config
 8638 be/4 nobody        2.34 M     32.00 K  0.00 %  0.00 % tint2 -c /home/nobody/tint2/theme/tint2rc
 4324 be/4 root          6.19 M      8.28 M  0.00 %  0.12 % qemu-system-x86_64 -name guest=Windows 10,debug-thread~n=deny,resourcecontrol=deny -msg timestamp=on [worker]
 4931 be/4 root          5.45 M    128.00 K  0.00 %  0.18 % shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
28613 be/4 root        428.00 K      0.00 B  0.00 %  0.00 % tail -f /dev/null
18758 be/4 root        948.00 K     11.84 M  0.00 %  0.00 % [kworker/u12:3-btrfs-scrub]
 4922 be/4 root       1608.00 K    104.00 K  0.00 %  0.15 % shfs /mnt/user -disks 63 2048000000 -o noatime,allow_other -o remember=0
 9423 be/4 root          2.70 M    764.00 K  0.00 %  0.00 % python Tautulli.py --datadir /config
27586 be/4 root        110.12 M     68.00 K  0.00 %  0.00 % python Tautulli.py --datadir /config
 9441 be/4 root          6.93 M    244.00 K  0.00 %  0.00 % python Tautulli.py --datadir /config
 8114 be/4 root          0.00 B    304.00 K  0.00 %  0.00 % containerd --config /var/run/docker/containerd/containerd.toml --log-level error
 8415 be/4 root          9.79 M    544.00 K  0.00 %  0.00 % python /usr/bin/supervisord -c /etc/supervisor.conf -n
24792 be/4 root        110.03 M     64.00 K  0.00 %  0.00 % python Tautulli.py --datadir /config
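
One way to cross-check the loop device itself, independent of iotop — a sketch, assuming the kernel's usual 512-byte sector accounting — is to read its block-layer statistics directly:

cat /sys/block/loop2/stat
# field 7 is sectors written since boot
awk '{printf "%.1f GiB written to loop2 since boot\n", $7*512/1024/1024/1024}' /sys/block/loop2/stat

Comparing that figure over an interval against what the individual container processes report should make it clearer how much is happening inside the image versus as ordinary file writes.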

 

Link to comment
8 hours ago, Moz80 said:

So now, with 24 hours of iotop data, I have 120GB on loop2 (much better than the TBs I would have had on the official plex container) [...] But if all this activity is separate from the loop2 entry, what exactly is going on with loop2?

[quoted iotop output trimmed — see the previous post]

I see you’re using the btrfs storage driver.

loop2 activity is probably related to this open bug report.
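
For anyone unsure which storage driver their docker service is using, this is a quick check with standard docker tooling (not Unraid-specific):

docker info | grep -i 'storage driver'

On Unraid it corresponds to the --storage-driver flag visible in the dockerd command lines above.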

Link to comment
14 hours ago, S1dney said:

I see you’re using the btrfs storage driver.

loop2 activity is probably related to this open bug report.

Hey mate, absolutely. I found and bookmarked that last night on my iPad; I guess that's where the discussion on this is happening now. I'll have a good read through and see where things are up to!

 

Thanks for all your efforts thus far!

Link to comment
