• [6.8.3] Docker image causing a huge amount of unnecessary writes on cache


    S1dney
    • Solved Urgent

    EDIT (March 9th 2021):

    Solved in 6.9 and up. Reformatting the cache to new partition alignment and hosting docker directly on a cache-only directory brought writes down to a bare minimum.

     

    ###

     

    Hey Guys,

     

    First of all, I know that you're all very busy getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! I don't expect this to be at the top of the priority list, but I'm hoping someone on the development team is willing to look into it (perhaps after the release).

     

    Hardware and software involved:

    2 x 1TB Samsung EVO 860, set up with LUKS encryption in a BTRFS RAID1 pool.

     

    ###

    TL;DR (but I'd suggest reading on anyway 😀)

    The image file mounted as a loop device is causing massive writes on the cache, potentially wearing out SSDs quite rapidly.

    This appears to happen only on encrypted caches formatted with BTRFS (maybe only in a RAID1 setup, but I'm not sure).

    Hosting the Docker files directory on /mnt/cache instead of using the loop device seems to fix this problem.

    A possible idea for an implementation is proposed at the bottom.

     

    Grateful for any help provided!

    ###

     

    I have written a topic in the general support section (see link below), but I have done a lot of research lately and think I have gathered enough evidence pointing to a bug. I was also able to build (kind of) a workaround for my situation. More details below.

     

    So, to see what was actually hammering on the cache, I started with all the obvious things, like running a lot of find commands to trace files that were being written to every few minutes, and I also used the File Activity plugin. Neither was able to trace down any writes that would explain 400GB worth of writes a day for just a few containers that aren't even that active.
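
    For illustration, a minimal sketch of the kind of find command I mean (path and interval are just examples): list everything on the cache modified in the last 10 minutes, most recently touched last.

    # Files on the cache modified in the last 10 minutes, newest at the bottom
    find /mnt/cache -type f -mmin -10 -printf '%T@ %p\n' | sort -n | tail -n 50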

     

    Digging further, I moved the docker.img to /mnt/cache/system/docker/docker.img, so directly on the BTRFS RAID1 mountpoint. I wanted to check whether the unRAID FS layer was causing the loop2 device to write this heavily. No luck either.

    This did give me a situation I was able to reproduce on a virtual machine though, so I started with a recent Debian install (I know, it's not Slackware, but I had to start somewhere ☺️). I created some vDisks, encrypted them with LUKS, bundled them into a BTRFS RAID1 setup, created the loop device on the BTRFS mountpoint (same as with the cache) and mounted it on /var/lib/docker. I made sure I had the NoCoW flag set on the IMG file, like unRAID does. Strangely, this did not show any excessive writes; iotop showed really healthy values for the same workload (I migrated the docker content over to the VM).
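
    For reference, a minimal sketch of that VM reproduction, assuming two virtio disks (/dev/vdb, /dev/vdc) and a 20G image; these are not the exact commands or sizes I used, just the general idea.

    # Encrypt both vDisks and open them
    cryptsetup luksFormat /dev/vdb
    cryptsetup luksFormat /dev/vdc
    cryptsetup open /dev/vdb crypt1
    cryptsetup open /dev/vdc crypt2

    # Build the BTRFS RAID1 pool on top of the LUKS mappings and mount it
    mkfs.btrfs -d raid1 -m raid1 /dev/mapper/crypt1 /dev/mapper/crypt2
    mkdir -p /mnt/cache
    mount /dev/mapper/crypt1 /mnt/cache

    # Create the image with NoCoW set before any data is written (like unRAID does),
    # then format it BTRFS and mount it as a loop device on /var/lib/docker
    touch /mnt/cache/docker.img
    chattr +C /mnt/cache/docker.img
    truncate -s 20G /mnt/cache/docker.img
    mkfs.btrfs /mnt/cache/docker.img
    mkdir -p /var/lib/docker
    mount -o loop /mnt/cache/docker.img /var/lib/docker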

     

    After my Debian troubleshooting I went back to the unRAID server, wondering whether the loop device was created weirdly, so I took the exact same steps to create a new image and pointed the settings from the GUI there. Still the same write issues.

     

    Finally I decided to take the whole image out of the equation and took the following steps (a rough command sketch follows the list):

    - Stopped docker from the WebGUI so unRAID would properly unmount the loop device.

    - Modified /etc/rc.d/rc.docker to not check whether /var/lib/docker was a mountpoint.

    - Created a share on the cache for the docker files.

    - Created a softlink from /mnt/cache/docker to /var/lib/docker.

    - Started docker using "/etc/rc.d/rc.docker start".

    - Started my Bitwarden containers.
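
    Roughly, those steps boil down to the commands below (paths are from my setup, the rc.docker edit itself isn't shown, and this is obviously not a supported configuration):

    /etc/rc.d/rc.docker stop          # or stop Docker from the WebGUI so the image gets unmounted
    mkdir -p /mnt/cache/docker        # the share created on the cache
    # with /var/lib/docker unmounted and out of the way, point it at the share
    ln -s /mnt/cache/docker /var/lib/docker
    /etc/rc.d/rc.docker start
    # then start the containers (Bitwarden in my case)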

     

    Looking into the stats with "iotop -ao" I did not see any excessive writing taking place anymore.

    I had the containers running for about 3 hours and maybe got 1GB of writes total (note that on the loop device this gave me 2.5GB every 10 minutes!).
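
    For reference, the iotop invocation behind these numbers (assuming the stock iotop package): -a accumulates totals since the start, -o only shows processes that have actually done I/O.

    iotop -ao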

     

    Now don't get me wrong, I understand why the loop device was implemented. Dockerd is started with options to make it run with the BTRFS storage driver, and since the image file is formatted with the BTRFS filesystem this works on every setup; it doesn't matter whether the cache runs XFS, EXT4 or BTRFS, it will just work. In my case I had to point the softlink to /mnt/cache, because pointing it to /mnt/user would not allow me to use the BTRFS driver (obviously the unRAID user share filesystem isn't BTRFS). Also, the WebGUI has commands to scrub the filesystem inside the image; it's all based on the assumption that everyone is using docker on BTRFS (which of course they are, because of the image 😁)
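
    Conceptually it comes down to something like this (illustrative commands, not the exact contents of unRAID's rc.docker):

    # The image is BTRFS inside, so regardless of the cache filesystem dockerd
    # can always be started on top of it with the btrfs storage driver.
    mount -o loop /mnt/cache/system/docker/docker.img /var/lib/docker
    /usr/bin/dockerd --storage-driver=btrfs &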

    I must say that my approach also broke when I changed something in the shares; certain services get restarted, causing docker to be turned off for some reason. No big issue, since it wasn't meant to be a long-term solution, just to see whether the loop device was causing the issue, which I think my tests did point out.

     

    Now I'm at the point where I would definitely need some developer help. I'm currently keeping nearly all docker containers off all day, because 300-400GB worth of writes a day is just a BIG waste of expensive flash storage, especially since I've pointed out that it's not needed at all. It does defeat the purpose of my NAS and SSD cache though, since its main purpose was hosting docker containers while allowing the HDDs to spin down.

     

    Again, I'm hoping someone in the dev team acknowledges this problem and is willing to invest. I did get quite a few hits on the forums and Reddit, but no one actually pointed out the root cause of the issue.

     

    I'm missing the technical know-how to troubleshoot the loop device issues on a lower level, but I have been thinking about possible ways to implement a workaround, like adjusting the Docker Settings page to switch off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), starting docker on a share on the /mnt/cache partition instead of using the vDisk.

    This way you would still keep all the advantages of the docker.img file (it works across filesystem types), and users who don't care about the writes could still use it, but you'd be massively helping out others who are concerned about them.
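
    A hypothetical sketch of the check such a setting could do before skipping the vDisk (the variable name and paths are mine, not anything that exists in unRAID today):

    DOCKER_DIR="/mnt/cache/docker"   # assumed location for the docker share
    # Only host docker directly on the cache if the path lives under /mnt/cache
    # and the cache filesystem is BTRFS; otherwise fall back to the docker.img.
    if [[ "$DOCKER_DIR" == /mnt/cache/* ]] && \
       [[ "$(findmnt -n -o FSTYPE --target "$DOCKER_DIR")" == "btrfs" ]]; then
      echo "Requirements met: hosting docker directly on the cache."
    else
      echo "Requirements not met: falling back to the docker.img loop device."
    fi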

     

    I'm not attaching diagnostic files since they would probably not show what's needed.

    Also, if this should have been in feature requests, I'm sorry. But I feel that, since the current solution is misbehaving in terms of writes, this could also be placed in the bug report section.

     

    Thanks though for this great product, I have been using it with a lot of joy so far!

    I'm just hoping we can solve this one so I can keep all my dockers running without the cache wearing out quickly.

     

    Cheers!

     

    • Like 3
    • Thanks 17



    User Feedback

    Recommended Comments



    If @caplam's table is to be believed, then I would really like to hear something from the devs @jonp @limetech

     

    The data presented suggests that a feature that came with 6.8.0 is causing write amplification by a factor of 4 to 5x (according to the data that @caplam provided).

     

    I am assuming that virtually all users on 6.8.x who are using SSDs/NVMes as their cache drives are affected. Many are not even aware (!) that their drives will have a shortened life span because of excessive and unnecessary writes. When a drive fails prematurely during the warranty period, it will not be replaced if the TBW limit has been exceeded.

     

    Example: my RAID1 NVMes burned through 3TB written within 1.5 hours due to nzbget hanging during an unpack. I only caught this because of the high-temperature warning email.

     

    @jonp @limetech

    Please give us an update or your take on this situation.

    Link to comment

    For those who can't wait for a true solution, I solved my excessive SSD write issue by:

    1. Reformatting the cache to XFS (lost all my dockers in the process, yeah it sucked)
    2. Switching from the official Plex docker to the Linuxserver Plex container

    Now getting very minimal background writes, ~5KB/s. Worth the hassle in my opinion.

    • Thanks 1
    Link to comment

    I've just found a problem with a VM.

    It's a Jeedom VM (home automation): Debian with Apache, PHP and MySQL.

    It has a single vdisk and it's on the cache.

    I ran iotop in unRAID and in the VM.

    The results were extremely different:

    during 10-15 min I saw hundreds of MB written to disk by the qemu process in unRAID.

    During the same time I saw a few MB written to disk by MySQL in the VM.

    The result on the cache disk was around 15 MB/s.

    That's approximately half of the throughput I have had for months.
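
    A minimal sketch of how the two views can be compared (assuming a single qemu process on the host; pick the right PID if you run several VMs). Inside the VM, a plain iotop -ao gives the guest-side figure.

    # Host side: accumulated writes of the qemu process backing the VM
    iotop -ao -p "$(pgrep -f qemu | head -n 1)"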

     

    Speaking of the table I posted above, the figures come from diagnostics files.

    When the interval is short (1 or 2 days), accuracy is not very good, as I have not taken into consideration the time of day the diagnostics file was downloaded.

    Edited by caplam
    typo
    Link to comment

    For anyone else that needs it, I was having more issues with libvirt/loop3 than docker/loop2, so I adapted @S1dney's solution from here for libvirt.

     

    A little CYA: To reiterate what has already been said, this workaround is not ideal and comes with some big caveats, so be sure to read through the thread and ask questions before implementing.

     

    I'm not going to get into it here, but I used S1dney's same basic directions for the docker by making backups and copying files to folders in /boot/config/.

     

     

    Create a share called libvirt on the cache drive just like for the docker instructions.

     

    Edit rc.libvirt's start_libvirtd function as follows:

    start_libvirtd() {
      if [ -f $LIBVIRTD_PIDFILE ];then
        echo "libvirt is already running..."
        exit 1
      fi
      if mountpoint /etc/libvirt &> /dev/null ; then
         echo "Image is mounted, will attempt to unmount it next."
         umount /etc/libvirt 1>/dev/null 2>&1
         if [[ $? -ne 0 ]]; then
           echo "Image still mounted at /etc/libvirt, cancelling cause this needs to be a symlink!"
           exit 1
         else
           echo "Image unmounted succesfully."
         fi
      fi
      # In order to have a soft link created, we need to remove the /etc/libvirt directory or creating a soft link will fail
      if [[ -d /etc/libvirt ]]; then
        echo "libvirt directory still exists, removing it so we can use it for the soft link."
        rm -rf /etc/libvirt
        if [[ -d /etc/libvirt ]]; then
          echo "/etc/libvirt still exists! Creating a soft link will fail thus refusing to start libvirt."
          exit 1
        else
          echo "Removed /etc/libvirt. Moving on."
        fi
      fi
      # Now that we know that the libvirt image isn't mounted, we want to make sure the symlink is active
      if [[ -L /etc/libvirt && -d /etc/libvirt ]]; then
        echo "/etc/libvirt is a soft link, libvirt is allowed to start"
      else
        echo "/etc/libvirt is not a soft link, will try to create it."
        ln -s /mnt/cache/libvirt /etc/ 1>/dev/null 2>&1
        if [[ $? -ne 0 ]]; then
          echo "Soft link could not be created, refusing to start libvirt!"
          exit 1
        else
          echo "Soft link created."
        fi
      fi
      # convert libvirt 1.3.1 w/ eric's hyperv vendor id patch to how libvirt does it in libvirt 1.3.3+
      sed -i -e "s/<vendor id='none'\/>/<vendor_id state='on' value='none'\/>/g" /etc/libvirt/qemu/*.xml &> /dev/null
      # remove <locked/> from xml because libvirt + virlogd + virlockd has an issue with locked
      sed -i -e "s/<locked\/>//g" /etc/libvirt/qemu/*.xml &> /dev/null
      # copy any new conf files we dont currently have
      cp -n /etc/libvirt-/*.conf /etc/libvirt &> /dev/null
      # add missing tss user account if coming from an older version of unRAID
      if ! grep -q "^tss:" /etc/passwd ; then
        useradd -r -c "Account used by the trousers package to sandbox the tcsd daemon" -d / -u 59 -g tss -s /bin/false tss
      fi
      echo "Starting libvirtd..."
      mkdir -p $(dirname $LIBVIRTD_PIDFILE)
      check_processor
      /sbin/modprobe -a $MODULE $MODULES
      /usr/sbin/libvirtd -d -l $LIBVIRTD_OPTS
    }

     

    Add this code to the go file in addition to the code for the docker workaround:

    # Put the modified libvirt service file over the original one to make it not use the libvirt.img
    cp /boot/config/service-mods/libvirt-service-mod/rc.libvirt /etc/rc.d/rc.libvirt
    chmod +x /etc/rc.d/rc.libvirt
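
    For comparison, the docker half of the workaround uses the same pattern in the go file (the exact folder name under /boot/config is whatever you chose for your backups, so treat these paths as assumptions):

    # Put the modified docker service file over the original one so it uses the
    # symlinked directory instead of docker.img
    cp /boot/config/service-mods/docker-service-mod/rc.docker /etc/rc.d/rc.docker
    chmod +x /etc/rc.d/rc.docker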

     

    Edited by JTok
    grammar and clarity
    • Thanks 2
    Link to comment

    I think the problems with my VM were related to the qcow2 image format. I converted it to a raw img and now the KB written to disk are consistent between inside the VM and unRAID.

    Cache writes seem to stabilize around 800KB/s for that VM.

     

    edit: it could be related to the btrfs driver. I downgraded to 6.7.2 and for the VM the problem is still there.

    Now I think I have to upgrade to 6.8.3 and apply @S1dney's workaround.

    Do I have to delete docker.img or move it elsewhere? Mine is 80GB so it takes a lot of space.

     

    edit2: I applied the workaround in 6.7.2. I have to redownload all docker images (with a 5Mb/s connection it's a pain in the ass). I've redownloaded 3 dockers but only 2 appear in the docker GUI, while Portainer sees them all.

    Edited by caplam
    Link to comment
    8 hours ago, JTok said:

    For anyone else that needs it, I was having more issues with libvirt/loop3 than docker/loop2, so I adapted @S1dney's solution from here for libvirt.

     

    […]

    That's interesting, thanks for the work!

    Do you have any metrics to share?

     

    I'm using a simple script via the User Scripts plugin to keep track of the total TB written every day:

     

    #!/bin/bash
    
    #>)>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>)
    #>)
    #>) Simple script to check the TBW of the SSD cache drives on daily basis
    #>)
    #>)>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>>)
    
    # Get the TBW of /dev/sdb
    TBWSDB_TB=$(/usr/sbin/smartctl -A /dev/sdb | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^4 }') 
    TBWSDB_GB=$(/usr/sbin/smartctl -A /dev/sdb | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^3 }') 
    
    echo "TBW on $(date +"%d-%m-%Y %H:%M:%S") --> $TBWSDB_TB TB, which is $TBWSDB_GB GB." >> /mnt/user/scripts/unraid/collect_ssd_tbw_daily/TBW_sdb.log
    
    
    # Get the TBW of /dev/sdg
    TBWSDG_TB=$(/usr/sbin/smartctl -A /dev/sdg | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^4 }')
    TBWSDG_GB=$(/usr/sbin/smartctl -A /dev/sdg | awk '$0~/LBAs/{ printf "%.1f\n", $10 * 512 / 1024^3 }')
    
    echo "TBW on $(date +"%d-%m-%Y %H:%M:%S") --> $TBWSDG_TB TB, which is $TBWSDG_GB GB." >> /mnt/user/scripts/unraid/collect_ssd_tbw_daily/TBW_sdg.log

    You would have to locate the correct devices in /dev though.
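
    A quick way to match device names to your cache SSDs (standard util-linux tool, nothing unRAID-specific):

    # List block devices with model and size so you can pick the right /dev entries
    lsblk -o NAME,MODEL,SIZE,MOUNTPOINT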

    I use it to look into the files once in a while to spot containers that write abnormally.

    If you could roll back the fix once, run it for a few days and then reapply it to see what savings you get, that would be great. Sure, you could use iotop or the GUI to spot MB/s, but this gives a bit more grasp on the longer term.

     

    45 minutes ago, caplam said:

    I think the problems with my VM were related to the qcow2 image format. […] Do I have to delete docker.img or move it elsewhere? Mine is 80GB so it takes a lot of space.

    You should only remove the docker image once you're 100% sure that there's no data in there you still need, because recreating or deleting it means that data is permanently lost.

    I'm not really familiar with Portainer, I can only speak for docker-compose, as I use that alongside the DockerMan GUI. Docker-compose-created containers are only visible within the unRAID GUI once started, so this might be a similar thing?

     

    Containers that were created outside of the unRAID docker GUI (DockerMan) also don't get a template to easily recreate them. But if you're using docker-compose or something, recreating should be easy, right?

    Link to comment
    55 minutes ago, S1dney said:

    You should only remove the docker image once you're 100% sure that there's no data in there you still need. […]

    I've dropped the use of docker-compose.

    The 3 dockers I recreated were recreated with the DockerMan GUI, and still only 2 appear in the GUI, even when they were started.

    The docker template in unRAID is very convenient. In my case there is a big downside: I have a poor connection, so redownloading 60+ GB of images will take me around 2 days if I shut down Netflix for the kids. 🥵

    I wish I could extract images from docker.img.

    Link to comment
    10 hours ago, caplam said:

    I've dropped the use of docker-compose. […] I wish I could extract images from docker.img.

    Hahaha, I see. You could try to cheat (at your own risk as always, but I don't think it could really hurt).

     

    - Put a # before the line that copies over the modified rc.docker script in the /boot/config/go file

    - Reboot, so that the changes revert and the docker image is mounted again.

    - Clear out the entire docker directory you've created (the new one, so not the docker.img 🙂; for the example I use /mnt/user/docker) so that just an empty docker directory remains

    - Stop all docker containers

    - Run cp -r /var/lib/docker /mnt/user/docker (this recursively copies all files from the point where the docker image is mounted to the location you'll be pointing the symlink at)

    - Remove the # again from the go file

    - Reboot.

     

    I “think” this should work; the only thing I could think of is that docker (since it's still running when you copy the files) holds a lock on some files, causing the copy to fail. If that is the case you would need to stop the docker service, but that would also unmount the image again, so to avoid that it would require some temporary changes to the rc.docker file again.

     

    It’s worth a try; you're only copying files out of the image and can always just empty the /mnt/user/docker directory again.

     

    EDIT:

    After some PMs with @caplam we can safely advise NOT to go down the copy-the-data-over rabbit hole.

    It seems like copying over the data will “inflate” it, most likely due to btrfs features like deduplication and/or layering/CoW. I'm not into btrfs enough to fully explain what's going on here, but I imagine you would be required to use more specific tools to handle copying that btrfs-specific data over.
    Recreating the containers (and thus redownloading them) creates all the needed btrfs subvolumes. If you're not a specialist I would not recommend messing with the btrfs data 🙂

     

    Edited by S1dney
    Update to advise not to take the copy over route
    Link to comment

    I tried to copy /var/lib/docker but it failed (out of space). There must be some volumes mounted in /var/lib/docker/btrfs, because looking at the size of the btrfs directory it's 315GB (docker.img is 60GB with 50GB used).

    Edited by caplam
    Link to comment
    22 minutes ago, caplam said:

    I tried to copy /var/lib/docker but it failed (out of space). There must be some volumes mounted in /var/lib/docker/btrfs, because looking at the size of the btrfs directory it's 315GB (docker.img is 60GB with 50GB used).

    Interesting.  

    I see a lot of files in /mnt/user/docker/btrfs/subvolumes that correspond to containers I deleted a while ago, so I reckon docker doesn't really clean up after itself. I might wipe out the entire /mnt/user/docker directory at some point to save some space; I have a fast download link so I don't care about redownloading 🙂

    However, this doesn't seem to be the issue here, I think.

    The /var/lib/docker/volumes folder contains persistent data, meaning it will most likely contain symlinks to your array.

    You could exclude certain directories from the copy action (like the volumes directory, since persistent data is persistent anyway), but I won't go into much detail here since I don't want people to wreck their systems when they mistype certain commands 🙂

     

    You have a PM!

    Link to comment
    14 hours ago, S1dney said:

    That's interesting, thanks for the work!

    Do you have any metrics to share?

    It's funny you mention that. I haven't rolled the workaround back yet, but I'm starting to wonder if it isn't fixing the issue so much as obfuscating it.

    I'm planning on rolling the fix back later today/tonight to see what happens.

    I got some metrics by basically doing the same thing as your script (except manually). According to SMART reporting, my cache drives are writing 16.93GB/hr.

    This is despite the fact that, when I implemented the workaround, I also moved several VMs and a few docker paths to my NVMe drives just to reduce the writes further.

     

    I'd be curious to know what others are seeing.

    Edited by JTok
    clarity
    • Like 1
    Link to comment
    12 hours ago, S1dney said:

    EDIT:

    After some PMs with @caplam we can safely advise NOT to go down the copy-the-data-over rabbit hole. […]

     

    A note on this from my experience. I actually did this and it worked, but when I went to make changes to the dockers it failed and they would no longer start. I think I could have fixed it by clearing out the mountpoints, but I opted to just wipe the share and then re-add all the dockers from "my templates" which worked just fine.

     

    So, can confirm  --would not recommend.

     

    I did copy the libvirt folder though and have not noticed any ill effects (...yet. haha)

    Edited by JTok
    • Like 1
    Link to comment

    Back on track.

    To make this short: I changed my cache from btrfs to xfs. In one hour loop2 has written 1400MB, so it seems good.

    The downside is that all my VMs and dockers are on the cache without redundancy.

    I didn't have to redownload the dockers; I simply moved the cache content to an unassigned SSD, made the change on the cache, and moved the content back.

    You can't count on the mover as it's so slow; it moved only the Plex appdata folder (13GB) in 3 hours.

    Now I'll wait for a fix; I prefer a cache pool over a single drive.

    And I have to find 2 new SSDs.

    Link to comment
    4 hours ago, caplam said:

    Back on track.

    I changed my cache from btrfs to xfs. In one hour loop2 has written 1400MB, so it seems good. […]

    That was my solution as well. I used a new NVMe SSD (which I'd purchased as a spare since the current cache was being worn out so quickly) as an XFS-formatted cache. loop2 writes are now down to a sane level; btrfs and loop2 together definitely seem to be the culprit. I would also prefer a cache pool, but this, with daily backups of appdata, is better for now.

    Link to comment

    Just as a data point. The average daily write volume to the cache drive when formatted btrfs, unencrypted, was just over 1.1 TB. Now that the drive is xfs, with the identical dockers and VMs, that has dropped to around 40 GB. That’s 4% of the previous amount.

    Link to comment

    Today I remembered to compare the LBAs since my last post on May 10th:

    May 10th = 48905653011 LBAs

    June 1st = 68670054075 LBAs

    Difference = 19764401064 LBAs = 9424GB = 9.424TB in 21 days

    448GB/day
    18.7GB/hour
    0.31GB/minute

     

    I guess it could be way better, since I'm not writing that much to the cache drive at all...
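
    For anyone wanting to redo the math, the conversion is just the LBA delta times the 512-byte sector size (a quick sketch using bc):

    # 512-byte sectors; integer division gives the GB figure quoted above
    echo "19764401064 * 512 / 1024^3" | bc    # -> 9424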

    Link to comment

    I tried just upgrading to 6.9b1 as part of my troubleshooting and it behaved exactly the same (all other things being equal).

    Link to comment

    Cheers, figured as much. 

     

    I'm starting to copy the cache over to convert it to XFS. The writes are to the point that they're saturating my SSD's write buffer, causing massive performance issues for anything on the cache.

     

    I'll be honest: I'll have a hard time going back to BTRFS after this. I think it'll be XFS and an hourly rsync or something until such a time as ZFS (hopefully) arrives to replace it. 

     

    Edit:

    Moved from an unencrypted RAID1 pool (1TB + 2x500GB 850 Evos) to a single 1TB unencrypted drive, and the writes to loop2 have gone from over 100GB in an hour to just over 100MB. All my containers and VMs are performing as expected now that the SSDs aren't choking on writes.

    Edited by -Daedalus
    Link to comment

    I just did this to my test unRAID server as well.
    I recreated my cache as XFS, coming from BTRFS: used the mover to move shares from my cache to the array while the VM and docker services were disabled, then set Use cache to Yes and activated the mover.

    I see no performance difference as of now...

    I hope the Unraid devs look at this issue...

    Edited by mdsloop
    Link to comment

    And today I moved from a mirrored BTRFS cache to a single XFS SSD cache on my main Unraid server... Took ages...

    And writes are much lower now... 100KB/s to 900KB/s, instead of 30+MB/s like before...

    When will the devs solve this issue with the abnormal BTRFS cache writes?
    I would love to get my mirrored cache SSDs back...

    Link to comment

    Another person with this issue signing in.

     

    I had 26TB written to my cache drive in 21 days, when I finally decided to investigate the high temp notifications etc.

     

    Changed from the official Plex to the binhex Plex container, which brought me down from TB/day to about 120GB-ish per day.

     

    Really looking forward to finding a resolution to this one.

    Link to comment

    I had the same issue: on Thursday loop2 was doing around 5GB/h; I managed to reduce that to ~2.5GB/h by turning off the pihole docker.

    On Friday my new SSD arrived, so I replaced the btrfs raid1 (2x ADATA SU800 256GB) with 1x encrypted xfs Crucial MX500 1TB. Currently loop2 does around 200-300MB/h.

    Link to comment
    On 6/7/2020 at 9:36 AM, Moz80 said:

    Another person with this issue signing in. […]


    I have just now looked at the SMART data for my SSD in unRAID and it has a line that says:

    202	Percent lifetime remain	0x0030	089	089	001	Old age	Offline	Never	11

    ...Does that really mean there is only 11% life left on my SSD that's less than a month old?


    Popping the LBAs written (57527742008) into a calculator shows me 26.79TB. The Crucial data sheet for the SSD states 180TB written as the drive's endurance (so I wasn't as worried)... but the SMART data says only 11%, so I'm freaking out a little now!
     

    Should I be worried? 


     

     

     


    Link to comment
    42 minutes ago, Moz80 said:

    Does that really mean there is only 11% life left of my ssd that’s less than a month old?

    No, you have 89% left.

    • Thanks 1
    Link to comment




