• [6.8.3] docker image huge amount of unnecessary writes on cache


    S1dney
    • Solved Urgent

    EDIT (March 9th 2021):

    Solved in 6.9 and up. Reformatting the cache to the new partition alignment and hosting docker directly on a cache-only directory brought writes down to a bare minimum.

     

    ###

     

    Hey Guys,

     

    First of all, I know that you're all very busy getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! Furthermore, I'm not expecting this to be at the top of the priority list, but I'm hoping someone on the development team is willing to invest some time (perhaps after the release).

     

    Hardware and software involved:

    2 x 1TB Samsung EVO 860, setup with LUKS encryption in BTRFS RAID1 pool.

     

    ###

    TLDR (but I'd suggest reading on anyway 😀)

    The image file mounted as a loop device is causing massive writes on the cache, potentially wearing out SSDs quite rapidly.

    This appears to be only happening on encrypted caches formatted with BTRFS (maybe only in RAID1 setup, but not sure).

    Hosting the Docker files directory on /mnt/cache instead of using the loopdevice seems to fix this problem.

    A possible idea for implementation is proposed at the bottom.

     

    Grateful for any help provided!

    ###

     

    I have written a topic in the general support section (see link below), but I have done a lot of research lately and think I have gathered enough evidence pointing to a bug. I was also able to build a (kind of) workaround for my situation. More details below.

     

    So to see what was actually hammering the cache, I started doing all the obvious things, like using a lot of find commands to trace files that were written to every few minutes, and also used the File Activity plugin. Neither was able to trace down any writes that would explain 400 GB worth of writes a day for just a few containers that aren't even that active.

     

    Digging further, I moved the docker.img to /mnt/cache/system/docker/docker.img, so directly on the BTRFS RAID1 mountpoint. I wanted to check whether the unRAID FS layer was causing the loop2 device to write this heavily. No luck either.

    This did give me a situation I was able to reproduce on a virtual machine though, so I started with a recent Debian install (I know, it's not Slackware, but I had to start somewhere ☺️). I created some vDisks, encrypted them with LUKS, bundled them into a BTRFS RAID1 setup, created the loop device on the BTRFS mountpoint (same as /mnt/cache) and mounted it on /var/lib/docker. I made sure I had the NoCoW flag set on the IMG file like unRAID does. Strangely this did not show any excessive writes; iotop showed really healthy values for the same workload (I migrated the docker content over to the VM).
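
    For reference, the VM setup looked roughly like this (a sketch with example device names and sizes, not the literal commands from my notes):

      # Two virtual disks, LUKS-encrypted and opened
      cryptsetup luksFormat /dev/vdb && cryptsetup open /dev/vdb crypt1
      cryptsetup luksFormat /dev/vdc && cryptsetup open /dev/vdc crypt2

      # BTRFS RAID1 across both encrypted devices, mounted where the cache would be
      mkfs.btrfs -d raid1 -m raid1 /dev/mapper/crypt1 /dev/mapper/crypt2
      mkdir -p /mnt/cache && mount /dev/mapper/crypt1 /mnt/cache

      # BTRFS-formatted image with the NoCoW attribute (set while the file is still empty),
      # loop-mounted on /var/lib/docker the way unRAID does it
      touch /mnt/cache/docker.img && chattr +C /mnt/cache/docker.img
      truncate -s 20G /mnt/cache/docker.img
      mkfs.btrfs /mnt/cache/docker.img
      mkdir -p /var/lib/docker && mount -o loop /mnt/cache/docker.img /var/lib/docker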

     

    After my Debian troubleshooting I went back to the unRAID server, wondering whether the loop device was being created weirdly, so I took the exact same steps to create a new image and pointed the GUI settings there. Still the same write issues.

     

    Finally I decided to take the whole image out of the equation and took the following steps (a rough shell sketch follows below the list):

    - Stopped docker from the WebGUI so unRAID would properly unmount the loop device.

    - Modified /etc/rc.d/rc.docker to not check whether /var/lib/docker was a mountpoint

    - Created a share on the cache for the docker files

    - Created a softlink from /mnt/cache/docker to /var/lib/docker

    - Started docker using "/etc/rc.d/rc.docker start"

    - Started my Bitwarden containers.
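
    In shell terms (after the rc.docker edit, which I did by hand) those steps boil down to roughly the following; treat it as a sketch, not a literal transcript of my session:

      mkdir -p /mnt/cache/docker            # cache-only share for the docker files
      rm -rf /var/lib/docker                # the image is no longer mounted here
      ln -s /mnt/cache/docker /var/lib/docker
      /etc/rc.d/rc.docker start             # start the docker service on the symlinked path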

     

    Looking at the stats with "iotop -ao" I did not see any excessive writing taking place anymore.

    I had the containers running for like 3 hours and maybe got 1GB of writes in total (note that on the loop device this gave me 2.5GB every 10 minutes!).

     

    Now don't get me wrong, I understand why the loop device was implemented. Dockerd is started with options to make it run with the BTRFS driver, and since the image file is formatted with the BTRFS filesystem this works on every setup; it doesn't even matter whether the cache runs XFS, EXT4 or BTRFS, it will just work. In my case I had to point the softlink to /mnt/cache, because pointing it to /mnt/user would not allow me to start using the BTRFS driver (obviously the unRAID user share filesystem isn't BTRFS). Also, the WebGUI has commands to scrub the filesystem inside the image; it's all based on the assumption that everyone is running docker on BTRFS (which of course they are, because of the image 😁).
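
    For context, what the service effectively does is something along these lines (not unRAID's exact options, just the concept):

      # Run the docker daemon with the btrfs storage driver; the data root then
      # has to live on a BTRFS filesystem, which the /mnt/user share filesystem is not.
      dockerd --storage-driver=btrfs --data-root=/var/lib/docker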

    I must say that my approach also broke when I changed something in the shares; certain services get restarted, causing docker to be turned off for some reason. No big issue, since it wasn't meant to be a long-term solution, just a way to see whether the loop device was causing the issue, which I think my tests did point out.

     

    Now I'm at the point where I would definitely need some developer help. I'm currently keeping nearly all docker containers off all day, because 300-400GB worth of writes a day is just a BIG waste of expensive flash storage, especially since I've pointed out that it's not needed at all. It does defeat the purpose of my NAS and SSD cache though, since its main purpose was hosting docker containers while allowing the HDs to spin down.

     

    Again, I'm hoping someone in the dev team acknowledges this problem and is willing to invest some time. I did get quite a few hits on the forums and Reddit, but without anyone actually pointing out the root cause of the issue.

     

    I'm missing the technical know-how to troubleshoot the loop device issues on a lower level, but I have been thinking about possible ways to implement a workaround, like adjusting the Docker Settings page to allow switching off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), starting docker on a share on the /mnt/cache partition instead of using the vDisk.

    In this way you would still keep all advantages of the docker.img file (cross filesystem type) and users who don't care about writes could still use it, but you'd be massively helping out others that are concerned over these writes.
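
    Purely as a sketch of that idea (made-up names and paths, nothing like this exists in the current scripts): if the cache is already BTRFS, skip the vDisk and run docker straight on a cache-only directory, otherwise fall back to the image.

      DOCKER_ROOT=/var/lib/docker
      CACHE_DOCKER_DIR=/mnt/cache/docker                 # hypothetical cache-only location

      if [[ "$(findmnt -n -o FSTYPE /mnt/cache)" == "btrfs" ]]; then
        # Cache is BTRFS already: no vDisk needed, run docker directly on it
        mkdir -p "$CACHE_DOCKER_DIR"
        rm -rf "$DOCKER_ROOT"                            # a plain directory would block the symlink
        ln -s "$CACHE_DOCKER_DIR" "$DOCKER_ROOT"
      else
        # Today's behaviour: loop-mount the BTRFS-formatted image file
        mount -o loop /mnt/cache/system/docker/docker.img "$DOCKER_ROOT"
      fi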

     

    I'm not attaching diagnostic files since they would probably not show what's needed.

    Also, if this should have been in feature requests, I'm sorry, but I feel that, since the current solution is misbehaving in terms of writes, it can also be placed in the bug report section.

     

    Thanks though for this great product, I have been using it with a lot of joy so far!

    I'm just hoping we can solve this one so I can keep all my dockers running without the cache wearing out quickly.

     

    Cheers!

     

    • Like 3
    • Thanks 17



    User Feedback

    Recommended Comments



    Hello,

     

    my first post ever here, so I just have to use the opportunity to say what an amazing product unRaid is :)

     

    However, I too have the same issue as OP. It seems that my poor little SSD has had around 50TB of data written to it (bought June 2019, only used for unRaid).

     

    I was for a long time on version 6.7, but after upgrading directly to 6.8.3, I started getting a lot of notifications about excess heat on my SSD and saw that something wrote massively to the drive.

    I checked every docker app, turned them off - but still had the issue.

    (At the same time as upgrading the unRaid version, I also did extend my docker image to have more space, since I had to stop the array/boot the server anyways.)

     

    I almost never got notifications about excess heat on 6.7. (EDIT: as a result of the massive writing of course) This led me to googling and finding this post.

     

    I implemented the solution described here, and it worked! No more constant write and heat notifications on my SSD drive (BTRFS, encrypted).

     

    A bit scared to boot the server now, since I know rc.docker will reset. (Will it however boot my docker apps as normal, even without the symlink?)

    Edited by Thor
    Link to comment
    On 3/20/2020 at 10:00 AM, Thor said:

    Hello,

     

    my first post ever here, so I just have to use the opportunity to say what an amazing product unRaid is :)

     

    However, I too have the same issue as OP. It seems that my poor little SSD has had around 50TB of data written to it (bought June 2019, only used for unRaid).

     

    I was for a long time on version 6.7, but after upgrading directly to 6.8.3, I started getting a lot of notifications about excess heat on my SSD and saw that something wrote massively to the drive.

    I checked every docker app, turned them off - but still had the issue.

    (At the same time as upgrading the unRaid version, I also did extend my docker image to have more space, since I had to stop the array/boot the server anyways.)

     

    I almost never got notifications about excess heat on 6.7. (EDIT: as a result of the massive writing of course) This led me to googling and finding this post.

     

    I implemented the solution described here, and it worked! No more constant write and heat notifications on my SSD drive (BTRFS, encrypted).

     

    A bit scared to boot the server now, since I know rc.docker will reset. (Will it however boot my docker apps as normal, even without the symlink?)


    Hey man,

     

    If you reboot your server everything will be reset to default and unRaid will mount the normal docker image upon boot.

    Assuming you've created the share which I've mentioned, docker won't be able to see those containers, since they were created on the path targeted by the symlink and not inside the docker.img file.

    So it will see/start whatever containers you had created inside the docker image, or nothing if you created a new image before starting to work with the symlink approach.

     

    I've read through my posts real quick and noticed I have not yet provided my final solution, so let me share that.

    Basically what I did was:

     

    1. Create a share named docker (which has to be cache-only or this will break after the mover kicks in!)
    2. Create a directory "docker-service-mod" at /boot/config/, which will obviously survive a reboot since it's on flash. --> command: mkdir /boot/config/docker-service-mod
    3. Copy the original rc.docker file to flash as well (this allows me to easily check whether the unRaid devs have changed it in a later version if my docker service fails all of a sudden). --> command: cp /etc/rc.d/rc.docker /boot/config/docker-service-mod/rc.docker.original
    4. Remove the rc.docker file --> command: rm /etc/rc.d/rc.docker
    5. Create a new /etc/rc.d/rc.docker file with everything that was in the original one, but replace the start_docker() function with a custom one (as defined in the code block below).

      To get this onto unRaid you have multiple options. You could use vi to edit the function directly in the file at /etc/rc.d/rc.docker (vi has funky syntax though if you're not a regular vi user; I believe unRAID also has nano installed, which is more user-friendly for non-vi users).

      You can also create a file named rc.docker locally (on Windows for example), copy the content of the original rc.docker file into it, make the changes to the start_docker() function and use WinSCP to copy it to /etc/rc.d/. If you copy it from Windows, make it executable with "chmod +x /etc/rc.d/rc.docker" (not sure if it's needed, but setting the execute bit certainly won't hurt here, since it's a script).
    6. # Start docker
      start_docker(){
        if is_docker_running; then
          echo "$DOCKER is already running"
          return 1
        fi
        if mountpoint $DOCKER_ROOT &>/dev/null; then
          echo "Image is mounted, will attempt to unmount it next."
          umount $DOCKER_ROOT 1>/dev/null 2>&1
          if [[ $? -ne 0 ]]; then
            echo "Image still mounted at $DOCKER_ROOT, cancelling because this needs to be a symlink!"
            exit 1
          else
            echo "Image unmounted successfully."
          fi
        fi
        # In order to have a soft link created, we need to remove the /var/lib/docker directory or creating a soft link will fail
        if [[ -d $DOCKER_ROOT ]]; then
          echo "Docker directory still exists, removing it so we can use it for the soft link."
          rm -rf $DOCKER_ROOT
          if [[ -d $DOCKER_ROOT ]]; then
            echo "$DOCKER_ROOT still exists! Creating a soft link will fail thus refusing to start docker."
            exit 1
          else
            echo "Removed $DOCKER_ROOT. Moving on."
          fi
        fi
        # Now that we know that the docker image isn't mounted, we want to make sure the symlink is active
        if [[ -L $DOCKER_ROOT && -d $DOCKER_ROOT ]]; then
          echo "$DOCKER_ROOT is a soft link, docker is allowed to start"
        else
          echo "$DOCKER_ROOT is not a soft link, will try to create it."
          ln -s /mnt/cache/docker /var/lib 1>/dev/null 2>&1
          if [[ $? -ne 0 ]]; then
            echo "Soft link could not be created, refusing to start docker!"
            exit 1
          else
            echo "Soft link created."
          fi
        fi
        echo "starting $BASE ..."
        if [[ -x $DOCKER ]]; then
          # If there is an old PID file (no docker running), clean it up:
          if [[ -r $DOCKER_PIDFILE ]]; then
            if ! ps axc|grep docker 1>/dev/null 2>&1; then
              echo "Cleaning up old $DOCKER_PIDFILE."
              rm -f $DOCKER_PIDFILE
            fi
          fi
          nohup $UNSHARE --propagation slave -- $DOCKER -p $DOCKER_PIDFILE $DOCKER_OPTS >>$DOCKER_LOG 2>&1 &
        fi
      }

       

    7. Copy the new rc.docker file to flash. --> command: cp /etc/rc.d/rc.docker /boot/config/docker-service-mod/rc.docker 
    8. Modify the /boot/config/go file, so that unRaid injects the modified version of the rc.docker file into the /etc/rc.d/ directory before starting emhttp (which will also start docker). I used vi for that. My go file has several other things in it, but the relevant part is below. The chmod +x command might not be necessary, because it also worked before I added it; however, I feel more comfortable knowing it explicitly sets the execute bit:
    9. #!/bin/bash
      
      # Put the modified docker service file over the original one to make it not use the docker.img
      cp /boot/config/docker-service-mod/rc.docker /etc/rc.d/rc.docker
      chmod +x /etc/rc.d/rc.docker
      
      # Start the Management Utility
      /usr/local/sbin/emhttp &

       

    I've been using this for a while and have been very happy with it so far.

    Be aware though that this might break one day if Limetech decides that the rc.docker script should be modified (which is why I keep a copy of the original one like mentioned).

    It could also break if Limetech steps away from the Go file.

    A simple thing you can do is recreate the docker.img file from the GUI before you take the steps above (this will destroy every container and also any data that you have not made persistent!). Then, if a future update breaks this, you'll have no docker containers running (since docker is started on a blank image), and you should become aware of this pretty quickly.

     

    The only side effect of this approach I was able to determine is that opening the docker settings page takes a while to load, because the GUI tries to run some commands on the BTRFS filesystem of the image (which is not there anymore). This will freeze up the settings page for a bit and push some resource usage onto one CPU core until that command times out. Not a deal breaker for me; if you want to turn off docker, simply wait until the command times out and reveals the rest of the page (without the BTRFS status info).

     

    Good luck!

    Edited by S1dney
    • Thanks 1
    Link to comment
    On 3/21/2020 at 10:36 AM, S1dney said:

    I've been using this for a while and have been very happy with it so far.

    This is really not the recommended way to solve your issues.

     

    rc.docker gets updated with new versions of Unraid and this will certainly break old versions.

     

    It is always possible to make a feature request and do a proposal for improvement

    (sorry didn't go through all details here, so not sure what the actual changes are)

    • Like 1
    Link to comment
    20 hours ago, bonienl said:

    This is really not the recommended way to solve your issues.

     

    rc.docker gets updated with new versions of Unraid and this will certainly break old versions.

     

    It is always possible to make a feature request and do a proposal for improvement

    (sorry didn't go through all details here, so not sure what the actual changes are)

    Well, I'm actually expecting it to break at one point, which is why I've made a copy of the original rc.docker script so I can diff that against a possible new rc.docker script after a future upgrade.

    In short I just modified the start_docker function inside the rc.docker script to unmount the docker image, remove the /var/lib/docker directory and create a symlink to a directory directly on my cache. Simple hack to put the loopdevice out of the equation.
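
    For reference, that check is nothing fancier than a diff of the stashed copy against the script a new release ships (run while /etc/rc.d/rc.docker is still the stock one, for example with the cp line in the go file temporarily commented out):

      # Any output means Limetech changed rc.docker and my custom
      # start_docker() needs to be re-applied to the new version by hand.
      diff /boot/config/docker-service-mod/rc.docker.original /etc/rc.d/rc.docker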

     

    Also, I very much agree that this hack isn't a real fix, but I've bought expensive Samsung EVO SSDs for the 5-year warranty on them, and for me it was not acceptable to void the warranty in two years (or less) due to writes that were going into a black hole. I noticed this problem had been around for a long while and I wasn't expecting a fix soon, hence I created my own temporary patch. I can also imagine @Thor isn't really keen on waiting for a fix since his drives are overheating non-stop.

     

    Now, I did actually make a suggestion in my original post, but that would rather be a workaround, since solving the root cause of the massive writes would be better:

    Quote

    I'm missing the technical know-how to troubleshoot the loop device issues on a lower level, but I have been thinking about possible ways to implement a workaround, like adjusting the Docker Settings page to allow switching off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), starting docker on a share on the /mnt/cache partition instead of using the vDisk.

    In this way you would still keep all advantages of the docker.img file (cross filesystem type) and users who don't care about writes could still use it, but you'd be massively helping out others that are concerned over these writes.

     

    Like I said, I understand your point, and I understand that as a developer you're wary of modifications to system scripts that might cause failures later on.

    For me it was the only way to work around docker hammering the cache like crazy, and I felt like I needed to share it for anyone that has similar issues, of course with a warning that modifying these scripts comes with a risk :)

     

    I'm sure this bug report will get bumped once in a while and will eventually be solved, but I have a workable solution for now. Appreciate the work!

     

    🙌

    Edited by S1dney
    Link to comment

    I also have this problem. I've been using Unraid for just 14 days, and in these 14 days Unraid has written 802 GB to the cache (1 drive, BTRFS encrypted). I only have 8 containers and no VMs. I only use the cache for appdata, system and domains.

     

    It looks like this issue has existed for a long time, and I cannot understand how it has not been fixed. Is it not serious enough?

    • Like 1
    Link to comment

    I think it will be pushed further up the devs' todo list if more people experience it and notify them about it :)

     

     

    Link to comment

    Same here!

     

    unRAID 6.8.3
    WD Black SN750 NVME with 500GB from 2020-01 (new!)

    Power on hours: 1,362

    Data units written 68.5 TB

     

    TBW is 250TB

    Massive writes on the loop device.
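
    (In case anyone wants to pull the same numbers for their own drives: they come straight from SMART, for example via smartctl; the device paths below are just examples.)

      # NVMe drives report "Data Units Written" (one unit = 512,000 bytes):
      smartctl -a /dev/nvme0n1 | grep -i 'data units written'
      # SATA SSDs usually expose Total_LBAs_Written instead:
      smartctl -A /dev/sdb | grep -i total_lbas_written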

    Edited by leo_poldX
    Link to comment

    I am also seeing the problem.

     

    unRaid 6.8.3

    Intel 1TB 660p installed 7/2019

    Power on hours: 6451

    Data Written: 41.2 TB

     

    Cache only used for dockers and VMs.

     

    Link to comment

    Has anyone found a proper solution to this or heard from the devs? My 1-month-old 500GB 860 EVOs have 22TB of writes, and I'm not using encryption with BTRFS.

    Link to comment
    7 hours ago, Raesche said:

    Has anyone found a proper solution to this or heard from the devs? My 1-month-old 500GB 860 EVOs have 22TB of writes, and I'm not using encryption with BTRFS.

    The devs were involved in this topic already, but this has not yielded any results yet.

     

    No proper solution yet.

    Only way to get around those excessive writes (that I found) is by putting the loop device out of the equation.

    I've described a (non-supported) way to do so, which works well for me.

     

    You're running in a BTRFS pool if I understand correctly? Unencrypted?

    Thought this was solely based on encrypted pools.

     

    What you could do is install the "Nerd Pack" plugin from Community Applications, then head over to Settings -> "Nerd Pack" and install "iotop".

    Then from the command line run "iotop -ao" (which shows an accumulated view of all processes that actually read and write).

    See if one particular item (besides loop2) stands out in a big way; this could indicate one container running wild. I have found topics where certain databases would write tremendously to the cache.
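
    If you'd rather not install anything, you can also watch the loop device's own counters; a rough sketch (check with "losetup -a" which loop device actually backs docker.img, loop2 is just what it is on my system):

      # Print how many 512-byte sectors loop2 has written so far, once a minute.
      # Field 7 of /sys/block/<device>/stat is "sectors written".
      while sleep 60; do
        echo "$(date '+%H:%M:%S')  $(awk '{print $7}' /sys/block/loop2/stat) sectors written"
      done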

    Link to comment
    2 hours ago, S1dney said:

    The devs were involved in this topic already, but this has not yielded any results yet.

     

    No proper solution yet.

    Only way to get around those excessive writes (that I found) is by putting the loop device out of the equation.

    I've described a (non-supported) way to do so, which works well for me.

     

    You're running in a BTRFS pool if I understand correctly? Unencrypted?

    Thought this was solely based on encrypted pools.

     

    What you could do is install the "Nerd Pack" plugin from Community Applications, then head over to Settings -> "Nerd Pack" and install "iotop".

    Then from the command line run "iotop -ao" (which shows an accumulated view of all processes that actually read and write).

    See if one particular item (besides loop2) stands out in a big way; this could indicate one container running wild. I have found topics where certain databases would write tremendously to the cache.

    I may have to give your workaround a shot. I've been using iotop to watch things and found the official PMS Plex docker was causing a lot of writes. I moved to the linuxserver docker to see if that helps, but something was still causing loop2 to write a bunch; PMS just made it a lot worse. My Plex DB isn't even on my cache disks, it's on a separate SSD via unassigned drives.

     

    I'm running an XFS array with a BTRFS cache pool, unencrypted, so strangely the issue isn't isolated to encrypted pools.

    Link to comment

    Hey, 

    I'm seeing this exact same issue on an unencrypted raid1 cache drive too.

    My server hasn't even been online for 6 months and the drives have already done around 200 TBW. It is writing 5-15 MB/s to the cache drives all the time, regardless of what is running.

     

    unRaid 6.8.3

    Cache1 Samsung_SSD_970_EVO_Plus_1TB - 1 TB (nvme0n1)

    Cache2 Samsung_SSD_970_EVO_Plus_1TB - 1 TB (nvme1n1)

    Power on hours ~4000

     

    I've spent many hours trying to debug this so I am keen for a solution too otherwise my drives won't last 18 months.

     

    Link to comment
    4 minutes ago, nzdavid said:

    Hey, 

    I'm seeing this exact same issue on an unencrypted raid1 cache drive too.

    My server hasn't even been online for 6 months and the drives have already done around 200 TBW. It is writing 5-15 MB/s to the cache drives all the time, regardless of what is running.

     

    unRaid 6.8.3

    Cache1 Samsung_SSD_970_EVO_Plus_1TB - 1 TB (nvme0n1)

    Cache2 Samsung_SSD_970_EVO_Plus_1TB - 1 TB (nvme1n1)

    Power on hours ~4000

     

    I've spent many hours trying to debug this so I am keen for a solution too otherwise my drives won't last 18 months.

     

    Yeah, I ended up implementing S1dney's workaround. I disabled all my dockers except for pihole and it was still writing over 2.5 gigs an hour. Really would like to see the devs address this one.

    Link to comment

    Same issue here.

     

    Constant writes on the SSD BTRFS cache pool by loop2.

    so far 138.53 TB written in  8698h (11m, 27d, 10h)

    Unraid Version: 6.8.2 

     

    Hope that will be fixed soon, otherwise it's killing the SSD drives sooner or later.

    • Like 1
    Link to comment

    One interesting quirk I've noticed here (also running BTRFS cache pool, encrypted, on 6.8.3):

     

    I leave "iotop -ao" open, and after a minute or so, I have maybe 30MB or so written.

    I stop a Docker container, and I have 120-150MB written. 

    I start it, and it jumps another 100-150MB.

    I start and stop a VM, and this doesn't happen.

     

    I've no idea if this is expected behaviour, if it means anything, or if it helps at all, but I thought I'd mention it.

     

     

    server-diagnostics-20200416-1134.zip

    Link to comment
    13 minutes ago, -Daedalus said:

    One interesting quirk I've noticed here (also running BTRFS cache pool, encrypted, on 6.8.3):

     

    I leave "iotop -ao" open, and after a minute or so, I have maybe 30MB or so written.

    I stop a Docker container, and I have 120-150MB written. 

    I start it, and it jumps another 100-150MB.

    I start and stop a VM, and this doesn't happen.

     

    I've no idea if this is expected behaviour, if it means anything, or if it helps at all, but I thought I'd mention it.

     

     

    server-diagnostics-20200416-1134.zip 282.47 kB · 0 downloads

    Seems you're just hitting this bug, and for this bug that is expected behavior, I think :P

    It seems like every write docker does on the cache multiplies by 10 (at least).

    I recall seeing similar behavior; whenever docker starts to write, it hammers big time.
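
    A quick and dirty way to see that multiplication yourself is to compare the counters of the loop device with those of a physical cache member since boot (device names are examples, and keep in mind encryption/RAID1 add their own overhead on the physical side):

      # MiB written according to the loop device vs. the physical cache device.
      # Field 7 of /sys/block/<dev>/stat is sectors written (512 bytes each).
      for dev in loop2 nvme0n1; do
        awk -v d="$dev" '{printf "%s: %.0f MiB written\n", d, $7*512/1024/1024}' "/sys/block/$dev/stat"
      done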

    • Haha 1
    Link to comment

    Seems like there are multiple people having issues with this, which is great! Then perhaps, if the developers have some time left, they can check out our issue :)

    Link to comment

    I stumbled across this thread and quickly checked my server.

     

    Power on hours: 1,119

    Data units written: 180,027,782 [92.1 TB]

     

    I'm not using the cache that much: 1 Linux VM that I use from time to time and a couple of dockers. The server idling and doing pretty much nothing produces up to 5GB of writes in 10 minutes. It varies, sometimes only 1GB, sometimes I see numbers up to 5GB in 10-15 min. Disabling all the dockers brings it down close to 0 writes. It doesn't matter which Docker I start; after a couple of minutes a couple of gigs are written to the NVMe cache drive by loop2. Tried it with only DuckDNS and only Bitwarden running.

    Edited by bastl
    Link to comment

    Same thing for me, BTRFS unencrypted pool. With 14 days of uptime I have nearly 100 million writes. iotop shows them mostly coming from loop2.

    Link to comment

    Unraid 6.8.3, and you can put my hat in too. I as well am seeing a MASSIVE amount of black-hole writing to my SSD cache. I have a lot of data coming in and going out at all times, so high writes are not uncommon for me, but they are uncommon when Plex is off, I am not copying anything at all, and the only things running are a tiny little WordPress site that gets maybe 5 views a week (which are all me anyway), a very light-usage Nextcloud instance, and a Flarum instance with only one test post in it that I was toying around with just for fun.

     

    It's like it cycles constantly between 10-20 MB/s and zero. After doing some digging I found the thread linked earlier here, and it fits exactly with what I am seeing. From what I can tell this is a pretty big one that is just KILLING SSDs. I thought it was odd that my SSD, which isn't even two years old, is showing a pre-failure state; it's just gotten worn out! -_-

     

    Anyways, hope we see this one getting fixed ASAP.

    Edited by cammelspit
    Link to comment




