• [6.8.3] docker image huge amount of unnecessary writes on cache


    S1dney
    • Solved Urgent

    EDIT (March 9th 2021):

    Solved in 6.9 and up. Reformatting the cache to the new partition alignment and hosting docker directly on a cache-only directory brought writes down to a bare minimum.

     

    ###

     

    Hey Guys,

     

    First of all, I know that you're all very busy getting version 6.8 out there, something I'm very much waiting on as well. I'm seeing great progress, so thanks so much for that! Furthermore, I'm not expecting this to be at the top of the priority list, but I'm hoping someone on the development team is willing to invest some time (perhaps after the release).

     

    Hardware and software involved:

    2 x 1TB Samsung EVO 860, set up with LUKS encryption in a BTRFS RAID1 pool.

     

    ###

    TL;DR (but I'd suggest reading on anyway 😀)

    The image file mounted as a loop device is causing massive writes on the cache, potentially wearing out SSDs quite rapidly.

    This appears to happen only on encrypted caches formatted with BTRFS (maybe only in a RAID1 setup, but I'm not sure).

    Hosting the Docker files directory on /mnt/cache instead of using the loopdevice seems to fix this problem.

    A possible implementation idea is proposed at the bottom.

     

    Grateful for any help provided!

    ###

     

    I have written a topic in the general support section (see link below), but I have done a lot of research lately and think I have gathered enough evidence pointing to a bug. I was also able to build (kind of) a workaround for my situation. More details below.

     

    So, to see what was actually hammering the cache, I started with all the obvious things, like using a lot of find commands to trace files that were being written to every few minutes, and I also used the file activity plugin. Neither was able to trace down any writes that would explain 400GB worth of writes a day for just a few containers that aren't even that active.
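
    For reference, the kind of commands involved (a sketch only; the path and time window are placeholders, not the exact invocations used):

        # Accumulate per-process I/O totals; -o limits output to processes actually doing I/O
        iotop -ao

        # List files on the cache modified within the last 10 minutes
        find /mnt/cache -type f -newermt "-10 minutes" -print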

     

    Digging further, I moved the docker.img to /mnt/cache/system/docker/docker.img, so directly on the BTRFS RAID1 mountpoint, because I wanted to check whether the unRAID FS layer was causing the loop2 device to write this heavily. No luck either.

    This gave me a setup I was able to reproduce on a virtual machine though, so I started with a recent Debian install (I know, it's not Slackware, but I had to start somewhere ☺️). I created some vDisks, encrypted them with LUKS, bundled them into a BTRFS RAID1 setup, created the loopdevice on the BTRFS mountpoint (same as unRAID's /mnt/cache) and mounted it on /var/lib/docker. I made sure I had the NoCOW flag set on the IMG file like unRAID does. Strangely, this did not show any excessive writes; iotop showed really healthy values for the same workload (I migrated the docker content over to the VM).
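
    For anyone wanting to reproduce that VM setup, the rough sequence was something like this (a sketch only; device names, image size and mountpoints are placeholders):

        # Encrypt both vDisks with LUKS and open them
        cryptsetup luksFormat /dev/vdb && cryptsetup open /dev/vdb crypt1
        cryptsetup luksFormat /dev/vdc && cryptsetup open /dev/vdc crypt2

        # Bundle them into a BTRFS RAID1 pool and mount it
        mkfs.btrfs -f -d raid1 -m raid1 /dev/mapper/crypt1 /dev/mapper/crypt2
        mount /dev/mapper/crypt1 /mnt/cache

        # Create the image with NoCOW set (the flag must be applied while the file is still empty)
        touch /mnt/cache/docker.img
        chattr +C /mnt/cache/docker.img
        truncate -s 20G /mnt/cache/docker.img
        mkfs.btrfs -f /mnt/cache/docker.img

        # Attach the image as a loop device and mount it where dockerd expects it
        mount -o loop /mnt/cache/docker.img /var/lib/docker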

     

    After my Debian troubleshooting I went back to the unRAID server, wondering whether the loopdevice was created weirdly, so I took the exact same steps to create a new image and pointed the settings in the GUI there. Still the same write issues.

     

    Finally I decided to take the whole image out of the equation and took the following steps (roughly sketched in commands below the list):

    - Stopped docker from the WebGUI so unRAID would properly unmount the loop device.

    - Modified /etc/rc.d/rc.docker to not check whether /var/lib/docker was a mountpoint

    - Created a share on the cache for the docker files

    - Created a softlink from /mnt/cache/docker to /var/lib/docker

    - Started docker using "/etc/rc.d/rc.docker start"

    - Started my Bitwarden containers.
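
    In shell terms, those steps boil down to roughly the following (a sketch; the share name is an example):

        # Stop the docker service so unRAID cleanly unmounts the loop device
        /etc/rc.d/rc.docker stop

        # Host docker's working directory directly on the cache via a symlink
        # (rc.docker modified beforehand to skip its mountpoint check)
        mkdir -p /mnt/cache/docker
        ln -s /mnt/cache/docker /var/lib/docker

        # Start docker again and bring the containers up
        /etc/rc.d/rc.docker start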

     

    Looking at the stats with "iotop -ao" I did not see any excessive writing taking place anymore.

    I had the containers running for about 3 hours and got maybe 1GB of writes total (note that on the loopdevice this gave me 2.5GB every 10 minutes!).

     

    Now don't get me wrong, I understand why the loopdevice was implemented. Dockerd is started with options that make it run with the BTRFS driver, and since the image file is formatted with BTRFS, this works on every setup; it doesn't even matter whether the cache runs XFS, EXT4 or BTRFS, it will just work. In my case I had to point the softlink to /mnt/cache, because pointing it to /mnt/user would not allow me to use the BTRFS driver (obviously the unRAID user filesystem isn't BTRFS). Also, the WebGUI has commands to scrub the filesystem inside the image; it's all based on the assumption that everyone is running docker on BTRFS (which of course they are, because of the image 😁)
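
    Presumably the daemon invocation amounts to something like this (an illustration only, not the literal rc.docker contents; the flags shown are standard dockerd options):

        # BTRFS graph driver, since /var/lib/docker sits on BTRFS either way (image or cache)
        /usr/bin/dockerd --storage-driver=btrfs --data-root=/var/lib/docker &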

    I must say that my approach also broke when I changed something in the shares: certain services get restarted, causing docker to be turned off for some reason. No big issue, since it wasn't meant to be a long-term solution, just a way to see whether the loopdevice was causing the issue, which I think my tests did point out.

     

    Now I'm at the point where I definitely need some developer help. I'm currently keeping nearly all docker containers off all day, because 300-400GB worth of writes a day is just a BIG waste of expensive flash storage, especially since I've shown that it isn't needed at all. It does defeat the purpose of my NAS and SSD cache though, since their main purpose was hosting docker containers while allowing the HDDs to spin down.

     

    Again, I'm hoping someone on the dev team acknowledges this problem and is willing to invest some time. I did get quite a few hits on the forums and Reddit, but no one had actually pointed out the root cause of the issue.

     

    I'm missing the technical know-how to troubleshoot the loopdevice issues on a lower level, but I have been thinking about possible ways to implement a workaround, like adjusting the Docker Settings page to switch off the use of a vDisk and, if all requirements are met (pointing to /mnt/cache and BTRFS formatted), starting docker on a share on the /mnt/cache partition instead of using the vDisk.

    That way you would still keep all the advantages of the docker.img file (it works across filesystem types), and users who don't care about the writes could still use it, but you'd be massively helping out others who are concerned about them.
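
    As a very rough illustration of the idea, the startup logic could branch on such a setting (pseudocode-style sketch; the variable names are invented):

        # Hypothetical toggle: plain cache directory instead of the vDisk
        if [ "$DOCKER_BACKING" = "directory" ]; then
          # Requires a BTRFS-formatted cache and a path under /mnt/cache
          mkdir -p "$DOCKER_DIRECTORY"
          ln -sfn "$DOCKER_DIRECTORY" /var/lib/docker
        else
          mount -o loop "$DOCKER_IMAGE" /var/lib/docker
        fi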

     

    I'm not attaching diagnostic files, since they would probably not show anything useful.

    Also, if this should have been in feature requests, I'm sorry. But I feel that, since the current solution misbehaves in terms of writes, it can also be placed in the bug report section.

     

    Thanks though for this great product, I have been using it with a lot of joy so far!

    I'm just hoping we can solve this one so I can keep all my dockers running without the cache wearing out quickly.

     

    Cheers!

     

    • Like 3
    • Thanks 17



    User Feedback

    Recommended Comments



    Came across this issue on reddit, and after doing a bit of reading in this thread and elsewhere on the forum I'm quite concerned. BUT I don't know how to check how badly, if at all, I'm affected.

     

    So far I've installed iotop and libffi from the Nerd Tools plugin and run iotop -oa, but I don't know how to interpret the results. loop2 does seem to be writing more than anything else, but how do I know which disk it's writing to? Does it write only to the cache disk?

     

    Could someone help out by posting what commands a casual user could try to see what's happening, and how to interpret the results?

    Link to comment
    3 hours ago, nas_nerd said:

    How are you calculating 90TBW?

    Isn't the absolute easiest way just to look at the drive's SMART report? If you click on the name of the drive on the Main page you can download it from there.
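
    From the console, something like this should pull the same numbers (attribute names differ per vendor, and Total_LBAs_Written still needs multiplying by the sector size):

        smartctl -a /dev/sdX | grep -Ei 'Total_LBAs_Written|Data Units Written|Power_On_Hours'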

     

    I just did that: I ran iotop -oa -d 3600 and averaged 10 hours of usage to get a rough figure for how much data loop2 was generating. Multiply that by the power-on hours in the drive attributes and you get an approximation of how badly this bug is killing the drive and how much of the TBW it is responsible for.

     

    In my case my drive reported 64.1TBW total in the SMART data. I measured 8.6MB/s from loop2 averaged over a ten-hour period with users on Plex, access to my server, etc., or about 30.2GB/hour. My drive attributes showed 1064 power-on hours... so, rough napkin math, I'm looking at loop2 having generated ~31.4TBW by itself (basically halving the life of my drive). The rest will be from transferring about 12TB from my old NAS (I stupidly had the cache enabled for the initial transfer), downloads, heavy HandBrake H265 transcoding, a couple of VM installs and futzing around with my server in general. For comparison, after converting to XFS I'm generating ~9MB/minute from loop2, or what would be about ~0.6TBW over the lifetime of my drive.
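
    For anyone checking the napkin math (assuming that write rate held for all of the drive's power-on hours):

        8.6 MB/s × 3600 s ≈ 30.2 GiB/hour from loop2
        30.2 GiB/hour × 1064 power-on hours ≈ 31.4 TiB attributable to loop2
        31.4 TiB out of 64.1 TBW reported ≈ half the drive's recorded writes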

    Edited by chanrc
    Link to comment
    On 3/27/2020 at 2:07 PM, leo_poldX said:

    Same here!

     

    unRAID 6.8.3
    WD Black SN750 NVMe with 500GB, from 2020-01 (new!)

    Power-on hours: 1,362

    Data units written: 68.5 TB

     

    Rated TBW: 250TB

    Massive writes on the loop device

     

     

    2 months later:

     

    [SMART screenshot from 2020-05-14 attached]

     

    Link to comment
    11 hours ago, nas_nerd said:

    How are you calculating 90TBW?

    From my SSD's SMART data. I did not divide by the 512-byte sector size, but as before it is a disastrous amount: 42 TB in 26 days.

    [two SMART screenshots attached]

    Edited by muwahhid
    Link to comment

    I observed the following:

    •  BTRFS pool, unencrypted, 2 x 500GB SSD. All my containers started, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache write 47GB / 12h.
    • BTRFS pool, unencrypted, 2 x 500GB SSD. Most of my containers stopped, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache write 38GB / 12h.
    • XFS single 500GB SSD, unencrypted. All my containers started, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache write 7GB / 12h.

    Now testing encrypted XFS.

     

    Update:

    • XFS single 500GB SSD, encrypted. All my containers started, including Plex, Nextcloud, MariaDB, a few -rr containers (sonarr, etc), a few torrent containers, UniFi controller. Cache write 6.6GB / 12h.
    Edited by szymon
    • Thanks 1
    Link to comment
    On 5/12/2020 at 1:56 PM, S1dney said:

    It behaves as expected.

    All your docker containers are downloaded into the docker image (located in the system folder somewhere on the cache, docker.img is the file I think).

    After the changes you’ve made, you’re not mounting that anymore, but have docker targeted to a different directory.

    Docker will create the needed working directories upon service start, meaning that all your containers are still inside the docker.img file.

     

    I initially re-created them using the templates from the dockerMan GUI; this isn't too much work, and all persistent data should not reside in the docker.img anyway, or you might lose it if the docker.img gets corrupted. I guess you could also copy all data over before implementing the script that mounts the cache directory, but I would recreate the containers if I were you.

     

    You should also recreate the docker.img image once you're done with everything, so that if something changes in a future unRAID version which potentially breaks this, you'll notice that you have no containers after a reboot and know that the docker.img file is mounted again or something else is wrong 🙂

    This is great! I implemented your workaround and re-added the containers from the templates like you suggested, and the writes are drastically reduced!

     

    One thing I did notice, and I wanted to see if I'm crazy: making the change to not use the docker.img blocks network access from the host to the macvlan containers, even though I have it set to enabled.

    [screenshot from 2020-05-14 attached]

     

    For example, my Unraid is 10.0.0.2 and my Plex container (10.0.0.5) is on the custom macvlan adapter (br0). While not using the docker.img, pinging from Unraid to Plex never gets a reply, just "destination host unreachable". If I remove your workaround and docker uses docker.img, Unraid can ping the Plex container.

     

    It isn't the worst thing, but I do have some reverse proxies that aren't working until I get this figured out.

     

    Any thoughts?

    Link to comment
    1 hour ago, hovee said:

    This is great! I implemented your workaround and re-added the containers from the templates like you suggested, and the writes are drastically reduced!

     

    One thing I did notice, and I wanted to see if I'm crazy: making the change to not use the docker.img blocks network access from the host to the macvlan containers, even though I have it set to enabled.

    [screenshot from 2020-05-14 attached]

     

    For example, my Unraid is 10.0.0.2 and my Plex container (10.0.0.5) is on the custom macvlan adapter (br0). While not using the docker.img, pinging from Unraid to Plex never gets a reply, just "destination host unreachable". If I remove your workaround and docker uses docker.img, Unraid can ping the Plex container.

     

    It isn't the worst thing, but I do have some reverse proxies that aren't working until I get this figured out.

     

    Any thoughts?

    That's interesting.

    That must be an option implemented after version 6.8.0-rc7.

    I'm still running that version, because I needed a newer kernel and I didn't feel comfortable with the warnings issued for 6.9 beta 1. While I know the form-based authentication has some security issues, I prefer holding off, since my server is on the local LAN only.

     

    Can you send me the contents of 6.8.3's version of the rc.docker file in a PM (let's not go too far off topic on this bug report, so rather PM)? The start_docker() function must have been adjusted to include some settings on the docker daemon before starting it. Got me curious.

    Edited by S1dney
    Link to comment
    49 minutes ago, S1dney said:

    That's interesting.

    That must be an option implemented after version 6.8.0-rc7.

    I'm still running that version, because I needed a newer kernel and I didn't feel comfortable with the warnings issued for 6.9 beta 1. While I know the form-based authentication has some security issues, I prefer holding off, since my server is on the local LAN only.

     

    Can you send me the contents of 6.8.3's version of the rc.docker file in a PM (let's not go too far off topic on this bug report, so rather PM)? The start_docker() function must have been adjusted to include some settings on the docker daemon before starting it. Got me curious.

    Thank you for your help, and you're right, I don't want to derail this thread from the issue at hand.


    If anyone is curious, I was able to get it working after @S1dney's insight. To get it to work, I had to take the original rc.docker file from 6.8.3 and modify only the start_docker() function with the updates. That restored proper networking after a reboot.

    • Like 1
    Link to comment

    I stopped Docker and copied my docker.img file to a non-SSD array drive that was always spun up anyway. In Docker settings I pointed it to the new location and started Docker. All my containers started up normally and everything is working fine.

     

    Prior to doing this I was averaging over 1 GB/hr on loop2. I'm down to 140 MB/hr now. We'll see if that lower number stays down.

     

    Update: Almost 24 hours later I'm still running 140ish MB/hr.

    Edited by Dase
    Fixed units. Added update.
    Link to comment
    9 hours ago, muwahhid said:

    From my SSD's SMART data. I did not divide by the 512-byte sector size, but as before it is a disastrous amount: 42 TB in 26 days.

    [two SMART screenshots attached]

    Thanks, my SSD's SMART attributes don't show the data written like the screen caps some users above have posted.

    Link to comment

    Removed the BTRFS RAID1 pool. The cache is now a single 2TB SSD formatted with XFS; the second SSD is set up as an unassigned device. The greediest docker is disabled. The hungriest are PyCharm and VS Code; between the two of them they write 1GB in 10 minutes.

    Now, with my configuration, it is 1.13GB in 7 hours.
    Screenshots are attached.

     


    Edited by muwahhid
    Link to comment

    It seems this problem is not only related to cache disks ... I had the same problem, but I removed the cache disk from the equation and now my Disk 1 (XFS) is taking a hammering.

    Has anyone experienced this?

    Link to comment

    Brand new server (6.8.3), and I suddenly noticed 111 million writes to my SSD pool (2 x 2TB WD Blue 3D NAND) after little more than a week.

     

    Power on hours : 205 (8d, 13h)

    233 NAND_GB_Written_TLC     -O--CK   100   100   ---    -    1393
    234 NAND_GB_Written_SLC     -O--CK   100   100   ---    -    6789

     

    Both drives are almost identical (as you'd expect).

     

    I only have some Windows 10 VMs running, which I stopped and restarted. They seemed to write a lot during Windows boot-up and then calm down. However, over time the rate has increased again.

     

    No dockers are running, only VMs.  I do not think this is exclusively a docker issue.

     

    The loop2 process is nowhere to be seen in iotop; it's the VMs (which are idle most of the time).

     

    This is a disaster in the making and will toast the drives long before the 5-year warranty expires. Where is LT on this thread? Is it worth a new one? This is surely the single most important thing to look at right now, as customers out there with cache pools might be silently ruining their hardware.

    Edited by Nigel
    Link to comment
    13 hours ago, Christo said:

    It seems this problem is not only related to cache disks ... I had the same problem, but I removed the cache disk from the equation and now my Disk 1 (XFS) is taking a hammering.

    Has anyone experienced this?

    Some docker containers tend to misbehave occasionally.

    Stop all of them and start them one by one to see which one increases writes out of proportion; iotop -ao is easy to use for this.
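
    A minimal way to do that from the console (a sketch; the container names are examples):

        # Stop all running containers
        docker stop $(docker ps -q)

        # In a second terminal, leave iotop accumulating totals
        iotop -ao

        # Start containers one at a time and note which one drives loop2's total up
        docker start plex
        docker start mariadb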

     

    10 hours ago, Nigel said:

    Brand new server (6.8.3), and I suddenly noticed 111 million writes to my SSD pool (2 x 2TB WD Blue 3D NAND) after little more than a week.

     

    Power on hours : 205 (8d, 13h)

    233 NAND_GB_Written_TLC     -O--CK   100   100   ---    -    1393
    234 NAND_GB_Written_SLC     -O--CK   100   100   ---    -    6789

     

    Both drives are almost identical (as you'd expect).

     

    I only have some Windows 10 VMs running, which I stopped and restarted. They seemed to write a lot during Windows boot-up and then calm down. However, over time the rate has increased again.

     

    No dockers are running, only VMs.  I do not think this is exclusively a docker issue.

     

    The loop2 process is nowhere to be seen in iotop; it's the VMs (which are idle most of the time).

     

    This is a disaster in the making and will toast the drives long before the 5-year warranty expires. Where is LT on this thread? Is it worth a new one? This is surely the single most important thing to look at right now, as customers out there with cache pools might be silently ruining their hardware.

    That's interesting.

    I've always assumed this was a combination of docker and the loopdevice implementation; libvirt's image is also mounted in the same manner, but that image is usually smaller, so I'd expect it to be written to less. Not sure though. My processor (i3 9100) seems to take a big hit from just running 1 VM, so I never took the VM route; starting so much as a browser inside a Windows VM cranks up all cores to 100%. I tried numerous things like messing with settings and passing the iGPU to the VM, but all the same. Eventually I decided to stick with docker.

     

    What are the top 10 disk writers when running iotop -ao for 30 mins?

     

    Edited by S1dney
    Link to comment
    1 hour ago, S1dney said:

    ...

    That's interesting.

    I've always assumed this was a combination of docker and the loopdevice implementation; libvirt's image is also mounted in the same manner ...

     

    I had what might well be a related issue with a Windows 10 VM and torrent clients: every write of a small received chunk of a large file would, seemingly, lead to rewriting the entire size of the file every time.

    [ /bug-reports/stable-releases/680-massive-write-amplification-on-raid-1-btrfs-ssd-cache-pool-with-sparse-files-r811/ ]

     

    Temporarily "fixed" by turning off a normally desirable setting in the torrent clients that had worked as intended in earlier UnRaid releases. It looks like a systemic problem that came in with the transition from 6.7.2 to 6.8.0 codebase.

    Link to comment
    1 hour ago, S1dney said:

    I've always assumed this was a combination of docker and the loopdevice implementation; libvirt's image is also mounted in the same manner, but that image is usually smaller, so I'd expect it to be written to less.

    Most of my writes are also from my 3 Windows VMs. I also have dockers, but there's not much writing going to loop2, at least comparatively. iotop accumulated writes after a couple of minutes:

     

    [iotop screenshot attached]

    • Thanks 1
    Link to comment
    30 minutes ago, johnnie.black said:

    Most of my writes are also from my 3 Windows VMs. I also have dockers, but there's not much writing going to loop2, at least comparatively. iotop accumulated writes after a couple of minutes:

     

    [iotop screenshot attached]

    That looks identical indeed.

    In contrast, I don't see any dockerd commands in that output.

    From what I saw when I was troubleshooting this, the writes by loop2 would go up a lot whenever dockerd commands started to show. I assumed that a container was doing writes at that time, having docker interact with the loop device, which would in turn crank up the writes on it as well.

     

    In my case with docker I was quite certain that it was the loop device's implementation, since bypassing the loop device brought the writes down.

    Now in your case I don't see any loop device, so that makes me wonder if we're on the wrong track here. I'm not really sure how to track exactly where those writes are going, but the loop2 process eating up storage was a good indication for blaming the loop device.

     

    I guess you'd have to test on a system that also mounts libvirt's files directly onto the cache to find out.

    Thanks for checking though 👍

    Link to comment

    This has been running since last night (roughly 12 hours):

    [iotop screenshot attached]

     

    So 2 Windows 10 VMs are responsible for 583G in 12 hours.

    Link to comment
    2 hours ago, S1dney said:

    I guess you'd have to test on a system that also mounts libvirt's files directly onto the cache to find out.

    libvirt.img is already on the cache device on my server.

    Link to comment

    Just one more screenshot from me on this. I left iotop running since earlier, and you can see it's not even the VMs in general; for me it's mostly the Windows Server VM, which is the main one, but it was mostly idle during this. loop2 has few writes comparatively, and libvirt (loop3) doesn't even appear on the list.

     

    [iotop screenshot attached]

     

    I'm not sure if this is related to this topic, but I have been noticing an unusually large amount of writes to my cache device for some time; it's writing on average 1 or 2TB per day, some days more. I just never thought too much of it.

    Link to comment
    41 minutes ago, johnnie.black said:

    libvirt.img is already on the cache device on my server.

    Yeah, I know. It's in /mnt/cache/system/, as is the docker.img file.

    That img file is then mounted somewhere (on /var/lib/docker in docker's case), so that when the docker daemon writes to its /var/lib directory (new container images or some logging data, for example), it actually writes into the file on the cache, which is how it survives a reboot. That way of working was causing the writes on docker's end, as creating a symlink to a location on the cache and disabling the loopdevice makes it stop.

     

    I haven't really spent any time on the hypervisor side, but it looks like the libvirt image (located at /mnt/cache/system/libvirt/libvirt.img) mounts itself on /etc/libvirt.

    Looking through the files there, this doesn't seem like a directory that is written to much, as it just contains some XML, conf and non-volatile RAM files.

     

    Also, I just noticed your new post, that's crazy!!

    I must admit that I thought I had this pinned down a bit, but now that users are starting to report these numbers on the hypervisor side as well, I'm really starting to doubt whether docker even has anything to do with it.

     

    Like I said before, it would be worth checking what happens if you copy all files within /etc/libvirt (while the image is mounted) to a directory on the cache, create a symlink at /etc/libvirt pointing to that directory (/mnt/cache/somedir), and then modify the rc.libvirt file so that start_libvirtd() doesn't check for a mounted image. That is what stopped the writes on the docker side. Or...... have some of the devs hop on board here, as this is all unsupported and not recommended of course (although a great way to spend some hours)  :P :D 
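
    In the same spirit as the docker workaround, that test would look roughly like this (an untested sketch; the directory name is a placeholder):

        # With the image still mounted, copy its contents out to the cache
        mkdir -p /mnt/cache/somedir
        cp -a /etc/libvirt/. /mnt/cache/somedir/

        # Unmount the image and point /etc/libvirt at the cache directory instead
        umount /etc/libvirt
        rm -rf /etc/libvirt
        ln -s /mnt/cache/somedir /etc/libvirt

        # rc.libvirt's start_libvirtd() would also need its mounted-image check removed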

    Edited by S1dney
    Link to comment

    Two things I have found.

    1. If your VM images are in a share with Copy on Write set to Auto, create a new share with it disabled, and move/copy the images to the new share. Update your VMs to point to the new images in the new share. This seems to reduce the write rate considerably (a way to check and set this is sketched below the list). I made these changes on all 3 VMs, and 2 out of the 3 reduced to practically nothing for an idle machine, as you'd expect.
    2. The 3rd VM was still stubbornly writing out at a constant 3MB/s (~253GB/day). I finally tracked this down to the Origin game client. As soon as I signed out or shut down the Origin client, the excessive writes disappeared.
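
    For reference, the NoCOW state can be checked and set like this (paths are examples; +C only takes effect for files created after the flag is set, hence the copy to a fresh share):

        # A 'C' among the attribute flags means copy-on-write is disabled for that file
        lsattr /mnt/user/domains/win10/vdisk1.img

        # Set NoCOW on the new share's directory so newly created vdisks inherit it
        chattr +C /mnt/user/domains_nocow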

     

    Lesson learned for point one: always read the tooltips, as they say to set this to No for vdisk images.

     

    I still think there is something weird going on, because the Windows VM was not registering 3MB/s of data being written by Origin, rather 0.1MB/s. My suspicion is that some type of IO activity is being expanded dramatically at the filesystem layer for btrfs pools.

     

    This is on the newly built server. Now I've got to get some downtime on my older server to try to stop the crazy writing there and see if I can replicate the above success.

     

    Edit: Also noticed the temp on my SSD pool has dropped by 2 degrees since making these changes.

    Edited by Nigel
    Link to comment
    1 minute ago, Nigel said:

    My suspicion is that some type of IO activity is being expanded dramatically at the filesystem layer for btrfs pools. 

    I see that with MariaDB. Small (really small) writes to the databases generate gigabytes of written data in no time.

    Link to comment
    4 minutes ago, Nigel said:

    If your VM images are in a share with Copy on Write set to Auto, create a new share with this disabled

    Yes, mine are all set to Auto, but I want them like that, or else btrfs will also stop checksumming the data, and that's more important to me, even if I have to live with the extra writes. Because of this feature I detected silent corruption on an SSD I was using with a VM a few years ago. But you are correct, most users should have it off for vdisks, because of the increased fragmentation and likely the additional writes.

    Link to comment




