• Unraid OS version 6.8.0-rc9 available


    limetech

    A fairly large kernel update was published yesterday, as was a new WireGuard release, and I thought it important to update and sanity-check.  We'll let this bake over the weekend and, if no new big issues show up, we'll publish to stable.  Then we can move on to the 5.4 kernel in Unraid 6.9.

     

    -rc9 summary:

    • Update kernel to 4.19.88.
    • Update to latest WireGuard release.
    • Other package updates.

     

    Specific changes in [-rcN] are indicated in bold below.

     

    New in Unraid OS 6.8 release:

     

    The unRAIDServer.plg file (update OS) still downloads the new release zip file to RAM but then extracts directly to the USB flash boot device.  You will probably notice a slight difference in the speed of the extraction messages.  [-rc2] The 'sync' command at the end has been replaced with 'sync -f /boot'.
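
    For context, 'sync -f' (GNU coreutils) flushes only the filesystem that contains the named path rather than every mounted filesystem, so only the flash device needs to finish writing:

      # flush just the filesystem backing /boot (the USB flash device)
      sync -f /boot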

     

    Forms based authentication
    If you have set a root password for your server, when accessing the webGUI you'll now see a login form.  There is still only one user for Unraid, so enter root as the username.  This form should be compatible with all major password managers out there.  We always recommend using a strong password.  [-rc2] There is no auto-logout implemented yet; please click Logout on the menu bar or completely close your browser to log out.

     

    Linux kernel

    • [-rc8] Remains on 4.19
    • [-rc6/-rc7] included latest Intel microcode for yet another hardware vulnerability mitigation.
    • default scheduler is now 'mq-deadline' [-rc2], but this can be changed via the Settings/Disk Settings/Scheduler setting (a sysfs example follows this list).
    • enabled Huge Page support, though no UI control yet
    • binfmt_misc support
    • added "Vega 10 Reset bug" patch [-rc2]; 'navi-reset' patches removed [-rc5]
    • [-rc2] added out-of-tree (oot) Realtek r8125 driver: version 9.002.02
    • [-rc3] additional md/unraid changes and instrumentation
    • [-rc6] fix chelsio missing firmware
    • [-rc8] removed Highpoint r750 driver [does not work]
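
    For reference, outside the webGUI the active I/O scheduler for an individual device can be checked and changed through the standard Linux sysfs interface (sdb below is only a placeholder device name, and changes made this way do not persist across reboots):

      # the scheduler shown in [brackets] is the active one
      cat /sys/block/sdb/queue/scheduler
      # switch schedulers at runtime
      echo mq-deadline > /sys/block/sdb/queue/scheduler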

     

    md/unraid driver

     

    Introduced "multi-stream" support:

     

    • Reads on devices which are not being written should run at full speed.  In addition, if you have set the md_write_method tunable to "reconstruct write", then while writing, if any read streams are detected, the write method is switched to "read/modify/write".
    • Parity sync/check should run at full speed by default.
    • Parity sync/check is throttled back in presence of other active streams.
    • The "stripe pool" resource is automatically shared evenly between all active streams.

     

    As a result, we got rid of some tunables:

    • md_sync_window
    • md_sync_thresh

    and added some tunables:

    • md_queue_limit
    • md_sync_limit
    • [-rc2] md_scheduler

     

    Please refer to the Settings/Disk Settings help text for a description of these settings.
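
    As a rough command-line sketch only (the Settings page is the supported interface, and the exact tunable names and values below are assumptions on my part), Unraid's md driver is normally queried and tuned through the mdcmd helper:

      # dump current md/unraid state, including tunables (illustrative)
      mdcmd status | grep -i limit
      # set a tunable on the running array (hypothetical value)
      mdcmd set md_queue_limit 80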


    WireGuard support - available as a plugin via Community Apps.  Our WireGuard implementation and UI are still a work in progress; for this reason we have made this available as a plugin, though the latest WireGuard module is included in our Linux kernel.  Full WireGuard implementation will be merged into Unraid OS itself in a future release.  I want to give special thanks to @bonienl who wrote the plugin with lots of guidance from @ljm42 - thank you!  I also should give a shout out to @NAS who got us rolling on this.  If you don't know about WireGuard it's something to look into!
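
    If WireGuard is new to you: the plugin builds on the standard WireGuard tooling that ships alongside the kernel module.  As a taste, key generation and tunnel status checks from a shell look like this (generic WireGuard commands, not something the plugin requires you to run):

      # generate a key pair (keep the private key private)
      umask 077
      wg genkey | tee privatekey | wg pubkey > publickey
      # once a tunnel is up, list interfaces, peers and transfer counters
      wg show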

     

    Guide here:


    WS-Discovery support - Finally you can get rid of SMBv1 and get reliable Windows network discovery.  This feature is configured on the Settings/SMB Settings page and enabled by default.

    • Also on the same settings page is the Enable NetBIOS setting.  This is enabled by default; however, if you no longer have need for NetBIOS discovery you can turn it off.  When turned off, Samba is configured to accept only the SMBv2 protocol and higher (see the illustrative command after this list).
    • Added mDNS client support in Unraid OS.  This means, for example, that from an Unraid OS terminal session, to ping another Unraid OS server (e.g., 'tower') on your network you can use:
      ping tower.local
      instead of
      ping tower
      Note the latter will still work if you have NetBIOS enabled.
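
    For context, the "accept only SMBv2 protocol and higher" behavior above corresponds to Samba's 'server min protocol' option; you can inspect what the generated configuration ends up with using Samba's own testparm (output naming may vary by Samba version):

      # show the effective Samba settings and filter for the protocol floor
      testparm -sv 2>/dev/null | grep -i 'min protocol'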

     

    User Share File System (shfs) changes:

    • Integrated FUSE-3 - This should increase performance of User Share File System.
    • Fixed bug with hard link support.  Previously a 'stat' on two directory entries referring to the same file would return different i-node numbers, thus making it look like two independent files.  This has been fixed; however, there is a config setting on Settings/Global Share Settings called "Tunable (support hard links)".  [-rc2] The default value is Yes, but with certain very old media and DVD players which access shares via NFS, you may need to set this to No (a quick check is shown after this list).
      [-rc5] Fixed not accounting for devices not mounted yet.
    • Note: if you have a custom config/extra.cfg file, get rid of it.
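
    A quick way to see the corrected hard-link behavior from a terminal: both directory entries should now report the same inode number and a link count of 2 (the share and file names below are hypothetical):

      touch /mnt/user/test/a
      ln /mnt/user/test/a /mnt/user/test/b
      # both entries should print the same inode and links=2
      stat -c '%n inode=%i links=%h' /mnt/user/test/a /mnt/user/test/b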


    Other improvements/bug fixes:

    • Format - during Format any running parity sync/check is automatically Paused and then resumed upon Format completion.
    • Encryption - an entered passphrase is not saved to any file.
    • Fixed bug where a multi-device btrfs pool was leaving metadata set to dup instead of raid1 (see the check/convert commands after this list).
    • Several other small bug fixes and improvements.
    • [-rc5] Fixed bug where quotes were not handled properly in passwords.
    • Numerous base package updates [-rc2], including updating PHP to version 7.3.x and Samba to version 4.11.x.
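
    If you created a multi-device pool on an earlier release and want to check whether the dup-metadata bug affected you, standard btrfs-progs can show and, if needed, convert the profiles (illustrative; take a backup first, and note the webGUI can also perform a balance):

      # show data/metadata profiles of the cache pool
      btrfs filesystem df /mnt/cache
      # convert metadata chunks from dup to raid1 if needed
      btrfs balance start -mconvert=raid1 /mnt/cache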

     

    Known Issues and Other Errata

    • Some users have reported slower parity sync/check rates for very wide arrays (20+ devices) vs. 6.7 and earlier releases - we are still studying this problem.
    • [-rc6] Fixed: If you are using the Unassigned Devices plugin with encrypted volumes, you must use the file method of specifying the encryption passphrase.  Note that a file containing your passphrase must consist of a single null-terminated string with no other line-ending characters such as LF or CR/LF.
    • In another step toward better security, the USB flash boot device is configured so that programs and scripts residing there cannot be directly executed (this is because the 'x' bit is now set only for directories).  Commands placed in the 'go' file still execute because during startup that file is copied to /tmp first and then executed from there.  If you have created custom scripts you may need to take a similar approach (a minimal sketch follows this list).
    • AFP is now deprecated and we plan to remove support in Unraid 6.9 release.
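
    A minimal sketch of the copy-to-/tmp approach mentioned above (the /boot/custom path and script name are placeholders, not an Unraid convention):

      # e.g. called from the 'go' file
      cp /boot/custom/myscript.sh /tmp/myscript.sh
      chmod +x /tmp/myscript.sh
      /tmp/myscript.sh
      # alternatively, invoke the interpreter directly so no execute bit is needed
      bash /boot/custom/myscript.sh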

     

    A note on password strings

    Password strings can contain any character; however, white space (space and tab characters) is handled specially:

    • all leading and trailing white space is discarded
    • runs of embedded white space are collapsed to a single space character.

    By contrast, an encryption passphrase is used exactly as entered.
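
    Purely to illustrate the normalization described above (this is not how Unraid implements it), the same transformation expressed as a shell one-liner:

      printf '  my   secret\tpass  ' | sed -E 's/^[[:space:]]+//; s/[[:space:]]+$//; s/[[:space:]]+/ /g'
      # output: my secret pass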

     

    Version 6.8.0-rc9 2019-12-06

    Base distro:

    • btrfs-progs: version 5.4
    • ca-certificates: version 20191130
    • ebtables: version 2.0.11
    • gnutls: version 3.6.11.1
    • gtk+3: version 3.24.13
    • iproute2: version 5.4.0
    • iptables: version 1.8.4
    • keyutils: version 1.6
    • libepoxy: version 1.5.4
    • libnftnl: version 1.1.5
    • librsvg: version 2.46.4
    • libtasn1: version 4.15.0
    • lvm2: version 2.03.07
    • nano: version 4.6
    • pcre2: version 10.34
    • pkgtools: version 15.0 build 28
    • wireguard: version 0.0.20191205
    • xorg-server: version 1.20.6

    Linux kernel:

    • version 4.19.88



    User Feedback

    Recommended Comments



    What are the chances you see of the proper Intel 10GbE drivers (ixgbe) making it into the stable release?

    The performance difference with the in-tree driver is unfortunately too big for me to move on, and I would love to move to the new 6.8.

     

    Edit: Sorry, ignore my question.  I missed the whole story with rc8 and going back to the old kernel.  Saw in the rc8 announcement that ixgbe is back to the out-of-tree driver.  Yippee!  Going to start testing again today, as I have been ignoring the RCs since my Intels lost 300Mb/s on my last test in rc3.

    Thanks guys ✌️

    Edited by glennv

    Hmmm. Interesting. Just upgraded, and Intel 10G X540-AT2 read speed over SMB3 (from a raid10 3-drive SSD btrfs Unraid cache, to a client system with the same 10G card) dropped from:

    6.7.2 : write ~500-600MB/s , read ~815MB/s

    to 

    6.8rc9 : write ~417MB/s, read ~570MB/s

     

    Nothing changed, just upgraded. Downgrade to 6.7.2 and speed restored.

    Any ideas what could have caused that behavior ?  

     

    P.S. Measured with the Blackmagic Disk Speed Test.

    Edited by glennv
    7 hours ago, glennv said:

    Just upgraded, and Intel 10G X540-AT2 read speed over SMB3 (raid10 3-drive SSD btrfs Unraid cache, client with the same 10G card) dropped from 6.7.2 (write ~417MB/s, read ~815MB/s) to 6.8rc9 (write ~417MB/s, read ~570MB/s).  Nothing changed, just upgraded; downgrading to 6.7.2 restored the speed. [...]

    Did you try RC7 with kernel 5.3?

     

    RC9 is kernel 4.19

     

    Curious if newer kernel has regression fixes.

    Edited by Dazog

    The new kernel was a problem, as according to the release notes the ixgbe driver did not compile.  The last one I tried was rc3 and it was a mess for the 10G card: super unstable speeds and 400MB/s less than normal.

    The reason I tried again now is that it's the older kernel and the out-of-tree driver.  Seems stable and all good, only slower read speeds than before the upgrade.

    So I can live with it for a while, but it's still weird.  But as other things that could affect this have also changed, like the filesystem driver and maybe tunables, I am just wondering if there is something else that could explain it.

    It's the read part, so straight from memory cache, so it should be super fast and close to line speed, as it was before the upgrade.

    So it's either the newer network driver for the card or some funky tunable that comes into effect in this new release, but I'm just guessing.

     

    Edit: More convinced it's not the network card driver, as with iperf3 I can get 10G line speed.  So I'm looking more at the changes related to the filesystem drivers, maybe SMB3 changes, or other stuff that has changed with the upgrade.

    Edited by glennv

    Hmm, looks indeed like the FUSE driver.  Even a synthetic test goes insane on /mnt/user/SHARE vs /mnt/cache/SHARE.  Reading from memory cache (zero drive read activity and 128G of memory) is capped at 500MB/s, which matches the speed I saw over the network.  Write behavior is crazy, with stops/spikes compared to a straight burst when writing to /mnt/cache.  See the attached image.  Thinking about rolling back and forgetting about 6.8.

     

    root@TACH-UNRAID:/mnt/user/SAN-FCP# cd /mnt/cache/SAN-FCP/
    root@TACH-UNRAID:/mnt/cache/SAN-FCP# time dd if=/dev/zero of=testfile.tmp bs=8k count=1250000

    10240000000 bytes (10 GB, 9.5 GiB) copied, 9.74445 s, 1.1 GB/s


    root@TACH-UNRAID:/mnt/cache/SAN-FCP# time dd of=/dev/zero if=testfile.tmp bs=8k count=1250000

    10240000000 bytes (10 GB, 9.5 GiB) copied, 2.88004 s, 3.6 GB/s


    root@TACH-UNRAID:/mnt/cache/SAN-FCP# cd /mnt/user/SAN-FCP
    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd if=/dev/zero of=testfile.tmp bs=8k count=1250000

    10240000000 bytes (10 GB, 9.5 GiB) copied, 77.2359 s, 133 MB/s


    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd of=/dev/zero if=testfile.tmp bs=8k count=1250000

    10240000000 bytes (10 GB, 9.5 GiB) copied, 19.1843 s, 534 MB/s <<<<<<<<<<<< ?????

     

     

    [attached image: rc9.jpg]

     

    Edit: Rolling back now.  Too much craziness and I have no time for that.  Too bad.  Will try again at some other point.

    Edited by glennv

    Interesting. After rolling back, SMB3 read speed is back to 800+ MB/s as expected.

    Interestingly enough, the artificial dd tests like I did on 6.8rc9 show the same slow write speed and crazy stop/start write curve (maybe it's my btrfs/Samsung misalignment thing), but as I only need fast read speed (editing/color grading straight from cache) I did not even notice.  BUT, more importantly for the release comparison, the read speed is no longer capped at 500MB/s as in the 6.8rc9 test before, but 1.6GB/s (from memory, so as it should be).

     

    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd if=/dev/zero of=testfile.tmp bs=8k count=1250000

    10240000000 bytes (10 GB, 9.5 GiB) copied, 50.8538 s, 201 MB/s


    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd of=/dev/zero if=testfile.tmp bs=8k count=1250000

    10240000000 bytes (10 GB, 9.5 GiB) copied, 6.24655 s, 1.6 GB/s  <<<<<<<<<<<<<<<

     

    Back to business at least ;-) 

     

    [attached image: 6.7.2.jpg]

     

    And in case you're wondering, real-world tests are in line with the artificial tests.

    Copying a file from the Unraid server's cache to local storage does the same: capped and slow on rc9, and full-speed 600-800MB/s file copies on 6.7.2.

     

    6.8 is not for me...

    Edited by glennv
    55 minutes ago, glennv said:

    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd if=/dev/zero of=testfile.tmp bs=8k count=1250000

    8k is a pretty unrealistic block size.  Try 128K
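
    For numbers that are less distorted by the Linux page cache, a larger block size combined with a flush on writes and a cache drop before reads helps; a sketch reusing the same test file (adjust path and size to taste):

      # write test: force data to be flushed before dd reports a rate
      dd if=/dev/zero of=/mnt/cache/SAN-FCP/testfile.tmp bs=128k count=125000 conv=fdatasync
      # read test: drop the page cache first so the file is read from disk, not RAM
      sync; echo 3 > /proc/sys/vm/drop_caches
      dd if=/mnt/cache/SAN-FCP/testfile.tmp of=/dev/null bs=128k count=125000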


    True, and you are totally right there.  128k gives write speeds more in line with the Blackmagic test (which is the one I look at mostly, as it is focused on my type of workload) of ~500MB/s, and the curve is also less peaky/bumpy.  So write is not a big issue and is similar between releases (although still about 100MB/s less in real-world file copies/writes).

    I just did the dd test to see if anything stuck out that would explain why my 10G network read speed was capped at around 500MB/s when comparing between releases.  Since it comes entirely from cache in memory, it was a good indicator of whether another bottleneck was limiting my 10G network speed.  And there it was: read speed was capped artificially at 500MB/s, while on 6.7.2 it hits 1.6GB/s, hence the uncapped, close-to-line-speed results over the network as well.

    So I am not taking the dd tests' absolute numbers literally; I use them more as a bottleneck finder.

    And if they agree with other, more realistic tests, like the BM test and normal file copies, it's something to look at.

    Completely repeatable: 6.8rc9 gives me 200 to 300MB/s less read speed (800MB/s vs 500MB/s) over the net (SMB3), or even 1GB/s less locally in the exact same test between releases (1.6GB/s vs 500MB/s when cached in memory), and about 100MB/s less write speed, both locally and over the wire.

     

    Edit: Upgrading again to do the same comparison at 128k, just in case.

    Edited by glennv

    So I upgraded again to redo the tests (at 128k).  The effect is not as extreme as in the 8k tests, but there is still a considerable difference, which also shows up in the real-world BM speed test.

     

    6.7.2

     

    Local :

    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd if=/dev/zero of=testfile.tmp bs=128k count=125000

     

    16384000000 bytes (16 GB, 15 GiB) copied, 33.1774 s, 494 MB/s

     

    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd of=/dev/zero if=testfile.tmp bs=128k count=125000

     

    16384000000 bytes (16 GB, 15 GiB) copied, 9.76896 s, 1.7 GB/s

     

    Over 10G from the client, a more real-world test:

    [attached image: 6.7.2.jpg]

     

    6.8rc9

     

    Local

    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd if=/dev/zero of=testfile.tmp bs=128k count=125000

     

    16384000000 bytes (16 GB, 15 GiB) copied, 36.897 s, 444 MB/s

     

    root@TACH-UNRAID:/mnt/user/SAN-FCP# time dd of=/dev/zero if=testfile.tmp bs=128k count=125000

     

    16384000000 bytes (16 GB, 15 GiB) copied, 11.6999 s, 1.4 GB/s

     

    Over 10G from the client, a more real-world test:

    [attached image: 6_8rc9.jpg]

     

     

     

    In case it helps, the diagnostics (made on rc9):

    tach-unraid-diagnostics-20191207-2258.zip

     

     

     

    My educated guess is that this is the change involved here.  But it could be something completely different of course, as lots of stuff has changed.  But as the kernel is the same, we have to look at these things:

    -----

    User Share File System (shfs) changes:

    Integrated FUSE-3 - This should increase performance of User Share File System.

    ------

    Edited by glennv

    I was doing some disk testing as well to see if there was anything similar on my system.

    disk speed tests

    When I ran a speed test on my unassigned appdata disk, my system froze and crashed.  It appears to have kernel panicked, and I attempted to pull diagnostics prior to a reboot but was not successful.

    kernel panic

    failed diagnostics attempt

    I attempted a graceful reboot, but that hung as well and I was forced to reset the machine.


    ouch.... Not much fun.

     

    P.S. Only if you test via /mnt/user/XX will you test the FUSE filesystem overhead.  Everything else, including unassigned devices, bypasses it and does direct I/O.  I have all my appdata/VMs etc. on unassigned devices (on ZFS) and these do not experience any speed issue, as they are not connected to the array and the FUSE filesystem driver.  The cache is part of the array, and if accessed via /mnt/user (with the share set to cache=yes or, like in my case, cache=only) it uses the FUSE filesystem driver, a virtual userspace filesystem.  If you access it via /mnt/cache or directly via drive id etc., you bypass it, hence the faster speeds.
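
    One quick way to see which layer a given path goes through is util-linux findmnt; on a typical Unraid layout (an assumption on my part) the user share path reports fuse.shfs while the cache path reports btrfs:

      findmnt -T /mnt/user/SAN-FCP     # FSTYPE: fuse.shfs  (user share, FUSE layer)
      findmnt -T /mnt/cache/SAN-FCP    # FSTYPE: btrfs      (direct to the pool)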

     

    13 hours ago, glennv said:

    So I upgraded again to redo the tests (at 128k).  The effect is not as extreme as in the 8k tests, but there is still a considerable difference, which also shows up in the real-world BM speed test.

    rc9 is performing normally for me with SMB transfer from user shares:

     

    [attached image: bm.PNG]

     

    And real usage examples:

     

    Write to Unraid:

    [attached screenshot: Screenshot 2019-12-08 08_39_06.png]

     

    Read from Unraid:

    [attached screenshot: Screenshot 2019-12-08 11_09_15.png]

     

    Read speeds are usually 600 to 800MB/s and not constant, mostly because the NVMe devices on the desktop can't keep up; if I write to a ramdisk instead, it's faster:

    [attached screenshot: Screenshot 2019-12-08 11_11_02.png]

    On 12/7/2019 at 9:37 AM, Lev said:

    Any updates on the known issue regarding slower parity sync/check for large arrays?

    I would guess they have bigger fish to fry for now, but I also hope it gets looked at eventually, when time permits, since it's more of an annoyance.

     

    Also, please post the details of how much performance degraded in this report, so it's easier to keep track of affected users/servers.

     

     


    @johnnie.black I hear you.  But for me it's not, and it's consistent and repeatable.  I upgrade, performance drops a lot; I downgrade, and it's all back to speed.

    Just looking for the factor at play.

     

    Edit: Just did the test copying to a ramdisk and got the same results, capped at around 500MB/s, so it's capped at the source for me, not the target.

    Same test on 6.7.2: full speed (850+ MB/s over the 10Gb connection).

     

    Edit 2: I'm on macOS-only systems, so maybe it only affects macOS, or just my system.  But as I also see a speed drop doing a test purely on the server itself, I doubt it's a Windows/Mac thing.

     

    Edit 3: What I also noticed, and at first thought was just coincidence (until I read about another user with something similar and rechecked), is that browsing directories over SMB was much snappier on the previous release.

    Just moved back again and it feels like a breeze.  Staying there.  Getting tired of moving back and forth with the same results every time, whatever test I do.  So I will sit it out for a few releases and hope things improve.  I'm happy on the current release and not missing any features on this already amazing Unraid setup.

     

    Edited by glennv

    I went from rc7 to rc9 and my pfSense VM does not boot.  Had the "GSO" type bug prior.  I'm passing an Intel quad NIC through to my pfSense VM.  Thoughts here?

    Screen Shot 2019-12-09 at 12.03.33 PM.png

    Screen Shot 2019-12-09 at 12.08.47 PM.png

    Screen Shot 2019-12-09 at 12.10.20 PM.png

     

    Guess it's a known thing:

     

    Edited by joelones
    7 minutes ago, joelones said:

    I went from rc7 to rc9 and my pfSense VM does not boot.  Had the "GSO" type bug prior.  I'm passing an Intel quad NIC through to my pfSense VM.  Thoughts here?

    No diagnostics => No help.

     

    Also, please open a separate bug report.

    On 12/8/2019 at 6:36 AM, johnnie.black said:

    rc9 is performing normally for me with SMB transfer from user shares; read speeds are usually 600 to 800MB/s (Blackmagic and real-world screenshots above). [...]

    How are you getting those speeds?  Did you change something in Unraid to get them?  I have a 10Gb NIC on my motherboard and I only see 400MB/s for like 5 to 10 seconds before it drops off.  If the files I transfer are small it stays at 400.  However, if I do a bunch of files totaling 8GB or more, it will start at 400 and then go down to 90MB/s.  Did you make any adjustments to the settings in Unraid?  Also, I have my Unraid 10Gb connected to a 10Gb NIC on my other computer, and I am using SSDs, one on the server and one on the other computer.  Could you share your settings or point me to the right place please?

    Edited by Tucubanito07
    4 hours ago, Tucubanito07 said:

    it will start at 400 and then go down to 90MB/s.

    The initial transfer is cached to RAM; if you see a drop-off after a few seconds it means the devices can't keep up with the write speed.
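
    That RAM caching is the ordinary Linux page cache: writes are buffered in memory until the dirty-page thresholds are reached and then throttled to the speed of the underlying devices.  The thresholds can be inspected like this (the values shown are common kernel defaults, not Unraid-specific tuning):

      sysctl vm.dirty_ratio vm.dirty_background_ratio
      # vm.dirty_ratio = 20
      # vm.dirty_background_ratio = 10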

    15 hours ago, joelones said:

    I went from rc7 to rc9 and my pfSense VM does not boot.  Had the "GSO" type bug prior.  I'm passing an Intel quad NIC through to my pfSense VM. [...]

    I experienced the same yesterday.  I started up an older pfSense VM I use from time to time to play around with and test some configs.  I never had issues with that VM.  I only use virtual devices for it, nothing directly passed through.  I tried to restore some older backups and tried different machine type versions, but nothing worked.  I get the exact same halt screen in pfSense as joelones posted.  I'm not sure which Unraid version I last fired up the VM on, but nothing was changed within the VM.

    4 hours ago, johnnie.black said:

    The initial transfer is cached to RAM, if you see a drop off after a few seconds it means the devices can't keep up with the write speed.

    So in order to keep those transfer speeds I will need to build an array out of SSDs to stay close to 400MB/s, correct?


    @Tucubanito07 Nope, you can, like I have, use a large SSD-based cache and set your share(s) to cache=yes/only.

    Then, after the initial memory cache, the writes go to the fast SSDs (or NVMes, like Johnnie has).

    You can then offload to spinning disks at a convenient time (via the mover) or keep it in the cache, whatever suits your workflow.

    Edited by glennv
    6 minutes ago, Tucubanito07 said:

    So in order to keep those transfer speeds I will need to build an array out of SSDs to stay close to 400MB/s, correct?

    You need a cache pool with one or more fast NVMe devices, or a SATA SSD pool with multiple devices.  For example, my cache pool is a raid5 btrfs pool with 6 SSDs; on my desktop I'm using dual NVMe devices in raid1.

    6 minutes ago, johnnie.black said:

    You need a cache pool with one or more fast NVMe devices or a SATA SSD pool with multiple devices, for example my cache pool is a raid5 btrfs pool with 6 SSDs, on my desktop I'm using dual NVMe devices in raid1.

    OK, so that is what I was thinking.  Awesome, thank you so much for the info.





