Unraid OS version 6.12.0-rc1 available


    limetech

    The 6.12 release includes initial ZFS support in addition to the usual set of bug fixes, kernel, and package updates.

     

    Please create new topics here in this board to report Bugs or other Issues.

     

    As always, prior to updating, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".

     

    Note regarding plugins:  This release includes upgrading PHP from v7.4 to v8.2.3, necessary for continued PHP security updates. We recommend you upgrade all your plugins before updating the OS, and be aware that any plugin which hasn't been updated recently (with a comment about PHP 8 in the change logs) may not work correctly. You may want to uninstall that plugin until the author has had time to update it.  Plugin authors are aware of necessary changes and many plugins have already been updated.  If you encounter an issue with a plugin, please create a nice post in the plugins support topic.

     

    Special thanks to all our beta testers and especially:

     

    @bonienl for his continued refinement and updating of the Dynamix webGUI.

    @Squid for continued refinement of Community Apps and associated feed.

    @dlandon for continued refinement of Unassigned Devices plugin.

    @ich777 for continued support of third-party drivers, recommendations for base OS enhancements and beta testing.

    @JorgeB for rigorous testing of the storage subsystem and helping us understand ZFS nuances, of which there are many.

    @SimonF for curating our VM Manager and adding some very nice enhancements.

     

    Thanks to everyone above and our Plugin Authors for identifying and putting up with all the changes which came about from upgrading PHP from v7 to v8.

     

    Finally a big Thank You! to @steini84 who brought ZFS to Unraid via plugin several years ago.

     


    Version 6.12.0-rc1 2023-03-14

    If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate.

     

    If you revert back from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading.  This is necessary because of the underlying change to cgroup v2 in 6.12.0-rc1.
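
    To confirm which cgroup version a booted system is using, one quick read-only check from a terminal (standard Linux, not specific to Unraid):

    # prints "cgroup2fs" on cgroup v2, "tmpfs" on the legacy v1 hierarchy
    stat -fc %T /sys/fs/cgroup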

    ZFS Pools

    New in this release is the ability to create a ZFS file system in a user-defined pool.  In addition you may format any data device in the unRAID array with a single-device ZFS file system.

     

    We are splitting full ZFS implementation across two Unraid OS releases.  Initial support in this release includes:

    • Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4 devices in a mirror vdev. Multiple vdev groups.
    • Support removing single device: if device still present in server, 'wipefs' is used to clear the partition table.
    • Support replacing single missing device with a new device of same or larger size.
    • Support pool rename.
    • Pool names must begin with a lowercase letter and only contain lowercase letters, digits, the underscore and dash. Pool names must not end with a digit.
    • Non-root vdevs cannot be configured in this release; however, they can be imported.
    • Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.

     

    A ZFS pool has three variables:

    • profile - the root data organization: raid0, mirror, raidz1, raidz2, raidz3
    • width - the number of devices per root vdev
    • groups - the number of root vdevs in the pool

    At time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.
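
    For reference, a pool the webGUI would describe as profile=raidz1, width=3, groups=2 corresponds to the following zpool layout (a sketch with a hypothetical pool name and device names; Unraid builds the pool for you, so this is illustration only):

    # 6 devices: two raidz1 vdevs ("groups") of three devices each ("width")
    zpool create tank raidz1 sdb sdc sdd raidz1 sde sdf sdg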

     

    Special treatment for root single-vdev mirrors:

    • A single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation.
    • A 2-device mirror can be increased to 3-device by adding a single device; similarly a 3-device mirror can be increased to 4-device mirror by adding a single device.
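
    In ZFS terms this kind of mirror expansion is a 'zpool attach'; a minimal sketch, assuming a hypothetical pool named 'cache' whose existing mirror member is sdb and whose new device is sdc:

    # attach sdc as an additional mirror member alongside sdb
    zpool attach cache sdb sdc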

     

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.
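
    The underlying operation is a 'zpool add' of a complete new vdev; a hedged sketch for a pool built from raidz1 vdevs of width 3 (hypothetical pool and device names):

    # add a second raidz1 vdev with the same width as the existing one
    zpool add tank raidz1 sdh sdi sdj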

     

    Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).
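
    After array Start you can verify what was imported from a terminal; 'zpool status' is read-only and safe to run (pool name is hypothetical):

    # show the vdev layout and device health of the imported pool
    zpool status tank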

     

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

     

    Autotrim can be configured as "on" or "off" (except for single-device ZFS volumes in the unRAID array).

     

    Compression can be configured as "on" or "off", where "on" selects "lz4". Future update will permit specifying other algorithms/levels.
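
    For reference, these webGUI toggles map to standard ZFS properties; a sketch of the equivalent commands with a hypothetical pool name (the webGUI normally manages these for you):

    # "Compression: on" selects lz4 (a dataset property)
    zfs set compression=lz4 tank
    # "Autotrim: on" is a pool property
    zpool set autotrim=on tank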

     

    When creating a new ZFS pool you may choose "zfs - encrypted", which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

     

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. Future update will include ability to configure the ARC via webGUI, including auto-adjust according to memory pressure, e.g., VM start/stop.
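
    For example, to cap the ARC at 8 GiB you could create 'config/modprobe.d/zfs.conf' on the flash device containing a single line (value in bytes; 8 GiB shown here, adjust to taste):

    options zfs zfs_arc_max=8589934592

    From a root shell the same file can be generated with shell arithmetic, e.g. echo "options zfs zfs_arc_max=$((8 * 1024**3))" > /boot/config/modprobe.d/zfs.conf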

    btrfs pools

    Autotrim can be configured as "on" or "off" when used in a pool.

     

    Compression can be configured as "on" or "off". "on" selects "zstd". Future update to permit specifying other algorithms/levels.
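
    Under the hood these settings correspond to standard btrfs mount options; a sketch of what an equivalent manual mount would look like (device and mount point are placeholders; Unraid applies the options for you):

    # zstd compression plus asynchronous discard (autotrim)
    mount -o compress=zstd,discard=async /dev/sdX1 /mnt/poolname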

    xfs

    Autotrim can be configured as "on" or "off" when used as a single-slot pool.

    Docker

    • CreateDocker: changed label "Docker Hub URL" to "Registry URL" because of GHCR and other new container registries becoming more and more popular.
    • Honor user setting of stop time-out.
    • Accept images in OCI format.
    • Add option to disable readmore-js on container table.

    VM Manager

    If you enable copy/paste for virtual consoles, you need to install additional software on the client in addition to the QEMU agent, if that has been installed. Here is the location for spice-vdagent for both Windows and Linux. Note: the copy/paste function will not work with the web SPICE viewer; you need to use virt-viewer.
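
    On a Linux client, spice-vdagent is typically available from the distribution's package manager; for example, on Debian or Ubuntu (assuming that distro family):

    # install the SPICE agent used for clipboard sharing
    sudo apt install spice-vdagent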

    • Add Serial option to vdisk.
    • SPICE bug fix for users with non-standard GUI ports defined.
    • OVMF for QEMU: version stable202302.
    • Fix for bus text.
    • Enable copy/paste option for virtual consoles.
    • Update Memory Backup processing for Virtiofs.
    • Fix lockup when no VMs are present.
    • Add support for rtl8139 network model.
    • Fix translation omission.
    • Add lock/unlock for sortable items.
    • Fix for SPICE mouse if copy/paste enabled.

    Dashboard

    The webGUI Dashboard has been redesigned, and it is now possible to move elements (tiles) up and down and between columns. This allows the user to organize the tiles in any way they desire.  There is a small "lock" icon on the menu bar which must be clicked to enable this function.

     

    Note: The "lock" icon also appears on the Docker and VM pages and must be clicked to rearrange the startup order.

    Linux kernel

    • version 6.1.19
    • md/unraid: version 2.9.27
    • CONFIG_FS_DAX: File system based Direct Access (DAX) support
    • CONFIG_VIRTIO_FS: Virtio Filesystem
    • CONFIG_ZONE_DEVICE: Device memory (pmem, HMM, etc...) hotplug support
    • CONFIG_USBIP_HOST: Host driver
    • CONFIG_INTEL_MEI: Intel Management Engine Interface
    • CONFIG_INTEL_MEI_ME: ME Enabled Intel Chipsets
    • CONFIG_INTEL_MEI_GSC: Intel MEI GSC embedded device
    • CONFIG_INTEL_MEI_PXP: Intel PXP services of ME Interface
    • CONFIG_INTEL_MEI_HDCP: Intel HDCP2.2 services of ME Interface
    • CONFIG_DRM_I915_PXP: Enable Intel PXP support
    • CONFIG_SCSI_FC_ATTRS: FiberChannel Transport Attributes
    • CONFIG_FUSION_SPI: Fusion MPT ScsiHost drivers for SPI
    • CONFIG_FUSION_FC: Fusion MPT ScsiHost drivers for FC
    • CONFIG_FUSION_CTL: Fusion MPT misc device (ioctl) driver
    • CONFIG_FUSION_LOGGING: Fusion MPT logging facility

    Base Distro

    • aaa_glibc-solibs: version 2.37
    • adwaita-icon-theme: version 43
    • at-spi2-core: version 2.46.0
    • bash: version 5.2.015
    • bind: version 9.18.12
    • btrfs-progs: version 6.2.1
    • ca-certificates: version 20221205
    • cryptsetup: version 2.6.1
    • curl: version 7.88.1
    • dbus: version 1.14.6
    • diffutils: version 3.9
    • dnsmasq: version 2.89
    • docker: version 20.10.23
    • e2fsprogs: version 1.47.0
    • encodings: version 1.0.7
    • file: version 5.44
    • freetype: version 2.13.0
    • fuse3: version 3.12.0
    • gawk: version 5.2.1
    • git: version 2.39.2
    • glib2: version 2.74.6
    • glibc: version 2.37
    • glibc-zoneinfo: version 2022g
    • gnutls: version 3.7.9
    • gptfdisk: version 1.0.9
    • gtk+3: version 3.24.37
    • harfbuzz: version 7.1.0
    • htop: version 3.2.2
    • iproute2: version 6.2.0
    • iptables: version 1.8.9
    • iputils: version 20221126
    • less: version 612
    • libICE: version 1.1.1
    • libSM: version 1.2.4
    • libX11: version 1.8.4
    • libXau: version 1.0.11
    • libXcomposite: version 0.4.6
    • libXdamage: version 1.1.6
    • libXdmcp: version 1.1.4
    • libXpm: version 3.5.15
    • libXrandr: version 1.5.3
    • libXres: version 1.2.2
    • libXxf86dga: version 1.1.6
    • libarchive: version 3.6.2
    • libdrm: version 2.4.115
    • libfontenc: version 1.1.7
    • libglvnd: version 1.6.0
    • libjpeg-turbo: version 2.1.5.1
    • libpcap: version 1.10.3
    • libpng: version 1.6.39
    • libpsl: version 0.21.2
    • libwebp: version 1.3.0
    • libxkbcommon: version 1.5.0
    • libxkbfile: version 1.1.2
    • libxshmfence: version 1.3.2
    • lmdb: version 0.9.30
    • logrotate: version 3.21.0
    • lsof: version 4.98.0
    • lz4: version 1.9.4
    • lzlib: version 1.13
    • mc: version 4.8.29
    • mcelog: version 191
    • mpfr: version 4.2.0
    • nano: version 7.2
    • ncurses: version 6.4
    • nginx: version 1.23.3
    • nghttp2: version 1.52.0
    • openssh: version 9.2p1
    • openssl: version 1.1.1t
    • openssl-solibs: version 1.1.1t
    • openzfs: version 2.1.9
    • pango: version 1.50.14
    • pciutils: version 3.9.0
    • pcre2: version 10.42
    • php: version 8.2.3
    • php-libvirt: version 0.5.7
    • php-markdown: version 2.0.0
    • samba: version 4.17.4
    • sqlite: version 3.41.0
    • sudo: version 1.9.13p2
    • sysstat: version 12.7.2
    • tdb: version 1.4.8
    • tevent: version 0.14.1
    • traceroute: version 2.1.2
    • transset: version 1.0.3
    • tree: version 2.1.0
    • usbutils: version 015
    • xcb-util: version 0.4.1
    • xdriinfo: version 1.0.7
    • xf86-video-vesa: version 2.6.0
    • xfsprogs: version 5.13.0
    • xhost: version 1.0.9
    • xinit: version 1.4.2
    • xkbcomp: version 1.4.6
    • xkeyboard-config: version 2.38
    • xorg-server: version 21.1.7
    • xprop: version 1.2.6
    • xrandr: version 1.5.2
    • xset: version 1.2.5
    • xterm: version 379
    • xz: version 5.4.1
    • zstd: version 1.5.4

    Misc

    • cgroup2 now the default
    • do not mount loopback images using directio
    • Patch upgradepkg to prevent replacing existing package with older version.
    • NFS: enable UDP transport
    • emhttp: fix cache pool (null) syslog strings
    • emhttp: fix cache pool display wrong device size for selected replacement device
    • mover: fixed bug: improper handling of symlinks
    • shfs: ignore top-level hidden directories (names beginning with '.')
    • wireguard: add SSL support for WG tunnel IP addresses (myunraid.net wildcard certs only)
    • webgui: support PHP8, increase PHP max memory from 128M to 256M
    • webgui: ManagementAccess: Disable Provision/Renew/Upgrade buttons when no IP on eth0
    • webgui: ManagementAccess: Support wireguard local IP addresses in combination with myservers.unraid.net SSL cert
    • webgui: Move "view" icon on Main and Shares page to the left
    • webgui: Dashboard: fix regression error in "select case"
    • webgui: Dashboard: make items moveable between columns
    • webgui: Keep dismissed banners hidden for a month
    • webgui: Dashboard: API for adding custom tiles
    • webgui: Dashboard: rearrange processor information
    • webgui: Dashboard: rearrange UPS info
    • webgui: Dashboard: rearrange memory info
    • webgui: Dashboard: VPN header rearrangement
    • webgui: Dashboard: header rearrangements
    • webgui: Add jqueryUI touch punch for mobile devices
    • webgui: Changed ID to CLASS for elements occurring more than once
    • webgui: Make header in white and black themes scrollable
      • When more items are present than screen space, the user can now scroll through them (previously these items were invisible)
    • webgui: Dashboard and Docker: introduce lock button for sortable items
      • By default sortable items are locked, which allows mobile devices to scroll the page. Upon request items can be made sortable
    • webgui: Users: add icon to title bar
    • webgui: Tools: new function -> PHP Settings
      • View PHP info
      • Configure error reporting
      • Open LOG to see errors in real-time
    • webgui: System info: fix reading inactive ports
    • webgui: Plugin: Include the actual command being executed
    • webgui: System info: cache enhancement
    • webgui: System info: memory enhancement
    • webgui: DeviceInfo: disable buttons when erase operation is running
    • webgui: Docker: filetree corrections
    • webgui: Fixed: Dashboard: show heat alarm per pool
    • webgui: Notifications: revised operation
      • Autoclose new notifications after 3 seconds
      • Fix notifications reappearing after closure
    • webgui: DeviceList: add FS type in offline state
    • webgui: Add notification agent for Bark
    • webgui: Main: hide browse icon when disk is not mounted
    • webgui: Diagnostics: add additional btrfs and zfs info
    • webgui: Dashboard: add ZFS memory usage
    • webgui: Revised New Permissions
      • Select either disks or shares (not both)
    • webgui: Add testparm to diagnostics
    • webgui: Support new UD reserved mount point of /mnt/addons



    User Feedback

    Recommended Comments



    Come on, folks. When you experience a problem that has been previously reported in this thread, do NOT post another message in this thread about it.

     

    First look in this Prerelease Bug Report sub-forum for a bug report about what you found.  If you have more information that could help with the solution, post in that thread and provide complete logs and diagnostics files. 

     

    If you don't find a thread about the problem, create a bug report by creating a thread about it!  If you want to include a snippet of a log file in your narrative, be sure that you attach the complete log file. (If you don't know how to create a new thread, it is very simple.  Just click on the 'Start new topic' button at the top or bottom of this page.)


    How is everyone handling the default shares when using zfs? So if you use a dummy USB to start the array with, and set up a ZFS pool as a user share, how are you migrating the default shares to the user share? Or do those need to stay on the array for the time being?

     

    Edit: I’m reading the zfs plug-in documentation linked below and it has suggestions for keeping everything off the array and onto the pool.

    25 minutes ago, JorgeB said:

     

    It's in the release notes, create this file:

    /boot/config/modprobe.d/zfs.conf

    with the amount you want, e.g., this is for 64GB:

    options zfs zfs_arc_max=67060137984

    Note that with ZFS on Linux the ARC should not be set to a value larger than half of the total installed RAM, even if it's otherwise unused, since due to memory fragmentation it can end up exhausting the server's RAM.

    Thanks, I believe that was the issue. I reverted the update, then removed some ZFS plugins that were installed. Updated again and the issue went away.

     

    Only one issue left that I can tell. The top of the Web GUI looks a little strange; there are side-to-side scroll arrows. I tried opening in incognito to make sure it wasn't a cache issue and it's still there.


    On 3/16/2023 at 2:04 PM, Liwanu said:

    Only one issue left that I can tell. The top of the Web GUI looks a little strange; there are side-to-side scroll arrows. I tried opening in incognito to make sure it wasn't a cache issue and it's still there.

    Try a different browser please and/or the same browser from a different computer.

    This is a browser problem and was already reported in the testing phase; not sure what causes this.

     

    Also make sure that you've installed the GPU Statistics plugin from @SimonF, since @b3rs3rk hasn't merged his pull request on GitHub yet and the plugin from the CA App is not yet compatible with 6.12:

    https://raw.githubusercontent.com/SimonFair/gpustat-unraid/master/gpustat.plg

     


    This is great news and really exciting. This opens up a completely new dimension for Unraid and I cannot wait to play more with this.

    12 hours ago, B_Sinn3d said:

    Great work, guys. Looks like lots of good improvements. Gonna love the customizable dashboard.

     

    I will have to read up on the advantages of ZFS, but can anyone point out some use cases that would make it more valuable to the normal Unraid user that just has the disks formatted in xfs/btrfs with single/dual parity? Is read/write performance better with ZFS?

    I wrote some points on why and how I use ZFS -> 

     


    Just going to leave this one here as it might help someone.

    Coming from 6.11.5 with the Docker Folder plugin installed, it messed up my Docker start and as such I could not manually start containers. Uninstalling the Docker Folder plugin fixed the issue.

    Moreover, the folders themselves and their functionality were gone on this RC1, and reinstalling doesn't restore the functionality but brings back the inability to start containers manually :)


    Quick question...

     

    I currently have 8 drives in my array. I'm wanting to pull 5 of them into a ZFS pool (5 18TB drives). Due to storage limitations, I can't pull all of them at once. Is it possible to set up the pool initially with 3 drives, move the data from the 2 drives still in the array to the new ZFS pool, then add the last 2 drives from the array to the pool?

     

    Thanks.

    22 minutes ago, geoff.gibby said:

    Due to storage limitations, I can't pull all of them at once. Is it possible to set up the pool initially with 3 drives, move the data from the 2 drives still in the array to the new ZFS pool, then add the last 2 drives from the array to the pool?

    I assume you intend to use a raidz pool? If yes, raidz expansion is not currently supported by ZFS; there's some work to support that, but no idea when it will be merged. If you had, for example, 6 disks you could start with 3 in raidz1 and then add a second vdev with another 3, but you would lose 2 disks to parity.

    1 hour ago, JorgeB said:

    I assume you intend to use a raidz pool? If yes, raidz expansion is not currently supported by ZFS; there's some work to support that, but no idea when it will be merged. If you had, for example, 6 disks you could start with 3 in raidz1 and then add a second vdev with another 3, but you would lose 2 disks to parity.

    I was afraid of that. Would there be any benefit to formatting my array drives in zfs as opposed to btrfs?

    3 hours ago, Liwanu said:

    Thanks, I believe that was the issue. I reverted the update, then removed some ZFS plugins that were installed. Updated again and the issue went away.

     

    Only one issue left that I can tell. The top of the Web GUI looks a little strange; there are side-to-side scroll arrows. I tried opening in incognito to make sure it wasn't a cache issue and it's still there.


    The same is happening for me with Firefox (latest update), but if I try to open it with Chrome, it doesn't.

    33 minutes ago, geoff.gibby said:

    Would there be any benefit to formatting my array drives in zfs as opposed to btrfs?

    ZFS is more reliable than btrfs, especially for raid5/6 pools, which are still considered experimental by the btrfs maintainers; btrfs is usually OK for raid0/1/10. btrfs is more flexible though: you can add and remove devices from any pool type.

    6 hours ago, Xploit61 said:

    Confirmed: after the upgrade I have also lost VMs in VM Manager. Some are installed and I have 1 Windows VM passed through from an SSD.

     

    @ross232 @Daryl Williams @Govnah

     

    Seems the outdated Docker Folder plugin may be the culprit; if you have it, try removing it.

    13 hours ago, ross232 said:

    My main VM has started (and I'm using it now) but I'm unable to see any VMs listed in the Virtual Machines page. [screenshots attached]
    Edit: I can see them listed on the dashboard however :/

    Can you raise a bug report for this?

    Also attach the output of running from the command line: virsh list --all

    Also, can you include diagnostics?

    Which OS version did you upgrade from?


    Why can't I update Unraid to version 6.12? It says up to date, and even when I go into Plugins and install manually it says "plugin: not reinstalling same version". I'm currently still on version 6.11.5 of Unraid.

    12 minutes ago, SimonF said:

    Can you raise a bug report for this?

    Also attach the output of running from the command line: virsh list --all

    Also, can you include diagnostics?


    Done :) 

     

    5 minutes ago, Hellomynameisleo said:

    Why can't I update Unraid to version 6.12? It says up to date, and even when I go into Plugins and install manually it says "plugin: not reinstalling same version". I'm currently still on version 6.11.5 of Unraid.

    What version is listed on the "Next" branch? 6.12 isn't stable yet.

    2 hours ago, SmartPhoneLover said:

    The same is happening for me with Firefox (latest update), but if I try to open it with Chrome, it doesn't.

    Please see: Click


    Question: how does the mover work? It appears that we don't have the cache/array functionality with the zfs pools? So how does a cache disk work with zfs? I am planning on creating a cache disk as a single vdev; how would I use that as a cache for the zfs pool?

    Related question: I created datasets under my zfs pool, and those datasets show up in the Shares tab BUT show the size of the array disk, not the zfs pool, and the sizes don't accurately reflect what's there. I can still use the datasets manually for vm/docker/appdata though. Should I file a bug report, or is this expected?

    1 hour ago, Jclendineng said:

    It appears that we don't have the cache/array functionality with the zfs pools?

    At least right now, Mover will move files from a pool to the Unraid array, not to other pools.

     

    1 hour ago, Jclendineng said:

    Related question: I created datasets under my zfs pool, and those datasets show up in the Shares tab BUT show the size of the array disk, not the zfs pool, and the sizes don't accurately reflect what's there. I can still use the datasets manually for vm/docker/appdata though

    When creating a share that should only live on a ZFS pool, set "Use cache pool" to "Only" and then specify which pool to use. On a ZFS pool, that will create a dataset.

    If "Use cache pool" is set to a different option then your ZFS pool is acting as a cache drive for the array, so it makes sense that the array size would be included.

     

    Also, let's say your share name is "Data". Regardless of the "Use cache pool" setting, any array drive or pool with a top level directory named "Data" will participate in the share. So if array disks are unexpectedly being included in your Data share, you probably need to delete a "Data" directory from a disk or two.

    BTW, the Fix Common Problems plugin can warn about this situation.
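
    A quick way to check from a terminal which array disks carry a given top-level directory (the share name "Data" is an example; substitute yours):

    # lists every array disk that has a top-level "Data" directory
    ls -d /mnt/disk*/Data 2>/dev/null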

    50 minutes ago, ljm42 said:

    At least right now, Mover will move files from a pool to the Unraid array, not to other pools.

     

    When creating a share that should only live on a ZFS pool, set "Use cache pool" to "Only" and then specify which pool to use. On a ZFS pool, that will create a dataset.

    If "Use cache pool" is set to a different option then your ZFS pool is acting as a cache drive for the array, so it makes sense that the array size would be included.

     

    Also, let's say your share name is "Data". Regardless of the "Use cache pool" setting, any array drive or pool with a top level directory named "Data" will participate in the share. So if array disks are unexpectedly being included in your Data share, you probably need to delete a "Data" directory from a disk or two.

    BTW, the Fix Common Problems plugin can warn about this situation.

    Awesome, thank you, that's very helpful. I'm assuming phase 2, which is getting rid of "arrays" per se, will fix this, or potentially just a mover update to allow flexibility in how it works. Everything is looking good so far; going to transfer a couple TB of test data and then pull a disk to see how it handles rebuilding 😉

     

    Edit: WOW! I stopped the array, pulled a drive, replaced the sled, and started the array; looking at the disk log, Unraid runs all the commands to wipe the drive and replace it in the pool. Amazing, and simple. Thank you! That was less than 5 minutes of downtime for the new drive to come up.





