• Unraid OS version 6.12.0-rc1 available


    limetech

    The 6.12 release includes initial ZFS support in addition to the usual set of bug fixes, kernel, and package updates.

     

    Please create new topics here in this board to report Bugs or other Issues.

     

    As always, prior to updating, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".

     

    Note regarding plugins:  This release includes upgrading PHP from v7.4 to v8.2.3, necessary for continued PHP security updates. We recommend you upgrade all your plugins before updating the OS, and be aware that any plugin which hasn't been updated recently (with a comment about PHP 8 in the change logs) may not work correctly. You may want to uninstall that plugin until the author has had time to update it.  Plugin authors are aware of necessary changes and many plugins have already been updated.  If you encounter an issue with a plugin, please create a nice post in the plugins support topic.

     

    Special thanks to all our beta testers and especially:

     

    @bonienl for his continued refinement and updating of the Dynamix webGUI.

    @Squid for continued refinement of Community Apps and associated feed.

    @dlandon for continued refinement of Unassigned Devices plugin.

    @ich777 for continued support of third-party drivers, recommendations for base OS enhancements and beta testing.

    @JorgeB for rigorous testing of the storage subsystem and helping us understand ZFS nuances, of which there are many.

    @SimonF for curating our VM Manager and adding some very nice enhancements.

     

    Thanks to everyone above and our Plugin Authors for identifying and putting up with all the changes which came about from upgrading PHP from v7 to v8.

     

    Finally a big Thank You! to @steini84 who brought ZFS to Unraid via plugin several years ago.

     


    Version 6.12.0-rc1 2023-03-14

    If you created any zpools using 6.12.0-beta5 please Erase those pools and recreate.

     

    If you revert from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading.  This is necessary because of the underlying change to cgroup v2 in 6.12.0-rc1.

    ZFS Pools

    New in this release is the ability to create a ZFS file system in a user-defined pool.  In addition you may format any data device in the unRAID array with a single-device ZFS file system.

     

    We are splitting full ZFS implementation across two Unraid OS releases.  Initial support in this release includes:

    • Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4 devices in a mirror vdev. Multiple vdev groups.
    • Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear the partition table (see the sketch after this list).
    • Support replacing single missing device with a new device of same or larger size.
    • Support pool rename.
    • Pool names must begin with a lowercase letter and only contain lowercase letters, digits, the underscore and dash. Pool names must not end with a digit.
    • Non-root vdevs cannot be configured in this release; however, they can be imported.
    • Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.
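
    For reference, the 'wipefs' mentioned above is the standard util-linux tool; a minimal sketch of what the removal step amounts to (the device name is a placeholder):

        # List existing filesystem and partition-table signatures (read-only)
        wipefs /dev/sdX

        # Erase all signatures, clearing the partition table (destructive)
        wipefs -a /dev/sdX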

     

    A ZFS pool has three variables:

    • profile - the root data organization: raid0, mirror, raidz1, raidz2, raidz3
    • width - the number of devices per root vdev
    • groups - the number of root vdevs in the pool

    At time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.
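
    For illustration, 12 devices assigned with profile raidz2, width 6, and groups 2 yield two 6-device raidz2 root vdevs. The webGUI handles partitioning and naming itself, but conceptually this is equivalent to the following hand-rolled command (pool and device names are hypothetical):

        # profile=raidz2, width=6, groups=2 -> two raidz2 root vdevs of 6 devices each
        zpool create tank \
            raidz2 sda sdb sdc sdd sde sdf \
            raidz2 sdg sdh sdi sdj sdk sdl

        # Verify the resulting topology
        zpool status tank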

     

    Special treatment for root single-vdev mirrors:

    • A single-device ZFS pool can be converted to multiple-device mirror by adding up to 3 additional devices in one operation.
    • A 2-device mirror can be increased to 3-device by adding a single device; similarly a 3-device mirror can be increased to 4-device mirror by adding a single device.

     

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.
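
    Under the hood this corresponds to adding one complete vdev with 'zpool add'; a sketch assuming an existing raidz1 pool 'tank' with width 3 (names hypothetical):

        # Grow the pool by one root vdev: assign 'width' (here 3) new devices
        # at once, matching the profile of the existing vdevs
        zpool add tank raidz1 sdm sdn sdo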

     

    Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).
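
    If a pool is not recognized, inspecting it from a terminal can make the bug report more useful; 'zpool import' run with no arguments lists pools ZFS can see along with their detected layout (pool name hypothetical):

        # List pools available for import, with their topology
        zpool import

        # After import, show the layout and health of a pool
        zpool status tank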

     

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

     

    Autotrim can be configured as "on" or "off" (except for single-device ZFS volumes in the unRAID array).

     

    Compression can be configured as "on" or "off", where "on" selects "lz4". Future update will permit specifying other algorithms/levels.
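
    These toggles map onto standard ZFS properties; a sketch of the command-line equivalents (pool name hypothetical):

        # Autotrim is a pool property
        zpool set autotrim=on tank
        zpool get autotrim tank

        # Compression is a filesystem property; "on" selects lz4 in this release
        zfs set compression=lz4 tank
        zfs get compression tank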

     

    When creating a new ZFS pool you may choose "zfs - encrypted", which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

     

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. Future update will include ability to configure the ARC via webGUI, including auto-adjust according to memory pressure, e.g., VM start/stop.
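
    As a concrete example, on a 64 GiB server the auto-generated limit works out to 8 GiB; a custom override on the flash device might look like this (the size shown is an assumption for illustration):

        # config/modprobe.d/zfs.conf (i.e. /boot/config/modprobe.d/zfs.conf)
        # Limit the ZFS ARC to 8 GiB; the value is in bytes (8 * 1024^3)
        options zfs zfs_arc_max=8589934592

        # After a reboot, the active limit can be checked with:
        #   cat /sys/module/zfs/parameters/zfs_arc_max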

    btrfs pools

    Autotrim can be configured as "on" or "off" when used in a pool.

     

    Compression can be configured as "on" or "off". "on" selects "zstd". Future update will permit specifying other algorithms/levels.
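
    For context, the btrfs "on" setting corresponds to the standard compress mount option; a minimal sketch (device and mount point hypothetical):

        # Equivalent btrfs mount option when compression is "on"
        mount -o compress=zstd /dev/sdX1 /mnt/mypool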

    xfs

    Autotrim can be configured as "on" or "off" when used as a single-slot pool.

    Docker

    • CreateDocker: changed label "Docker Hub URL" to "Registry URL" because of GHCR and other new container registries becoming more and more popular.
    • Honor user setting of stop time-out.
    • Accept images in OCI format.
    • Add option to disable readmore-js on container table.

    VM Manager

    If you enable copy/paste for virtual consoles, you need to install additional software on the client, in addition to the QEMU agent if that has been installed. spice-vdagent is available for both Windows and Linux. Note: the copy/paste function will not work with the web Spice viewer; you need to use virt-viewer.
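
    For a Linux guest, installing the agent is typically a one-liner; a sketch assuming a Debian/Ubuntu-based VM (package names can vary by distro):

        # Inside the Linux guest VM (Debian/Ubuntu example)
        sudo apt install spice-vdagent

        # Fedora/RHEL-based guests:
        #   sudo dnf install spice-vdagent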

    • Add Serial option to vdisk.
    • Spice Bug fix for users with non standard GUI ports defined.
    • OVMF for QEMU: version stable202302
    • Fix for bus text.
    • Enable copy paste option for virtual consoles
    • Update Memory Backup processing for Virtiofs.
    • Fix lockup when no VMs are present
    • Add support for rtl8139 network model.
    • fix translation omission
    • added lock/unlock for sortable items
    • Fix for Spice Mouse if Copy paste enabled.

    Dashboard

    The webGUI Dashboard has been redesigned, and it is now possible to move elements (tiles) up and down and between columns. This allows the user to organize the tiles in any way they desire.  There is a small "lock" icon on the menu bar which must be clicked to enable this function.

     

    Note: The "lock" icon also appears on the Docker and VM pages and must be clicked to rearrange the startup order.

    Linux kernel

    • version 6.1.19
    • md/unraid: version 2.9.27
    • CONFIG_FS_DAX: File system based Direct Access (DAX) support
    • CONFIG_VIRTIO_FS: Virtio Filesystem
    • CONFIG_ZONE_DEVICE: Device memory (pmem, HMM, etc...) hotplug support
    • CONFIG_USBIP_HOST: Host driver
    • CONFIG_INTEL_MEI: Intel Management Engine Interface
    • CONFIG_INTEL_MEI_ME: ME Enabled Intel Chipsets
    • CONFIG_INTEL_MEI_GSC: Intel MEI GSC embedded device
    • CONFIG_INTEL_MEI_PXP: Intel PXP services of ME Interface
    • CONFIG_INTEL_MEI_HDCP: Intel HDCP2.2 services of ME Interface
    • CONFIG_DRM_I915_PXP: Enable Intel PXP support
    • CONFIG_SCSI_FC_ATTRS: FiberChannel Transport Attributes
    • CONFIG_FUSION_SPI: Fusion MPT ScsiHost drivers for SPI
    • CONFIG_FUSION_FC: Fusion MPT ScsiHost drivers for FC
    • CONFIG_FUSION_CTL: Fusion MPT misc device (ioctl) driver
    • CONFIG_FUSION_LOGGING: Fusion MPT logging facility

    Base Distro

    • aaa_glibc-solibs: version 2.37
    • adwaita-icon-theme: version 43
    • at-spi2-core: version 2.46.0
    • bash: version 5.2.015
    • bind: version 9.18.12
    • btrfs-progs: version 6.2.1
    • ca-certificates: version 20221205
    • cryptsetup: version 2.6.1
    • curl: version 7.88.1
    • dbus: version 1.14.6
    • diffutils: version 3.9
    • dnsmasq: version 2.89
    • docker: version 20.10.23
    • e2fsprogs: version 1.47.0
    • encodings: version 1.0.7
    • file: version 5.44
    • freetype: version 2.13.0
    • fuse3: version 3.12.0
    • gawk: version 5.2.1
    • git: version 2.39.2
    • glib2: version 2.74.6
    • glibc: version 2.37
    • glibc-zoneinfo: version 2022g
    • gnutls: version 3.7.9
    • gptfdisk: version 1.0.9
    • gtk+3: version 3.24.37
    • harfbuzz: version 7.1.0
    • htop: version 3.2.2
    • iproute2: version 6.2.0
    • iptables: version 1.8.9
    • iputils: version 20221126
    • less: version 612
    • libICE: version 1.1.1
    • libSM: version 1.2.4
    • libX11: version 1.8.4
    • libXau: version 1.0.11
    • libXcomposite: version 0.4.6
    • libXdamage: version 1.1.6
    • libXdmcp: version 1.1.4
    • libXpm: version 3.5.15
    • libXrandr: version 1.5.3
    • libXres: version 1.2.2
    • libXxf86dga: version 1.1.6
    • libarchive: version 3.6.2
    • libdrm: version 2.4.115
    • libfontenc: version 1.1.7
    • libglvnd: version 1.6.0
    • libjpeg-turbo: version 2.1.5.1
    • libpcap: version 1.10.3
    • libpng: version 1.6.39
    • libpsl: version 0.21.2
    • libwebp: version 1.3.0
    • libxkbcommon: version 1.5.0
    • libxkbfile: version 1.1.2
    • libxshmfence: version 1.3.2
    • lmdb: version 0.9.30
    • logrotate: version 3.21.0
    • lsof: version 4.98.0
    • lz4: version 1.9.4
    • lzlib: version 1.13
    • mc: version 4.8.29
    • mcelog: version 191
    • mpfr: version 4.2.0
    • nano: version 7.2
    • ncurses: version 6.4
    • nginx: version 1.23.3
    • nghttp2: version 1.52.0
    • openssh: version 9.2p1
    • openssl: version 1.1.1t
    • openssl-solibs: version 1.1.1t
    • openzfs: version 2.1.9
    • pango: version 1.50.14
    • pciutils: version 3.9.0
    • pcre2: version 10.42
    • php: version 8.2.3
    • php-libvirt: version 0.5.7
    • php-markdown: version 2.0.0
    • samba: version 4.17.4
    • sqlite: version 3.41.0
    • sudo: version 1.9.13p2
    • sysstat: version 12.7.2
    • tdb: version 1.4.8
    • tevent: version 0.14.1
    • traceroute: version 2.1.2
    • transset: version 1.0.3
    • tree: version 2.1.0
    • usbutils: version 015
    • xcb-util: version 0.4.1
    • xdriinfo: version 1.0.7
    • xf86-video-vesa: version 2.6.0
    • xfsprogs: version 5.13.0
    • xhost: version 1.0.9
    • xinit: version 1.4.2
    • xkbcomp: version 1.4.6
    • xkeyboard-config: version 2.38
    • xorg-server: version 21.1.7
    • xprop: version 1.2.6
    • xrandr: version 1.5.2
    • xset: version 1.2.5
    • xterm: version 379
    • xz: version 5.4.1
    • zstd: version 1.5.4

    Misc

    • cgroup2 now the default
    • do not mount loopback images using directio
    • Patch upgradepkg to prevent replacing existing package with older version.
    • NFS: enable UDP transport
    • emhttp: fix cache pool (null) syslog strings
    • emhttp: fix cache pool display wrong device size for selected replacement device
    • mover: fixed bug: improper handling of symlinks
    • shfs: ignore top-level hidden directories (names beginning with '.')
    • wireguard: add SSL support for WG tunnel IP addresses (myunraid.net wildcard certs only)
    • webgui: support PHP8, increase PHP max memory from 128M to 256M
    • webgui: ManagementAccess: Disable Provision/Renew/Upgrade buttons when no IP on eth0
    • webgui: ManagementAccess: Support wireguard local IP addresses in combination with myservers.unraid.net SSL cert
    • webgui: Move "view" icon on Main and Shares page to the left
    • webgui: Dashboard: fix regression error in "select case"
    • webgui: Dashboard: make items moveable between columns
    • webgui: Keep dismissed banners hidden for a month
    • webgui: Dashboard: API for adding custom tiles
    • webgui: Dashboard: rearrange processor information
    • webgui: Dashboard: rearrange UPS info
    • webgui: Dashboard: rearrange memory info
    • webgui: Dashboard: VPN header rearrangement
    • webgui: Dashboard: header rearrangements
    • webgui: Add jqueryUI touch punch for mobile devices
    • webgui: Changed ID to CLASS for elements occurring more than once
    • webgui: Make header in white and black themes scrollable
      • When more items are present than screen space, the user can now scroll through them (previously these items were invisible)
    • webgui: Dashboard and Docker: introduce lock button for sortable items
      • By default sortable items are locked, which allows mobile devices to scroll the page. Upon request items can be made sortable
    • webgui: Users: add icon to title bar
    • webgui: Tools: new function -> PHP Settings
      • View PHP info
      • Configure error reporting
      • Open LOG to see errors in real-time
    • webgui: System info: fix reading inactive ports
    • webgui: Plugin: Include the actual command being executed
    • webgui: System info: cache enhancement
    • webgui: System info: memory enhancement
    • webgui: DeviceInfo: disable buttons when erase operation is running
    • webgui: Docker: filetree corrections
    • webgui: Fixed: Dashboard: show heat alarm per pool
    • webgui: Notifications: revised operation
      • Autoclose new notifications after 3 seconds
      • Fix notifications reappearing after closure
    • webgui: DeviceList: add FS type in offline state
    • webgui: Add notification agent for Bark
    • webgui: Main: hide browse icon when disk is not mounted
    • webgui: Diagnostics: add additional btrfs and zfs info
    • webgui: Dashboard: add ZFS memory usage
    • webgui: Revised New Permissions
      • Select either disks or shares (not both)
    • webgui: Add testparm to diagnostics
    • webgui: Support new UD reserved mount point of /mnt/addons



    User Feedback

    Recommended Comments



    15 minutes ago, Jclendineng said:

    if you have all the same sizes and can do zfs I would think it would be a definite upgrade?

    Just keep in mind if you want to upsize the zfs pool I think it's a little more complicated than just adding a single disk, unlike the parity pool.


    There appears to be a bug with pool drives and formatting. I added a new config and attempted to create a raidz pool of drives and they refuse to format. They format fine if moved back to an array, however. 

    5 hours ago, Jclendineng said:

    but if you have all the same sizes and can do zfs I would think it would be a definite upgrade?

     

    What happens in a single-parity ZFS pool setup if you lose 2 devices? Or a dual-parity ZFS pool and you lose 3 devices? Is all of the data wiped out at that point?

    1 hour ago, BRiT said:

    What happens in a single-parity ZFS pool setup if you lose 2 devices? Or a dual-parity ZFS pool and you lose 3 devices? Is all of the data wiped out at that point?

    When using the Unraid array / parity, it's no different from what happens today. In that case, each drive is a single-disk independent ZFS pool and works exactly like XFS- or BTRFS-formatted drives do. You can even mix and match. The only thing you gain in this setup is the ZFS filesystem (hash checks / better fs error detection, maybe snapshots).

    In both scenarios you described, you cannot recover data using parity, but the drives that did not fail will still contain their data.

    17 hours ago, Jclendineng said:

    I can comment on this, zfs uses available ram as a cache/buffer of sorts and so if you have a bit flip it is really bad. ECC is super important for any server housing important data but ESPECIALLY zfs. An example, I have 1 stick of ram in my test server going bad, and so when I copied a bunch of test data over, I got about 3 hardware checks telling me I had 3 corrections on one stick of ram. If that weren’t ECC that would have been 3 corruptions in my copied data. 

    This shows the importance of ECC, but it doesn't make this any more of a problem with ZFS than with any other filesystem. All data passes through RAM before being written to disk, and is at risk of memory corruption until written; how long the data sits in memory is less important. The only thing ZFS does is make errors more visible in case the corruption happens after the ZFS hash was computed (data and hash won't match, highlighting an otherwise silent error, either during write or during scrubs). As for corruption during scrubs, this explains it very well - https://jrs-s.net/2015/02/03/will-zfs-and-non-ecc-ram-kill-your-data/
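
    In practice that visibility shows up in the pool's checksum counters; a sketch of checking a pool for silent corruption (pool name hypothetical):

        # Walk all data and verify it against the stored checksums
        zpool scrub tank

        # The CKSUM column and any "permanent errors" list reveal mismatches
        zpool status -v tank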

     

    Agree with you that when possible, ECC should be used

     

    17 hours ago, Jclendineng said:

    Also, ddr5 has a lot of cool features, 1 of which is built in single and multi bit ecc on the consumer chips. It’s in the spec. It’s very expensive compared to ddr4 but someday it will be the same price and then most will have ecc without even realizing it. 

    It's important to realize that this is mainly to allow denser memory to pass the spec; it doesn't really protect the data unless the processor supports ECC through the whole chain. You may very well pass the data via the CPU and it may come out with a flipped bit and a matching hash that is then written back to storage. It's still a step in the right direction, but only if CPU/motherboard manufacturers follow through. Ian Cutress explained this very well.

     

     

    6 hours ago, chlballi said:

    There appears to be a bug with pool drives and formatting.

    Please create a bug report and post the diagnostics after a format attempt. 

    3 hours ago, BRiT said:

     

    What happens in a single-parity ZFS pool setup if you lose 2 devices? Or a dual-parity ZFS pool and you lose 3 devices? Is all of the data wiped out at that point?

    If you're using a raidz pool and you have more failures than it can tolerate, everything's gone, unlike in the Unraid array.


    I think the question here is:

     

    to create a ZFS Unraid array + 1 parity drive instead of the regular XFS Unraid array + 1 parity, and what happens if you lose 2 drives at once?

     

    10TB Parity <----- working

    10TB ZFS <---- working

    10TB ZFS <---- defect

    10TB ZFS <---- defect

     

    There is no "zfs" combined pool in the array atm.

     

    My logic tells me that we can, at this time, do such a setup, but every drive in the array is a solo ZFS drive and is not in any kind of raidz, so the parity works just as before and we can lose one drive at a time. The benefit of that kind of setup is replication and much better ZFS backup solutions from the other, faster ZFS "cache" pools.

     

    In future releases we can ditch the stock array completely for a raidz array, right?

    11 minutes ago, domrockt said:

    My logic tells me that we can, at this time, do such a setup, but every drive in the array is a solo ZFS

    Correct, it's the same as if using XFS or BTRFS.

    6 hours ago, apandey said:

    This shows the importance of ECC, but doesn't make this any more of a problem with zfs

    All very true, but ZFS implicitly trusts RAM and will use all available RAM up to a set point, so IMO that makes it more important to use ECC? Since ZFS + non-ECC will not correct at the RAM level...

     

    • Featured Comment

    PSA

     

    The next revision of 6.12 (currently private testing) has further refinements to the dashboard.  Plugins which have NOT been updated to support the new style method of adding tiles to the dashboard are now going to be marked as being incompatible with the OS if they cause issues.

     

    Notably, Plex Streams, Corsair PSU, Disk Location, and GPU Stats, if installed, will now completely prevent the dashboard from loading at all.  These are now marked as being incompatible with the OS.  Further testing might find more plugins with the same result.

     

    Any affected authors will need to contact me via PM to inform me that the problems have been fixed so that I can retest and then drop the incompatible flag.

     

    While I try to give authors time to fix their problems during the RC stage, simply having these plugins installed prevents the dashboard from loading at all, and this cannot be ignored.  Previously, the plugins, while they had issues (notably major display aberrations on the dashboard), would not actually impede the operation or management of the server.  Now they do.

    Quote

    VM Manager

    If you enable copy/paste for virtual consoles, you need to install additional software on the client, in addition to the QEMU agent if that has been installed. spice-vdagent is available for both Windows and Linux. Note: the copy/paste function will not work with the web Spice viewer; you need to use virt-viewer.

    Add Serial option to vdisk.

    Spice Bug fix for users with non standard GUI ports defined.

    OVMF for QEMU: version stable202302

    Fix for bus text.

    Enable copy paste option for virtual consoles

    Update Memory Backup processing for Virtiofs.

    Fix lockup when no VMs are present

    Add support for rtl8139 network model.

    fix translation omission

    added lock/unlock for sortable items

    Fix for Spice Mouse if Copy paste enabled.

     

    Can someone say something about those new features? I don't get what's new, what the features are, or how to use them.

     

    Edit:

    What's copy/paste for virtual consoles? What benefits does Spice get?

     

    Thanks!

    37 minutes ago, Squid said:

    Notably, Plex Streams and Corsair PSU, disk location if installed will now completely prevent the dashboard from loading at all

    I need to wait and watch a bit. I use both Corsair PSU and Disk Location. Neither essential, but both nice to have. Hoping to see them updated

    2 minutes ago, Stri said:

     

    Can someone say something about those new features? I don't get what's new, what the features are, or how to use them.

     

    Thanks!

    Copy/paste enables copying between your PC and the VM. It is dependent on installing the spice vdagent.

     

    [screenshot]

     

    So this example is using virt-viewer and Spice into the VM: Notepad on my Windows machine, copying and pasting into the VM.

    [screenshot]

     

    Add Serial option to vdisk.

     

    This sets the serial number of a vdisk in the VM, useful if you have an Unraid VM so the devices are labelled the same.

     

    [screenshots]

     

    Update of OVMF version:

    OVMF for QEMU: version stable202302

     

    Add NIC model

    Add support for rtl8139 network model.

     

    These are just fixes.

     

    Spice Bug fix for users with non standard GUI ports defined.

    Fix for bus text.

    Update Memory Backup processing for Virtiofs.

    Fix lockup when no VMs are present

    Add support for rtl8139 network model.

    fix translation omission

    Fix for Spice Mouse if Copy paste enabled.

    Quote

    If you created any zpools using 6.12.0-beta5 please Erase those pools and recreate.

    What's the likelihood of this happening again before release? Just wondering how temporary to make this pool if there's a chance of seeing "If you created any zpools using 6.12.0-rc1 please Erase..." down the line.


    Question for everyone, has anyone tried an NVME drive as a single zfs vdev? I am getting surprisingly terrible speeds.

     

    Pre-update I was getting close to 3 GB/s; post-update I am getting 220 MB/s max. The only change was that btrfs was changed to zfs after the update. Thoughts? I know zfs has a performance penalty for ssd/nvme due to overhead, but not that severe :) Polling the group to see if anyone else has nvme drives to test with.


    An interesting observation, assuming disk shares are enabled:

    • If you create a folder off the root of a ZFS array disk or pool, a share is created.
    • If instead you create the share using the Shares tab, a dataset is created with the name of the share, and it's mounted in the expected location.
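
    One way to see the difference is to compare the dataset list against the directory tree; a sketch (pool and share names hypothetical):

        # A share created from the Shares tab shows up as its own dataset...
        zfs list -o name,mountpoint

        # ...while a folder created directly on the disk is just a directory in
        # the root dataset and won't appear in the list above
        ls /mnt/mypool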

    6 hours ago, domrockt said:

    I think the question here is:

     

    to create a ZFS Unraid array + 1 parity drive instead of the regular XFS Unraid array + 1 parity, and what happens if you lose 2 drives at once?

     

    That wasn't my question. I already know how Unraid arrays function, as they are filesystem agnostic.

     

    My question was around going pure ZFS Arrays instead of the Unraid Array(s).


    Just an FYI... if this is not the place, then let me know. Display issues with this release. I have attached a few screenshots to demonstrate.

     

    [screenshots of the display issues]


    @JorgeB I did some more digging/testing, and apparently the old pool still existed and prevented the formatting of the drives, as they were still connected to that pool in the background. I did a 'zpool destroy' on that old pool, attempted to format, and it had no issues. That leads me to believe that the issue is the pools not being handled properly when created or destroyed via the GUI.
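
    For anyone hitting the same thing, a sketch of the manual cleanup (pool name hypothetical; 'zpool destroy' is irreversible):

        # Check whether a stale pool still claims the devices
        zpool import

        # Import the old pool if needed, then destroy it so the devices
        # can be formatted again (only if its data is disposable)
        zpool import oldpool
        zpool destroy oldpool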

    29 minutes ago, ximian said:

    Confirmed that in safe mode, the display was alright.

    Now to find the offending plugin!

    Broken dashboards normally justify to the right of the column; working ones will be left-justified.





