Unraid OS version 6.12.0-rc1 available


    limetech

    The 6.12 release includes initial ZFS support in addition to the usual set of bug fixes, kernel, and package updates.

     

    Please create new topics here in this board to report Bugs or other Issues.

     

    As always, prior to updating, create a backup of your USB flash device:  "Main/Flash/Flash Device Settings" - click "Flash Backup".

     

    Note regarding plugins:  This release includes upgrading PHP from v7.4 to v8.2.3, necessary for continued PHP security updates. We recommend you upgrade all your plugins before updating the OS, and be aware that any plugin which hasn't been updated recently (with a comment about PHP 8 in the change logs) may not work correctly. You may want to uninstall that plugin until the author has had time to update it.  Plugin authors are aware of necessary changes and many plugins have already been updated.  If you encounter an issue with a plugin, please create a nice post in the plugins support topic.

     

    Special thanks to all our beta testers and especially:

     

    @bonienl for his continued refinement and updating of the Dynamix webGUI.

    @Squid for continued refinement of Community Apps and associated feed.

    @dlandon for continued refinement of Unassigned Devices plugin.

    @ich777 for continued support of third-party drivers, recommendations for base OS enhancements and beta testing.

    @JorgeB for rigorous testing of the storage subsystem and helping us understand ZFS nuances, of which there are many.

    @SimonF for curating our VM Manager and adding some very nice enhancements.

     

    Thanks to everyone above and our Plugin Authors for identifying and putting up with all the changes which came about from upgrading PHP from v7 to v8.

     

    Finally, a big Thank You! to @steini84 who brought ZFS to Unraid via plugin several years ago.

     


    Version 6.12.0-rc1 2023-03-14

    If you created any zpools using 6.12.0-beta5, please Erase those pools and recreate them.

     

    If you revert from 6.12 to 6.11.5 or earlier, you have to force update all your Docker containers and start them manually after downgrading.  This is necessary because of the underlying change to cgroup v2 in 6.12.0-rc1.

    ZFS Pools

    New in this release is the ability to create a ZFS file system in a user-defined pool.  In addition, you may format any data device in the unRAID array with a single-device ZFS file system.

     

    We are splitting full ZFS implementation across two Unraid OS releases.  Initial support in this release includes:

    • Support raid0, mirror, raidz1, raidz2 and raidz3 root profiles. Up to 4 devices in a mirror vdev. Multiple vdev groups.
    • Support removing a single device: if the device is still present in the server, 'wipefs' is used to clear the partition table.
    • Support replacing a single missing device with a new device of the same or larger size.
    • Support pool rename.
    • Pool names must begin with a lowercase letter and may only contain lowercase letters, digits, underscores, and dashes. Pool names must not end with a digit.
    • Non-root vdevs cannot be configured in this release; however, they can be imported.
    • Pools created on other systems may or may not import depending on how the pool was created. A future update will permit importing pools from any system.

     

    A ZFS pool has three variables:

    • profile - the root data organization: raid0, mirror, raidz1, raidz2, raidz3
    • width - the number of devices per root vdev
    • groups - the number of root vdevs in the pool

    At the time of ZFS pool creation, the webGUI will present all topology options based on the number of devices assigned to the pool.
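
    For intuition, here is a hedged sketch of the zpool-level layouts these variables describe. The pool and device names are examples only; in practice the webGUI issues the equivalent commands for you:

    # profile=raidz1, width=3, groups=2 -> two 3-device raidz1 root vdevs
    zpool create tank raidz1 sda sdb sdc raidz1 sdd sde sdf
    # profile=mirror, width=2, groups=2 -> two striped 2-device mirror vdevs
    zpool create tank mirror sda sdb mirror sdc sdd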

     

    Special treatment for root single-vdev mirrors:

    • A single-device ZFS pool can be converted to a multiple-device mirror by adding up to 3 additional devices in one operation.
    • A 2-device mirror can be increased to a 3-device mirror by adding a single device; similarly, a 3-device mirror can be increased to a 4-device mirror by adding a single device.
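
    Under the hood this mirror expansion corresponds to ZFS's attach operation, sketched here with example names (the webGUI performs this for you):

    # attach sdb to the vdev containing sda: a single device becomes a 2-way mirror
    zpool attach tank sda sdb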

     

    To add an additional root vdev, you must assign 'width' number of new devices to the pool at the same time. The new vdev will be created with the same 'profile' as the existing vdevs. Additional flexibility in adding/expanding vdevs will be provided in a future update.
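
    At the zpool level, expanding a width-3 raidz1 pool by one group is roughly the following (a sketch with example names):

    # add one more 3-device raidz1 root vdev to the existing pool
    zpool add tank raidz1 sdg sdh sdi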

     

    Pools created with the 'steini84' plugin can be imported as follows: First create a new pool with the number of slots corresponding to the number of devices in the pool to be imported. Next assign all the devices to the new pool. Upon array Start the pool should be recognized, though certain zpool topologies may not be recognized (please report).
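
    To check what ZFS sees before and after the import, the standard commands can help ('tank' is an example pool name):

    zpool import        # with no arguments, lists pools available for import
    zpool status tank   # after array Start, shows the recognized topology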

     

    Mixed topologies are not supported. For example, a pool with both a mirror root vdev and a raidz root vdev is not recognized.

     

    Autotrim can be configured as "on" or "off" (except for single-device ZFS volumes in the unRAID array).

     

    Compression can be configured as "on" or "off", where "on" selects "lz4". A future update will permit specifying other algorithms/levels.
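
    Both settings map to standard OpenZFS properties, so you can verify them from the console ('tank' is an example pool name):

    zpool get autotrim tank    # pool-level autotrim property: on | off
    zfs get compression tank   # "on" resolves to lz4 in this release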

     

    When creating a new ZFS pool you may choose "zfs - encrypted", which, like other encrypted volumes, applies device-level encryption via LUKS. ZFS native encryption is not supported at this time.

     

    During system boot, the file /etc/modprobe.d/zfs.conf is auto-generated to limit the ZFS ARC to 1/8 of installed memory. This can be overridden if necessary by creating a custom 'config/modprobe.d/zfs.conf' file. A future update will include the ability to configure the ARC via the webGUI, including auto-adjustment according to memory pressure, e.g., VM start/stop.
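
    As a sketch, a custom 'config/modprobe.d/zfs.conf' overriding the default might look like this; the 8 GiB figure is only an example, and zfs_arc_max is specified in bytes:

    # limit the ZFS ARC to 8 GiB (8 * 1024^3 bytes)
    options zfs zfs_arc_max=8589934592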

    btrfs pools

    Autotrim can be configured as "on" or "off" when used in a pool.

     

    Compression can be configured as "on" or "off", where "on" selects "zstd". A future update will permit specifying other algorithms/levels.

    xfs

    Autotrim can be configured as "on" or "off" when used as a single-slot pool.

    Docker

    • CreateDocker: changed label "Docker Hub URL" to "Registry URL" because of GHCR and other new container registries becoming more and more popular.
    • Honor user setting of stop time-out.
    • Accept images in OCI format.
    • Add option to disable readmore-js on the container table.

    VM Manager

    If you enable copy/paste for virtual consoles, you need to install additional software on the client in addition to the QEMU agent, if that has been installed. Here is the location for spice-vdagent for both Windows and Linux. Note: the copy/paste function will not work with the web Spice viewer; you need to use virt-viewer.
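
    For example, on a Linux guest the agent is usually available from the distribution's package manager (the package name can vary by distro):

    sudo apt install spice-vdagent    # Debian/Ubuntu guests
    sudo dnf install spice-vdagent    # Fedora guests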

    • Add Serial option to vdisk.
    • Spice bug fix for users with non-standard GUI ports defined.
    • OVMF for QEMU: version stable202302.
    • Fix for bus text.
    • Enable copy/paste option for virtual consoles.
    • Update Memory Backup processing for Virtiofs.
    • Fix lockup when no VMs are present.
    • Add support for rtl8139 network model.
    • Fix translation omission.
    • Add lock/unlock for sortable items.
    • Fix for Spice mouse if copy/paste enabled.

    Dashboard

    The webGUI Dashboard has been redesigned, and it is now possible to move elements (tiles) up and down and between columns. This allows the user to organize the tiles in any way they desire.  There is a small "lock" icon on the menu bar which must be clicked to enable this function.

     

    Note: The "lock" icon also appears on the Docker and VM pages and must be clicked to rearrange the startup order.

    Linux kernel

    • version 6.1.19
    • md/unraid: version 2.9.27
    • CONFIG_FS_DAX: File system based Direct Access (DAX) support
    • CONFIG_VIRTIO_FS: Virtio Filesystem
    • CONFIG_ZONE_DEVICE: Device memory (pmem, HMM, etc...) hotplug support
    • CONFIG_USBIP_HOST: Host driver
    • CONFIG_INTEL_MEI: Intel Management Engine Interface
    • CONFIG_INTEL_MEI_ME: ME Enabled Intel Chipsets
    • CONFIG_INTEL_MEI_GSC: Intel MEI GSC embedded device
    • CONFIG_INTEL_MEI_PXP: Intel PXP services of ME Interface
    • CONFIG_INTEL_MEI_HDCP: Intel HDCP2.2 services of ME Interface
    • CONFIG_DRM_I915_PXP: Enable Intel PXP support
    • CONFIG_SCSI_FC_ATTRS: FiberChannel Transport Attributes
    • CONFIG_FUSION_SPI: Fusion MPT ScsiHost drivers for SPI
    • CONFIG_FUSION_FC: Fusion MPT ScsiHost drivers for FC
    • CONFIG_FUSION_CTL: Fusion MPT misc device (ioctl) driver
    • CONFIG_FUSION_LOGGING: Fusion MPT logging facility

    Base Distro

    • aaa_glibc-solibs: version 2.37
    • adwaita-icon-theme: version 43
    • at-spi2-core: version 2.46.0
    • bash: version 5.2.015
    • bind: version 9.18.12
    • btrfs-progs: version 6.2.1
    • ca-certificates: version 20221205
    • cryptsetup: version 2.6.1
    • curl: version 7.88.1
    • dbus: version 1.14.6
    • diffutils: version 3.9
    • dnsmasq: version 2.89
    • docker: version 20.10.23
    • e2fsprogs: version 1.47.0
    • encodings: version 1.0.7
    • file: version 5.44
    • freetype: version 2.13.0
    • fuse3: version 3.12.0
    • gawk: version 5.2.1
    • git: version 2.39.2
    • glib2: version 2.74.6
    • glibc: version 2.37
    • glibc-zoneinfo: version 2022g
    • gnutls: version 3.7.9
    • gptfdisk: version 1.0.9
    • gtk+3: version 3.24.37
    • harfbuzz: version 7.1.0
    • htop: version 3.2.2
    • iproute2: version 6.2.0
    • iptables: version 1.8.9
    • iputils: version 20221126
    • less: version 612
    • libICE: version 1.1.1
    • libSM: version 1.2.4
    • libX11: version 1.8.4
    • libXau: version 1.0.11
    • libXcomposite: version 0.4.6
    • libXdamage: version 1.1.6
    • libXdmcp: version 1.1.4
    • libXpm: version 3.5.15
    • libXrandr: version 1.5.3
    • libXres: version 1.2.2
    • libXxf86dga: version 1.1.6
    • libarchive: version 3.6.2
    • libdrm: version 2.4.115
    • libfontenc: version 1.1.7
    • libglvnd: version 1.6.0
    • libjpeg-turbo: version 2.1.5.1
    • libpcap: version 1.10.3
    • libpng: version 1.6.39
    • libpsl: version 0.21.2
    • libwebp: version 1.3.0
    • libxkbcommon: version 1.5.0
    • libxkbfile: version 1.1.2
    • libxshmfence: version 1.3.2
    • lmdb: version 0.9.30
    • logrotate: version 3.21.0
    • lsof: version 4.98.0
    • lz4: version 1.9.4
    • lzlib: version 1.13
    • mc: version 4.8.29
    • mcelog: version 191
    • mpfr: version 4.2.0
    • nano: version 7.2
    • ncurses: version 6.4
    • nginx: version 1.23.3
    • nghttp2: version 1.52.0
    • openssh: version 9.2p1
    • openssl: version 1.1.1t
    • openssl-solibs: version 1.1.1t
    • openzfs: version 2.1.9
    • pango: version 1.50.14
    • pciutils: version 3.9.0
    • pcre2: version 10.42
    • php: version 8.2.3
    • php-libvirt: version 0.5.7
    • php-markdown: version 2.0.0
    • samba: version 4.17.4
    • sqlite: version 3.41.0
    • sudo: version 1.9.13p2
    • sysstat: version 12.7.2
    • tdb: version 1.4.8
    • tevent: version 0.14.1
    • traceroute: version 2.1.2
    • transset: version 1.0.3
    • tree: version 2.1.0
    • usbutils: version 015
    • xcb-util: version 0.4.1
    • xdriinfo: version 1.0.7
    • xf86-video-vesa: version 2.6.0
    • xfsprogs: version 5.13.0
    • xhost: version 1.0.9
    • xinit: version 1.4.2
    • xkbcomp: version 1.4.6
    • xkeyboard-config: version 2.38
    • xorg-server: version 21.1.7
    • xprop: version 1.2.6
    • xrandr: version 1.5.2
    • xset: version 1.2.5
    • xterm: version 379
    • xz: version 5.4.1
    • zstd: version 1.5.4

    Misc

    • cgroup2 now the default (see the verification note after this list)
    • do not mount loopback images using directio
    • Patch upgradepkg to prevent replacing existing package with older version.
    • NFS: enable UDP transport
    • emhttp: fix cache pool (null) syslog strings
    • emhttp: fix cache pool display wrong device size for selected replacement device
    • mover: fixed bug: improper handling of symlinks
    • shfs: ignore top-level hidden directories (names beginning with '.')
    • wireguard: add SSL support for WG tunnel IP addresses (myunraid.net wildcard certs only)
    • webgui: support PHP8, increase PHP max memory from 128M to 256M
    • webgui: ManagementAccess: Disable Provision/Renew/Upgrade buttons when no IP on eth0
    • webgui: ManagementAccess: Support wireguard local IP addresses in combination with myservers.unraid.net SSL cert
    • webgui: Move "view" icon on Main and Shares page to the left
    • webgui: Dashboard: fix regression error in "select case"
    • webgui: Dashboard: make items moveable between columns
    • webgui: Keep dismissed banners hidden for a month
    • webgui: Dashboard: API for adding custom tiles
    • webgui: Dashboard: rearrange processor information
    • webgui: Dashboard: rearrange UPS info
    • webgui: Dashboard: rearrange memory info
    • webgui: Dashboard: VPN header rearrangement
    • webgui: Dashboard: header rearrangements
    • webgui: Add jqueryUI touch punch for mobile devices
    • webgui: Changed ID to CLASS for elements occurring more than once
    • webgui: Make header in white and black themes scrollable
      • When more items are present than screen space, the user can now scroll through them (previously these items were invisible)
    • webgui: Dashboard and Docker: introduce lock button for sortable items
      • By default sortable items are locked, which allows mobile devices to scroll the page. Upon request items can be made sortable
    • webgui: Users: add icon to title bar
    • webgui: Tools: new function -> PHP Settings
      • View PHP info
      • Configure error reporting
      • Open LOG to see errors in real-time
    • webgui: System info: fix reading inactive ports
    • webgui: Plugin: Include the actual command being executed
    • webgui: System info: cache enhancement
    • webgui: System info: memory enhancement
    • webgui: DeviceInfo: disable buttons when erase operation is running
    • webgui: Docker: filetree corrections
    • webgui: Fixed: Dashboard: show heat alarm per pool
    • webgui: Notifications: revised operation
      • Autoclose new notifications after 3 seconds
      • Fix notifications reappearing after closure
    • webgui: DeviceList: add FS type in offline state
    • webgui: Add notification agent for Bark
    • webgui: Main: hide browse icon when disk is not mounted
    • webgui: Diagnostics: add additional btrfs and zfs info
    • webgui: Dashboard: add ZFS memory usage
    • webgui: Revised New Permissions
      • Select either disks or shares (not both)
    • webgui: Add testparm to diagnostics
    • webgui: Support new UD reserved mount point of /mnt/addons
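
    As referenced in the cgroup2 item above, a quick way to confirm the unified hierarchy is active after booting 6.12:

    stat -fc %T /sys/fs/cgroup    # prints "cgroup2fs" when cgroup v2 is in use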



    User Feedback




    Just now, Hellomynameisleo said:

    To make a ZFS pool would I have to create a pool instead of an array just like making a cache pool?

    yep! 

    Just now, Hellomynameisleo said:

    To make a ZFS pool would I have to create a pool instead of an array just like making a cache pool?

    Yes, so what I did was use a spare USB stick as my 1 array drive, then I created a new pool I'm calling "zfs" to test with and adding drives to it. Once you add the drives, click the first drive and change the file system to "zfs" and it will show you your options for mirror/raidz/etc. It takes care of the rest. I'd recommend 2 plugins, "ZFS Master" for snapshot management and datasets, and ZnapZend for snapshot automation per the thread linked above. 


    So I noted in the post above that I successfully stopped the array, pulled a sled, and replaced a drive, and Unraid imported it just fine. Is that best practice? I know with ZFS you're supposed to offline the failed drive; will Unraid have that in the GUI when a drive starts to fail? So you would offline the drive, stop the array, replace the drive, and start? Unraid imports and resilvers the new drive. 


    I imported the pool that was created with a plugin. I am having a couple of issues.

    1. I can't delete a dataset.

    root@UnRAID:/mnt/citadel# zfs destroy -f citadel/vsphere
    cannot unmount '/mnt/citadel/vsphere': unmount failed
    root@UnRAID:/mnt/citadel# zfs unmount -f citadel/vsphere
    cannot unmount '/mnt/citadel/vsphere': unmount failed

    2. As soon as I export my zfs SMB share, it shows unprotected in the shares menu. It also shows that the share lives on the zfs pool and "Disk1", which is an SSD in a btrfs pool (image attached).


     

    Edit: now I am getting "no such pool or dataset".

     

    root@UnRAID:/mnt/citadel# zfs destroy -f citadel/vsphere
    cannot unmount '/mnt/citadel/vsphere': no such pool or dataset
    root@UnRAID:/mnt/citadel# zfs list
    NAME                        USED  AVAIL     REFER  MOUNTPOINT
    citadel                    5.27T  5.17T      440G  /mnt/citadel
    citadel/Documents          1.97T  5.17T     1.97T  /mnt/citadel/Documents
    citadel/Media              1.75T  5.17T     1.75T  /mnt/citadel/Media
    citadel/Torrent_Downloads  16.5G  5.17T     16.5G  /mnt/citadel/Torrent_Downloads
    citadel/nextcloud           531G  5.17T      531G  /mnt/citadel/nextcloud
    citadel/software            128G  5.17T      128G  /mnt/citadel/software
    citadel/veeam-backup        402G  5.17T      402G  /mnt/citadel/veeam-backup
    citadel/vms                41.4G  5.17T     41.4G  /mnt/vms
    citadel/vsphere            31.7G  5.17T     31.7G  /mnt/citadel/vsphere

     

    Edited by Xxharry
    add more info
    On 3/15/2023 at 7:01 PM, B_Sinn3d said:

    Great work, guys. Looks like lots of good improvements. Gonna love the customizable dashboard.

     

    I will have to read up on the advantages of ZFS, but can anyone point out some use cases that would make it more valuable to the normal Unraid user who just has disks formatted in xfs/btrfs with single/dual parity?  Is read/write performance better with ZFS?

     

    I for one would also appreciate a "101" summary write-up on the significance of ZFS - particularly with regard to this comment from the 6.12.0-rc1 blog:

    Quote

    Additionally, you may format any data device in the unRAID array with a single-device ZFS file system.

     

    Does this new feature mean that Unraid will achieve some sort of "silent corruption" resilience through the deployment/adoption of ZFS? (And how does such a facility impact capacity - presumably it's a tunable thing??) I know practically nothing about ZFS, so I can't get a feel for this, and it's hard to find this level of info here.

     

    Thanks in advance to any kind souls willing to expand on this.

    Edited by magic144

    I have a couple of child datasets in my zfs pool from the zfs plugin in 6.11. E.g. pool/usr/usrname. These datasets do not seem to be showing up under Shares. I only see pool/usr and not the datasets under that one. Would just adding a share named usr/usrname for that pool add that dataset to shares?

    3 hours ago, magic144 said:

    Does this new feature mean that Unraid will achieve some sort of "silent corruption" resilience through the deployment/adoption of ZFS?

    Using ZFS for array drives will allow you to detect data corruption, but if found, ZFS cannot repair it since there's no redundancy in the filesystem; it works the same as it already does with btrfs.
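
    (As a minimal sketch of how that detection surfaces, with an example pool name: a scrub reads everything back and reports checksum errors it cannot fix.)

    zpool scrub disk1        # read and verify all data on the single-device pool
    zpool status -v disk1    # the CKSUM column lists detected (here unrepairable) errors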

    Just now, JorgeB said:

    Only top-level folders are Unraid shares.

    Ah okay, thank you. I'm going to make a pool/usrname share and just move the files from pool/usr/usrname then.

    9 hours ago, Paul_Ber said:

    Is Docker changing?

    7 hours ago, BRiT said:

    Fortunately 6.12.x versions support alternate Repositories.

    Yes, but only for organization accounts.

    Community accounts will stay untouched.

     

    BTW, this would be better suited to the General subforums.

    44 minutes ago, JorgeB said:

    Using ZFS for array drives will allow you to detect data corruption, but if found, ZFS cannot repair it since there's no redundancy in the filesystem; it works the same as it already does with btrfs.

     

    I tried to "study" what benefits moving to ZFS would bring compared to the traditional Unraid array with xfs-formatted drives, but I failed to get a clear picture. I'm sure quite a few will wonder the same when a stable 6.12.x is released. Would it be possible to put together a short FAQ for this? That is,

    • What benefits/disadvantages are there to formatting individual Unraid array disks as zfs instead of xfs?
    • What benefits/disadvantages are there to changing from an Unraid array (xfs) to a ZFS pool? For example, does one lose the advantage that in case of a disk failure the files on other disks are still intact, since files are written to a single disk instead of being distributed across several array disks?
    • What are the requirements of ZFS? That is, I understood that it heavily utilizes RAM, and therefore one should have plenty of it to maintain system performance.
    • ECC memory is also mentioned when googling about ZFS. Is it more important than with xfs, for example? Of course, ECC should always be used, but I guess most Unraiders do not have ECC in their systems.
    • Etc.

     

    Anyway, great progress on Unraid! One happy user here.

    7 hours ago, Jclendineng said:

    will Unraid have that in the GUI when a drive starts to fail?

    Stop the array, unassign a device, start the array; the device will be offline and the pool will remain working in a degraded state (for redundant pools, obviously). You can then replace the device with the same or a new one.

     

    Note that for now you can only offline one device at a time, even if the pool has more redundancy, like a raidz2 pool or striped mirrors.
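
    For reference, the console-level equivalent of that replacement flow is roughly this sketch (pool/device names are examples; the webGUI drives this for you):

    zpool offline tank sdb       # take the failing device offline
    zpool replace tank sdb sdg   # resilver onto the replacement device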

    1 hour ago, Ruato said:

    ECC memory is also mentioned when googling about ZFS. Is it more important than with xfs, for example? Of course, ECC should always be used, but I guess most Unraiders do not have ECC in their systems.

    Please see this post from the ZFS guru Allan Jude himself on Twitter; @steini84 sent me this a long time ago:

    [screenshot of the tweet]

     

    Sure, ECC is "better" than non-ECC, but most users are basically home users; for businesses I really get the point of ECC. At least that's my perspective.

     

    I don't use ECC in my system either, have used ZFS for about 3 years now, and have had zero issues.

    4 hours ago, Ruato said:

     

    • ECC memory is also mentioned when googling about ZFS. Is it more important than with xfs, for example? Of course, ECC should always be used, but I guess most Unraiders do not have ECC in their systems. 

     

    I can comment on this: ZFS uses available RAM as a cache/buffer of sorts, so if you have a bit flip it is really bad. ECC is super important for any server housing important data, but ESPECIALLY ZFS. An example: I have 1 stick of RAM in my test server going bad, and when I copied a bunch of test data over, I got about 3 hardware checks telling me I had 3 corrections on one stick of RAM. If that weren't ECC, that would have been 3 corruptions in my copied data. 
     

    Edit: As stated above, ZFS still has a ton of nice features that make it better even without ECC, but IMO if you can, use ECC. Also, DDR5 has a lot of cool features, one of which is built-in single- and multi-bit ECC on the consumer chips. It's in the spec. It's very expensive compared to DDR4, but someday it will be the same price, and then most will have ECC without even realizing it. 

    Edited by Jclendineng
    4 hours ago, JorgeB said:

    Stop the array, unassign a device, start the array; the device will be offline and the pool will remain working in a degraded state (for redundant pools, obviously). You can then replace the device with the same or a new one.

     

    Note that for now you can only offline one device at a time, even if the pool has more redundancy, like a raidz2 pool or striped mirrors.

    Awesome! I am running raidz2 since IMO that's the most efficient for my setup. 


    When using Unraid now, do I still need to keep at least one hard disk in the array to start the array? Can't I start the array directly with a ZFS raidz pool?

    6 minutes ago, ncceylan said:

    When using Unraid now, do I still need to keep at least one hard disk in the array to start the array? Can't I start the array directly with a ZFS raidz pool?

    Yes, that is still the case for 6.12.

    28 minutes ago, ncceylan said:

    When using Unraid now, do I still need to keep at least one hard disk in the array to start the array? Can't I start the array directly with a ZFS raidz pool?

    Yes, I just used a USB stick. 


    Wow, so far so good; seamless change to ZFS.

     

    I imported my 8x2TB SSD raidz1 pool with clone without any hiccup, and even renamed my pool. Awesome! Next step: back up my "old" and "slow" HDD array and set up ZFS instead :D

    10 minutes ago, domrockt said:

    Next step: back up my "old" and "slow" HDD array and set up ZFS instead

     

    If you are sure you need it, go for it! But the Unraid array is in no way deprecated, I'd say it still makes sense as the main data storage option for most of our users. We need to put together some guidance around this. 

    7 minutes ago, ljm42 said:

     

    If you are sure you need it, go for it! But the Unraid array is in no way deprecated, I'd say it still makes sense as the main data storage option for most of our users. We need to put together some guidance around this. 

    It seems to me (could be wrong) a ZFS pool is better than the array/parity anyway, unless you have multiple drives of differing sizes. Unraid's draw is being able to use mixed drives and just work, but if you have all the same sizes and can do ZFS, I would think it would be a definite upgrade? It's complicated, and can get just about as complicated as you like, which is an issue, but I have a server running 6.12 now and setup was really user friendly; the average user would have no issues IMO. Guidance definitely is a good thing though, but I was pretty surprised at how well it's working so far. 

    17 minutes ago, ljm42 said:

     

    If you are sure you need it, go for it! But the Unraid array is in no way deprecated, I'd say it still makes sense as the main data storage option for most of our users. We need to put together some guidance around this. 

     

    Sure, you are indeed right. I should say for my use case it is better. 

    I have my "sensitive" data on any sort of array Unraid provides AND on my Synology. 

    The rest of my data is just Games/Movies/Arrs. 

     

    For me it's a PITA to copy 20+ TB with parity at anything around 100 MB/s. 

     

    I did not want to bash the Unraid array, and I want all the devs to know it's now more accessible and more versatile than ever :D Unraid OS Pro is totally worth it!!

    18 minutes ago, Jclendineng said:

    but if you have all the same sizes and can do ZFS, I would think it would be a definite upgrade?

    Until you want to add a drive and can't just do that :)





