limetech

Administrators

  • Posts: 10131
  • Days Won: 182

limetech last won the day on January 13 and had the most liked content!


  • Achievements: Veteran (13/14)
  • Reputation: 3k
  • Community Answers: 5

Community Answers

  1. We plan to support this. The difference is that Unraid uses partition 1 on a storage device as the zfs data partition, whereas TrueNAS sets up a small partition 1 that can function as a swap volume and puts the zfs data on partition 2. There are a couple of ways we can go about handling this... t.b.d.
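To make the layout difference concrete, here is a rough sketch (not Unraid code) that classifies a disk's existing layout from `lsblk --json` output. The 4 GiB cutoff for a "small" first partition and the /dev/sda device name are illustrative assumptions only.

```python
# Illustrative only: guess whether a disk already carries an Unraid-style
# layout (zfs data on partition 1) or a TrueNAS-style layout (small swap
# partition 1, zfs data on partition 2). The 4 GiB threshold is arbitrary.
import json
import subprocess

def partition_layout(disk: str) -> str:
    out = subprocess.run(
        ["lsblk", "--json", "--bytes", "-o", "NAME,SIZE,TYPE", disk],
        capture_output=True, text=True, check=True,
    ).stdout
    dev = json.loads(out)["blockdevices"][0]
    parts = sorted(
        (c for c in dev.get("children", []) if c["type"] == "part"),
        key=lambda p: p["name"],
    )
    if len(parts) == 1:
        return "unraid-style: zfs data partition is partition 1"
    if len(parts) >= 2 and int(parts[0]["size"]) < 4 * 1024**3:
        return "truenas-style: small partition 1 (swap), zfs data on partition 2"
    return "unknown layout"

if __name__ == "__main__":
    print(partition_layout("/dev/sda"))  # hypothetical device
```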
  2. This is exactly where we are headed! Can I recruit you to rewrite the wiki? (kidding, actually only somewhat kidding). It won't all happen in the 6.12 release.
  3. More clarifications: in Unraid OS, only user-defined pools can be configured as multi-device ZFS pools. You can select ZFS as the file system type for an unRAID array disk, but it will always be just a single device. The best way to think of it: anywhere you can select btrfs you can also select zfs, including 'zfs - encrypted', which does not use zfs built-in encryption but simply LUKS device encryption. Also note that ZFS hard drive pools will require all devices in a pool to be spun up during use. IMO, where ZFS will shine is in large flash-based pools (SSD, NVMe, etc.).
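As a side illustration of the 'zfs - encrypted' point (this is not Unraid code), one could verify that such a device carries a LUKS header while the zfs filesystem's own encryption property is off; the device path and pool name below are made up.

```python
# Hypothetical check: a 'zfs - encrypted' device should show a LUKS header at
# the block layer, while the zfs `encryption` property reads "off" because
# native zfs encryption is not being used. Names below are placeholders.
import subprocess

def is_luks(device: str) -> bool:
    # `cryptsetup isLuks` exits 0 when the device carries a LUKS header
    return subprocess.run(["cryptsetup", "isLuks", device],
                          capture_output=True).returncode == 0

def zfs_encryption_property(dataset: str) -> str:
    # "off" means the dataset does not use zfs built-in encryption
    out = subprocess.run(
        ["zfs", "get", "-H", "-o", "value", "encryption", dataset],
        capture_output=True, text=True, check=True,
    ).stdout
    return out.strip()

if __name__ == "__main__":
    print("LUKS header present:", is_luks("/dev/sdb1"))                  # placeholder device
    print("zfs encryption property:", zfs_encryption_property("mypool"))  # placeholder pool
```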
  4. Sure, open to design ideas.
  5. The 6.12 beta is on kernel 6.0.15 as I type this. OpenZFS is not yet listed as good to go on kernel 6.1, though it looks like that is imminent, at which time we'll upgrade to 6.1.
  6. You really want more than 30 devices in a single array?
  7. Some clarification...

Currently: we have a single "unRAID" array(*) and multiple user-defined "cache pools", or simply "pools". Data devices in the unRAID array can be formatted with the xfs, btrfs, or reiserfs file system. A pool can consist of a single slot, in which case you can select xfs or btrfs as the file system. Multi-slot pools can only be btrfs. What's unique about btrfs is that you can have a "raid-1" with an odd number of devices.

With the 6.12 release: you will be able to select zfs as the file system type for single unRAID array data disks. Sure, as a single device lots of zfs redundancy features don't exist, but it can be a target for "zfs receive", and it can utilize compression and snapshots. You will also be able to select zfs as the file system for a pool. As mentioned earlier, you will be able to configure mirrors, raidz's, and groups of those.

With a future release: the "pool" concept will be generalized. Instead of having an "unRAID" array, you can create a pool and designate it as an "unRAID" pool. Hence you could have unRAID pools, btrfs pools, and zfs pools. Of course, individual devices within an unRAID pool have their own file system type. (BTW, we could add ext4 but no one has really asked for that.) Shares will have the concept of "primary" storage and "cache" storage. Presumably you would assign an unRAID pool as primary storage for a share, and maybe a btrfs pool for cache storage. The 'mover' would then periodically move files from cache to primary. You could also designate, say, a 12-device zfs pool as primary and a 2-device pool as cache, though there are other reasons you might not do that....

* note: we use the term "unRAID" to refer to the specific data organization of an array of devices (like RAID-1, RAID-5, etc). We use "Unraid" to refer to the OS itself.
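A toy model of the generalized pool/share idea above might look like the sketch below; the class names and the mover function are invented for illustration and are not Unraid source.

```python
# Toy model only: pools have a type, shares point at primary and optional
# cache storage, and a mover migrates files from cache to primary.
from dataclasses import dataclass, field
from enum import Enum
from typing import List, Optional

class PoolType(Enum):
    UNRAID = "unraid"
    BTRFS = "btrfs"
    ZFS = "zfs"

@dataclass
class Pool:
    name: str
    type: PoolType
    devices: List[str] = field(default_factory=list)

@dataclass
class Share:
    name: str
    primary: Pool                 # e.g. an unRAID pool
    cache: Optional[Pool] = None  # e.g. a btrfs or zfs pool

def run_mover(share: Share, files_on_cache: List[str]) -> List[str]:
    """Pretend-mover: report files that would move from cache to primary."""
    if share.cache is None:
        return []
    print(f"moving {len(files_on_cache)} file(s) "
          f"from {share.cache.name} to {share.primary.name}")
    return list(files_on_cache)

array = Pool("array", PoolType.UNRAID, ["disk1", "disk2", "parity"])
cache = Pool("cache", PoolType.BTRFS, ["nvme0n1", "nvme1n1"])
media = Share("media", primary=array, cache=cache)
run_mover(media, ["movie.mkv"])
```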
  8. Our plan is to release a public beta soon(tm) which includes OpenZFS support and changes which plugin authors need to be aware of. Posting this now as a sneak peek; more detail will follow. That said...

ZFS support: this will let you create a named pool similar to how you can create named btrfs pools today. You will have a choice of various zfs topologies depending on how many devices are in the pool. We will support single 2-, 3-, and 4-way mirrors, as well as groups of such mirrors (a.k.a. raid10). We will also support groups of raidz1/raidz2/raidz3, as well as expansion of pools by adding an additional vdev of the same type and width to an existing pool. raid0 will also be supported. It's looking like in the first release we will support replacing only a single device of a pool at a time, even if the redundancy would support replacing 2 or 3 at a time - that support will come later. Initially we'll also have a semi-manual way of limiting ARC memory usage. Finally, a future release will permit adding hot spares and special vdevs such as L2ARC, LOG, etc., plus draid support.

webGUI change: there are several new features, but the main change for plugin authors to note is that we have upgraded to PHP v8.2 and will be turning on all errors, warnings, and notices. This may result in some plugins not operating correctly and/or spewing a bunch of warning text. More on this later...

By "public release" we mean that it will appear on the 'next' branch but with a '-beta' suffix. This means only run it on test servers, since there may be data integrity issues and config tweaks, though we're not anticipating any. Once any initial issues have been sorted, we'll release -rc1.
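The topology rules above could be expressed roughly as follows; this is a guess at the constraints, not the actual webGUI validation code, and the minimum raidz widths are assumptions.

```python
# Rough sketch of the stated rules: 2/3/4-way mirrors and groups of identical
# mirrors (raid10), groups of identical raidz1/raidz2/raidz3 vdevs, raid0,
# and expansion only by adding a vdev of the same type and width.
from typing import List, Tuple

RAIDZ_PARITY = {"raidz1": 1, "raidz2": 2, "raidz3": 3}

def vdev_is_valid(kind: str, width: int) -> bool:
    if kind == "mirror":
        return width in (2, 3, 4)
    if kind in RAIDZ_PARITY:
        return width >= RAIDZ_PARITY[kind] + 1  # assumed minimum: parity + 1 data disk
    if kind == "raid0":
        return width >= 1
    return False

def pool_is_valid(vdevs: List[Tuple[str, int]]) -> bool:
    # All vdevs in a pool must share the same kind and width.
    kinds = {k for k, _ in vdevs}
    widths = {w for _, w in vdevs}
    return (len(kinds) == 1 and len(widths) == 1
            and all(vdev_is_valid(k, w) for k, w in vdevs))

def can_expand(pool: List[Tuple[str, int]], new_vdev: Tuple[str, int]) -> bool:
    # Expansion adds one more vdev of the same type and width as the rest.
    return pool_is_valid(pool) and pool_is_valid(pool + [new_vdev])

print(pool_is_valid([("mirror", 2), ("mirror", 2)]))   # raid10-style group: True
print(can_expand([("raidz2", 6)], ("raidz2", 6)))      # True
print(can_expand([("raidz2", 6)], ("raidz1", 6)))      # False: different vdev type
```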
  9. This is nonsense, but beyond the scope of the discussion dealing with CRC/DMA errors. For sure, 100%, DMA/CRC errors are hardware faults, not caused by software, file systems, etc. They are reported by physical controllers and indicate a physical h/w problem. In my experience, these kinds of errors commonly originate with bad cables or connectors, or simply faulty components. Another overlooked cause is faulty or overloaded power supplies. Back when we offered server products, we were always careful to source single-rail PSUs so that the full capacity of the power supply could be fed to the hard drives. Servers with multi-rail PSUs might have a high overall wattage rating, but any one rail is a fraction of that; and typically one rail would serve the entire hard drive array. I'm sure you can deduce what the problem is with this arrangement.

I haven't looked at many low-level Linux device drivers for several years, but I'll take a look at a few and see if they retry CRC/DMA errors. Adding retry logic in the md/unraid driver might be something for us to consider. As has been stated correctly, Unraid only disables devices which fail writes, because what else can you do if a write fails (and presumably all retries fail)? But sure, if there is a lot of other activity in the server causing a transient dip in voltage, then maybe a retry would succeed.
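Purely as a thought experiment on the retry idea (nothing like this is claimed to exist in the md/unraid driver today), a bounded write-retry loop might look like this; the function name and retry counts are invented.

```python
# Sketch of the idea: retry a failed write a few times before giving up, so a
# transient condition (e.g. a brief voltage dip) does not immediately lead to
# the device being treated as failed/disabled.
import time

def write_with_retry(write_fn, block: bytes, retries: int = 3,
                     delay_s: float = 0.1) -> bool:
    """Return True if the write eventually succeeds, False if the caller
    should treat the device as failed (in Unraid terms, disable it)."""
    for attempt in range(1 + retries):
        try:
            write_fn(block)       # any callable that raises OSError on failure
            return True
        except OSError as err:
            print(f"write attempt {attempt + 1} failed: {err}")
            time.sleep(delay_s)   # give a transient fault time to clear
    return False
```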
  10. Respectfully, please refrain from using that term; it's not helpful. We can continue this discussion if you want. I am open to an honest technical exchange, and to making code changes if necessary. Some brief comments:

re: CRC errors: those typically indicate some kind of corruption in the data path between the device controller and the device itself, usually as a result of bad cables, connectors, or power supply issues. In general, Unraid relies on the Linux device drivers to handle retries and assumes that if a write has failed, the driver and the device itself have exhausted all attempts at recovery, and it would be pointless to waste more time re-issuing the same command over and over.

re: SMART: it's well known that many drives fail which have perfectly clean SMART reports. The fact that you see a disabled drive and the only thing in the SMART report is a single CRC error is suspect.
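For readers who want to check the attribute being discussed, a small sketch using smartctl's JSON output (available in smartmontools 7+ for ATA drives) might look like this; the device path is a placeholder and this is not an official tool.

```python
# Example only: read SMART attribute 199 (UDMA CRC error count). A non-zero
# raw value usually points at the cable/connector/power path, not the media.
import json
import subprocess
from typing import Optional

def crc_error_count(device: str) -> Optional[int]:
    out = subprocess.run(
        ["smartctl", "--json", "-A", device],
        capture_output=True, text=True,
    ).stdout
    data = json.loads(out)
    for attr in data.get("ata_smart_attributes", {}).get("table", []):
        if attr.get("id") == 199:
            return attr.get("raw", {}).get("value")
    return None

if __name__ == "__main__":
    print("UDMA CRC error count:", crc_error_count("/dev/sdX"))  # substitute a real device
```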
  11. How did you reboot the server following download/install of new Unraid OS version? Console messages imply an unclean reboot...
  12. I think those are related to ZFS. https://github.com/openzfs/zfs/issues/11691
  13. As always, prior to updating, create a backup of your USB flash device: "Main/Flash/Flash Device Settings" - click "Flash Backup".

Bugs: If you discover a bug or other issue in this release, please open a Stable Releases Bug Report.

Version 6.11.5 2022-11-20

This release reverts a change which modified the wrong file. This resulted in not being able to select 'macvlan' custom docker network type. New installations only will now have 'ipvlan' selected by default.

Docker fix: Set IPVLAN as default only for new installations.