
Cannot create a functioning ZFS array


Solved by JorgeB (see solution below)

Recommended Posts

About a week ago I set up my UNRAID machine, but I can't create a functioning ZFS array. I have four 8TB Seagate IronWolf Pro HDDs.

I followed the prompts: I assigned one disk to parity and three to the array, then clicked on each data disk and switched the FS to ZFS-encrypted. It takes about 15 hours to build parity, but for some reason the disks come up as "Unmountable: Volume not encrypted". I have tried the whole process 3 times in the past week and don't know what I'm doing wrong. Every time it failed, I recreated the USB and started over. I have installed the following apps:

  • Unassigned Devices
  • Unassigned Devices Plus (add-on)
  • Unassigned Devices Preclear
  • Fix Common Problems

I also created a mirrored ZFS cache pool with my two NVMe drives.
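Regarding the "Unmountable: Volume not encrypted" message, here is a quick check I can run from the console and post the results of, assuming UNRAID's encrypted filesystems sit on a LUKS layer and that /dev/md1p1 through /dev/md3p1 are the three data slots (as in the fdisk output below):

# Check each array data device for a LUKS header; its absence would explain
# why the slot is reported as "not encrypted".
for d in /dev/md1p1 /dev/md2p1 /dev/md3p1; do
  if cryptsetup isLuks "$d"; then echo "$d: LUKS header found"; else echo "$d: no LUKS header"; fi
done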

Screenshot

 

I'm including the output of fdisk -l and zpool list below for reference. I appreciate any help I can get.

 

 

Quote

 

 

root@Tower:~# zpool list
NAME    SIZE  ALLOC   FREE  CKPOINT  EXPANDSZ   FRAG    CAP  DEDUP    HEALTH  ALTROOT
cache   952G  1.84M   952G        -         -     0%     0%  1.00x    ONLINE  -
root@Tower:~# fdisk -l
Disk /dev/loop0: 63.38 MiB, 66457600 bytes, 129800 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 338.96 MiB, 355430400 bytes, 694200 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 20 GiB, 21474836480 bytes, 41943040 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/sda: 14.65 GiB, 15728640000 bytes, 30720000 sectors
Disk model: Mass Storage    
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device     Boot Start      End  Sectors  Size Id Type
/dev/sda1  *     2048 30719999 30717952 14.6G  c W95 FAT32 (LBA)


Disk /dev/nvme0n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: TEAM TM8FP6001T                         
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start        End    Sectors   Size Id Type
/dev/nvme0n1p1       2048 2000409263 2000407216 953.9G 83 Linux


Disk /dev/nvme1n1: 953.87 GiB, 1024209543168 bytes, 2000409264 sectors
Disk model: TEAM TM8FP6001T                         
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disklabel type: dos
Disk identifier: 0x00000000

Device         Boot Start        End    Sectors   Size Id Type
/dev/nvme1n1p1       2048 2000409263 2000407216 953.9G 83 Linux


Disk /dev/sdb: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-3CP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: C23CA402-C053-4C91-B959-4CB98A01ABAB

Device     Start         End     Sectors  Size Type
/dev/sdb1     64 15628053134 15628053071  7.3T Linux filesystem


Disk /dev/sdd: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-3CP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: B2C53C81-161D-418F-AD1E-8FB5684177AC

Device     Start         End     Sectors  Size Type
/dev/sdd1     64 15628053134 15628053071  7.3T Linux filesystem


Disk /dev/sde: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-3CP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 74A1C7EA-CE46-4307-8058-26BDC1B14D08

Device     Start         End     Sectors  Size Type
/dev/sde1     64 15628053134 15628053071  7.3T Linux filesystem


Disk /dev/sdc: 7.28 TiB, 8001563222016 bytes, 15628053168 sectors
Disk model: ST8000VN004-3CP1
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 4096 bytes
I/O size (minimum/optimal): 4096 bytes / 4096 bytes
Disklabel type: gpt
Disk identifier: 242F1999-73C3-4DCB-8722-4EE337E27856

Device     Start         End     Sectors  Size Type
/dev/sdc1     64 15628053134 15628053071  7.3T Linux filesystem


Disk /dev/md1p1: 7.28 TiB, 8001563168768 bytes, 15628053064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md2p1: 7.28 TiB, 8001563168768 bytes, 15628053064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/md3p1: 7.28 TiB, 8001563168768 bytes, 15628053064 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes

 

 

 

 


I did. After I created the array, it said the disks were unmountable. It gave me the option to format them, which I did. The FS was XFS.
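If it is useful, I can also post what blkid reports for the array devices now, to confirm what that format actually wrote (device names taken from the fdisk output in my first post):

# Show the current on-disk signature of each data slot
# (expect TYPE="xfs" if the XFS format went through).
blkid /dev/md1p1 /dev/md2p1 /dev/md3p1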

 

The very first time I tried to create the pool, it asked me to preclear the disks, which took about 24 hours. The issue happened even then. I have since reused and formatted the disks, but haven't precleared them again.


I guess you are one of the victims of the "UNRAID array vs. ZFS array" mixup, so please reconsider what you are wishing for.

 

An UNRAID array consists of one or more (independent) data drives and optionally one or two parity drives. The data drives can contain almost ANY filesystem, but ZFS is currently the worst choice; XFS works best in this combination (the parity drive(s) ensure data integrity at block level across all drives).

 

A ZFS array (pool) consists of 2 or more equal data drives: 2 drives use mirror mode, more than 2 use raidz1 (RAID-5-like) mode. THERE IS NO DEDICATED PARITY DISK HERE!!!

 

The main difference is that UNRAID lets you add any disk as a data drive (even a "full" one: turn off parity before adding it, switch parity back on afterwards, and it will regenerate), and shares can be split among drives too. The only "drawback" is that the parity disk always needs to be at least as large as the largest data drive in the array.

If you take out a data disk later on, the data on it is still accessible to other operating systems.

 

A ZFS array demands identical drives for all members; if one is larger, only the capacity of the smallest disk is used on each member. A ZFS array appears to UNRAID as a single drive, and you usually use it as a pool. If you take out a single data disk, the data on it is not accessible and cannot be used elsewhere.
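Roughly, at the zpool level the two layouts look like this (just a sketch for illustration; UNRAID builds the pool itself, and the pool name "tank" and the device names here are made up):

# two equal members -> mirror
zpool create -o ashift=12 tank mirror /dev/sdb1 /dev/sdc1

# three or more members -> raidz1 (RAID-5-like: parity is striped across all members,
# there is no dedicated parity disk)
zpool create -o ashift=12 tank raidz1 /dev/sdb1 /dev/sdc1 /dev/sdd1 /dev/sde1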

 

ZFS claims to be faster, but so far it has not proven that to me in practice. I prefer the old but flexible UNRAID style.

 

Currently they are looking for new names for these types of arrays, because many people have become confused.

 

And one more warning: although it is possible, avoid using ZFS-formatted disks within an UNRAID array. It will be slow as a dog, because ZFS "optimizes" the drives while idle, which always forces an additional head move and a write to UNRAID's parity drive(s). ZFS makes sure the drives are not locking up while doing this, but it is not aware of the UNRAID layer on top and creates massive head movements. You have never seen your disks this slow before, believe me.

 


Thank you for your thorough explanation, @MAM59. I still want to try ZFS for my use case.

So, honestly, I am REALLY confused now. This is the first time I've worked on a NAS, and I'm having difficulty getting my head around it.

I believe I don't have a good understanding of UNRAID and, by extension, NAS systems. 

 

My original intention was to install UNRAID and set up a 4-disk ZFS array (raidz1).  I tried it 3 times and need to change my approach.

 

Here is what I'm thinking. Since creating a ZFS array within the UNRAID array isn't working for me, I'll create a 1-disk array with a 128GB USB drive formatted as XFS.

Then I'll add a pool with all four 8TB HDDs, set to ZFS-encrypted and configured as mirrored with 1 group of 3.

Or I could put one of the 8TB HDDs in the array as an XFS disk and assign the other three HDDs to the newly created pool.
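Either way, once the pool is created and the array is started, I plan to sanity-check it from the console (the pool name "hddpool" is just a placeholder for whatever I name it in the GUI):

zpool status -v hddpool   # vdev layout and health; should match the profile chosen in the GUI
zpool list hddpool        # overall size, allocation and free space
zfs list -r hddpool       # datasets and how much space each is using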

 

 

Does that solve my problem?  Do you guys think I have the right approach?

  • Solution
17 minutes ago, rezaasl2000 said:

I'll create a 1-disk array with a 128GB USB drive formatted as XFS.

Then I'll add a pool with all four 8TB HDDs, set to ZFS-encrypted and configured as mirrored with 1 group of 3

That's not a problem; many users run a flash drive as the only array device, including myself on a couple of servers.

