DiscoverIt
Members · 53 posts
Isn’t native ZFS encryption effectively abandoned, and by extension considered insecure, now? Last I recall, there was basically no development team left working on that facet of ZFS.
-
I put "other," as it would be nice to have an LTS version, for example LTS 7.0 (update X). This branch wouldn’t see feature improvements, but rather security updates and critical bug fixes.
-
ZFS Formatted Drives Display Smaller Capacity
DiscoverIt replied to DiscoverIt's topic in General Support
I would have assumed the overall total disk size would come from the disk level, not the filesystem level. XFS has similar metadata, but Unraid shows it as used space rather than a reduction in capacity.
-
What could cause drives formatted as XFS-encrypted to show their full capacity, while ZFS-encrypted drives show a slightly reduced value? All three disks also show the exact same partition size.
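One guess on my part (an assumption, not confirmed as what Unraid is actually displaying): ZFS holds back internal reserves that XFS does not, including "slop space" — by default 1/32 of the pool (the spa_slop_shift tunable) — which shrinks the space ZFS reports as usable. As a rough sense of scale, for a hypothetical 12 TB drive:

```shell
# Rough estimate only; assumes the gap comes from ZFS internal
# reservations rather than LUKS. The 12 TB figure is a placeholder.
disk_bytes=$((12 * 1000 ** 4))        # hypothetical 12 TB drive
slop=$((disk_bytes / 32))             # default spa_slop_shift=5 -> 1/32 held back
echo "estimated slop reserve: $slop bytes"
```

That would predict a noticeable reduction, though the real number Unraid shows may be computed differently.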
-
Looking into expanding my ZFS pool to larger-capacity drives. ZFS supports autoexpand=on, where the pool grows to use the full capacity of the underlying devices. One complication I see is the LUKS encryption layer: will there be an issue getting LUKS on the new drive to expose the full added capacity to ZFS?
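In case it helps, the sequence I'd expect is below — a sketch based on general cryptsetup/ZFS usage, not Unraid-specific; the pool name and LUKS mapping are placeholders, and the commands are printed rather than run:

```shell
# Sketch only: /dev/mapper/luks-sdX is a placeholder for the real LUKS
# mapping of the replaced drive. Printed, not executed, here.
POOL=cache
MAP=/dev/mapper/luks-sdX
cat <<EOF
cryptsetup resize $MAP        # grow the LUKS mapping over the larger partition
zpool set autoexpand=on $POOL # allow vdevs to claim newly visible space
zpool online -e $POOL $MAP    # expand this member to the full device size
EOF
```

The key point is that the LUKS mapping has to be resized first, or ZFS only ever sees the old device size.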
-
Sep 4 13:56:02 Bakery emhttpd: shcmd (142): touch /boot/config/forcesync
Sep 4 13:56:02 Bakery Parity Check Tuning: Unclean shutdown detected

What was introduced from 6.12.3 to 6.12.4 that drastically changed the startup? Seems like a wide-ranging issue.
-
Help Needed Identifying Bottleneck (ZFS Cache)
DiscoverIt replied to DiscoverIt's topic in General Support
Thx Jorge. An interesting lead came up in Discord: dd appears capped or restricted in some manner. Going to spin up a basic Ubuntu container and try a common benchmarking tool next.
-
I have a system that I just can't seem to get reads/writes to where they should be. I can usually saturate my 25Gbps NIC, but with that MikroTik 100Gb switch tempting me daily... I want to make sure that if I get a deal on it, I'm prepared hardware-wise.

Cache is a 2 x 4 raidz1 zpool comprised of 8 x 1TB PCIe 3.0 NVMe drives in two ASUS Hyper M.2 carriers. The system is an EPYC 7302P with 256GB of 2133 DDR4 memory and a 25Gbps NIC. LUKS encryption is enabled through Unraid's implementation.

root@UNRAID:/# zpool status
  pool: cache
 state: ONLINE
  scan: scrub repaired 0B in 00:03:48 with 0 errors on Sun Jul 9 04:03:49 2023
config:

        NAME           STATE     READ WRITE CKSUM
        cache          ONLINE       0     0     0
          raidz1-0     ONLINE       0     0     0
            nvme2n1p1  ONLINE       0     0     0
            nvme3n1p1  ONLINE       0     0     0
            nvme1n1p1  ONLINE       0     0     0
            nvme0n1p1  ONLINE       0     0     0
          raidz1-1     ONLINE       0     0     0
            nvme4n1p1  ONLINE       0     0     0
            nvme5n1p1  ONLINE       0     0     0
            nvme6n1p1  ONLINE       0     0     0
            nvme7n1p1  ONLINE       0     0     0

errors: No known data errors

Writing to the cache pool:

root@UNRAID:/mnt/cache/appdata# dd if=/dev/zero of=test.img bs=1G count=10 oflag=dsync && rm test.img
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 3.78222 s, 2.8 GB/s

Writing to RAM:

root@UNRAID:/tmp# dd if=/dev/zero of=test.img bs=1G count=10 oflag=dsync && rm test.img
10+0 records in
10+0 records out
10737418240 bytes (11 GB, 10 GiB) copied, 6.7859 s, 1.6 GB/s

Possible culprits I feel it could be, though I don't know what rock to turn over to find additional bandwidth:
1) Slow memory
2) PCIe 3.0, though I should theoretically be capable of 4 GB/s per drive
3) A slow NVMe bringing down the pool
4) LUKS in some form as the culprit
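One thing worth noting about the numbers above: dd with bs=1G and oflag=dsync issues a single synchronous 1 GiB write at a time, so it can understate both the pool and RAM. A queued, multi-job fio run should give a better ceiling — sketched below as the command I'd try in the Ubuntu container (fio assumed installed there; the directory and flag values are my assumptions, printed rather than executed):

```shell
# Sketch of a sequential-write ceiling test with fio (assumed installed
# in the container; directory is a placeholder). Printed, not run, here.
cat <<'EOF'
fio --name=seqwrite --directory=/mnt/cache/appdata \
    --rw=write --bs=1M --size=10G \
    --ioengine=libaio --direct=1 --iodepth=32 --numjobs=4 \
    --group_reporting
EOF
```

Also worth remembering that /dev/zero compresses perfectly, so if compression is on for the dataset, zero-filled benchmarks can be misleading in the other direction.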
-
Deleting Docker template does not delete .xml.bak
DiscoverIt commented on DiscoverIt's report in Stable Releases
Is the .bak no longer created on modern OS versions?