Keliaxx

Members
  • Posts: 15
  • Joined
  • Last visited

  1. Any idea when the native zfs implementation will support the use of partitions instead of entire physical drives as members of a vdev? Just wondering how long it will be before my array will actually work properly with the array start and array stop commands, so I won't have to manually do a zpool import on startup and a zpool export prior to system shutdown.
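     As an illustration of the manual workaround described in that post (a sketch only, not anything built into the array start/stop process), something like the following can be run from the CLI; the pool name mainpool is taken from the topology shown in the later items:

         # After starting the array, bring the pool in by hand:
         zpool import mainpool
         # Before stopping the array / shutting the system down:
         zpool export mainpool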
  2. Unfortunately, no. I haven't found a solution to the problem yet. In my case, it's annoying but not causing me any major issues at the moment, aside from getting the warning every time I fire up a terminal session.
  3. @limetech What is the game plan here with regard to zfs volumes that use high-performance nvme partitions for cache/special/zil?
  4. LT, any thoughts to offer up on this?
  5. Just so I'm clear, you replaced the CPU section so that it reports Intel architecture and attributes on a physical AMD platform?
  6. As a bit of an addendum around NVMe and ZFS, though it applies to NVMe in general: underprovisioning NVMe is extremely common for extending its service life. A lot of modern controllers no longer provide the option to underprovision or create separate domains at the hardware level, so it is now accomplished through partitioning. Fully supporting NVMe use cases really does require having the ability to work with partitions as well as entire devices, IMHO.
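     As an illustration of "underprovisioning by partitioning": the sketch below creates a single partition covering most of an NVMe device and leaves the remainder unallocated for the controller to use. The device name /dev/nvme0n1, the 2 TB capacity, and the ~80% figure are all assumptions for the example, not recommendations.

         # Wipe any existing partition table (destroys data on the device!)
         sgdisk --zap-all /dev/nvme0n1
         # Create one ~1.6 TB partition on a 2 TB drive, leaving ~20% unallocated
         sgdisk --new=1:0:+1600G /dev/nvme0n1
         # Verify that the remaining space stays free
         sgdisk --print /dev/nvme0n1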
  7. In the release notes, it mentions reporting zpool topologies that won't import, so I wanted to file the report so that it can be added to the list of ZFS topology possibilities to be considered going forward. The idea of splitting up a very high performance NVMe into partitions and using those partitions to serve multiple functions is quite common when balancing availability and performance in ZFS special, cache, and log VDEVs.
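     For reference, a rough sketch of how special/log/cache VDEVs like these can be pointed at partitions of the same NVMe devices from the CLI (illustrative only; the device and partition names are taken from the topology listing in the next item, and these are not necessarily the exact commands that were run):

         # Add a mirrored SLOG on partition 1 of each NVMe device
         zpool add mainpool log mirror nvme4n1p1 nvme5n1p1 nvme6n1p1
         # Add a mirrored special VDEV on partition 2 of each device
         zpool add mainpool special mirror nvme4n1p2 nvme5n1p2 nvme6n1p2
         # Add a striped L2ARC (cache) on partition 3 of each device
         zpool add mainpool cache nvme4n1p3 nvme5n1p3 nvme6n1p3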
  8. Zpool that includes log, cache, and special devices (partitions on a group of 3 NVME drives) will not import on array start. Pool can be imported from the CLI, requiring services to be restarted to work properly. Pool topology is shown below. Special, cache, and log VDEVs are comprised of partitions on a group of 3 NVME drives.

       pool: mainpool
      state: ONLINE
       scan: scrub repaired 0B in 00:22:50 with 0 errors on Sun Apr 23 23:38:41 2023
     config:

             NAME           STATE     READ WRITE CKSUM
             mainpool       ONLINE       0     0     0
               mirror-0     ONLINE       0     0     0
                 sde1       ONLINE       0     0     0
                 sdf1       ONLINE       0     0     0
               mirror-1     ONLINE       0     0     0
                 sdg1       ONLINE       0     0     0
                 sdh1       ONLINE       0     0     0
             special
               mirror-3     ONLINE       0     0     0
                 nvme4n1p2  ONLINE       0     0     0
                 nvme5n1p2  ONLINE       0     0     0
                 nvme6n1p2  ONLINE       0     0     0
             logs
               mirror-2     ONLINE       0     0     0
                 nvme4n1p1  ONLINE       0     0     0
                 nvme5n1p1  ONLINE       0     0     0
                 nvme6n1p1  ONLINE       0     0     0
             cache
               nvme4n1p3    ONLINE       0     0     0
               nvme5n1p3    ONLINE       0     0     0
               nvme6n1p3    ONLINE       0     0     0

     errors: No known data errors

     mobeast-diagnostics-20230518-1258.zip
  9. Meaning *ONLY* on partition 1? The three NVME devices that are being used each have 3 partitions: partition 1 on each is combined into a 3-way mirror VDEV for SLOG, partition 2 on each is part of a 3-way mirror VDEV for SPECIAL, and partition 3 on each is joined as a stripe VDEV for persistent CACHE. This is definitely something that should be considered for support in future releases, as the fault tolerance recommendations for SLOG and SPECIAL, along with the extremely high IOPS and low latency of 4x PCIe 4.0 NVME, make this model of using partitions instead of entire physical devices a lot more practical. I'm curious, how exactly is the zpool import being called on the back end that causes this part of it to fail? I can understand the UI getting confused about how to present it, but array start doesn't successfully import the pool.
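     One way to see the kind of difference that can matter here (purely a guess at the mechanism, not a statement of what the Unraid back end actually does): a zpool import whose device search is limited to the whole-disk members can miss VDEVs that live on partitions, whereas a scan of a directory containing every device node picks them up.

         # Restricting the search to specific data-disk nodes may fail to find the
         # log/special/cache partitions (hypothetical illustration):
         zpool import -d /dev/sde1 -d /dev/sdf1 -d /dev/sdg1 -d /dev/sdh1 mainpool
         # Scanning a whole directory of device nodes, including partitions, finds them:
         zpool import -d /dev/disk/by-id mainpool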
  10. I've been scouring the interwebs trying to find an answer to this, and I read that it was fixed in kernel 6.1 (see https://bugzilla.kernel.org/show_bug.cgi?id=155211). Alas, 6.12 RC3 with kernel 6.1.23 seems to still exhibit this problem. When trying to use nested virtualization in Windows 11 for things like WSL and WSA, you get the following in the host dmesg. Would anyone happen to know if / what kvm cpu parameter voodoo might allow one to trick Windows 11 into playing nice? I've tried stripping the hypervisor property and forcing the svm capability flags with no luck. Within the respective VM, Windows is quick to announce that nested virtualization is not supported on the platform when you launch WSL2.

      [ 3830.698973] SVM: kvm [19055]: vcpu0, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.069250] SVM: kvm [19055]: vcpu1, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.193400] SVM: kvm [19055]: vcpu2, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.317537] SVM: kvm [19055]: vcpu3, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.441694] SVM: kvm [19055]: vcpu4, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.565861] SVM: kvm [19055]: vcpu5, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.690037] SVM: kvm [19055]: vcpu6, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.814213] SVM: kvm [19055]: vcpu7, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3831.938362] SVM: kvm [19055]: vcpu8, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
      [ 3832.062556] SVM: kvm [19055]: vcpu9, guest rIP: 0xfffff86757387841 unimplemented wrmsr: 0xc0010115 data 0x0
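     A few host-side things that are commonly checked for this class of problem (a sketch of checks, assuming an AMD host using the kvm_amd module, not a confirmed fix for the wrmsr messages above):

         # Nested SVM must be enabled in kvm_amd for a guest to virtualize
         cat /sys/module/kvm_amd/parameters/nested        # expect 1
         # If it is not, reload the module with nesting on (stop all VMs first)
         modprobe -r kvm_amd && modprobe kvm_amd nested=1
         # The guest CPU model also needs to expose the svm flag; the QEMU
         # equivalent of what the post describes trying looks like:
         #   -cpu host,+svm,-hypervisor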
  11. I followed the steps mentioned and have pretty much the same results, except I now have the 3 NVME devices listed in the pool along with the 4 hard drives. I can still manually import it after starting the array and it works, but starting the array with all 7 devices defined doesn't import it during the array startup. I've attached the updated diagnostic report. If necessary, I have the space to vacate the pool and rebuild it if there is a different process I need to follow. Please note that the SLOG, SPECIAL, and CACHE VDEVs are pointed at partitions on the same 3 NVME devices. ZFS doesn't care, but I wanted to point it out in case it might have been overlooked. Because these VDEVs were added from the CLI after the pool was created from the web UI, they aren't encrypted with luks. If I need to rebuild the pool, I can vacate the data and do that. I would just need to know what steps to follow to make the UI happy with the resulting pool.
  12. Attached is the diagnostic report after starting the array. At the point where this was captured, the zfs pool mainpool shows all 4 of the 20TB drives as unformatted, however they are unlocked by luks. From this point, if I just go to the cli, I can do a zpool import mainpool and it brings up and mounts the zpool without issues. From there, I just need to disable and re-enable VMs and everything works fine, but the UI never shows the pool correctly. Here is the topology of the zfs mainpool after import from the command line:

       pool: mainpool
      state: ONLINE
       scan: scrub repaired 0B in 00:22:50 with 0 errors on Sun Apr 23 23:38:41 2023
     config:

             NAME           STATE     READ WRITE CKSUM
             mainpool       ONLINE       0     0     0
               mirror-0     ONLINE       0     0     0
                 sde1       ONLINE       0     0     0
                 sdf1       ONLINE       0     0     0
               mirror-1     ONLINE       0     0     0
                 sdg1       ONLINE       0     0     0
                 sdh1       ONLINE       0     0     0
             special
               mirror-3     ONLINE       0     0     0
                 nvme4n1p2  ONLINE       0     0     0
                 nvme5n1p2  ONLINE       0     0     0
                 nvme6n1p2  ONLINE       0     0     0
             logs
               mirror-2     ONLINE       0     0     0
                 nvme4n1p1  ONLINE       0     0     0
                 nvme5n1p1  ONLINE       0     0     0
                 nvme6n1p1  ONLINE       0     0     0
             cache
               nvme4n1p3    ONLINE       0     0     0
               nvme5n1p3    ONLINE       0     0     0
               nvme6n1p3    ONLINE       0     0     0

     errors: No known data errors
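     The manual recovery sequence described in that post, roughly as commands (a sketch; the rc.libvirt path is an assumption based on Unraid's Slackware-style init scripts, and the same disable/re-enable of VMs can be done from the web UI instead):

         # Bring the pool in by hand after array start
         zpool import mainpool
         zpool status mainpool                 # confirm all VDEVs show ONLINE
         # Restart the VM service so libvirt sees the now-mounted pool
         /etc/rc.d/rc.libvirt stop && /etc/rc.d/rc.libvirt start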
  13. FWIW and to help anyone else that gets into a weird state like this, I did find a workaround that allowed me to create the second pool I needed. I temporarily renamed the pool configuration file under /boot/config/pools to [poolname].cfg.bak and rebooted. This way, it just saw all the disks that were previously part of that pool as unused and allowed me to format only the disks associated with the new pool. Once they were formatted and all was in good order, I just stopped the array, put the pool cfg file back and rebooted. I still have the issue with the zpool requiring manual import every time I reboot, but the newly added pool works fine.
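     The rename-and-reboot workaround above, expressed as commands (a sketch; "mainpool" stands in for the [poolname] placeholder in the post, on the assumption that it is the existing pool from the earlier items):

         # Hide the existing pool's config so its disks are treated as unused
         mv /boot/config/pools/mainpool.cfg /boot/config/pools/mainpool.cfg.bak
         reboot
         # ...create and format only the disks for the new pool from the web UI...
         # then stop the array, restore the original pool config, and reboot
         mv /boot/config/pools/mainpool.cfg.bak /boot/config/pools/mainpool.cfg
         reboot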
  14. Just to make sure I capture diagnostics from the correct workflow and get you the most useful set of data... Starting from a reboot and the array not configured for autostart, what steps would you like me to complete before capturing the diagnostics? Array start, command line zpool import, or both?
  15. I'm working with the 6.12 beta (RC3). I have a large, raidz2 encrypted ZFS pool, and the UI doesn't know how to handle it after I added mirrored NVME ZIL and Special devices, along with a striped L2ARC. Yes, I know how they work and when to use them (and when not to) with ZFS, so I'm hoping this stays on topic about the formatting question. Anyway, the ZFS pool works exactly as I had planned, but unraid has no idea how to handle it properly when starting the array. You have to import it manually each time you start the array and then it works fine, but the UI complains that the disks in the array are unformatted. Annoying, but not a show stopper until I tried to add another pool. The problem now is that I can't format the disks in the new pool, because the format option wants to format ALL of the disks that it THINKS are unformatted, including the disks in my main zfs pool. Is there a way to manually format just the disks that make up a specific pool and leave the rest untouched? Alternatively, has anyone had success fixing the UI after adding ZIL, special, and L2ARC to ZFS manually?
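     Not an answer to the formatting question itself, but before any manual formatting it can help to confirm exactly which device nodes belong to the existing pool so they are left alone (a sketch; "mainpool" is the pool name used in the other items on this page):

         # Print full device paths for every VDEV member of the existing pool
         zpool status -P mainpool
         # Cross-check the partitions and filesystems on each disk
         lsblk -o NAME,SIZE,FSTYPE,MOUNTPOINT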