iamroot4ever

Members
  • Posts: 7
  • Joined
  • Last visited

Everything posted by iamroot4ever

  1. I've recreated this issue twice while expanding my primary array, previously on 6.12.8 and now on 6.12.10:
     • Purchase new disk(s) and slot them into the backplane.
     • The drives are recognized under Unassigned Devices.
     • Start Preclear on some or all of the drives and wait for them to finish.
     • Optional: purchase and add more new disks to the backplane.
     • Once Preclear is complete, stop the array to add the drives.
     • Add the drives and start the array; this succeeds as expected.
     • Tick "Yes, Format" and click Format to officially add the precleared drives to the array.
     • Formatting begins, and because I am impatient, I go down and start Preclear on any new Unassigned Devices I have installed but not yet cleared.
     • Formatting now silently stalls until the Preclear(s) are complete (25+ hours for 18TB drives).
     I recognize this is likely intended or untested behavior, and an edge case, and now that I've found it, it's easy to avoid (a rough pre-flight check is sketched below, after these posts). My low-priority bugfix suggestion would be: have Unassigned Devices grey out Preclear while a formatting operation is in progress, since formatting precleared drives should only take a few minutes, and/or add a warning note to the Format button asking users to wait before starting additional Preclears. You can see what happens in my attached syslog: I start formatting 3 precleared drives, click Preclear on a new drive (not logged in the syslog, AFAICT), and the format process then stalls completely for 12 hours. I then realize the Preclear is probably causing me grief, stop it, and the stalled formatting completes quickly as expected, with the drives available in the array. (Array drives: Seagate 18TB IronWolf Pro) hoard-diagnostics-20240411-1619.zip
  2. Since this thread still pops up at the top of Google searches for "unraid array size limit," I thought I would update the details based on my experience with Unraid 6.12.4+: The direct answer of "no, you can't have parity drives in pools" is technically correct if you mean "parity drives" in the way the array uses them. However, you can - and should - configure pools with redundancy where you need it, or striping where you prefer performance. I have a pool just for cache, without redundancy, that is btrfs striped for optimum performance, and an SSD pool configured as RAIDZ1 ZFS for redundancy on my appdata / docker config setup. The addition of ZFS in particular has opened up a ton of options going forward, though the biggest hiccup is that the share configuration and mover settings don't allow easy pool-to-pool data migration. Of course this can be worked around with creative script work (a rough sketch is below, after these posts), so YMMV.
  3. @Eurotimmy Thanks again for creating the template; how did you get this configured on your local instance?
  4. This is a great idea and I'm glad to help flesh out the community knowledge on this. I've installed the container using the new Unraid template, but I'm having some difficulty with configuration. The template doesn't have an entry for a config directory, and no config.yml is created under the /mnt/user/appdata/romm/ directory tree. Modifying config.yml is how ROMM is usually configured, for things like mapping custom library paths to supported platforms. However, if I execute a shell in the running container (which means looking at the underlying disk from the docker daemon, not the user-modifiable appdata directory), I only see /backend/config/tests/fixtures/config.yml, which is marked internally as a test/sample file only. Can someone fill me in on the missing piece here?
     • Is the "test" config.yml actually live, so that if I create a manual mount path for /backend/config/tests/fixtures/ in Unraid's docker config for romm, it will let me create and modify my own config?
     • Or should I create my own config.yml and mount it at /backend/config/config.yml in the container, which would override any default configuration? (The host-side half of this idea is sketched below, after these posts.)
     Thanks in advance; I will be glad to help troubleshoot and share anything that can help here.
  5. Have you tried 6.12.5rc1? They claim to have ZFS import working.
  6. That's correct, and the number of moving parts between the Linux 6.1, 6.2, and 6.3 kernels, ZFS support, and Arc drivers over the last 9 months has made it very hard to make everybody happy here, I'm sure. I have an A380 installed, waiting to add to my container for transcoding on Plex; however, the host OS must support it first, which means 6.2 or later. 6.2 isn't LTS (AFAIK there won't be any more 6.2 kernel releases?), while 6.3 is still in active development and support, so that's the path forward... however, OpenZFS only added 6.3 support two months ago. So for the Unraid devs and release managers, the choice of kernel probably comes down to the wide range of use cases and large number of users for ZFS versus the very few who have Arc cards, and moving to a new kernel requires a ton of regression testing, I'm sure.
  7. Just purchased a license and set up my server... perfect timing, as this feature makes my dashboard much more readable in one screen! Thanks!
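The pre-flight check mentioned in post 1, as a minimal sketch: before clicking Format (or before starting another Preclear), list any processes that look like an in-progress format or preclear. The patterns "mkfs." and "preclear" are assumptions about how those operations show up in the process list on an Unraid box; adjust them for your setup.

```python
#!/usr/bin/env python3
"""Warn before starting a Preclear while a format may still be running.

Assumption: a format shows up as an mkfs.* process and the preclear
script's command line contains "preclear"; adjust the patterns if your
setup names them differently.
"""
import subprocess


def processes_matching(*patterns: str) -> list[str]:
    """Return `ps` output lines whose command line contains any of the patterns."""
    out = subprocess.run(["ps", "-eo", "pid,args"], capture_output=True, text=True)
    lines = out.stdout.splitlines()[1:]  # skip the PID/ARGS header row
    return [line for line in lines if any(p in line for p in patterns)]


if __name__ == "__main__":
    formats = processes_matching("mkfs.")
    preclears = processes_matching("preclear")
    if formats:
        print("A format appears to be running; hold off on new Preclears:")
        print("\n".join("  " + line for line in formats))
    elif preclears:
        print("Preclear(s) in progress; formatting may stall until they finish:")
        print("\n".join("  " + line for line in preclears))
    else:
        print("No format or preclear processes found.")
```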
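And the "creative script work" from post 2, sketched as a minimal pool-to-pool copy driven through rsync from Python. The mount points /mnt/cache and /mnt/ssdpool are placeholders made up for the example; substitute your own pools and shares, and leave the dry run on until the output looks right.

```python
#!/usr/bin/env python3
"""Minimal pool-to-pool migration sketch (the part the mover won't do for you).

SRC and DST are placeholder mount points; substitute your own pools/shares.
"""
import subprocess
import sys

SRC = "/mnt/cache/appdata/"    # example source pool + share
DST = "/mnt/ssdpool/appdata/"  # example destination pool + share


def migrate(dry_run: bool = True) -> int:
    """Copy SRC to DST with rsync, preserving attributes and hard links."""
    cmd = ["rsync", "-aHAX", "--info=progress2"]
    if dry_run:
        cmd.append("--dry-run")
    cmd += [SRC, DST]
    return subprocess.call(cmd)


if __name__ == "__main__":
    # Pass --for-real once the dry-run output has been checked.
    sys.exit(migrate(dry_run="--for-real" not in sys.argv))
```

Once the copy is verified you could add --remove-source-files to clean up the source pool, but I would do that step by hand the first time.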
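For post 4, here is the host-side half of the bind-mount idea: create a placeholder config.yml under appdata, then add a path mapping in the Unraid template from that host file to /backend/config/config.yml in the container. Whether the container actually reads a file at that path is exactly the open question, so treat the container-side target as an assumption.

```python
#!/usr/bin/env python3
"""Prepare a host-side config.yml for the proposed ROMM bind mount.

The host path follows the appdata directory mentioned in the post; the
container-side target /backend/config/config.yml is the unverified
assumption being asked about.
"""
from pathlib import Path

HOST_CONFIG = Path("/mnt/user/appdata/romm/config/config.yml")
CONTAINER_TARGET = "/backend/config/config.yml"

if __name__ == "__main__":
    HOST_CONFIG.parent.mkdir(parents=True, exist_ok=True)
    if not HOST_CONFIG.exists():
        # Left empty on purpose: fill it in per ROMM's documented config.yml options.
        HOST_CONFIG.touch()
        print(f"Created empty {HOST_CONFIG}")
    print("Add a path mapping to the Unraid docker template:")
    print(f"  host path:      {HOST_CONFIG}")
    print(f"  container path: {CONTAINER_TARGET}")
```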