boomam

Members
  • Posts: 222
  • Joined
  • Last visited


boomam's Achievements

Explorer (4/14)

15 Reputation

  1. Any particular model, or just Supermicro cables in general?
  2. Hi, does anyone have any recommendations for some high-quality SFF-8087 SATA cables? I'm currently using the ones that came with my controller, here, but one of my drives keeps getting CRC errors every few months, and for the cost of replacing the cables with something decent, I'd rather take the cables out of the equation as a possible fault. Thanks in advance!
  3. Having this issue too, on 6.12.2. ZFS Master doesn't exist for me (it did a few weeks ago when I checked it out, though). There's nothing on the array either, other than my media. Everything 'active' is on a ZFS pool, and the only thing that can access the media is Jellyfin, which is off right now. It seemed fine on 6.12.1.
  4. No worries, I can wait for the final fix and then test that, if you want. I'm in no rush to apply the quota. 🙂
  5. Thanks. Attempted to put in 1T as a test on a dataset, but it doesn't want to apply:

         Unable to update dataset
         Output: cannot set property for 'storage/dataset-test': 'compression' must be one of 'on | off | lzjb | gzip | gzip-[1-9] | zle | lz4 | zstd | zstd-[1-19] | zstd-fast | zstd-fast-[1-10,20,30,40,50,60,70,80,90,100,500,1000]'

     Tested with various settings, not touching anything beyond what shows up in the popup; it's always some variant of the above. As an example, with the following settings:

         Compression - Inherit
         Quota - 1 T
         Record Size - Inherit
         Access Time - Off
         Extended Attributes - On
         Primary Cache - Inherit
         Read Only - Off
         Sync - Standard (Default)

     I get:

         Unable to update dataset
         Output: cannot unmount '/mnt/storage/dataset-test': pool or dataset is busy

     (See the quota sketch after this list.)
  6. Thanks for clarifying. One question: the quotas on the datasets show a default unit of 'B', so I assume bytes, but the maximum I can enter is 10000 B. Is this a known thing?
  7. If you know the command you want to run to achieve that as a one-off, you can just use User Scripts to schedule it with cron (see the sketch after this list).
  8. Probably a silly question, as ZFS is ZFS, but is there any negative impact to using this plugin or ZFS features, considering they are not exposed natively in Unraid itself? For example, if I create sub-datasets and/or snapshots, and Unraid 6.13 then adds support for managing them in the GUI, I assume it'll just work? (See the sketch after this list.)
  9. Re-tested after the 6.12.2 update today; the same error persists, and the same workaround resolves it.
  10. Morning all, has anyone noticed a bug that prevents virtiofs share passthrough from working with VMs where the share is set to exclusive access? Specifically, this GUI error when starting the VM:

          internal error: virtiofsd died unexpectedly

      And this log error:

          libvirt: error : libvirtd quit during handshake: Input/output error

      The workaround is to do something that turns exclusive shares off, such as enabling NFS, but it's a weird bug. Anyone else come across this? (See the virtiofs sketch after this list.) Thanks!
  11. Thanks, that solved it. It must have changed in rc6/rc7 (coming from 6.11). 🙂
  12. Not sure if it's RC6- or RC7-specific, but I just noticed the Docker page doesn't allow me to change the container ordering. Has the function moved, or have I found a bug? 😛
  13. RC6 to RC7 - touch wood, seemed uneventful.
  14. Decades of working with storage arrays at various levels of enterprise, along with building lots of home-grown solutions based on anything from Unraid to True(Free)NAS and their contemporaries. In a normalized setup, the difference is not a consideration factor whatsoever.

      That's true of anything though, no? The devil's in the detail; in this case, BTRFS is the likely culprit, with FUSE a close second. My money's on BTRFS... shudder... how it's considered stable in this industry is beyond me.

      True, however you are missing one of the major advantages of jumbo frames: dramatically less overhead. If a given implementation does not support NIC offloading, then the CPU deals with it, and even with modern CPUs it doesn't take much for jumbo frames to become a factor. Equally, retries due to misconfiguration at the network level add up too.

      I see your point for diagnostic purposes, but for anything close to day-to-day or 'production' use, a single block device is not a great idea. One suggestion here would be to look into zvols to serve the function of the block device, giving the advantages of both data redundancy and pseudo-block-level access, assuming ZFS is in use of course. (See the zvol sketch after this list.)
  15. The issue here is that not everyone is comfortable with Cloudflare tunnels - some prefer traditional VPN, some Tailscale, etc. Equally, in order for the tunnel software to remain compatible, regular updates are needed, and Unraid just isn't updated often enough for that to be possible. It would make more sense, IMO, to decouple the Docker daemon from the array. As an example, I run my Docker images from a dedicated cache drive. A stateless container like Cloudflared does not need or use storage beyond the image it starts from, so there is no need for the array in that case. Decoupling like this lets truly stateless containers run without the array, without forcing a specific ongoing "feature" requirement on the Unraid dev team.
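
A footnote on the quota errors in posts 5 and 6: while the plugin GUI sorts out its handling, the same property can be set from the shell. A minimal sketch, assuming the dataset name storage/dataset-test from the error output; ZFS itself accepts human-readable suffixes here, so the bytes-only field looks like a GUI limitation rather than a ZFS one.

    # Set a 1 TiB quota directly; suffixes like K, M, G, T are accepted
    zfs set quota=1T storage/dataset-test

    # Confirm it applied
    zfs get quota storage/dataset-test

    # Remove it again if it was only a test
    zfs set quota=none storage/dataset-test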
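
On the User Scripts suggestion in post 7, a minimal sketch of such a one-off; the snapshot command is a hypothetical example payload, and the plugin takes a script body plus a custom cron expression for its schedule.

    #!/bin/bash
    # Hypothetical nightly job at 02:00; custom cron in User Scripts: 0 2 * * *
    zfs snapshot storage/dataset-test@nightly-$(date +%Y%m%d)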
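
On post 8: datasets and snapshots created outside the GUI are ordinary ZFS objects stored in the pool itself, so any later Unraid release that imports the pool will see them. A sketch of the kind of thing meant by sub-datasets and snapshots, with illustrative names:

    # Create a child dataset beneath an existing one
    zfs create storage/media/photos

    # Take a point-in-time snapshot of it
    zfs snapshot storage/media/photos@before-reorg

    # List both; any ZFS-aware tool reads this same metadata
    zfs list -r -t filesystem,snapshot storage/media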
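
For the virtiofs bug in post 10, a sketch of how to inspect what the VM is actually being handed; the VM name MyVM and the share path are placeholders, not taken from the post.

    # Dump the VM definition and find the virtiofs <filesystem> stanza
    virsh dumpxml MyVM | grep -A 4 '<filesystem'
    #   <filesystem type='mount' accessmode='passthrough'>
    #     <driver type='virtiofs'/>
    #     <source dir='/mnt/user/media'/>
    #     <target dir='media'/>
    #   </filesystem>
    # With an exclusive share, /mnt/user/<share> becomes a symlink straight to
    # the pool path, which is a plausible trigger for virtiofsd dying here.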
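
And on the zvol suggestion closing post 14, a minimal sketch with illustrative names and size; the volume appears as a block device under /dev/zvol/ while still sitting on the pool's redundancy.

    # Create a 100 GiB zvol; -s makes it sparse (thin-provisioned)
    zfs create -s -V 100G storage/vm-disk0

    # It is now addressable as a block device
    ls -l /dev/zvol/storage/vm-disk0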