jortan

Everything posted by jortan

  1. Was plex on your Unraid array or cache pool? Can you describe what performance issues you were having? I'm not sure that nested virtualisation is the path to better performance!
  2. The pool name isn't represented as a sub-folder of the mount location, it's mounted directly to the directory you specify. My suggestion would be not to overcomplicate this:
     zpool create -m /mnt/poolname poolname mirror sdb sdc
     zpool create -m /mnt/poolname2 poolname2 mirror sdd sde
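     A quick way to double-check afterwards that each pool landed where you expect (pool names as above, device names purely illustrative):
     # list the pools and confirm where each one is mounted
     zpool list -o name,size,health
     zfs get mountpoint poolname poolname2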
  3. Not only should you - you will need to. You can't mount multiple pools to the same path. It's probably fine. AFAIK the only reason not to mount zfs directly in /mnt/pool is this, which can just be ignored:
  4. do you mean /mnt/disks/zfs/pool1 and /mnt/disks/zfs/pool2? I believe /mnt/disks is where the Unassigned Devices plugin mounts disks? Probably not an issue, but to avoid any edge case issues I would just mount zfs pools here:
     /mnt/pool1
     /mnt/pool2
     For Samba sharing - you can either have ZFS do this (I haven't done it this way, but it's something like):
     zfs set sharesmb=on pool1
     What I have done before is just add this to Unraid's /boot/config/smb-extra.conf
     [sharename]
     path = /mnt/pool1
     comment = share description
     browseable = yes
     public = yes
     writeable = yes
     vfs objects =
     This assumes you want the share to be public - accessible anonymously. If so, then:
     chmod 777 /mnt/pool1
     chown nobody:users /mnt/pool1
  5. +1 I have 4TB, 5TB, 8TB and 12TB disks, a combination of 2.5" and 3.5", all in a single array. You can use disk slot inclusions/exclusions within your shares to ensure that performance-sensitive shares are only written to your fastest disks. (I also use ZFS disk mirrors for performance - i.e. for VMs)
  6. How many drive bays do you have to achieve this? Enough for all your old and new disks at the same time? There's multiple options depending on your tolerance for maintaining parity - someone may want to chime in on the finer points:
     Option 1:
     - Replace your parity disks one-by-one
     - In Unraid "Shares" config, exclude all your old disks from your shares except the first disk slot you are about to replace. (The shares will still show any files present on the old disks, but this will prevent new files being written to them.)
     - Replace your disks one-by-one - un-exclude the disk slots from your shares when they have been replaced with a 16TB disk.
     - As you replace the disks with larger ones, use the "unBALANCE" plugin to merge data from multiple smaller disks onto the new disks.
     - When you've finished, your array should have 2 x 16TB parity disks, all your data on 16TB disks, and a bunch of empty 6TB disks.
     - Take a screenshot of your current array and slot assignments (noting which 16TB disks are parity and which are data)
     - Move any files on the cache pool if used (Settings | Scheduler | Move Now)
     - Use "Tools | New config" to reconfigure the array to only include the new 16TB parity/data disks. You should be able to "copy" your current disk assignments, and then just remove the 6TB disks before accepting the new array configuration. Read and understand the warnings presented at this point (you shouldn't need to format any disks). If you lose the array config for some reason, refer to the screenshot and create an array with the 16TB parity/data disks in their correct slots.
     Note that you will need to regenerate parity at this point, so you will have a day or so without parity protection.
     If you have enough drive bays you can do something similar to the above, except a bit easier:
     - Replace your parity disks one-by-one
     - Then add *all* the new disks to the current array
     - Exclude all your old disks from your shares
     - Move the data from the old disks to the new disks using unBALANCE
     - "Tools | New config" to re-configure the array to only include the new data/parity disks.
     Option 2: Unraid isn't RAID5 - file data isn't sliced and distributed among disks - Unraid arrays are just a bunch of XFS-formatted disks (optionally with parity). You can just copy or move everything from your current array to the new disks formatted as XFS. Once you have copied or moved all your data to the new disks, use "Tools | New config" to "create" a new array that includes only your new data disks and those you want to assign to parity. This will take a long time, so it may be hard to manage if you need to keep the array "live" for new writes.
  7. Borgbackup provide a binary that seems to work: https://github.com/borgbackup/borg/releases
     wget -O /boot/config/borg-linux64 https://github.com/borgbackup/borg/releases/download/1.2.2/borg-linux64
     Run these once and also add them to /boot/config/go
     cp /boot/config/borg-linux64 /usr/local/sbin/borg
     chmod +x /usr/local/sbin/borg
     Alternatively you could use something like the borgmatic docker, mount whatever paths you need borg to have access to, and run your commands from the docker's console or using "docker exec"
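     Once the binary is in place, basic usage looks something like this (the repository path and source path are just examples - adjust to your layout):
     # one-off: initialise an encrypted repository on a backup disk
     borg init --encryption=repokey /mnt/disks/backup/borg-repo
     # then on a schedule: create a compressed, deduplicated archive
     borg create --stats --compression zstd /mnt/disks/backup/borg-repo::appdata-{now} /mnt/user/appdata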
  8. Here's another post with information more specifically about setting boot order for passthrough nvme:
  9. /mnt/user is where Unraid does its magic for shares backed by Unraid arrays / cache devices. My advice would be to leave /mnt/user alone and not attempt to manually mount (or link) filesystems here. Maybe I'm missing something, but can you not go to Settings | VM Manager and set "Default ISO storage path" to /zfs-sas/isos? Fairly sure this is what most Unraid/ZFS users are doing.
  10. Can confirm the current "next" build of 6.10.2-rc3 is also working if you wanted to try that. VVV oops, you're right - thanks
  11. The other thing you can do is:
     zfs set sync=disabled poolname
     This forces all writes to be asynchronous. This doesn't risk data corruption, but it does risk the last ~5 seconds of writes in the case of a power failure, which could lead to data loss in some scenarios. If you happen to be moving data to the array at the time, the sender has been told the data is moved so the source is deleted, but it won't actually get written to the destination for 0-5 seconds. You could see significant performance benefit on a busy pool though - particularly one seeing lots of small synchronous writes.
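     It's easy to check and revert later if you change your mind (pool name as above):
     zfs get sync poolname            # confirm the current setting
     zfs set sync=standard poolname   # back to the default behaviour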
  12. This seems about what I would expect. You're not streaming data directly and uninterrupted to your spinning rust as you would be in a RAID0-like configuration. For every write to one disk, ZFS is having to store that data and metadata redundantly on another disk. Then the first disk gets interrupted because it needs to write data/metadata to provide redundancy for another disk. You're not streaming data neatly in a row, there are seek times involved. If you want performance with spinning rust, get more spindles and ideally switch to RAID10 (mirrored pairs), as sketched below.
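     For example, a four-disk pool of striped mirrors would look something like this (pool and device names are illustrative):
     # two mirror vdevs - writes are striped across both pairs, reads can come from any disk
     zpool create -m /mnt/poolname poolname mirror sdb sdc mirror sdd sde
     # more spindles can be added later as additional mirror pairs
     zpool add poolname mirror sdf sdg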
  13. To clarify, it's relevant for any application doing synchronous writes to your ZFS pool. For these writes, the file system won't confirm to the application that the write has been completed until the data is confirmed as written to the pool. Because writes to the SLOG are much faster, there can be significant improvements in write performance. SLOG is not a read cache - ZFS will never read from the SLOG except in very rare circumstances (mostly after a power failure.) Even if it could, it would be useless as the SLOG is regularly flushed and any data is already in memory (ARC). If your application is doing asynchronous writes, ZFS stores the write in memory (ARC) and reports to the application that the write was complete (it then flushes those writes from ARC to your pool - by default every 5 seconds I think?) The SLOG has zero benefit here. I have a feeling QEMU/VMs will do synchronous writes also, but I don't have an SLOG running to test. From memory NFS shares default to synchronous and SMB shares will default to asynchronous?
     >> Intel Optane P1600X 118GB
     That said, if you have/are getting this for L2ARC anyway, you may as well slice off 10GB for a SLOG. I was running a P4800X in exactly the same way a while ago.
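     Attaching the carved-off partitions would look roughly like this (pool name and partition layout are just examples for an Optane drive):
     # small partition as the SLOG (separate intent log)
     zpool add poolname log /dev/nvme0n1p1
     # the remainder as L2ARC
     zpool add poolname cache /dev/nvme0n1p2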
  14. Yes, for HDD pools zstd makes more sense as you are IO-bound by slow spinning rust. Needing to read and write less data to spinning rust is more likely to be beneficial than any increase in computation required for compression/decompression. For NVMe you are less bound by IO, so the increase in computation required by zstd is more likely to impact performance negatively. On modern computers it probably makes very little difference. If you're rocking 10-year-old Xeons (like me) then it might. That said, zstd also gives better compression (to varying degrees, depending on the type of data) so if that's important to you it's another thing to consider.
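     You can see what a given choice is actually buying you on existing data (dataset name is an example):
     # achieved compression ratio and current algorithm for a dataset
     zfs get compression,compressratio poolname/dataset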
  15. For NVMe you're probably better off using lz4 by default, and zstd for datasets with very compressible data (large log/text files), or for datasets you don't read often (archives/backups.) Some insights here: https://news.ycombinator.com/item?id=23210491
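     In practice that might look like this (pool and dataset names are illustrative) - compression is inherited, so child datasets only need overriding where you want zstd:
     zfs set compression=lz4 poolname
     zfs set compression=zstd poolname/logs
     zfs set compression=zstd-9 poolname/backups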
  16. You may want to look at ZFS + snapshots using sanoid, plus syncoid if you want to replicate those snapshots to another machine. It's a lot more manual work to configure and maintain, but a good learning experience if you're up for it. It seems native ZFS support will be coming to Unraid eventually. It's great for running dockers/VMs, though with ZFS you don't have the ability to cache writes to the array within Unraid like you can with the built-in "pool" functionality (my guess is this will be possible when Unraid supports ZFS natively). There's a number of YouTube guides for implementing ZFS in Unraid also.
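     As a rough sketch of what the sanoid side looks like (dataset names, retention values and the remote host are all just examples):
     # /etc/sanoid/sanoid.conf
     [poolname/appdata]
             use_template = production
             recursive = yes
     [template_production]
             hourly = 24
             daily = 30
             monthly = 3
             autosnap = yes
             autoprune = yes
     # replication to another box with syncoid, e.g. from cron:
     syncoid -r poolname/appdata root@backuphost:backuppool/appdata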
  17. From what I understand it has long been the case that some people report issues with docker on ZFS and some people have none. This might be due to ZFS only having problems with specific containers? I've had issues with dockers using "sendfile" syscall on ZFS previously: But it seems likely this is fixed now: https://github.com/openzfs/zfs/issues/11151 Could this have caused some of the other docker + ZFS issues seen in the past? I've had issues with docker + ZFS previously (both using docker.img and using a direct file path on ZFS). I've never used ZFS zvols. I don't have the bandwidth right now to try migrating this back to ZFS. I will try to revisit this when 6.10 is released.
  18. Typically this is ~/.vimrc (aka /root/.vimrc), but this path isn't located on persistent storage so it won't survive reboots. Run this once, but also add it to /boot/config/go:
     echo "set tabstop=4" >> ~/.vimrc
  19. zpool import is what you wanted here, not zpool create. I suggest before you do anything else, you zpool export the pool (or just disconnect the drive) to prevent any further writing, and consider your options (but I'm not sure if there are any)
  20. More deduped data also means more compute resources to compare each write to hashes of all the existing data (to see if it can be deduplicated).
     I don't know for sure, but I suspect not - in the same way that adding a normal vdev does not cause ZFS to redistribute your data or metadata.
     It should by design, but it also might just break (situation may have improved in the last 11 months?)
  21. To clarify earlier comments about ZFS memory usage - the ARC doesn't show how much memory ZFS needs to function; the ARC will dynamically consume memory that the system doesn't otherwise need. This is why you can't assume how much memory ZFS "needs" for dedupe/DDT based on ARC size before/after turning on dedupe. It is expected that the ARC would be the same size in both scenarios.
     To demonstrate that the ARC is dynamic and doesn't actually show how much memory ZFS "needs", you can artificially reduce the amount of memory available to ZFS by consuming memory in a RAM disk:
     mount -t tmpfs -o size=96G tmpfs /mnt/ram/
     As you copy files to the RAM disk, and as available memory approaches 0%, the ZFS ARC will release memory back to the system, dynamically reducing its size. ZFS will continue functioning without issue (but with less data cached) until the DDT starts getting pushed out of the ARC because it no longer fits. This can happen if you:
     - keep adding data (DDT size increases), and/or
     - reduce the amount of memory available to ZFS
     At this point, performance of the ZFS filesystem will reduce, likely by orders of magnitude.
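     If you want to watch this happening, the ARC's current and maximum size are exposed in /proc; the awk one-liner just pulls out those two counters, in bytes:
     awk '/^size|^c_max/ {print $1, $3}' /proc/spl/kstat/zfs/arcstats
     # or, if your ZFS build bundles it, a fuller report:
     arc_summary | head -n 40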
  22. Neither I nor the documentation I'm referencing is saying dedupe performance with a special vdev is bad (at least not compared to dedupe performance without a special vdev). Without the special vdev, your normal pool devices will be very busy writing your data, but also all the hashes/references for the deduplication table (DDT). For spinning rust disks, this is a lot of additional random I/O and hurts performance significantly. Unless designed very poorly, the special vdev will increase performance because it spreads the DDT writes to separate, fast storage devices. The DDT is not a cache, and neither is the special vdev.
     https://www.truenas.com/docs/references/zfsdeduplication/#deduplication-on-zfs
     The DDT needs to be stored within your pool (in a vdev or special vdev) and constantly updated for every block of data that you write to the pool. Every write involves more writes to the DDT (either new hashes or references to existing hashes). When ZFS writes new entries to the DDT (or needs to read them from the pool/special vdev), they're cached in memory (ARC) and will push out other information that would otherwise be stored in the ARC. If your DDT becomes large enough to exceed the amount of memory that ZFS is allocating for ARC, that's where you will run into significant performance issues. That's not counting the fact that there is other data that ZFS wants to keep in the ARC (regularly accessed data and ZFS metadata) for performance reasons. Keeping the hashes/references to what has already been written in memory is fundamental to how de-duplicating file systems work. However, those hashes/references are fundamental structures of the file system, which is why they can't only exist in memory and must also be written to the filesystem.
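     For reference, adding a special vdev looks something like this (device names are examples). It should be mirrored, because losing the special vdev means losing the pool:
     # mirrored special vdev holds metadata and the DDT on fast storage
     zpool add poolname special mirror /dev/sdx /dev/sdy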
  23. By default OpenZFS on Linux will "consume" up to half of system memory for the ARC, subject to other memory requirements of the system. This memory is allocated to the ARC regardless of whether dedupe is enabled or not, and the amount allocated can't be used to measure how much memory is being used by dedupe. The special vdev provides faster read/write access for permanent storage of the metadata and DDT, but the DDT still needs to fit in ARC memory to avoid significant performance issues.
     "The amount of memory required to keep DDT cached in ARC ..."
     https://www.truenas.com/docs/references/zfsdeduplication/
     zdb doesn't function by default in Unraid due to the lack of a persistent storage location for its database. You can enable the zdb database in memory using these commands:
     Further recommendations for estimating DDT size in a ZFS pool (and subsequently, the memory required for performant dedupe in ZFS):
     https://serverfault.com/questions/533877/how-large-is-my-zfs-dedupe-table-at-the-moment
     Hope this helps.
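     If it helps, a common way to get zdb working on Unraid and to dump the DDT statistics looks roughly like this (pool name is an example, and this is a guess at the approach rather than the exact commands referenced above):
     # give zdb a cache file to read from (nothing persistent exists by default on Unraid)
     mkdir -p /etc/zfs
     zpool set cachefile=/etc/zfs/zpool.cache poolname
     # print dedupe table statistics, including bytes per DDT entry in core and on disk
     zdb -DD poolname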
  24. I think he's talking about this suggestion: the Unassigned Devices plugin has a facility to mount remote shares in the GUI. These get mounted to /mnt/remotes. You could then assign these folders to dockers on the primary Unraid server. This is theoretically possible, but there will likely be some issues around timing for auto-mount, as the nested Unraid won't be available when the primary Unraid array starts. And I think if you fix these remote mounts after the docker has started, you will need to restart the docker in order for it to see the files? Not sure, but I recall something like this from my testing. Using multiple cache pools on a single Unraid server would be the simpler option for now. Or, live with a single array for now. This will be much easier to transfer to multiple arrays when this feature is added, as your data will just be on standard XFS-formatted disks, and you can (carefully) assign these to a different array later without needing to transfer data.
  25. I'm not trying to win an argument. If I post something that's wrong, I appreciate it when someone takes the time to correct me (and will edit the original post containing the incorrect information). The information you have posted isn't a difference of opinion, it's just incorrect as per openzfs documentation. If you don't want to learn and don't want to edit your posts then I guess we're done here. For someone who complains about ZFS misinformation on the internet, I find this bizarre. Have a great day? https://openzfs.readthedocs.io/en/latest/performance-tuning.html