jortan

Members

  • Posts: 221
  • Joined
  • Last visited
  • Days Won: 1

jortan last won the day on February 12

jortan had the most liked content!



jortan's Achievements

Explorer (4/14)

Reputation: 32

  1. This is normal, but ZFS will release this memory if it is needed by any other processes running on the system. You can test this: create a RAM disk of whatever size is appropriate and copy some files to it:

     mount -t tmpfs -o size=64G tmpfs /mnt/ram/

     Outside of edge cases where other processes benefit from large amounts of caching, it's generally best to leave ZFS to do its own memory management. If you want to set a 24GB ARC maximum, add this to /boot/config/go:

     echo 25769803776 >> /sys/module/zfs/parameters/zfs_arc_max

     Yes, but if you're optimising for performance on spinning rust, you should probably use mirrors.

     Optane covers a lot of products. As far as I'm aware, they all just show up as nvme devices and work fine for ZFS. Where they don't work outside of modern Intel systems is when you want to use them in conjunction with Intel's software for tiered storage. I use an Optane P4800X in an (old, unsupported) Intel system for ZFS SLOG/L2ARC on unRAID.
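     For clarity, that byte value is just 24 * 1024^3 = 25769803776. After a reboot you can check that the limit took effect (this assumes the standard OpenZFS arcstats file):

     # configured ARC ceiling (c_max, in bytes)
     grep c_max /proc/spl/kstat/zfs/arcstats
     # current ARC size, for comparison
     grep "^size" /proc/spl/kstat/zfs/arcstats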
  2. Nice job, looks great!

     Live read/write stats for pools? i.e. 'zpool iostat 3'

     Perhaps make the pool "green ball" turn another colour if a pool hasn't been scrubbed in >35 days (presumably it turns red if degraded?). Maybe a similar traffic-light indicator for datasets that don't have a recent snapshot? This might really help someone who has added a dataset but forgotten to configure snapshots for it.

     Maybe make the datasets clickable - like the devices in a normal array? You could then display various properties of the datasets (zfs get all pool/dataset - though maybe not all of these) as well as snapshots. Some of the more useful properties for a dataset: used, available, referenced, compression, compressratio - as well as snapshots.
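     For reference, a rough sketch of the commands such a view might wrap (pool/dataset names are placeholders):

     # live pool I/O, refreshed every 3 seconds
     zpool iostat -v 3
     # the handful of per-dataset properties suggested above
     zfs get used,available,referenced,compression,compressratio pool/dataset
     # snapshots for that dataset
     zfs list -t snapshot -r pool/dataset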
  3. Well that confirms it - ZFS lacks sendfile syscall support, at least on Unraid. This should be configurable in nginx, and it might be fairly simple to disable as presumably that file will be stored in appdata. Look for nginx.conf and just change "sendfile on;" to "sendfile off;"

     I had to do some scripting to ensure sendfile is disabled in my lancache docker, as the relevant configuration file was inside the docker image, not inside appdata, so my changes were overwritten with every docker update. As an alternative, the swag docker doesn't have this issue, though it doesn't have the nice front-end of nginx-proxy-manager.
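     For illustration, the directive normally sits in the http (or server) block of nginx.conf - a minimal excerpt, not the full file:

     http {
         # serve files with plain read()/write() instead of the sendfile() syscall
         sendfile off;
     }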
  4. I also had unusual problems with certain dockers when running Docker in directory mode on ZFS last time I tried it. Glad you got it working.
  5. The df output for ZFS pools is incorrect
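     If you want accurate numbers, the pool's own accounting is a better source (paths and pool names here are placeholders):

     # df can misreport size/used/available for ZFS mounts
     df -h /mnt/pool
     # zfs list reports ZFS's own accounting
     zfs list -o name,used,available,referenced,mountpoint pool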
  6. I have an unraid server for testing that uses a single 8GB thumb drive for the array. You don't need to assign a parity device.

     Keep in mind that by default the "system" share (libvirt image, docker image) is going to be placed on the array, as presumably you also won't have an unraid array cache device either. If you're going to use a thumb drive for your array + ZFS pool for storage, you will want to make sure all of these point to appropriate locations in your ZFS pool:

     Settings | Docker
     - Docker vDisk location
     - Default appdata storage location

     Settings | VM Manager
     - Libvirt storage location
     - Default VM storage path
     - Default ISO storage path

     Note that some dockers from Community Applications ignore the default appdata storage location and will still default to: /mnt/user/appdata/xxxx - make sure you check these and change to a path within your pool when adding any new docker applications.

     I'm no ZFS expert, but I'm not sure that this is a good idea. From what I understand, this setting could add to write amplification for asynchronous writes and cause other performance issues. For a dataset of bluray images, this makes sense. Not so much for dockers/VMs.

     The ZFS ARC will use up to half your system memory for caching by default (I think?) - but it is also very responsive to other memory demands from your system and will release any memory required by other processes. In most cases it's best to just let the ZFS ARC use whatever memory it can - unless you have large databases or other processes that could make better use of unallocated memory.

     You should still set up a user script to run "zpool trim poolname" manually in addition to this: https://askubuntu.com/questions/1200172/should-i-turn-on-zfs-trim-on-my-pools-or-should-i-trim-on-a-schedule-using-syste
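     A minimal sketch of such a User Scripts entry, scheduled weekly or monthly (replace "poolname" with your pool):

     #!/bin/bash
     # manually TRIM the pool; autotrim alone can leave some freed space untrimmed
     zpool trim poolname
     # optional: show trim progress/status afterwards
     zpool status -t poolname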
  7. Just helped someone diagnose something with very similar errors. It was caused by the NVMe controller being configured for passthrough in Tools | System Devices - even though this device had never been configured for passthrough. This happened because a single unrelated PCIe device was removed - that changed PCIe device assignments in a way that caused the wrong devices to be enabled for passthrough.
  8. Don't want to labour the point, but it matters if your use case isn't huge streaming writes, as in the case of chia plotting. For most people:

     Chia plotting = Abysmal
     ZFS pool for some dockers and VMs = Great

     The Firecuda 520 isn't in this graph, but most (consumer) nvme devices show a similar reduction in write performance after their fast cache fills up:
  9. Agreed, the Firecuda 520 is not optimal for chia plotting. You're hitting SLC cache limits, as well as potentially reduced write speed due to TLC / the drive filling up: https://linustechtips.com/topic/1234481-seagate-firecuda-520-low-write-speeds/?do=findComment&comment=13923773

     I'm doing some background serial plotting now, but just using an old 1TB SATA disk and -2 pointing to a 110GB ramdisk. It takes about 5 hours per plot, but Chia netspace growth has really levelled off now, so I'm less keen to burn through SSDs/nvmes: https://xchscan.com/charts/netspace

     I was more interested in the TBW rating. It doesn't compare to enterprise SSDs, but it's a good rating compared to other consumer nvme drives. I'm hoping these will run my dockers/VMs for 5 years or more.
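     A sketch of that kind of setup, in case it helps - the mount point and paths are illustrative, and -t (temp dir), -2 (second temp dir) and -d (destination) are the standard chia plots create flags:

     # 110GB ramdisk used as the second temporary directory
     mkdir -p /mnt/chiaram
     mount -t tmpfs -o size=110G tmpfs /mnt/chiaram
     # serial plotting: old SATA disk as the temp dir, ramdisk as -2, finished plots go to the destination
     chia plots create -k 32 -t /mnt/satadisk -2 /mnt/chiaram -d /mnt/plots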
  10. For what workload? Outside of "370GB of sustained writes filling up the pSLC cache" scenarios, they seem to perform well.

      I'm currently using 2 x Firecuda 520 nvmes in a RAIDZ1 pool (for possible raidz expansion later). No issues encountered, though mine are sitting behind a PCIe 3.0 switch.
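      Purely as an illustration of that layout (pool name, device names and ashift are placeholders, not from the post above):

      # two-device raidz1 pool; more devices could be added later once raidz expansion is available in your ZFS version
      zpool create -o ashift=12 nvpool raidz1 /dev/nvme0n1 /dev/nvme1n1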
  11. Any shares configured in /boot/config/smb-extra.conf will appear after restarting the samba service:

      /etc/rc.d/rc.samba restart
  12. If you want to make the ZFS datasets open to everyone, on the unraid server with your zfs pool:

      chown -R nobody:users /zfs/movies
      chown -R nobody:users /zfs/music
      chown -R nobody:users /zfs/tv

      That should probably fix it, but if not you could try this as well:

      chmod -R 777 /zfs/movies
      chmod -R 777 /zfs/music
      chmod -R 777 /zfs/tv
  13. I haven't watched space invader's video - how was the SMB share created? I shared a ZFS dataset by adding the share to /boot/config/smb-extra.conf:

      [sharename]
      path = /zfs/dataset
      comment = zfs dataset
      browseable = yes
      public = yes
      writeable = yes
      vfs objects =

      If I remember correctly, you can then restart samba with:

      /etc/rc.d/rc.samba restart
  14. They're saying, in the nicest possible way, that BTRFS is not stable in RAID5 mode:

      >> btrfs today is most reliable when configured in RAID 1 or RAID 10

      It seems like all these features will make it into unRAID eventually, and they are just polling in order to set their priorities.
  15. Vote for native ZFS support in unRAID here: