jonpetersathan

Members
  • Posts: 65
  • Followers: 2

jonpetersathan's Achievements

Rank: Rookie (2/14)
Reputation: 18

  1. @ph.neutral: Do you still have this issue? If yes, may I ask what version you are on?
  2. And btw, for everyone stumbling across this who might need the functionality already, here is how I solved it for now: disable the NFS export for all shares you want to be exclusive and create a new user script that runs on every array start and looks like this:

     KUBE_STORAGE="\"/mnt/storage/KubeStorage\" -fsid=103,async,no_subtree_check 192.168.100.200(sec=sys,rw,sync,no_subtree_check,no_root_squash)"
     EXPORTS="/etc/exports"
     grep -qF -- "$KUBE_STORAGE" "$EXPORTS" || echo "$KUBE_STORAGE" >> "$EXPORTS"
     exportfs -ra

     Of course, modify the path to point directly to the folder in your pool, and you might want to change or remove the NFS options (everything after and including 192.168... - in particular no_subtree_check and no_root_squash) unless you know what you are doing. Repeat the first and third lines (the variable definition and the grep/echo append) for each share you want to export.
  3. Hey everyone, I just updated from 6.12.0-rc6 to 6.12.3 and noticed that exclusive shares don't work together with NFS exports (as it's stated very clearly in the patch notes). However, the strange thing is that this worked perfectly in 6.12.0-rc6. Maybe this is because of some last-minute change that got introduced in 6.12, or I tinkered around manually with /etc/exports and just can't remember that I did so.... I understand that this doesn't work with the symlinks, as they are not resolved by the NFS server, so here is my proposal: when a share fulfills all conditions to be exclusive but is exported via NFS, instead of using FUSE and exporting the share via /mnt/user/myshare, what about using a symlink and exposing the share directly via e.g. /mnt/mypool/myshare? (See the exports sketch at the end of this post list.) This might be a little inconvenient when mounting the share, but nothing good documentation couldn't fix. Any thoughts?
  4. I have the same issue, but for me I believe it is my Plex container, which indeed uses the GPU. But this issue is new. Are any of you running rc6? And do you have the Nvidia driver plugin installed by any chance?
  5. Amazing, not 100% sure what did the trick, but I got it working using your guide! Thanks! Edit: I have a pool of 10x16TB raidz2 disks and a 2x500GB mirror as special (see the zpool sketch at the end of this post list). Maybe the trick was setting the filesystem to auto instead of forcing zfs: when re-importing the pool with all 12 disks I was only able to select 1 group of 12 devices, but now it automatically set 1 group of 10 devices, ignoring the 2 metadata disks. Maybe that was it, but not sure though, I am not touching it again 😅
  6. Nope, when importing a zfs pool after adding a Special Metadata Device, the GUI reports back "Unmountable: Unsupported or no file system". Everything else is the same, the ZFS pool has been created by unRAID; the only thing I did was add the Special device.
  7. Yes, I know, but when importing the pool manually, Disk Shares and, for example, disk usage in the dashboard, as well as automated scrubs, don't work, right? Or am I missing something?
  8. Hi everyone, I have been really excited for the zfs support for a long time now (and even more excited for 6.13), and have been using all previous release candidates on my test server. I have now deployed rc5 to my production server (while also recreating every zfs pool from scratch) and it is running buttery smooth. Amazing work, guys, really! Big thanks to all of you! There are a few minor bugs and suggestions from my journey I wanted to share, some specific to this release, some not. Do with it whatever you want 🙂

     Bug: Can't configure existing qcow2 vDisk via GUI when creating a new VM (not specific to this release). When trying to create a new VM with a vDisk pointing to an existing qcow2 image, the vDisk Type setting is removed from the GUI. After examining the XML I noticed that it is always set to raw, even if the vDisk path is pointing to a qcow2 image. Not sure if there is any logic that tries to detect the vDisk type and fails, or if this is just a UI bug; happy to provide more information if needed. (See the qemu-img/libvirt sketch at the end of this post list.)

     Bug: Manual vDisk location drop-down menu slowly moves to the right (haven't noticed this before, tbh). On the same note, when opening and closing the drop-down menu to select a path, the menu always reappears a few inches further to the right until it leaves the screen.

     Bug: Resizing dashboard leads to overlapping columns (specific to this release). When resizing the browser while the Dashboard tab is open (from three-column view to two-column view), the middle column isn't positioned/sized correctly and overlaps the first column.

     Suggestion: Rudimentary support for Special, ZIL/SLOG, L2ARC. I know that support for more zfs features is planned for 6.13+, but it would be really helpful if pools configured manually via console with a Special Metadata Device, a ZFS Intent Log, or an L2ARC could be imported and used in disk shares, show things like pool utilization, and trigger regular scrubs. I am using a Special Metadata Device for my main ZFS pool and tried to fix it myself, but I wasn't able to figure out where the zfs import command is actually called during array startup (tried searching in the php files and scripts folder). Any hint pointing me in the right direction would be greatly appreciated... Got it working using this guide:

     Suggestion: Display VLAN names when selecting Network Source during VM creation. When multiple VLANs are configured in network settings and multiple bridges are created, it would be helpful to have the VLAN name directly within the drop-down menu when selecting "Network Source". This is basically already done when selecting a custom network in docker (br0.40 -- MyVLAN).

     Suggestion: Separate button for pool configuration in Main tab. Maybe this is just me, but I have found myself struggling a couple of times to find the GUI setting for configuring pool properties (e.g., Compression, Encryption, ZFS Pool Layout). I think clicking on the first disk in the array is a bit unintuitive; maybe having a separate edit button (e.g. next to the disk spin up/down buttons) would be more straightforward. But again, just my opinion, maybe that's just me getting old...
  9. Migration went smoothly for me so far! I really like the new account page, and having a direct link to the OTP Auth URL, instead of having to decode the QR code manually, is really awesome, thanks! Just a minor remark (it's more of a convenience thing): when generating the OTP Auth URL (and the respective QR code), IMO the label shouldn't include the Amazon Cognito user id, but rather the account name. So instead of:

     otpauth://totp/AWSCognito:UUID?secret=BLABLABLA&issuer=Unraid

     the URL with the (URL-encoded) account name could look like this:

     otpauth://totp/jonpetersathan?secret=BLABLABLA&issuer=Unraid

     Or, if you want to include the issuer prefix in the label:

     otpauth://totp/Unraid:jonpetersathan?secret=BLABLABLA&issuer=Unraid

     I think this is a bit more convenient and user-friendly when importing into other apps/services. But that's just my humble opinion... (See the otpauth sketch at the end of this post list.)
  10. @Mstrlink1 Can you check your Push notification settings? Plex Server Settings / General / Push Notifications (Advanced View). This needs to be enabled in order for "library.new" events to be sent to all webhooks.
  11. Hi, I saw some guys requesting a username change in this section. Any chance I can get my username changed to jonpetersathan? I want to align it with my GitHub and Twitter usernames. Thanks!
  12. Really excited about the ZFS groundwork here and looking forward to the upcoming official ZFS support. I actually switched to TrueNAS Scale for my new server because of ZFS. And although I enjoy TrueNAS Scale quite a lot, it just can't compete with Unraid in the HomeLab space. So thanks for all the great work!
  13. @Hawkins12 Can you send me your logs and config? I mainly added documentation and automated the GitHub pipeline, so I have no idea what caused that.
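
Sketch for post 3 (exclusive shares + NFS): a minimal idea, under my own assumptions, of what exporting an exclusive share directly from the pool path instead of the FUSE path could look like in /etc/exports. The share name, pool name, fsid, subnet, and NFS options are placeholders, not what Unraid actually generates.

    # Hypothetical /etc/exports entry: export the exclusive share straight from
    # the pool path instead of /mnt/user/myshare. All values are placeholders.
    "/mnt/mypool/myshare" -fsid=101,async,no_subtree_check 192.168.100.0/24(sec=sys,rw,sync,no_subtree_check)

    # A client would then mount the pool path directly (hostname is an example):
    # mount -t nfs tower:/mnt/mypool/myshare /mnt/myshare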
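Sketch for post 5 (special metadata vdev): roughly how a pool like the one described, 10 disks in raidz2 plus a 2-disk mirror as a special vdev, would be laid out with plain zfs commands. The pool name and device names are made up; this is not how the Unraid GUI builds pools.

    # Hypothetical pool matching the description: 10x16TB raidz2 data vdev plus
    # a 2x500GB mirrored special (metadata) vdev. Names are placeholders.
    zpool create mypool \
      raidz2 /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf /dev/sdg /dev/sdh /dev/sdi /dev/sdj /dev/sdk \
      special mirror /dev/nvme0n1 /dev/nvme1n1

    # The special vdev shows up as its own group in the status output:
    zpool status mypool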
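Sketch for post 8 (the qcow2 vDisk bug): one way to check what format an existing vDisk actually is, and what the libvirt driver line would need to say for qcow2. Paths and the VM name are examples; whether Unraid's VM manager tries to detect the format at all is exactly the open question in the post.

    # Check the actual format of an existing vDisk image (path is an example):
    qemu-img info /mnt/user/domains/myvm/vdisk1.qcow2
    # "file format: qcow2" means the VM XML should carry
    #   <driver name='qemu' type='qcow2'/>
    # on that disk; with type='raw' the guest is handed the qcow2 container itself.

    # Inspect what the GUI actually generated (VM name is an example):
    virsh dumpxml myvm | grep -A3 "<disk"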
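Sketch for post 9 (OTP Auth URL label): how the suggested label could be built, with the account name URL-encoded before it goes into the otpauth:// URL. Account name, secret, and issuer are the example values from the post; this only illustrates the proposed format, not what the Unraid account page does.

    # Build the suggested otpauth URL with a URL-encoded "Issuer:account" label.
    ACCOUNT="jonpetersathan"   # example account name from the post
    SECRET="BLABLABLA"         # placeholder secret from the post
    ISSUER="Unraid"
    LABEL=$(python3 -c 'import sys,urllib.parse; print(urllib.parse.quote(sys.argv[1], safe=""))' "${ISSUER}:${ACCOUNT}")
    echo "otpauth://totp/${LABEL}?secret=${SECRET}&issuer=${ISSUER}"
    # -> otpauth://totp/Unraid%3Ajonpetersathan?secret=BLABLABLA&issuer=Unraid
    #    (the colon may also stay literal, as written in the post)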