sabertooth

Everything posted by sabertooth

  1. Afraid it indeed was a ZFS dataset, since all my shares were on ZFS in 6.11.5. Please remember that the share was initially created with Primary Storage set to Array, containing a link to a dataset with the same name as the share. After migration, I did change the Primary Storage to the zpool, which probably explains the yellow warning symbol next to it. I suggest you try out the steps listed above. (A rough way to confirm what is actually behind a share path is sketched below.)
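     For reference, a minimal sketch of how one might confirm what is actually behind a share path, i.e. a real ZFS dataset versus an array folder with a symlink inside it. Nothing here is an Unraid feature, only standard commands; the share name downloads and the link name data are borrowed from the other posts purely as placeholders:

         # Is the path backed by a dataset mountpoint?
         zfs list -o name,mountpoint | grep downloads
         # If an old-style share still exists on an array disk, it should contain the "data" symlink
         ls -la /mnt/disk1/downloads 2>/dev/null
         readlink /mnt/disk1/downloads/data 2>/dev/null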
  2. I am no longer seeing this with 6.12.4
  3. Please refer to the attached screenshot; it clearly shows the dummy_old share as non-exclusive and dummy as exclusive (on zdata). All steps are listed below:

         root@UnraidZFS:/mnt/user# mv dummy_old/* dummy/.
         root@UnraidZFS:/mnt/user# rm -rf dummy_old/
         root@UnraidZFS:/mnt/user# zfs list
         NAME          USED  AVAIL  REFER  MOUNTPOINT
         zdata        5.47T  12.6T   805G  /mnt/zdata
         zdata/dummy  1.04M  12.6T  1.04M  /mnt/zdata/dummy
  4. Afraid it actually did; it worked for all of the datasets listed below:

         /mnt/zdata/cache
         /mnt/zdata/downloads
         /mnt/zdata/isos
         /mnt/zdata/media

     I will try to replicate this with a dummy dataset.
  5. From the Unraid GUI. By move, I meant the contents, i.e. mv source_dataset/* to destination_dataset/. The dataset was renamed as well, as per the ZFS Master plugin and zfs list:

         zfs list
         NAME              USED  AVAIL  REFER  MOUNTPOINT
         zdata            5.47T  12.6T   805G  /mnt/zdata
         zdata/cache       232G  12.6T   231G  /mnt/zdata/cache
         zdata/downloads   153G  12.6T   153G  /mnt/zdata/downloads
         zdata/isos        469G  12.6T   469G  /mnt/zdata/isos
         zdata/media       128K  12.6T   128K  /mnt/zdata/media
         zdata/prashant   2.32T  12.6T  2.32T  /mnt/zdata/p-------

         zfs mount
         zdata            /mnt/zdata
         zdata/cache      /mnt/zdata/cache
         zdata/downloads  /mnt/zdata/downloads
         zdata/isos       /mnt/zdata/isos
         zdata/media      /mnt/zdata/media
         zdata/prashant   /mnt/zdata/p-------
  6. Hindsight is 20/20 😭, expensive mistake to say the least.
  7. So, let me try to explain the problem again.

     6.11.5: ZFS (RAIDZ1) with a zpool called zfs. User share, e.g. downloads, with a link (data) to a dataset called downloads in the zpool called zfs.

     Upgrade to 6.12.4.

     6.12.4:
     - Create a new pool, zdata, and add all HDDs.
     - Global Share Settings -> Permit exclusive shares.
     - Change primary storage for downloads to the zfs pool zdata.
     - Now, since exclusive access is shown as NO:
       - renamed the old share to downloads_old
       - created a new share called downloads with primary storage as zdata
       - moved data from downloads_old to downloads
       - removed the old share downloads_old
         ^^^^^^^^^^ This is when I lost data ^^^^^^^^^^^^^^

     NOTE: Before trying this on downloads, I had tried it with other shares. After this disaster, I left my last share as is to avoid more data loss. This is the share which you see with exclusive access marked as NO.

     I had TWO issues: a) loss of data, and b) how to enable exclusive access without going through the arduous process of creating a new share and copying close to 3 TB of data. (A sketch of the verification I should have done before removing the old share follows this post.)
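     Not advice from the Unraid docs, just a hedged sketch of the check that could be run before deleting the source; downloads_old and downloads match the step names above, and everything shown is standard coreutils/ZFS:

         # Verify the destination actually received the data before removing the source
         du -sh /mnt/user/downloads_old /mnt/user/downloads    # sizes should roughly match
         zfs list -o name,used zdata/downloads                 # the dataset's USED should have grown
         diff -qr /mnt/user/downloads_old /mnt/user/downloads  # spot-check the contents
         # only after the above agree: rm -rf /mnt/user/downloads_old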
  8. Apparently reading is hard. The share is downloads. I am still searching for an old copy.
  9. If that is the case, why are the contents mapped to the corresponding dataset on the ZFS pool? So, did the upgrade mess it up? i.e. after changing the primary storage (6.12.4) to the dataset on the zfs pool, the original share (6.11.5) was left on disk1. Having lost close to 800 GB of data already, I won't be trying to lose more.
  10. The share in question is NOT on disk1 but on the pool (zdata). As clearly mentioned in the first post, the share was created with a link to a ZFS dataset in 6.11.5. Please see the first post.
  11. That share is thankfully intact. This share exists only on the ZFS pool. How does one FIX this in the first place? Why close the BUG? Is there a way for me to recover the deleted data?
  12. I have just one pool; please find the diagnostics attached. unraidzfs-diagnostics-20230904-1514.zip
  13. Finally took the plunge and upgraded from 6.11.5 to 6.12.4.

     6.11.5: ZFS (RAIDZ1). User shares with the same names as the ZFS datasets, each with a link (inside the share) to the corresponding ZFS dataset.

     6.12.4:
     - Create a new pool, and add all HDDs.
     - Change primary storage for all shares to the zfs pool.
     - Global Share Settings -> Permit exclusive shares.

     So far so good; however, some shares (from within the zfs pool) show Exclusive access: No. After a reboot, things still remain the same.

     In order to enable an exclusive share:
     - Create a new dataset on the zfs pool.
     - Move data from the old share to the new share (mv source destination).
     - Delete the old share.
     - Rename the new share.

     I tested this with a smaller dataset/share first before moving on to dataset(s)/share(s) with a hefty amount of data. And this is when I lost data after moving. I should have paid attention to the ZFS Master plugin when it showed no increase in dataset size even though close to 800 GB of data had been moved. Any ideas as to what went wrong? For the first time, I am now worried that other datasets might show as empty. Also, how can one enable Exclusive access without having to go through this circus? (A rough check for why exclusive access stays at No is sketched below.)
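     As I understand it, exclusive access only engages when the share exists in a single storage location, so a hedged way to check for leftovers that block it (using downloads as the example share, standard commands only):

         # Any copy of the share left behind on the array disks?
         ls -ld /mnt/disk*/downloads 2>/dev/null
         # The dataset on the pool
         ls -ld /mnt/zdata/downloads
         zfs list zdata/downloads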
  14. After upgrading to 6.11.1, the ZFS dataset is no longer visible.
  15. Upgraded to 6.11.1; now I can't see any ZFS dataset. Great. EDIT-1: Went back to 6.11.0. EDIT-2: Upgraded again to 6.11.1. ZFS is fine; not sure what happened.
  16. I have uploaded the diagnostics; could you please suggest the exact changes I should make in smb-extra.conf? Also, I am seeing these errors again while accessing these shares from within Windows; the earlier problems were from macOS.
  17. /etc/samba/smb-shares.conf contains the following:

         vfs objects = catia fruit streams_xattr

     This is a released product; why should customers have to test this on production systems? Why can't you replicate this in-house? RC releases are a completely different story.
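     For context, a hedged sketch of how one might inspect the effective per-share Samba settings and experiment with the fruit options from /boot/config/smb-extra.conf. fruit:metadata and fruit:resource are documented vfs_fruit parameters, but whether they silence these particular AFP_AfpInfo messages is an assumption, not something confirmed here:

         # Inspect what Samba actually applies per share
         testparm -s /etc/samba/smb.conf 2>/dev/null | grep -B5 'vfs objects'

         # Experimental override via Unraid's include file (restart Samba afterwards,
         # e.g. from the GUI); whether this stops the log spam is an assumption
         printf '%s\n' 'fruit:metadata = stream' 'fruit:resource = stream' >> /boot/config/smb-extra.conf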
  18. This is applicable to all shares and not restricted to Time Machine.
  19. 1. All shares have a data folder which links to the ZFS volumes; this is not true for the timemachine share, though.
      2. /boot/extra: gcc
      3. The log file grew so fast that I had to revert back to 6.10.3.
  20. Reverted back to 6.10.3, and the above-mentioned errors are gone.
  21. The system log is flooded with the messages below (after the upgrade):

         Sep 26 08:47:39 UnraidZFS smbd[31175]: synthetic_pathref: opening [Someone’s MacBook Pro.sparsebundle/bands/2:AFP_AfpInfo] failed
         Sep 26 08:47:39 UnraidZFS smbd[31175]: [2022/09/26 08:47:39.199065, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)

     This is applicable to all shares:

         synthetic_pathref: opening [data/backup/Downloads/ntfs.sh:AFP_AfpInfo] failed

     unraidzfs-diagnostics-20220926-0853.zip
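     A quick, hedged way to gauge how fast these messages are piling up (nothing Unraid-specific, just standard grep against the usual syslog path):

         # How many have accumulated so far
         grep -c 'synthetic_pathref' /var/log/syslog
         # Watch them arrive live
         tail -f /var/log/syslog | grep --line-buffered 'synthetic_pathref'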
  22. Finally, I was able to replicate the issue with RC3 as well. Please find the diagnostics attached. unraidzfs-diagnostics-20220324-1932-zfs-suspended.zip