Everything posted by sabertooth

  1. Sorry @JorgeB about the delay in replying. I explored various options to restore the zpool. After a fresh install on the same USB drive followed by copying over the /config folder, the array would fail to start, including the zpool. Starting the array manually did bring up the main Unraid array, but not the zpool-based one, since it wasn't recognized by Unraid. I had to reconfigure the zpool manually by creating a new pool and adding the existing HDDs to it. This thankfully restored the ZFS pool. Regarding the auto-start issue, Settings -> Disk Settings -> Enable auto start had apparently been disabled after the first error message (relating to corruption of the USB drive). Once re-enabled, the array starts again after a reboot. Having said that, I believe there is indeed an issue with zpool config restoration after copying over the /config folder.
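     For anyone else hitting this, a minimal sketch of how one might check from the console whether the pool itself survived a lost /config. The pool name zdata is an assumption taken from my other posts, and Unraid normally handles the import itself, so treat this as a diagnostic check rather than a fix:

         # List pools that are visible on the disks but not currently imported
         zpool import

         # Import the pool by name if it shows up (zdata is assumed here)
         zpool import zdata

         # Verify pool health and the datasets it carries
         zpool status zdata
         zfs list -r zdata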
  2. No backup, but after copying over the /config folder, the array doesn't start, and neither does the ZFS pool. Will try again
  3. Afraid I saved it after the reboot. How do I correctly restore the installation along with the config?
  4. Recently (more than a week ago) upgraded to 6.12.13. After starting the system, I was greeted with the above message. I have tried rebooting multiple times, but the array is still offline. ZFS isn't up either. unraidzfs-diagnostics-20240912-1231.zip
  5. Afraid it indeed was a ZFS dataset, since all my shares were on ZFS in 6.11.5. Please remember that the share was initially created with Primary Storage set to Array, containing a link to a dataset with the same name as the share. After migration I did change the Primary Storage to the zpool, which probably explains the yellow warning symbol next to it. I suggest you try out the steps listed above.
  6. I am no longer seeing this with 6.12.4
  7. Please refer to the attached screenshot; it clearly shows the dummy_old share as non-exclusive and dummy as exclusive (on zdata). All steps are listed below:

     root@UnraidZFS:/mnt/user# mv dummy_old/* dummy/.
     root@UnraidZFS:/mnt/user# rm -rf dummy_old/
     root@UnraidZFS:/mnt/user# zfs list
     NAME          USED  AVAIL  REFER  MOUNTPOINT
     zdata        5.47T  12.6T   805G  /mnt/zdata
     zdata/dummy  1.04M  12.6T  1.04M  /mnt/zdata/dummy
  8. Afraid it actually did; it worked for all of the datasets listed below:

     /mnt/zdata/cache
     /mnt/zdata/downloads
     /mnt/zdata/isos
     /mnt/zdata/media

     I will try to replicate this with a dummy dataset.
  9. From the Unraid GUI. By move, I meant the contents, i.e. mv source_dataset/* destination_dataset/. The dataset was renamed as well, as per the ZFS Master plugin and zfs list.

     zfs list
     NAME              USED  AVAIL  REFER  MOUNTPOINT
     zdata            5.47T  12.6T   805G  /mnt/zdata
     zdata/cache       232G  12.6T   231G  /mnt/zdata/cache
     zdata/downloads   153G  12.6T   153G  /mnt/zdata/downloads
     zdata/isos        469G  12.6T   469G  /mnt/zdata/isos
     zdata/media       128K  12.6T   128K  /mnt/zdata/media
     zdata/prashant   2.32T  12.6T  2.32T  /mnt/zdata/p-------

     zfs mount
     zdata            /mnt/zdata
     zdata/cache      /mnt/zdata/cache
     zdata/downloads  /mnt/zdata/downloads
     zdata/isos       /mnt/zdata/isos
     zdata/media      /mnt/zdata/media
     zdata/prashant   /mnt/zdata/p-------
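     As an aside, a minimal sketch of the distinction being discussed here, using the dummy names from the post above; it is an illustration only, not the procedure Unraid expects:

         # Moving contents: files cross from one dataset's mountpoint into another's
         mv /mnt/zdata/dummy_old/* /mnt/zdata/dummy/

         # Renaming the dataset itself keeps the data in place and only changes its name/mountpoint
         # (only possible when the target name does not already exist)
         zfs rename zdata/dummy_old zdata/dummy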
  10. Hindsight is 20/20 😭, expensive mistake to say the least.
  11. So, let me try to explain the problem again.

      6.11.5: ZFS (RAIDZ1) with a zpool called zfs. User share, e.g. downloads, with a link (inside the share) to a dataset called downloads in the zpool called zfs.

      Upgrade to 6.12.4.

      6.12.4:
      - Create a new pool, zdata, and add all HDDs.
      - Global Share Settings -> Permit exclusive shares.
      - Change primary storage for downloads to the ZFS pool zdata.
      - Since exclusive access is still shown as NO: rename the old share to downloads_old, create a new share called downloads with primary storage zdata, move the data from downloads_old to downloads, then remove the old share downloads_old.
        ^^^^^^^^^^ This is when I lost data ^^^^^^^^^^^^^^

      NOTE: Before trying this on downloads, I had tried it with other shares. After this disaster, I left my last share as-is to avoid more data loss. This is the share you see with exclusive access marked as NO.

      I had TWO issues: a) loss of data, and b) how to enable exclusive access without going through the arduous process of creating a new share and copying close to 3 TB of data.
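      In hindsight, a minimal sanity check one could run before removing the old share; the paths follow the downloads example above and are otherwise assumptions:

          # Compare apparent sizes of source and destination after the move
          du -sh /mnt/user/downloads_old /mnt/user/downloads

          # Confirm the destination dataset's usage actually grew
          zfs list -o name,used,refer,mountpoint zdata/downloads

          # Only then remove the old share's contents (deliberately left commented out)
          # rm -rf /mnt/user/downloads_old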
  12. Apparently reading is hard. It was downloads. I am still searching for an old copy.
  13. If that is the case, why are the contents mapped to the corresponding dataset on the ZFS pool? So did the upgrade mess it up? I.e., after changing the primary storage (6.12.4) to the dataset on the ZFS pool, the original share (6.11.5) was left on disk1. Having lost close to 800 GB of data, I won't be trying anything that could lose more.
  14. The share in question is NOT on disk1 but on the pool (zdata). As clearly mentioned in the first post, the share was created with a link to the ZFS dataset in 6.11.5. Please see the first post.
  15. That share is thankfully intact. It exists only on the ZFS pool. How does one FIX this in the first place? Why close the BUG? Is there a way for me to recover the deleted data?
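      On the recovery question, one hedged avenue worth checking is whether any ZFS snapshots of the affected dataset exist, since deleted files can be pulled back from a snapshot; the dataset name below is an assumption and nothing here guarantees snapshots were ever taken:

          # List any snapshots across the pool
          zfs list -t snapshot -r zdata

          # If a snapshot exists, its contents are browsable read-only here
          ls /mnt/zdata/downloads/.zfs/snapshot/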
  16. I have just one pool; please find the diagnostics attached. unraidzfs-diagnostics-20230904-1514.zip
  17. Finally took the plunge and upgraded from 6.11.5 to 6.12.4.

      6.11.5: ZFS (RAIDZ1). User shares with the same names as the ZFS datasets, each with a link (inside the share) to its dataset.

      6.12.4:
      - Create a new pool and add all HDDs.
      - Change primary storage for all shares to the ZFS pool.
      - Global Share Settings -> Permit exclusive shares.

      So far so good; however, some shares (from within the ZFS pool) show Exclusive access: No, and after a reboot things remain the same. In order to enable an exclusive share:
      - Create a new dataset on the ZFS pool.
      - Move data from the old share to the new share (mv source destination).
      - Delete the old share.
      - Rename the new share.

      I tested this with a smaller dataset/share first before moving on to dataset(s)/share(s) with a hefty amount of data. And this is when I lost data after moving. I should have paid attention to the ZFS Master plugin, which was showing no increase in dataset size even when close to 800 GB of data was moved. Any ideas as to what went wrong? For the first time, I am now worried that other datasets might show up empty. Also, how can one enable Exclusive access without having to go through this circus?
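      For what it is worth, a minimal sketch of how one might check where a share's files physically live before moving anything; downloads is used as the example name and the disk/pool paths are assumptions based on this setup:

          # A user share is the union of same-named top-level folders across array disks and pools,
          # so check every backing location rather than assuming it lives only on the pool:
          ls -la /mnt/disk1/downloads 2>/dev/null     # copy left behind on the array disk, if any
          ls -la /mnt/zdata/downloads 2>/dev/null     # copy inside the ZFS pool dataset
          zfs list -o name,used,mountpoint zdata/downloads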
  18. After upgrading to 6.11.1, the ZFS dataset is no longer visible.
  19. Upgraded to 6.11.1, and now I can't see any ZFS dataset. Great. EDIT-1: Went back to 6.11.0. EDIT-2: Upgraded again to 6.11.1. ZFS is fine; not sure what happened.
  20. I have uploaded the diagnostics; could you please suggest the exact changes I should make in smb-extra.conf? Also, I am seeing these errors again while accessing these shares from Windows; the earlier problems were from macOS.
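      For reference, this is roughly the shape an override in smb-extra.conf takes; the share name and options below are assumptions for illustration (vfs_fruit is the usual macOS-compatibility module), not a verified fix for these particular errors:

          [downloads]
              # enable macOS-friendly handling of metadata and alternate streams
              vfs objects = catia fruit streams_xattr
              fruit:metadata = stream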