Report Comments posted by sabertooth
-
I am no longer seeing this with 6.12.4
-
Please refer to the attached screenshot; it clearly shows the dummy_old share as non-exclusive and dummy as exclusive (on zdata).
All steps are listed below:
root@UnraidZFS:/mnt/user# mv dummy_old/* dummy/.
root@UnraidZFS:/mnt/user# rm -rf dummy_old/
root@UnraidZFS:/mnt/user# zfs list
NAME          USED  AVAIL  REFER  MOUNTPOINT
zdata        5.47T  12.6T   805G  /mnt/zdata
zdata/dummy  1.04M  12.6T  1.04M  /mnt/zdata/dummy
-
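Since `rm -rf` on a dataset's mountpoint takes its contents with it, a pre-check before deleting can help. A minimal sketch (the helper name and the `df -T` approach are mine, not from this thread): it reports whether a path is currently backed by a mounted ZFS filesystem.

```shell
# is_zfs_dataset PATH
# Returns 0 (true) if PATH sits on a mounted ZFS filesystem, 1 otherwise.
# Sketch only: on Linux, `df -T` prints the filesystem type in column 2.
is_zfs_dataset() {
  [ "$(df -T "$1" 2>/dev/null | awk 'NR==2 {print $2}')" = "zfs" ]
}

# Hypothetical usage: refuse to rm if the directory is a live dataset.
# is_zfs_dataset /mnt/zdata/dummy_old && echo "this is a dataset - use zfs destroy, not rm"
```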
11 hours ago, JorgeB said:
This command would not remove a dataset, something missing from your description.
I'm afraid it actually did; it worked for all of the datasets listed below:
- /mnt/zdata/cache
- /mnt/zdata/downloads
- /mnt/zdata/isos
- /mnt/zdata/media
I will try to replicate this with a dummy dataset.
-
39 minutes ago, JorgeB said:
Now, since the exclusive access is shown as NO
- renamed old share to downloads_old. - Using the GUI?: Yes
- created new share called downloads with primary storage as zdata. - I assume with the GUI?: Yes
- moved data from downloads_old to downloads. - Complete command used?: From the command line: mv downloads_old/* downloads/.
- removed old share downloads_old. - Complete command used?: From the command line: rm -rf downloads_old/
-
3 minutes ago, JorgeB said:
How did you rename the share? If it was a zfs dataset mv wouldn't work, at most it would have renamed the mount point, leaving the dataset still there.
From the Unraid GUI. By move, I meant the contents, i.e. mv source_dataset/* destination_dataset/. The dataset was renamed as well, as per the ZFS Master plugin and zfs list.
zfs list
NAME             USED  AVAIL  REFER  MOUNTPOINT
zdata           5.47T  12.6T   805G  /mnt/zdata
zdata/cache      232G  12.6T   231G  /mnt/zdata/cache
zdata/downloads  153G  12.6T   153G  /mnt/zdata/downloads
zdata/isos       469G  12.6T   469G  /mnt/zdata/isos
zdata/media      128K  12.6T   128K  /mnt/zdata/media
zdata/prashant  2.32T  12.6T  2.32T  /mnt/zdata/p-------
zfs mount
zdata            /mnt/zdata
zdata/cache      /mnt/zdata/cache
zdata/downloads  /mnt/zdata/downloads
zdata/isos       /mnt/zdata/isos
zdata/media      /mnt/zdata/media
zdata/prashant   /mnt/zdata/p-------
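For what it's worth, the rename JorgeB alludes to can be done at the dataset level rather than with `mv`. A hedged sketch using the pool/dataset names from this thread; the block is dry-run by default (it only echoes the command), so remove the `echo` to actually execute on the server:

```shell
# Dry-run wrapper: prints each command instead of executing it.
# Drop the `echo` to run for real.
run() { echo "+ $*"; }

# Rename the dataset itself; the mountpoint follows automatically.
# Unlike `mv old/* new/.`, this keeps a single dataset and moves no file data.
run zfs rename zdata/downloads_old zdata/downloads
```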
-
8 minutes ago, JorgeB said:
You should have saved the diags at that time and asked for help, most likely the share existed somewhere else besides that pool, it would have been an easy fix, and no need to rename the share.
Hindsight is 20/20 😭, expensive mistake to say the least.
-
So, let me try to explain the problem again.
6.11.5:
- ZFS (RAIDZ1) with a zpool called zfs.
- User share, e.g. downloads, with a link, data, to a dataset called downloads in the zpool called zfs.
Upgrade to 6.12.4.
6.12.4:
- Create a new pool, zdata, and add all HDDs.
- Global Share Settings -> Permit exclusive shares.
- Change primary storage for downloads to the ZFS pool zdata.
Now, since the exclusive access is shown as NO:
- renamed old share to downloads_old.
- created new share called downloads with primary storage as zdata.
- moved data from downloads_old to downloads.
- removed old share downloads_old.
- ^^^^^^^^^^ This is when I lost data ^^^^^^^^^^^^^^
NOTE: Before trying this on downloads, I had tried it with other shares.
After this disaster, I left my last share as is to avoid more data loss. This is the share which you see as exclusive access marked as NO.
I had TWO issues:
a) loss of data, and
b) how to enable exclusive access without going through the arduous process of creating a new share and copying close to 3 TB of data.
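One way to make a migration like the one above less destructive is to snapshot the source dataset before any `mv`/`rm`, so the data stays recoverable until the snapshot is explicitly destroyed. A hedged sketch using this thread's names; dry-run by default (commands are echoed, not executed), so remove the `echo` to run for real:

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# 1. Take a safety snapshot before touching downloads_old.
run zfs snapshot zdata/downloads_old@pre-migrate

# 2. Do the migration (mv, rm, etc.). If something goes wrong, roll back:
run zfs rollback zdata/downloads_old@pre-migrate

# 3. Only after the new share is verified, drop the snapshot.
run zfs destroy zdata/downloads_old@pre-migrate
```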
-
Just now, Kilrah said:
OP said this isn't the share that was involved
Apparently reading is hard.
1 minute ago, Kilrah said:
But which ones was it then?
downloads. I am still searching for an old copy.
-
13 minutes ago, JorgeB said:
That's not what this shows:
If that is the case why are contents mapped to the corresponding dataset on ZFS pool?
So, did the upgrade mess it up? i.e. after changing the primary storage (6.12.4) to a dataset on the ZFS pool, the original share (6.11.5) was left on disk1.
13 minutes ago, JorgeB said:
I cannot replicate based on that.
I have lost close to 800 GB of data; I won't be trying to lose more.
-
2 minutes ago, JorgeB said:
You need to move or delete that share from disk1, you can do that manually or by using the mover with the correct settings, see the GUI help.
The share in question is NOT on disk1 but on the pool (zdata). As clearly mentioned in the first post, the share was created with a link to a ZFS dataset in 6.11.5.
2 minutes ago, JorgeB said:
Can you explain step by step what you did to result in lost data to see if it can be reproduced? It may not be a bug but user error.
Please see the first post.
-
Changed Status to Open
Changed Priority to Urgent
-
22 minutes ago, JorgeB said:
Correct, you must fix that or exclusive mode won't work.
That share is thankfully intact; it exists only on the ZFS pool.
How does one FIX this in the first place?
Why close the BUG? Is there a way for me to recover the deleted data?
-
29 minutes ago, Kilrah said:
One of your shares is set to cache only but has data on disk1
Was that one involved?
Exclusive access requires primary storage only and the share must not exist on any other disk/pool
No, this share has 2TB+ of data and came from ZFS on 6.11.5.
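A quick way to check the "must not exist on any other disk/pool" condition Kilrah quotes: list every top-level location under /mnt that contains the share. A minimal sketch; the helper name and the /mnt/<disk-or-pool>/<share> layout assumption are mine, not from the thread:

```shell
# share_locations SHARE [ROOT]
# Prints each <ROOT>/<disk-or-pool>/<SHARE> directory that exists
# (ROOT defaults to /mnt). Exclusive access requires exactly one hit:
# the primary pool, and nowhere else.
share_locations() {
  ls -d "${2:-/mnt}"/*/"$1" 2>/dev/null
}

# Hypothetical usage: share_locations downloads
# A second hit such as /mnt/disk1/downloads would block exclusive access.
```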
-
I have just one pool; please find the diagnostics attached.
-
Upgraded to 6.11.1, now I can't see any ZFS dataset. Great.
EDIT-1: Went back to 6.11.0.
EDIT-2: Upgraded again to 6.11.1. ZFS is fine; not sure what happened.
-
-
21 hours ago, dlandon said:
Correct, the smb-fruit.conf file contains all the fruit settings.
My wording may not have been the best. In this situation, there is a log message not seen in internal testing or RC releases, and we are trying to understand where it is coming from.
The smb-fruit.conf file contains the most common settings used for fruit settings. The uncommented settings are the default settings. The commented settings are optional. The idea is to not have to change the smb-extras.conf file to make any adjustments you might need for your particular system. Changes in the extras file are global and the settings in the fruit file are applied per share.
I have uploaded the diagnostics; could you please suggest the exact changes I should make in smb-extra.conf?
Also, I am seeing these errors again while accessing these shares from Windows; the earlier problems were from macOS.
-
6 hours ago, dlandon said:
Yes, try to avoid that for fruit settings.
The idea is to make a copy on the flash device. When Unraid sees the smb-fruit.conf on the flash, those settings are applied for fruit settings. If the flash copy is not available, Unraid uses the default settings at /etc/samba/smb-fruit.conf. The idea is that the flash settings get applied on each reboot and you customize for your use case.
Settings in that file are commented out. You uncomment the settings you think will apply to your situation and restart samba so you can do some testing. We want people to test and give feedback so we can create a set of generic settings as a default.
cat /etc/samba/smb-shares.conf.
No. Make your changes and restart samba. They will be applied.
Remove any settings related to fruit. Other settings in smb-extras.conf can stay.
This scheme ensures that the fruit settings are applied correctly per share. There is a special case with FAT and exFAT file systems: special settings are applied for these devices because those file systems do not support extended attributes and writes can fail. You can't apply fruit settings globally for all shares.
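The flash-copy workflow described in the quotes above can be sketched roughly as follows. The flash path /boot/config/smb-fruit.conf and the Slackware-style samba restart command are my assumptions (the post only says "on the flash"); the block is dry-run by default and only prints the commands, so remove the `echo` to run for real:

```shell
# Dry-run wrapper: prints each command instead of executing it.
run() { echo "+ $*"; }

# 1. Copy the default fruit settings to the flash device (path assumed).
run cp /etc/samba/smb-fruit.conf /boot/config/smb-fruit.conf

# 2. Edit the flash copy, uncommenting the settings you want to try.

# 3. Restart samba so the per-share fruit settings are re-applied
#    (restart command assumed from Unraid's Slackware base).
run /etc/rc.d/rc.samba restart
```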
/etc/samba/smb-shares.conf contains the following:
vfs objects = catia fruit streams_xattr
Quote: Settings in that file are commented out. You uncomment the settings you think will apply to your situation and restart samba so you can do some testing. We want people to test and give feedback so we can create a set of generic settings as a default.
This is a released product; why should customers test this on production systems?
Why can't you replicate this in-house? RC releases are a completely different story.
-
7 hours ago, limetech said:
Are all these reports only applicable to Time Machine backup operations?
This is applicable to all shares and not restricted to Time Machine.
-
1. All shares have a data folder which links to ZFS volumes, though this is not true for the timemachine share.
2. /boot/extra: gcc
3. The log file grew so fast that I had to revert to 6.10.3.
-
Reverted to 6.10.3, and the above-mentioned errors are gone.
-
On 3/12/2022 at 6:08 PM, AndrewZ said:
Remove the symlink linking the user shares back to your ZFS pool and directly reference the zfs pool instead
Typing the whole path (i.e. the symbolic link) into the VM Add GUI works as expected, i.e. I am able to install Windows using the iso file through the symbolic link.
-
6 minutes ago, JorgeB said:
Please post the diagnostics.
unraidzfs-diagnostics-20220311-2018.zip
Here you go.
-
-
3 minutes ago, ich777 said:
Try to run fix permissions from the GUI for your isos path.
Tried that a couple of times; it didn't help though.
[6.12.4] [ZFS] Lost data while moving across shares/datasets.
in Stable Releases
Posted
I'm afraid it indeed was a ZFS dataset, since all my shares were on ZFS in 6.11.5. Please remember that the share was initially created with Primary Storage set to Array, containing a link to a dataset with the same name as the share. After migration, I did change the Primary Storage to the zpool, which probably explains the yellow warning symbol next to it. I suggest you try out the steps listed above.