Everything posted by Oldbean57

  1. Thank you for the suggestion @dlandon. I've tested here, and setting "logging = 0" in smb-extra.conf worked to disable the logging (I used the SMB Extras GUI in SMB Settings). "log level = 0" did not work, as you suspected. This is a good workaround while the Samba logging changes are understood. Thanks very much.
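     In case anyone wants to replicate this, here is roughly what my SMB Extras section now contains (I believe the GUI writes this to /boot/config/smb-extra.conf; adjust if your extras section already has a [global] header):

         [global]
         # Disable Samba logging entirely to stop the syslog flooding.
         # Note: "log level = 0" alone did not stop these messages for me.
         logging = 0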
  2. Hello, I'd like to report the same synthetic_pathref issues. These are logged upon any SMB file access - even listing a directory - although there are no issues with the actual access from a client. For example, Time Machine backups are working OK; it's just that the syslog fills with entries such as these until I run out of log space:

         Oct 11 06:23:30 NAS smbd[3868]: [2022/10/11 06:23:30.418333, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 11 06:23:30 NAS smbd[3868]:   synthetic_pathref: opening [RH MacBook Pro 13.sparsebundle/bands/1961:AFP_AfpInfo] failed
         Oct 11 06:23:30 NAS smbd[3868]: [2022/10/11 06:23:30.421560, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)
         Oct 11 06:23:30 NAS smbd[3868]:   synthetic_pathref: opening [RH MacBook Pro 13.sparsebundle/bands/18a1:AFP_AfpInfo] failed
         Oct 11 06:23:30 NAS smbd[30538]: [2022/10/11 06:23:30.422283, 0] ../../source3/smbd/files.c:1193(synthetic_pathref)

     I'd be happy to share a full diagnostics output with Limetech if it will help; I'd prefer not to post my syslog here because these errors expose full pathnames, including the names of family members. Edited to add that these entries are logged upon access from both Windows 11 and MacOS 12.6 clients - it's not MacOS-specific. Thanks.
  3. Hi, I'm very pleased to hear this is helping some others - I wish it would work for all! I tried earlier with 6.11.0-rc3, as it includes a newer version of Samba (4.16.4), and I was curious whether its bug fixes would address the issue we've seen since 6.10. I'm afraid this did not help - my incremental Time Machine backups still did not work without the "fruit:metadata = stream" parameter.

     I've not had need to delve into this part of Samba previously, so I'm still learning, but from what I have read so far, setting "fruit:metadata = stream" just tells the Samba fruit VFS module to pass metadata handling on to the next VFS module, which I expect for most of us using "Enhanced macOS interoperability" means the streams_xattr module (e.g. we have "vfs objects = catia fruit streams_xattr" in each of our share definitions). Without this "fruit:metadata = stream" parameter taking precedence later via smb-extra.conf, the default Samba setting of "fruit:metadata = netatalk" is applied via smb-names.conf, meaning metadata is written in a way compatible with Netatalk (AFP).

     I wondered if this was perhaps something to do with the underlying file systems we are using to store backups (I use encrypted btrfs), but looking here, btrfs has a lesser capacity than XFS for storing extended attributes, so you'd think that streaming the attributes to the filesystem would cause issues rather than resolve them. Unless the issue is with the Netatalk-compatible metadata written during previous backups, and by switching to a different method we're working around that?

     I've worked with computers long enough to suspect it's not a coincidence that this issue started with 6.10, which included the version of Samba released to patch the vulnerability in the Netatalk-compatible resource and metadata implementation in vfs_fruit... That implementation received some changes, and then we find that not using part of it helps with an issue that appeared after it was tweaked? It may still be interesting to compare the file systems used by working and non-working setups. Bit of a brain dump there in case it helps anyone else with their reasoning!

     @Jclendineng - just saw your post come in: I've never used Time Machine backup encryption, so I have had this issue without that enabled. (I do use Unraid file-system encryption instead.)
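     To make the layering concrete, here's a rough sketch of the relevant fragments as I understand them (the share name below is just an example, not the actual generated content):

         # smb-names.conf (generated by Unraid) - the default applied first:
         fruit:metadata = netatalk

         # smb-shares.conf (generated) - each exported share definition includes:
         [TimeMachine]
             vfs objects = catia fruit streams_xattr

         # smb-extra.conf (our SMB Extras) - included later, so it takes precedence:
         [Global]
             fruit:metadata = stream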
  4. Glad to hear it worked for you @54lzy, but sorry to hear it didn't work for you @saber1. Just to check, because I noticed I said "I wonder if you could test just adding this one line to your SMB Extras section": did you actually add the [Global] too to your SMB Extras section? E.g.:

         [Global]
         fruit:metadata = stream
  5. Thanks for sharing this UnKwicks, it prompted me to do some comparisons. I had a look at these compared to my Unraid 6.10.3 system with Enhanced macOS interoperability set to ON and with Time Machine shares exported. I found:

     vfs objects = catia fruit streams_xattr - already in smb.conf via smb-shares.conf (under each share definition)
     fruit:nfs_aces = no - already set in smb.conf via smb-names.conf
     fruit:zero_file_id = yes - not set anywhere that I can find, but the Samba default is yes
     fruit:metadata = stream - fruit:metadata is already set in smb.conf via smb-names.conf, but to netatalk (which is the Samba default), not stream
     fruit:encoding = native - already set in smb.conf via smb-names.conf
     spotlight backend = tracker - not set anywhere that I can find

     So aside from the Spotlight setting, the only difference was that I did not have fruit:metadata = stream. I added this to my SMB Extras section...

         [Global]
         fruit:metadata = stream

     ... and I'm very happy to say that my Time Machine backups have resumed working (same MacOS devices, no other changes on their side). The fix is repeatable (remove the section and they break; put it back in and they work again). This is with leaving the system-generated smb-names.conf file as-is, so it's working despite the fruit:metadata = netatalk setting in that file - presumably because smb-extra.conf is included later in smb.conf and so takes precedence. I wonder if you could test just adding this one line to your SMB Extras section please @saber1?
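     If anyone wants to confirm which value actually wins after all the includes, testparm (which ships with Samba) prints the merged configuration; something along these lines should do it:

         # Dump the effective smb.conf after includes and check the fruit options;
         # with the SMB Extras override in place this should report "stream".
         testparm -s 2>/dev/null | grep -i 'fruit:metadata'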
  6. Thank you wgstarks - I did mean to add that I do intend to use that Docker in the meantime so I can get some backups running, so thanks for highlighting that option. I just wanted to make sure the Unraid team had visibility that it's affecting a number of people!
  7. I would like to add, please, that I too am experiencing this exact issue since moving to 6.10, with three Macs running MacOS 12.x. New Time Machine backups work OK, but subsequent runs fail to mount. It was fine on earlier versions of Unraid. I've seen the suggestions about SMB extra config, but I can see in smb-shares.conf that these are already added by Unraid when exporting the Time Machine shares: vfs objects = catia fruit streams_xattr

     Edit to add a copy of the error from the logs:

         (TimeMachine) [com.apple.TimeMachine:DiskImages] Failed to attach using DiskImages2 to url '/Volumes/.timemachine/NAS._smb._tcp.local/BBEA0555-A339-4C12-9707-3C07F3F8735B/_TimeMachine_MacBookAir/MacBook Air.sparsebundle', error: Error Domain=NSPOSIXErrorDomain Code=19 "Operation not supported by device" UserInfo={DIErrorVerboseInfo=Failed to initialize IO manager: Failed opening folder for entries reading}
  8. Thank you for your feedback @ChatNoir, and @bonienl for the settings tip - that does help. Interestingly, I've found this setting is a little inconsistent in where it applies. For example, it has worked for device names in the Main tab, but not the Dashboard tab. It works in the Shares tab, but not the Shares Settings page.

     EDIT: I just upgraded to 6.10.0-rc4 (I need to test ipvlan vs macvlan for some network-related crashes) and have found that the lower-case "raw" behaviour is now consistent across all tabs, thank you!

     Thanks, Rich.
  9. Hi, this is just a small cosmetic request, please: I use lower-case names for my Pools in Unraid 6.9.2, as I like the consistency with other names such as disk1 and disk2, and the ease of typing when using the terminal. But since they are acronyms such as SSD and TM (Time Machine), they look odd when shown in the UI as Ssd, Tm, etc.* I wondered if it would be possible for you to stop auto-capitalising the first letter of Pool names in the UI, so they show exactly as named? Like I said, just a small thing, but I thought I would ask! Many thanks, Rich. (*I know they are acronyms and should be upper case, but this is just my preference for making the terminal approach the consistent one.)
  10. Hi, it's funny you say that: before it started working (spoiler alert), the FS type was listed as btrfs in the FS column in UD. It's a SATA M.2 SSD. After I did the manual decrypt with cryptsetup in the Terminal, I thought I would test a mount via the UD GUI, which unfortunately still didn't work (same error). However, I noticed that the FS type changed to crypto_LUKS. I wanted to send you a screenshot of the before and after, so I rebooted to reset the encryption, but post-reboot the FS type was still crypto_LUKS, not back to btrfs. After entering the array passphrase I then tried a UD mount and it worked! Automount post-reboot also works as expected, so I'm delighted about that. I don't know whether it was the manual one-off cryptsetup or the reboot that resolved it, but all is now well. It might be hard to get to the bottom of it, but if there is anything further you'd be interested to see, please do let me know. Thanks for your help - small Paypal retirement contribution incoming. Rich.
  11. After seeing the output of the diagnostics, I decided to look at the successful mounting of the HDD from the array (/dev/sdd) to see why /dev/sdj would be any different, but I'm not sure the details of a successful mount are recorded as part of the "unassigned.devices" logging. I was able to see the failed mount, for example:

          Mar 16 16:57:31 NAS unassigned.devices: Adding disk '/dev/sdj1'...
          Mar 16 16:57:31 NAS unassigned.devices: Mount drive command: /sbin/mount -t btrfs -o auto,async,noatime,nodiratime '/dev/sdj1' '/mnt/disks/WDC_WDS100T2B0B-00YS70_180481420126'
          Mar 16 16:57:31 NAS unassigned.devices: Mount of '/dev/sdj1' failed. Error message: mount: /mnt/disks/WDC_WDS100T2B0B-00YS70_180481420126: wrong fs type, bad option, bad superblock on /dev/sdj1, missing codepage or helper program, or other error.
          Mar 16 16:57:31 NAS unassigned.devices: Partition 'WDC_WDS100T2B0B-00YS70_180481420126' could not be mounted...

      Looking in the logs at how the encrypted disks in the array are unlocked using "cryptsetup" and then mounted, I replicated this manually via the Terminal for /dev/sdj1 and it mounted successfully, so I have a manual workaround for now from which I can run VMs:

          root@NAS:/mnt# cryptsetup luksOpen /dev/sdj1 sdj1
          Enter passphrase for /dev/sdj1:
          root@NAS:/mnt# mount -t btrfs -o auto,async,noatime,nodiratime '/dev/mapper/sdj1' '/mnt/disks/test'
          root@NAS:/mnt# cd /mnt/disks/test
          root@NAS:/mnt/disks/test# ls
          root@NAS:/mnt/disks/test# ls -la
          total 16
          drwxrwxrwx 1 nobody users  0 Mar 16 16:50 ./
          drwxrwxrwx 3 nobody users 60 Mar 16 20:27 ../
          root@NAS:/mnt/disks/test#

      However, it would be great to be able to use Unassigned Devices to have this happen automatically using the same passphrase as entered to start the array after each boot; then everything will be mounted and ready for the VM services to start. I think the key is to work out why /dev/sdd mounts using the array passphrase but /dev/sdj will not. Thanks for your help, Rich.
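      In case it helps anyone else in the meantime, the manual workaround above could be wrapped in a small script - a sketch only, assuming the device, mapper name and mount point from my session above, and assuming Unraid keeps the array keyfile at /root/keyfile while the array is started (if it doesn't, cryptsetup just prompts for the passphrase):

          #!/bin/bash
          DEV=/dev/sdj1        # encrypted partition that UD fails to mount
          NAME=sdj1            # device-mapper name to open it as
          MNT=/mnt/disks/test  # mount point used in my test above

          # Unlock with the array keyfile if present, otherwise prompt.
          if [ -f /root/keyfile ]; then
              cryptsetup luksOpen "$DEV" "$NAME" --key-file /root/keyfile
          else
              cryptsetup luksOpen "$DEV" "$NAME"
          fi

          # Mount with the same options Unassigned Devices uses.
          mkdir -p "$MNT"
          mount -t btrfs -o auto,async,noatime,nodiratime "/dev/mapper/$NAME" "$MNT"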
  12. Hi, thanks for getting back to me already. Diagnostics file attached. Thanks, Rich. nas-diagnostics-20180316-1903.zip
  13. Hello, new unRAID user here. I've been having a lot of fun setting up my new server, having come from a Synology background. I'm pretty much there, but I do have one question about mounting an encrypted unassigned device that I'd appreciate some help with, please.

      My array consists of 6 x HDDs and 2 x SSDs for cache, and I have just added a final single M.2 SSD which I intend to use as a standalone unassigned device for virtual disk images. The main reason for it being standalone is that SSDs are not yet supported as part of the array, and it's much higher capacity than the 2 x SSDs working as cache, so it wouldn't be fully utilised. I would very much prefer to have the M.2 SSD encrypted in the same way as the other disks in the system (all BTRFS encrypted). I read this statement in the plugin info:

          An encrypted array disk can be mounted in UD with the following restrictions:
          - The array disk passphrase has to be defined. You cannot enter the passphrase for the disk in UD.
          - An array encrypted disk cannot be created with UD.
          - The disk can only be mounted if the current array passphrase is the same as the UD encrypted disk.
          Note: There has to be at least one encrypted disk in the array.

      ... and thought I'd been pretty clever in adding the M.2 SSD temporarily to the array so it was formatted as "BTRFS encrypted" using the same array passphrase, then removing it (shrinking the array) so I could mount it via UD as per the above. When I try to mount it, however, I see this error message in the logs:

          Mar 16 17:23:51 NAS unassigned.devices: Mount of '/dev/sdj1' failed. Error message: mount: /mnt/disks/WDC_WDS100T2B0B-00YS70_180481420126: wrong fs type, bad option, bad superblock on /dev/sdj1, missing codepage or helper program, or other error.

      I thought perhaps there was an issue with using BTRFS encrypted, but if I remove another HDD of the same format from the array, it mounts without error in UD, so UD is using the array passphrase OK. The M.2 SSD will also add back into the array without needing to be reformatted, suggesting the file system is OK. Any suggestions appreciated, thank you. Rich.
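      PS: in case it's useful for diagnosis, these are the generic Linux checks (nothing UD-specific) I can run to see how the kernel identifies the partition, /dev/sdj1 being the device from my logs above:

          # Show the filesystem/container type detected on the partition.
          # A LUKS-formatted partition should report TYPE="crypto_LUKS";
          # a plain btrfs signature would suggest it isn't being seen
          # as a LUKS container at all.
          blkid /dev/sdj1

          # Ask cryptsetup directly whether it's a LUKS container:
          cryptsetup isLuks /dev/sdj1 && echo LUKS || echo "not LUKS"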