wow001
Members
Posts: 44
Joined
Last visited
-
I have changed from user shares to disk shares, as accessing the Unraid user shares from a Mac is too slow once you have 60 or more files in a folder, no matter the Samba settings used. I was successfully using your Syncthing docker with user shares, with /media/name of the user share as the path. Now that I have created the disk shares, will I be able to use the same path, /media/name of the folder in the disk share? I noticed that the top-level folders in the disk shares are added to the user share list. So can you use the same file path?
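If it helps, I believe the only thing that should need to change is the host side of the container's path mapping; the container-side path stays the same. A sketch, with a hypothetical share named "sync" living on disk 1:

```
Container path: /media/sync       (unchanged)
Host path:      /mnt/disk1/sync   (was /mnt/user/sync)
```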
-
wow001 started following ZFS scrub on nvme 2 disk mirror
-
I am currently using Unraid 7.02 beta. When I set up a ZFS mirror pool using 2 x NVMe drives, the scrub is turned off by default. The data is kept in the pool, not transferred to the array. I use each ZFS mirror pool as a disk share to speed up reads and writes. Should I turn on a monthly scrub?
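For what it's worth, if you do enable it, a scheduled scrub is just a zpool command run on a timer (via cron or the User Scripts plugin). A minimal sketch of a cron entry, assuming the pool is named "cache" (the pool name here is hypothetical):

```
# run a scrub at 03:00 on the first day of each month
0 3 1 * * /usr/sbin/zpool scrub cache
```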
-
Thanks wgstarks. I created boot/config/smb-fruit.conf on my flash drive, added your suggested settings, and restarted the server, and there was a big improvement in copying files to folders with a large number of files in them.

From my tests you do not need the:

#spotlight settings
spotlight = yes

in boot/config/smb-fruit.conf, or

#spotlight settings
[global]
spotlight backend = tracker
#end spotlight settings

in your SMB extra configuration. It seemed to make the search slower, and in folders that have 700 files or more the search timed out before finding the file.

So the settings I am using at the moment are (in boot/config/smb-fruit.conf):

# global parameters are defined in /etc/samba/smb.conf
# current per-share Unraid OS defaults
vfs objects = catia fruit streams_xattr
#fruit:resource = file
fruit:metadata = stream
#fruit:locking = none
#fruit:encoding = private
fruit:encoding = native
#fruit:veto_appledouble = yes
fruit:posix_rename = yes
#readdir_attr:aapl_rsize = yes
#readdir_attr:aapl_finder_info = yes
#readdir_attr:aapl_max_access = yes
#fruit:wipe_intentionally_left_blank_rfork = no
#fruit:delete_empty_adfiles = no
#fruit:zero_file_id = no
# these are added automatically if TimeMachine enabled for a share:
#fruit:time machine
#fruit:time machine max size = SIZE

Although this has massively improved copying to the share via SMB and previewing PDFs with Quick Look, as soon as you exceed approx. 150 files in a folder you start getting slower copy times and Quick Look previews start taking time to load. So it seems the speed of SMB is linked directly to the quantity of files in a folder.
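For anyone following along, the file can be created from the console with a heredoc. A sketch with just the uncommented settings from my list, using /tmp as a stand-in path (on Unraid the real target is /boot/config/smb-fruit.conf):

```shell
# write the minimal working fruit settings to a config file;
# /tmp stands in here for /boot/config on the flash drive
cat > /tmp/smb-fruit.conf <<'EOF'
vfs objects = catia fruit streams_xattr
fruit:metadata = stream
fruit:encoding = native
fruit:posix_rename = yes
EOF
```

Restart the array (or the server) afterwards so Samba picks the file up.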
-
I removed all my "Samba extra configuration", as I was going backwards when I added the extras I used before 6.11.

My main issue was speed. When connected to a shared drive it would take 1 minute to move 2 x PDFs of 47 KB from one folder to another folder in the same share; copying new files (10 x small PDFs) to the share from my Mac, I could go make a coffee and still see the files copying. Using Quick Look on the Mac to view the PDFs in the share would take 10-30 seconds to preview. It was becoming unusable. Searches would take a minimum of 1 minute and sometimes way longer, but would work.

The above shares are on an NVMe cache formatted XFS and an SSD cache made of 2 x SSDs formatted as a BTRFS mirror, and contained between 200-300 PDFs. I had problems with folders that contained more than 600 files in earlier versions of Unraid (6.9 and 6.10), but now it seems to be anything with more than 60 files in a folder. Reducing the number of files down to 30-40 in a folder, I was able to move files between folders very quickly, copy files to the share without the need for a coffee break, and the search was almost instant.

So the question is: why should the number of files in a folder have such an impact on the performance of the SMB share? The SMB settings are set to default.
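As a rough way to separate enumeration cost from raw disk speed, you can generate two local folders with different file counts and time a listing of each; over SMB the gap between the two grows much faster than it does locally. The paths and counts below are arbitrary test values:

```shell
# create a small and a large test folder (paths/counts are arbitrary)
mkdir -p /tmp/smbtest/small /tmp/smbtest/large
for i in $(seq 1 40);  do : > "/tmp/smbtest/small/file$i.pdf"; done
for i in $(seq 1 300); do : > "/tmp/smbtest/large/file$i.pdf"; done

# compare how long it takes to enumerate each folder
time ls -l /tmp/smbtest/small > /dev/null
time ls -l /tmp/smbtest/large > /dev/null
```

Repeating the listing against the same folders mounted over SMB from the Mac shows whether the slowdown tracks the per-file metadata round-trips rather than the storage itself.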
-
If I start the array with the missing disk I get the message "All existing data on this device will be OVERWRITTEN when array is Started", as per the attached image. So the steps are: 1. Add the missing disk back to its location (disk 3) and start the array. 2. The data on disk 3 is overwritten/wiped. 3. Unraid rebuilds the data on disk 3 from the two parity disks?
-
What is the correct way to deal with a disk removed from the array after a reboot? As you can see from the attached images showing the array list before and after the reboot, disk 3 was removed and all the drive letters have changed, although the remaining disks seem to be in the same order. How do I get the disk back into the array without losing the data on the disk that was removed from the array (disk 3)?

-----------------

I restarted the server because I removed a plug-in ("CA Dynamix Unlimited Width"), and I removed a disk that failed preclearing.
-
So far, I have removed all the extra settings; the settings are as displayed in the attached image. For the first time I am able to copy files from the Mac (12.6) to Unraid 6.11.0-rc5 without Finder on the Mac crashing, and the copy completes (1.2 GB takes 20 seconds on a 1 Gb network to an NVMe cache pool). Searching in a share is also working without any SMB extras; it takes 1-2 minutes before any results show. (The share is on an NVMe cache pool.)
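As a quick sanity check on that number, 1.2 GB in 20 seconds works out to about 60 MB/s, comfortably within what a 1 Gb link can carry (roughly 117 MB/s after protocol overhead), so the network was not the limiting factor:

```shell
# 1.2 GB copied in 20 s, expressed in whole MB/s
echo $(( 1200 / 20 ))  # prints 60
```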
-
Thank you for the info, that helps a lot.
-
wow001 started following MacOS Optimization
-
When you add extra settings under SETTINGS/SMB/SMB Extras, are these saved to /etc/samba/smb.conf or /boot/config/smb-extra.conf? What settings should be used as a starting point? Is the list below a good starting point?

#unassigned_devices_start
#Unassigned devices share includes
include = /tmp/unassigned.devices/smb-settings.conf
#unassigned_devices_end
[global]
vfs objects = catia fruit streams_xattr
fruit:nfs_aces = no
fruit:zero_file_id = yes
fruit:metadata = stream
fruit:encoding = native
[share_name]
path = /mnt/user/"share_name"
spotlight = on

------------------------------

*Please note: do not use spaces in the share name or mover will not move the files from the array back to the cache pool.