Everything posted by Tomr

  1. Could you update jdupes to a newer version? The current version is bugged, see: for the 1.15 release. I would personally like/need at least 1.18.1 for the newer/older filter.
  2. I wonder why it didn't recognize the correct signature then.
  3. The thing is, I didn't tell it to format anything. I just added a new slot, started the array, and it went straight to clearing it. Now that I think about it, it's probably because my array is encrypted.
  4. Everything went fine; I received an email saying so, and I also checked the logs. Does that mean preclear doesn't do anything when adding a new drive? That's strange. I thought Unraid would pick up that the disk is precleared and go straight to syncing parity. I didn't format the disk as trurl wrote, just precleared it.
  5. In 6.9, I did a preclear on the disk using Joe L.'s script, skipping pre-read and selecting fast post-read. It finished and I got an email notification, so I added the disk to the array (new slot), and Unraid is clearing it again ("Clearing in progress"). Is that normal? Or did it happen because I didn't click the red "X" button after the preclear? I thought pre-clearing's purpose was to avoid having to clear the disk when adding it to the array.
  6. I was told it's unintended behavior and to post here. Not sure if it's a bug or a missing feature, but mover will not move a hardlink if its other name is on another share; instead it copies the whole file. It only works if both links are on the same share. I suspect it's because mover works on a per-share basis; if so, I ask whether something can be done about that. Steps to reproduce: create two shares, A and B, both with cache set to Yes on the same pool, then:
     touch /mnt/user/A/test.txt
     cd /mnt/user/A
     ln test.txt /mnt/user/B/test.txt
     ls -i | grep test.txt
     1435
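     The repro above can be checked anywhere; this sketch uses /tmp stand-ins for the /mnt/user shares so it doesn't require an Unraid box:

     ```shell
     # Stand-in directories for shares A and B (on Unraid these would be
     # /mnt/user/A and /mnt/user/B backed by the same cache pool)
     mkdir -p /tmp/A /tmp/B
     touch /tmp/A/test.txt
     ln -f /tmp/A/test.txt /tmp/B/test.txt
     # Both names report the same inode number, i.e. they are one file;
     # a mover that copies instead of linking would break this
     ls -i /tmp/A/test.txt /tmp/B/test.txt
     ```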
  7. Referring to this suggestion to invoke mover before changing cache settings, I ask for a "Move Now" equivalent button in the share settings, so one could move just that share's cached files instead of running mover on the whole array.
  8. They don't state that the files will be orphaned if the settings are changed, or the whole thing with hardlinks. And I was assured it would work as I expected because of this:
  9. I understand that it might be designed that way, but that doesn't mean it's correct. I only discovered this because I tried to write my own mover scheduler; otherwise I would have found out the hard way, with no space left in the pools. At the very least this should be explained better in the tooltips.
  10. 1) Shares with cache set to "No" won't have their files moved between disks. If you have files on the cache and later change the setting to this value, mover will never move them to the array. One would think mover would take care of it: I set the share to not use cache, so files shouldn't remain there. The same goes for "Only", but from the array to the pool. 2) If you change the share's pool while it is set to "Prefer" or "Only", mover will not move the files between the pools, nor will it move them to the array. Same as 1: I set the share to use that cache, so mover should reorganize the files to match these settings.
  11. More reading about the differences between the schedulers (I won't copy-paste the internet here): benchmarks on NVMe SSD - benchmarks on HDD - YMMV; you can only be sure what is best for you if you run the benchmarks yourself. How to change your schedulers and auto-apply them on every reboot:
      nano /etc/udev/rules.d/60-ioschedulers.rules
      Paste the code:
      # set scheduler for NVMe
      ACTION=="a
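      The quoted rule file is cut off mid-line above. For reference, a complete 60-ioschedulers.rules in the commonly circulated form looks like this (the scheduler assignments are illustrative defaults, not a recommendation; benchmark before adopting any of them):

      ```
      # set scheduler for NVMe
      ACTION=="add|change", KERNEL=="nvme[0-9]*", ATTR{queue/scheduler}="none"
      # set scheduler for SSD and eMMC
      ACTION=="add|change", KERNEL=="sd[a-z]|mmcblk[0-9]*", ATTR{queue/rotational}=="0", ATTR{queue/scheduler}="mq-deadline"
      # set scheduler for rotating disks
      ACTION=="add|change", KERNEL=="sd[a-z]", ATTR{queue/rotational}=="1", ATTR{queue/scheduler}="bfq"
      ```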
  12. I'm evaluating Unraid for my new NAS and modified your script a bit. With it, one can create periodic snapshots with various retention policies (e.g. keep hourly backups for a day, daily backups for a week, etc.); you just need to schedule additional copies of the script. I don't think you can pass arguments to a scheduled script, so I removed them. (Optional) Add this to your Settings->SMB->SMB Extras; it enables file versioning for SMB clients:
      vfs objects = shadow_copy2
      shadow:sort = desc
      shadow:format = _UTC_%Y.%m.%d-%H.%M.%S
      shadow:localtime = no
      shadow:snapprefix = ^\(monthly
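      A minimal sketch of the retention idea, assuming snapshots are plain directories whose modification time reflects their creation time (the helper name and the paths in the usage comment are hypothetical, not from the script above):

      ```shell
      # list_stale_snapshots DIR DAYS: print snapshot directories directly
      # under DIR whose mtime is older than DAYS, so a caller can prune them.
      list_stale_snapshots() {
        find "$1" -mindepth 1 -maxdepth 1 -type d -mtime "+$2"
      }

      # Hypothetical usage for a 7-day retention window:
      # list_stale_snapshots /mnt/cache/.snapshots 7 | while read -r snap; do
      #   btrfs subvolume delete "$snap"
      # done
      ```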