domidomi

Members
  • Posts: 18
Everything posted by domidomi

  1. Thanks, I'm now using the "test" image tag. Not sure how to verify that it works to be honest!
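      In case anyone else is in the same spot, this is roughly how I'd check it; I'm assuming the container is named "sabnzbd", so adjust the name to your own setup:
         # Confirm which image/tag the container is actually running
         docker inspect --format '{{.Config.Image}}' sabnzbd
         # Check the bundled par2 binary; the version banner should mention
         # par2cmdline-turbo if the fork really is included in the image
         docker exec sabnzbd par2 -V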
  2. Is there any chance we can get par2cmdline-turbo into the image? It would greatly improve repair performance. https://sabnzbd.org/wiki/installation/par2cmdline-turbo
  3. I just found /root/keyfile with -rw-r--r-- permissions. Is this normal? Is this file part of the UnRAID system somehow? Should the permissions really be what they are? Seems risky to have a keyfile of some sort which is readable by anyone.
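      For what it's worth, this is how I'd check and tighten it, with the caveat that I don't know whether UnRAID itself expects (or resets) these permissions:
         # Show current ownership and permissions of the keyfile
         ls -l /root/keyfile
         # Restrict it to root only; unclear to me whether UnRAID recreates it as 644 on the next array start
         chmod 600 /root/keyfile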
  4. Much appreciated, thanks! Could someone please guide me to the "latest and greatest" regarding macOS + UnRAID SMB configuration? I have a hard time figuring out what the current consensus is.
  5. After many attempts at (re-)configuring SMB for macOS since 6.10 was released, I'm now lost in all my SMB settings. In /boot/config/ I now have two files: smb-extra.conf and smb-fruit.conf. Is it safe to just delete these files and then "samba restart"? Will that reset any SMB customizations I have made?
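      To make the question concrete, this is roughly what I have in mind (backing up first, since I'm not sure which of these files UnRAID regenerates on its own):
         # Keep a copy of the current SMB customizations
         mkdir -p /boot/config/smb-backup
         cp /boot/config/smb-extra.conf /boot/config/smb-fruit.conf /boot/config/smb-backup/
         # Remove them and restart Samba
         rm /boot/config/smb-extra.conf /boot/config/smb-fruit.conf
         samba restart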
  6. I can do nothing but 100% agree with this. In this world of ubiquitous free/open source software, it's easy to forget that we are actually paying customers for this product. As paying customers, it only makes sense to make certain demands and set some expectations on what we're purchasing. Make no mistake, I like UnRAID a lot, but this whole thing with letting the customers figure out solutions to bugs in the product they paid for... It's just not right. A NAS software product which doesn't support Time Machine backups out of the box? Again, it's just not right.
  7. What is port 9897 used for? I've looked everywhere, but I can't figure out what it's supposed to be for. It's not a default port in Sonarr.
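      For my own digging I was going to check it along these lines (container name assumed to be "sonarr"):
         # List the port mappings the container was started with
         docker port sonarr
         # See whether anything on the host is actually listening on 9897
         netstat -tlnp | grep 9897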
  8. Yes, many Docker containers are indeed accessing files on the array. I'm currently scheduling the Mover to run at a time when the services are unlikely to be significantly used, but I'm still searching for a solution which is less "greedy" for disk bandwidth, if at all possible.
  9. I have a 1 TB cache drive which tends to fill up often. While the Mover is running, all my Docker services become unusably slow. Is this common for everyone else? Is there any way to customize the Mover so that it doesn't choke all other services? I don't imagine running the Mover without affecting the rest of the system at all. I just want it to be less greedy about disk bandwidth so that other services are at least usable. Any ideas? Many thanks!
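      One idea I've been meaning to try is starting the mover manually at a lower I/O priority, roughly like this; I'm assuming the mover script lives at /usr/local/sbin/mover, and I haven't verified that this plays nicely with the built-in schedule:
         # Run the mover as an idle-priority task so normal reads get served first
         # (effectiveness depends on the I/O scheduler honoring priorities)
         ionice -c 3 nice -n 19 /usr/local/sbin/mover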
  10. I'm sorry to say the obvious here, but I don't think you've done enough internal/RC testing to release this as "stable". Users who had SMB configured to work correctly in 6.10.3 shouldn't expect their syslog to fill up with errors to the point where they have to either downgrade or have a lengthy "whack-a-mole" back-and-forth with you on the UnRAID forums for STABLE software. This is a very serious issue, and you SHOULD be able to easily replicate it using the same SMB settings on your own servers and a single macOS installation. You have the entire logs and you have the SMB settings. Don't expect your paying customers to do the debugging and testing of your STABLE software. We are paying for the benefit of NOT having to do that. I'm very disappointed by this experience. Why were we recommended to upgrade, exactly? Wasn't this supposed to be a non-problematic upgrade, better, more secure? What was the rush to upgrade? What benefit was it to me? Reminder that this is NAS software, not a general-purpose Linux OS, and you're telling me one of the most basic features of NAS software, namely backups over SMB, was just one of those weird edge cases that didn't pop up during your deep and extensive internal and RC testing? Do better.
  11. The reason that doesn't work is that you're essentially trying to evaluate a shell script twice:
      1. The content of ~/.dircolors is a shell script.
      2. dircolors -b ~/.dircolors will evaluate that script and print the result, which is empty.
      3. eval "$(dircolors -b ...)" will then evaluate the empty string.
      At least that's how I think it is. Just copy /etc/DIR_COLORS to ~/.dircolors and work with that instead. It's much easier to read and maintain, and it prevents misunderstandings.
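      A minimal sketch of what I mean (the exact paths are up to you):
         # Start from the system-wide database, which is plain key/value text, not shell code
         cp /etc/DIR_COLORS ~/.dircolors
         # Edit ~/.dircolors to taste, then have the shell turn it into LS_COLORS at login
         # by adding this line to ~/.bashrc:
         eval "$(dircolors -b ~/.dircolors)"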
  12. I did some investigation into xfs_fsr in order to determine whether defragging XFS filesystems actually makes any significant difference. Below are my findings. Note that I am no expert on any of this, so take it all with a grain of salt.
      My idea was to take the most fragmented file I could find on a drive and see how long it would take to read the entire file. Then I would defragment the file, perform the same read test again, and compare the difference in speed. I chose disk1 for this, but I guess it shouldn't matter much.
      It turns out that, by far, the most fragmented file is the Docker image file, with a whopping 15,571 extents. My Docker image file is 20 GB in size. xfs_db gave me the inode number, and then I just used "find" to figure out which file it corresponded to.
      Before defragmenting the file, reading the entire file took 98.6 seconds. After defragmenting it, the read took 82.96 seconds, an improvement of about 15%. I performed the same experiment on the second-most fragmented file, on a different drive, which spanned 97 extents. The speed improvement there was around 10%.
      I don't know enough about HDDs to say whether this could simply be a result of the drive head having been moved to a better position after the defragmentation. It doesn't seem unlikely to me. Some people in this forum have reported that defragmenting their drives improved performance tremendously, but based on what I've found here, I wouldn't put too much trust in those reports. Placebo is a hell of a drug!
      root@tower:~# xfs_db -r /dev/mapper/md1 -c "frag -v" | sort -k4n | tail -n 2 | head -n 1
      inode 2147483777 actual 15571 ideal 1
      root@tower:~# find /mnt/disk1/ -inum 2147483777 -printf "%p (%s bytes)\n"
      /mnt/disk1/system/docker/docker.img (21474836480 bytes)
      root@tower:~# dd if=/mnt/disk1/system/docker/docker.img of=/dev/null status=progress
      21196114432 bytes (21 GB, 20 GiB) copied, 98 s, 216 MB/s
      41943040+0 records in
      41943040+0 records out
      21474836480 bytes (21 GB, 20 GiB) copied, 98.6201 s, 218 MB/s
      root@tower:~# xfs_fsr -d -v /mnt/disk1/system/docker/docker.img
      /mnt/disk1/system/docker/docker.img
      /mnt/disk1/system/docker/docker.img extents=15571 can_save=15570 tmp=/mnt/disk1/system/docker/.fsr1141
      DEBUG: fsize=21474836480 blsz_dio=16773120 d_min=512 d_max=2147483136 pgsz=4096
      Temporary file has 3 extents (15571 in original)
      extents before:15571 after:3 /mnt/disk1/system/docker/docker.img
      root@tower:~# dd if=/mnt/disk1/system/docker/docker.img of=/dev/null status=progress
      21222499840 bytes (21 GB, 20 GiB) copied, 82 s, 259 MB/s
      41943040+0 records in
      41943040+0 records out
      21474836480 bytes (21 GB, 20 GiB) copied, 82.9575 s, 259 MB/s
  13. Been a while since you asked, but "before:4709 after:1" means that before defragmenting, the inode spans 4709 different extents, and afterwards it occupies just one extent. Think of an extent as a contiguous range of blocks on your disk, basically. So with before:4709, your OS has to read from 4709 different areas of the disk to get all the pieces of that inode.
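      If you want to see this for one of your own files, xfs_bmap will print the extent map (the path here is just an example):
         # Each output line is one extent, i.e. one contiguous block range on disk
         xfs_bmap -v /mnt/disk1/path/to/some/large/file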
  14. @mgutt Thanks for the help, much appreciated! For anyone reading this post in the future, I'll add that I balanced the disks so that they all have roughly 20% free space (1.5 TB free of 8 TB total), and it hasn't significantly changed the speed of the defragmentation in general. I will also add that I've tested the speed of all the disks, and they all start at 250+ MB/s and end at around 130 MB/s, so that's not the issue either. I'll just chalk this up to the parity drive and disk encryption. I still don't quite understand how the slowest of the disk extents can reach only around 13-15 MB/s; that doesn't really make much sense to me personally. Some extents do perform better, around 70-90 MB/s.
  15. If you mean copying it from disk 1 to disk 1, then it's literally instant. I tried it with a 4.1 GB file. I have noticed that xfs_fsr runs faster on disk 5, which has about 3.5 TB of free space; there I get around 40 MB/s write speed. Edit: A few minutes of observation shows that occasionally, on disk 5, it reaches 90 MB/s, then it jumps back down to 35-45 MB/s. It seems like it might depend on which file it's currently working with?
  16. They're five drives of the same model, Seagate IronWolf 8TB. This particular drive that is currently being defragged has 300+ GB free space.
  17. So I'm using this script for the first time, and I don't know much about how xfs_fsr is supposed to work, but it seems to be really slow? It could be that I'm running it with parity enabled, but the write speed on the target HDD is around 13 MB/s while CPU is not even reaching 50%. Nothing else is accessing the shares, no Docker containers are running, no VMs installed at all. I guess I just want to know: is this normal? Edit: Could it be that I'm using this on encrypted XFS? Obviously I have modified the script to use /dev/mapper.
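      In case encryption overhead turns out to be the suspect for anyone else, one quick sanity check is the raw cipher throughput of the CPU (this says nothing about the disks themselves):
         # Benchmark the kernel crypto ciphers; the aes-xts line is the relevant one for LUKS defaults
         cryptsetup benchmark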
  18. A little while back, I upgraded from the trial of UnRAID to first the Basic version and then the Plus version. However, the old trial version still shows as a signed out server. They're the exact same server. At some point I did change the USB stick that I kept the installation on, but I can't say exactly in which order I switched the USB stick and purchased a license. How can I delete the old installation so that I never have to see it ever again? It's the same install.