mgutt Posted October 20, 2020

The default xfs_fsr command defragments all XFS disks. For SSDs this is not only useless, it also wears them out. This script:

- does not defrag SSDs
- does not defrag NVMe drives
- defrags for a specific time in seconds (default is 7200 seconds = 2 hours)
- defrags only if parity is not spinning (defrag_only_sleep=1)

```shell
#!/bin/bash
# #####################################
# Script:      XFS Extended Defragmentation v0.2
# Description: Defragment only HDDs (SSDs will be ignored) if requirements
#              are met, for a selectable running time.
# Author:      Marc Gutt
#
# Changelog:
# 0.2
# - SSD recognition added
# 0.1
# - first release
#
# ######### Settings ##################
defrag_seconds=7200
defrag_only_sleep=1
notification=1
# #####################################
#
# ######### Script ####################

# make the script race-condition safe
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi
trap 'rmdir "/tmp/${0///}"' EXIT

# defrag only if parity is not spinning
if [[ $defrag_only_sleep == "1" ]]; then
    # obtain the parity disk's device id
    parity_id=$(mdcmd status | grep rdevName.0 | cut -d = -f 2)
    echo "Parity has device id $parity_id"
    # we defrag only if parity is not spinning
    parity_state=$(smartctl -n standby "/dev/$parity_id")
    if [[ $parity_state != *"STANDBY"* ]]; then
        echo "Defragmentation has been skipped because of active parity disk"
        exit
    fi
fi

echo "Search for HDDs..."
# parse /etc/mtab and check the rotational status of each array disk
declare -a mtab
while IFS= read -r -d '' line; do
    disk_id_regex="^/dev/md([0-9]+)"
    if [[ "$line" =~ $disk_id_regex ]]; then
        disk_id=${BASH_REMATCH[1]}
        rotational=$(cat /sys/block/md${disk_id}/queue/rotational)
        if [[ "$rotational" == "1" ]]; then
            mtab+=("$line")
            echo "Found HDD with id md${disk_id} (added)"
            continue
        fi
        echo "Found SSD with id md${disk_id} (skipped)"
    fi
done < <(grep -E '^/dev/md' /etc/mtab | tr '\n' '\0')

if [ ${#mtab[@]} -eq 0 ]; then
    /usr/local/emhttp/webGui/scripts/notify -i alert -s "XFS Defragmentation failed!" -d "No HDD found!"
    exit
fi

printf "%s\n" "${mtab[@]}" > /tmp/.xfs_mtab
echo "Content of /tmp/.xfs_mtab:"
cat /tmp/.xfs_mtab

# defrag
xfs_fsr -v -m /tmp/.xfs_mtab -t "$defrag_seconds" -f /tmp/.xfs_fsrlast
```
mgutt Posted October 20, 2020 (Author)

Does anyone know how it could be possible to check if a disk is an SSD? That way we could automatically obtain the SSD disk IDs and remove the "ignore_disks" setting from the script.
JorgeB Posted October 20, 2020

5 minutes ago, mgutt said:
> Does anyone know how it could be possible to check if a disk is an SSD?

You could use:

```shell
cat /sys/block/<device>/queue/rotational
```

An output of 0 means SSD, 1 means HDD.
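A small helper can make the flag readable in scripts. This is just an illustrative sketch; the `disk_type` name is made up, not part of the script above:

```shell
# Translate the rotational flag (0 = SSD, 1 = HDD) into a label.
disk_type() {
    case "$1" in
        0) echo "SSD" ;;
        1) echo "HDD" ;;
        *) echo "unknown" ;;
    esac
}

# On a real system (device name is an example):
# disk_type "$(cat /sys/block/sdb/queue/rotational)"
disk_type 0   # prints "SSD"
```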
mgutt Posted October 20, 2020 (Author)

Ok, done. Script v0.2 now automatically recognizes SSDs and skips them.
eagle470 Posted October 20, 2020

Are you going to make this a plugin?
kizer Posted October 20, 2020

How does this deal with Parity Sync?
mgutt Posted October 20, 2020 (Author)

42 minutes ago, kizer said:
> How does this deal with Parity Sync?

If a file needs to be defragmented, it will update the parity as usual. So it does not "bypass" the parity, if that was your thought. Is that even possible? I don't know.
mgutt Posted October 20, 2020 (Author)

1 hour ago, eagle470 said:
> Are you going to make this a plugin?

My first plugin will add file browser features to the WebGUI. But don't tell anyone ^^
trurl Posted October 21, 2020

1 hour ago, mgutt said:
> If a file needs to be defragmented, it will update the parity as usual. So it does not "bypass" the parity, if that was your thought. Is that even possible? I don't know.

I assume (you know what that means) that it is similar to xfs_repair as far as maintaining parity: if you repair the md device, parity is maintained; if you repair the partition on the sd device, parity is not maintained. It looks like your code is working with the md devices, so parity should be maintained, but I won't pretend to understand it all.
trurl Posted October 21, 2020

1 hour ago, mgutt said:
> My first plugin will add file browser features to the WebGUI. But don't tell anyone ^^

Hope you will consider these points:
mgutt Posted October 21, 2020 (Author)

I will test this by fragmenting a file and checking the parity activity during defragmentation. Screenshots will follow. But I don't think it bypasses parity, as I have been defragging for several weeks now and no parity check has ever corrected anything.
mgutt Posted October 21, 2020 (Author)

3 minutes ago, trurl said:
> Hope you will consider these points:

Yes, I know both "bugs" (moving to a different share does not move to the correct disk, and moving between share and disk can kill files). But moving will come much later. At first I will implement delete and maybe zip/tar.
tronyx Posted October 21, 2020

Pardon my ignorance here, but what is the benefit of this? Is this something that one *SHOULD* be doing? Are there any disadvantages to this?
mgutt Posted October 21, 2020 (Author)

5 hours ago, tronyx said:
> Is this something that one *SHOULD* be doing?

No. Most people suggest this only for heavily written HDDs, as their data fragments (gets spread across different positions on the platter) over time. This causes higher latency for reads and writes, i.e. slower and fluctuating transfer rates. As I used multiple parallel transfers to fill my HDDs (not clever), some of them were heavily fragmented. So I would say it's more of a one-time job, or should only be done once per year or after changing many files on the HDD. Sadly it's not possible to get the real fragmentation rate, as really huge files must be split into 8 GB extents anyway. Maybe I should integrate a notification for when all HDDs were successfully defragmented. Or add a temporary file which disables defragmentation for several months after a successful run.
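The 8 GB point above can be made concrete: if extents max out at 8 GiB, a file of a given size needs at least ceil(size / 8 GiB) extents, and that floor is the baseline a "real" fragmentation rate would compare against. A minimal sketch, assuming the 8 GB figure from the post (the `min_extents` name is invented):

```shell
# Minimum possible extent count for a file of the given size in bytes,
# assuming a maximum extent size of 8 GiB (the figure from the post).
min_extents() {
    local size=$1
    local max=$((8 * 1024 * 1024 * 1024))
    echo $(( (size + max - 1) / max ))   # integer ceil(size / max)
}

min_extents $((20 * 1024 ** 3))   # a 20 GiB movie: prints 3
```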
JorgeB Posted October 21, 2020

15 hours ago, mgutt said:
> rotational=$(cat /sys/block/md${disk_id}/queue/rotational)

I forgot to mention: you can't do this on the mdX device, it will always return 1, even for an SSD. There shouldn't be many users with SSDs in the array, but if you want to check it, you need to do it on the sdX devices.
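A hedged sketch of the fix JorgeB describes: resolve the array slot to its underlying sdX device first (the script already parses mdcmd's `rdevName.N=sdX` lines for the parity disk) and read the rotational flag there. The `sd_for_slot` helper is invented here for illustration:

```shell
# Extract the sdX device name for array slot N from mdcmd-style
# status output (lines like "rdevName.1=sdc").
sd_for_slot() {
    local status="$1" slot="$2"
    printf '%s\n' "$status" | grep "^rdevName\.${slot}=" | cut -d = -f 2
}

# On a live array (sketch, not verified here):
# sd=$(sd_for_slot "$(mdcmd status)" "$disk_id")
# rotational=$(cat "/sys/block/${sd}/queue/rotational")
```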
ljm42 Posted October 22, 2020

This looks pretty cool, thanks! Would you mind posting this to gist.github.com instead of embedding it here in the forum? Often the forum will add hidden characters that break things. Plus, with a gist we can see the change history. Might also make sense to link to this thread for more context: https://forums.unraid.net/topic/44592-defrag-xfs-array-drives/

Potential idea: what if you pass a 'status' param to the script, and then instead of running the defrag it shows the fragmentation on each drive:

```shell
xfs_db -c 'frag -f' -r /dev/mdX
```

This should be safe to run while a defrag is happening, so it could skip the "race condition" check and the parity spinning check.

Also, would you mind making it so setting defrag_seconds=0 removes the -t param?
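The 'status' idea could look roughly like this: gather the array's md devices and run the fragmentation report on each. The sketch below only prints the commands (`status_commands` is an invented helper; whether `xfs_db -c 'frag -f'` is safe during a running defrag is taken from the post above, not verified here):

```shell
# Emit one fragmentation-report command per given device; on a live
# system the emitted lines would be executed instead of printed.
status_commands() {
    local dev
    for dev in "$@"; do
        echo "xfs_db -c 'frag -f' -r $dev"
    done
}

status_commands /dev/md1 /dev/md2
# prints:
# xfs_db -c 'frag -f' -r /dev/md1
# xfs_db -c 'frag -f' -r /dev/md2
```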
mgutt Posted November 19, 2020 (Author)

@ljm42 Thanks for your input. I will consider the zero-value problem in a future release. I will publish this script on GitHub only if I find the time to write it as a plugin. "frag -f" is sadly nearly useless, as described in this post. As an example, all my HDDs return a high fragmentation factor although no defragmentation is possible (as they mainly contain huge movies). Maybe it would be possible to calculate the real fragmentation factor by counting the files and their sizes, dividing by 8 GB, and finally comparing this value with the value returned by "frag -f". I need to think about that...
mgutt Posted November 21, 2020 (Author)

On 10/23/2020 at 12:43 AM, ljm42 said:
> Also, would you mind making it so setting defrag_seconds=0 removes the -t param?

I checked that. If the param is removed, xfs_fsr uses its default value of 7200 seconds (check line 59). If you expect an "unlimited" execution, you must set a really high value like 2630000 (= 1 month). Should I map 0 to 9999999 (~4 months)?
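For reference, the zero handling could be done by building the xfs_fsr argument list conditionally. A sketch (the `build_args` helper is invented; as noted above, omitting -t just falls back to xfs_fsr's own 7200-second default, so a truly "unlimited" run still needs a large explicit value):

```shell
# Build the xfs_fsr argument list; append -t only for a positive value.
build_args() {
    local seconds=$1
    local args=(-v -m /tmp/.xfs_mtab -f /tmp/.xfs_fsrlast)
    if (( seconds > 0 )); then
        args+=(-t "$seconds")
    fi
    echo "${args[*]}"
}

build_args 0      # prints "-v -m /tmp/.xfs_mtab -f /tmp/.xfs_fsrlast"
build_args 7200   # prints "-v -m /tmp/.xfs_mtab -f /tmp/.xfs_fsrlast -t 7200"
```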
Hairy Posted February 2, 2021

Really looking forward to this as a plugin (if you ever make one). I mainly have HDDs and had to fill them nearly full (95% and more), as new disks are expensive now with corona, and the fragmentation is rather high; I can already "feel" its performance drain. Thank you for putting your time into stuff like this and making it available for non-Linux pros.
skyfox77 Posted May 9, 2021

Any news on this as a plugin? I would really love one
Gorosch Posted July 25, 2021

Is it possible that the script, or defragmentation in general, does not work with encrypted HDDs? The script claims that it cannot find any HDDs.
drfsol Posted October 18, 2021

On 7/25/2021 at 4:34 PM, Gorosch said:
> Is it possible that the script or defragmentation in general does not work with encrypted HDDs? The script claims that it would not find any HDDs.

Yes, replace all occurrences of /dev/md with /dev/mapper/md.
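The change drfsol describes touches the two places the script matches the device path: the mtab grep ('^/dev/md') and the regex. A minimal check of the adjusted pattern (the sample mtab line and the `extract_disk_id` name are made up for illustration):

```shell
# Match encrypted array disks, which mount from /dev/mapper/mdX
# instead of /dev/mdX, and return the disk number.
extract_disk_id() {
    local regex="^/dev/mapper/md([0-9]+)"
    [[ "$1" =~ $regex ]] && echo "${BASH_REMATCH[1]}"
}

extract_disk_id "/dev/mapper/md1 /mnt/disk1 xfs rw,noatime 0 0"   # prints "1"
```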
Gorosch Posted October 23, 2021

On 10/18/2021 at 6:59 PM, drfsol said:
> Yes, replace all occurence of /dev/md to /dev/mapper/md

Thanks, seems to be working.
Judd35472 Posted December 8, 2021

On 5/9/2021 at 2:39 AM, skyfox77 said:
> Any news on this as a plugin? I would really love one

It's been a while. Any update? Thanks, much appreciated.
shorshi Posted July 11, 2022

Can I see the progress on this anywhere? I am wondering what kind of runtime length I should be aiming for; the default 2 hours seems very low. I was thinking of just putting in 24 hours or something like that; it will just quit once it is done anyway, right?