DoINeedATag Posted March 28

In the terminal, can you check in htop that you see xfs_fsr running? Use F4 to filter for xfs_fsr.

Can you also run blkid and check whether your XFS drives are mapped as /dev/md1p1 or /dev/mapper/md1p1? I have another machine on 6.12.8 (Node2) and it has its XFS drives mounted differently than 6.12.6 (Node1) (not sure if that is normal or not).

root@Node1:~# blkid | grep xfs
/dev/sdn1: UUID="" BLOCK_SIZE="512" TYPE="xfs" PARTUUID=""
/dev/mapper/md2p1: UUID="" BLOCK_SIZE="512" TYPE="xfs"
/dev/mapper/md1p1: UUID="" BLOCK_SIZE="512" TYPE="xfs"

root@Node2:~# blkid | grep xfs
/dev/sdb1: UUID="" BLOCK_SIZE="512" TYPE="xfs"
/dev/md1p1: UUID="" BLOCK_SIZE="512" TYPE="xfs"
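If you would rather script that check than eyeball the blkid output, a tiny classifier works. This is only a sketch in plain bash: the classify_mapping name is made up for this post, and the device paths fed to it are just the examples above.

```shell
# Classify a device path by its Unraid 6.12.x mapping style.
# Pure string matching, so it can be tried without real devices.
classify_mapping() {
    case "$1" in
        /dev/mapper/md*) echo "mapper" ;;
        /dev/md*)        echo "direct" ;;
        *)               echo "other"  ;;
    esac
}

classify_mapping /dev/mapper/md1p1   # -> mapper
classify_mapping /dev/md1p1          # -> direct
classify_mapping /dev/sdb1           # -> other
```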
DoINeedATag Posted March 28 (edited)

Alright, from my testing I think I have fixed this script to work with both 6.12.x drive mappings (/dev/md1p1 or /dev/mapper/md1p1). However, I don't have an Unraid machine that has BOTH drive mappings at the same time, so I am not sure what will happen if you do (at least in theory it's just two OR statements, so it should handle both?). Use at your own risk and verify with blkid what kind of disk mappings you have.

#!/bin/bash
# #####################################
# Script:      XFS Extended Defragmentation v0.3.1
# Description: Defragment only HDDs (SSDs will be ignored) if requirements are met, for a selectable running time.
# Author:      Marc Gutt, Computron, DoINeedATag
#
# Changelog:
# 0.3.1
# - Fixed regex finding both /dev/mapper/mdXpY and /dev/mdXpY (DoINeedATag)
# - Made it so the notification setting actually does something (line 68)
# - Only for Unraid version 6.12+
# 0.3
# - 6.12.6+ fix for regex finding drive paths (Updated by Computron & DoINeedATag)
# 0.2
# - SSD recognition added
# 0.1
# - first release
#
# ######### Settings ##################
# defrag_seconds=7200 : Defrag only for a specific time in seconds (default is 7200 seconds = 2 hours)
# defrag_only_sleep=1 : Defrag only when parity is not active (set to 0 to disable)
# notification=1      : Notify yourself (set to 0 to disable)
defrag_seconds=7200
defrag_only_sleep=1
notification=1
# #####################################
#
# ######### Script ####################

# make script race condition safe
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi; trap 'rmdir "/tmp/${0///}"' EXIT;

# defrag only if parity is not spinning
if [[ $defrag_only_sleep == "1" ]]; then
    # obtain parity disk's device id
    parity_id=$(mdcmd status | grep rdevName.0 | cut -d = -f 2)
    echo "Parity has device id $parity_id"
    # we defrag only if parity is not spinning
    parity_state=$(smartctl -n standby "/dev/$parity_id")
    if [[ $parity_state != *"STANDBY"* ]]; then
        echo "Defragmentation has been skipped because of active parity disk"
        exit
    fi
fi

echo "Search for HDDs..."

# parse /etc/mtab and check rotational status
declare -a mtab
while IFS= read -r -d '' line; do
    disk_mapper_id_regex="^/dev/mapper/md([0-9]+)p([0-9]+)"
    disk_id_regex="^/dev/md([0-9]+)p([0-9]+)"
    if [[ "$line" =~ $disk_mapper_id_regex ]] || [[ "$line" =~ $disk_id_regex ]]; then
        disk_id=${BASH_REMATCH[1]}
        part_id=${BASH_REMATCH[2]}
        rotational=$(cat /sys/block/md${disk_id}p${part_id}/queue/rotational)
        if [[ "$rotational" == "1" ]]; then
            mtab+=("$line")
            echo "Found HDD with id md${disk_id}p${part_id} (added)"
            continue
        fi
        echo "Found SSD with id md${disk_id}p${part_id} (skipped)"
    fi
done < <(cat /etc/mtab | grep -E '^/dev/mapper/md|^/dev/md' | tr '\n' '\0')

if [ ${#mtab[@]} -eq 0 ]; then
    if [ $notification == "1" ]; then
        /usr/local/emhttp/webGui/scripts/notify -i alert -s "XFS Defragmentation failed!" -d "No HDD found!"
    fi
    echo "No HDD found, exiting"
    exit
fi

printf "%s\n" "${mtab[@]}" > /tmp/.xfs_mtab
echo "Content of /tmp/.xfs_mtab:"
cat /tmp/.xfs_mtab

# defrag
xfs_fsr -v -m /tmp/.xfs_mtab -t "$defrag_seconds" -f /tmp/.xfs_fsrlast

Edited March 28 by DoINeedATag
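Since the change is just two OR'd [[ =~ ]] tests, it can be sanity-checked in isolation. A standalone sketch, fed made-up mtab-style lines instead of /etc/mtab (the sample lines are illustrative, not from a real machine):

```shell
# Standalone check of the v0.3.1 matching logic against both mapping styles.
disk_mapper_id_regex="^/dev/mapper/md([0-9]+)p([0-9]+)"
disk_id_regex="^/dev/md([0-9]+)p([0-9]+)"

for line in "/dev/mapper/md2p1 /mnt/disk2 xfs rw 0 0" \
            "/dev/md1p1 /mnt/disk1 xfs rw 0 0" \
            "/dev/sdb1 /mnt/cache xfs rw 0 0"; do
    if [[ "$line" =~ $disk_mapper_id_regex ]] || [[ "$line" =~ $disk_id_regex ]]; then
        # BASH_REMATCH holds the groups from whichever regex matched
        echo "matched md${BASH_REMATCH[1]}p${BASH_REMATCH[2]}"
    else
        echo "no match: ${line%% *}"
    fi
done
```

Both mapper and direct lines resolve to the same mdXpY id, and plain sdX devices fall through, which is the behaviour the script relies on.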
anthony0030 Posted March 29

Hi @DoINeedATag, xfs_fsr is running when the script starts, and my drives look like this:

root@Tower:~# blkid | grep xfs
/dev/sdf1: UUID="" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID=""
/dev/sdd1: UUID="" BLOCK_SIZE="512" TYPE="xfs" PARTUUID=""
/dev/md2p1: UUID="" BLOCK_SIZE="512" TYPE="xfs"
/dev/md1p1: UUID="" BLOCK_SIZE="4096" TYPE="xfs"
/dev/sdg1: UUID="" BLOCK_SIZE="512" TYPE="xfs" PARTUUID=""
/dev/sde1: UUID="" BLOCK_SIZE="4096" TYPE="xfs" PARTUUID=""
/dev/md3p1: UUID="" BLOCK_SIZE="512" TYPE="xfs"

I tried running the script and it was blocked by the active parity disk, so I disabled the setting that waits for parity and now it seems to be doing something:

Search for HDDs...
Found HDD with id md1p1 (added)
Found HDD with id md2p1 (added)
Found HDD with id md3p1 (added)
Content of /tmp/.xfs_mtab:
/dev/md1p1 /mnt/disk1 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/md2p1 /mnt/disk2 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
/dev/md3p1 /mnt/disk3 xfs rw,noatime,nouuid,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0

I am going to check back in 2 hours to see if it did anything. Thanks for your help so far.
anthony0030 Posted March 29

I am running it again, because after the run I still get:

root@Tower:~# xfs_db -r /dev/md1p1
xfs_db> Frag
command Frag not found
xfs_db> frag
actual 123304, ideal 35055, fragmentation factor 71.57%
Note, this number is largely meaningless.
Files on this filesystem average 3.52 extents per file
xfs_db>
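Those two counts determine the rest of the output: the factor is (actual − ideal) / actual × 100, and the average extents per file is actual / ideal. A small awk sketch reproduces the reported values exactly (no filesystem access needed, just the arithmetic):

```shell
# Reproduce the xfs_db "frag" figures from the session above:
#   factor       = (actual - ideal) / actual * 100
#   extents/file = actual / ideal
actual=123304
ideal=35055
awk -v a="$actual" -v i="$ideal" 'BEGIN {
    printf "fragmentation factor %.2f%%\n", (a - i) / a * 100
    printf "average %.2f extents per file\n", a / i
}'
# prints: fragmentation factor 71.57%
#         average 3.52 extents per file
```

This also shows why the factor looks alarming: with an average of only 3.52 extents per file, the percentage is high even though each file is barely fragmented, which is what the "largely meaningless" note is warning about.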
DoINeedATag Posted March 30 (edited)

The script will only run for two hours. In the thread below, some people reported really fragmented drives taking 48+ hours. If you use the User Scripts plugin to run it daily it will finish over time, or you can increase the run time in the script. Also, if the majority of the drive holds files larger than 8 GB, it won't be able to improve those files, which is fine: the system is working as intended. You can also see whether it's your files or directories by running frag -d or frag -f.

Overall, if you run this once in a blue moon, on most drives that should be more than sufficient from what I've read and learned researching this topic. Again, it depends on your use case: a docker image folder or a shared file system that gets added to and deleted from regularly will probably benefit from an occasional defrag. But remember you are adding wear and tear to the drive by defragging, so use your best judgement on when/if you should defrag. Hope that helps!

Edited March 30 by DoINeedATag
anthony0030 Posted March 30 (edited)

It's definitely doing something. Disk one is down to 64.13%.
Update after running it again for 5 hours: it is down to 28.44%. One more time and I think it will be fine.
Update after running it again for 5 hours: it is down to 9.76%.

Edited March 31 by anthony0030
anthony0030 Posted April 5

Hi @DoINeedATag, I have some suggestions for the script.
1) Add it to a git repo on GitHub (for version control).
2) Add console logs before and after the defrag happens, to show fragmentation before and after.
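For suggestion 2, a rough sketch of what the before/after logging could look like. The parse_frag helper is made up for this post, and the device path in the commented lines is an example; the real parts are xfs_db's -r (read-only) and -c (run one command non-interactively) options:

```shell
# Pull the fragmentation factor out of xfs_db "frag" output.
# Example input: "actual 123304, ideal 35055, fragmentation factor 71.57%"
parse_frag() {
    grep -o 'fragmentation factor [0-9.]*%' | awk '{print $3}'
}

# Illustrative wrapper around the defrag (device path is hypothetical):
#   before=$(xfs_db -r -c frag /dev/md1p1 | parse_frag)
#   ...run xfs_fsr here...
#   after=$(xfs_db -r -c frag /dev/md1p1 | parse_frag)
#   echo "disk1 fragmentation: $before -> $after"

# Demonstrate the parser on a captured line:
echo "actual 123304, ideal 35055, fragmentation factor 71.57%" | parse_frag
# prints: 71.57%
```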
Presjar Posted June 5

On 3/29/2024 at 3:13 AM, anthony0030 said:

I tried running the script in the background and it looks like nothing happens.

Script Starting Mar 28, 2024 19:09.13
Full logs for this script are available at /tmp/user.scripts/tmpScripts/XFS Extended Defragmentation v0.3/log.txt
Parity has device id sdf
Search for HDDs...
Script Finished Mar 28, 2024 19:09.13

I also tried disabling defrag_only_sleep. I am running Unraid 6.12.8.

I get the same as Anthony on 6.12.10. The HDD detection isn't right, as it triggers the error popup notification immediately.
anthony0030 Posted June 5

@Presjar Are you using the latest version of the script? With the new version, it works for me.
Presjar Posted June 6 (edited)

@anthony0030 Thanks for that. I somehow copied the 0.3 version, not 0.3.1. Copied the correct version and all is good!

I also enabled "Turbo Write" / "reconstruct write" for now. Hopefully it increases the amount of work done in the 2 hours per drive, but I'm unsure so far. Read/write on the active disk seems to be 30-50 MB/s while the defrag is running.

What size drives were you running this on? How full?

Edited June 6 by Presjar
Presjar Posted June 8 (edited)

Looks like having "Turbo Write" / "reconstruct write" enabled doesn't improve the performance for me.

Edited June 8 by Presjar
JorgeB Posted June 8

Turbo write won't help if there are simultaneous writes to more than one disk, it's automatically disabled during those.
Gorosch Posted August 30 (edited)

Hello, I have recently had the problem that the script does not stop after 2 hours, as it is actually set to do. It worked until recently. Now it runs for half the day until it is finished. What could be the reason for this?

#!/bin/bash
# #####################################
# Script:      XFS Extended Defragmentation v0.3
# Description: Defragmentate only HDDs (SSDs will be ignored) if requirements met for a selectable running time.
# Author:      Marc Gutt, Computron, DoINeedATag
#
# Changelog:
# 0.3
# - 6.12.6+ fix for regex finding drive paths (Updated by Computron & DoINeedATag)
# 0.2
# - SSD recognition added
# 0.1
# - first release
#
# ######### Settings ##################
# defrag_seconds=7200 : Defrag only for a specific time in seconds (default is 7200 seconds = 2 hours)
# defrag_only_sleep=1 : Defrag only when parity is not active (set to 0 to disable)
# notification=1      : Notify yourself (set to 0 to disable)
defrag_seconds=7200
defrag_only_sleep=1
notification=1
# #####################################
#
# ######### Script ####################

# make script race condition safe
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi; trap 'rmdir "/tmp/${0///}"' EXIT;

# defrag only if parity is not spinning
if [[ $defrag_only_sleep == "1" ]]; then
    # obtain parity disk's device id
    parity_id=$(mdcmd status | grep rdevName.0 | cut -d = -f 2)
    echo "Parity has device id $parity_id"
    # we defrag only if parity is not spinning
    parity_state=$(smartctl -n standby "/dev/$parity_id")
    if [[ $parity_state != *"STANDBY"* ]]; then
        echo "Defragmentation has been skipped because of active parity disk"
        exit
    fi
fi

echo "Search for HDDs..."

# parse /etc/mtab and check rotational status
declare -a mtab
while IFS= read -r -d '' line; do
    disk_id_regex="^/dev/mapper/md([0-9]+)p([0-9]+)"
    if [[ "$line" =~ $disk_id_regex ]]; then
        disk_id=${BASH_REMATCH[1]}
        part_id=${BASH_REMATCH[2]}
        rotational=$(cat /sys/block/md${disk_id}p${part_id}/queue/rotational)
        if [[ "$rotational" == "1" ]]; then
            mtab+=("$line")
            echo "Found HDD with id md${disk_id}p${part_id} (added)"
            continue
        fi
        echo "Found SSD with id md${disk_id}p${part_id} (skipped)"
    fi
done < <(cat /etc/mtab | grep -E '^/dev/mapper/md' | tr '\n' '\0')

if [ ${#mtab[@]} -eq 0 ]; then
    /usr/local/emhttp/webGui/scripts/notify -i alert -s "XFS Defragmentation failed!" -d "No HDD found!"
    exit
fi

printf "%s\n" "${mtab[@]}" > /tmp/.xfs_mtab
echo "Content of /tmp/.xfs_mtab:"
cat /tmp/.xfs_mtab

# defrag
xfs_fsr -v -m /tmp/.xfs_mtab -t "$defrag_seconds" -f /tmp/.xfs_fsrlast

Edited August 30 by Gorosch
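One possible explanation (an assumption, not confirmed from the xfs_fsr source): the -t budget may only be checked between files, so a single very large file can push the run well past the limit. A hedged workaround is to add a hard cap around the last line of the script with GNU coreutils timeout; this is a sketch, and the 60-second grace period is an arbitrary choice:

```shell
# Hard cap on top of xfs_fsr's own -t limit: send SIGINT when the budget
# expires, then SIGKILL 60 s later if the process is still running.
defrag_seconds=7200
timeout --signal=INT --kill-after=60 "$defrag_seconds" \
    xfs_fsr -v -m /tmp/.xfs_mtab -t "$defrag_seconds" -f /tmp/.xfs_fsrlast

# timeout exits with status 124 when the time limit is what stopped the run,
# so the script could log that case separately.
```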