Defrag XFS array drives


ljm42


I recently reformatted my disk1 from ReiserFS to XFS, and then I did rsync -avPX /mnt/disk4/ /mnt/disk1
When it was done I ran xfs_db's frag -f check, and to my surprise the fragmentation level is 27%. I was kind of hoping to see 0%.

Does rsync write temporary files somewhere on the destination drive and then delete them after completion? Other than that, I'm not sure what might cause such significant fragmentation.
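For reference, here is a minimal way to reproduce that check non-interactively (a sketch, assuming the converted destination drive is /dev/md1 as in the rsync command above):

# read-only query of file fragmentation on the converted disk
xfs_db -r -c "frag -f" /dev/md1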

  • 11 months later...

Thanks for the guide. I have been having a lot of buffering problems with some of my movies of late, so I decided to check fragmentation. OMG. I was told that Linux file systems don't suffer from fragmentation like NTFS. They couldn't be more wrong. One of my drives is at 99.87% fragmentation. That is the worst fragmentation I have ever seen on any system, and it is a 2TB drive with 350GB free. Can anyone explain why the fragmentation is so bad?

 

2TB HDD, 350GB Free, 99.86% Fragmentation

root@Tower:~# xfs_db -r /dev/md6
xfs_db> frag
actual 462546, ideal 636, fragmentation factor 99.86%
Note, this number is largely meaningless.
Files on this filesystem average 727.27 extents per file
xfs_db> frag -d
actual 27, ideal 26, fragmentation factor 3.70%
Note, this number is largely meaningless.
Files on this filesystem average 1.04 extents per file
xfs_db> frag -f
actual 462519, ideal 610, fragmentation factor 99.87%
Note, this number is largely meaningless.
Files on this filesystem average 758.23 extents per file
xfs_db> 

 

2TB HDD, 320GB Free, 99.33% Fragmentation

root@Tower:~# xfs_db -r /dev/md5
xfs_db> frag
actual 2299335, ideal 15320, fragmentation factor 99.33%
Note, this number is largely meaningless.
Files on this filesystem average 150.09 extents per file
xfs_db> frag -d
actual 522, ideal 460, fragmentation factor 11.88%
Note, this number is largely meaningless.
Files on this filesystem average 1.13 extents per file
xfs_db> frag -f
actual 2298813, ideal 14860, fragmentation factor 99.35%
Note, this number is largely meaningless.
Files on this filesystem average 154.70 extents per file
xfs_db> 

 

6TB HDD, 600GB Free, 95.48% Fragmentation

root@Tower:~# xfs_db -r /dev/md4
xfs_db> frag
actual 15899, ideal 718, fragmentation factor 95.48%
Note, this number is largely meaningless.
Files on this filesystem average 22.14 extents per file
xfs_db> frag -d
actual 33, ideal 28, fragmentation factor 15.15%
Note, this number is largely meaningless.
Files on this filesystem average 1.18 extents per file
xfs_db> frag -f
actual 15866, ideal 690, fragmentation factor 95.65%
Note, this number is largely meaningless.
Files on this filesystem average 22.99 extents per file
xfs_db> 

 

6TB HDD, 400GB Free, 98.56% Fragmentation

root@Tower:~# xfs_db -r /dev/md3
xfs_db> frag
actual 500672, ideal 7220, fragmentation factor 98.56%
Note, this number is largely meaningless.
Files on this filesystem average 69.35 extents per file
xfs_db> frag -d
actual 391, ideal 370, fragmentation factor 5.37%
Note, this number is largely meaningless.
Files on this filesystem average 1.06 extents per file
xfs_db> frag -f
actual 500281, ideal 6850, fragmentation factor 98.63%
Note, this number is largely meaningless.
Files on this filesystem average 73.03 extents per file
xfs_db> 

 

2TB HDD, 350GB Free, 76.99% Fragmentation

root@Tower:~# xfs_db -r /dev/md2
xfs_db> frag
actual 153132, ideal 35233, fragmentation factor 76.99%
Note, this number is largely meaningless.
Files on this filesystem average 4.35 extents per file
xfs_db> frag -d
actual 1374, ideal 1354, fragmentation factor 1.46%
Note, this number is largely meaningless.
Files on this filesystem average 1.01 extents per file
xfs_db> frag -f
actual 151758, ideal 33879, fragmentation factor 77.68%
Note, this number is largely meaningless.
Files on this filesystem average 4.48 extents per file
xfs_db> 

 

6TB HDD, 1.1TB Free, 36.5% Fragmentation (Contains all my AppData, dockers, etc. and some Movies)

root@Tower:~# xfs_db -r /dev/md1
xfs_db> frag 
actual 2846177, ideal 1807222, fragmentation factor 36.50%
Note, this number is largely meaningless.
Files on this filesystem average 1.57 extents per file
xfs_db> frag -d
actual 53083, ideal 40568, fragmentation factor 23.58%
Note, this number is largely meaningless.
Files on this filesystem average 1.31 extents per file
xfs_db> frag -f
actual 2793094, ideal 1766654, fragmentation factor 36.75%
Note, this number is largely meaningless.
Files on this filesystem average 1.58 extents per file
xfs_db> 

 

2TB HDD, 600GB Free, 3.96% Fragmentation (This is the newest drive on the server)

root@Tower:~# xfs_db -r /dev/md7
xfs_db> frag
actual 2954, ideal 2837, fragmentation factor 3.96%
Note, this number is largely meaningless.
Files on this filesystem average 1.04 extents per file
xfs_db> frag -d
actual 122, ideal 109, fragmentation factor 10.66%
Note, this number is largely meaningless.
Files on this filesystem average 1.12 extents per file
xfs_db> frag -f
actual 2832, ideal 2728, fragmentation factor 3.67%
Note, this number is largely meaningless.
Files on this filesystem average 1.04 extents per file
xfs_db> 

 

5 minutes ago, mayhem2408 said:

Can anyone explain why the fragmentation is so bad?

Same reason files get fragmented on any OS: especially when drives get really full, or during the normal process of adding / removing media (or simultaneous writes of large files to the same drive in an attempt to speed up transfers), there isn't enough contiguous space available to store the files.

 

10 minutes ago, mayhem2408 said:

I have been having a lot of buffering problems with some of my movies of late.

I would actually surmise that the root of your issue is that you're running unRaid 6.7.2 and there are also simultaneous writes happening to the cache drive.  6.8 fixes that.

7 hours ago, mayhem2408 said:

I was told that Linux file systems don't suffer from fragmentation like NTFS.

You probably misinterpreted that statement. All file systems can get fragmented; some just perform worse than others at the same level of fragmentation. Having fragmentation and suffering a performance loss are two different things. Like Squid said, your performance issues are likely not primarily due to fragmentation.


I tried running "xfs_fsr -t 18000" without specifying a /dev/md? so that it would start an automatic defrag of all XFS drives for 5 hours. I noticed it started defragging /mnt/disk1 instead of /dev/md1. Is it safe to let it continue on /mnt/disk1?

I've noticed that if I specify /dev/md1 on the command line, it ignores the -t option (which is apparently normal).
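To spell out the two invocation modes as I understand them (a sketch; the 18000-second value is just the 5-hour example above):

# no target: xfs_fsr walks every mounted XFS filesystem and honors the -t time limit
xfs_fsr -v -t 18000

# explicit target: that one filesystem is reorganized to completion,
# which is apparently why -t is effectively ignored here
xfs_fsr -v /dev/md1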

  • 4 months later...
51 minutes ago, James_Darkness said:

Can I disable the parity drive to speed up the defrag, and if so, how?

Doing so would mean that parity would no longer be valid and would then need to be rebuilt.  Also, if a drive failed while parity was disabled, you would not be protected, so you would lose the contents of that drive.

  • 4 months later...

Does anyone have an idea how to force defragmentation, or how to find out why some files are skipped?

xfs_db -c frag -r /dev/md1
actual 1718, ideal 674, fragmentation factor 60.77%
Note, this number is largely meaningless.
Files on this filesystem average 2.55 extents per file
xfs_fsr /dev/md1 -v
/mnt/disk1 start inode=0
ino=133
No improvement will be made (skipping): ino=133
ino=135
No improvement will be made (skipping): ino=135
ino=138
No improvement will be made (skipping): ino=138
ino=142

....
xfs_db -r /dev/md1 -c "inode 133" -c "bmap -d"
data offset 0 startblock 1314074773 (4/240332949) count 2097151 flag 0
data offset 2097151 startblock 1316171924 (4/242430100) count 2097151 flag 0
data offset 4194302 startblock 1318269075 (4/244527251) count 2097151 flag 0
data offset 6291453 startblock 1320366226 (4/246624402) count 1121800 flag 0
find /mnt/disk1 -xdev -inum 133
/mnt/disk1/Movie/AB/A/movie.mkv

EDIT: OK, I found the source of xfs_fsr, and the -d flag returns more information:

xfs_fsr /dev/md1 -v -d
/mnt/disk1 start inode=0
ino=133
ino=133 extents=4 can_save=3 tmp=/mnt/disk1/.fsr/ag0/tmp23917
DEBUG: fsize=30364684107 blsz_dio=16773120 d_min=512 d_max=2147483136 pgsz=4096
Temporary file has 4 extents (4 in original)
No improvement will be made (skipping): ino=133
ino=135
ino=135 extents=4 can_save=3 tmp=/mnt/disk1/.fsr/ag1/tmp23917
orig forkoff 288, temp forkoff 0
orig forkoff 288, temp forkoff 296
orig forkoff 288, temp forkoff 296
orig forkoff 288, temp forkoff 296
orig forkoff 288, temp forkoff 296
orig forkoff 288, temp forkoff 296
orig forkoff 288, temp forkoff 296
orig forkoff 288, temp forkoff 288
set temp attr
DEBUG: fsize=28400884827 blsz_dio=16773120 d_min=512 d_max=2147483136 pgsz=4096
Temporary file has 4 extents (4 in original)
No improvement will be made (skipping): ino=135

Although it says "can_save=3", the temporary file still has the same number of extents as the original file, so it does not defragment the file. So it seems some fragmentation cannot be prevented?!
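For anyone hitting the same wall, the bmap output above already hints at the reason: each full extent is 2097151 blocks, which looks like the per-extent maximum, so a file this size cannot occupy fewer than four extents. A sketch to double-check a file by path (the path and file size are taken from the output above; the 4096-byte block size is an assumption):

# inspect the file's extent layout by path instead of inode
xfs_bmap -v /mnt/disk1/Movie/AB/A/movie.mkv

# back-of-the-envelope check: 30364684107 bytes at 4096 bytes/block is 7413253 blocks,
# and with at most 2097151 blocks per extent that is ceil(7413253 / 2097151) = 4 extents
echo $(( (30364684107 + 4095) / 4096 ))
echo $(( (7413253 + 2097151 - 1) / 2097151 ))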

 

  • 1 month later...

I only tested my two 12TB drives; I don't want to defrag those.

 

xfs_db> frag
actual 52445, ideal 50533, fragmentation factor 3.65%
Note, this number is largely meaningless.
Files on this filesystem average 1.04 extents per file
xfs_db> frag -d
actual 1629, ideal 1510, fragmentation factor 7.31%
Note, this number is largely meaningless.
Files on this filesystem average 1.08 extents per file
xfs_db> frag -f
actual 50816, ideal 49023, fragmentation factor 3.53%
Note, this number is largely meaningless.
Files on this filesystem average 1.04 extents per file


xfs_db> frag
actual 49875, ideal 42765, fragmentation factor 14.26%
Note, this number is largely meaningless.
Files on this filesystem average 1.17 extents per file
xfs_db> frag -d
actual 1514, ideal 1471, fragmentation factor 2.84%
Note, this number is largely meaningless.
Files on this filesystem average 1.03 extents per file
xfs_db> frag -f
actual 48359, ideal 41292, fragmentation factor 14.61%
Note, this number is largely meaningless.
Files on this filesystem average 1.17 extents per file


Add script to CA User Scripts:

#!/bin/bash

# make script race condition safe: create a lock directory under /tmp derived
# from the script path; exit if it already exists, remove it on exit
if [[ -d "/tmp/${0///}" ]] || ! mkdir "/tmp/${0///}"; then exit 1; fi
trap 'rmdir "/tmp/${0///}"' EXIT

# defrag all xfs drives - stops automatically after 2 hours
xfs_fsr -v


Add the custom schedule "0 4 * * *" and it will be executed every night at 04:00 and stop after 2 hours. No-brainer.

8 hours ago, mgutt said:

 


# defrag all xfs drives - stops automatically after 2 hours
xfs_fsr -v

 

If you have any XFS-formatted SSDs you probably don't want to do this. Fragmentation on an SSD isn't going to cause a problem, so defragging an SSD is just unneeded wear and tear.
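A hedged variation of the script above for that case: defragment only chosen array disks instead of every mounted XFS filesystem (the disk list is a placeholder; adjust it to your own array and leave out any XFS pools or SSDs):

#!/bin/bash
# sketch: only touch the spinning array disks, skipping XFS SSDs/pools
for mnt in /mnt/disk1 /mnt/disk2 /mnt/disk3; do
    # a per-target run reorganizes that whole filesystem (no -t time limit)
    xfs_fsr -v "$mnt"
done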

11 hours ago, Squid said:

Probably because fragmentation isn't a really big problem on modern file systems, and the speed of drives lessens the degree of the impact.  People are so used to "you have to defrag your drives to solve Windows problems", which was always completely false.

Sure, it's not a huge problem anymore, but there are already so many plugins for almost everything that it would be nice to have one for defragging too :)

 

  • 6 months later...

What I don't understand is why I have 5-8% file fragmentation on my drives when I'm doing nothing but migrating the initial data onto them. This isn't folder fragmentation, but file fragmentation. Normally when you copy files to a fresh drive, they'd be written sequentially.

 

I can tell it isn't writing from beginning to end in any case. I'm using 'turbo write', which means it reads from all drives except the one it's writing to. I just put in a new 16TB drive (the same size as my parity drives), and I notice that it varies from file to file whether it reads from my 10TB drives. That means some files are being placed in the last 6TB of the 16TB drive, where there would be nothing to read from the 10TB ones, while other files spin up all disks. Why does it seem to scatter the files around and produce some fragmentation on the initial copy (copying via network)?

2 hours ago, Hastor said:

I just put in a new 16TB drive (the same size as my parity drives), and I notice that it varies from file to file whether it reads from my 10TB drives, which means some files are being placed in the last 6TB of the 16TB drive, where there would be nothing to read from the 10TB ones, but other files spin up all disks.

 

What is the fill rate of your 16TB disk? And is your parity a 16TB disk as well?

 

2 hours ago, Hastor said:

is why I have 5-8% file fragmentation on my drives

Because XFS splits huge files into multiple extents, and each extent of a huge file counts toward the fragmentation calculation. This is why this number is not accurate.
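As far as I can tell, the percentage is derived purely from the two extent counts, factor = (actual - ideal) / actual, which is why a handful of huge but optimally laid-out files can inflate it. A quick check using the md6 numbers posted earlier:

actual=462546; ideal=636
awk -v a="$actual" -v i="$ideal" 'BEGIN { printf "%.2f%%\n", (a - i) * 100 / a }'
# prints 99.86%, matching the xfs_db report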

1 hour ago, mgutt said:

What is the fill rate of your 16TB disk? And you parity is a 16TB disk as well?

 

I'm not sure what fill rate is exactly, and searching didn't help. I'm not sure if you mean the speed I'm filling them, how full they are, or their max write speed (or maybe something else). Pardon my ignorance on that.

 

If this helps: one 16TB drive has 12.7GB, the other has 2TB and is currently filling up more. The 10TB drives each have 4TB free, as that was the last high-water mark when it was just the three drives. I am using 'turbo write' just until I get all my data transferred, as I have a lot and it's much slower otherwise. I write anywhere from 90-110MB/s over the network. When it is doing stuff to the drives directly, like clearing, they get around 240MB/s. I'm not sure about the parity drives; they are always full and keep up with the speed of the others. I have cache drives, but they won't be added to the array until my data migration is done.

 

The part about the numbers being unreliable makes sense. Is there a reliable way to get fragmentation stats on an XFS drive? Seems like someone would have come up with a way by now...
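One possibility, hedged: per-file extent counts are more telling than the headline percentage, and filefrag (if it is installed) can list the worst offenders. A rough sketch, assuming disk1 and that file paths don't contain ": ":

# list the 20 files with the most extents on disk1
find /mnt/disk1 -xdev -type f -exec filefrag {} + 2>/dev/null \
  | awk -F': ' '{ n = $2; sub(/ extent.*/, "", n); print n + 0, $1 }' \
  | sort -rn | head -n 20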
