Defrag XFS array drives


ljm42


Mine finished sometime late yesterday after I called it a night... It completely defragmented files. I'm talking 0% fragmentation! Directory fragmentation remained the same as before.

 

Impressive!  Now I really wonder why mine didn't go to zero :)  And we're still left with the question of directory fragmentation.

 

 

Looking at this I have two questions.

1. Does it update parity at the same time? Or, more simply, how does it work with parity?

2. Has anybody noticed any significant increases in performance or reliability?

 

Check out the OP

 

You will get your command prompt back when it's complete. There really is no other way of telling when it's finished.

 

I didn't use screen. If I close the console window (Shell in a Box), will I be able to re-connect just by re-opening the window?

 

Where the OP says "console", that means using the keyboard and monitor plugged directly into the unRAID computer.  I assume Shell in a Box doesn't keep long running processes going after you close the browser, but let me know how it works out for you and I'll update the OP.

 

Link to comment

 

Where the OP says "console", that means using the keyboard and monitor plugged directly into the unRAID computer.  I assume Shell in a Box doesn't keep long running processes going after you close the browser, but let me know how it works out for you and I'll update the OP.

 

You're right. I closed the window just to see. Judging from the CPU activity the process is still running, but without screen I can't connect back to it.
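
For reference, this is exactly the problem screen solves. A minimal sketch, assuming screen is installed on the server (the md device number here is just an example):

screen -S defrag           # start a named screen session
xfs_fsr -v /dev/md1        # run the defrag inside it
# detach with Ctrl-A then d; the job keeps running
screen -r defrag           # reattach later from any console or SSH session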

Link to comment

... This is a 4TB drive with 3TB of data (mostly movies).  It ran for nearly 48 hours!  When I checked the results afterwards, fragmentation was improved although not as much as I expected:

 

48 hours !!

 

It would take less time to simply (a) copy the 3TB of data to a spare drive on a PC over the network;  (b) reformat the drive;  and then (c) copy the data back to it => and the result would be ZERO fragmentation  :)

 

Copying the 3TB off would take ~9 hours, and copying it back about 24 hours (due to slower writes to the parity-protected array) ... so perhaps 33 hours to get zero fragmentation.
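
(Working the numbers: 3 TB over ~9 hours is roughly 93 MB/s sustained for the read across the network, and 3 TB over ~24 hours is roughly 35 MB/s for the slower parity-protected writes back to the array.)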

 

Not sure either process is worth the bother -- fragmentation isn't a big deal with the typical use case for these arrays ... but if the goal is as little fragmentation as possible, the copy off/copy back process would result in NONE.

 

Link to comment

48 hours !!

 

Yeah, I'm not really sure why it took so long, or why it wasn't able to get to zero fragments when the drive has 1 TB of free space; you'd think there would be enough contiguous free space to finish the job. I'm not really worried about it though, as 21% file fragmentation doesn't seem too bad.

 

The good news is that there was no effort on my part, no downtime, and I didn't have to worry about data loss.  Copy off/copy back would have added to all three.

 

but if the goal is as little fragmentation as possible, the copy off/copy back process would result in NONE.

 

I'm not sure about that. I mean, how did the fragmentation get there in the first place? It isn't like I am editing the movies once they are on the array. When I copy from my desktop they go to the cache first, and then mover takes them over to the array. I'm not really sure why they are getting fragmented, since mover runs during the night when nothing else is writing to the array.

 

Not sure either process is worth the bother

 

That is definitely a discussion worth having! 

 

The good news is that with XFS we can easily measure fragmentation; the tough part is figuring out the threshold for when to take action.
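
For anyone who wants to check their own drives, the measurement itself is quick and read-only, so it is safe on a mounted array disk (the md number is just an example):

xfs_db -r -c frag /dev/md1
# prints actual extents, ideal extents, and a fragmentation factor percentage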

 

Link to comment

Any time a file is deleted, the freed space will eventually be filled by fragments of other files. So unless you never delete any files, you will get fragmentation even if there is only one write at a time. Gary's method of copying everything in one go to a freshly formatted disk is the only way to truly clean out a drive. After that, if you ever delete anything off the drive, newly copied files will be fragmented again.

Link to comment

Mine finished sometime late yesterday after I called it a night... It completely defragmented files. I'm talking 0% fragmentation! Directory fragmentation remained the same as before.

 

Impressive!  Now I really wonder why mine didn't go to zero :)  And we're still left with the question of directory fragmentation.

 

I disabled Cache Dirs and re-ran it, but it had no effect on directory fragmentation.  I'm beginning to think this tool doesn't defragment directories.

Link to comment
  • 1 year later...

Alright, thanks. Gonna have to wait till I get a monitor hooked up. I've no idea what screen is, heh

 

EDIT: If I just run

xfs_fsr -v

will it just run for 2 hours on all my drives, skipping my SSD which is btrfs?

 

EDIT2: And is there a way to set unRAID to run that for 2 hours every day?

Edited by superderpbro
Link to comment

I know the general recommendation, when a drive is being replaced, is to remove it and let unRAID rebuild it onto the new disk. I seldom/never do that. I am normally replacing at least 2, and sometimes 3 or 4 at a time. I install them outside the array (making sure unRAID partitioning and formatting is applied), and COPY the files from the array disks to the replacement disks. This results in completely defragmented disks, rather than clones of the existing fragmented disks. After all the copying is done (which can be done in parallel since there is no parity involved), I do a new config and rebuild parity. Overall I find this a more efficient process, with defragging just being a nice side effect.
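
The copy itself is a one-liner per disk. A sketch with hypothetical mount points, assuming the replacement for disk 3 is mounted at /mnt/disks/new3 via Unassigned Devices:

rsync -avPX /mnt/disk3/ /mnt/disks/new3/
# -a preserves ownership/permissions/times, -v is verbose,
# -P shows progress and allows resuming, -X carries over extended attributes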

 

Of course all of this is done only after a parity check and inspection of the SMART reports / notification emails to ensure that the drives are healthy. If done properly, it introduces no additional risk with minimal drawbacks, and a huge advantage in getting through the upgrade cycle quicker.

 

One of the big benefits of defragmentation is an enhanced ability to recover data in the event of disk corruption. Unfortunately, XFS is a file system that is particularly poor at recovering from such events, defragged or not. So I would question the need to run the defrag tool. Beating the crap out of the drive for 48 hours may not be worth it! With media, the data is typically read back at a slow rate compared to the disk speed, so it will easily keep up with your streaming even with some head movement.

 

Also, when moving to XFS I did some digging into how it works. We tend to think of a drive as having a single file allocation table that maintains the file linkages, but my understanding is that XFS is broken up, internally, into multiple allocation groups, each maintaining its own allocation metadata. This tends to keep files within a band of cylinders, and minimizes the bad side effects of the heads moving wildly back and forth over the disk surface to read a badly fragmented file.
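
You can actually see this structure on any mounted XFS disk; agcount is the number of allocation groups and agsize their size in blocks:

xfs_info /mnt/disk1
# look for agcount= and agsize= in the meta-data line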

 

All in all, I would not be overly concerned with disk fragmentation in an XFS array. And if you are worried about it, next time you want to upsize some disks, use my method. Feel free to ask and I can give more details.

Link to comment

Thanks. I have played with the idea of taking everything off, formatting the drive, and putting everything back on... but I KNOW I would mess something up lol :/

 

I take files off and put files on my array all the time. It's not all just static data. I think the idea of running defrag for a couple hours every 1, 2, or 3 days is pretty cool.

 

No idea why only one of my disks is so fragmented.

 

Soooooooooooo.... If I just run

xfs_fsr -v

will it just run for 2 hours on all my drives, skipping my SSD which is btrfs?

 

EDIT: guess so .. https://i.imgur.com/GidJsjN.jpg
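
That matches the xfs_fsr man page: with no arguments it works through every mounted XFS filesystem listed in /etc/mtab and stops after 7200 seconds (two hours) by default, skipping non-XFS mounts such as a btrfs SSD. The -t flag changes the time limit:

xfs_fsr -v             # all mounted XFS filesystems, default 2-hour limit
xfs_fsr -v -t 3600     # same, but stop after 1 hour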

Edited by superderpbro
Link to comment

Am I at least on the right track to get this to run every few days for a couple hours?

 

[screenshot of the attempted crontab entry]

 

Please be kind... I have almost no idea what I am doing hehehe :/:D

 

EDIT: 0 0 * * 1,3,5 xfs_fsr -v && curl -sm 30 k.wdt.io/<[email protected]>/<xfs_fsr>?c=0_0_*_*_1,3,5 is what I meant. Probably still very wrong haha. Not sure if the < > stay around the email and name or not... :/

Edited by superderpbro
Link to comment
5 hours ago, superderpbro said:

Am I at least on the right track to get this to run every few days for a couple hours?

 

You're on the right track with cron, but I don't understand what curl or k.wdt.io has to do with the xfs_fsr command. The easiest way to do cron jobs on unRAID is with the "CA User Scripts" plugin.

  • Like 1
Link to comment
7 hours ago, superderpbro said:

Am I at least on the right track to get this to run every few days for a couple hours?

 

EDIT: 0 0 * * 1,3,5 xfs_fsr -v && curl -sm 30 k.wdt.io/<[email protected]>/<xfs_fsr>?c=0_0_*_*_1,3,5 is what I meant. Probably still very wrong haha. Not sure if the < > stay around the email and name or not... :/

Cron format:

# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name command to be executed

For 0 0 * * 1,3,5 it would run at midnight every Monday, Wednesday, and Friday.
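
So a complete root crontab line, with a 2-hour cap and a log file, would look something like this (hypothetical paths; note too that hand-edited crontabs don't survive a reboot on unRAID, which is another argument for the plugin route):

# midnight Mon/Wed/Fri, 2-hour limit, append output to a log
0 0 * * 1,3,5 /usr/sbin/xfs_fsr -v -t 7200 >> /var/log/xfs_fsr.log 2>&1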

https://crontab.guru is great for figuring out cron schedules. But, as @ljm42 said, you might want to use the CA User Scripts plugin instead.

Link to comment
11 hours ago, ljm42 said:

 

You're on the right track with cron, but I don't understand what curl or k.wdt.io has to do with the xfs_fsr command. The easiest way to do cron jobs on unRAID is with the "CA User Scripts" plugin.

 

That's from https://crontab.guru: "Cronjobs can fail! Monitor your important cronjob by pasting the following snippet at the end of the crontab entry. Make sure to replace the <placeholders> with your email address and some name for your cronjob."

 && curl -sm 30 k.wdt.io/<email-address>/<cronjob-name>?c=5_4_*_*_*

Should your cron job fail or not even start, you will receive an alert email.
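
As I read it, the angle brackets get replaced along with the placeholders. A hypothetical filled-in version (example address and job name only):

0 0 * * 1,3,5 xfs_fsr -v && curl -sm 30 k.wdt.io/admin@example.com/xfs_fsr?c=0_0_*_*_1,3,5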

 

9 hours ago, gilahacker said:

Cron format:

# .---------------- minute (0 - 59)
# |  .------------- hour (0 - 23)
# |  |  .---------- day of month (1 - 31)
# |  |  |  .------- month (1 - 12) OR jan,feb,mar,apr ...
# |  |  |  |  .---- day of week (0 - 6) (Sunday=0 or 7) OR sun,mon,tue,wed,thu,fri,sat
# |  |  |  |  |
# *  *  *  *  * user-name command to be executed

For 0 0 * * 1,3,5 it would run at midnight every Monday, Wednesday, and Friday.

https://crontab.guru is great for figuring out cron schedules. But, as @ljm42 said, you might want to use the CA User Scripts plugin instead.

 

Yup that was the goal.

 

Thanks guys, I will check out User Scripts.

 

EDIT: Ran it for only 6 hours.

 

[screenshots of the fragmentation results]

 

:)

Edited by superderpbro
Link to comment
  • 10 months later...

I ran xfs_fsr -v /dev/md6 out of curiosity for a bit and then quit out using Ctrl-C, and I've now got a /.fsr directory on that drive.

 

Do I have to do a full defrag to remove it?

 

Edit: Decided to restart it and let it run to completion (will probably take a few days) to see if the /.fsr directory goes away at the end.

 

Edited by DZMM
Link to comment
  • 4 weeks later...
  • 1 month later...

Would my filesystem get fragmented if I only copy files to my array over the network with file managers like Windows Explorer or Nautilus?
I mean, if I want to download anything (HTTP or torrent), I download it first to an unassigned downloads drive and then move it to the array when the download completes.

Edited by sadeq
Link to comment
