"Empty" disk using 56GB



Just installed a new WD 8TB Red WD80EFAX HDD in my system. Ran a preclear (zeroing) with the preclear plugin.

Stopped the array, added the new disk to the array, started the array, formatted the new disk (as xfs).

 

The overview now shows that 56GB of the 8TB of that new disk is used.

Investigating the disk in the terminal shows it contains 1 folder with 16 small files (between 1 and 250KB in size).

 

I know there is always some OS overhead, sure. But 56GB? What could be going on here?

  • 1 month later...
On 7/30/2020 at 1:04 PM, JorgeB said:

 

It's normal for newer xfs file systems; they use more space for reflink support and such.

Is it possible to format the old way with reflink=0 on Unraid 6.8.3?

I use my NVMe cache pool with BTRFS for VMs and docker.

My array is formatted with XFS, mostly for media files, and there I don't see much benefit from the reflink feature.

10 hours ago, Forusim said:

Which command is Unraid using for formatting before adding to the array?

Format is done after the device is part of the array, so parity can be updated.

 

Default Unraid xfs format is:

mkfs.xfs -m crc=1,finobt=1 -f /dev/mdX

where X is the disk number. I haven't used xfs in a long time, so I'm not current on the options, but crc=0 would probably get rid of the extra space; google "mkfs.xfs man page" for the options.
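For illustration, here's a hedged sketch of what the old-style format command could look like - per the mkfs.xfs man page, reflink=0 is the option that disables the reflink feature directly (the disk number 7 below is just a placeholder):

```shell
# Sketch only: builds the old-style mkfs.xfs command for a hypothetical disk 7.
# reflink=0 disables the reflink feature that accounts for the extra used space.
DISK=7  # placeholder -- substitute your actual array disk number
CMD="mkfs.xfs -m crc=1,finobt=1,reflink=0 -f /dev/md${DISK}"
echo "$CMD"  # printed for review; running it destroys all data on that disk
```

Double-check the disk number against the Main page before actually running the echoed command.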

 

Format will likely not work on a mounted filesystem; you'd need to manually unmount it first, so it's probably easier for you to do this:

 

-stop array

-click on the disk you want to format and change it to a different filesystem

-start array, that disk won't mount

-type the correct mkfs command for the disk on the console (all data on that disk will be deleted)

-stop array, change its fs back to xfs/auto

-start array, if you used the correct format options it will now use less space.
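After the final array start, a quick sanity check along these lines should confirm the result - a sketch, assuming xfsprogs is available (as it should be on Unraid) and using /mnt/disk3 as an example mount point:

```shell
# Verify the disk came back with the expected format (example mount point).
MNT=/mnt/disk3  # placeholder -- use your actual disk's mount point
if command -v xfs_info >/dev/null 2>&1 && [ -d "$MNT" ]; then
    xfs_info "$MNT" | grep -E 'crc|reflink'  # expect reflink=0 after an old-style format
    df -h "$MNT"                             # used space should be down to a few GB
else
    echo "run this on the Unraid console"
fi
```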

 

 

  • 1 year later...
On 9/9/2020 at 4:42 PM, JorgeB said:

mkfs.xfs -m crc=1,finobt=1 -f /dev/mdX


Sorry to dig up an old post, but does "mkfs.xfs -m crc=1,finobt=1 -f /dev/mdX" still work?

I used it once before with no problems, but when I tried it again with the command mkfs.xfs -m crc=1,finobt=1 -f /dev/mdg (g being the disk letter), it came back with the error "Error accessing specified device /dev/mdg".

Or should I be using the actual drive number, as in /dev/md7? It's been a long time since I used this command, so I can't remember.

 

Regards


That worked perfectly. Thanks very much.

When I first tried it, I thought I may have gotten the command wrong, so I ran mkfs.xfs -m crc=1,finobt=1 -f /dev/sdg, as sdg is at the end of the description on that drive.

It did its thing OK and didn't give me any errors. Obviously it didn't work as intended, but what damage, if any, would I have done by using sdg instead of the correct command?

 

Regards

  • 1 month later...
On 7/30/2020 at 7:04 AM, JorgeB said:

It's normal for newer xfs file systems; they use more space for reflink support and such.

 

Just wanted to confirm that this explains some pretty huge differences in used space on empty array disks.

I emptied 3 identical 10TB disks on my array, and the 2 older ones have ~10GB used space, while the newest one has ~70GB.

I assume the 3rd disk was formatted with the newer version of xfs?

 

Same with the newest 18TB drives, I guess? They all have 126GB used space freshly formatted, instead of ~18GB...

I would very much appreciate it if anyone could confirm that I'm understanding this correctly.


How can I find out which array disks are actually formatted the "old" way and which the "new" way? I mean, can I get that info without emptying the disks to see how much space they use when empty?

 

Also, what would be the easiest and fastest way to re-format the array disks using the "new" xfs format?

I constantly use hardlinks - I probably have tens of thousands of them on my array - and I'm not 100% sure, but it seems like I'm getting way faster results when searching for hardlinks on the "newer" disks than on the older ones. Or am I just imagining this?

8 hours ago, shEiD said:

How can I find out which array disks are actually formatted the "old" and "new" way?

Check the syslog during disk mounting:

 


Aug 11 19:39:28 Kraken root: meta-data=/dev/md17              isize=512    agcount=48, agsize=61047660 blks
Aug 11 19:39:28 Kraken root:          =                       sectsz=512   attr=2, projid32bit=1
Aug 11 19:39:28 Kraken root:          =                       crc=1        finobt=1, sparse=1, rmapbt=0
Aug 11 19:39:28 Kraken root:          =                       reflink=1    bigtime=0 inobtcount=0
Aug 11 19:39:28 Kraken root: data     =                       bsize=4096   blocks=2929721331, imaxpct=25
Aug 11 19:39:28 Kraken root:          =                       sunit=0      swidth=0 blks
Aug 11 19:39:28 Kraken root: naming   =version 2              bsize=4096   ascii-ci=0, ftype=1
Aug 11 19:39:28 Kraken root: log      =internal log           bsize=4096   blocks=119233, version=2
Aug 11 19:39:28 Kraken root:          =                       sectsz=512   sunit=0 blks, lazy-count=1
Aug 11 19:39:28 Kraken root: realtime =none                   extsz=4096   blocks=0, rtextents=0

 

reflink=1 means the new format, reflink=0 the old format.
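The flag can also be pulled out of a captured log line with grep; the sample below reuses the reflink line from the syslog quote above. (On a live system, running xfs_info on the mounted disk should print the same fields without digging through the syslog, at least on recent xfsprogs.)

```shell
# Extract the reflink flag from a captured syslog line (sample data inlined).
LOG='Aug 11 19:39:28 Kraken root:          =                       reflink=1    bigtime=0 inobtcount=0'
echo "$LOG" | grep -o 'reflink=[01]'   # prints: reflink=1
```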

 

8 hours ago, shEiD said:

Also, what would be the easiest and fastest way to re-format the array disks using the "new" xfs format?

There's only one way: empty the disk, reformat it, copy the data back.

13 hours ago, shEiD said:

it seems like I'm getting way faster results when searching for hardlinks on the "newer" disks than on the older ones. Or am I just imagining this?

Speculation here, but it's probably a combination of the new filesystem and the older disks having more fragmentation.

 

Doing a fresh copy to a newly formatted disk solves both issues.
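For anyone who wants to check whether fragmentation is actually a factor, xfs_db has a read-only frag command that reports a fragmentation factor. A sketch, with the device path as an example only:

```shell
# Report the fragmentation factor of an XFS filesystem (read-only inspection).
DEV=/dev/md7  # placeholder -- the md device of the array disk to inspect
if command -v xfs_db >/dev/null 2>&1 && [ -e "$DEV" ]; then
    xfs_db -r -c frag "$DEV"  # e.g. "actual ..., ideal ..., fragmentation factor ...%"
else
    echo "run on the Unraid console"
fi
```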

On 8/12/2022 at 3:50 AM, JorgeB said:

There's only one way, empty the disk, reformat, copy the data back.

 

I am using unRAID v6.10.3

 

My array is data disks only - NO PARITY.

 

What would be the correct procedure to re-format array disks?

I would love to be able to do it fast and simple:

- stop the array

- remove the disk from an array slot (it appears in the Unassigned Devices)

- delete the partition with UD

- put the disk back in its array slot

- start the array and hopefully the disk gets formatted 🤞 without zeroing out?

 

Or do I need to do it the long way round:

- new config - remove disk

- pre-clear

- new config - add disk back

- auto-format on array start (the usual Unraid behavior when adding a new disk to the array)

 

1 minute ago, shEiD said:

What would be the correct procedure to re-format array disks?

Same whether you have parity or not.

1 minute ago, JorgeB said:

-stop array, click on the empty disk, change fs to reiserfs

-start array, format disk

-stop array, click on the disk, change fs back to xfs

-start array, format disk

 

38 minutes ago, JorgeB said:

-stop array, click on the empty disk, change fs to reiserfs

-start array, format disk

-stop array, click on the disk, change fs back to xfs

-start array, format disk

 

Nice and simple - love it. Never used reiserfs before 😀

 

37 minutes ago, trurl said:

Same whether you have parity or not.

 

I thought I'd mention that I'm not using parity because, with parity, you would need to zero out the disk?

