John_M Posted December 4, 2015

I have two identical hard disks (they even have almost adjacent serial numbers) that show up as having different capacities, though the same raw size, within unRAID - see the attached screen shot. I've searched the boards for mention of this phenomenon and wherever it's mentioned the cause seems to be a Host Protected Area reserving space on the disk. However, that is not the case here:

```
root@Lapulapu:~# hdparm -N /dev/sdg

/dev/sdg:
 max sectors   = 1953525168/1953525168, HPA is disabled
root@Lapulapu:~# hdparm -N /dev/sdi

/dev/sdi:
 max sectors   = 1953525168/1953525168, HPA is disabled
root@Lapulapu:~#
```

The only difference I can think of is that Disk 1 was created when I first built the server (unRAID 6.0.1), while Disk 2 is the result of replacing a smaller (750 GB) disk, letting it rebuild from parity and letting the file system expand to fill it (unRAID 6.1.4). In a way this is of academic interest only, because I will be replacing both disks with much larger ones, but presumably disks of any size can be affected. So can anyone shed any light, please?
Edit: Here's the partition information for the two drives (they are identical, so is it a GUI thing?):

```
root@Lapulapu:~# fdisk -l /dev/sdg

Disk /dev/sdg: 1000.2 GB, 1000204886016 bytes
1 heads, 63 sectors/track, 31008336 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdg1              64  1953525167   976762552   83  Linux
root@Lapulapu:~# fdisk -l /dev/sdi

Disk /dev/sdi: 1000.2 GB, 1000204886016 bytes
1 heads, 63 sectors/track, 31008336 cylinders, total 1953525168 sectors
Units = sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes
Disk identifier: 0x00000000

   Device Boot      Start         End      Blocks   Id  System
/dev/sdi1              64  1953525167   976762552   83  Linux
root@Lapulapu:~#
```

But df shows them to be different (compare the /dev/md1 line with the /dev/md2 line), so is it an XFS thing?:

```
root@Lapulapu:~# df
Filesystem      1K-blocks       Used  Available Use% Mounted on
tmpfs              131072       6216     124856   5% /var/log
/dev/sda1         7819332     116480    7702852   2% /boot
/dev/md1        976724396  733270436  243453960  76% /mnt/disk1
/dev/md2        976404852  499699936  476704916  52% /mnt/disk2
/dev/md3        976404852  523346908  453057944  54% /mnt/disk3
/dev/md4        732216852  451673488  280543364  62% /mnt/disk4
/dev/md5        732216852  244274776  487942076  34% /mnt/disk5
shfs           4393967804 2452265544 1941702260  56% /mnt/user
root@Lapulapu:~#
```
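A quick sanity check on the figures above: the fdisk geometry pins down exactly how big each partition is, and XFS's own tools can report how much of it the filesystem occupies. A minimal sketch using the numbers from this thread (the xfs commands need root and a mounted filesystem, so they are shown commented out; the mount points are the ones from the df listing):

```shell
# Partition size implied by the fdisk output (512-byte sectors):
start=64
end=1953525167
sectors=$(( end - start + 1 ))
echo "partition: $sectors sectors = $(( sectors * 512 )) bytes"
# -> partition: 1953525104 sectors = 1000204853248 bytes

# To ask XFS itself how big it thinks it is, and whether it could still
# grow into the partition (dry run), something like:
#   xfs_info /mnt/disk1
#   xfs_growfs -n /mnt/disk2
```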
garycase Posted December 4, 2015

Click on the disk in Main and see what it shows for partition format. I suspect one is 4k-aligned and the other is not (the one that was the result of the rebuild).
John_M Posted December 4, 2015

Interesting, Gary. Thank you, but no, that's not it. All my drives report as 4k-aligned. It's almost as though the process of expanding the file system doesn't quite make it fill the partition. Actually, it's a long way short: 327,213,056 bytes short. I wonder if anyone else has noticed this?
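The shortfall quoted here falls straight out of the earlier df listing: df reports sizes in 1K-blocks, so the difference between the md1 and md2 lines, times 1024, gives the figure in bytes:

```shell
disk1_kb=976724396   # /dev/md1 size in 1K-blocks, from df above
disk2_kb=976404852   # /dev/md2 size in 1K-blocks, from df above
short_kb=$(( disk1_kb - disk2_kb ))
echo "disk2 is $short_kb KiB = $(( short_kb * 1024 )) bytes short"
# -> disk2 is 319544 KiB = 327213056 bytes short
```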
garycase Posted December 4, 2015

Definitely interesting ... but, as you noted, of more academic interest than any real problem. Note that neither of the block counts corresponds to the 976,762,584 that would match the actual sector count. Not sure why that's the case ... likely due to reserved spares, but I'd actually expect those to be "outside" the reported counts. This may be file-system related ... is the newer drive XFS? If so, is it also the larger one? I HAVE noticed that if you change a drive from Reiser to XFS the free space tends to grow a bit. I hadn't looked into the details of that ... I had assumed it was just a more efficient allocation (less slack space) ... but it may be related to what you're seeing here.
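The 976,762,584 figure is the whole disk (1,953,525,168 sectors, halved to get 1K-blocks). The partitions start at sector 64, so neither filesystem could ever reach it; and attributing the remaining gap on md1 to XFS's internal log/metadata is an assumption on my part, not something confirmed in the thread:

```shell
whole_disk_kb=$(( 1953525168 / 2 ))          # all sectors, as 1K-blocks
part_kb=$(( (1953525167 - 64 + 1) / 2 ))     # partition starts at sector 64
echo "whole disk: $whole_disk_kb KiB, partition: $part_kb KiB"
# -> whole disk: 976762584 KiB, partition: 976762552 KiB
echo "df on md1 reports 976724396 KiB, $(( part_kb - 976724396 )) KiB less"
# -> 38156 KiB less, presumably XFS log/metadata rather than reserved spares
```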
John_M Posted December 5, 2015

Both were created XFS - I don't use ReiserFS at all, so I haven't done any conversions. The larger was formatted (starting with a pre-cleared disk) when I originally created the array. The other was first pre-cleared (as a handy test against early failures) then used to replace a smaller disk in the usual way. It must have something to do with the way the file system is grown. Maybe there's some minor bug that's gone unnoticed.
garycase Posted December 5, 2015

> ... The other was first pre-cleared (as a handy test against early failures) then used to replace a smaller disk in the usual way ...

... and the smaller disk was already XFS ??
John_M Posted December 5, 2015

Yes, indeed. The original 750 GB disk had always been XFS. It was formatted at the same time as the "larger" of the two 1 TB disks, when I originally created the array (1 TB parity + 1 TB data + 750 GB data + 750 GB data) using unRAID 6.0.1. It was a small array, using disks I had available, to prove the concept before committing myself. All were pre-cleared first to test them, then assigned, and the array was started and allowed to do its thing.
garycase Posted December 5, 2015

Just for grins, I'd try this on the disk you had rebuilt:

1. Copy all of the data off of it (to another PC or perhaps another UnRAID server).
2. Stop the array and change the file system on that disk (perhaps to RFS -- you're not going to leave it that way).
3. Start the array again and let it format the disk.
4. Stop the array again and change it back to XFS.
5. Start it yet again and let it do the format.

Then see if the size is the same as the other disk ... and, in any event, copy the data back to it from wherever you saved it.
John_M Posted December 5, 2015

Understood. I'll give it a go when I can dedicate some time to it, before I replace the disks with larger-capacity ones, hopefully next week. I'll report back.
garycase Posted December 5, 2015

Looking forward to the results. Clearly it's just an academic exercise, but it'd be nice to know if the rebuild was indeed the cause of the difference -- and it certainly seems like that's the case. Except for copying all the data off (which could take a good while, depending on how full it is), the actual reformats will take just a few minutes (as I'm sure you know).
John_M Posted December 18, 2015

Apologies for not getting back to this sooner, but I've been having problems* with this server and didn't want to change anything until I'd fixed it. I now believe it's stable again, so I returned to my experimenting and, yes Gary, forcing a re-format by switching to ReiserFS and then back to XFS does indeed restore the full capacity. So I have to conclude that there's something slightly faulty with the "replace a smaller disk with a larger one and grow the file system" function, which is surely something that most users will have used at some time or another.

*The four 1 TB Samsung F1 disks were all on the same "half" of an AOC-SASLP-MV8, which would drop the connection to one or more of them under heavy I/O load, requiring a reboot to restore the connection. The other "half" had four 2 TB Toshiba disks connected and they were completely unaffected. I could run a pre-clear on one of the Samsungs and it would complete, but if I started a pre-clear on a second then one of them would stop responding (it could be either). The SMART status of each disk is normal and they all pass the short and long self-tests. When I swapped the two forward breakout cables over, the problem stayed with the disks, eliminating the controller. Replacing the cable had no effect. Replacing the PSU and power cables had no effect. Upgrading to 6.1.6 had no effect. Replacing the drives with four brand-new 5 TB Toshibas seems to have fixed the problem (I already had four of them connected to the motherboard) and I was able to run four simultaneous pre-clears without a problem. But the Samsungs work fine individually or when connected directly to the motherboard SATA ports - which is how I managed to continue my capacity experiment. I think my best course of action will be to retire them now.
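If the grow step really is the culprit, the capacity the reformat recovered could in principle be recovered non-destructively by re-running the grow by hand. This is an untested suggestion, not something done in the thread; xfs_growfs with no size argument expands a mounted XFS filesystem to fill its partition:

```shell
# Needs root; mount point taken from the df listing in this thread.
#   xfs_growfs /mnt/disk2
# Afterwards, df should report the same size for md2 as for md1:
expected_kb=976724396   # /dev/md1's 1K-block count, the correctly sized fs
echo "after growing, expect /dev/md2 to report $expected_kb 1K-blocks"
```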
garycase Posted December 18, 2015

Agree with retiring the old 1TB disks ... but that's certainly an interesting issue. One of life's "little mysteries" that will likely never be solved [Trust me, I've had a lot of those over the years]. Thanks for the update on the size issue => I wonder if Limetech is aware of it. I'll send Tom a note with a pointer to this thread, just in case it's something he wants to look at.
John_M Posted December 19, 2015

Yes, indeed. Thanks for your interest, Gary.