gustovier

Members
  • Posts: 49
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

gustovier's Achievements

Rookie (2/14)

Reputation: 0

  1. Yup, was thinking the same thing, and that worked. Did a power cycle and I can see my data once again. I also went ahead and disconnected all the other drives completely but left my cache SSD hooked up to the onboard SATA controller, hoping that the BIOS would pick it for creating the HPA, but upon boot-up it did not. So now I'm a little worried the BIOS will pick another data drive, or worse, the parity drive, to install the HPA on... I'm about to try plugging all my other drives back in, and if everything is good, the only thing left should be to reset the array config and let parity rebuild. I'm going to need to get a new motherboard... I've had this box for about 10+ years now and never ran into this problem.
  2. Sorry. I understood. The drive that I was able to remove the HPA from is not using one of the onboard SATA controllers (this MB has 2 on board). The other drive, where I keep getting the SG_IO error, was still using the onboard SATA controller.
  3. No, I took it off the onboard SATA and put it onto my SATA add-on card. I still can't figure out how to fix the other drive that still has the HPA on it. The hdparm command is failing as shown in the previous post. Does anyone have some guidance on how to resolve it? My research has not turned up much.
  4. Yeah, I saw that post. I have a GA-EP35-DS3R motherboard, and from what I can tell I will need to actually downgrade the BIOS. Of course Gigabyte ships the BIOS in some .exe file (assuming it's self-extracting), and of course I'm on a Mac, so I've got to figure out a way to extract the BIOS. I was able to remove the HPA from one of the disks and I can now see all my data. On the other drive I'm getting the following error (apparently sd<letter> assignments can change, as the impacted drive went from sdb to sdg):

         root@Tower:~# hdparm -N /dev/sdg
         /dev/sdg:
          max sectors   = 7814035055/7814037168, HPA is enabled
         root@Tower:~# hdparm -N p7814037168 /dev/sdg
         /dev/sdg:
          setting max visible sectors to 7814037168 (permanent)
         SG_IO: bad/missing sense data, sb[]:  70 00 05 00 00 00 00 0a 10 51 40 01 21 00 00 00 a0 af 00 00 00 00 00 00 00 00 00 00 00 00 00 00
          max sectors   = 7814035055/7814037168, HPA is enabled
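For context on the SG_IO failure above: the ATA SET MAX ADDRESS request behind "hdparm -N p..." is, on many drives, only honored once per power-on, so if the BIOS (or an earlier attempt) already issued it during that session, later attempts are rejected until the machine is fully powered off. A rough sketch of the retry after a cold power cycle; the device name and sector count are simply the ones from the output above and will differ per drive:

     # after a full power-off/power-on, not just a warm reboot
     hdparm -N /dev/sdg                # confirm current/native max sectors and HPA state
     hdparm -N p7814037168 /dev/sdg    # restore the native max; 'p' makes the change permanent
     hdparm -N /dev/sdg                # should now report "HPA is disabled"

This lines up with post 1 above: the drive that came back did so after a power cycle rather than after any different command.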
  5. To add, I ran these commands, which is really making me think the BIOS has caused this problem, as noted in the message above. As you can see, disks sdb and sdc have HPA enabled, and these are the 2 disks with the problems:

         /dev/sdb:
          max sectors   = 7814035055/7814037168, HPA is enabled
         root@Tower:/dev# hdparm -N /dev/sdc
         /dev/sdc:
          max sectors   = 15628051055/15628053168, HPA is enabled
         root@Tower:/dev# hdparm -N /dev/sdd
         /dev/sdd:
          max sectors   = 19532873728/19532873728, HPA is disabled
         root@Tower:/dev# hdparm -N /dev/sde
         /dev/sde:
          max sectors   = 19532873728/19532873728, HPA is disabled
         root@Tower:/dev# hdparm -N /dev/sdf
         /dev/sdf:
          max sectors   = 937703088/937703088, HPA is disabled
         root@Tower:/dev# hdparm -N /dev/sdg
         /dev/sdg:
          max sectors   = 11721045168/11721045168, HPA is disabled
         root@Tower:/dev# hdparm -N /dev/sdh
         /dev/sdh:
          max sectors   = 19532873728/19532873728, HPA is disabled
         root@Tower:/dev# hdparm -N /dev/sdi
         /dev/sdi:
          max sectors   = 11721045168/11721045168, HPA is disabled
         root@Tower:/dev# hdparm -N /dev/sdj
         /dev/sdj:
          max sectors   = 19532873728/19532873728, HPA is disabled
         root@Tower:/dev# hdparm -N /dev/sdk
         /dev/sdk:
          max sectors   = 19532873728/19532873728, HPA is disabled
         root@Tower:/dev#
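Roughly the same survey can be scripted so the HPA state of every drive is captured in one pass. A minimal sketch; the /dev/sd? letters are only examples and can move between boots, so match each drive by serial number before acting on anything:

     #!/bin/bash
     # report the HPA state of every SATA disk; read-only, changes nothing
     for dev in /dev/sd?; do
         echo "== $dev =="
         hdparm -I "$dev" | grep -i 'serial number'   # identify the physical drive
         hdparm -N "$dev" | grep -i 'max sectors'     # current/native sectors and HPA state
     done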
  6. After some sleuthing... something is telling me I'm being hit by this bug on the rather old Gigabyte motherboard I'm using, which is incorrectly resetting the drive size on the disks: http://www.users.on.net/~fzabkar/HDD/HDD_Capacity_FAQ.html
  7. How would I repair the GPT so the partition shows up again?
  8. The problem is that there are no partitions. It's like they just disappeared.
  9. All, I need some big-time help. I was trying to repair a drive, replaced a few SATA cables, and did a few reboots, and all of a sudden one of my drives (/dev/sdb) showed up as "wrong" although I didn't do anything to it. I then unplugged the data cable to that drive and rebooted. Following that, another drive (/dev/sdc) showed up as "wrong". I then plugged the /dev/sdb SATA cable back in, and both drives continued to show as "wrong". At this point I figured something was just up with the array config, so I decided to do a "New Configuration", preserved all assignments, and restarted the array. Both drives were not seen by the array and I can't even mount them with Unassigned Devices (it only gives a Format option). I tried doing an xfs_repair, and that went nowhere:

         root@Tower:/dev# xfs_repair -v /dev/md8
         Phase 1 - find and verify superblock...
         xfs_repair: error - read only 0 of 512 bytes

     Also tried doing fdisk -l:

         root@Tower:/dev# fdisk -l /dev/sdb
         Disk /dev/sdb: 3.65 TiB, 4000785948160 bytes, 7814035055 sectors
         Disk model: WDC WD40EZRZ-00G
         Units: sectors of 1 * 512 = 512 bytes
         Sector size (logical/physical): 512 bytes / 4096 bytes
         I/O size (minimum/optimal): 4096 bytes / 4096 bytes
         Disklabel type: dos
         Disk identifier: 0x00000000
         Device     Boot Start        End    Sectors Size Id Type
         /dev/sdb1           1 4294967295 4294967295   2T ee GPT

     I also tried running gdisk... see below. But at this point I've decided not to do anything more until advised by the experts here.

         root@Tower:/dev# gdisk /dev/sdb
         GPT fdisk (gdisk) version 1.0.4
         Warning! Disk size is smaller than the main header indicates! Loading
         secondary header from the last sector of the disk! You should use 'v' to
         verify disk integrity, and perhaps options on the experts' menu to repair
         the disk.
         Caution: invalid backup GPT header, but valid main header; regenerating
         backup header from main header.
         Warning! One or more CRCs don't match. You should repair the disk!
         Main header: OK
         Backup header: ERROR
         Main partition table: OK
         Backup partition table: ERROR
         Partition table scan:
           MBR: protective
           BSD: not present
           APM: not present
           GPT: damaged
         ****************************************************************************
         Caution: Found protective or hybrid MBR and corrupt GPT. Using GPT, but disk
         verification and recovery are STRONGLY recommended.
         ****************************************************************************

     tower-diagnostics-20210926-2047.zip
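The gdisk warnings above ("Disk size is smaller than the main header indicates") are what a GPT disk looks like while an HPA is hiding its last sectors: the backup GPT header lives at the true end of the disk, which is currently out of reach, so only the backup structures read as damaged. A rough sketch of the repair order, assuming nothing is written while the HPA is still in place (the device name is taken from the post above):

     # 1) remove the HPA first so the drive exposes its full native capacity (see the hdparm posts)
     # 2) then rebuild the backup GPT structures at the real end of the disk:
     sgdisk -e /dev/sdb    # relocate the backup header/table to the end of the disk
     sgdisk -v /dev/sdb    # verify; in interactive gdisk this is x (expert), e (relocate), v, then w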
  10. Ok, to make matters worse, my power went out a few minutes ago, causing the machine to go down not so gracefully. So it looks like I'll be starting this process all over again now.
  11. All, I just recently converted all my disks from ReiserFS to XFS, and afterwards I figured it was a good idea to do a parity re-sync (I just use 1 parity drive). Which I believe was a good idea, because there are currently 412K sync errors that have been corrected and it's only 30% done. The speed of the parity sync also seems slow. The sync has been running for about 4 days 14 hrs so far, and its initial speed was around 1 MB/s. At some point the speed rose to about ~66 MB/s (all 8 of my disks including parity are reading at about that same rate now), and now there's only about 1 day left, so I might as well let it finish. Is this normal or are there some issues somewhere? tower-diagnostics-20200526-0845.zip
  12. MC as in Midnight Commander? Or just use the mv command to move everything from the /disk1 subdirectory to the root of the disk? This still won't take up a lot of time due to parity?
  13. All, I'm following the guide HERE to convert from ReiserFS to XFS. I just finished the entire procedure to replace my /mnt/disk1, with my swap drive being /mnt/disk8. Once I had the array back up, I noticed that my new XFS /mnt/disk1 (which was previously disk8) had all the data copied into a sub-directory named "/disk1" instead of at the root level of the disk. After I noticed this, I immediately stopped the array and reverted things back to the original configuration. I didn't start any dockers or have any clients that would have written to the array, so I'm hoping my parity drive is still valid. I ran the rsync command below, so I'm not sure why rsync did the copy this way. Here's what my XFS swap disk8 looks like, with all the data from disk1 in the /disk1 sub-directory. And here's what my actual ReiserFS disk1 contents look like. Now that I've written this out, is it because I didn't put the trailing "/" on each directory in the rsync command? Should I have done "rsync -avPX /mnt/disk1/ /mnt/disk8/" instead of "rsync -avPX /mnt/disk1 /mnt/disk8"? Is the best thing to do to erase everything on /mnt/disk10 and start the rsync all over again (it took about 2 days), or take another path?
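On the trailing-slash question: yes, that is standard rsync behavior. Without a trailing slash rsync copies the source directory itself into the destination; with one, it copies only the directory's contents. And since the data already landed intact under /mnt/disk8/disk1, an alternative to redoing the two-day copy is to move it up one level on that same disk, which is a rename within one filesystem and therefore fast, with only a small amount of parity activity for the metadata updates. A hedged sketch, with the paths taken from the post above:

     # the two forms from the post:
     rsync -avPX /mnt/disk1  /mnt/disk8    # copies the directory itself  -> /mnt/disk8/disk1/...
     rsync -avPX /mnt/disk1/ /mnt/disk8/   # copies only its contents     -> /mnt/disk8/...

     # alternative to re-copying: shift the already-copied data up one level on the same disk
     mv /mnt/disk8/disk1/* /mnt/disk8/     # note: a plain '*' skips hidden (dot) files at the top level
     rmdir /mnt/disk8/disk1                # only succeeds once the directory is empty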
  14. Also, to add: when I try assigning my 10TB drive back in for disk7, Unraid thinks it's a completely new disk instead of the original disk. Again, I'm afraid to start the array so I haven't done anything yet, but see below...