unRAID Server Release 4.7 "final" Available


limetech


Thanks for the input, RobJ. I'll give it a try, although I did shut down the server, changed the power splitter, and pushed the data cables in to make sure everything was seated, then rebooted to find the errors gone (something I read in another post). It seems at this point it might have been my problem; I'm not sure. I'll keep an eye on it for a while and hope the errors are solved.

Thanks Again

Lou


New disks joining the system will be formatted correctly.

They will be "partitioned" correctly, but only if you set the MBR 4k-aligned option in the settings. 

The default for 4.7 is un-aligned, so before you let unRAID clear a new drive for you, set the MBR 4k-aligned option on the settings page.
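(A quick read-only way to confirm how a drive actually got partitioned is to check where partition 1 starts; a sketch, assuming the drive in question is /dev/sdb:)

# Read-only: list the partition table in sector units.  A partition 1
# start sector of 64 indicates a 4k-aligned MBR; 63 is the old
# unaligned layout.
fdisk -lu /dev/sdb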

 

Joe L.


Another drive showing the "replaced with a smaller one" -4 error here.  I followed the instructions in the first round of posts and it seems to have gotten worse.  I don't want to risk any data, so I'm asking for help.

 

1) Upgraded from 4.6.5 to 4.7 by copying memtest, bzimage, and bzroot to the flash share

2) Rebooted

3) Came up with issue 1 in the screenshot: a drive reported as replaced with a smaller one, off by a size of 4

4) Searched syslog for HPA

5) Found line

Feb  2 19:23:46 Tower kernel: ata2.00: HPA detected: current 586112591, native 586114704

Also found a line referencing ata7, but there was only one error on screen, so I assumed it was the cache drive and not a problem.

6) Ran these commands, with the following results:

root@Tower:/var/log# hdparm -N p3907029168 /dev/sdf

/dev/sdf:
setting max visible sectors to 3907029168 (permanent)
max sectors   = 2950727856/14715056(18446744073321613488?), HPA setting seems invalid (buggy kernel device driver?)
root@Tower:/var/log# hdparm -N /dev/sdf

/dev/sdf:
max sectors   = 2950727856/14715056(18446744073321613488?), HPA setting seems invalid (buggy kernel device driver?)
root@Tower:/var/log#

7) Rebooted

8) Error 2 in the screenshot now shows

 

Help??

Looks like my parity drive got way smaller somehow.


Sorry, forgot the screenshot.

 

1598screenshot.jpg

 

Also noticed that there are now three references to HPA in the syslog: ata2, ata7, and ata9.  I think I mistakenly used the ata7 or ata9 number.  That explains why it got worse.

 

Feb  2 19:23:46 Tower kernel: ata2.00: HPA detected: current 586112591, native 586114704
Feb  2 19:23:46 Tower kernel: ata9.00: HPA detected: current 2950727856, native 3907029168
Feb  2 19:23:46 Tower kernel: ata7.00: HPA detected: current 3907027055, native 3907029168


OK, I understand now that I applied the hdparm command to the wrong drive.  It should have been disk1 (sde) instead of disk2 (sdf).  What I don't understand is how doing that messed up my parity, disk0 (sdg).

Disk designations are not guaranteed to be the same from one boot to the next; it all depends on which piece of hardware initializes itself on the motherboard first, so the assignments may well have changed between boots.

 

Second, it looks like you typed the correct command, but it set the size smaller than you requested.  It might be that the hdparm command will not work with that specific drive; you might need to use SeaTools to set the HPA.
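(One way to avoid hitting the wrong drive in the first place is to pin down which /dev/sdX name currently belongs to which physical disk.  A minimal read-only sketch, assuming udev populates /dev/disk/by-id as usual:)

# Map persistent model+serial identifiers to the current /dev/sdX names:
ls -l /dev/disk/by-id/ | grep -v part
# Or confirm a single device's model and serial number directly:
hdparm -i /dev/sdf

Compare the serial number shown against the label on the physical drive before issuing any command that writes.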


OK, I understand now that I applied the hdparm command to the wrong drive.  It should have been disk1 (sde) instead of disk2 (sdf).  What I don't understand is how doing that messed up my parity, disk0 (sdg).

 

You must have issued a command directly on your parity disk, thus setting the parity drive smaller than your data drives.

 

As Joe L already indicated, disk designations are not guaranteed to be the same between reboots and even less guaranteed between unRAID version upgrades.


It does look like the right size for the command, right?  If I read this line correctly

Feb  2 19:23:46 Tower kernel: ata9.00: 2950727856 sectors, multi 16: LBA48 NCQ (depth 0/32)

then I don't see why it would be set to 29...  It should be set to 39...

My command should have had no effect since it was run on a drive with that size already.

 

Just so I understand clearly and don't go down the wrong path: I should shut down my unRAID server, disconnect my parity drive, set it up as an external drive on another computer, and run SeaTools to set the HPA.  Is that correct?


It does look like the right size for the command, right?  If I read this line correctly

Feb  2 19:23:46 Tower kernel: ata9.00: 2950727856 sectors, multi 16: LBA48 NCQ (depth 0/32)

then I don't see why it would be set to 29...  It should be set to 39...

My command should have had no effect since it was run on a drive with that size already.

 

Just so I understand clearly and don't go down the wrong path: I should shut down my unRAID server, disconnect my parity drive, set it up as an external drive on another computer, and run SeaTools to set the HPA.  Is that correct?

That should work.

The "hdparm -N" command is a dangerous command, and if used wrongly can cause data loss.  I'm going to recommend that we always instruct users to use the -N parameter without the sector count first (eg. hdparm -N /dev/sda), so that (1) the user can verify first that they are working on the correct drive, and (2) can verify the correct native sector count to use.

 

Stucco needs to be absolutely sure that the "save BIOS to disk" feature is disabled for this machine, or it may just create another HPA.

 

And Stucco, I think we all thought that most users would realize that the use of sdf and ata7 were specific to peter_sm's machine, and would be different for every other machine.  Sorry.  We need to make that point clear to new users.

 

I would leave the Cache drive alone, even though it has an HPA.  Just to be safe though, since you probably don't know how long it has had it, I would run a file system check on it.  With the array stopped, at a console type the following command and answer Yes when asked (note that sda1 is sda plus a one, not an el):

reiserfsck --check /dev/sda1
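(--check is read-only.  If it does report correctable problems, the usual follow-up, per general reiserfsck practice rather than anything specific to this drive, is a second pass with:)

reiserfsck --fix-fixable /dev/sda1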

 

You should however remove the HPA from Disk 1, and correct the Parity drive, by using the hdparm commands below.  Then run a parity check, which will probably report errors, but will also fix them.  Afterward, run Check Disk File systems on Disk 1, but NOT the Parity drive.

 

 (using drive IDs sde and sdg from the attached syslog, which you will need to verify)
hdparm -N /dev/sdg
 (make sure the numbers match the parity drive; if they don't, abort this.  If they match current=2950727856 and native=3907029168, then continue)
hdparm -N p3907029168 /dev/sdg
 (then verify the change)
hdparm -N /dev/sdg

 (then repeat for Disk 1)
hdparm -N /dev/sde
 (make sure the numbers match Disk 1; if they don't, abort this.  If they match current=3907027055 and native=3907029168, then continue)
hdparm -N p3907029168 /dev/sde
 (then verify the change)
hdparm -N /dev/sde
 (if correct, I *think* you start the array and let the Reiser file system rebuild itself on Disk 1, then proceed with the instructions above)

 

Update: I see you are considering removing the parity drive.  Joe, is there anything wrong with the procedure above?

 

Update 2:  Ahh, I see why you are going to try SeaTools instead.


After re-reading I see it was... dumb of me.  Sorry, I'm less familiar with Linux than most, I suppose.  Thanks for the help!

 

root@Tower:/var/log# reiserfsck --check /dev/sda1
reiserfsck 3.6.21 (2009 www.namesys.com)

*************************************************************
** If you are using the latest reiserfsprogs and  it fails **
** please  email bug reports to [email protected], **
** providing  as  much  information  as  possible --  your **
** hardware,  kernel,  patches,  settings,  all reiserfsck **
** messages  (including version),  the reiserfsck logfile, **
** check  the  syslog file  for  any  related information. **
** If you would like advice on using this program, support **
** is available  for $25 at  www.namesys.com/support.html. **
*************************************************************

Will read-only check consistency of the filesystem on /dev/sda1
Will put log info to 'stdout'

Do you want to run this program?[N/Yes] (note need to type Yes if you do):Yes
###########
reiserfsck --check started at Wed Feb  2 21:45:07 2011
###########
Replaying journal: Done.
Reiserfs journal '/dev/sda1' in blocks [18..8211]: 0 transactions replayed
Checking internal tree.. finished
Comparing bitmaps..finished
Checking Semantic tree:
finished
No corruptions found
There are on the filesystem:
        Leaves 323
        Internal nodes 4
        Directories 116
        Other files 1225
        Data block pointers 11581 (0 of them are zero)
        Safe links 0
###########
reiserfsck finished at Wed Feb  2 21:45:23 2011
###########
root@Tower:/var/log#

root@Tower:/var/log# hdparm -N /dev/sdg

/dev/sdg:
max sectors   = 2950727856/14715056(18446744073321613488?), HPA setting seems invalid (buggy kernel device driver?)
root@Tower:/var/log# hdparm -N p3907029168 /dev/sdg

/dev/sdg:
setting max visible sectors to 3907029168 (permanent)
SET_MAX_ADDRESS failed: Input/output error
max sectors   = 2950727856/14715056(18446744073321613488?), HPA setting seems invalid (buggy kernel device driver?)
root@Tower:/var/log#

 

The numbers matched, so I ran it on sdg, but then stopped because of the failure.  No reboots.


I can't seem to find the HPA feature in the BIOS to turn off.  I have scoured it for "copy BIOS to disk", "hidden protected area", or "HPA".  Anyone have a clue?  I have an Award 6.00 F11 BIOS on a Gigabyte GA-965P-DS3 Rev 1.3.

There was a series of Gigabyte motherboards where it could not be disabled.  Most people end up replacing the motherboard or, if possible, upgrading the BIOS, as those boards are a ticking time bomb for any RAID system.

 

 


Hi, I just started a preclear on an EARS Green 4K drive without the jumper, using the preclear_disk.sh -A /dev/sdb command; it's at 13% and going.

Questions: Once it's completed and put into the array, do I have to change the setting in unRAID 4.7 to MBR: 4K-aligned?

And I have these errors in my syslog; are they normal when doing a preclear?  Thanks in advance for any feedback on this.

Lou

 

Feb  4 11:04:38 Unraid kernel: Pid: 6025, comm: hdparm Tainted: G        W  2.6.32.9-unRAID #8
Feb  4 11:04:38 Unraid kernel: Call Trace:
Feb  4 11:04:38 Unraid kernel:  [<c102449e>] warn_slowpath_common+0x60/0x77
Feb  4 11:04:38 Unraid kernel:  [<c10244c2>] warn_slowpath_null+0xd/0x10
Feb  4 11:04:38 Unraid kernel:  [<c11b624d>] ata_qc_issue+0x10b/0x308
Feb  4 11:04:38 Unraid kernel:  [<c11ba260>] ata_scsi_translate+0xd1/0xff
Feb  4 11:04:38 Unraid kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Feb  4 11:04:38 Unraid kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Feb  4 11:04:38 Unraid kernel:  [<c11baa40>] ata_sas_queuecmd+0x120/0x1d7
Feb  4 11:04:38 Unraid kernel:  [<c11bc6df>] ? ata_scsi_pass_thru+0x0/0x21d
Feb  4 11:04:38 Unraid kernel:  [<f842169a>] sas_queuecommand+0x65/0x20d [libsas]
Feb  4 11:04:38 Unraid kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Feb  4 11:04:38 Unraid kernel:  [<c11a82c0>] scsi_dispatch_cmd+0x147/0x181
Feb  4 11:04:38 Unraid kernel:  [<c11ace4d>] scsi_request_fn+0x351/0x376
Feb  4 11:04:38 Unraid kernel:  [<c1126798>] __blk_run_queue+0x78/0x10c
Feb  4 11:04:38 Unraid kernel:  [<c1124446>] elv_insert+0x67/0x153
Feb  4 11:04:38 Unraid kernel:  [<c11245b8>] __elv_add_request+0x86/0x8b
Feb  4 11:04:38 Unraid kernel:  [<c1129343>] blk_execute_rq_nowait+0x4f/0x73
Feb  4 11:04:38 Unraid kernel:  [<c11293dc>] blk_execute_rq+0x75/0x91
Feb  4 11:04:38 Unraid kernel:  [<c11292cc>] ? blk_end_sync_rq+0x0/0x28
Feb  4 11:04:38 Unraid kernel:  [<c112636f>] ? get_request+0x204/0x28d
Feb  4 11:04:38 Unraid kernel:  [<c11269d6>] ? get_request_wait+0x2b/0xd9
Feb  4 11:04:38 Unraid kernel:  [<c112c2bf>] sg_io+0x22d/0x30a
Feb  4 11:04:38 Unraid kernel:  [<c112c5a8>] scsi_cmd_ioctl+0x20c/0x3bc
Feb  4 11:04:38 Unraid kernel:  [<c104cd4f>] ? __alloc_pages_nodemask+0xdb/0x42f
Feb  4 11:04:38 Unraid kernel:  [<c11b3257>] sd_ioctl+0x6a/0x8c
Feb  4 11:04:38 Unraid kernel:  [<c112a420>] __blkdev_driver_ioctl+0x50/0x62
Feb  4 11:04:38 Unraid kernel:  [<c112ad1c>] blkdev_ioctl+0x8b0/0x8dc
Feb  4 11:04:38 Unraid kernel:  [<c1131e2d>] ? kobject_get+0x12/0x17
Feb  4 11:04:38 Unraid kernel:  [<c112b0f8>] ? get_disk+0x4a/0x61
Feb  4 11:04:38 Unraid kernel:  [<c101b028>] ? kmap_atomic+0x14/0x16
Feb  4 11:04:38 Unraid kernel:  [<c11334a5>] ? radix_tree_lookup_slot+0xd/0xf
Feb  4 11:04:38 Unraid kernel:  [<c104a179>] ? filemap_fault+0xb8/0x305
Feb  4 11:04:38 Unraid kernel:  [<c1048c43>] ? unlock_page+0x18/0x1b
Feb  4 11:04:38 Unraid kernel:  [<c1057c63>] ? __do_fault+0x3a7/0x3da
Feb  4 11:04:38 Unraid kernel:  [<c105985f>] ? handle_mm_fault+0x42d/0x8f1
Feb  4 11:04:38 Unraid kernel:  [<c108b6c6>] block_ioctl+0x2a/0x32
Feb  4 11:04:38 Unraid kernel:  [<c108b69c>] ? block_ioctl+0x0/0x32
Feb  4 11:04:38 Unraid kernel:  [<c10769d5>] vfs_ioctl+0x22/0x67
Feb  4 11:04:38 Unraid kernel:  [<c1076f33>] do_vfs_ioctl+0x478/0x4ac
Feb  4 11:04:38 Unraid kernel:  [<c105dcdd>] ? do_mmap_pgoff+0x232/0x294
Feb  4 11:04:38 Unraid kernel:  [<c1076f93>] sys_ioctl+0x2c/0x45
Feb  4 11:04:38 Unraid kernel:  [<c1002935>] syscall_call+0x7/0xb
Feb  4 11:04:38 Unraid kernel: ---[ end trace d30d5becdb4af75b ]---

 


Hi, I just started a preclear on an EARS Green 4K drive without the jumper, using the preclear_disk.sh -A /dev/sdb command; it's at 13% and going.

Questions: Once it's completed and put into the array, do I have to change the setting in unRAID 4.7 to MBR: 4K-aligned?

If it is properly precleared, the MBR setting is ignored and the preclear MBR is used.

If the preclear signature is not valid, and there is not already a valid MBR, the MBR-4k or MBR-unaligned setting is used.
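(If you want to confirm the signature before assigning the drive, the preclear script has a read-only verify mode; this assumes your copy of the script supports the -t flag:)

# Read-only: checks for a valid preclear signature without touching the data.
preclear_disk.sh -t /dev/sdb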

And I have these errors in my syslog; are they normal when doing a preclear?  Thanks in advance for any feedback on this.

Lou

Those errors are not normal.  Are they continuing?  They seem to be occurring when hdparm is trying to map some memory.  Are you running multiple preclears?  Or another process that may limit the memory available?

How much memory do you have?

What does the

free

command show?
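(For reference, a sketch of how to read it on the procps free of that era; the "-/+ buffers/cache" row is the one that matters:)

free -m
# The Mem: row's "free" counts only completely untouched RAM; the
# "-/+ buffers/cache" row adds back reclaimable buffers and page cache,
# so its "free" column is the better measure of what's actually available.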

 

Joe L.


Hi, I just started a preclear on an EARS Green 4K drive without the jumper, using the preclear_disk.sh -A /dev/sdb command; it's at 13% and going.

Questions: Once it's completed and put into the array, do I have to change the setting in unRAID 4.7 to MBR: 4K-aligned?

If it is properly precleared, the MBR setting is ignored and the preclear MBR is used.

If the preclear signature is not valid, and there is not already a valid MBR, the MBR-4k or MBR-unaligned setting is used.

And I have these errors in my syslog; are they normal when doing a preclear?  Thanks in advance for any feedback on this.

Lou

Those errors are not normal.  Are they continuing?  No

They seem to be occurring when hdparm is trying to map some memory. 

Are you running multiple preclears? No, just the one

Or another process that may limit the memory available? Maybe something in unMENU; I'll check

How much memory do you have? 4 GB

What does the

free

command show? In the unMENU memory info it shows 2844360 free

 

Joe L.

