Re: preclear_disk.sh - a new utility to burn-in and pre-clear disks for quick add



Version 1.9 is now attached to the first post in this thread. 

 

It fixes two issues.

One was a difference in how values are stored in 4.7 and earlier vs. 5.0.  (in 5.0+ the values are surrounded by quotes)

This prevented preclear from detecting the default partition alignment setting on the 5.0beta versions of unRAID.  It only showed itself if you did not specify "-a" or "-A" on the command line.

 

The other was that the "-l" option sometimes excluded an extra drive.  (it did not show up as a potential candidate for clearing)
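
For reference, a quick usage sketch (mine, not part of the original announcement): running the "-l" option by itself lists the disks preclear treats as potential candidates for clearing, so you can confirm the fix on your own system.

preclear_disk.sh -l     # list disks that are potential candidates for clearing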

 

Joe L.

 

I'm rebuilding my unRAID 4.7 server and preclearing all the drives, one by one. They are formatted for Windows, have no AF pin 7-8 jumper, and I'm using preclear version 1.8 with the -A option. Is there any need to preclear with version 1.9 instead of version 1.8? I just started the process last night, so I could update the preclear_disk.sh script and start over.

 

Dave


No need to start over.

 

The issue only occurred if you did not specify the "-a" or "-A" option and were on the latest 5.0 beta version of unRAID.  In that case the script did not recognize your "default" alignment setting and used MBR-unaligned regardless.

 

Neither condition applied to you, so you'll be fine.  (you are not on the beta release AND you specified the "-A" option.)

 

Joe L.


 

Thanks Joe L.!


I am starting my first server build with 9 new 2TB disks.  I have read through parts of this topic, but not all 60 pages.  I know I want to use this command:

preclear_disk.sh -A /dev/sda, but if I want to run it 3 times, where would the -c argument go?  And do you append the "3" to the -c to specify how many preclears you want to run?

 

From my understanding, it will take at least 24 hours to preclear a 2TB drive.  Running 3 preclears on 9 drives will take approximately 1 month until the drives are ready to use.  Is there any faster way to do this, like running preclears on multiple drives at once using the same computer?  If not, should I perhaps preclear 3 drives at first, set up my array, then add a new drive every 3 days as I preclear each one?

 

Sorry for so many questions.

Murray


You would put the -c as follows:

 

preclear_disk.sh -c 3 -A /dev/sda

or

preclear_disk.sh -A -c 3 /dev/sda

 

Basically, anywhere between the command name itself and the disk name being cleared (the disk must be at the end of the command line).

 

I suggest you do one cycle first, and then, if a disk passes, do another 2 cycles on it.

 

I would strongly suggest you install and use "screen" and then run the pre-clear commands under it.  It will make your task a lot easier.  Also, you can pre-clear all the disks in parallel.  No need to do them one at a time.  You just need to invoke them under separate "virtual screens".

This tutorial will help:

http://lime-technology.com/wiki/index.php?title=Configuration_Tutorial#Introduction
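
As a rough sketch of how that looks in practice (the device names sdb and sdc below are placeholders; substitute your own):

screen -S preclear_sdb              # start a named screen session
preclear_disk.sh -c 3 -A /dev/sdb   # run the preclear inside it
# detach with Ctrl-a d, then start the next disk in its own session
screen -S preclear_sdc
preclear_disk.sh -c 3 -A /dev/sdc
# later: list the sessions and re-attach to check on progress
screen -ls
screen -r preclear_sdb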

 

Make sure you download the most recent version of the preclear script.  ( I updated it only a few days ago )

 

Joe L.

 


 

Joe, thanks a lot for your thorough explanation.  I just downloaded the preclear script last night so I have the most recent version.  I'll take a look at screen...I like 3 days much better than 30 days!

I read the tutorial...what a great tool for new users like me!  I have two questions.

 

1. You can create a new screen for each preclear you need to run; however, it is not recommended to run more than 4 at once.
  Does this mean I shouldn't try to do all 9 of my drives at once?  I have an i3 if it is processor dependent.

 

2.  Note: The NTFS driver included with the UnRAID distribution does not support Unicode characters correctly and using this method WILL corrupt filenames if they contain Unicode characters!  If you have any files containing Unicode characters, it is recommended that you copy those files across the network.

  I am going to be moving files from my traditional raid array (around 1TB) to the unraid array.  These are windows files, movies, music, etc.  Would these typically contain unicode characters?

 

thanks,

Murray


Running preclear_disk version 1.7 on unRaid 5.0 Beta6a.  The default partition format is 4K aligned.  Called with

"preclear_disk -c 2 /dev/sdc" (no -a or -A).  Cycle 1 ran with partition start 64.  Cycle 2 is running with partition start 63.

Interesting...

 

I don't doubt you, but I see no way for that to occur...   (In other words, I'll have to test it myself)

If run with no option specified, the "default" will be whatever you've specified on the unRAID "Settings" page.

 

The partition start is set prior to entering the "cycle" loop.  It is otherwise unchanged (as far as I know).

 

Before I start my test,  are you sure you have 4k-aligned as the "default" set on your server?    (please double-check, so I can duplicate your situation here)

 

Also, once the second cycle is complete, let me know what the output says.   You might even run

preclear_disk.sh -t /dev/sdc

and let it tell you how the disk is partitioned.  I'll be curious what it says.

 

Joe L.

 

The default is 4k-Aligned.  I'll run -t when the cycle is done.  Also, the server isn't needed at the moment.  I'll repeat the 2-cycle test to verify.  Any files that would be of use?

 

Should have done more verification before posting.

 

What I encountered was the bug that you fixed in version 1.9 (preclear defaults to 63 sector alignment on unRaid 5 with no -a or -A).


The preclear script was fine with the older 4.7 releases... but the format of the variable in the disk.cfg file changed between the 4.7 and 5.X releases.

 

In 4.7 the value for the default disk format was not surrounded by quote marks.  In 5.X it is.

I was looking for a value of 1 or 2.  What exists in 5.X is "1" or "2".


Joe,

 

When doing more than one consecutive preclear, it would be nice if the passes prior to the last alternated between writing ones on one pass and zeros on the next. E.g., if two passes were chosen, the first pass would write ones and the second/final pass would write zeros. This would make preclear a more robust test. What do you think?

 

Thanks,

David


Interesting idea.  Although a "1" or a "0" as we know it is actually encoded on the disk as something entirely different: an alternating pattern of magnetized areas.  Otherwise it would be nearly impossible to read an extended run of one value.

 

See here for a high level description: http://www.pcguide.com/ref/hdd/geom/data.htm

 

Joe L.


Quick question,

 

Just received 2 new 2TB drives, a WD EARS and a Samsung F3, stuck them in the server and started the preclear on both (my server has happily precleared multiple drives before).

 

The Samsung is running fine at around 80MB/s. The EARS starts at around 100MB/s but within a few minutes drops to 5MB/s, and after an hour it is not even registering a speed. I have stopped and retried preclear twice on this drive.

 

I have smartctl'd the drive, and although it shows a pass it also shows 1670 ATA errors in the five hours it has been powered up.

 

Any suggestions?

 

Thanks in advance!


Sounds like a cabling problem.  When you can, shut down the server and ensure that the data cables are secured on both ends (unplug and replug at a minimum).  If that doesn't help, replace the data cable with a new or known-good one.  Also ensure that the power cable is securely connected.
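
A suggestion of my own (not from the reply above) for checking whether the cable really was the culprit: the SMART attribute UDMA_CRC_Error_Count normally climbs when a SATA cable is bad or loose, so compare it before and after re-seating or replacing the cable. /dev/sdX below is a placeholder for the drive in question.

smartctl -A /dev/sdX | grep -i udma_crc   # interface/cable CRC error counter
smartctl -l error /dev/sdX                # the ATA error log (the "1670 ATA errors")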



I received a new 2TB EARS about a month ago that did basically the same thing you are describing.  The preclear would launch just fine and run for about 5 minutes at ~100 MB/s but then suddenly the preclear seemed to stop responding (i.e. no updates were being posted to the screen).  At first I thought it was a problem with the script because I was using a newly released version (shame on me for doubting Joe L).  However, after waiting about 20 minutes it finally updated and the read speed was really low, like ~3 MB/s.  I checked the syslog and there were tons of read errors.  I checked the connections, swapped cables, everything I could think of, but each time I ran preclear on that drive it would bog down about 5 minutes into the process.  After 4 or 5 preclear attempts I eventually RMA'd the drive.

So are you saying there is a problem with the unRAID driver for the SASLP-MV8 card?  I haven't been having great luck with it lately.  I even switched from an Asus MB to a recommended Biostar one because of it.

 

I took the same drive and preclear is working on it via the onboard SATA port.  This was the message I received in the syslog regarding it:

 

 

Mar 13 21:22:42 Hitch kernel: ------------[ cut here ]------------
Mar 13 21:22:42 Hitch kernel: WARNING: at drivers/ata/libata-core.c:5186 ata_qc_issue+0x10b/0x308()
Mar 13 21:22:42 Hitch kernel: Hardware name: A760G M2+
Mar 13 21:22:42 Hitch kernel: Modules linked in: tun md_mod xor atiixp ahci r8169 mvsas libsas scst scsi_transport_sas
Mar 13 21:22:42 Hitch kernel: Pid: 25832, comm: hdparm Not tainted 2.6.32.9-unRAID #8
Mar 13 21:22:42 Hitch kernel: Call Trace:
Mar 13 21:22:42 Hitch kernel:  [<c102449e>] warn_slowpath_common+0x60/0x77
Mar 13 21:22:42 Hitch kernel:  [<c10244c2>] warn_slowpath_null+0xd/0x10
Mar 13 21:22:42 Hitch kernel:  [<c11b624d>] ata_qc_issue+0x10b/0x308
Mar 13 21:22:42 Hitch kernel:  [<c11ba260>] ata_scsi_translate+0xd1/0xff
Mar 13 21:22:42 Hitch kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Mar 13 21:22:42 Hitch kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Mar 13 21:22:42 Hitch kernel:  [<c11baa40>] ata_sas_queuecmd+0x120/0x1d7
Mar 13 21:22:42 Hitch kernel:  [<c11bc6df>] ? ata_scsi_pass_thru+0x0/0x21d
Mar 13 21:22:42 Hitch kernel:  [<f843369a>] sas_queuecommand+0x65/0x20d [libsas]
Mar 13 21:22:42 Hitch kernel:  [<c11a816c>] ? scsi_done+0x0/0xd
Mar 13 21:22:42 Hitch kernel:  [<c11a82c0>] scsi_dispatch_cmd+0x147/0x181
Mar 13 21:22:42 Hitch kernel:  [<c11ace4d>] scsi_request_fn+0x351/0x376
Mar 13 21:22:42 Hitch kernel:  [<c1126798>] __blk_run_queue+0x78/0x10c
Mar 13 21:22:42 Hitch kernel:  [<c1124446>] elv_insert+0x67/0x153
Mar 13 21:22:42 Hitch kernel:  [<c11245b8>] __elv_add_request+0x86/0x8b
Mar 13 21:22:42 Hitch kernel:  [<c1129343>] blk_execute_rq_nowait+0x4f/0x73
Mar 13 21:22:42 Hitch kernel:  [<c11293dc>] blk_execute_rq+0x75/0x91
Mar 13 21:22:42 Hitch kernel:  [<c11292cc>] ? blk_end_sync_rq+0x0/0x28
Mar 13 21:22:42 Hitch kernel:  [<c112636f>] ? get_request+0x204/0x28d
Mar 13 21:22:42 Hitch kernel:  [<c11269d6>] ? get_request_wait+0x2b/0xd9
Mar 13 21:22:42 Hitch kernel:  [<c112c2bf>] sg_io+0x22d/0x30a
Mar 13 21:22:42 Hitch kernel:  [<c112c5a8>] scsi_cmd_ioctl+0x20c/0x3bc
Mar 13 21:22:42 Hitch kernel:  [<c11b3257>] sd_ioctl+0x6a/0x8c
Mar 13 21:22:42 Hitch kernel:  [<c112a420>] __blkdev_driver_ioctl+0x50/0x62
Mar 13 21:22:42 Hitch kernel:  [<c112ad1c>] blkdev_ioctl+0x8b0/0x8dc
Mar 13 21:22:42 Hitch kernel:  [<c1131e2d>] ? kobject_get+0x12/0x17
Mar 13 21:22:42 Hitch kernel:  [<c112b0f8>] ? get_disk+0x4a/0x61
Mar 13 21:22:42 Hitch kernel:  [<c101b028>] ? kmap_atomic+0x14/0x16
Mar 13 21:22:42 Hitch kernel:  [<c11334a5>] ? radix_tree_lookup_slot+0xd/0xf
Mar 13 21:22:42 Hitch kernel:  [<c104a179>] ? filemap_fault+0xb8/0x305
Mar 13 21:22:42 Hitch kernel:  [<c1048c43>] ? unlock_page+0x18/0x1b
Mar 13 21:22:42 Hitch kernel:  [<c1057c63>] ? __do_fault+0x3a7/0x3da
Mar 13 21:22:42 Hitch kernel:  [<c105985f>] ? handle_mm_fault+0x42d/0x8f1
Mar 13 21:22:42 Hitch kernel:  [<c108b6c6>] block_ioctl+0x2a/0x32
Mar 13 21:22:42 Hitch kernel:  [<c108b69c>] ? block_ioctl+0x0/0x32
Mar 13 21:22:42 Hitch kernel:  [<c10769d5>] vfs_ioctl+0x22/0x67
Mar 13 21:22:42 Hitch kernel:  [<c1076f33>] do_vfs_ioctl+0x478/0x4ac
Mar 13 21:22:42 Hitch kernel:  [<c105dcdd>] ? do_mmap_pgoff+0x232/0x294
Mar 13 21:22:42 Hitch kernel:  [<c1076f93>] sys_ioctl+0x2c/0x45
Mar 13 21:22:42 Hitch kernel:  [<c1002935>] syscall_call+0x7/0xb
Mar 13 21:22:42 Hitch kernel: ---[ end trace 7f1e9f192190e675 ]---

 

I just bought this card and have it hooked up to my Intel 975XBX2. I have 8 drives connected to it and one drive, a new 2TB WD EADS, that I am currently preclearing. During the pre-read phase, I got the above in my syslog but so far (it is on step 2 of 10) nothing else. I did some digging; is it possible these errors are related to the Linux kernel driver for this card? I know nothing about Linux and related stuff, so I could be way off, but I read this thread and see the related "fix" implemented. Again, it may not be applicable here, but I would be curious if anyone knows if this is related:

 

https://bugs.launchpad.net/ubuntu/+source/linux/+bug/578212

 

Also see this thread here:

 

http://hardforum.com/showthread.php?t=1397855&page=25

 

Unrelated topic regarding screen: if I install screen and then run my terminal session, I can close the terminal window but the "session" still runs? I stupidly ran a telnet from the OS X Terminal on my laptop and now have to leave it home for the day because I don't want to restart my preclear. Thanks in advance.


unRAID 4.7 is running: Linux kernel 2.6.32.9

unRAID 5.0beta6 is running: Linux kernel 2.6.36.2

 

According to that first thread, the warning/error was fixed in 2.6.35.  I'm not sure if it solves all the errors, but the second thread you linked to seemed to indicate it might still present itself.  At least you know you are not alone and it is being worked on.

 

Joe L.


I'm new to unRAID and preclear. I just started 1 cycle of preclearing last night on 3 WD20EARS drives. I'm using unRAID 4.7 and preclear 1.9. What is the suggested number of cycles to preclear a drive?

 

I've seen multiple answers ranging from 2-3 (and more) preclears before using the drive in an array. Somewhere I think I saw that after 1 successful preclear pass, I should follow up with 2 continuous cycles of preclear. I'm not quite sure how many cycles I should run. Maybe the answer is simpler in that I only need to run 1 cycle of preclear (I doubt this, but I would appreciate the community's feedback). I really have no idea; can someone help me out?

 

Thanks!


What's the benefit of doing more than one preclear? For instance, if I preclear a drive once and the results are perfect (hypothetically speaking), what would preclearing a second time prove? I suppose it gives you peace of mind that the first results are verified, but does it do anything else?

 

I can totally understand preclearing a drive multiple times if there are errors or changes in values.

 

In my case, I'm going to wait till I get the results (tonight) before I decide to start another preclear cycle. I'm not against preclearing a 2nd time as prostuff1 suggested, but I'm also impatient, so preclearing more than that might be difficult for me.

 

Any other thoughts on this? Thanks again!


Basically what it comes down to for me is that while a first preclear may pass, it does not always find bad drives.  I had a couple of seagates fail on me after being in the array for about a month and after having passed a cycle of preclear.

 

From then on I do at least 2 and generally 3 cycles.


Subsequent cycles are an attempt to get past the early part of the "bathtub curve" where disks fail in their first few days of service.  Before they hold your data.

http://en.wikipedia.org/wiki/Bathtub_curve



 

It doesn't take much to convince me; I'm good with doing 2 preclears, and I'll see how long the first one takes and figure out if I have time for a 3rd preclear. I was really hoping to have the drives ready to be put into the array on Saturday, when I'll have time to work on this. But I'd rather be safe than sorry.

 


 

Joe L., I had never heard of the bathtub curve before. I guess it's true, you learn something new every day. As mentioned above, I'll do at least 2 preclears and hopefully I'll have time to fit in a 3rd.

 

Thanks everyone for the input - I think I might actually understand what I'm doing now.


Joe L., I had never heard of the bathtub curve before.

 

It's very useful, and can be applied to very many things.  Think about cars.  Loads of cars have little glitches when new, but then they get fixed / settle down and the car is good for some years.  Then things start to get out of specification or wear out and problems reappear.  The same characteristics can apply to simple things like light bulbs or more complex things like TVs, washing machines, computers, disk drives - just about any device that has mechanical, thermal or electrical wear mechanisms.

 

Big server hardware suppliers spend a significant amount of time "burning in" products before they ship in order to filter out the weaker units before they get to the customer.  Preclearing does the same kind of thing for unRAID users.


Some really good articles on the "bathtub curve":

http://www.weibull.com/hotwire/issue21/hottopics21.htm

http://www.weibull.com/hotwire/issue22/hottopics22.htm

 

Good luck with your drives.  I've got a 2TB Seagate that did not make it through 2 cycles before failing. It is sitting here and waiting for me to RMA.
