Re: preclear_disk.sh - a new utility to burn-in and pre-clear disks for quick add


Recommended Posts

After 12 hours, two of my three 1 TB drives are 27% done, but the third is only 18% of the way through the pre-read. Is this because they're all in a port-multiplied 4-bay enclosure and bandwidth is limiting? Or could it mean problems with one drive? It's great that my array is still online. :)

Link to comment

After 12 hours, two of my three 1 TB drives are 27% done, but the third is only 18% of the way through the pre-read. Is this because they're all in a port-multiplied 4-bay enclosure and bandwidth is limiting? Or could it mean problems with one drive? It's great that my array is still online. :)

The only way to know is to look in your syslog.  If there are no errors, odds are it is just bandwidth.  Since it is not uncommon for a single 1TB drive to take 12 hours for a full cycle, three disks sharing a single controller channel could easily take three times as long.
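For anyone unsure what to look for in the syslog, a quick scan for common ATA error strings can be wrapped in a small helper like this (a rough sketch of my own, not part of preclear_disk.sh; the pattern list is an assumption and may need adjusting for your controller's messages):

```shell
# Count syslog lines that look like ATA/disk errors. A result of 0 suggests
# the slow disk is just sharing bandwidth; anything else deserves a closer look.
check_disk_errors() {
  # $1 = path to a syslog file, e.g. /var/log/syslog
  grep -icE 'ata[0-9.]+: (error|failed)|media error|i/o error|unc' "$1"
}
```

For example, `check_disk_errors /var/log/syslog` prints the number of suspicious lines.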

 

Joe L.

Link to comment

I aborted the process, as it was clear that it was only clearing a few KB every hour. The MB/s was steadily decreasing. After 36 hours it was only 55% done.

I'll try the regular method and use the script for drives in my main tower next time.

If you were only clearing a few KB per hour, then you have other issues that will show regardless of how you access your disks.  This script does nothing different from what is done to normally read or write disks... other than reading and writing all the sectors.

 

Before you reboot, post a copy of your syslog.   It might give the clues needed to know why your system is so slow at accessing those disks.

 

It is VERY normal for the MB/s to decrease as you get to the inner cylinders of the disk.  In my thread that announced preclear_disk.sh you can see how a 750GB drive I was clearing went from 70MB/s down to 66MB/s after clearing only 2/3rds of it.  I expect it was below 60MB/s by the time it got to the end.
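That drop can be seen directly with a couple of timed reads, one at the outer edge and one near the inner cylinders. This is only a sketch; the device path and offsets are illustrative assumptions, and reading is non-destructive as long as `if=` and `of=` are not swapped:

```shell
# Read $3 one-megabyte blocks starting $2 MB into the device/file $1 and
# print dd's transfer-rate summary line.
read_rate() {
  dd if="$1" of=/dev/null bs=1M skip="$2" count="$3" 2>&1 | tail -n 1
}
# Example (assuming /dev/sda is a 1.5 TB disk under test):
#   read_rate /dev/sda 0 256          # outer cylinders: fastest zone
#   read_rate /dev/sda 1400000 256    # near the inner cylinders: expect a lower MB/s
```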

 

Joe L.

Link to comment

Sorry, Joe. I didn't mean to imply that there was something wrong with the script. I'm sure there isn't. I wanted to try the normal method of clearing to confirm that there's another problem. Potential causes are numerous:

 

1) It could be that the new disks are all in my new Mediagate/Mediasonic 4-bay enclosure, which hasn't been tested with unRAID as far as I know. They showed up fine on the devices page, but that doesn't necessarily mean they'll work properly in unRAID.

 

2) Also, they'd be data disks 14, 15 and 16, and I don't know if that would be a problem, so I'm only going to clear two of the three disks. The disk that only got part way through the pre-checking after 36 hours was a disk that had been used before. The 'faster' ones were both brand new.

 

3) There's a problem with all three disks, or some combination of them, if they can affect each other's rate of clearing.

 

EDIT:

Okay, one advantage of the regular clearing method is that there's a temperature read-out. I think I got to the bottom of the problem, but time will tell. When I added two of the three drives to my array, I saw immediately that the temps were 49 C, and that was after I'd stopped the clearing hours ago. Not being that familiar with my Mediasonic ProBox, I checked the fan settings. I had it at level 3 of 3, which, after some testing, turned out to be the lowest setting. Perhaps the drives were overheating and thus slowing down drastically? I wonder if they're okay? Anyway, I'm tempted to stop the clearing and go back to the script so I can have my array online while clearing, once I know the temps have stabilised. They're already dropping quickly with the fan on level 1.

Link to comment

Okay, one advantage of the regular clearing method is that there's a temperature read-out. I think I got to the bottom of the problem, but time will tell. When I added two of the three drives to my array, I saw immediately that the temps were 49 C, and that was after I'd stopped the clearing hours ago.

I've added a temperature readout to the display while pre-clearing a drive.  I'm running it through a test now on one of my spare 1.5TB drives.  It is writing at about 63MB/s on my old PCI-bus-based server.   I'll run a second test tomorrow, and if nothing odd occurs I'll post a new version of preclear_disk.sh with the enhancement you suggested (probably on Monday).   My script has been running on the 1.5TB drive for about 7 hours and is about a third of the way through the writing of zeros.  The temperature of the drive has gone from 29 degrees up to 35 degrees.
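For the curious, the temperature most SMART-capable drives report lives in attribute 194 (Temperature_Celsius), so a readout can be pulled with something as small as this sketch of the general approach (not the script's actual code):

```shell
# Read smartctl -A output on stdin and print the raw value of attribute 194,
# which is the drive temperature in degrees C on most drives.
parse_temp() {
  awk '$1 == 194 { print $10; exit }'
}
# Typical use (assumes smartmontools is installed):
#   smartctl -A /dev/sda | parse_temp
```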

 

Your critique has resulted in a nice improvement.  Thanks.

 

Edit: I've attached a screen shot... The 1.5T drive probably still has a few hours to go in the post-read process.  Temperature is stable at 35 C.

 

Joe L.

Link to comment

Thanks, Joe. Your script is going to be a lifesaver for me in the near future. For now, my Mediasonic 4-bay Pro Box seems to be dead! :( None of the drives are recognised. I tried clearing two drives the old way:

 

Everything was going great with one drive (probably because your script had got through ~50% of it already). The other drive ended up showing as "Not installed". This was the one that was giving me problems with the script-based clearing too - as you predicted, because they're really doing the same thing. Then the Pro Box stopped working, and I had to restore my array with 13 data disks and rebuild parity with the newly cleared disk missing. Anyway, I have a few 750GB drives to upgrade to 1 TB or 1.5 TB soon, so I will use your script to do so. Cheers!

Link to comment

I've added a temperature readout to the display while pre-clearing a drive.  I'm running it through a test now on one of my spare 1.5TB drives.  It is writing at about 63MB/s on my old PCI-bus-based server.   I'll run a second test tomorrow, and if nothing odd occurs I'll post a new version of preclear_disk.sh with the enhancement you suggested (probably on Monday).   My script has been running on the 1.5TB drive for about 7 hours and is about a third of the way through the writing of zeros.  The temperature of the drive has gone from 29 degrees up to 35 degrees.

Joe,

 

I just got a new unRAID MB and CPU, and I'm currently testing it with two new Samsung 1.5TB drives.  I'm preclearing both, and I'm not getting anywhere near the speeds you are, even though yours is a PCI-based system and mine is a new PCIe-based system.  I only have the two drives attached.  Syslog says they are running at 3.0Gb/s...  But they are both going at a rate of about 25% every 4 hours for the preread.  Even when I just did one drive, I was getting 2GB/min ~ 34MB/s.  I would expect a lot better than that! Right now I'm getting about 25.6MB/s.  Am I missing something?  In the log I see:

 

Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: smartctl version 5.38 [i486-slackware-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Home page is http://smartmontools.sourceforge.net/
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: === START OF INFORMATION SECTION ===
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Device Model: SAMSUNG HD154UI
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Serial Number: S1Y6J1KS743788
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Firmware Version: 1AG01118
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: User Capacity: 1,500,301,910,016 bytes
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Device is: In smartctl database [for details use: -P show]
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: ATA Version is: 8
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: ATA Standard is: ATA-8-ACS revision 3b
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Local Time is: Fri Aug 21 23:17:43 2009 EDT
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: ==> WARNING: May need -F samsung or -F samsung2 enabled; see manual for details.
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: SMART support is: Available - device has SMART capability.
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: SMART support is: Enabled
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 

What is this -F samsung?

 

I'm running in AHCI mode set in the bios.  Anything else I'm missing?

Link to comment

I've added a temperature readout to the display while pre-clearing a drive.  I'm running it through a test now on one of my spare 1.5TB drives.  It is writing at about 63MB/s on my old PCI-bus-based server.   I'll run a second test tomorrow, and if nothing odd occurs I'll post a new version of preclear_disk.sh with the enhancement you suggested (probably on Monday).   My script has been running on the 1.5TB drive for about 7 hours and is about a third of the way through the writing of zeros.  The temperature of the drive has gone from 29 degrees up to 35 degrees.

Joe,

 

I just got a new unRAID MB and CPU, and I'm currently testing it with two new Samsung 1.5TB drives.  I'm preclearing both, and I'm not getting anywhere near the speeds you are, even though yours is a PCI-based system and mine is a new PCIe-based system.  I only have the two drives attached.  Syslog says they are running at 3.0Gb/s...  But they are both going at a rate of about 25% every 4 hours for the preread.  Even when I just did one drive, I was getting 2GB/min ~ 34MB/s.  I would expect a lot better than that! Right now I'm getting about 25.6MB/s.  Am I missing something?  In the log I see:

 

Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: smartctl version 5.38 [i486-slackware-linux-gnu] Copyright (C) 2002-8 Bruce Allen
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Home page is http://smartmontools.sourceforge.net/
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: === START OF INFORMATION SECTION ===
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Device Model: SAMSUNG HD154UI
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Serial Number: S1Y6J1KS743788
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Firmware Version: 1AG01118
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: User Capacity: 1,500,301,910,016 bytes
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Device is: In smartctl database [for details use: -P show]
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: ATA Version is: 8
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: ATA Standard is: ATA-8-ACS revision 3b
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: Local Time is: Fri Aug 21 23:17:43 2009 EDT
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: ==> WARNING: May need -F samsung or -F samsung2 enabled; see manual for details.
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: SMART support is: Available - device has SMART capability.
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: SMART support is: Enabled
Aug 21 23:17:43 Tower2 preclear_disk-start[14626]: 

What is this -F samsung?

 

I'm running in AHCI mode set in the bios.  Anything else I'm missing?

Why would you expect your system to be faster? 

 

A PCI bus can do about 133 MB/s; when reading a single drive, it is easily able to keep up.  My drive was a 7200 RPM drive; I think yours is a 5400 RPM drive (correct me if I'm wrong), so I'd expect it to be a bit slower.

 

Odds are good you are doing just fine.  With two 1.5TB drives, I think you are a bit slower, but then I can't tell from here if anything else looks interesting in your syslog (because you did not post it).  Are you also copying files to the server at the same time, or doing a parity check or initial parity calculation?

 

As far as the -F samsung goes... did you try looking in the smartctl manual page, as instructed?

-F TYPE, --firmwarebug=TYPE

    [ATA only] Modifies the behavior of smartctl to compensate for some known and understood device firmware or driver bug. Except 'swapid', the arguments to this option are exclusive, so that only the final option given is used. The valid values are:

 

    none - Assume that the device firmware obeys the ATA specifications. This is the default, unless the device has presets for '-F' in the device database (see note below).

 

    samsung - In some Samsung disks (example: model SV4012H Firmware Version: RM100-08) some of the two- and four-byte quantities in the SMART data structures are byte-swapped (relative to the ATA specification). Enabling this option tells smartctl to evaluate these quantities in byte-reversed order. Some signs that your disk needs this option are (1) no self-test log printed, even though you have run self-tests; (2) very large numbers of ATA errors reported in the ATA error log; (3) strange and impossible values for the ATA error log timestamps.

 

    samsung2 - In more recent Samsung disks (firmware revisions ending in "-23") the number of ATA errors reported is byte swapped. Enabling this option tells smartctl to evaluate this quantity in byte-reversed order. An indication that your Samsung disk needs this option is that the self-test log is printed correctly, but there are a very large number of errors in the SMART error log. This is because the error count is byte swapped. Thus a disk with five errors (0x0005) will appear to have 20480 errors (0x5000).

 

    samsung3 - Some Samsung disks (at least SP2514N with Firmware VF100-37) report a self-test still in progress with 0% remaining when the test was already completed. Enabling this option modifies the output of the self-test execution status (see options '-c' or '-a' above) accordingly.

 

    Note that an explicit '-F' option on the command line will over-ride any preset values for '-F' (see the '-P' option below).

 

If your smartctl report looks otherwise normal, odds are you do not need the -F option.  It certainly is not needed for the preclear script, as it just looks for differences, not absolute values.
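That difference-based check can be pictured as saving a SMART report before and after the clear and diffing the two. Roughly (file names are illustrative, and this is not the script's actual implementation):

```shell
# Print only the lines that changed between two saved smartctl reports --
# it is the change in an attribute, not its absolute value, that matters.
smart_diff() {
  # $1 = report taken before the clear, $2 = report taken after
  diff "$1" "$2" | grep '^[<>]'
}
# e.g.  smartctl -a /dev/sda > before.txt ; (run the clear) ;
#       smartctl -a /dev/sda > after.txt ; smart_diff before.txt after.txt
```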

Link to comment

I wasn't doing anything else with the array; it was stopped.  I was getting parity check speeds of 90-100MB/s (parity sync was about 50-60MB/s) with the two drives when I tested that.  That's why I would expect to get something similar with the pre-read.

 

 

Maybe I'll try some dd commands.  The preclear cycle for the disks took about 28 hours for one and 30 hours for the other.  One was fine, and the longer one had some SMART errors, which I'll post in the other thread.

 

Jim

Link to comment

I wasn't doing anything else with the array; it was stopped.  I was getting parity check speeds of 90-100MB/s (parity sync was about 50-60MB/s) with the two drives when I tested that.

Wow... very nice speeds. 

  That's why I would expect to get something similar with the pre-read.

Now I understand your comment.  There is one HUGE difference... linear addressing with read-ahead buffering vs. random blocks of data forcing tons of head seeking.

 

The pre/post read process is specifically designed to exercise the disk in ways that uncover problems... before the disk ends up in the protected array.

 

For each block of data read from the disk linearly, it also reads three random blocks of data from somewhere else on the disk, and also reads the very first block on the disk and the very last.  Those last two are read bypassing the disk buffer cache, so the disk head must make a sweep across the disk, with the linear block in between somewhere.
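One iteration of that pattern looks roughly like the sketch below. The block numbers and the 1 MB size are illustrative, and the real script also bypasses the buffer cache for the first/last reads (e.g. via O_DIRECT), which is omitted here for portability:

```shell
# One step of the stress-read pattern: a linear block, three random blocks,
# then the very first and very last blocks, forcing full head sweeps.
stress_read_step() {
  dev="$1"; blk="$2"; last="$3"   # device/file, current 1MB block, last block
  dd if="$dev" of=/dev/null bs=1M skip="$blk" count=1 2>/dev/null
  for r in $(shuf -i 0-"$last" -n 3); do
    dd if="$dev" of=/dev/null bs=1M skip="$r" count=1 2>/dev/null
  done
  dd if="$dev" of=/dev/null bs=1M skip=0 count=1 2>/dev/null
  dd if="$dev" of=/dev/null bs=1M skip="$last" count=1 2>/dev/null
}
```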

 

For disk parity, and parity checks, the disk head barely has to move, and when it does, it moves just one track at a time...  I think that is the reason for the huge difference in speed. 

Maybe I'll try some dd commands.  The preclear cycle for the disks took about 28 hours for one and 30 hours for the other.  One was fine, and the longer one had some SMART errors, which I'll post in the other thread.

 

Jim

As I said, a normal "dd" command is a linear read of all the blocks in turn.  You might find the "writing" phase of the preclear_disk script faster than the read phases, as it is a linear write to all the blocks...  For it, the track-to-track seek time has far less effect.

 

Joe L.

Link to comment

What was your cycle time for a 1.5T disk?

 

It seemed like yours was in the 17 hour time frame from your screen capture?  I would hope that I would get closer to that rather than the 28 hour time frame.

 

Oh..  And the zeroing took ~5 hours.

I started the 1.5TB preclear process on Aug 16 at 14:52:19.  It ended Aug 17 at 08:13:51.

 

So, it looks like about 17.4 hours.

 

Your zeroing (writing) time is consistent with what I was saying... It is done linearly, so the disk does not have to move the disk heads very far or often compared to the read phases of pre-clear.

 

Joe L.

Link to comment

After I ran it a second time by itself, I got to about 17.8 hours.  So I feel better.  I would have thought that running two disks wouldn't have THAT much of an effect!  Maybe I'll boost the memory speed and CPU speed; maybe that will help concurrent pre-clears.  I've got it crippled to lower the power...

Link to comment

After I ran it a second time by itself, I got to about 17.8 hours.  So I feel better.  I would have thought that running two disks wouldn't have THAT much of an effect!  Maybe I'll boost the memory speed and CPU speed; maybe that will help concurrent pre-clears.  I've got it crippled to lower the power...

It is far more likely to be limited by the disk controller bandwidth, not the CPU or memory speeds.  But give it a try and let us know.
Link to comment

Joe,

 

I just got a new unRAID MB and CPU, and I'm currently testing it with two new Samsung 1.5TB drives.  I'm preclearing both, and I'm not getting anywhere near the speeds you are, even though yours is a PCI-based system and mine is a new PCIe-based system.  I only have the two drives attached.  Syslog says they are running at 3.0Gb/s...  But they are both going at a rate of about 25% every 4 hours for the preread.  Even when I just did one drive, I was getting 2GB/min ~ 34MB/s.  I would expect a lot better than that! Right now I'm getting about 25.6MB/s.  Am I missing something?  In the log I see:

 

There is something weird going on...  I ran the test on both disks again concurrently, and this time I got the same results as running a single one.  I can't duplicate the 25MB/s or even the 34MB/s.  Maybe I should just be happy I'm getting the faster speeds!  Maybe I was running some modified version of the script that ran slow?

 

Bizarre..

 

Link to comment

OK, the newest version 0.9.3 of preclear_disk.sh is now attached to the first post in this thread.

 

Funny things happened, though, while it was running: a whole bunch of processes got severely deadlocked in the "disk sleeping" state, including samba, rtorrent, and some of my telnet sessions.  As processes in such a state are not killable by any means, I almost pulled the power plug from the wall at one point, while also pulling my hair... But I waited it out.  For 15 hours!!!

 

During all that time the whole system was not totally locked: some telnet sessions were still responsive, and the overall CPU usage reported by htop was in the low 30% range.

 

Once the preclear script was done doing its job,  the deadlocked processes got back to normal eventually, and I was able to cleanly restart the system.

 

It is an indication of a deadlock of some kind.  Since the pre-clear is only reading or writing the drive being cleared, it might be the combined resources needed by everything else you have running.  It has to be something at a pretty low level... below the file-system.  Perhaps something deadlocked in the device driver for your disk controller (you did have a lot of file activity going on).

 

Did you see anything in your syslog?

 

Today I got a couple of new hard disks, so I used the Preclear script on them.

 

Again my system experienced severe deadlocks while the scripts were running.

 

This time I am attaching my syslog, in hope that it could help resolve this problem.

 

I tried to look through the syslog (I don't know much about Linux), and it seems like my new "latest-and-greatest" SATA-II disks are behaving like ATA/100, or even ATA/33. Could the problem be that there is some old chipset inside this box -- something like an ICH4 -- which does not have a good SATA controller on board?  Are there any Linux boot settings I could change to address the problems?

 

Yours,

Purko

 

Link to comment

Please look at this screen shot where it says "341% Done"!

 

Hmmm... it happened when preclearing more than one disk at the same time...

 

Looks like all concurrent preclear scripts are using the same file: /tmp/zero.txt

 

Can they possibly use their own tmp file? Maybe one with the PID in the name?
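For what it's worth, either a `$$` (PID) suffix or `mktemp` would give each running instance its own file. A minimal sketch (the variable name is illustrative, not the script's actual code):

```shell
# Give each concurrent preclear instance a private signature file instead of
# the shared /tmp/zero.txt: embed the shell's PID, or let mktemp pick a name.
ZEROFILE="/tmp/zero.$$.txt"            # e.g. /tmp/zero.14626.txt
ZEROFILE=$(mktemp /tmp/zero.XXXXXX)    # safer: guaranteed unique, pre-created
echo "using $ZEROFILE"
```

`mktemp` has the edge here because it both picks a unique name and creates the file atomically, so two instances can never collide.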

 

Purko

 

Link to comment

I think I figured out what was happening with the speed of my tests.  It turns out the disks are slower when they are formatted.  Not sure why... but if I "clear" the disk and wipe out the partitions, the test is much faster.  It uses a much smaller block size when it's formatted than when it's clear.

 

fdisk (formatted):

root@Tower2:/boot/scripts# fdisk -l /dev/sda

 

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes

1 heads, 63 sectors/track, 46512336 cylinders

Units = cylinders of 63 * 512 = 32256 bytes

Disk identifier: 0x00000000

 

  Device Boot      Start        End      Blocks  Id  System

/dev/sda1              2    46512336  1465138552+  83  Linux

Partition 1 does not end on cylinder boundary.

 

fdisk after clearing:

root@Tower2:/boot/scripts# fdisk -l /dev/sda

 

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes

255 heads, 63 sectors/track, 182401 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk identifier: 0x00000000

 

  Device Boot      Start        End      Blocks  Id  System

/dev/sda1              1      182402  1465138552+  0  Empty

Partition 1 does not end on cylinder boundary.

The smaller block size makes the test run much slower.  Have you seen this? It makes my reads drop to about 30-40MB/s from 90ish MB/s.

 

Why does fdisk report differently?

 

Does the smaller block size make the test any better?  As in more thrashing?  This explains why I had two very different speeds!

 

Jim

 

 

Link to comment

I think I figured out what was happening with the speed of my tests.  It turns out the disks are slower when they are formatted.  Not sure why... but if I "clear" the disk and wipe out the partitions, the test is much faster.  It uses a much smaller block size when it's formatted than when it's clear.

 

fdisk (formatted):

root@Tower2:/boot/scripts# fdisk -l /dev/sda

 

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes

1 heads, 63 sectors/track, 46512336 cylinders

Units = cylinders of 63 * 512 = 32256 bytes

Disk identifier: 0x00000000

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1               2    46512336  1465138552+  83  Linux

Partition 1 does not end on cylinder boundary.

 

fdisk after clearing:

root@Tower2:/boot/scripts# fdisk -l /dev/sda

 

Disk /dev/sda: 1500.3 GB, 1500301910016 bytes

255 heads, 63 sectors/track, 182401 cylinders

Units = cylinders of 16065 * 512 = 8225280 bytes

Disk identifier: 0x00000000

 

   Device Boot      Start         End      Blocks   Id  System

/dev/sda1               1      182402  1465138552+   0  Empty

Partition 1 does not end on cylinder boundary.

The smaller block size makes the test run much slower.  Have you seen this? It makes my reads drop to about 30-40MB/s from 90ish MB/s.

 

Why does fdisk report differently?

 

Does the smaller block size make the test any better?  As in more thrashing?  This explains why I had two very different speeds!

 

Jim

 

 

Neither block size makes the test any better or worse... though certainly there will be more thrashing with the smaller block size.  You are executing MANY more individual read commands (255 times as many), so you are sweeping the disk head from the first cylinder to the last 255 times as often.

 

The notion of cylinders, heads, and sectors is a carry-over from the early days of disk drives, pre-dating even MS-DOS.  Today's disks have one head per platter surface, and they will very likely have more than 63 sectors per track.  The problem is that the MBR was designed with small fields that can't hold the true values, so they are "faked" by the drive.  The actual addressing of the disk is hidden from us entirely and is based on sector number.

 

Trust me, you did not magically go from 1 disk head to 255 by clearing it.

 

Your observation is interesting though, and it suggests we might add code to multiply the "Unit" by some value if it is under 1M, just to make the efficiency a bit better.  I think I still want the math to work out so we do not have a partial "read" at the end of the disk.

 

Something like this shell snippet might work:

tu=$units
while [ "$units" -lt 1000000 ]
do
  units=$((units + tu))
done

 

Joe L.

Link to comment
This time I am attaching my syslog, in hope that it could help resolve this problem.

 

I tried to look through the syslog (I don't know much about Linux), and it seems like my new "latest-and-greatest" SATA-II disks are behaving like ATA/100, or even ATA/33.

 

One of your disks is running in ATA/33 mode, but it's one you're not using in the array. Your 1TB and 2TB WD drives are set as UDMA/133 and 3.0Gb/s SATA.

 

Aug 28 12:58:35 Tower kernel: Probing IDE interface ide0...

Aug 28 12:58:35 Tower kernel: hda: WDC WD3200BEVE-00A0HT0, ATA DISK drive

Aug 28 12:58:35 Tower kernel: hda: host max PIO4 wanted PIO255(auto-tune) selected PIO4

Aug 28 12:58:35 Tower kernel: hda: host side 80-wire cable detection failed, limiting max speed to UDMA33

Aug 28 12:58:35 Tower kernel: hda: UDMA/33 mode selected

Aug 28 12:58:35 Tower kernel: ide0 at 0x170-0x177,0x376 on irq 15

Aug 28 12:58:35 Tower kernel: hda: max request size: 512KiB

Aug 28 12:58:35 Tower kernel: hda: 625142448 sectors (320072 MB) w/8192KiB Cache, CHS=38913/255/63

Aug 28 12:58:35 Tower kernel: hda: cache flushes supported

Aug 28 12:58:35 Tower kernel:  hda: hda1

 

Aug 28 12:58:35 Tower kernel: ata1: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

Aug 28 12:58:35 Tower kernel: ata1.00: ATA-8: WDC WD1001FALS-00J7B1, 05.00K05, max UDMA/133

Aug 28 12:58:35 Tower kernel: ata1.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 31/32)

Aug 28 12:58:35 Tower kernel: ata1.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata1.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata1.00: configured for UDMA/133

Aug 28 12:58:35 Tower kernel: ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

Aug 28 12:58:35 Tower kernel: ata2.00: ATA-8: WDC WD1001FALS-00J7B0, 05.00K05, max UDMA/133

Aug 28 12:58:35 Tower kernel: ata2.00: 1953525168 sectors, multi 0: LBA48 NCQ (depth 31/32)

Aug 28 12:58:35 Tower kernel: ata2.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata2.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata2.00: configured for UDMA/133

Aug 28 12:58:35 Tower kernel: ata3: SATA link down (SStatus 0 SControl 300)

Aug 28 12:58:35 Tower kernel: ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

Aug 28 12:58:35 Tower kernel: ata4.00: HPA detected: current 3907029168, native 18446744073321613488

Aug 28 12:58:35 Tower kernel: ata4.00: ATA-8: WDC WD20EADS-00S2B0, 04.05G04, max UDMA/133

Aug 28 12:58:35 Tower kernel: ata4.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32)

Aug 28 12:58:35 Tower kernel: ata4.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata4.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata4.00: configured for UDMA/133

Aug 28 12:58:35 Tower kernel: ata5: SATA link up 3.0 Gbps (SStatus 123 SControl 300)

Aug 28 12:58:35 Tower kernel: ata5.00: HPA detected: current 3907029168, native 18446744073321613488

Aug 28 12:58:35 Tower kernel: ata5.00: ATA-8: WDC WD2002FYPS-01U1B0, 04.05G04, max UDMA/133

Aug 28 12:58:35 Tower kernel: ata5.00: 3907029168 sectors, multi 0: LBA48 NCQ (depth 31/32)

Aug 28 12:58:35 Tower kernel: ata5.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata5.00: max_sectors limited to 256 for NCQ

Aug 28 12:58:35 Tower kernel: ata5.00: configured for UDMA/133
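To pull just those negotiation lines out of a syslog, rather than reading the whole boot log, a filter like this works; the match strings mirror the excerpt above, and the syslog path is an assumption:

```shell
# Print the lines where the kernel reports each disk's negotiated link speed
# or DMA mode -- any UDMA/33 entry flags a cable or controller problem.
link_modes() {
  # $1 = path to a syslog file, e.g. /var/log/syslog
  grep -hE 'SATA link (up|down)|UDMA/[0-9]+ mode selected|configured for UDMA' "$1"
}
```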

Link to comment
