Preclear.sh results - Questions about your results? Post them here.



Do the disks show as available in the management interface?  (with a size, and free space?)

Until the array is started, yes.  There's no indication of what is new and cleared.  But as soon as you start the array, the drives that were pre-cleared and newly installed show up as needing formatting.  At least that's how it was on the last two I just did with the 9.6 version of preclear_disk.sh.

 

Why would it matter if there was a parity drive or not?

 

No, all of the bytes in the MBR (the first 512 bytes) are significant, and are tested by unRAID.  Interestingly, the remainder of the first cylinder (up to sector 62) is currently not used at all, for historical reasons that allow the disk to be recognized as partitioned by older BIOSes and by Windows.  So you could write something there as a "note."  I'd just write something down on paper... If you accidentally made a drive look like it was pre-cleared, and it had anything other than zeros on the remainder of the disk, you would completely break all the parity calculations and be unable to restore from a disk failure, unless you did a full parity check and corrected all the parity... prior to the failure...

Don't think I understand this.  If the drive had something extra written to, say, sector 2, and that was consistent throughout the life of the drive in that array, the parity check would always be consistent, wouldn't it?  Why would you not be able to restore?

 

It has been a long time since the old C/H/S geometry worked as originally defined.... In fact, it could not handle disks > 8 Gig.  (At that time, 8 Gig was a dream... disk sizes were measured in Megabytes, not Terabytes, and a (tiny by today's standards) 20 Megabyte drive was nearly $1000.  See here: http://www.mattscomputertrends.com/harddiskdata.html)

 

Joe L.

Oh yes.  I can't tell you the number of times I've had to enter the old drive parms into a BIOS manually, in the (not so) 'good old days'.  But we were in heaven then with the new 386-25 or 33MHz motherboard and our 50-100MB full size drives...  Screamers, you know.

 

--Bill


Another issue, if I may.  Preclear_disk.sh seems to suck up a lot of memory -- sometimes.  Specifically, dd is the culprit.  When free memory (shown by top) gets down to around 400M, access from the outside world via ethernet (the unRAID manager and unMenu) is all but dead.  I don't know if you could still serve video or audio at that point.  On the console everything continues to be peppy, despite the slowly increasing load average.  As soon as preclear is done, free memory shoots back to normal and remote services return.

 

The behavior is the same with 4GB RAM (2.98G usable) as with 2GB RAM except that it just takes longer to happen.  But sometimes it doesn't happen at all, everything works normally through the entire pre-clear.  It seems to have something to do with how long the machine has been up, and/or how many previous pre-clears have been done.  I'm running 4.4.2.

 

Any  thoughts on this behavior?

 

--Bill


It is using the cache, and Linux will let it use as much as it can.  It is not specifically "dd", but the fact that you are reading/writing every block of the disk being cleared, and each block is being held in cache in case you reference it again shortly... It has no way to know the usage patterns of an unRAID server.  It just frees the least-recently-used buffer memory when it is needed and not currently being referenced. (And odds are pretty high that your disk size is greater than the amount of memory you have in the server, no matter how much RAM you have.  ;D )

 

You might set the cache pressure to allow it to re-use the buffer cache more quickly rather than to keep entries in the cache.

sysctl vm.vfs_cache_pressure=100
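
If you want to check the value first, or have the setting survive a reboot, something along these lines should work (a sketch; the go file is the usual unRAID startup script, but adjust for your own setup):

sysctl vm.vfs_cache_pressure                                  # show the current value
sysctl vm.vfs_cache_pressure=100                              # favor reclaiming cache entries sooner
echo "sysctl vm.vfs_cache_pressure=100" >> /boot/config/go    # re-apply at every boot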


Do the disks show as available in the management interface?  (with a size, and free space?)

Until the array is started, yes.  There's no indication of what is new and cleared.  But as soon as you start the array, the drives that were pre-cleared and newly installed show up as needing formatting.  At least that's how it was on the last two I just did with the 9.6 version of preclear_disk.sh.

 

Why would it matter if there was a parity drive or not?

It would not matter if there was a parity drive or not.

No, all of the bytes in the MBR (the first 512 bytes) are significant, and are tested by unRAID.  Interestingly, the remainder of the first cylinder (up to sector 62) is currently not used at all, for historical reasons that allow the disk to be recognized as partitioned by older BIOSes and by Windows.  So you could write something there as a "note."  I'd just write something down on paper... If you accidentally made a drive look like it was pre-cleared, and it had anything other than zeros on the remainder of the disk, you would completely break all the parity calculations and be unable to restore from a disk failure, unless you did a full parity check and corrected all the parity... prior to the failure...

Don't think I understand this.  If the drive had something extra written to, say, sector 2, and that was consistent throughout the life of the drive in that array, the parity check would always be consistent, wouldn't it?  Why would you not be able to restore?

Basically, it would... but not for the reason you think.  Sectors 0 through 62 are unused, and not part of any partition, and NOT part of the protected data.  The protected data in unRAID does not start until the first partition.  (You can write anything you want to sector 2, but it will be gone when you restore.)

 

The good news, however, is that the first 64k of a reiserfs partition is unused by reiserfs, reserved specifically for boot records, etc.  So... you can write to the first sector of /dev/md1 (which is actually the first sector of the first partition) and it will be recorded on the disk and protected by parity.  Never write directly to /dev/sdX1, as that will cause a parity error.
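
As a rough sketch of the idea (the note text is hypothetical, and this assumes the first 64k of the reiserfs partition really is unused; double-check the target device, because a mistake here writes into live array data):

# write a short note into the first sector of the md device; parity stays
# in sync because the write goes through the unRAID md driver
echo "pre-cleared by preclear_disk.sh" | dd of=/dev/md1 bs=512 count=1 conv=sync
# read it back
dd if=/dev/md1 bs=512 count=1 2>/dev/null | strings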

It has been a long time since the old C/H/S geometry worked as originally defined.... In fact, it could not handle disks > 8 Gig.  (At that time, 8 Gig was a dream... disk sizes were measured in Megabytes, not Terabytes, and a (tiny by today's standards) 20 Megabyte drive was nearly $1000.  See here: http://www.mattscomputertrends.com/harddiskdata.html)

 

Joe L.

Oh yes.  I can't tell you the number of times I've had to enter the old drive parms into a BIOS manually, in the (not so) 'good old days'.  But we were in heaven then with the new 386-25 or 33MHz motherboard and our 50-100MB full size drives...  Screamers, you know.

 

--Bill

Oh yes...  I go back a bit more than that...  Do you remember 8" floppy disks?  (or loading programs via punched paper tape on a teletype machine?)  Talk about slow...

It is using the cache, and Linux will let it use as much as it can.  It is not specifically "dd", but the fact that you are reading/writing every block of the disk being cleared, and each block is being held in cache in case you reference it again shortly... It has no way to know the usage patterns of an unRAID server.  It just frees the least-recently-used buffer memory when it is needed and not currently being referenced. (And odds are pretty high that your disk size is greater than the amount of memory you have in the server, no matter how much RAM you have.  ;D )

Actually, no.  But cumulatively on one boot, yes.  I pre-cleared two 1.5T, then a 500G.  For some reason it didn't show problems until the end of the 500G drive.  I have 4G RAM installed (2.9G available to Linux), but I failed to notice how much was freed each time.

 

You might set the cache pressure to allow it to re-use the buffer cache more quickly rather than to keep entries in the cache.

sysctl vm.vfs_cache_pressure=100

Gotcha.  Ok, thanks.  I'll try that.

 

--Bill


Actually, yes... Your 4Gig of RAM (2.9Gig available for programs/buffers) is most certainly less than even the smallest of your disk drives (500Gig).

 

Your buffer cache used for disk I/O will be consumed at a rate of about 75-80 MB/second (however fast you are reading and/or writing to the disk).  It is only freed when another program needs memory.
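
If you want to watch it happen, something like this on the console during a pre-clear shows "cached" climbing at roughly the preclear's transfer rate while "free" shrinks, then recovering when the script finishes (illustrative only):

watch -n 10 free -m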


Basically, it would... but not for the reason you think.  Sectors 0 through 62 are unused, and not part of any partition, and NOT part of the protected data.  The protected data in unRAID does not start until the first partition.  (You can write anything you want to sector 2, but it will be gone when you restore.)

 

The good news, however, is that the first 64k of a reiserfs partition is unused by reiserfs, reserved specifically for boot records, etc.  So... you can write to the first sector of /dev/md1 (which is actually the first sector of the first partition) and it will be recorded on the disk and protected by parity.  Never write directly to /dev/sdX1, as that will cause a parity error.

 

That's really good information!  Writing a historical pre-clear byte there is problematic, though.  Let's say you had to quickly install a replacement drive (new, never written to) and didn't have time to pre-clear it.  So it's installed and being restored by the system via the parity drive.  Now, since the pre-clear byte from the original (now failed) disk is part of the restore, you now have an indicator that says this new drive was pre-cleared, when in fact it wasn't.  You'd need to manually clear that byte if it was to be accurate.

 

You said earlier "If you accidentally made a drive look like it was pre-cleared, and it had anything other than zeros on the remainder of the disk, you would completely break all the parity calculations and be unable to restore from a disk failure unless you did a full parity check and fixed all the parity ... prior to the failure...".  We'd have to go with the presumption that only preclear_disk.sh would set that byte appropriately. 

 

But I'm curious, here you're talking about anything that existed within the ReiserFS space?  The first 62 sectors before the FS starts are cleared when sector 0 is rewritten?  So presumably nothing written anywhere in there would count in parity, or would be preserved when sector 0 was updated, correct?

 

It doesn't seem that the formatting (creating the ReiserFS) takes enough time to clear the entire remaining partition, it just sets up the directory and FS structures.  So any drive that has been in use in another system and is not pre-cleared could have blocks with garbage in them in the filesystem space.  But they wouldn't be 'seen' because there would be no pointers to them, but that same garbage *would* be seen by parity calcs.  If that's true, then if a drive had been previously used in another system and NOT pre-cleared, you absolutely could not depend on its restorability and there would be no indication of a problem. ????

 

Or is the parity calc aware of allocated FS blocks and only takes the data within them for the calcs?

 

Oh yes.  I can't tell you the number of times I've had to enter the old drive parms into a BIOS manually, in the (not so) 'good old days'.  But we were in heaven then with the new 386-25 or 33MHz motherboard and our 50-100MB full size drives...  Screamers, you know.

 

--Bill

Oh yes...  I go back a bit more than that...  Do you remember 8" floppy disks?  (or loading programs via punched paper tape on a teletype machine?)  Talk about slow...

 

No, I missed all the fun with punched tape and all that, but I was running a bulletin board (BBS) that I wrote in 1981 that used a 5.25 floppy boot drive (about 90KB) and two dual Persci 8" data drives with something like 220KB per disk.  What capacity!  Shortly after that I bought my first hard drive, a big external cabinet with a 10MB drive in it.  That and the adaptor cost just shy of $2k.  I still have a bunch of those 8" disks around here someplace.

 

--Bill


It is using the cache, and Linux will let it use as much as it can.  It is not specifically "dd", but the fact that you are reading/writing every block of the disk being cleared, and each block is being held in cache in case you reference it again shortly... It has no way to know the usage patterns of an unRAID server.  It just frees the least-recently-used buffer memory when it is needed and not currently being referenced. (And odds are pretty high that your disk size is greater than the amount of memory you have in the server, no matter how much RAM you have.  ;D )

Actually, no.  But cumulatively on one boot, yes.  I pre-cleared two 1.5T, then a 500G.  For some reason it didn't show problems until the end of the 500G drive.  I have 4G RAM installed (2.9G available to Linux), but I failed to notice how much was freed each time.

 

Actually, yes... Your 4Gig of RAM (2.9Gig available for programs/buffers) is most certainly less than even the smallest of your disk drives (500Gig).

 

Uh, yes, you're right.  I had some kind of a 'moment' back there...

 

--Bill


Basically, it would... but not for the reason you think.  Sectors 0 through 62 are unused, and not part of any partition, and NOT part of the protected data.  The protected data in unRAID does not start until the first partition.  (You can write anything you want to sector 2, but it will be gone when you restore.)

 

The good news, however, is that the first 64k of a reiserfs partition is unused by reiserfs, reserved specifically for boot records, etc.  So... you can write to the first sector of /dev/md1 (which is actually the first sector of the first partition) and it will be recorded on the disk and protected by parity.  Never write directly to /dev/sdX1, as that will cause a parity error.

 

That's really good information!  Writing a historical pre-clear byte there is problematic, though.  Let's say you had to quickly install a replacement drive (new, never written to) and didn't have time to pre-clear it.  So it's installed and being restored by the system via the parity drive.  Now, since the pre-clear byte from the original (now failed) disk is part of the restore, you now have an indicator that says this new drive was pre-cleared, when in fact it wasn't.  You'd need to manually clear that byte if it was to be accurate.

Correct...  The byte you are trying to use to indicate the drive status is part of the parity protected data and would be restored.  For that reason, it is not a place to put a drive-specific indicator (unless you added intelligence to it: if it held the drive serial number and it did not match the physical drive's serial, you could be pretty certain it was not the actual pre-cleared drive).

You said earlier "If you accidentally made a drive look like it was pre-cleared, and it had anything other than zeros on the remainder of the disk, you would completely break all the parity calculations and be unable to restore from a disk failure unless you did a full parity check and fixed all the parity ... prior to the failure...".  We'd have to go with the presumption that only preclear_disk.sh would set that byte appropriately. 

 

But I'm curious, here you're talking about anything that existed within the ReiserFS space?

Correct.  Only partition 1 is part of the "md" device.  The initial unused blocks are not currently part of it, although personal e-mail from Tom has indicated that will probably have to change in the future when he changes unRAID to be able to handle other file-system types.
  The first 62 sectors before the FS starts are cleared when sector 0 is rewritten?

No, sector 0 is the area usually used for the master boot record and the partition table. unRAID does not protect it with parity, but completely re-creates it if rebuilding a disk, as its contents are known and completely based on the drive geometry.  Sectors 1 through 62 are unused... and not part of the parity protected data.  I'm going to guess unRAID would not touch any prior contents in those sectors, even when re-constructing onto a new disk.  Those sectors are probably cleared when unRAID does its own clearing of drives, and I know my preclear_disk script does clear them. 

So presumably nothing written anywhere in there would count in parity, or would be preserved when sector 0 was updated, correct?
I think you are right... Sectors 1 through 62 are potential places for an indicator of some kind that will not be restored from a parity reconstruction.
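
A sketch of what such an indicator write might look like (the marker text is hypothetical, and per the above it is untested against unRAID's own clearing and rebuild behavior):

# sector 1 is outside the partition and outside parity; seek=1 skips the MBR
echo "cleared-by-preclear" | dd of=/dev/sdX bs=512 seek=1 count=1 conv=sync
# read it back later
dd if=/dev/sdX bs=512 skip=1 count=1 2>/dev/null | strings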

It doesn't seem that the formatting (creating the ReiserFS) takes enough time to clear the entire remaining partition, it just sets up the directory and FS structures.  So any drive that has been in use in another system and is not pre-cleared could have blocks with garbage in them in the filesystem space.

Exactly correct.  It is why a drive can't just be "added" to the array without either clearing it entirely, or completely rebuilding parity.
  But they wouldn't be 'seen' because there would be no pointers to them, but that same garbage *would* be seen by parity calcs. 
All bits are seen by parity... correct.  It is why unRAID does not need to pre-clear newly added disks if you do not have a parity disk assigned. When a parity disk is eventually assigned, it calculates parity on the bits it finds, regardless if they are parts of current files, or un-referenced parts of old deleted files.
  If that's true, then if a drive had been previously used in another system and NOT pre-cleared, you absolutely could not depend on its restorability and there would be no indication of a problem.
You could, as long as the parity drive was added after the data drive.

 

If you have an established array, and add a properly formatted reiserfs disk with data to it, I'm going to guess the array will either come on-line with no parity initially and a parity calc under way, or stay off-line until a full parity calc is performed, or force you to use the "restore" button to force a full parity calc as it stores a new configuration.

Or is the parity calc aware of allocated FS blocks and only takes the data within them for the calcs?

All bits are used, not just the allocated ones in the file-system.
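
A quick illustration of why: parity is just a bitwise XOR across the same position on every data disk, allocated or not.  For one hypothetical byte position:

# byte from disk1 = 0xA5, byte from disk2 = 0x0F
printf 'parity byte: 0x%02X\n' $(( 0xA5 ^ 0x0F ))     # prints 0xAA
# rebuilding a failed disk1 is the same operation in reverse
printf 'rebuilt byte: 0x%02X\n' $(( 0xAA ^ 0x0F ))    # prints 0xA5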

Oh yes.  I can't tell you the number of times I've had to enter the old drive parms into a BIOS manually, in the (not so) 'good old days'.  But we were in heaven then with the new 386-25 or 33MHz motherboard and our 50-100MB full size drives...  Screamers, you know.

 

--Bill

Oh yes...  I go back a bit more than that...  Do you remember 8" floppy disks?  (or loading programs via punched paper tape on a teletype machine?)  Talk about slow...

No, I missed all the fun with punched tape and all that, but I was running a bulletin board (BBS) that I wrote in 1981 that used a 5.25 floppy boot drive (about 90KB) and two dual Persci 8" data drives with something like 220KB per disk.  What capacity!  Shortly after that I bought my first hard drive, a big external cabinet with a 10MB drive in it.  That and the adaptor cost just shy of $2k.  I still have a bunch of those 8" disks around here someplace.

 

--Bill

I go back far enough to where the first computer I worked on was all discrete transistors and diode logic.  There were no IC chips...  The "motherboard" equivalent was racks and racks of equipment that filled a room about 50 by 100 feet in size.  The I/O device was a mechanical teletype machine.  We serviced at the individual "bit" level.  The dual-core-CPU itself was 7 feet high, and 12 feet wide.  (It was early computing, but real-time-redundant processing and way ahead of its time.  It used parity and hamming error detection on its memory.  It could detect and correct any single bit error, and detect any double bit errors... but be unable to correct them... All this back in the early 70s.  The two CPUs ran in parallel, on the same program, each from their own set of "RAM" and compared results with each other many times during each instruction cycle.  Any difference and diagnostics would kick in...)  Either CPU could use either set of RAM, or any part of it.  If a bank of RAM were to fail, both CPUs would share the mirrored bank, and we'd start working to fix the faulty one.  A single 16k bank of memory was 4 feet wide and 7 feet tall.  It was 47 bits wide... two 20-bit half words, and 7 bits of hamming and parity.  We had 32 banks of memory... each generated about 2000 watts of heat.  If the room AC was turned off, the room temperature rose about a degree a minute.  Lots of things stopped working well when the room temperature got to 115 degrees... (equipment AND people)  We know because we did a heat-stress test before the system went into production mode.  It was lots of fun.

 

My first home "terminal" was home-made... with the "firmware source code" on it supplied by the vendor on an 8" floppy disk... in CP/M format...  How many people today have the source code of the firmware of their terminal...  It emulated an adm-3 or a vt100 terminal...

 

Joe L.


Thanks for all the info, Joe!

 

No, sector 0 is the area usually used for the master boot record and the partition table. unRAID does not protect it with parity, but completely re-creates it if rebuilding a disk, as its contents are known and completely based on the drive geometry.  Sectors 1 through 62 are unused... and not part of the parity protected data.  I'm going to guess unRAID would not touch any prior contents in those sectors, even when re-constructing onto a new disk.  Those sectors are probably cleared when unRAID does its own clearing of drives, and I know my preclear_disk script does clear them.

 

When you say that unRAID "completely recreates it" (the MBR) are you talking about just when you put in an uncleared disk, or could that also happen during the formatting phase (pre or post)?  I notice in your script you don't ever clear it, but just write to the appropriate places to size the filesystem, partition #1 and pointers.  Is it just unnecessary as other bytes don't matter, or what?

 

It would be ok if unRAID clears all of 1-62 sectors with any flag bytes in it, because I think I'd only want that extra flag there if preclear_disk actually did the work.

 

Bytes 510 and 511 of MBR are preclear_disk.sh's flag, not really a function of MBR bytes, correct?  That's the only place in your code I can see a signature write, even though it's labeled as signing the MBR.  Are the 0x55AA values arbitrary or significant in some way (other than being mirrored alternating bit-sets)?

 

On a different subject, which mail binary is supposed to be used with this script?  It looks like it's the old standard /bin/mail by the arguments, but I don't find anything like that on the distribution site.  Bashmail is a totally different animal although I found some apparently incorrect references on the Forum that if you had unraid_notify running you could use its back-end for mail.

 

I go back far enough to where the first computer I worked on was all discrete transistors and diode logic.  There were no IC chips...

....

 

Joe L.

 

Oh man.  It sounds like that would have been both exciting and incredibly frustrating at the same time!  That was back when, during the early transition from tubes?  Sounds like you were quite 'inside' at the hardware level.  What insight that would give you!

 

And who even remembers CP/M for that matter.  It was kind of cool in its own way, though.  At least compared to what else was available at the time.  The whole OS could relocate anywhere in memory to suit the hardware or other requirements.  I had (among other things) one of those 'portable' Kaypro 10 CP/M computers, maybe around 1982-1983.  Very cool at the time with that huge 10M drive...

 

--Bill

 

 


Thanks for all the info, Joe!

 

No, sector 0 is the area usually used for the master boot record and the partition table. unRAID does not protect it with parity, but completely re-creates it if rebuilding a disk, as its contents are known and completely based on the drive geometry.  Sectors 1 through 62 are unused... and not part of the parity protected data.  I'm going to guess unRAID would not touch any prior contents in those sectors, even when re-constructing onto a new disk.  Those sectors are probably cleared when unRAID does its own clearing of drives, and I know my preclear_disk script does clear them.

 

When you say that unRAID "completely recreates it" (the MBR) are you talking about just when you put in an uncleared disk, or could that also happen during the formatting phase (pre or post)?  I notice in your script you don't ever clear it, but just write to the appropriate places to size the filesystem, partition #1 and pointers.  Is it just unnecessary as other bytes don't matter, or what?

Actually, I do completely set every byte in the MBR.  You just need to look a tiny bit closer to figure it all out.  All bytes matter.  The first 512 bytes in their entirety are the signature, not just a few bytes within them.  All of the bytes are compared to what is expected before the disk is considered to be pre-cleared.  It must not be marked as bootable, it must not be set as active, it must not have a partition type defined, it must have only one partition, and it must be partition 1, and the partition must start on sector 63 and use the entire drive based on the reported geometry.

 

In step 5, the first 446 bytes of the MBR are set to zero.

In step 7, the next 16 bytes, starting at 446, are set to define the first partition with the correct geometry based on the size of your drive.

In step 4, the next 48 bytes, starting at 462, are set to zero.

In step 6, the last two bytes of the sector are set to the MS-DOS signature used by the motherboard BIOS to detect an MBR.

 

The steps are not in "byte" order, but the entire first sector is written.
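
Pieced together from that description, the writes look roughly like this (a sketch, not the script's exact commands; replace sdX with the disk being cleared):

dd if=/dev/zero of=/dev/sdX bs=1 count=446                  # step 5: zero the boot-code area
dd if=/dev/zero of=/dev/sdX bs=1 count=48 seek=462          # step 4: zero partition entries 2, 3 & 4
printf '\125\252' | dd of=/dev/sdX bs=1 count=2 seek=510    # step 6: the 0x55AA signature word
# step 7 writes the 16-byte partition-1 entry at offset 446,
# computed from the drive's reported geometry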

It would be ok if unRAID clears all of 1-62 sectors with any flag bytes in it, because I think I'd only want that extra flag there if preclear_disk actually did the work.

I'm pretty certain it would clear all the bytes on the disk... but I've never looked, or tested by writing something to check for later and letting it do the normal clear-disk processing.  Perhaps some day I'll give it a try with my small disk... problem is I really don't want it in my array, so I'd have to re-compute parity when I remove it.

Bytes 510 and 511 of MBR are preclear_disk.sh's flag, not really a function of MBR bytes, correct?

Incorrect.

 

This quote is from a Microsoft site:

The Master Boot Record (MBR) is created when the disk is partitioned. The MBR contains a small amount of executable code called the master boot code, the disk signature, and the partition table for the disk. At the end of the MBR is a 2-byte structure called a signature word or end of sector marker, which is always set to 0x55AA. A signature word also marks the end of an extended boot record (EBR) and the boot sector.

 

That's the only place in your code I can see a signature write, even though it's labeled as signing the MBR.  Are the 0x55AA values arbitrary or significant in some way (other than being mirrored alternating bit-sets)?
It is significant to your BIOS.  It is the signature used for an MBR since the early days of the first partition tables.
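
You can see it for yourself on any partitioned disk with a read-only check like this:

# dump bytes 510-511 of sector 0; a partitioned disk shows "55 aa"
dd if=/dev/sdX bs=1 skip=510 count=2 2>/dev/null | od -A n -t x1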

On a different subject, which mail binary is supposed to be used with this script?  It looks like it's the old standard /bin/mail by the arguments, but I don't find anything like that on the distribution site.  Bashmail is a totally different animal although I found some apparently incorrect references on the Forum that if you had unraid_notify running you could use its back-end for mail.

True.  The script expects a standard "mail" command to be present that can be used to send mail.  At first I used a shell script I wrote that I copied to /bin/mail.  It used "netcat" to do the actual sending of the mail.  Eventually, we learned that "bash" could do networking on its own, and the bashmail version was created.  Today I use mailx (the real Linux mail command), which I install as an add-on and configure.  It in turn needs "sendmail" or an equivalent, but that is too big and prone to security issues, so I use a smaller sendmail replacement named ssmtp.

 

as part of my startup, I do this:

cd /boot/packages

installpkg ssmtp-2.61-i486-1suk.tgz

installpkg mailx-12.3-i486-1.tgz

ln -s /usr/sbin/ssmtp /usr/sbin/sendmail

cp /boot/custom/ssmtp.conf /etc/ssmtp/ssmtp.conf

The ssmtp.conf file I use looks like this:

root@Tower:/boot/packages# cat /boot/custom/ssmtp.conf

#

# /etc/ssmtp.conf -- a config file for sSMTP sendmail.

#

# The person who gets all mail for userids < 1000

# Make this empty to disable rewriting.

[email protected]

# The place where the mail goes. The actual machine name is required

# no MX records are consulted. Commonly mailhosts are named mail.domain.com

# The example will fit if you are in domain.com and your mailhub is so named.

mailhub=smtp-server.somewhere.rr.com

# Where will the mail seem to come from?

#rewriteDomain=

# The full hostname

[email protected]

ssmtp.conf can handle many more options needed for different types of mail servers... My server does not use encryption, so I can use just the few fields I defined.

 

I ended up using the real "mail" (installed from the package listed above) because the "at" and "cron" commands in linux sends status e-mail, however, it expects the mail command to be a binary executable. (it does not invoke it with a shell interpreter)  This works for me... and is easy once you download the two packages needed and define your ssmtp.conf file.  (all mail programs will need to be configured for your mail server)
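
Once the packages are installed and ssmtp.conf is filled in, a one-liner like this (hypothetical address) confirms the whole chain works:

echo "preclear mail test" | mail -s "test from Tower" [email protected]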

I go back far enough to where the first computer I worked on was all discrete transistors and diode logic.  There were no IC chips...

....

 

Joe L.

 

Oh man.  It sounds like that would have been both exciting and incredibly frustrating at the same time!  That was back when, during the early transition from tubes?  Sounds like you were quite 'inside' at the hardware level.  What insight that would give you!

When I learned electronics, it was during the transition from tubes to transistors in the 60's.  Yes, I had to trouble-shoot at the bit level...

Each bit in the "instruction" op-code represented specific logic gate combinations, and we could actually follow a logical 1 or 0 from place to place as it was moved and processed.  Registers in the cpu were collections of flip/flops... with logic to set and reset...    Gave great insight to logic flow... because it did.  Half of the time, the issues were not even in the defined logic, but where poor connections between un-affiliated logic was accidentally made.  (a small wire-clipping.. where is should not have been.  the equivalent today would be to sprinkle iron filings across the motherboard... and try to figure out where one was based on the failure)

And who even remembers CP/M for that matter.  It was kind of cool in its own way, though.  At least compared to what else was available at the time.  The whole OS could relocate anywhere in memory to suit the hardware or other requirements.  I had (among other things) one of those 'portable' Kaypro 10 CP/M computers, maybe around 1982-1983.  Very cool at the time with that huge 10M drive...

 

--Bill

A friend of mine had a similar Kaypro.  He printed the assembly listing of my terminal for me... (reading the 8" floppy disk!)

 

Joe L.


Actually, I do completely set every byte in the MBR.  You just need to look a tiny bit closer to figure it all out.  All bytes matter.  The first 512 bytes in their entirety are the signature, not just a few bytes within them.  All of the bytes are compared to what is expected before the disk is considered to be pre-cleared.  It must not be marked as bootable, it must not be set as active, it must not have a partition type defined, it must have only one partition, and it must be partition 1, and the partition must start on sector 63 and use the entire drive based on the reported geometry.

 

In step 5, the first 446 bytes of the MBR are set to zero.

In step 7, the next 16 bytes, starting at 446, are set to define the first partition with the correct geometry based on the size of your drive.

In step 4, the next 48 bytes, starting at 462, are set to zero.

In step 6, the last two bytes of the sector are set to the MS-DOS signature used by the motherboard BIOS to detect an MBR.

 

The steps are not in "byte" order, but the entire first sector is written.

 

Very clever!  That's what I first thought, but the 'signature' bytes threw me off that notion for some reason.  So what makes it show as uncleared after an unRAID format is that the filesystem type (83) is now defined, which breaks the tests.

 

True.  The script expects a standard "mail" command to be present that can be used to send mail.  At first I used a shell script I wrote that I copied to /bin/mail.  It used "netcat" to do the actual sending of the mail.  Eventually, we learned that "bash" could do networking on its own, and the bashmail version was created.  Today I use mailx (the real Linux mail command), which I install as an add-on and configure.  It in turn needs "sendmail" or an equivalent, but that is too big and prone to security issues, so I use a smaller sendmail replacement named ssmtp.

 

That seems like a lot of redundancy, since bashmail can already talk to a local SMTP server; it just doesn't have the right front end, plus it relies on unraid-notify's conf file for default SMTP info.  Seems like it wouldn't be too difficult to make a new front end for bashmail and add a .conf file and reader just for it, for a quick and simple delivery backend.  I don't have a netcat, what's that about?

 

as part of my startup, I do this:

cd /boot/packages

installpkg ssmtp-2.61-i486-1suk.tgz

installpkg mailx-12.3-i486-1.tgz

ln -s /usr/sbin/ssmtp /usr/sbin/sendmail

cp /boot/custom/ssmtp.conf /etc/ssmtp/ssmtp.conf

....

 

Yeah...  I guess that would be easier (less time involved) and more universal overall.  I was hesitant to be loading packages that are far more than they need to be to get the job done.  But then there's the time issue (mine)...

 

--Bill


as part of my startup, I do this:

cd /boot/packages

installpkg ssmtp-2.61-i486-1suk.tgz

installpkg mailx-12.3-i486-1.tgz

ln -s /usr/sbin/ssmtp /usr/sbin/sendmail

cp /boot/custom/ssmtp.conf /etc/ssmtp/ssmtp.conf

....

 

I'm not finding that ssmtp package on the Slackware site anywhere from build 11 to 13.  The only reference is to a libcddb that has a simple smtp backend.  Any idea where to locate it?

 

That version of mailx is there, however.

 

--Bill


http://slackware.sukkology.net/repository/ssmtp/ssmtp-2.61-i486-1suk.tgz

 

 


Hi, I did another preclear on a disk with new cables that even had syslog errors before.  Syslog is clean now, and the preclear succeeded, but it reports one difference in seek error rate.  Something to worry about?

 

Thanks, Guzzi

 

===========================================================================

=                unRAID server Pre-Clear disk /dev/sdg

=                      cycle 1 of 1

= Disk Pre-Clear-Read completed                                DONE

= Step 1 of 10 - Copying zeros to first 2048k bytes            DONE

= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE

= Step 3 of 10 - Disk is now cleared from MBR onward.          DONE

= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4      DONE

= Step 5 of 10 - Clearing MBR code area                        DONE

= Step 6 of 10 - Setting MBR signature bytes                    DONE

= Step 7 of 10 - Setting partition 1 to precleared state        DONE

= Step 8 of 10 - Notifying kernel we changed the partitioning  DONE

= Step 9 of 10 - Creating the /dev/disk/by* entries            DONE

= Step 10 of 10 - Testing if the clear has been successful.    DONE

= Disk Post-Clear-Read completed                                DONE

Disk Temperature: 32C, Elapsed Time:  15:00:48

============================================================================

==

== Disk /dev/sdg has been successfully precleared

==

============================================================================

S.M.A.R.T. error count differences detected after pre-clear

note, some 'raw' values may change, but not be an indication of a problem

58c58

<  7 Seek_Error_Rate        0x000e  200  200  051    Old_age  Always      -      0

---

>  7 Seek_Error_Rate        0x000e  100  253  051    Old_age  Always      -      0

============================================================================

 


No problems at all.  Since the rate of zero (the RAW number) is the same, there is no real change here.  For some reason, the scaled numbers, VALUE and WORST, have been reset.  The number 253 usually seems to indicate "Not Used Yet".

Thanks - I mounted the disk in the machine and got kernel errors.  It took me some time to find: it was the 16+ drive bug in the current beta release, even though I only had 16 drives (15+1, no cache drive).  So it seems the 16+ bug is not only related to the number of drives, but also to the slots!  After deleting super.dat (otherwise unRAID always crashed on startup when trying to sync) and moving all drives to the lower slots, it works fine... now syncing...

  • 3 weeks later...

Seagate 1.5TB 7200rpm:

 

===========================================================================

=                unRAID server Pre-Clear disk /dev/sdh

=                      cycle 1 of 1

= Disk Pre-Clear-Read completed                                DONE

= Step 1 of 10 - Copying zeros to first 2048k bytes            DONE

= Step 2 of 10 - Copying zeros to remainder of disk to clear it DONE

= Step 3 of 10 - Disk is now cleared from MBR onward.          DONE

= Step 4 of 10 - Clearing MBR bytes for partition 2,3 & 4      DONE

= Step 5 of 10 - Clearing MBR code area                        DONE

= Step 6 of 10 - Setting MBR signature bytes                    DONE

= Step 7 of 10 - Setting partition 1 to precleared state        DONE

= Step 8 of 10 - Notifying kernel we changed the partitioning  DONE

= Step 9 of 10 - Creating the /dev/disk/by* entries            DONE

= Step 10 of 10 - Testing if the clear has been successful.    DONE

= Disk Post-Clear-Read completed                                DONE

Disk Temperature: 35C, Elapsed Time:  18:41:55

============================================================================

==

== Disk /dev/sdh has been successfully precleared

==

============================================================================

S.M.A.R.T. error count differences detected after pre-clear

note, some 'raw' values may change, but not be an indication of a problem

54c54

<  1 Raw_Read_Error_Rate    0x000f  111  099  006    Pre-fail  Always      -      30858950

---

>  1 Raw_Read_Error_Rate    0x000f  119  099  006    Pre-fail  Always      -      206230051

58c58

<  7 Seek_Error_Rate        0x000f  070  060  030    Pre-fail  Always      -      12579753

---

>  7 Seek_Error_Rate        0x000f  071  060  030    Pre-fail  Always      -      12656194

64,66c64,66

< 189 High_Fly_Writes        0x003a  087  087  000    Old_age  Always      -      13

< 190 Airflow_Temperature_Cel 0x0022  064  046  045    Old_age  Always      -      36 (Lifetime Min/Max 35/36)

< 195 Hardware_ECC_Recovered  0x001a  059  031  000    Old_age  Always

---

> 189 High_Fly_Writes        0x003a  080  080  000    Old_age  Always      -      20

> 190 Airflow_Temperature_Cel 0x0022  065  046  045    Old_age  Always      -      35 (Lifetime Min/Max 35/38)

> 195 Hardware_ECC_Recovered  0x001a  056  031  000    Old_age  Always

70,72c70,72

< 240 Head_Flying_Hours      0x0000  100  253  000    Old_age  Offline      -      11016591118409

< 241 Unknown_Attribute      0x0000  100  253  000    Old_age  Offline      -      1999204935

< 242 Unknown_Attribute      0x0000  100  253  000    Old_age  Offline      -      1405887411

---

> 240 Head_Flying_Hours      0x0000  100  253  000    Old_age  Offline      -      183124520603740

> 241 Unknown_Attribute      0x0000  100  253  000    Old_age  Offline      -      635175937

> 242 Unknown_Attribute      0x0000  100  253  000    Old_age  Offline      -      1250497505

============================================================================

root@Tower:/boot#

 

I'm looking for some interpretations of this data.  I believe the firmware on this Seagate 1.5TB is not the one that was causing issues (it's a later one).

 

Head flying hours concerns me, as that's nearly 1,200 millennia.

 

Thanks!


Seagate 1.5TB 7200rpm:

 

70,72c70,72

< 240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       11016591118409

< 241 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       1999204935

< 242 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       1405887411

---

> 240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       183124520603740

> 241 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       635175937

> 242 Unknown_Attribute       0x0000   100   253   000    Old_age   Offline      -       1250497505

============================================================================

root@Tower:/boot#

 

I'm looking for some interpretations of this data.  I believe the firmware on this Seagate 1.5TB is not the one that was causing issues (it's a later one).

 

Head flying hours concerns me, as that's nearly 1,200 millennia.

 

Thanks!

The "raw" values are meaningful only to the manufacturer.    In some cases, smartctl can make sense of them...  Perhaps it is tracking nanoseconds...who knows... I know for sure is is not "hours," or you are using one of the Area 51 2.5TB drives that has been in a warehouse for about 75 years, but actually much much older.  ;)

 

What is important is the "Threshold" value is 000 and the current value is 100, nowhere near the threshold.
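
If you want to check that yourself, smartctl prints the columns side by side, and a crude scan like this flags any attribute whose normalized VALUE has fallen to its THRESH (a sketch; it assumes the usual smartctl -A column layout):

# VALUE is column 4, THRESH is column 6 in 'smartctl -A' output
smartctl -A /dev/sdh | awk '$6+0 > 0 && $4+0 <= $6+0 {print "at/below threshold:", $0}'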

 

Joe L.


Hello, I'm just getting ready to put this into my array as a data drive and was curious as to its health.

 

It's a Western Digital 1.5TB Green Drive and I ran preclear_disk with -c 2 (twice)

 

The UDMA_CRC_Error_Count looks like it went down so it all looks good ;-)

 

 

Oct 3 15:30:07 storage preclear_disk-start[4545]: ID# ATTRIBUTE_NAME 		FLAG   VAL WOR THR TYPE     UPDATED WHEN_FAILED RAW_VALUE
Oct 3 15:30:07 storage preclear_disk-start[4545]: 1 Raw_Read_Error_Rate 	0x002f 100 253 051 Pre-fail Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 3 Spin_Up_Time 		0x0027 100 253 021 Pre-fail Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 4 Start_Stop_Count 		0x0032 100 100 000 Old_age  Always - 6
Oct 3 15:30:07 storage preclear_disk-start[4545]: 5 Reallocated_Sector_Ct 	0x0033 200 200 140 Pre-fail Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 7 Seek_Error_Rate 		0x002e 100 253 000 Old_age  Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 9 Power_On_Hours 		0x0032 100 100 000 Old_age  Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 10 Spin_Retry_Count 		0x0032 100 253 000 Old_age  Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 11 Calibration_Retry_Count 	0x0032 100 253 000 Old_age  Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 12 Power_Cycle_Count 		0x0032 100 100 000 Old_age  Always - 5
Oct 3 15:30:07 storage preclear_disk-start[4545]: 192 Power-Off_Retract_Count 	0x0032 200 200 000 Old_age  Always - 4
Oct 3 15:30:07 storage preclear_disk-start[4545]: 193 Load_Cycle_Count 		0x0032 200 200 000 Old_age  Always - 6
Oct 3 15:30:07 storage preclear_disk-start[4545]: 196 Reallocated_Event_Count 	0x0032 200 200 000 Old_age  Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 197 Current_Pending_Sector 	0x0032 200 200 000 Old_age  Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 198 Offline_Uncorrectable 	0x0030 100 253 000 Old_age  Offline - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 199 UDMA_CRC_Error_Count 	0x0032 200 253 000 Old_age  Always - 0
Oct 3 15:30:07 storage preclear_disk-start[4545]: 200 Multi_Zone_Error_Rate 	0x0008 100 253 000 Old_age  Offline - 0

 

 

Oct 5 00:17:04 storage preclear_disk-finish[12534]: ID# ATTRIBUTE_NAME 		FLAG   VAL WOR THR TYPE     UPDATED WHEN_FAILED RAW_VALUE
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 1 Raw_Read_Error_Rate 	0x002f 200 200 051 Pre-fail Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 3 Spin_Up_Time 		0x0027 100 253 021 Pre-fail Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 4 Start_Stop_Count 		0x0032 100 100 000 Old_age  Always - 6
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 5 Reallocated_Sector_Ct 	0x0033 200 200 140 Pre-fail Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 7 Seek_Error_Rate 		0x002e 200 200 000 Old_age  Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 9 Power_On_Hours 		0x0032 100 100 000 Old_age  Always - 32
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 10 Spin_Retry_Count 	0x0032 100 253 000 Old_age  Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 11 Calibration_Retry_Count 	0x0032 100 253 000 Old_age  Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 12 Power_Cycle_Count 	0x0032 100 100 000 Old_age  Always - 5
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 192 Power-Off_Retract_Count 0x0032 200 200 000 Old_age  Always - 4
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 193 Load_Cycle_Count 	0x0032 200 200 000 Old_age  Always - 16
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 196 Reallocated_Event_Count 0x0032 200 200 000 Old_age  Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 197 Current_Pending_Sector 	0x0032 200 200 000 Old_age  Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 198 Offline_Uncorrectable 	0x0030 100 253 000 Old_age  Offline - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 199 UDMA_CRC_Error_Count 	0x0032 200 200 000 Old_age  Always - 0
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 200 Multi_Zone_Error_Rate 	0x0008 100 253 000 Old_age  Offline - 0

 

Thanks for your time,

Bobby


The UDMA_CRC_Error_Count looks like it went down so it all looks good ;-)

 

Oct 5 00:17:04 storage preclear_disk-finish[12534]: ID# ATTRIBUTE_NAME 		FLAG   VAL WOR THR TYPE     UPDATED WHEN_FAILED RAW_VALUE
Oct 5 00:17:04 storage preclear_disk-finish[12534]: 199 UDMA_CRC_Error_Count 	0x0032 200 200 000 Old_age  Always - 0

 

Thanks for your time,

Bobby

The value went from 253 (the factory initialized value) to 200 (a median value once the disk started collecting statistics).  Yes, it went down, and when it reaches the "THRESH" value of 000 the disk will be considered failed.

 

Lower SMART values are not better... they are worse.  But don't worry, the behavior of your drive is perfectly normal and it looks good.


The drive looks fine, obviously brand new with 0 Power_On_Hours initially!  The UDMA_CRC_Error_Count is 0, and stayed 0.  The number 253 seems to generally be used as an indicator of "Not Used Yet" for an attribute, so this probably represents the fact that the function handling UDMA_CRC_Error_Count was called at least once, and therefore since it was the first time, initialized WORST (the number in the WORST column) to its starting value of 200.  So that does not really represent a change.

 

At some point, I would like to create a guide to help others understand these SMART attributes.  They are unfortunately very inconsistent in their behavior, not only between the different attributes, but between the various drive models, and especially between brands.  In some cases, the RAW_VALUE is the counter to watch, in others, it is more important to watch what VALUE does, and there are many other possible behaviors.  To understand a particular SMART line, you have to understand how that SMART attribute is usually handled, keeping in mind who the manufacturer is, and to a lesser extent, what drive model it is.  I have tried researching it online, but information is really skimpy, nothing authoritative at all from the manufacturers themselves.  You can use this table of SMART attributes to help you understand them, but every manufacturer uses a different set of those attributes, even uses the common ones in different ways, even across their own drive models.
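
Incidentally, the "differences detected" blocks in the reports above look like classic diff output between two smartctl snapshots, so you can do the same check by hand around any stress test (file names here are arbitrary):

smartctl -a /dev/sdX > /tmp/smart_before.txt
# ... exercise the drive: a pre-clear, a long SMART self-test, a parity check ...
smartctl -a /dev/sdX > /tmp/smart_after.txt
diff /tmp/smart_before.txt /tmp/smart_after.txt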


Thank you everyone for your help and advice.

 

I just wanted to be extra careful ;-)

 

I recently bought a 500GB Seagate drive using it to replace the OS drive on my main computer.

 

The SMART reports on the drive looked good before I put it into the main system ...

 

Within three months it was dying ... I just thought it was Windows XP being a Microsoft product

 

By the time I realized the drive was dying, I had a month's worth of backups of a failing drive

 

I installed Windows 7 to try it out and it asked if I wanted to back up the failing drive ... (So I think Windows 7 might handle SMART information better than XP)

 

A guide would be very helpful for us in interpreting the SMART values ... any ambiguity in the definitions is probably there to keep people from RMA'ing drives earlier rather than later

 

 

