Everything posted by Koolkiwi

  1. Thanks. For the record: further research does indeed show that the ATA Attachment Command Set documentation confirms that, in the ATA Self-test log data structure, the Life timestamp is a 16-bit word value that "shall contain the power-on lifetime of the device in hours when command completion occurred." So there's no reasonable workaround for this, and I suspect unRAID systems still containing drives older than 65535 hours (~7.5 years) are probably a relatively rare edge case anyway.
  2. I have an old (but still good) drive that is in excess of 65535 power-on hours, ie. over 7.5 years! I just noticed, after running a SMART extended offline self-test, that the unRAID GUI shows the test as having completed at 8401 "LifeTime(hours)", instead of the expected ~73036 hours. As per the attached screenshot, the current SMART attribute RAW VALUE shown in the GUI is now 73955 (8y, 5m, 6d, 11h). So it appears the SMART attribute RAW VALUE is being shown correctly, but the "SMART self-test history" "LifeTime(hours)" value is being truncated (wrapped) as a 16-bit value (see the wraparound sketch after this list).
  3. Thanks jonathanm, I think the above (and your prior reply) does answer my question. What I appear to have been overlooking was: This being the case, I can now understand the logic for ensuring the full Parity drive represents the full Parity (for the full Parity-size space). If I've understood you correctly, you're saying that when a new larger drive is installed, it is (in effect) the new "empty" drive space that is being brought into line with the equivalent empty/cleared-space Parity. ie. It is the new Data space that is cleared, not the Parity that is updated to reflect whatever was on the new Data drive's space (see the XOR-identity sketch after this list). In terms of the Parity process itself, no, I'm not going to cry semantics. I do believe I have a good understanding of the Parity mechanism, and do understand that parity is calculated across the raw content of the drives (yep, I've had too long a career in computers, IT, and software development).
  4. Yes, of course. But the point of discussing questions of this nature is so that the developers can consider potential improvements to the product in some future version (if it is deemed a valid point).
  5. So you are basically saying that the continued reading is just to verify that the remainder of the Parity drive reflects the Parity (0?) for zeroed Data, which is in effect what you could say we have when there is no Data (see the parity sketch after this list). I can see the argument for doing this (ie. it's easier/quicker to add a pre-cleared new disk that is larger than all your other data disks!). But in terms of efficiency and parity-disk wear, it does seem rather wasteful to do this extended zero-parity check *every time* you do a Parity Check, just to allow for the single case of adding a new pre-cleared disk that is also larger than any existing Data disk! It would seem more efficient (and logical) to deal with this extra Parity only in the actual case of adding a new disk that is larger than all other data disks (even when pre-cleared), so that the added disk space would, on that one occasion only, have the extra Parity space initialised/updated (if needed). ie. In my case, if I added my first 8TB Data disk, there would just be a *once-off* need to initialise the extra 2TB of the Parity disk. Noting also that if my new 8TB drive had not been pre-cleared, the added Parity space would need to be initialised anyway!
  6. Hi Benson. Thanks, but I'm not questioning the MB/s speed of my system. I'm questioning why the Parity Check *needs* to continue beyond the Data Disk size. ie. Once all the Data drives have been parity-checked against the Parity Drive(s), I don't understand why the Parity Drive (alone) needs to continue to be read. There is no further Data Parity to calculate/check! The speed of the system is not relevant to this question. On any system, if you are performing a Parity Check before another operation (eg. upgrading a Data Drive), then the additional time waiting for the larger Parity Drive to (seemingly unnecessarily) read *all* the way through just adds to the overall time for completing the whole operation.
  7. A quick question I couldn't find the answer to... On my system I have an 8TB Parity Drive, while my largest Data drive is 6TB. Perhaps not uncommon, as I'm allowing for future data drive upgrades. When doing a Parity Check it takes, say, 2 - 3 days to check all the required Data Drive Parity (ie. up to the 6TB mark), after which all the data drives have spun down and only the Parity Drive remains active. The Parity Drive then diligently continues reading through its remaining 2TB (which is not protecting any data drives). This continued reading of the remaining 2TB of the Parity Drive adds another 10 - 12 hours to the Parity Check completion! So, my question is: why does a Parity Check need to continue reading the Parity Drive beyond the capacity of the largest Data Drive? Could this not be optimised, such that once the calculated Parity has been checked for all the Data Drives, the Parity Check is complete? ie. Assuming there is indeed no need to just continue reading the Parity Drive when there is no remaining Data Drive capacity to "Parity Check".
  8. Thanks Joe. Although, based on the syslogs I posted in my new 'import' - 'no device' post thread, unRAID appears to be in a loop, trying to 'import' (whatever that means) the drive every minute? I'm assuming from this that the drive hasn't actually started clearing yet? I can hear some disk activity, but the HDD light is not constantly on, in the way you would probably expect during a constant-write 'clearing'. PS: New topic thread on the current issue is here: http://lime-technology.com/forum/index.php?topic=597.0
  9. Thanks for proving this, Joe.L. I've just viewed the syslog, and I'm getting an 'import' loop, so I'll start another topic to seek help on this problem (since the question of this topic is now resolved).
  10. There were a couple of references I found posted by limetech: And more specifically: "Sure you can have just 1 data drive. You could also have a JBOD by simply not assigning a parity disk." So I think I might have another issue. I set up the BIOS as AHCI, based on my initial testing experiences in this thread: http://lime-technology.com/forum/index.php?topic=571.0 I think I will go back and set the ICH8R Southbridge BIOS back to 'IDE' mode for the SATA controller, and see if that makes any difference.
  11. Hi, Following on from my earlier thread testing the P5B-E motherboard, I have bought a 500GB SATA drive, and I am now trying to set up my first working unRAID system for testing. I'm using 4.0beta-7 with the following hardware:
      • Asustek P5B-E mobo with 6 x onboard ICH8R SATA (+ 2 JMicron SATA)
      • Onboard Attansic L1 GigE
      • Celeron D 3.06GHz 533FSB 512K Cache
      • 1GB DDR2 DRAM (2 x 512MB G.Skill CL4)
      • As cheap a PCI-e video card as I could find (XFX NVidia 7100GS)
      • Seagate 500GB 16MB Cache SATA drive
      My question is: how many hard drives are needed to get started? It seems like a simple question, but I can't find an answer anywhere, only a reference to not needing a parity drive. Although, I know that for a normal RAID5 system you need 3 drives to get started. Based on the post mentioning that you don't need a parity drive, I assumed I should be able to start up unRAID with only a single drive (allocated as Data Disk 1), with no parity drive allocated yet, on the assumption that when I get a couple more drives, I would add a second Data drive and then the Parity drive. However, when I allocate the only drive as Data Disk 1 and click Start, the web page shows that the drive is "mounting" (I assume it is formatting the drive at this time). It stays like this for maybe 10 minutes or more when I come back and refresh the page, and I can also hear disk activity. But if I come back a little later and refresh the page, I get no response (ie. the web server is not responding). If I log on to the console as root, the server is still responding (ie. it has not locked up), so I can shut down successfully etc. If I restart the system, I am back at square one! Any ideas what I am doing wrong? Or is it a problem caused by only having one drive attached? Do I need to go and buy a couple more drives to get started (although I would prefer to test that all is operating with one drive first, if possible)? TIA
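A minimal sketch of the 16-bit wraparound described in posts 1 and 2, assuming the self-test log field simply holds the power-on hours modulo 65536 (the 73937 figure here is inferred by adding 65536 back onto the 8401 shown, not read from the screenshot):

```python
# The ATA self-test log "LifeTime(hours)" field is a single 16-bit word,
# so it can only hold values 0..65535 and wraps beyond that.
actual_hours = 73937              # assumed true power-on hours at completion
reported = actual_hours % 65536   # what a 16-bit field ends up holding
print(reported)                   # -> 8401, matching the truncated GUI value
```

As post 1 concludes, the SMART attribute's larger raw value survives intact; only the 16-bit self-test log entry wraps, so the truncation can only be undone by adding back multiples of 65536 by hand.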
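The XOR-identity sketch referenced in post 3, assuming (as that post does) that parity is a plain XOR across the raw contents of the data drives; the byte values are made up for illustration:

```python
# XOR-ing in an all-zero drive is the identity operation, so a
# pre-cleared (zeroed) new data drive leaves the existing parity valid.
d1, d2 = 0b10110010, 0b01101100       # raw bytes from two existing data drives
parity_before = d1 ^ d2
parity_after = d1 ^ d2 ^ 0b00000000   # new zeroed drive joins the array
assert parity_before == parity_after  # no parity rebuild needed
```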
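The parity sketch referenced in post 5: a toy model, again assuming plain XOR parity and with invented drive sizes and contents, showing why every parity byte beyond the largest data drive must be zero, and hence what the tail end of a Parity Check is verifying:

```python
from functools import reduce

# Toy XOR parity across unequal data drives: a drive contributes zero
# beyond its own capacity, so parity past the largest data drive is zero.
data_drives = [
    bytes([0x0F, 0xA0, 0x55]),        # 3-"block" data drive
    bytes([0xF0, 0x0A, 0xAA, 0x11]),  # 4-"block" data drive (largest)
]
parity_size = 6  # parity drive larger than any data drive

def byte_at(drive, i):
    # Beyond a drive's capacity it contributes nothing, i.e. zero.
    return drive[i] if i < len(drive) else 0

parity = bytes(
    reduce(lambda acc, d: acc ^ byte_at(d, i), data_drives, 0)
    for i in range(parity_size)
)
print(parity.hex())  # 'ffaaff110000' - the last two bytes must be 0x00
```

A check that stopped at the largest data drive would leave those trailing zero bytes unverified, which is exactly the trade-off posts 5 - 7 are debating.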