Koolkiwi (Members)
  • Content Count: 108
  • Community Reputation: 1 Neutral
  • Rank: Advanced Member
  • Gender: Undisclosed
  1. Thanks. For the record: Further research does indeed show that the ATA Attachment Command Set documentation confirms that the ATA Self-test log data structure stores the lifetime timestamp as a 16-bit word value that "shall contain the power-on lifetime of the device in hours when command completion occurred." So there's no reasonable workaround within the spec, and I suspect unRAID systems still containing drives older than 65535 hours (~7.5 years) are probably a relatively rare edge case anyway.
  2. I have an old (but still good) drive that is in excess of 65535 Power on hours. ie. Over 7.5 years! I just noticed, after doing a SMART extended offline self-test, that the unRAID GUI is showing the test as being completed after 8401 "LifeTime(hours)", instead of the expected ~73036 hours. As per the screenshot attached, you can see the current SMART attribute RAW VALUE shown in the GUI is now: 73955 (8y, 5m, 6d, 11h). So, it appears the SMART attribute RAW VALUE is being shown correctly, but the "SMART self-test history" "LifeTime(hours)" value is being truncated to 16 bits.
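The 16-bit truncation described in the two posts above can be sketched as follows. This is an illustration only (the function names are mine, not unRAID internals); the second function shows a display-side reconstruction one could imagine using the wider SMART Power-On-Hours raw attribute, even though, as noted above, the self-test log itself cannot store the true value.

```python
WRAP = 65536  # a 16-bit field rolls over every 65536 power-on hours (~7.5 years)

def selftest_lifetime_hours(true_hours: int) -> int:
    """Lifetime as stored in the 16-bit ATA self-test log field."""
    return true_hours & 0xFFFF

def recovered_hours(logged: int, smart_raw_hours: int) -> int:
    """Hypothetical display-side fix: add back the full 65536-hour wraps
    implied by the (wider) SMART Power-On-Hours raw attribute."""
    return logged + (smart_raw_hours // WRAP) * WRAP

print(selftest_lifetime_hours(73036))  # 7500 - wrapped, not the real ~73036
print(recovered_hours(8401, 73955))    # 73937 - one full wrap added back
```

This matches the numbers reported above: a drive past ~73000 hours logs a small wrapped value, while the SMART raw attribute still shows the true total.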
  3. Thanks jonathanm, I think the above (and your prior reply) does answer my question. What I appear to have been overlooking was the point above. This being the case, I can now understand the logic for ensuring the full Parity Drive represents the full Parity (for the full Parity-size space). If I've understood you correctly, you're saying that when a new larger drive is installed, it is (in effect) the new "empty" drive space that is being brought into line with the equivalent empty / cleared space Parity. ie. It is the new Data space that is cleared, not the Parity being updated to match it.
  4. Yes, of course. But the point of discussing questions of this nature is so that the developers can consider potential improvements to the product in some future version (if it is deemed a valid point).
  5. So you are basically saying that the continued reading is just to verify that the remainder of the Parity drive reflects the Parity (0?) for zeroed Data (which is in effect what you could say we have when there is no Data). I can see the argument for doing this (ie. easier / quicker to add a pre-cleared new disk that is larger than all your other data disks!). But, in terms of efficiency / parity disk wear, that does seem rather wasteful to be doing this extended zero-parity check *every time* you do a Parity Check, just to allow for the single case of adding a new pre-cleared disk larger than all the existing data disks.
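The zero-parity reasoning above can be sketched with simple XOR parity (a simplification of unRAID's actual P parity, used here only to show why zeroed space leaves parity untouched):

```python
from functools import reduce
from operator import xor

def parity_byte(data_bytes):
    """P parity at one offset: XOR of that offset's byte on every data drive."""
    return reduce(xor, data_bytes, 0)

# Beyond the end of every data drive, each slot contributes zero,
# so a correct parity drive must read zero there:
print(parity_byte([0x00, 0x00, 0x00]))  # 0

# Adding a pre-cleared (all-zero) drive leaves existing parity unchanged,
# which is why keeping the unused tail of the parity drive verified
# lets a larger pre-cleared disk be added without a parity rebuild:
print(parity_byte([0xA5, 0x3C]) == parity_byte([0xA5, 0x3C, 0x00]))  # True
```

So the tail of an oversized parity drive is, in effect, parity over all-zero data, and the check confirms it still reads as zero.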
  6. Hi Benson. Thanks, but I'm not questioning the MB/s speed of my system. I'm questioning why the Parity Check *needs* to continue beyond the largest Data Disk size. ie. Once all the Data drives have been Parity-checked against the Parity Drive(s), I don't understand why the Parity Drive (alone) needs to continue to be read? There is no further Data Parity to calculate / check! Speed of the system is not relevant to this question. On any system, if you are performing a Parity Check before another operation (eg. upgrading a Data Drive), then the additional time is the concern, whatever the throughput.
  7. A quick question I couldn't find the answer to... On my system I have an 8TB Parity Drive, while my largest Data drive is 6TB. Perhaps not uncommon, as I'm allowing for future data drive upgrades. When doing a Parity Check it takes say 2 - 3 days to check all the required Data Drive Parity (ie. up to the 6TB mark), after which all the data drives have spun down and only the Parity Drive remains active. The Parity Drive then diligently continues reading through its remaining 2TB (which is not protecting any data drives). Why does this continued reading of the remaining 2TB need to happen?
  8. Thanks redia. But that's exactly what I did, when I said I'd "sent a support request last Saturday to ask how I get replacement keys". The confirmation auto-reply came from Tom's email address, so I didn't send any further email directly to him (no point adding further to his inbox). However, I've now replied to the support confirmation email, on the assumption it's just been overlooked. Perhaps he will see that email or this forum post. Fingers crossed my suspect backup flash drive hangs on! EDIT: All good... I have now heard back from Tom and sent him the new GUIDs for the replacement keys.
  9. I have a problem. My 4+ year old flash drive failed. Since I paid for a 2-pack license I have managed to get up and running again with my backup flash drive (I strongly recommend everyone buy the 2-pack!), however the second flash drive is in a sorry state, with a burn mark on the end - so I suspect its failure is imminent. My problem is that the license keys are tied to the GUIDs of my old failed and failing flash drives. I've gone out and purchased 2 brand new, better-quality SanDisk Cruzer flash drives, and sent a support request last Saturday to ask how I get replacement keys.
  10. Hi, I've just come back after a long period running 4.2.4 without any issues. I decided to upgrade to stable 4.5.6 so I'm a little more current. However, I seem to have a problem. To conserve power, I intentionally built my unRAID Server without a VGA card. This has been working fine in the past. I do have a spare VGA card that I can install if I ever have an issue that requires me to access the server from the console. However, for normal use I have no VGA card installed (reducing power consumption and heat), and I have no keyboard connected. ie. Just a bare-minimum network connection.
  11. Hi Flambot, fellow Kiwi here. I started out using the Seagate 500GB SATA drives, and have just made the jump to 750GB in the form of the WD7500AAKS. What triggered me was the price of the WD7500AAKS dropping below NZ$0.50/GB. Still more expensive than you can now get the 500GB drives (per GB), but considering I paid NZ$0.50+ per GB for the 6x Seagate 500GB drives I have, I'm a happy camper! Reading the various reviews, the WD7500AAKS appears to be perhaps an even better drive than the Seagate 500GB, in terms of noise level and performance. The only negative comment I saw was that…
  12. Thanks for the info Tom. Just to clarify though... is the re-scan button truly gone altogether? Not just in regard to writable user shares? ie. If writing to the existing individual drive shares will it also no longer be necessary to "re-scan"? And if so (which is fantastic by the way), I assume the automated process will not reset the user shares network connections like the current "re-scan" button does. ie. So you could be watching a movie (uninterrupted), while completing a write to the unRAID server?
  13. Refer my post over here (with reference link): http://lime-technology.com/forum/index.php?topic=571.msg3710#msg3710 ie. AHCI has full driver support for SATA features such as NCQ, instead of emulating SATA as a PATA drive. What the performance benefit for unRAID is (if any) I have not tried to test, but it is possibly more significant in terms of hardware support. ie. The single fully functional AHCI driver will likely work with any AHCI-compatible controller (eg. the JMicron). Also, I can't answer the formatting question, although I would not have thought this would affect the existing data.
  14. Thanks for the clarification Tom. I overlooked the kernel change, which of course is a significant change. I would also agree that moving to a later kernel to pick up related bug fixes is a very good move, ahead of adding new features. I would much prefer new features are added to as stable a base as possible, rather than building on wobbly foundations! John: Please have patience, we are all awaiting these new features, I'm sure Tom will release 4.2beta when it is good and ready.
  15. Hi Joe. Just thought I would add my views on the points you raise. I can see what you mean by the version numbering and lack of a beta release, but I would add that this 4.1 release is really no different to the 4.0 final release process in this regard. ie. If you refer to the change log, there were 2 changes made between the last 4.0beta release and the subsequent release that was labelled 4.0 final (so what would have happened if there were issues introduced by these 2 'final release' changes?). Ideally a beta release process should continue until there are no reported issues from users.