Koolkiwi

Everything posted by Koolkiwi

  1. Thanks. For the record: Further research does indeed confirm that, per the ATA Attachment Command Set documentation, the lifetime timestamp in the ATA Self-test log data structure is a 16-bit word value that "shall contain the power-on lifetime of the device in hours when command completion occurred." So there's no reasonable workaround for this, and I suspect unRAID systems still containing drives older than 65535 hours (~7.5 years) are probably a relatively rare edge case anyway.
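     For anyone else with a drive this old, the true completion hour can still be worked out by hand. A minimal bash sketch, using only the two values from my original report below (the wrapped 16-bit log value and the Power_On_Hours Attribute 9 raw value); the variable names are just illustrative:
        LOG_HOURS=8401        # "LifeTime(hours)" as reported in the self-test log (16-bit, wrapped)
        ATTR9_HOURS=73955     # current Power_On_Hours (Attribute 9) raw value
        WRAPS=$(( (ATTR9_HOURS - LOG_HOURS) / 65536 ))
        echo $(( LOG_HOURS + WRAPS * 65536 ))   # prints 73937 - the approximate un-wrapped completion hour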
  2. I have an old (but still good) drive that is in excess of 65535 Power on hours. ie. Over 7.5 years! I just noticed, after doing a SMART Extended offline self-test, that the unRAID GUI is showing the test as being completed after 8401 "LifeTime(hours)", instead of the expected ~73937 hours. As per the screenshot attached, you can see the current SMART Attribute RAW VALUE shown in the GUI is now: 73955 (8y, 5m, 6d, 11h). So, it appears the SMART Attribute RAW VALUE is being shown correctly, but the "SMART self-test history" "LifeTime(hours)" value is being truncated (wrapped) as a 16-bit value.
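     A quick bash illustration of the wrap, using just the raw value above (the self-test completed a few power-on hours before that reading was taken):
        echo $(( 73955 % 65536 ))   # prints 8419 - the same 16-bit truncation that turns ~73.9k hours into the 8401 shown for the test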
  3. Thanks jonathanm, I think the above (and your prior reply) does answer my question. What I appear to have been overlooking was: This being the case, I can now understand the logic for ensuring the full Parity Drive represents the full Parity (for the full Parity size space). If I've understood you correctly, you're saying that when a new larger drive is installed, it is (in effect) the new "empty" drive space that is being brought into line with the equivalent empty / cleared space Parity. ie. It is the new Data space that is cleared, not the Parity being updated to reflect whatever was on the new Data drive space. In terms of the Parity process itself, no I'm not going to cry semantics. I do believe I have a good understanding of the Parity mechanism, and do understand that parity is calculated across the raw content of the drives (yep, I've had too long a career in computers, IT, and software development).
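     As a toy illustration of parity over raw content (plain bash arithmetic on single byte values, nothing unRAID-specific), XOR-ing in an all-zero drive leaves the parity byte untouched, which is exactly why clearing the new data space keeps the existing parity valid:
        echo $(( 0xA5 ^ 0x3C ^ 0x00 ))   # parity byte across three data drives, one of them all zeroes
        echo $(( 0xA5 ^ 0x3C ))          # same result (153) - a cleared drive contributes nothing to the parity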
  4. Yes, of course. But the point of discussing questions of this nature is so that the developers can consider potential improvements to the product in some future version (if it is deemed a valid point).
  5. So you are basically saying that the continued reading is just to verify that the remainder of the Parity drive reflects the Parity (0?) for Zeroed Data (which is in effect what you could say we have when there is no Data). I can see the argument for doing this (ie. easier / quicker to add a pre-cleared new disk that is larger than all your other data disks!). But, in terms of efficiency / parity disk wear, that does seem rather wasteful to be doing this extended zero parity check *every time* you do a Parity Check, just to allow for the single case of adding a new pre-cleared disk that is also larger than any existing Data disk! It would seem more efficient (and logical) to deal with this extra Parity only in the actual case of you adding a new disk that is larger than all other data disks (even when pre-cleared). So that the added disk size space would only, on that one occasion, then have the extra Parity space initialised / updated (if needed). ie. In my case, if I added my first 8TB Data disk, then there would just be a *once-off* need to initialise the extra 2TB of the Parity disk. Noting also that if my new 8TB drive had not been pre-cleared, the added Parity space is going to need to be initialised anyway!
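     Incidentally, if anyone wanted to confirm that the parity beyond the largest data disk really is nothing but zeroes, a rough (untested) one-liner along these lines should do it - /dev/sdX is a placeholder for the parity device, and the skip count is only approximately the 6TB mark:
        dd if=/dev/sdX bs=1M skip=6000000 2>/dev/null | tr -d '\0' | head -c1 | wc -c
        # prints 0 if every byte beyond (roughly) 6TB is zero; 1 as soon as a non-zero byte is found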
  6. Hi Benson. Thanks, but I'm not questioning the MB/s speed of my system. I'm questioning why the Parity Check *needs* to continue beyond the Data Disk size. ie. Once all the Data drives have been Parity checked against the Parity Drive(s), then I don't understand why the Parity Drive (alone) needs to continue to be read? There is no further Data Parity to calculate / check! Speed of the system is not relevant to this question. On any system, if you are performing a Parity Check before another operation (eg. upgrading a Data Drive), then the additional time waiting for the larger Parity Drive to (seemingly unnecessarily) read *all* the way through, just adds to the overall time for completing the whole operation.
  7. A quick question I couldn't find the answer to... On my system I have an 8TB Parity Drive, while my largest Data drive is 6TB. Perhaps not uncommon, as I'm allowing for future data drive upgrades. When doing a Parity Check it takes say 2 - 3 days to check all the required Data Drive Parity (ie. up to the 6TB mark), after which all the data drives have spun-down and only the Parity Drive remains active. The Parity Drive then diligently continues reading through its remaining 2TB (which is not protecting any data drives). This continued reading of the remaining 2TB of the Parity Drive adds another 10 - 12 hours to the Parity Check completion! So, my question is: Why does a Parity Check need to continue reading the Parity Drive, beyond the capacity of the largest Data Drive? Could this not be optimised, such that once the calculated Parity has been Checked for all the Data Drives, the Parity Check is complete? ie. Assuming there is indeed no need to just continue on with reading the Parity Drive, when there is no remaining Data Drive capacity to "Parity Check".
  8. Thanks redia. But that's exactly what I did, when I said I'd "sent a support request last Saturday to ask how I get replacement keys". The copy confirmation auto-reply was from Tom's email address, so I didn't send any further email directly to him (no point adding further to his inbox). However, I've now replied to the support confirmation email, on the assumption it's just been overlooked. Perhaps he will see that email or this forum post. Fingers crossed my suspect backup flash drive hangs on! EDIT: All good... I have now heard back from Tom and sent him the new GUIDs for some replacement keys. Looks like it was just an overlooked email issue, something we all suffer from every now and then.
  9. I have a problem. My 4+ year old flash drive failed. Since I paid for a 2 pack license I have managed to get up and running again with my backup flash drive (I strongly recommend everyone buying the 2 pack!), however the second flash drive is in a sorry state, with a burn mark on the end - so I suspect its failure is imminent. My problem is that the license keys are tied to the GUIDs of my old failed and failing flash drives. I've gone out and purchased 2 brand new better quality SanDisk Cruzer flash drives, and sent a support request last Saturday to ask how I get replacement keys. However I have had no response, other than the copy confirmation of my email enquiry. Is there another process I need to follow to get support? If my second flash disk fails I will be without my unRAID! Aaargh - a very scary prospect!
  10. Hi, I've just come back after a long period running 4.2.4 without any issues. I decided to upgrade to stable 4.5.6 so I'm a little more current. However I seem to have a problem. To conserve power, I intentionally built my unRAID Server without a VGA card. This has been working fine in the past. I do have a spare VGA card that I can install if I ever have an issue that requires me to access the server from the console. However for normal use, I have no VGA card installed (reducing power consumption and heat), and I have no keyboard connected. ie. Just a bare minimum network connected black box. However, after upgrading to 4.5.6, my server will no longer boot-up if I do not have my VGA card installed. Basically, without the VGA card installed I see no disk activity, and I cannot network connect as the ethernet interface has presumably not been initialized at the point the boot appears to fail / freeze. With the VGA card installed all works fine, except I have a toasty hot display card unnecessarily using up power. I'm not sure how to diagnose this, as when the system refuses to start up without a VGA card, I have no way to connect to capture a log to see where the boot process has stopped. Any ideas on what to do next, or what may have changed since 4.2.4 that now requires a VGA card to be present for successful boot? Thanks Greg
  11. Hi Flambot, fellow Kiwi here. I started out using the Seagate 500GB SATA drives, and have just made the jump to 750GB in the form of the WD7500AAKS. What triggered me was the price of the WD7500AAKS dropping below NZ$0.50/GB. Still more expensive than you can now get the 500GB drives (per GB), but considering I paid NZ$0.50+ per GB for the 6x Seagate 500GB drives I have, I'm a happy camper! Reading the various reviews, the WD7500AAKS appears to perhaps be an even better drive than the Seagate 500GB, in terms of noise level and performance. The only negative comment I saw was the relatively higher start-up current, but if you have a decent Power Supply this shouldn't be any major concern. I bought my first WD7500AAKS a couple of weeks ago and swapped out my parity drive (giving me only a 500GB additional data drive from the old parity), but looking forward to adding the next one with a huge 750GB capacity per drive. Assuming I eventually add another 7 drives to my array, this will equate to an extra 1.75TB over what I would have had with the 500GB drives. I don't have the screen in front of me at the moment, but from memory the 750GB WD was actually more than 150% of the formatted capacity of the Seagate 500GB drives. I can check this later, unless someone else has the numbers.
  12. Thanks for the info Tom. Just to clarify though... is the re-scan button truly gone altogether? Not just in regard to writable user shares? ie. If writing to the existing individual drive shares will it also no longer be necessary to "re-scan"? And if so (which is fantastic by the way), I assume the automated process will not reset the user shares network connections like the current "re-scan" button does. ie. So you could be watching a movie (uninterrupted), while completing a write to the unRAID server?
  13. Refer my post over here (with reference link): http://lime-technology.com/forum/index.php?topic=571.msg3710#msg3710 ie. AHCI has full driver support for SATA features such as NCQ, instead of emulating SATA as a PATA drive. What the performance benefit for unRAID is (if any) I have not tried to test, but it is possibly more significant in terms of hardware support. ie. The single fully functional AHCI driver will likely work with any AHCI compatible controller (eg. the JMicron). Also can't answer the formatting question, although I would not have thought this would affect the existing data on the drive? Can anyone else chime in here?
  14. Thanks for the clarification Tom. I overlooked the kernel change, which of course is a significant change. I would also agree that moving to a later kernel to pick up related bug fixes is a very good move, ahead of adding new features. I would much prefer new features are added to as stable a base as possible, rather than building on wobbly foundations! John: Please have patience, we are all awaiting these new features, I'm sure Tom will release 4.2beta when it is good and ready.
  15. Hi Joe. Just thought I would add my views on the points you raise. I can see what you mean by the version numbering and lack of a beta release, but I would add that this 4.1 release is really no different to the 4.0 final release process in this regard. ie. If you refer to the change log there were 2 changes made between the last 4.0beta release and the subsequent release that was labelled 4.0 final (so what would have happened if there were issues introduced by these 2 'final release' changes?). Ideally a beta release process should continue until there are no reported issues from 'beta' testing, such that the 'final' stable release has only the version number changed from the last beta compile (ie. so no new issues can possibly be introduced). With regard to version numbering, I agree that the change of a minor version number is necessary for these subtle changes to an already released 4.0 final. However, the real issue is that the more substantial feature enhancements of both 'security' and 'writable user shares' surely warrant a more significant version number increment than the one used for the subtle changes that produced this 4.1 release. Personally, I would have considered the current release as more of a 4.0.1 release, with the 'security' and 'writable shares' additions still being considered a 4.1 release increment. Apologies for rambling Tom, but being a software developer myself, you tend to focus on these sorts of things.
  16. Hi Tom. You certainly had me rushing over to the forum when I saw the 4.1 thread, thinking security beta yippee! But a big thank you for a very welcome release to provide the 16 drive support for our 6+2 SATA Motherboards. PS: I'm still very keen to know if you are configuring the MD1500 machines for AHCI mode on the SATA ports, as I have done with my P5B-E setup? ie. I don't think I have seen any official comment on whether this is now the preferred / recommended config.
  17. Agreed! Very nice choice of components Tom! I didn't even know about the Asus P5B-VM DO variant motherboard. This is the first VM motherboard I have seen with 6x SATA ICH8DO + 2x JMicron SATA. One step up from the P5B-E that I used, as it comes in a smaller form-factor, includes the on-board video, and uses Intel GigaLAN. Same initial question as Joe... assume you now have 15 (or 16 with eSATA) max drive support available? Also, curious if you are configuring these machines for AHCI mode on the SATA ports, as I have done with my P5B-E setup. Great to see you have also thought to offer a rack mount option, I can see that this will be very popular for business use. Now you just need to get the security implemented, and you will have a huge potential market open up for SMB customers. NB: Also spotted that you have sourced a lime colored USB flash drive! Now, since I live in New Zealand, I'll need to check local parts availability to build my own MD1500, to avoid the killer freight costs to ship one of these pre-built! edit: Hmmm... the international freight on the 'lighter weight' MD1500/LL actually isn't too horrific. I just need to fill up my existing unRAID server so I can justify an MD1500/LL as my second unRAID solution.
  18. Since Tom mentioned he was looking at MediaWiki (how's this progressing, Tom?), I thought I would mention that the XBox Media Center wiki is nicely done, and is also based on MediaWiki: http://www.xboxmediacenter.com/wiki Tom, this might provide some ideas for how best to set up an official unRAID wiki section of your website?
  19. Entirely over to you. Personally, I only add a new drive when my last drive is nearly full. This is mainly to spread the cost of my unRAID Server, but also to take advantage of falling HDD prices (since building the Server 2 months ago, that last 500GB drive I bought was $50 cheaper than the first one I bought). I figure at my current rate I am adding a drive a month, so I guess I'll be building a second unRAID server in about a year. A related observation is that the top priority unRAID development seems to be writable Shares. At first I thought this was a good idea, but in practice I prefer to write data to a specific drive (via a hidden write share), and make my 'User Shares' read-only for other home users to access. The thought of writing a DVD folder backup via writable user shares, and having different files within the folder ending up on different drives does not appeal to me. I would prefer to dictate which drive I am putting each collection of related files onto (so they are all in one place). As I fill up each drive before moving onto the next, this is not an inconvenience. It probably comes down to what you are using unRAID for, but in my view there does not appear to be any real need for Writable User Shares for the primary intended unRAID purpose of a media archive storage server. My vote would probably therefore go to adding Security as the top priority. ie. So we could password protect selected folder shares etc. and also individually select which folders were published or hidden.
  20. Thanks, but as already noted in my post above:
  21. Hmmm... this post seems to be related to this question: http://lime-technology.com/forum/index.php?topic=505.msg4266#msg4266
  22. For what it's worth, I rip the whole DVD to hard drive, using either DVD Decrypter or AnyDVD, and then use PowerDVD to play it back by just pointing to the 'movie from hard drive folder' option, so it plays back exactly as the original disc would have. I then pack my DVDs away in a box to save shelf space and preserve them! Given the relatively low cost of storage these days, a typical DVD is only about 7GB, meaning you can fit about 70 on a 500GB drive. So your 700 DVDs would consume about 10 drives (11 with parity). Or to put it another way, given that there are also many DVDs that are less than 7GB, a full 500GB based 14 drive unRAID server could hold around 1000 full DVD images. The other way to put this DVD storage space in context, is that newer High Definition content takes up significantly more space than even a full DVD image. Even a recorded HDTV movie is typically a 10-12GB+ transport stream file, and if you were thinking of future HD DVD / BD, then you are looking at 20-25GB or even ~45GB per movie!
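     The back-of-envelope sums, in bash, if anyone wants to play with the per-disc size assumption (~7GB is just a typical figure):
        echo $(( 700 * 7 ))               # ~4900GB of storage for 700 DVDs at roughly 7GB each
        echo $(( (700 * 7 + 499) / 500 )) # ~10 x 500GB data drives (rounded up), plus one more for parity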
  23. This is a good question. I believe the devices contributing most to the power drain in any PC are: - CPU - Northbridge - Memory - Video card - Hard Drives For unRAID: - Video card can be removed (or certainly a low performance / lower power PCI card can be used). - Hard Drives are spun down when not being used (what is the spun-down power drain? - I don't know - but based on the spun-down drives returning to room temperature, I suspect not much power used). Therefore, the other controllable factor relating to power consumption is the choice of CPU / memory speed / clock speed. I have chosen a Celeron CPU. Clearly a Prescott core Pentium 4 would be a bad choice in terms of power consumption. In the interests of minimising power consumption, it would therefore seem useful to know the optimum CPU / unRAID performance trade-off. ie. The point where installing a faster / higher power consuming CPU has greatly diminished performance gains.
  24. I only looked into this briefly, with no success, then gave up as I didn't have the time. If I recall correctly, what I did was something like this (I should have documented it at the time): 'ethtool eth0' will show you the current 'Wake-on' status, which I think needs to be 'g' for wake up on magic packet. So if your Wake-on is showing as 'd' (for example), you would need to change it to 'g' via something like: 'ethtool -s eth0 wol g'. The first issue is that this is apparently reset on start-up, so you would need to add this to the start-up script. I think where I got to was I successfully set the interface to 'g' Wake-on mode, but was unable to get the PC to power up with a magic packet using its NIC address. Not sure if the problem was at the linux end, or if it is a motherboard / BIOS setting issue (eg. I don't recall finding any specific BIOS setting related to WOL, only wake on PCI or PCI-e event etc.). However, if I google the P5B-E, I seem to find plenty of references to it supporting WOL? I would certainly like to progress this if anyone has any ideas / advice or has just simply succeeded with a WOL unRAID setup, as it would be great to be able to fire up the unRAID server from the Home Theater, without having to wander through to a back room to manually Power Up.
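     For the record, the commands I was experimenting with looked roughly like this (a sketch only - the MAC address is just a placeholder, the wake command is run from another machine on the LAN, and as noted I never got the final wake-up working):
        ethtool -s eth0 wol g           # enable wake-on-magic-packet; needs re-applying at each boot, eg. from the 'go' script on the flash
        etherwake 00:11:22:33:44:55     # or: wakeonlan 00:11:22:33:44:55 - send the magic packet to the server NIC's MAC address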
  25. Under 'Settings' on the unRAID management web page, make sure the name you enter for the 'Workgroup' is the same as the Windows Workgroup name you have set on your Windows PC's (and Xbox if such a setting exists). This will ensure the unRAID server appears as a local network share. No it does not hurt to close the webpage. The webpage is just showing the current status when the page was loaded, so if you re-open your browser later it is just the same as refreshing the page (web browsing is stateless - ie. the connection is not maintained). If you did not close down your unRAID server correctly ('cleanly'), this will result in a parity check on restart. Make sure you use the web management page to first [stop] the array, and then press the [Power down] button for a clean Power down! If doing a parity check, I believe the entire drive must be read to check the parity of all bits, therefore whether you have 5GB or 50GB of files, the speed will be the same. ie. The parity check speed will be related to the number of drives in the array, not the size of the files on the drives. Ensuring you always do a clean powerdown will avoid unRAID having to check the parity due to being in an unknown state at the next powerup.
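     To give a rough feel for parity check duration (assuming, purely for illustration, a sustained read rate of around 50MB/s - not a measured figure):
        echo $(( 500000 / 50 ))   # ~10,000 seconds, ie. a little under 3 hours to read a 500GB drive end to end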