mvdzwaan

Everything posted by mvdzwaan

  1. Completely missed this thread. I created the new skins in 2012 with the intention of not altering the awk files (eventually I did change a couple, but only minor edits). I did send the files to Joe for svn, but at that time they were not integrated, and I never looked at unmenu updates again, so as not to overwrite my personal skin files. So sorry for not supporting them, and big thanks to Zoggy for doing "my" work. Now to finally upgrade my 2 unraid boxes to unmenu 1.6 as well...
  2. Same here. No offense to anyone, and I appreciate the good intentions, but I would discourage users from downloading from "unknown" sources like private mirrors. I know the downloads can be "secured" by publishing SHA hashes on the official site, but not all users are willing or able to perform the checks needed to validate the downloaded file...
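     For anyone who does want to verify a download, a minimal sketch of such a hash check in Python; the file name and expected hash here are hypothetical placeholders, not real published values:

     ```python
     import hashlib

     def sha256_of(path, chunk_size=1 << 20):
         """Compute the SHA-256 hash of a file, reading it in chunks."""
         h = hashlib.sha256()
         with open(path, "rb") as f:
             for chunk in iter(lambda: f.read(chunk_size), b""):
                 h.update(chunk)
         return h.hexdigest()

     # Hypothetical values: substitute the real download and the hash
     # string published on the official site.
     expected = "0123abcd..."
     actual = sha256_of("unmenu-1.6.zip")
     print("OK" if actual == expected else "MISMATCH - do not install")
     ```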
  3. This is what I worry about. So if something does go wrong, is it possible to recover? My main reason for choosing Unraid (the same reason why I ran Windows Home Server v1) is that when everything fails, the disks themselves are normal Linux-formatted disks, which can be placed in another machine and read there. So even in the case of a double (or triple) disk failure you can always get to the data on the non-failed disks, and perhaps try the failed disks in a PC as well (most of the time a disk failure is not a complete failure, and you can recover most data from it). All other RAID systems employ "custom" striping schemes, and if the machine cannot recover the array, most of the time *all* data on *all* disks is lost. Recovery is very difficult in these cases.
  4. There's faulty logic in it: mentioning how many years he's been using unRAID is somehow supposed to be proof that what he's saying is correct. The fact remains: a simple parity code can only detect an odd number of bit errors. It cannot determine the position of the error, and therefore it cannot correct errors. http://www.raid-recovery-guide.com/raid5-write-hole.aspx Garycase said exactly the same. You can use existing backups (as he does) or md5/crc hashes (as I do) to detect which data is actually in error. Based on both Garycase's and my own experience, whenever there were errors during a parity (correct) sync, the error was always in the parity information. The 'errors' counter in the main unRAID interface only reflects errors during reads/writes, not per se the example you gave where a bit just changes value without being written or read. But your HDD also stores a CRC for each sector and will throw a CRC error when such a sector is accessed (and thereby also make unRAID show an error). From what I know, in this case (the HDD throws a read error) unRAID will automatically try to reconstruct the sector from parity and write it back. If the write then fails, the drive will be redballed; if the write succeeds (because the sector is fine, or the HDD remaps it), everything is OK again. The only question for me is whether the 'read which triggers the correction' also happens during a parity (correct) sync...
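     To illustrate the detect-but-not-locate point, a toy sketch in Python (plain byte strings, not unRAID's actual implementation): XOR parity flags that a single-bit error exists somewhere across the set, while a stored per-disk hash is what pins down the corrupted copy.

     ```python
     from functools import reduce
     import hashlib

     # Toy "disks": equal-length byte strings, plus an XOR parity block.
     disks = [b"\x0f\x10", b"\xa0\x05", b"\x33\x44"]
     parity = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*disks))

     # Flip one bit on disk 1 to simulate silent corruption.
     corrupted = list(disks)
     corrupted[1] = bytes([corrupted[1][0] ^ 0x01]) + corrupted[1][1:]

     # Parity detects that *something* is wrong...
     check = bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*corrupted))
     print("parity mismatch:", check != parity)   # True - error detected

     # ...but the mismatch looks identical no matter which disk was hit,
     # so parity alone cannot point at the bad disk. A per-disk hash
     # recorded earlier singles out the corrupted copy.
     saved = [hashlib.md5(d).hexdigest() for d in disks]
     bad = [i for i, d in enumerate(corrupted)
            if hashlib.md5(d).hexdigest() != saved[i]]
     print("corrupted disk index:", bad)          # [1]
     ```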
  5. Did you even read what I wrote? Or did you just start talking? I read both, and thought the response from garycase was appropriate.
  6. That's an opinion, not fact, despite the bold. Actually, it is a fact. I'm not purchasing licenses until there's an official release with >2TB support. And if I find a better solution elsewhere in the meantime, then the opportunity to gain me as a customer will be lost. That's the long and short of it. Tom has, I believe, declared it stable, with minor UI fixes to be done for the V5 release. It also has a modified-timestamp problem which will be fixed in v5 final (I would not have minded a 16d release...)
  7. There are problems with the modified timestamp on files. See http://lime-technology.com/forum/index.php?topic=28638.0
  8. I'm not adding a new disk, but replacing an existing one. Unraid then rebuilds the disk, so it will rebuild it as a 2TB MBR-unaligned disk and then upsize it once it sees the disk is bigger. But in this case it not only needs to resize the reiserfs filesystem, it also needs to convert the disk to GPT.
  9. But are they automatically converted when upsizing the disk after the rebuild?
  10. Thanks, now I know what I did wrong. I had the flash share open when stopping the array, and thus got the disconnected message. By the way, I never had any problems replacing the files while the array was started. As these are unpacked and loaded into memory, I guess there should be no problem.
  11. Something which has 'bothered' me ever since running the 5.0 beta/rc and upgrading it is the upgrade instructions. They state: 'Prepare the flash: either shutdown your server and plug the flash into your PC or Stop the array and perform the following actions referencing the flash share on your network:' But whenever I stop the array, samba is stopped and thus the flash share is not available anymore. I always copy the bz files while the array is started and then reboot, without any problems. Are the instructions wrong, or am I doing something which should not be done?
  12. I have 2 WD EADS 2TB disks which I want to replace with Seagate HDD.15 4TB disks. Currently these disks are MBR-unaligned, but the 4TB disks should be GPT (aligned). Will the resizing process of unRAID after rebuilding convert the disks to GPT and make them aligned?
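     For context on why the 4TB disks must be GPT: MBR partition entries store the start sector and the sector count as 32-bit values, so with 512-byte sectors an MBR partition tops out around 2.2TB. A quick check of the arithmetic in Python:

     ```python
     # MBR partition entries hold the start LBA and the sector count
     # as 32-bit fields, so the largest addressable partition is:
     SECTOR_SIZE = 512        # bytes, the classic logical sector size
     MAX_SECTORS = 2 ** 32    # limit of a 32-bit LBA field

     limit_bytes = SECTOR_SIZE * MAX_SECTORS
     print(f"MBR limit: {limit_bytes / 1e12:.2f} TB")  # ~2.20 TB

     # A 4 TB drive exceeds that, so it needs GPT, whose 64-bit LBAs
     # are effectively unlimited for current drive sizes.
     print(4e12 > limit_bytes)  # True
     ```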
  13. Status update: http://lime-technology.com/forum/index.php?topic=28788.0
  14. It's a plugin issue? I only have two things installed, as I have been waiting for 5 to go final before really jumping on board, so I am relatively new to it. I have literally only loaded unmenu and SimpleFeatures that I can think of. I have barely got through the installation/configuration guide, so there is nothing "hardcore" running at all. I would classify SF as hardcore. Whether it's down to the number of people that use it or to the complexity of the plugin, the number of problem threads I've read on this forum that could be traced back to SF is very large, in my opinion. Not to discredit the makers, of course. Just try it without SF and post your results...
  15. That should not be true in -rc11. Only the user specified in the "FTP user(s)" field should be allowed access (though that user will have full access, as mentioned above). In -rc11a you can enter a list of users separated by spaces. Prior to -rc11 any user could access via FTP. The FTP app on the Network Services page, along with this functionality, was added in rc11 precisely to "lock down" FTP so that only a specific user could access it - otherwise known as a "bandaid solution". You're right, only the FTP-specific users get full rights. I circumvented my problem by changing the local root to the desired user share. With all its limitations, it's actually exactly enough for what I wanted to do with FTP.
  16. Is it intended behaviour that the FTP service does not honor user rights and/or share settings? Everything gets published and everything is available to each account.
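     A quick way to see what each account is actually given, using Python's standard ftplib (the hostname and credentials below are hypothetical placeholders): if share settings were honored, restricted users would see fewer entries or be refused outright.

     ```python
     from ftplib import FTP, error_perm

     # Hypothetical server and accounts - substitute your own.
     SERVER = "tower"
     ACCOUNTS = [("alice", "secret1"), ("bob", "secret2")]

     # Log in as each user and list the top-level directories to
     # compare what each account can reach.
     for user, password in ACCOUNTS:
         try:
             with FTP(SERVER) as ftp:
                 ftp.login(user, password)
                 print(user, "sees:", ftp.nlst())
         except error_perm as exc:
             print(user, "was refused:", exc)
     ```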
  17. Neither of those cores gives you 64-bit, actually: both Willamette and Northwood are 32-bit only; 64-bit support (EM64T) only arrived with the later Prescott revisions.
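     Rather than guessing from the core name, on Linux you can check directly: the kernel reports the 'lm' (long mode) flag in /proc/cpuinfo for any CPU capable of running 64-bit code. A minimal sketch:

     ```python
     # Check /proc/cpuinfo for the "lm" (long mode) flag, which the
     # kernel reports on any CPU capable of running 64-bit code.
     def cpu_is_64bit(cpuinfo_path="/proc/cpuinfo"):
         with open(cpuinfo_path) as f:
             for line in f:
                 if line.startswith("flags"):
                     return "lm" in line.split(":", 1)[1].split()
         return False

     print("64-bit capable:", cpu_is_64bit())
     ```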
  18. My suggestions were reactions to samcon's question: "Since the drive was already precleared, can I use the script to just put the "signature" on it to indicate the drive is cleared? If possible, is this a good idea?"
  19. Because he stated he cleared a 3TB disk with preclear 1.1. unRAID did not recognize it as precleared either, so in fact it wasn't precleared in the sense that you would have a guarantee it was 100% zeroed.
  20. This is incorrect. pre-clear will take much longer than a parity check and the array is protected during a parity check. During a normal parity check the array is protected, but in this case a drive has been added which might not be zeroed, yet because the signature was present unRAID trusted the disk to be all zeros... If the disk did not contain zeros and a disk fails, it will be reconstructed using wrong parity data (or wrong disk data, depending on how you look at it). If the parity check finds no corrections (and thus the new disk was indeed 100% zeroed), then the array was protected during the check; if the parity check finds corrections, the array was not protected during the check. Regarding the speed issue: if you preclear it again using one pass with only the write phase, this can be faster than a parity check, which reads all disks. In my case, with a 4TB parity disk and new 3TB data disks, a write-only preclear is always faster than a full parity check: writing starts at 100MB/s, whereas my parity checks never go beyond 50MB/s. Another point: your previous pre-clear might have written only 2TB, so the 'stress-test' component of pre-clearing has not been run on the complete disk - another reason to do a full (read-write-read) preclear on this new disk. A full preclear will be slower than the parity check.
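     To make the reconstruction risk concrete, a toy XOR sketch in Python (plain byte strings, not unRAID's on-disk format): parity that assumes the new disk is all zeros rebuilds a failed disk incorrectly when the new disk actually holds stale data.

     ```python
     from functools import reduce

     def xor(blocks):
         """Byte-wise XOR across equal-length blocks."""
         return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

     old_disks = [b"\x11\x22", b"\x33\x44"]
     parity = xor(old_disks)          # parity over the existing disks

     # A disk with a pre-clear signature is added WITHOUT updating
     # parity, on the assumption that it is all zeros (zeros don't
     # change an XOR)...
     new_disk = b"\xff\x00"           # ...but it actually has stale data

     # Now old_disks[0] fails and is rebuilt from parity + survivors:
     rebuilt = xor([parity, old_disks[1], new_disk])
     print(rebuilt == old_disks[0])   # False - reconstruction is wrong

     # Had new_disk really been all zeros, the rebuild would be correct:
     print(xor([parity, old_disks[1], b"\x00\x00"]) == old_disks[0])  # True
     ```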
  21. If you ask me, I would preclear it with 1.13 again just to be sure. If you don't, you still have to do a parity check (with correct) to verify it was indeed precleared, and that would take more time than preclearing it again. Also, during that parity check your array will not be protected. So just preclear it again...
  22. If you accept the fact that you're going to rebuild parity (which I would advise instead of a 'trust my parity/array' action), you only have to figure out which drive is the parity drive. All data drives will have a reiserfs filesystem, while the parity disk will not contain a filesystem (as every bit on it is the parity of the corresponding data bits).
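     A minimal sketch of such a check in Python, assuming reiserfs's usual on-disk layout (superblock at byte offset 65536, magic string 52 bytes into it) - worth verifying against your reiserfs version before relying on it, and it needs root to read raw devices:

     ```python
     # Identify which device lacks a reiserfs superblock (the parity disk).
     # Assumes the standard reiserfs layout: superblock at 64 KiB, magic
     # string ("ReIsErFs" / "ReIsEr2Fs" / "ReIsEr3Fs") 52 bytes in.
     REISERFS_MAGICS = (b"ReIsErFs", b"ReIsEr2Fs", b"ReIsEr3Fs")

     def has_reiserfs(device):
         with open(device, "rb") as f:
             f.seek(65536 + 52)
             magic = f.read(10)
         return magic.startswith(REISERFS_MAGICS)

     # Hypothetical device names - substitute your actual drives.
     for dev in ("/dev/sdb1", "/dev/sdc1", "/dev/sdd1"):
         kind = "data (reiserfs)" if has_reiserfs(dev) else "no filesystem -> parity?"
         print(dev, ":", kind)
     ```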
  23. It's not the total size of the array that determines the time needed, but the size of your parity disk - and then you have to factor in the different disk sizes, etc.
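     As a rough worked example with illustrative numbers only: a parity check has to read the full parity disk end to end, so its duration scales with the parity disk size divided by the average sustained throughput.

     ```python
     # Rough parity-check duration estimate: the whole parity disk is
     # read once, so time = parity size / average throughput.
     def check_hours(parity_tb, avg_mb_per_s):
         return parity_tb * 1e12 / (avg_mb_per_s * 1e6) / 3600

     # Illustrative numbers: a 4 TB parity disk averaging 50 MB/s
     # (throughput drops on the inner tracks, so the average sits
     # well below the peak speed).
     print(f"{check_hours(4, 50):.1f} hours")   # ~22.2 hours
     ```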
  24. And still you knew exactly what this user meant. I run an internet speed-test site, and there you also have MB/s and Mb/s, but whichever one I use, there's always a large group which does not understand it. Maybe the units are simply not clear enough by themselves. Before you know it this leads to a Linux/Windows discussion on whether casing is enough of a differentiator. I'm inclined to say no; as a programmer I think the variables value1 and Value1 should be the same.
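     For the record, the two units differ by a factor of eight, which is exactly why the casing carries real meaning; a tiny disambiguation helper:

     ```python
     # Mb/s = megaBITS per second (network link speeds);
     # MB/s = megaBYTES per second (file transfer rates).
     # The two differ by a factor of 8, so the casing matters.
     def mbit_to_mbyte(mbit_per_s):
         return mbit_per_s / 8

     # A "100 Mb/s" line tops out at 12.5 MB/s of payload at best:
     print(mbit_to_mbyte(100))   # 12.5
     ```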