mvdzwaan

Members
  • Posts: 116
  • Joined
  • Last visited
  • Gender: Undisclosed
  • Personal Text: CM 590, X7SPA-HF, 13x 2TB

mvdzwaan's Achievements

Apprentice (3/14)

Reputation: 1

  1. Completely missed this thread. I created the new skins in 2012 with the intention of not altering the awk files (eventually I did change a couple, but only minor ones). As I did send the files to Joe's SVN but at that time they were not integrated, I never looked at unMENU updates again, so as not to overwrite my personal skin files. So sorry for not supporting them, and big thanks to Zoggy for doing "my" work. Now to finally upgrade my 2 unRAID boxes to unMENU 1.6 as well...
  2. Same here. No offense to anyone, and I appreciate the good intentions, but I would discourage users from downloading from "unknown" sources like private mirrors. I know the files can be "secured" using SHA hashes on the official site, but not all users are willing or able to perform the checks to validate the downloaded file...
  3. This is what I worry about. So if something does go wrong, is it possible to recover? My main reason for choosing unRAID (the same reason why I ran Windows Home Server v1) is that when everything fails, the disks themselves are normally formatted Linux disks, which can be placed in another machine and read. So even in the case of a double (or triple) disk failure you can always get to the data on the non-failed disks, and perhaps try the failed disks in a PC as well (most of the time a disk failure is not a complete failure, and you can recover most data from it). All other RAID systems employ "custom" striping schemes, and if the machine cannot recover the array, most of the time *all* data on *all* disks is lost. Recovery is very difficult in those cases.
  4. There's faulty logic in it: mentioning how many years he's been using unRAID is somehow supposed to prove that what he's saying is correct. The fact remains: simple parity can only detect an odd number of bits in error. It cannot detect the position of the error, and therefore it cannot correct errors. http://www.raid-recovery-guide.com/raid5-write-hole.aspx Garycase said exactly the same. You can use existing backups (as he does) or md5/crc hashes (as I'm doing) to detect the actual data error. Based on both Garycase's and my own experience, whenever there were errors during a (correcting) parity sync, the error was always in the parity information. The 'errors' counter in the main unRAID interface only reflects errors during read/write, and not necessarily the example you gave where a bit just changes value without being written or read. But your HDD also stores a CRC for each sector and will throw a CRC error when such a sector is accessed (and thereby also have unRAID show an error). From what I know, in this case (the HDD throws a read error) unRAID will automatically try to reconstruct the sector from parity and write it again. If the write then fails, the drive will be red-balled; if the write succeeds (because the sector is fine, or the HDD remaps it), everything is OK again. The only question for me is whether the 'read which causes the correct' is also triggered during a (correcting) parity sync...
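The limitation described in that post can be shown in a few lines: XOR parity (the scheme unRAID and RAID-4/5 use) detects that a stripe is inconsistent, but cannot locate the bad block; it only rebuilds data as erasure coding, i.e. when the failed disk is already known. A minimal sketch with made-up block contents:

```python
from functools import reduce

# Hypothetical 4-byte blocks from three data disks.
data = [b"\x01\x02\x03\x04", b"\x10\x20\x30\x40", b"\xaa\xbb\xcc\xdd"]

def xor_blocks(blocks):
    """Byte-wise XOR across equal-length blocks."""
    return bytes(reduce(lambda a, b: a ^ b, col) for col in zip(*blocks))

parity = xor_blocks(data)

# A single bit silently flips on disk 1; recomputing parity DETECTS it...
corrupted = list(data)
corrupted[1] = bytes([corrupted[1][0] ^ 0x01]) + corrupted[1][1:]
assert xor_blocks(corrupted) != parity
# ...but parity alone cannot say WHICH of the three disks changed.

# Reconstruction only works when the failed disk is known: XOR of the
# surviving disks plus parity recovers disk 1's original block.
rebuilt = xor_blocks([corrupted[0], corrupted[2], parity])
assert rebuilt == data[1]
```

This is why the post recommends separate md5/crc hashes or backups: they identify *which* copy of the data is wrong, which parity by itself cannot.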
  5. Did you even read what I wrote? Or did you just start talking? I read both, and thought the response from garycase was appropriate.
  6. That's an opinion, not fact, despite the bold. Actually, it is a fact. I'm not purchasing licenses until there's an official release with >2TB support. And if I find a better solution elsewhere in the meantime, then the opportunity to gain me as a customer will be lost. That's the long and short of it. Tom has, I believe, declared it stable, with minor UI fixes to be done for the v5 release. It also has a modified-timestamp problem which will be fixed in v5 final (I would not have minded a 16d release...)
  7. There are problems with the modified timestamp on files. See http://lime-technology.com/forum/index.php?topic=28638.0
  8. I'm not adding a new disk, but replacing an existing one. unRAID then rebuilds the disk, so it will rebuild it as a 2TB MBR-unaligned disk, and then upsize it when it sees the disk is bigger. But in this case it not only needs to resize the reiserfs filesystem, it also needs to convert the disk to GPT.
  9. But are they automatically converted when upsizing the disk after the rebuild?
  10. Thanks, now I know what I did wrong. I had the flash share open when stopping the array, and thus got the disconnected message. By the way, I never had any problems replacing the files while the array was started. As these are unpacked and loaded into memory, I guess there should be no problem.
  11. Something which has 'bothered' me ever since running the 5.0 beta/rc releases and upgrading them is the upgrade instructions. They state: 'Prepare the flash: either shutdown your server and plug the flash into your PC or Stop the array and perform the following actions referencing the flash share on your network:' But whenever I stop the array, Samba is stopped and thus the flash share is no longer available. I always copy the bz files while the array is started and then reboot, without any problems. Are the instructions wrong, or am I doing something which should not be done?
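One way to make the "copy while the array is started" approach above a little safer is to verify the copy before rebooting. A small sketch of a checksum helper (the bzimage/bzroot names come from the release files; the paths in the comment are placeholders, not real unRAID paths):

```python
import hashlib

def md5_of(path, chunk=1 << 20):
    """MD5 of a file, read in chunks so large boot images don't fill RAM."""
    h = hashlib.md5()
    with open(path, "rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

# Usage on the server would look something like (placeholder paths):
#   md5_of("./unraid-release/bzimage") == md5_of("/path/to/flash/bzimage")
#   md5_of("./unraid-release/bzroot")  == md5_of("/path/to/flash/bzroot")
# If both hashes match, the copy to the flash completed intact and a
# reboot should pick up the new kernel and root image.
```

Since the running system keeps bzimage and bzroot in memory, the files on flash are not read again until reboot, which is why replacing them on a started array works.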
  12. I have 2 WD EADS 2TB disks, which I want to replace with Seagate HDD.15 4TB disks. Currently these disks are MBR-unaligned, but the 4TB disks should be GPT (aligned). Will unRAID's resizing process after the rebuild convert the disks to GPT and make them aligned?
  13. Status: http://lime-technology.com/forum/index.php?topic=28788.0
  14. It's a plugin issue? I only have two things installed, as I have been waiting for 5 to go final before really jumping on board, so I am relatively new to it. I have literally only loaded unMENU and SimpleFeatures that I can think of. I have barely got through the installation/configuration guide, so there is nothing "hardcore" running at all. I would classify SF as hardcore... Whether it's the number of people that use it or the complexity of this plugin, the number of problem threads I've read on this forum that could be traced back to SF is very large, in my opinion. Not to discredit the makers, of course. Just try it without SF and post your results...
  15. That should not be true in -rc11. Only the user specified in the "FTP user(s)" field should be allowed access (though that user will have full access as mentioned above). In -rc11a you can enter a list of users separated by spaces. Prior to -rc11 any user could access via FTP. The FTP app on the Network Services page, along with this functionality, was added in rc11 precisely to "lock down" FTP so that only a specific user could access via FTP - otherwise known as a "bandaid solution". You're right, only the FTP-specific users get all rights. I worked around my problem by changing the local root to the desired user share. With all its limitations, it's actually exactly enough for what I wanted to do with FTP.
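For reference, the "change the local root" workaround mentioned above maps to a chroot-style setting in the FTP daemon's configuration. Assuming the daemon is vsftpd (an assumption on my part, as is the share name), the relevant fragment would look roughly like:

```
# vsftpd.conf fragment -- sketch only; /mnt/user/Backups is a placeholder
local_enable=YES
write_enable=YES
# Drop every FTP login directly into one user share instead of /
local_root=/mnt/user/Backups
# Keep logged-in users confined to that tree
chroot_local_user=YES
```

This confines FTP sessions to a single share, which matches the limited use case described in the post.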