BartDG

Everything posted by BartDG

  1. Thanks Gary! Yes, I'm running v6. Unfortunately, I don't have a second system at my disposal, so that's sadly not an option. I don't think I fully understand what you mean by having unRAID "forget" the current configuration and re-defining the array, but I guess that will become clear when I look into the Tools menu. Edit: I think I understand it now. You want me to: - pull all the drives from the old config - set up a new config (one parity, one data) with two of the new 4TB drives, add the 2TB drive as an existing data drive, and add the old 3TB parity drive to the array (isn't this about the same as the "swap disable" procedure?) - let the parity sync run until it's finished - then add the failing 3TB drive and copy over its data? (But what do I add it to then? Not the array, I suppose? Or can I add it as an extra drive somehow, outside the array?) Is this what you mean? With regards to the "user share copy bug", I believe what you're saying is to copy from disk to disk, but not using any shares. Heh, come to think of it, that would have been easier with v5, because I could then see the disks themselves under "Network" in Windows Explorer (which I always thought was useless, and was glad to see gone in v6). Alas, now I cannot. To completely get rid of this risk, wouldn't it be better to use Midnight Commander and move the files on the server itself from disk to disk? That way no shares are used at all. I don't even see how I could do it from Windows otherwise; am I not always using shares when copying over the network? Something like the sketch below is what I have in mind. Thanks for being so patient with me.
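     A minimal sketch of the kind of disk-to-disk copy I mean, run on the server itself (untested; disk1 and disk3 are just example source and destination disks, and rsync and Midnight Commander are both included with unRAID v6 as far as I know):

         # copy straight between the disk mount points, never mixing /mnt/user and /mnt/diskX
         # (mixing the two in one copy is what triggers the "user share copy bug")
         rsync -av --progress /mnt/disk1/ /mnt/disk3/

         # or interactively, with Midnight Commander on the console
         mc /mnt/disk1 /mnt/disk3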
  2. Quoting Gary's advice: "It's a bit more dangerous, as there are points in it where a failure would result in data loss. Since you've had a good parity check, I'd not use it. I'd first replace the parity drive -- keeping the old parity drive untouched until the new drive has completed its parity sync and you've done a parity check afterwards. Then I would ADD a new drive to the array (one of the other 4TB drives); and then copy all of the data off of the "failing" drive to the new 4TB drive [be CERTAIN you do NOT do anything in this copy process that will result in the "user share copy bug" -- this would cause massive data loss => if you're not sure what this means, ASK]. There are then a couple of ways to remove the "failing" drive -- but what I'd do is a New Config that does not include the "failing" drive but DOES include the other new 4TB drive and the old parity drive. When you then Start the array the old parity drive will show as "unmountable" -- just leave it that way until the parity sync has completed; and then format it and it will be mounted fine." Thanks Gary, this is good advice. Somehow I suspected the "swap disable" procedure was a bit more dangerous. Don't know why; gut feeling, I guess. I don't know what the "user share copy" bug is, sorry. I'll have a search for it before I do anything. I must admit I don't really understand what you mean by "do a New Config". Do you mean I could create a secondary array next to the first one, also with its own parity drive and 2 disks? (Meaning I would have 2 "arrays", both consisting of one parity drive and two data disks.) If yes, this would be an excellent solution. I could then simply set the second array up and copy everything from the one array to the other. Or is this not what you mean? Edit: I'm now busy copying all the data off of the server onto external disks, for backup, so should anything go wrong, I'd be safe. Also: thank you Danioj for your suggestion of using a copy utility that can verify the copy. I always use Teracopy, but I don't always enable the "verify copy" option, because it slows things down enormously. I guess I WILL do it now. (Something like the checksum sketch below would let me double-check the copy afterwards too.)
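     A quick way to double-check a copy with nothing but the standard tools on the server (a sketch; disk2 and disk4 are example source and destination disks):

         # build checksums of everything on the source disk...
         cd /mnt/disk2 && find . -type f -exec md5sum {} + > /tmp/disk2.md5
         # ...then verify them against the copy on the destination disk
         cd /mnt/disk4 && md5sum -c /tmp/disk2.md5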
  3. Ah, come to think of this, one final question: if I do the swap-disable procedure and basically replace my parity drive with the 4TB drive and use my "old" 3TB parity drive as the new data drive, I'm guessing that drive will still end up as ReiserFS, right? So if I want to change everything to XFS, that would take me another step of adding another 4TB drive to the array and actually copying everything over, correct?
  4. Thanks for your advice, Remotevisitor! I feel more confident now. You're right, I'll do a preclear before using the new drive. Should the 3TB drive fail before it's complete (small chance), I could still do the "swap disable" procedure anyway. I'll go with your advice and use XFS. Disaster recovery is also very important, I agree. Hmmm, maybe ZFS with unRAID will be an option in time, now that Ubuntu will implement it by default in their upcoming LTS release.
  5. I'm currently running an unRAID system containing 3 drives: one 3TB for parity, plus one 3TB and one 2TB data drive. Now the 3TB (non-parity) drive has started to show SMART errors, in particular:
     #    Attribute Name           Flag    Value  Worst  Threshold  Type     Updated  Failed  Raw Value
     197  Current pending sector   0x0032  200    198    000        Old age  Always   Never   1
     198  Offline uncorrectable    0x0030  198    198    000        Old age  Offline  Never   1137
     It also showed some errors on the Main tab, but it seems those are all gone now... strange. A parity check turned out OK. I don't really know what these SMART notifications mean, but getting an orange exclamation mark instead of a green thumbs-up icon is a red flag to me. I don't want to take any chances, so I'm going to replace the drive, and expand the array a bit while I'm at it. I've bought 3 HGST 7K4000 4TB drives. Now, since I want to replace the 3TB disk with a 4TB disk, and my parity drive is also only 3TB, I guess I'd best do the "swap-disable" procedure (I read about that here). Now my questions (some pretty obvious, but still):
     1) Do the new drives need to be precleared before replacing the old ones? I'm guessing yes?
     2) I probably don't need to change anything in the unRAID settings for the drives (meaning parity remains parity and the data drive remains a data drive)?
     3) Is this "swap-disable" procedure more dangerous than a normal replacement?
     4) I'll also start using XFS instead of ReiserFS on the new drives. The old 2TB drive will remain in the system, so there will be both XFS and ReiserFS drives in it. Would that cause problems? If the answer is yes, or "it might", then I'll also convert the 2TB drive to XFS, which brings me to the next question:
     5) Is there an easy way to copy all the data off of a drive to a new drive? Using Midnight Commander? I've read something about it, but I have no experience with it (little Linux experience, to be honest). I have read, though, that it can be slow. How can I be absolutely sure all the data has been moved off the drive before pulling/replacing it?
     6) I'm still in doubt whether I should use XFS or BTRFS. One of the main reasons for running a server is storage, using it as a long-term data vault. Bitrot is thus my worst enemy, so you'd think BTRFS would be the obvious choice. But I'm in doubt since I have absolutely zero experience with it, and I'm also not sure it's 100% stable by now. What would you recommend? Thank you for your time.
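     In case it helps, the same attributes can also be read from the server console with smartctl (just a sketch; /dev/sdb is an assumption, substitute whichever device the 3TB data drive actually is):

         # print all SMART attributes for the suspect drive
         smartctl -A /dev/sdb
         # attributes 197 (Current_Pending_Sector) and 198 (Offline_Uncorrectable) are the worrying ones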
  6. OK, I deleted the whole VM and started over again, and whaddayaknow: it works again. I started thinking: could this be because I registered unRAID yesterday and entered a key, essentially making the data on the USB stick no longer 100% identical to the data on the VHD? Anyway, the system's back up. Now for some playing time!
  7. I didn't change anything. Just powered down the VM (correctly) yesterday after not being able to create shares, figuring I'd take another look tomorrow. So today I powered it back on, and the error is back. Strange. Tried a couple of times again now, but no love. Very puzzled now.
  8. OK, something weird is going on here. Now I get the old "flash device error, contact support" error again, which is very strange since I didn't change anything about the setup. The VM is still the same and the USB stick is inserted (and readable). I'll try a few things, but I pretty much haven't got any idea why this is happening...
  9. So if I get this correctly: if you run a plain vanilla unRAID v5 server (without any packages) and want to upgrade to v6, all you need to do is reformat the flash drive, install v6 onto it, and then assign the disks to the correct slots (with extra attention to parity and cache), and everything should be peachy?
  10. Thanks so much for your help, I'll check this evening and post feedback here!
  11. After the solution TheOne provided, I was able to boot correctly and set everything up. Another thing he said was correct: one of the 3 usable disks was indeed also taken by the VHD in the trial version of unRAID. So I registered for a Basic licence, something I should have done a long time ago anyway. I was able to set up the disks (even a cache disk; in my case it's probably useless since it's all virtual, but I wanted to try it nonetheless). The only thing that puzzled me was that I seemed unable to create shares. I could enter the "Shares" tab, but there was no "create share" button or anything to actually create one. Strange. I'm at work now so I can't check anything, but I'll pick this up again tonight when I'm back home.
  12. Thanks for your speedy reply TheOne, but there's one thing I'm not getting: I'm not booting from the USB stick. I've made a VHD from the USB stick; this is a bit like a bootable ISO file. I've then connected this VHD file to VirtualBox, and the system boots from that VHD drive. (I did this specifically to circumvent the problems with booting from USB in the free VirtualBox.) But as per your advice, I ended up adding the USB stick to VirtualBox as well, and tadaaaa: that worked. A bit strange, needing two devices to get a proper boot, but hey, it works. Thanks for the help! (The sketch below is roughly the CLI equivalent of what I clicked together in the GUI.)
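     Roughly the CLI equivalent of attaching the stick, for anyone who prefers VBoxManage over the GUI (a sketch; the VM name, filter name and vendor/product IDs are placeholders that need to match your own stick):

         # pass the physical unRAID USB stick through to the VM so it is visible alongside the VHD
         VBoxManage usbfilter add 0 --target "unRAID-test" --name "unRAID stick" --vendorid 0781 --productid 5567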
  13. I've been using unRAID v4 for a while now, but figured it might be time for an upgrade. I want to try things out first before changing my main unRAID box though, so I figured I'd go virtual first and tinker with it a bit. So I downloaded the newest unRAID v6 ZIP, copied it to an 8GB SanDisk USB stick and made it bootable. Next, I created a VHD from that using WinImage. Then I set up the virtual machine, creating 4 extra virtual HDs (3 data disks, 1 parity) of 10 GB each. The virtual machine boots, and I can even browse to the unRAID web management page and see the unRAID Server configuration utility. Unfortunately, it says: "flash device error, contact support". I can see the 4 virtual HDs, but I cannot assign them to anything. Obviously something has gone wrong. Anybody any idea? 'Cause I'm not a virtualisation guru... (The sketch below shows roughly how the virtual disks were created.) Thanks!
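     For reference, this is roughly the disk part of the VM setup expressed as VBoxManage commands (a sketch only; I actually used the GUI, and the VM and controller names here are placeholders):

         # create a 10 GB virtual disk (repeated for disk2, disk3 and disk4)
         VBoxManage createhd --filename disk1.vdi --size 10240
         # attach it to the VM's SATA controller
         VBoxManage storageattach "unRAID-test" --storagectl "SATA" --port 1 --device 0 --type hdd --medium disk1.vdi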
  14. Solved! I needed to copy the entire syslinux directory to the flash drive. This wasn't mentioned in the FAQ; it only mentioned the syslinux.cfg file. Everything is peachy now. I guess the FAQ should be updated. (A rough sketch of the copy is below, for anyone who runs into the same thing.)
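     A rough sketch of what the copy amounts to, done from the server console with the 5.0.4 zip extracted to /tmp/unraid (the paths are just examples; on a running server the flash drive is mounted at /boot):

         # the new kernel and root image go in the root of the flash drive
         cp /tmp/unraid/bzimage /tmp/unraid/bzroot /boot/
         # and the whole syslinux directory, not just syslinux.cfg
         cp -r /tmp/unraid/syslinux /boot/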
  15. Hi All, I set up my unRAID server last August with 5.0-rc16. I'm not a Linux wizard, but with the help of the FAQ and wikis I was able to pull it off and get a working setup. Yesterday I saw that there is now a 5.0 final, the most recent being 5.0.4. The upgrade instructions seemed clear enough, so I stopped my array and copied the bzimage and bzroot files to the root of the flash drive, overwriting the original files. I also copied the syslinux.cfg file. One strange thing: in the 5.0.4 zip, the syslinux.cfg file was located in a separate /syslinux folder, while on my flash drive it was located in the root of the drive. I decided to copy the syslinux.cfg file from the /syslinux folder to the root of my flash drive nonetheless, overwriting the original file. Then I shut the system down and rebooted. Now, when the system boots I get the message "Could not find kernel image: /syslinux/menu.c32" and it doesn't boot. My instinct tells me to go ahead and copy the entire /syslinux folder to the flash drive and try again, but I'm reluctant to do so without confirmation that this is the correct path to take. What am I to do now? I already re-ran the "make_bootable" script. Thanks!
  16. Thanks for the info. But if this is the case, I guess I'll just continue shutting it down via the GUI then. Pity there is no such command. Oh well...
  17. Perfect! Thanks a lot!! Edit: would it also be possible to shut the system down with a single command of some sort, so that I could create a shortcut icon on my Windows desktop to shut the server down safely? Or maybe even turn it on/off between certain hours of the day, sort of like a cron job? Something like the sketch below is roughly what I'm imagining. Thx!
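     Roughly what I'm imagining (untested; it assumes SSH access to the server is enabled and that "tower" resolves to it, and on v5 a clean powerdown script such as the unMenu one is usually recommended over a raw poweroff):

         # from the desktop: one command (or a shortcut that runs it) to shut the server down
         ssh root@tower poweroff

         # or scheduled on the server itself, e.g. a cron entry to power down at 23:30 every day
         30 23 * * * /sbin/poweroff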
  18. Hi all, I'm new to unRAID but managed to set it up. It will mostly be used as a backup server and video/MP3 server. For this I don't need it to run 24/7, so I usually turn it off when I'm done using it. Now, the problem is that the array has to be stopped before I can safely turn off the server, and after a cold boot the array is stopped again and has to be started by hand. My question is: how can I make the unRAID server boot up from a cold boot, automatically start the array, and have the shares show up on my Windows machine? Right now a manual intervention is needed every time, which is not handy. In time I want to set up a VPN so I can safely access my server from outside my house, but I would then also want to turn the server on and off from the remote location, and having to start the array manually is a pain then. Basically I just want to push the button / click the WOL icon on my Windows machine and have the server boot up completely, and be done with it. Something like the sketch below is what I'm after. Thanks!
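     Something like this is what I'm after (a sketch only; the MAC address is a placeholder, the wakeonlan tool has to be installed on the desktop side, and I believe the auto-start option lives in the disk settings / in /boot/config/disk.cfg, but that file and key name are assumptions on my part):

         # wake the server from the desktop
         wakeonlan 00:11:22:33:44:55

         # on the flash drive, in /boot/config/disk.cfg (assumed key name), let the array start by itself
         startArray="yes"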
  19. And so they have! Thank you very much for your reply!
  20. Ah, great! Thanks so much for the info! But why didn't the 3TB drive spin down then? It wasn't accessed for hours either. I thought unRAID always spun down its drives after a certain amount of time?
  21. I guess you're right. When I got up this morning, I checked and it said (after screen refresh) that everything was fine! Yay! Only thing is: the green dot next to my second disk (the 2TB one) flashes on and off. What does this mean? Apart from that, everything looks peachy!
  22. Well, something must have gone wrong, because I've now built the array (3TB parity, 3TB + 2TB data), clicked "start" and then clicked "format", and it's been at it for hours again. I guess the worst part is that it just says "formatting", with no progress indicator or anything, so I don't even know whether it's actually busy or the system has simply hung. I guess I'll leave it overnight and have another look in the morning...
  23. I wrote this same post in a different thread, but I guess it wasn't the right thread for it, so I'm creating a new thread. I'm very new to unRAID; this is my first setup. First I had to buy a new motherboard because my Gigabyte board couldn't boot from USB. Now I've got a new "old" Asus board from eBay and I was good to go. I created the boot thumb drive and installed unMenu and the preclear script. Then I installed 3 disks: two 3TB Western Digital WD30EZRX (one of which is for parity) and one 2TB Western Digital WD20EARS. I managed to install screen and ran three instances of preclear (see the sketch below for how). The 2TB drive was obviously finished first, but while it was running I already noticed something strange. With the 2TB drive, the script said: "cycle 1 of 1, partition start on sector 64". With the 3TB drives, it said: "cycle 1 of 1, partition start on sector 1". The 2TB drive finished without a problem, after 35 hours. The 3TB drives gave some sort of notification concerning the partition, and I had to re-type "Yes" again. But then it said it could not continue, or something of that sort. I now fear that these partitions started at sector 1 instead of sector 64. I find that strange, because I did use the -A switch (preclear_disk.sh -A /dev/sda). What did I do wrong? I hope you can help me, because after more than 40 hours of pre-clearing I really don't feel like redoing it. Edit: oh, and does it matter that the motherboard uses an older BIOS which does not support disks larger than 2TB? I've read in the past that Linux is capable of ignoring the BIOS limits and using its own, but I'm not sure if it'll work in this case. If not, I will probably have to buy a SATA controller. (The BIOS shows the 3TB disks as 800 GB or something.) Thanks!
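     For reference, this is how the preclears were run, plus a way to check afterwards where a partition actually starts (a sketch; the device names are just the ones on my system, adjust as needed):

         # one screen session per disk, so the preclears keep running after logging out
         screen -S preclear_sda
         preclear_disk.sh -A /dev/sda
         # detach with Ctrl-A d, then repeat for /dev/sdb and /dev/sdc

         # afterwards, check the starting sector of partition 1
         fdisk -l /dev/sda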
  24. I'm very new to unRAID; this is my first setup. First I had to buy a new motherboard because my Gigabyte board couldn't boot from USB. Now I've got a new "old" Asus board from eBay and I was good to go. I created the boot thumb drive and installed unMenu and the preclear script. Then I installed 3 disks: two 3TB Western Digital WD30EZRX (one of which is for parity) and one 2TB Western Digital WD20EARS. I managed to install screen and ran three instances of preclear. The 2TB drive was obviously finished first, but while it was running I already noticed something strange. With the 2TB drive, the script said: "cycle 1 of 1, partition start on sector 64". With the 3TB drives, it said: "cycle 1 of 1, partition start on sector 1". The 2TB drive finished without a problem, after 35 hours. The 3TB drives gave some sort of notification concerning the partition, and I had to re-type "Yes" again. But then it said it could not continue, or something of that sort. I now fear that these partitions started at sector 1 instead of sector 64. I find that strange, because I did use the -A switch (preclear_disk.sh -A /dev/sda). What did I do wrong? I hope you can help me, because after more than 40 hours of pre-clearing I really don't feel like redoing it. Edit: oh, and does it matter that the motherboard uses an older BIOS which does not support disks larger than 2TB? I've read in the past that Linux is capable of ignoring the BIOS limits and using its own, but I'm not sure if it'll work in this case. If not, I will probably have to buy a SATA controller. (The BIOS shows the 3TB disks as 800 GB or something.) Thanks!
  25. For those interested: Western Digital already had their Black, Blue and Green lines of hard disks, but now they have Red as well. WD claims these drives contain special firmware, meant for optimal performance in NAS use; they say they have developed special "NASware" firmware. No idea what would make these disks more special than their other drives. They come in 1, 2 or 3TB, are SATA 600 and have 64MB of cache, so pretty average in my book. You can find the first online review of the 3TB version of these disks here. These disks probably aren't any better for unRAID use than the Green disks are (since unRAID isn't classical RAID), but I thought I'd mention it for those interested.