isochronous

Members

  • Posts: 26
  • Joined
  • Last visited

Converted

  • Gender: Undisclosed

isochronous's Achievements

Noob (1/14)

Reputation: 0

  1. Sure enough, you're right. I guess I'll just copy the data from the old drive to the new one and rebuild the parity from scratch. None of the other disks show any problems in their SMART reports - the only time any beeping has ever occurred was during the data rebuild. Weird.
  2. I did it on /dev/sdh1, I believe. As for the beeping, that's just what the IT guy at work suggested, but now that I think about it, the RAID controller would only beep about parity errors if it were using the hardware RAID built into the card, so I'm not really sure what the beeping was. It was always just two consecutive beeps from the computer speaker, and there could be as little as 5 minutes between two sets of beeps, or up to 45 minutes, which is why I initially thought he might be right. Other than parity errors, I have no idea what could be causing it - I've had this many drives in my system for a long time and done at least 3 data rebuilds, and never had any beeping occur before.
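
     A minimal sketch of the reiserfsck steps under discussion, assuming the partition really is /dev/sdh1 as stated above and that the array is stopped and the disk unmounted:

       # Read-only pass first; this reports problems without modifying the filesystem.
       reiserfsck --check /dev/sdh1

       # --rebuild-tree rewrites the on-disk tree and is the step that
       # invalidates unRAID parity, so run it only if --check says it is needed.
       # reiserfsck --rebuild-tree /dev/sdh1
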
  3. Here's the situation... A few weeks ago, one of my Seagate 3TB disks (Disk 8) started reporting a lot of read errors. Having run across this before, and done some research, I knew it wasn't an uncommon problem for these particular drives, especially when used in an unraid array. I had a spare drive (unfortunately, the exact same type of 3TB Seagate), so I popped it in and ran one cycle of the preclear_disk script on it. Unfortunately, I had also recently noticed that I was having difficulty writing to the array - writes were failing with "no write permission" errors - which led me to believe at least one other drive was being mounted read-only due to file system errors, so I ran reiserfsck on the other drives while preclearing the new drive. Sure enough, my Disk 7 (a Hitachi 3TB drive) reported that I needed to rebuild the FS tree... so, not thinking about the fact that I would soon need to do a data rebuild from parity, I went ahead and started the process. As soon as I saw the first tree correction being written to the disk, I realized what I had done - I'd basically invalidated my parity information. Sure enough, once the preclear of the new disk finished and I started a data rebuild, I got a "remaining time" estimate of a little over 60 days. Normally that's pretty standard during the first few minutes of a data rebuild, but typically it drops down to 20 hours or less once it's been going a while. Not this time. In addition, I heard frequent double beeps during the process, which I believe was the "parity error" beep pattern. So, I stopped the rebuild. Since the old, failing drive hadn't actually died yet, I figured I could skip the whole data rebuild process and just hook that drive up to another PC (or simply not assign it to the array on my unraid box), copy all of its contents over to the new Disk 8, tell unraid to trust the array, do a parity check, and all would be fine. Am I on the right track? If not, is there any alternative? If so, what exact steps do I need to take to trust the array in the new unraid 5.0, since the wiki page seems to indicate that the old process won't work anymore?
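
     For the "copy all of its contents over to the new Disk 8" step, one possible sketch; the old-disk device (/dev/sdX1) and mount point (/mnt/olddisk) are placeholders, and /mnt/disk8 assumes unRAID's usual per-disk mount points:

       # Mount the old, failing disk read-only outside the array.
       mkdir -p /mnt/olddisk
       mount -o ro /dev/sdX1 /mnt/olddisk

       # Copy everything onto the freshly precleared Disk 8; rsync preserves
       # permissions and timestamps and can simply be re-run if interrupted.
       rsync -avh /mnt/olddisk/ /mnt/disk8/

       umount /mnt/olddisk
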
  4. Just for anyone else who runs into a problem like this: you can easily find any open file handles in user shares by running "lsof | grep /mnt/user", and you can then run pkill on the processes keeping those handles open. I added the following line to my go script: alias userfiles='lsof | grep /mnt/user'. That way I can just type "userfiles" and hit Enter, and it will show any open files in the user shares.
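
     The same idea, slightly expanded, assuming the standard /mnt/user mount point; the killuserfiles helper name is made up here for illustration:

       # List every process holding a file open under the user shares.
       alias userfiles='lsof | grep /mnt/user'

       # Hypothetical helper: terminate the processes found above (review the
       # userfiles output first, since this sends SIGTERM to each PID).
       killuserfiles() {
         lsof | grep /mnt/user | awk '{print $2}' | sort -u | xargs -r kill
       }
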
  5. While that did fix the problem, my question still stands: if there was previously a "MediaStore" share that spanned all disks (so there's a "MediaStore" directory on each disk's FS), and I re-create that share, will it... a) pick back up the existing directories (i.e. will the share show up already populated), or b) overwrite all of the MediaStore directories (very, very bad), or c) create some other "MediaStore" directory (e.g. MediaStore0) that will be treated as the "MediaStore" share (not ideal, but I can work with that)?
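
     One way to check what a re-created share would pick up, assuming unRAID's usual per-disk mount points; this only reads, it changes nothing:

       # A user share is assembled from the same-named top-level directory on
       # each data disk, so listing them confirms the data is still in place.
       ls -d /mnt/disk*/MediaStore
       du -sh /mnt/disk*/MediaStore
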
  6. I actually have the same problem - if I log in via telnet and go to /mnt/user (or /mnt/user0) I can see three shares there, but in the web GUI I have zilch. When I go into the config/samba directory on my flash share, there is no smb.conf file, though I remember one being there before... If I recreate each share with the same name in the web GUI, will it overwrite that share's directories, or will it just restore the share?
  7. Sure looks like it... I think it's even a Gigabyte mobo. Thanks for the help! I'll post another reply on the off chance it doesn't work.
  8. Just like the title says. I followed the upgrade procedure (copy bzimage, bzroot, & memtest to the flash share and reboot). After rebooting, I could not start my array; there was an error about "replacement disk is smaller than original" or something similar. It showed my Disk3 with two entries, both with the same serial number, but with the "replacement" version reading as having 4KB less disk capacity. I captured a syslog and reverted back to 4.5.4, where I captured another syslog and a SMART report for the drive in question. Those logs are attached. I do have unraid-web and unmenu installed (on 4.5.4 at least), though I'm sure that doesn't make a difference for this issue. Attachments: 4.5.4_syslog.txt, 4.7_syslog.txt, smart.txt
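
     The upgrade step referred to above amounts to roughly the following, assuming the flash share is mounted at /boot and the new release has been unpacked to /tmp/unraid (a placeholder path); keeping backups makes reverting to 4.5.4 easy:

       # Back up the current kernel and initrd, then copy the new files over.
       cp /boot/bzimage /boot/bzimage.bak
       cp /boot/bzroot /boot/bzroot.bak
       cp /tmp/unraid/bzimage /tmp/unraid/bzroot /tmp/unraid/memtest /boot/
       sync
       # reboot when ready
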
  9. That's not especially helpful. What are those modifications? What's different from the stock mover script?
  10. Recently I came home after a severe thunderstorm to find my unraid server making a horrible clicking noise from one of the hard drives. Unfortunately, at the same time, my router had somehow bricked itself during the storm, and I was unable to access the unraid web GUI. I tried to soft-power-off the server by hitting its power button, but it just kept spinning down then spinning up the dying drive, so I was eventually forced to hold the power button in and shut it off the brute-force way. So now I know that one of my drives is dead, I don't know which one, and I don't really want to power the array back up until I have a replacement drive ready. I decided that now was as good a time as any to add some capacity, so I bought two 2TB drives, one as a new parity disk and the other as the replacement for the dead drive. The problem (well, the main problem, now) is that my parity disk is currently 640GB, which means that if I want to replace the dead drive (assuming it's not the parity disk) I have to install the replacement drive, reconstruct it from the parity info, then swap out the parity drive. I'm sure at least some of you already see the problem: the parity disk has to be at least as big as any drive in the array, which means I'm going to have trouble using the 640GB parity disk to reconstruct data onto the 2TB drive. If I instead swap out the parity drive first, then obviously I won't be able to reconstruct the data on the dead drive. My question is: is there any way to tell unraid to just use the first 640GB of the new data drive, and then later use something like GParted to grow the partition after I've swapped out the parity drive? Or, failing that, is there a way to copy the parity data from the existing parity drive to the new 2TB drive, then swap out the data disk and rebuild the array? Is there some other answer to this problem that I haven't thought of? Your help will be very much appreciated.
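
     Purely to illustrate the "copy the parity data" idea in the question, not a recommendation or a supported unRAID procedure; the device names are placeholders:

       # Block-for-block copy of the old 640GB parity disk onto the start of
       # the new 2TB disk. The remaining space would hold no valid parity,
       # which is exactly the open question raised above.
       dd if=/dev/sdOLD of=/dev/sdNEW bs=4M conv=noerror,sync
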
  11. I have the same question as 172pilot. I'd really like to have a single share, i.e. DriveIndex, which I can map as a network drive, and which would contain folders that act like hard links to the different shares (essentially the same thing as the user share). One idea I had was to just use an SMB script to share the /mnt/user folder directly, but I wasn't sure if that would horribly screw things up or what. Any ideas? To be clear, right now I have these shares: Applications, Games, Movies, Music, Operating Systems, and TV Shows, but each one is a distinct network location. I'd like to have one mapped network drive that has all those shares as sub-folders. If I try to map \\tower to a network drive it won't let me, because it's not an actual drive. So I'm thinking about exposing the /mnt/user mount point as an SMB share, which would allow me to do what I'm trying to do. Are there any caveats or reasons this would be a bad idea? Are there any alternative/better solutions?
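
     A sketch of the /mnt/user idea, assuming unRAID includes a config/smb-extra.conf from the flash drive (worth verifying on your version); the DriveIndex share name comes from the post:

       # Append a Samba share definition exposing the whole user-share tree
       # as one network location, then restart Samba for it to take effect.
       {
         echo '[DriveIndex]'
         echo '  path = /mnt/user'
         echo '  browseable = yes'
         echo '  read only = no'
       } >> /boot/config/smb-extra.conf
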