davehahn

Members
  • Posts: 9
  • Joined

  • Last visited

  • Gender: Undisclosed

davehahn's Achievements

Noob (1/14)

0 Reputation
  1. I love the flexibility unRAID gives me across a multitude of configuration options. I love that if my enterprise gear dies, I can transfer my disks to anything and still have my data. I love that if parity fails, I only lose the data on the disks that failed, not the entire array. I have been running unRAID for a really long time and I have never lost data. The support community is fantastic! I'd like to see support for more than 30 disks someday. I'd also like the ability to have archive arrays, where the catalog and directory structure live on the main unRAID array, perhaps as stub files; when a stub file is opened, the remote array could wake on LAN, transfer the file over, then power back down (a rough sketch of the idea follows the post list below). Each archive array could be licensed a little cheaper than a full license and perhaps limited to no direct SMB access, talking to the main array over an API.
  2. I am also experiencing this. I didn't have the issue on my Ubuntu VM, but I created a CentOS 7 VM and the issue starts about 10 minutes after the VM boots. I have tried the suggestions above and, just like everybody else, they don't fix the issue. Really hoping someone discovers the solution to this. I have a Supermicro motherboard and an AMD processor.
  3. This is true, there is a good chance of that - but I have a cron job that runs on the first of the month and does find /mnt/user -type f -print0 | xargs -0 md5sum > "/mnt/user/scripts/md5sums.$(date +%F_%R)" (a sketch of the cron entry follows the post list below), and I keep a file of hashes from my offsite backups - both as an index and as a reference to compare against if I encounter a file I suspect has become corrupt.
  4. I'm not sure what you mean by read failures being handled - I guess you are OK with losing the data and have good backups? After a bad experience early on, I no longer leave any questionable disks in the array. A known bad disk jeopardizes unRAID's ability to recover the data on any other failed disk.

     I mean this, from http://lime-technology.com/wiki/index.php/Troubleshooting: "If your array has been running fine for days/weeks/months/years and suddenly you notice a non-zero value in the error column of the web interface, what does that mean? Should I be worried? Occasionally unRAID will encounter a READ error (not a WRITE error) on a disk. When this happens, unRAID will read the corresponding sector contents of all the other disks + parity to compute the data it was unable to read from the source. It will then WRITE that data back to the source drive. Without going into the technical details, this allows the source drive to fix the bad sector so that next time, a read of that sector will be fine. Although this will be reported as an 'error', the error has actually been corrected already. This is one of the best and least understood features of unRAID!"

     Yes - I have offline offsite backups and I'm not terribly concerned about the data. I use old disks until they are marked dead; my critical data lives elsewhere. But with the intent of kicking this array down the road another day and not resorting to backups, I conclude the course of action with the highest probability of success is to rsync the data on the failing full drive to one of the existing empty disks (a sketch follows the post list below), then drop both the failing full and failing empty drives out of the array.
  5. I have 4 failing disks - they are old junk disks, and a few read failures don't concern me; unRAID handles those nicely. But obviously 500K errors on a disk that's empty is no bueno - I want to eject that one from the array, and replace the one that has data and 30K errors. The log leads me to believe it's a media failure and not a SATA cable. I had to break the diagnostics into two files because they were over the 320KB forum attachment limit.
  6. I have 2 disks failing. One is full of data, the other is empty. See screenshot. I would like to remove the empty bad drive from the array without causing parity to need to be rebuilt - is that possible? The goal is to be able to rebuild the drive that has data without the empty drive causing an issue that makes the rebuild fail. Any suggestions? I should also mention I had a power failure, and when unRAID booted back up and started a parity check, these two drives started racking up errors like mad. The parity check is only at 30% and says it will take 72 days to complete... so I feel that at least one drive dropping out of the array is imminent.
  7. From tailing the syslog, it looks like it's still clearing the new drive, but I'm seeing the following (a simple syslog filter for watching this is sketched after the post list below):

     Aug 15 21:53:19 Tower in.telnetd[2333]: connect from 10.0.1.2 (10.0.1.2)
     Aug 15 21:53:20 Tower login[2334]: ROOT LOGIN on `pts/0' from `10.0.1.2'
     Aug 15 21:55:29 Tower emhttp: ... clearing 84% complete
     Aug 15 21:57:02 Tower kernel: mdcmd (59): spindown 11
     Aug 15 21:57:35 Tower kernel: mdcmd (61): spindown 11
     Aug 15 21:57:42 Tower kernel: mdcmd (63): spindown 11
     Aug 15 21:58:00 Tower kernel: mdcmd (65): spindown 11
     Aug 15 22:01:28 Tower emhttp: ... clearing 85% complete
  8. I was monitoring the progress of a new drive that was clearing (via the web interface) and noticed that around 83% all the drives were flashing and spun down. Any idea why the drives all spun down for no apparent reason? The syslog shows the following:

     Aug 15 19:49:19 Tower emhttp: ... clearing 65% complete
     Aug 15 19:55:51 Tower emhttp: ... clearing 66% complete
     Aug 15 20:02:19 Tower emhttp: ... clearing 67% complete
     Aug 15 20:08:52 Tower emhttp: ... clearing 68% complete
     Aug 15 20:15:24 Tower emhttp: ... clearing 69% complete
     Aug 15 20:21:52 Tower emhttp: ... clearing 70% complete
     Aug 15 20:28:21 Tower emhttp: ... clearing 71% complete
     Aug 15 20:34:50 Tower emhttp: ... clearing 72% complete
     Aug 15 20:41:22 Tower emhttp: ... clearing 73% complete
     Aug 15 20:47:54 Tower emhttp: ... clearing 74% complete
     Aug 15 20:54:35 Tower emhttp: ... clearing 75% complete
     Aug 15 21:01:21 Tower emhttp: ... clearing 76% complete
     Aug 15 21:07:53 Tower emhttp: ... clearing 77% complete
     Aug 15 21:14:32 Tower emhttp: ... clearing 78% complete
     Aug 15 21:21:19 Tower emhttp: ... clearing 79% complete
     Aug 15 21:28:16 Tower emhttp: ... clearing 80% complete
     Aug 15 21:35:04 Tower emhttp: ... clearing 81% complete
     Aug 15 21:41:55 Tower emhttp: ... clearing 82% complete
     Aug 15 21:49:01 Tower emhttp: ... clearing 83% complete
     Aug 15 21:52:00 Tower kernel: mdcmd (40): spindown 0
     Aug 15 21:52:00 Tower kernel: mdcmd (41): spindown 1
     Aug 15 21:52:01 Tower kernel: mdcmd (42): spindown 2
     Aug 15 21:52:02 Tower kernel: mdcmd (43): spindown 3
     Aug 15 21:52:03 Tower kernel: mdcmd (44): spindown 4
     Aug 15 21:52:03 Tower kernel: mdcmd (45): spindown 5
     Aug 15 21:52:03 Tower kernel: mdcmd (46): spindown 6
     Aug 15 21:52:03 Tower kernel: mdcmd (47): spindown 7
     Aug 15 21:52:04 Tower kernel: mdcmd (48): spindown 8
     Aug 15 21:52:04 Tower kernel: mdcmd (49): spindown 9
     Aug 15 21:52:04 Tower kernel: mdcmd (50): spindown 10
     Aug 15 21:52:05 Tower kernel: mdcmd (51): spindown 11
     Aug 15 21:52:11 Tower kernel: mdcmd (53): spindown 11
     Aug 15 21:52:15 Tower kernel: mdcmd (55): spindown 11
     Aug 15 21:52:35 Tower kernel: mdcmd (57): spindown 11
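Sketch for the archive-array idea in post 1. This is only a rough illustration of the wake / transfer / shut down flow, not anything unRAID provides today; the wakeonlan utility, the MAC address, hostname, and paths are all assumptions or placeholders.

    #!/bin/bash
    # Rough sketch: wake the remote "archive" box, pull the real file behind a
    # local stub, then power the remote box back down. All values are placeholders.
    ARCHIVE_MAC="00:11:22:33:44:55"   # hypothetical MAC of the archive server
    ARCHIVE_HOST="archive01"          # hypothetical hostname
    REMOTE_PATH="/mnt/user/archive"   # hypothetical share on the archive server
    LOCAL_PATH="/mnt/user/archive"    # hypothetical landing path on the main array
    FILE="$1"                         # file requested via its stub

    # Send the magic packet, then wait until the box answers SSH
    wakeonlan "$ARCHIVE_MAC"
    until ssh -o ConnectTimeout=5 "root@$ARCHIVE_HOST" true 2>/dev/null; do
        sleep 10
    done

    # Pull the real file over the stub, then shut the archive box back down
    rsync -av "root@$ARCHIVE_HOST:$REMOTE_PATH/$FILE" "$LOCAL_PATH/$FILE"
    ssh "root@$ARCHIVE_HOST" poweroff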
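Sketch of the monthly hash job from post 3 as a crontab entry. The 03:00 run time is an assumption (the post only says it runs on the first of the month), and note that % has to be escaped as \% inside crontab lines:

    # m h dom mon dow  command
    0 3 1 * * find /mnt/user -type f -print0 | xargs -0 md5sum > "/mnt/user/scripts/md5sums.$(date +\%F_\%R)"

    # Later, a suspect file can be checked against a saved list
    # (the file name and hash-list name here are hypothetical):
    grep 'Movies/example.mkv' /mnt/user/scripts/md5sums.2017-08-01_03:00 | md5sum -c -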
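Sketch of the disk-to-disk copy proposed at the end of post 4. The disk numbers are placeholders for whichever slots are actually the failing full disk and the empty good disk; this covers the rsync step only, not a full unRAID drive-removal procedure:

    # Work against the individual disk mounts, not /mnt/user, so the data
    # lands on one specific disk. disk3 (failing, full) and disk7 (empty)
    # are placeholders.
    rsync -avX --dry-run /mnt/disk3/ /mnt/disk7/              # preview what would be copied
    rsync -avX --partial --progress /mnt/disk3/ /mnt/disk7/   # do the copy; resumable if the disk drops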
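Sketch of a syslog filter for what posts 7 and 8 describe (watching the clear progress and the spindown events together), assuming the usual /var/log/syslog location:

    # Follow clearing progress and spindown commands as they happen
    tail -f /var/log/syslog | grep -E 'clearing .* complete|spindown'

    # Or pull the history of both from the existing log
    grep -E 'clearing .* complete|mdcmd .*: spindown' /var/log/syslog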