Everything posted by jji666

  1. I thank everyone in advance for their help. One of my unraid servers will be fine for weeks on end. Then, seemingly unrelated to any access of the drive shares, the disk activity light will come on and stay on. This seems to correlate with a very slow parity check rate. After I started a parity check, noticed the slow rate, and tried to stop the array, it froze. The shares disappeared and I could not get the log. After a reboot the whole array is fine and passes a parity check. This is repetitive behavior, and it seems to happen every 3-4 weeks. Any ideas? Can I ignore this? I'd like to.
  2. Well, I just had to mention this somewhere: about a month ago I was concerned about my Seagate 1.5s, so I pulled them, ran the Seatools tests on them, updated the firmware, and ran the Seatools tests again. Sometimes, and it seemed to happen most with one of my 4 drives, the long test would freeze up. So I sent a tech query through the Seagate website asking whether freezing during the long test could possibly indicate a problematic drive. Well, I waited ONE MONTH for Seagate to send me a canned email response about firmware being a problem in a small percentage of drives, with a link to the updated firmware, the same firmware I had mentioned in my query that I already had. So don't expect anyone at Seagate to have a brain or give a care about this. Whatever happened to good service, or at least to reading your emails? The web enables all kinds of good things, but it also makes large companies think they can handle customer issues without trying at all. Pathetic.
  3. Thanks for the response. I waited maybe 10-15 minutes; it never appeared to come back. Next time it happens, I'll see what I can get. I did try to telnet in to get the log, but nothing was responsive. The video signal goes to sleep after some period of time and the only way to bring it back is with a keystroke, and I can't hook up a keyboard without pulling the box out of the rack. I'll make sure to have all that set up next time so I can see what's on screen. You're right, I hadn't considered that it could be just a network issue. Thanks.
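To have a better chance of catching the log next time, here is a minimal sketch of a snapshot loop I could leave running from a telnet session while the box is still healthy (assuming, as on a typical unRAID 4.x box, that the log lives at /var/log/syslog and the flash drive is mounted at /boot; the destination file name is my own invention):

```shell
# snapshot_log: copy the live syslog somewhere that survives a hard reboot.
# Both default paths are assumptions for unRAID 4.x; adjust to your setup.
snapshot_log() {
  src="${1:-/var/log/syslog}"             # RAM-backed log, lost on reboot
  dest="${2:-/boot/syslog-snapshot.txt}"  # on the flash drive, survives reboot
  cp "$src" "$dest"
}

# Leave this running in a telnet session so the latest snapshot is already
# on flash when the box locks up:
#   while true; do snapshot_log; sleep 60; done
```

If the server then freezes and has to be hard-rebooted, the copy on the flash drive should be at most a minute old.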
  4. Here's a question: what exactly are the symptoms of the problems with these Seagates? Not outright failure, but the drive-pause issue with streaming media. While watching Gods and Generals in HD off of one of these drives, my whole unraid server locked up: I could not access any shares, the web interface, or telnet. Had to hard reboot. Afterward, the parity check showed no errors. That's the second time it has locked up like this in three weeks. I'm not sure if this is the drive issue: if the drive "pauses" for 30 seconds like many owners have complained, would this totally freeze unraid and require a reboot, or would the drive just be unavailable for a minute? The question is whether I should be looking at the drive or at some other possible issue with unraid. I've got 4 of these monster 1.5s in the box as well as a 1TB and 3 750s. The PS is an Antec 500 and the box didn't seem that hot, although the drive serving the media (holding Gods...) was at about 38 degrees. Thanks!
  5. Does anyone know this? I've been upgrading my Seagate 1.5 TB drives (firmware SD17) to firmware SD1B and then running the long test from Seatools on them. One drive upgraded well and then passed the long test without too much concern. A second drive seems to randomly freeze the long test at various, inconsistent points. It has made it through the long test twice and frozen 4 times. Basically the whole Seatools (for DOS, bootable ISO for CD-ROM) interface freezes up, and I have to reboot to get out of it. The first drive didn't freeze Seatools, although possibly I just got lucky. However, I am wondering whether Seatools could be freezing due to some issue external to the second drive, since the drive has passed the test twice, and I would think Seatools would be designed well enough that if the drive were the cause of the freeze, it would report this as a problem rather than just freezing. What I am trying to figure out is whether this drive is failing and I should RMA it. I've asked the same question through the support interface at Seagate, but I hardly expect an answer before summer. Does anyone know enough about Seatools to say whether this is more likely a software problem, a problem with my desk-assembled caseless firmware-upgrade rig, or really this drive? Thanks
  6. It's even more of a mess because certain drives with firmware SD17, SD18, and SD19 are stated on the Seagate website (through their serial number tool) as being "unaffected, no action necessary." While this might be true, to me there also seems to be a substantial chance that they will later say "oops, those drives were affected," and only after we've all suffered more drive failures. I am upgrading all my drives with firmware in this range. I appreciate that Seagate's response has been more than perfunctory, but what they need to do is make enough information available that we can all easily determine whether any sort of firmware upgrade is necessary on every possible drive.
  7. I didn't mean to hijack this thread. I've got direct fans on all the drives. The main problem is that all of these 4x3 adapters for the drive bays crowd the drives quite close together and there isn't that much airflow between them even though there is a fan on one end blowing through. 42 degrees is the temp of the hottest drive right now during a parity check when they are all spinning. While I might add another fan in the back of the case to pull some air out, the case is rather maxed out at the moment.
  8. Sorry to butt in, but I thought I'd share my recent experience of gaining 3-4 degrees by taking the air filters off the fans that were blowing air from outside the case directly onto my hard drives. Adding the fans helped a lot, but while running a parity check I saw that one drive was still at about 46-47 degrees. Since I could remove the filter without opening the case, I did so, and in about 15 minutes the temperature went down to 43. That is pretty direct evidence that you can gain a few degrees back by removing the filter. I suppose that means I have to take the box outside and blow all the dust out at some point, but that is preferable to drive failure... Edit: the total gain was 5-6 degrees by the time the drives had the chance to fully benefit from the removed filter. I am sure this varies dramatically based on the case layout, fans and filters. But it's something to consider if you need an immediate improvement.
  9. FYI, Seagate is now on firmware SD1B. Apparently SD1A bricked some drives.
  10. All- Thanks for your responses with this. Since the problem I was having with my unraid box (independent of the drive format issue) continued, I actually reformatted my usb key and rebuilt the array. So I cannot use a mount command now and this question will have to (and hopefully will) remain theoretical for a while. Thanks again! Unraid is great, I just hope I can keep my boxes stable!
  11. Thanks. I think this is just standard DDR PC3200, but I will run memtest on it. However, I would be very surprised if memory had anything to do with this problem, since it started before I replaced the memory, and the last batch of RAM also passed the test. I upgraded mostly because I planned on using a lot of user shares on this unraid box, and from my reading it appeared that more RAM helps user shares with large numbers of files. Can anyone else chime in about the problems with deleting files?
  12. I'll reply to the questions on my other post, but I think this is the root of the problem, or at least the most obvious symptom, so I will start a new thread for it: As a test, I copied large amounts of data onto a drive in my unraid array and then deleted it through Windows Explorer (via My Network Places) from a WinXP machine. Deleting a few folders with a few GB of data appears to work, but when I wholesale deleted 150GB of data, maybe 500 files, the delete process ground to a halt in Windows Explorer. It appears to have stopped deleting files, and although some of the folders still show up on the drive, they cannot be accessed, and trying to do so freezes Windows Explorer. Moreover, that seems to have frozen the web interface on the array. I tried to spin down the disks and have been getting a persistent "wait" message. I can still see all the drive shares through My Network Places (although, as I said, trying to access some of the folders that I had tried to delete, which still show up, freezes Windows Explorer). I can't shut it down, either, without hitting the power button on the box. This is repeatable behavior. Clearly there's a problem, but is this indicative of a particular issue? I should add that this is unlikely to be a problem with the RAM, because I've replaced it with 2GB of new fancy OCZ memory since the problem started; nor with the power supply, since I've replaced that with a 650W Corsair supply (8 drives); nor with the controller, since I've replaced the Promise controller with a RocketRAID 464. Thanks!
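In case it helps narrow this down, here's a sketch of how I could retry the same bulk delete directly on the server over telnet, bypassing the SMB/Explorer layer entirely (the example path is hypothetical; unRAID exposes data drives under /mnt/diskN):

```shell
# remove_tree: delete a directory tree locally on the unRAID box, so any
# stall would implicate the filesystem rather than the SMB layer.
# The example path in the comment below is hypothetical; substitute your own.
remove_tree() {
  target="$1"
  # Refuse obviously dangerous arguments before running rm -rf.
  case "$target" in
    ""|"/"|"/mnt") echo "refusing to delete: $target" >&2; return 1 ;;
  esac
  rm -rf -- "$target"
}

# Example (hypothetical path):
#   remove_tree /mnt/disk3/copy-test
```

If the same 150GB delete completes cleanly this way, the SMB path looks like the culprit; if it also stalls, the problem is lower down than the network layer.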
  13. bump, since I simplified the question
  14. Thanks to both for your help. I was repeatedly getting 2 errors for about a week, but I took out all the round IDE cables and made a few other minor hardware improvements, and I have gotten 0 errors on the last few parity checks.
  15. OK, here is a shorter version of the question. If I had a drive assigned as disk7 and then pulled that drive out, reformatted it as NTFS, and then reinserted it into the unraid array and started the array, is unraid supposed to write zeros to the disk and then format it? Or does this happen during mounting without showing any notification? Because this is what I did, and the drive just shows up as disk7 and appears to be reading and writing data normally. Assuming parity is rebuilt and I am able to copy data to and serve it from disk7, is this safe? Was unraid supposed to format the drive when it mounted it as disk7, or does it just do it automatically? By the way, thanks very much for the help. Unraid is very cool if I can keep it running in stable fashion.
  16. Sorry, I've searched and tried to find some perspective on the number of parity errors and their severity. Does 2 errors, as reported by the admin screen after a parity check, mean 2 bits? 2 drives? Is that severe, or nothing? Thanks.